Additions, Changes and Removals in Gurobi 13.0#
Release Highlights#
Gurobi V13 provides performance improvements across a variety of model families, notably on MIP and MINLP. No parameter settings or application code changes are necessary to benefit from these performance improvements. The details for these improvements will be provided after the beta period.
A new nonlinear barrier method is included in Gurobi V13 as a preview feature. This solver makes it possible to find local optima for nonconvex continuous models more quickly than the global solver.
Primal-Dual Hybrid Gradient (PDHG) has been added to our suite of algorithms for solving linear programs (LPs). By default it will run on the CPU, but it has optional GPU acceleration.
New Features#
NL Barrier Method for Solving NLPs to Local Optimality#
Important
We consider this feature a preview in this release. This means that it is fully supported and expected to work correctly, but it will likely undergo significant changes in subsequent Gurobi technical or major releases, potentially including breaking changes in API, behavior and packaging.
You can now ask Gurobi to look for a locally optimal solution to your nonlinear continuous optimization problems (NLPs). It will do so using a variant of the barrier algorithm.
For problems without discrete elements (such as integer variables, SOS constraints, or piecewise-linear functions), this solver might be preferable to the global MINLP solver when the latter takes too long; for example, due to a large number of variables and/or constraints. It does not guarantee a globally optimal solution, unless the problem is convex. Instead, it looks for feasible solutions for which the objective value cannot be easily improved by small changes in the optimization variables. These local optima can typically be computed much faster and are sufficient in many settings, particularly when a good initial guess of the optimal solution is available.
The optimization status codes LOCALLY_OPTIMAL and LOCALLY_INFEASIBLE have been added to represent the possible outcomes for the nonlinear barrier algorithm.
You can enable this solver by setting OptimalityTarget to 1, and you can adjust the behavior of the algorithm with the parameters NLBarIterLimit, NLBarCFeasTol, NLBarDFeasTol, and NLBarPFeasTol.
After a successful optimization run, you can obtain the number of iterations that the NL barrier method performed with the NLBarIterCount attribute.
PDHG Algorithm#
Primal-Dual Hybrid Gradient (PDHG) has been added to our suite of algorithms for solving linear programs (LPs). You can enable it when solving an LP or MIP by setting the Method parameter to GRB_METHOD_PDHG (6).
The termination criteria for PDHG are controlled using the new parameters PDHGAbsTol, PDHGConvTol, PDHGRelTol, and PDHGIterLimit. A new attribute, PDHGIterCount, has been added to return the number of iterations performed by PDHG.
PDHG on Nvidia GPUs#
Important
We consider this feature a preview in this release. This means that it is fully supported and expected to work correctly, but it will likely undergo significant changes in subsequent Gurobi technical or major releases, potentially including breaking changes in API, behavior and packaging.
By default the PDHG algorithm runs on the CPU, but it can take advantage of Nvidia GPUs: the new PDHGGPU parameter controls whether PDHG should run on the GPU if one is available.
NoRel Heuristic for a Limited Number of Solutions#
The new parameter NoRelHeurSolutions lets you specify that the NoRel heuristic should run and stop once it has found a given number of solutions. This can be useful when the time needed to find these solutions is difficult to predict beforehand.
NoRel Heuristic with Variable Hints#
The NoRel heuristic now takes into account variable hints (see the VarHintVal attribute) provided by the user. This can lead to finding solutions in the neighborhood of the hint values faster.
Specify where Flags for Callbacks#
You can now specify for which where flags a callback should be invoked. This makes it possible to send information from a remote worker to the client only when that information will be useful, which has a positive performance impact since the remote worker does not need to wait for the client to acknowledge these messages. In particular, when the solution vectors are not needed during the solve, we observed a performance improvement of more than a factor of 2 for instances that produce many solutions during the solve.
The following changes have been made in the APIs:
In C, the function GRBsetcallbackfuncadv lets you specify a bit vector that defines for which where flags the callback should be invoked.
In C++, the function setCallback now accepts an additional optional argument: a bit vector that defines for which where flags the callback should be invoked.
In Java, an additional version of the function setCallback has been added, which takes a bit vector that defines for which where flags the callback should be invoked.
In .NET, the function SetCallback now accepts an additional optional argument: a bit vector that defines for which where flags the callback should be invoked.
In gurobipy, Model.optimize, Model.optimizeAsync, and Model.computeIIS now all accept an optional wheres argument: a list of where flags for which the callback should be invoked.
Additional Option for Thread Usage#
The Threads parameter has an additional value of -1. With this value, Gurobi may use as many threads as there are virtual processors. The automatic setting (0), which is the default, limits the number of threads to 32. See the description of the parameter for more details.
Additional Operations in Nonlinear Expressions#
Our Nonlinear Constraints can handle two new nonlinear operations:
the hyperbolic tangent function (OPCODE_TANH)
the signed power function (OPCODE_SIGNPOW). The signed power function is defined as \(\text{signpow}(x, a) = \text{sign}(x) |x|^a\), where \(\text{sign}(x)\) denotes the sign of \(x\) and \(a \in \mathbb{R}_{\geq 1}\). For example, \(\text{signpow}(x, 2) = x |x|\).
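To make the definition concrete, here is a plain-Python rendering of the signed power function (an illustration of the math only, not the Gurobi API):

```python
import math

def signpow(x, a):
    """sign(x) * |x|**a, the odd extension of the power function (a >= 1)."""
    return math.copysign(abs(x) ** a, x)

print(signpow(3.0, 2))   # 9.0
print(signpow(-3.0, 2))  # -9.0, matching signpow(x, 2) = x*|x|
print(signpow(-8.0, 1.5))
```

Unlike a plain power `x**a` with fractional exponent, signpow is defined for all real x, which is what makes it useful inside nonlinear constraints.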
Extended Log Output#
The statistics on the model at the beginning of the log (Header) now contain the model sense and the number of non-zero linear objective coefficients.
If you interrupt a MIP optimization and then resume it by calling the optimize method again, the final log output that displays the number of processed nodes, simplex iterations, runtime and work spent has slightly changed in Gurobi 13. In prior versions, the log output only showed the time and work spent on the most recent optimize call. With Gurobi 13, this log line displays the total, accumulated time and work spent on solving the given MIP model, and it adds another log line to display the time and work spent just on the most recent optimize call.
For example, consider a case in which you set a time limit of 2 seconds and call the optimize method on a MIP model twice. This way, the optimizer will run at most 4 seconds in total, 2 seconds for each of the two optimize calls. In prior versions of Gurobi, the final log line after the second solve could have looked like this:
Explored 5274 nodes (75223 simplex iterations) in 2.00 seconds (0.67 work units)
It would have shown the total, accumulated number of explored nodes and simplex iterations since the initial optimize call, but only the time and work spent on the second optimize call.
With Gurobi 13, you would get the following log output:
Explored 5274 nodes (75223 simplex iterations) in 4.01 seconds (1.05 work units)
Most recent optimization runtime was 2.00 seconds (0.67 work units)
The first line shows total, accumulated values for all statistics: node counts, iteration counts, time, and work. The additional log line shows the time and work spent on the second optimize call.
Ignore Parameter Settings for Tuning#
With the new parameter TuneIgnoreSettings, you can now specify
parameter settings that the tuner should skip during its run. This is
particularly useful when continuing an interrupted tuning
process: by providing the parameter settings already tested
in the previous run, the tuner avoids re-evaluating them. To support this, the
tuner writes a parameter file at the end of each tuning run, listing all tested
parameter configurations. The default name for this file is tune-all.prm.
Branching Priority and Multiple MIP starts in Tuner#
In addition to MIP starts, the tuner now considers branching priorities. These can be provided using an ATTR file or by setting the variable attribute BranchPriority. If multiple MIP starts are given, the tuner now uses all of them. For more details, see the respective sections of the Parameter Tuning Tool.
Control Parameter Inheritance#
When working with Concurrent Environments or Multiobjective Environments, the new parameter InheritParams controls whether parameters from a main environment should be inherited. This is, for example, useful when tuning multi-objective models.
Multi-objective attributes#
After solving a multi-objective model, you can retrieve information about the optimization pass during which each objective was solved. For each objective, the following attributes can be queried. Use the parameter ObjNumber to specify the objective you’re interested in:
| Attribute name | Short description |
|---|---|
|  | Number of optimization passes that were conducted in the last solve |
|  | Index of the optimization pass in which the selected objective was processed |
|  | Number of simplex iterations in the optimization pass in which the selected objective was processed |
|  | MIP gap for the optimization pass in which the selected objective was processed |
|  | Number of explored nodes in the optimization pass in which the selected objective was processed |
|  | Objective bound for the optimization pass in which the selected objective was processed |
|  | Objective value for the optimization pass in which the selected objective was processed |
|  | Number of unexplored nodes in the optimization pass in which the selected objective was processed |
|  | Runtime for the optimization pass in which the selected objective was processed |
|  | Status for the optimization pass in which the selected objective was processed |
|  | Deterministic work for the optimization pass in which the selected objective was processed |
Barrier Optimization Status before starting Crossover#
When solving an LP with the barrier algorithm and crossover, the new attribute BarStatus accesses the solution status of the barrier optimizer before starting crossover. This can help in interpreting the solution vectors that can be accessed via the BarX and BarPi attributes.
Changes to Gurobipy#
The behavior of the global methods setParam and resetParams, which use the default environment, has changed. In previous versions, these functions applied parameter changes to any Model objects found in the __main__ namespace of a Python script. This was inconsistent with the behavior of other environments and with Model objects stored within other data structures. These functions no longer affect already created Model objects.
Model.optimize, Model.optimizeAsync, and Model.computeIIS now all accept an optional wheres argument: a list of where flags for which the callback should be invoked.
A callback function can now be provided to Model.tune. This allows the tuner to be terminated programmatically from a callback. See Callbacks in the Tuner for details.
A LinExpr.linTerms method has been added, which iterates over the individual terms of a LinExpr expression object.
New methods QuadExpr.linTerms and QuadExpr.quadTerms have been added, which iterate over the individual linear and quadratic terms, respectively, of a QuadExpr expression object.
New methods Model.getQ and Model.getQCMatrices have been added, which query quadratic objective terms and quadratic constraint terms, returning scipy.sparse representations.
Model.setAttr can now be called for array attributes without the need to pass modeling objects.
The loadModel function has been added, which allows Model objects to be built directly from input data without creating Var or Constr objects.
Changes to MATLAB API#
The solution pool return field xn has been renamed to poolnx; see the gurobi function.
Changes to R API#
The solution pool return component xn has been renamed to poolnx; see the gurobi function.
Changes to C++ API#
GRBException now inherits from std::runtime_error, which allows you to catch Gurobi exceptions via standard library types.
Changes to JSON solution file format#
The fields Xn and PoolObjVal in the JSON solution file format have been renamed to PoolNX and PoolNObjVal, respectively. See the JSON solution format section.
Other Notable Changes#
Methods GRBModel::resetParams, which reset the parameters on a given model to their default values, were added in C++, Java, and .NET. Similar functions already existed in C and Python.
Added the FixVarsInIndicators parameter to control how indicator constraints are treated when creating the fixed model.
Added the StartTimeLimit and StartWorkLimit parameters to set limits on the sub-MIP solve for a partial MIP start.
A new default value of -1 has been introduced for the LPWarmStart parameter. This is equivalent to the previous default value of 1 for all algorithms except the new PDHG algorithm; for PDHG, the default value is equivalent to 2.
Added the ImproveStartWork parameter to set the amount of work after which the solver should switch to the solution improvement phase.
If the ImproveStartTime or ImproveStartWork parameters are set and this limit is hit during root node processing, Gurobi now interrupts the root node processing and goes directly into the solution improvement phase.
Added the MasterKnapsackCuts parameter to control the generation of cuts derived from the master knapsack polytope.
Added the PoolNMaxVio, PoolNBoundVio, PoolNBoundVioIndex, PoolNBoundVioSum, PoolNConstrVio, PoolNConstrVioIndex, PoolNConstrVioSum, PoolNIntVio, PoolNIntVioIndex, and PoolNIntVioSum attributes to query quality data for all solutions in the MIP solution pool.
New values have been added to the NLPHeur parameter. With values 2 and 3, the NLP heuristic is called more aggressively than before. A new default value of -1 for NLPHeur was also introduced.
The PRESOLVE callback is now invoked by the solver even when using Gurobi Remote Services.
Deprecated functionality#
If you are upgrading from a previous version of Gurobi, we recommend
first running your code with Gurobi 12 and warnings enabled to catch
deprecations in gurobipy. Fixing these deprecated usages will help keep your code compatible with Gurobi 13 and later versions. Warnings can be
enabled by running your code with the -X dev
or -W default
flags. See the Python Development
Mode or
warnings package
documentation for further details.
In Gurobi 13, the following usage is deprecated and will be removed in a future version:
The attributes Xn and PoolObjVal are deprecated. Use the PoolNX and PoolNObjVal attributes instead. This is to unify the attribute naming with the new solution pool quality attributes like PoolNMaxVio and also with existing multi-objective attributes (e.g., ObjNPriority) and multi-scenario attributes (e.g., ScenNLB).
Removal of deprecated functionality#
Removal of the interactive shell#
The interactive shell has been removed from the Gurobi installation. You can
achieve similar functionality by installing gurobipy
in any Python
environment and running from gurobipy import *
when you first start the
Python interpreter. While such wildcard imports may be convenient for
interactive use of the Python interpreter, we recommend using the following pattern in optimization applications:

    import gurobipy as gp

    with gp.Env() as env, gp.Model(env=env) as model:
        pass
Removals from Gurobipy#
A number of deprecated functions have been removed from gurobipy: