Additions, Changes and Removals in Gurobi 13.0#
Release Highlights#
Gurobi V13 provides performance improvements across a variety of model families, notably on MIP and MINLP. No parameter settings or application code changes are necessary to benefit from these performance improvements. The details for these improvements will be provided after the beta period.
A new nonlinear barrier method is included in Gurobi V13 as a preview feature. This solver makes it possible to find local optima for nonconvex continuous models more quickly than the global solver.
Primal-Dual Hybrid Gradient (PDHG) has been added to our suite of algorithms for solving linear programs (LPs). By default, it will run on the CPU, but it has optional GPU acceleration.
New Features#
NL Barrier Method for Solving NLPs to Local Optimality#
Important
We consider this feature a preview in this release. This means that it is fully tested and supported, but will likely undergo significant changes in subsequent Gurobi technical or major releases, potentially including breaking changes in API, behavior and packaging.
You can now ask Gurobi to look for a locally optimal solution to your nonlinear continuous optimization problems (NLPs). It will do so using a variant of the barrier algorithm.
For problems without discrete elements (such as integer variables, SOS constraints, or piecewise-linear functions), this solver might be preferable to the global MINLP solver when the latter takes too long; for example, due to a large number of variables and/or constraints. It does not guarantee a globally optimal solution, unless the problem is convex. Instead, it looks for feasible solutions for which the objective value cannot be easily improved by small changes in the optimization variables. These local optima can typically be computed much faster and are sufficient in many settings, particularly when a good initial guess of the optimal solution is available.
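The distinction between local and global optima can be illustrated with a toy example. The sketch below is not Gurobi's nonlinear barrier method (which is far more sophisticated); it is plain gradient descent on a small nonconvex function, showing how the starting point determines which local optimum is found:

```python
def gradient_descent(grad, x0, lr=0.01, iters=5000):
    """Plain gradient descent; converges to a nearby stationary point."""
    x = x0
    for _ in range(iters):
        x -= lr * grad(x)
    return x

f = lambda x: (x**2 - 1) ** 2 + 0.3 * x      # nonconvex, two local minima
grad = lambda x: 4 * x * (x**2 - 1) + 0.3

x_a = gradient_descent(grad, 1.0)    # lands in the basin near x = +1
x_b = gradient_descent(grad, -1.0)   # lands in the basin near x = -1 (the global one)
```

Both results are locally optimal, but only `x_b` is globally optimal; a local NLP solver started near `x = 1` would legitimately return the other point.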
The optimization status codes LOCALLY_OPTIMAL and LOCALLY_INFEASIBLE have been added to represent the possible outcomes for the nonlinear barrier algorithm.
You can enable this solver by setting OptimalityTarget to 1, and you can adjust the behaviour of the algorithm by setting the parameters NLBarIterLimit, NLBarCFeasTol, NLBarDFeasTol, and NLBarPFeasTol.
After a successful optimization run, you can obtain the number of iterations that the NL barrier method performed by querying the NLBarIterCount attribute.
PDHG Algorithm#
Primal-Dual Hybrid Gradient (PDHG) has been added to our suite of algorithms for solving linear programs (LPs).
You can enable this solver when solving an LP or MIP
by setting the Method parameter to GRB_METHOD_PDHG (6),
and you can adjust the behaviour of the algorithm by setting the parameters
PDHGAbsTol, PDHGConvTol,
PDHGRelTol, and PDHGIterLimit.
After a successful optimization run, you can obtain the number of iterations that PDHG performed by querying the PDHGIterCount attribute.
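The core PDHG iteration is simple enough to sketch in a few lines. The following is the basic Chambolle-Pock form for a tiny equality-constrained LP, not Gurobi's implementation; step sizes, stopping tests, and scalings in a production solver are considerably more elaborate:

```python
def pdhg_lp(c, A, b, tau=0.4, sigma=0.4, iters=5000):
    """Basic PDHG for min c.x s.t. Ax = b, x >= 0 (toy, dense, pure Python).
    Requires tau * sigma * ||A||^2 < 1 for convergence."""
    n, m = len(c), len(b)
    x = [0.0] * n
    y = [0.0] * m
    for _ in range(iters):
        # primal step: gradient step on the Lagrangian, projected onto x >= 0
        x_new = [max(0.0, x[j] - tau * (c[j] + sum(A[i][j] * y[i] for i in range(m))))
                 for j in range(n)]
        # dual step uses the extrapolated point 2*x_new - x
        y = [y[i] + sigma * (sum(A[i][j] * (2 * x_new[j] - x[j]) for j in range(n)) - b[i])
             for i in range(m)]
        x = x_new
    return x

# min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0   ->  optimum at x = (1, 0)
x = pdhg_lp([1.0, 2.0], [[1.0, 1.0]], [1.0])
```

Each iteration needs only matrix-vector products with `A` and its transpose, which is what makes the method attractive for very large LPs and for GPU execution.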
PDHG on NVIDIA GPUs#
Important
This feature is still considered beta in this release. This means that we invite users to try this feature, but the code did not undergo as much testing as the rest of the product. This feature should not be used in production settings, and technical support is provided on a best-effort basis.
By default, the PDHG algorithm will run on the CPU, but it can take advantage of NVIDIA GPUs. You can set the parameter PDHGGPU to 1 to specify that PDHG should run on the GPU if available.
NoRel Heuristic for a Limited Number of Solutions#
A new parameter NoRelHeurSolutions has been added to specify that the NoRel heuristic should run, and stop when it has found a specific number of solutions. This can be useful when the time to find these solutions is difficult to predict beforehand.
NoRel Heuristic with Variable Hints#
The NoRel heuristic now takes user-provided variable hints (see the VarHintVal attribute) into account. This can lead to finding solutions that are in the neighborhood of the variable hint values more quickly.
Specify where Flags for Callbacks#
You can now specify for which where flags a callback should be
invoked. This allows the optimizer to
send information from a remote worker to the client only when that information will be useful. This has
a positive performance impact since the remote worker does not need to wait
for the client to acknowledge these messages. In particular, when the solution
vectors are not needed during the solve, we observed a performance
improvement of more than a factor of two for instances that produce many
solutions during the solve.
The following changes have been made to support this feature across all APIs:
In C, the new function GRBsetcallbackfuncadv allows you to provide a bit vector specifying for which where flags the callback should be invoked.
In C++, the setCallback function now accepts an additional optional argument. The optional argument allows you to provide a bit vector specifying for which where flags the callback should be invoked.
In Java, an additional version of the setCallback function has been added. This additional overload allows you to provide a bit vector specifying for which where flags the callback should be invoked.
In .NET, the SetCallback function now accepts an additional optional argument. The optional argument allows you to provide a bit vector specifying for which where flags the callback should be invoked.
In gurobipy, Model.optimize, Model.optimizeAsync, and Model.computeIIS now all accept an optional wheres argument. The optional argument allows you to provide a list of where flags for which the callback should be invoked.
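The bit-vector mechanism can be illustrated in plain Python. The `where` codes below are hypothetical placeholders (the real values are defined by the Gurobi API); the point is only how a mask built from a list of codes filters callback invocations:

```python
# Hypothetical where codes for illustration; real values come from the Gurobi API.
WHERE_MIPSOL = 4
WHERE_MIPNODE = 5
WHERE_MESSAGE = 6

def make_dispatcher(callback, wheres):
    """Build a bit vector from a list of where codes and forward only
    callback invocations whose where code is enabled in it."""
    mask = 0
    for w in wheres:
        mask |= 1 << w

    def dispatch(where, data):
        if mask & (1 << where):
            callback(where, data)
    return dispatch

seen = []
dispatch = make_dispatcher(lambda where, data: seen.append(where),
                           [WHERE_MIPSOL, WHERE_MESSAGE])
dispatch(WHERE_MIPSOL, "new incumbent")   # forwarded
dispatch(WHERE_MIPNODE, "node data")      # filtered out
dispatch(WHERE_MESSAGE, "log line")       # forwarded
```

With such a mask, a remote worker can decide locally which events are worth sending to the client at all, which is where the performance gain described above comes from.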
Additional Option for Thread Usage#
The Threads parameter now accepts a special value of -1. When you set this value, Gurobi may use as many threads as there are virtual processors detected on the machine. The automatic setting (0), which is the default value, will use at most 32 threads even if the machine is larger. Refer to the description of the parameter for further details.
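The resolution of the special values can be summarized with a small helper. This is a hypothetical illustration of the documented semantics, not part of the Gurobi API:

```python
import os

def resolve_threads(threads_param):
    """Illustrate the documented Threads semantics (hypothetical helper,
    not Gurobi code)."""
    cores = os.cpu_count() or 1       # virtual processors detected on the machine
    if threads_param == -1:           # new in 13.0: use all virtual processors
        return cores
    if threads_param == 0:            # automatic default: capped at 32
        return min(cores, 32)
    return threads_param              # explicit thread count
```

On machines with more than 32 virtual processors, `-1` is therefore the setting that unlocks the full machine, while the default keeps resource usage bounded.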
Additional Operations in Nonlinear Expressions#
Nonlinear Constraints have been extended to handle two new nonlinear operations:
the hyperbolic tangent function (OPCODE_TANH)
the signed power function (OPCODE_SIGNPOW). The signed power function is defined as \(\text{signpow}(x, a) = \text{sign}(x) |x|^a\), where \(\text{sign}(x)\) denotes the sign of \(x\) and \(a \in \mathbb{R}_{\geq 1}\). For example, \(\text{signpow}(x, 2) = x |x|\).
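The signed power definition translates directly into code; the following is a small reference implementation of the mathematical definition above, not the solver's internal routine:

```python
import math

def signpow(x, a):
    """Signed power: sign(x) * |x|**a, with exponent a >= 1."""
    if a < 1:
        raise ValueError("signpow requires a >= 1")
    # copysign applies the sign of x to the magnitude |x|**a
    return math.copysign(abs(x) ** a, x)
```

For example, `signpow(-2, 2)` gives `-4`, matching the identity `signpow(x, 2) = x * |x|`.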
Extended Log Output#
The model statistics reported at the beginning of the log (Header) now include the model sense and the number of non-zero linear objective coefficients.
If you interrupt a MIP optimization and then resume it by calling the optimize method again, the final log output that displays the number of processed nodes, simplex iterations, runtime and work spent has changed slightly in Gurobi 13. In prior versions, this log output showed only the time and work spent on the most recent optimize call. With Gurobi 13, this log line displays the total, accumulated time and work spent on solving the given MIP model, and an additional log line displays the time and work spent on just the most recent optimize call.
For example, consider a case in which you set a time limit of 2 seconds and call the optimize method on a MIP model twice. This way, the optimizer will run at most 4 seconds in total, 2 seconds for each of the two optimize calls. In prior versions of Gurobi, the final log line after the second solve could have looked like this:
Explored 5274 nodes (75223 simplex iterations) in 2.00 seconds (0.67 work units)
It would have shown the total, accumulated number of explored nodes and simplex iterations since the initial optimize call, but only the time and work spent on the second optimize call.
With Gurobi 13, you would get the following log output:
Explored 5274 nodes (75223 simplex iterations) in 4.01 seconds (1.05 work units)
Most recent optimization runtime was 2.00 seconds (0.67 work units)
The first line shows total, accumulated values for all statistics: node counts, iteration counts, time and work. The additional log line shows the time and work spent on the second optimize call.
Ignore Parameter Settings for Tuning#
With the new parameter TuneIgnoreSettings, you can now specify
parameter settings that the tuner should skip during its run. This is
particularly useful when continuing an interrupted tuning
process: by providing the parameter settings already tested
in the previous run, the tuner avoids re-evaluating them. To support this, the
tuner writes a parameter file at the end of each tuning run, listing all tested
parameter configurations. The default name for this file is tune-all.prm.
Branching Priority and Multiple MIP starts in Tuner#
In addition to MIP starts, the tuner now considers branching priorities. These can be provided using an ATTR file or by setting the variable attribute BranchPriority. If multiple MIP starts are given, the tuner now takes all of them into account. For more details, see the respective sections of the Parameter Tuning Tool.
Control Parameter Inheritance#
When working with Concurrent Environments or Multiobjective Environments, the new parameter InheritParams controls whether parameters from a main environment should be inherited. This is, for example, useful when tuning multi-objective models.
Multi-objective attributes#
After solving a multi-objective model, you can retrieve information about the different optimization passes. The attribute NumObjPasses gives the number of optimization passes that were conducted in the last solve. For each optimization pass that was processed, the following attributes can be queried. Use the parameters ObjPassNumber or ObjNumber to specify the optimization pass you’re interested in:
Number of simplex iterations in the selected optimization pass
MIP gap for the selected optimization pass
Number of explored nodes in the selected optimization pass
Objective bound for the selected optimization pass
Objective value for the selected optimization pass
Number of unexplored nodes in the selected optimization pass
Runtime for the selected optimization pass
Status for the selected optimization pass
Deterministic work for the selected optimization pass
In addition, the attribute ObjNPass gives the index of the optimization pass of the objective function specified with parameter ObjNumber.
Barrier Optimization Status before starting Crossover#
When solving an LP with the barrier algorithm and crossover, the new attribute BarStatus returns the solution status of the barrier optimizer before starting crossover. This can help in interpreting the solution vectors that can be accessed via the BarX and BarPi attributes.
Changes to gurobipy#
The behaviour of the global methods setParam and resetParams, which use the default environment, has changed. In previous versions, these functions applied parameter changes to any Model objects found in the __main__ namespace of a Python script. This was inconsistent with the behaviour of other environments, and with Model objects stored within other data structures. These functions no longer affect already created Model objects.
Model.optimize, Model.optimizeAsync, and Model.computeIIS now all accept an optional wheres argument. The optional argument allows you to specify a list of where flags for which the callback should be invoked.
A callback function can now be provided to Model.tune. This callback functionality allows the tuner to be terminated programmatically from a callback. See Callbacks in the Tuner for details.
The Global Interpreter Lock (GIL) is now released when starting an environment. This avoids a potential deadlock in multithreaded Python code when an environment takes some time to start (for example, when a job is queued on a compute server).
A running optimization can now be gracefully interrupted in Jupyter notebooks on Windows.
A LinExpr.linTerms method has been added which iterates over the individual terms of a LinExpr expression object.
New methods QuadExpr.linTerms and QuadExpr.quadTerms have been added which iterate over the individual linear and quadratic terms, respectively, of a QuadExpr expression object.
New methods Model.getQ and Model.getQCMatrices have been added which query quadratic objective terms and quadratic constraint terms, returning scipy.sparse representations.
The loadModel function has been added, which allows Model objects to be built directly from input data without creating Var or Constr objects.
Model.getAttr and Model.setAttr can now be called for array attributes without the need to pass modeling objects. The performance of both methods has been improved.
Changes to MATLAB API#
The solution pool return field xn has been renamed to poolnx; see the gurobi function.
Changes to R API#
The solution pool return component xn has been renamed to poolnx; see the gurobi function.
Changes to C++ API#
GRBException now inherits from std::runtime_error, which allows you to catch Gurobi exceptions via standard library types.
Changes to JSON solution file format#
The fields Xn and PoolObjVal in the JSON solution file format have been renamed to PoolNX and PoolNObjVal, respectively. See the JSON solution format section.
Other Notable Changes#
Methods GRBModel::resetParams, which reset the parameters on a given model to their default values, have been added to the C++, Java and .NET APIs. Similar functions already existed in C and Python.
A new parameter FixVarsInIndicators has been added to control how indicator constraints are treated when creating the fixed model.
New parameters StartTimeLimit and StartWorkLimit have been added to set limits on the sub-MIP solve for a partial MIP start.
A new default value of -1 has been introduced for the LPWarmStart parameter. This is equivalent to the previous default value of 1 for all algorithms except the new PDHG algorithm. For PDHG, the default value is equivalent to 2.
A new parameter ImproveStartWork has been added to set the amount of work after which the solver should switch to the solution improvement phase.
If the ImproveStartTime or ImproveStartWork parameters are set and either limit is hit during the root node processing, Gurobi will now interrupt the root node processing and go directly into the solution improvement phase.
A new parameter MasterKnapsackCuts has been added to control the generation of cuts derived from the master knapsack polytope.
New attributes PoolNMaxVio, PoolNBoundVio, PoolNBoundVioIndex, PoolNBoundVioSum, PoolNConstrVio, PoolNConstrVioIndex, PoolNConstrVioSum, PoolNIntVio, PoolNIntVioIndex and PoolNIntVioSum have been added to query quality data for all solutions in the MIP solution pool.
New values have been added to the NLPHeur parameter. With values 2 and 3, the NLP heuristic is called more aggressively than before. A new default value of -1 for NLPHeur was also introduced.
The PRESOLVE callback is now invoked by the solver even when using Gurobi Remote Services.
Deprecated functionality#
If you are upgrading from a previous version of Gurobi, we recommend
first running your code with Gurobi 12 and warnings enabled to catch
deprecations in gurobipy. Fixing these deprecated usages will help
to keep compatibility for Gurobi 13 and later versions. Warnings can be
enabled by running your code with the -X dev or -W default
flags. See the Python Development
Mode or
warnings package
documentation for further details.
In Gurobi 13, the following usage is deprecated and will be removed in a future version:
The attributes Xn and PoolObjVal are deprecated. Use the PoolNX and PoolNObjVal attributes instead. This is to unify the attribute naming with the new solution pool quality attributes like PoolNMaxVio and also with existing multi-objective attributes (e.g., ObjNPriority) and multi-scenario attributes (e.g., ScenNLB).
Function constraints are deprecated. This applies to all APIs and the corresponding function constraint related attributes FuncPieceError, FuncPieceLength, FuncPieceRatio, FuncPieces, FuncNonlinear and parameters FuncPieceError, FuncPieceLength, FuncPieceRatio, FuncPieces, FuncNonlinear. Please use nonlinear constraints instead.
Removal of deprecated functionality#
Removal of the interactive shell#
The interactive shell has been removed from the Gurobi installation. You can
achieve similar functionality by installing gurobipy in any Python
environment and running from gurobipy import * when you first start the
Python interpreter. While such wildcard imports may be convenient for
interactive use of the Python interpreter, we recommend using the pattern
import gurobipy as gp
with gp.Env() as env, gp.Model(env=env) as model:
    pass
in optimization applications.
Removals from gurobipy#
A number of deprecated functions have been removed from gurobipy: