Additions, Changes and Removals in Gurobi 13.0#

Release Highlights#

  • Gurobi V13 provides performance improvements across a variety of model families, notably MIP and MINLP. No parameter settings or application code changes are necessary to benefit from these performance improvements. Details of these improvements will be provided after the beta period.

  • A new nonlinear barrier method is included in Gurobi V13 as a preview feature. This solver makes it possible to find local optima for nonconvex continuous models more quickly than the global solver.

  • Primal-Dual Hybrid Gradient (PDHG) has been added to our suite of algorithms for solving linear programs (LPs). By default, it will run on the CPU, but it has optional GPU acceleration.

New Features#

NL Barrier Method for Solving NLPs to Local Optimality#

Important

We consider this feature a preview in this release. This means that it is fully tested and supported, but will likely undergo significant changes in subsequent Gurobi technical or major releases, potentially including breaking changes in API, behavior and packaging.

You can now ask Gurobi to look for a locally optimal solution to your nonlinear continuous optimization problems (NLPs). It will do so using a variant of the barrier algorithm.

For problems without discrete elements (such as integer variables, SOS constraints, or piecewise-linear functions), this solver might be preferable to the global MINLP solver when the latter takes too long; for example, due to a large number of variables and/or constraints. It does not guarantee a globally optimal solution, unless the problem is convex. Instead, it looks for feasible solutions for which the objective value cannot be easily improved by small changes in the optimization variables. These local optima can typically be computed much faster and are sufficient in many settings, particularly when a good initial guess of the optimal solution is available.
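
As a plain-Python aside (this is illustrative only, not Gurobi code), the dependence of a local method on its starting point can be shown with simple gradient descent on a one-dimensional nonconvex function:

```python
# Illustration (not Gurobi's algorithm): a local method started from a
# given initial guess finds the nearest local minimum, which need not
# be the global one.
def f(x):
    return (x * x - 1.0) ** 2 + 0.3 * x

def fprime(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

x = 0.8  # initial guess in the right-hand basin
for _ in range(500):
    x -= 0.05 * fprime(x)

# x settles at the local minimum near 0.96, even though the global
# minimum lies near x = -1.04 with a lower objective value.
```

Started from x = -0.5 instead, the same loop would descend into the global minimum's basin, which is why a good initial guess matters for local solvers.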

The optimization status codes LOCALLY_OPTIMAL and LOCALLY_INFEASIBLE have been added to represent the possible outcomes for the nonlinear barrier algorithm.

You can enable this solver by setting OptimalityTarget to 1, and you can adjust the behaviour of the algorithm by setting the parameters NLBarIterLimit, NLBarCFeasTol, NLBarDFeasTol, and NLBarPFeasTol.

After a successful optimization run, you can obtain the number of iterations that the NL barrier method performed by querying the NLBarIterCount attribute.

PDHG Algorithm#

Primal-Dual Hybrid Gradient (PDHG) has been added to our suite of algorithms for solving linear programs (LPs).

You can enable this solver when solving an LP or MIP by setting the Method parameter to GRB_METHOD_PDHG (6), and you can adjust the behaviour of the algorithm by setting the parameters PDHGAbsTol, PDHGConvTol, PDHGRelTol, and PDHGIterLimit.

After a successful optimization run, you can obtain the number of iterations that PDHG performed by querying the PDHGIterCount attribute.
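
To give a flavor of the algorithm family (this toy loop is illustrative only and is in no way Gurobi's implementation), PDHG alternates projected gradient steps on the primal and dual variables of a saddle-point formulation of the LP:

```python
# Toy sketch of the PDHG update rule on a tiny LP:
#   minimize    x1 + 2*x2
#   subject to  x1 + x2 = 1,  x1, x2 >= 0
# Saddle-point form: min_{x>=0} max_y  c.x + y*(A.x - b)
c = [1.0, 2.0]
A = [1.0, 1.0]     # single equality constraint A.x = b
b = 1.0
tau = sigma = 0.4  # step sizes; need tau*sigma*||A||^2 < 1

x = [0.0, 0.0]
y = 0.0
for _ in range(2000):
    # Primal step: gradient step on c + A^T y, projected onto x >= 0
    x_new = [max(0.0, x[j] - tau * (c[j] + A[j] * y)) for j in range(2)]
    # Dual step uses the extrapolated point 2*x_new - x
    x_bar = [2 * x_new[j] - x[j] for j in range(2)]
    y += sigma * (sum(A[j] * x_bar[j] for j in range(2)) - b)
    x = x_new
# x approaches the optimum (1, 0) with objective value 1
```

Because the iterations consist of matrix-vector products and simple projections, the method maps naturally onto GPUs, which motivates the GPU option described below.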

PDHG on NVIDIA GPUs#

Important

This feature is still considered beta in this release. This means that we invite users to try this feature, but the code didn’t undergo as much testing as the rest of the product. This feature shouldn’t be used in production settings, and technical support for it is provided on a best-effort basis.

By default, the PDHG algorithm will run on the CPU, but it can take advantage of NVIDIA GPUs. You can set the parameter PDHGGPU to 1 to specify that PDHG should run on the GPU if available.

NoRel Heuristic for a Limited Number of Solutions#

A new parameter NoRelHeurSolutions has been added to specify that the NoRel heuristic should run and stop once it has found a specified number of solutions. This can be useful when the time needed to find these solutions is difficult to predict beforehand.

NoRel Heuristic with Variable Hints#

The NoRel heuristic now takes user-provided variable hints (see the VarHintVal attribute) into account. This can lead to finding solutions in the neighborhood of the hint values more quickly.

Specify where Flags for Callbacks#

You can now specify for which where flags a callback should be invoked. This allows the optimizer to send information from a remote worker to the client only when that information will be useful. This has a positive performance impact since the remote worker does not need to wait for the client to acknowledge these messages. In particular, when the solution vectors are not needed during the solve, we observed a performance improvement of more than a factor of two for instances that produce many solutions during the solve.

The following changes have been made to support this feature across all APIs:

  • In C, the new function GRBsetcallbackfuncadv allows you to provide a bit vector specifying for which where flags the callback should be invoked.

  • In C++, the setCallback function now accepts an additional optional argument. The optional argument allows you to provide a bit vector specifying for which where flags the callback should be invoked.

  • In Java, an additional version of the setCallback function has been added. This additional overload allows you to provide a bit vector specifying for which where flags the callback should be invoked.

  • In .NET, the SetCallback function now accepts an additional optional argument. The optional argument allows you to provide a bit vector specifying for which where flags the callback should be invoked.

  • In gurobipy, Model.optimize, Model.optimizeAsync, and Model.computeIIS now all accept an optional wheres argument. The optional argument allows you to provide a list of where flags for which the callback should be invoked.
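
For the C-level variant, one plausible way to assemble such a bit vector is to set one bit per callback code; note that the exact encoding expected by GRBsetcallbackfuncadv is an assumption here, so consult the reference manual for the definitive format:

```python
# Hypothetical sketch: combining callback "where" codes into a bit
# vector, one bit per code. The encoding actually expected by
# GRBsetcallbackfuncadv is an assumption; check the reference manual.
CB_MIPSOL = 4   # callback code for new incumbent solutions (GRB_CB_MIPSOL)
CB_MESSAGE = 6  # callback code for log messages (GRB_CB_MESSAGE)

def where_mask(*codes):
    """Set the bit corresponding to each requested callback code."""
    mask = 0
    for code in codes:
        mask |= 1 << code
    return mask

# Invoke the callback only for MIPSOL and MESSAGE events:
mask = where_mask(CB_MIPSOL, CB_MESSAGE)  # 0b1010000
```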

Additional Option for Thread Usage#

The Threads parameter now accepts a special value of -1. When you set this value, Gurobi may use as many threads as there are virtual processors detected on the machine. The automatic setting (0), which is the default value, will use at most 32 threads even if the machine is larger. Refer to the description of the parameter for further details.
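
In plain Python terms (the CPU query below is illustrative; Gurobi performs its own detection of virtual processors), the two settings compare as follows:

```python
import os

# Logical (virtual) processor count of the current machine.
logical_cpus = os.cpu_count() or 1

# Threads=-1: Gurobi may use one thread per virtual processor.
threads_minus_one = logical_cpus

# Threads=0 (automatic, the default): at most 32 threads,
# even on machines with more virtual processors.
threads_auto = min(logical_cpus, 32)
```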

Additional Operations in Nonlinear Expressions#

Nonlinear Constraints have been extended to handle two new nonlinear operations:

  • the hyperbolic tangent function (OPCODE_TANH)

  • the signed power function (OPCODE_SIGNPOW). The signed power function is defined as \(\text{signpow}(x, a) = \text{sign}(x) |x|^a\), where \(\text{sign}(x)\) denotes the sign of \(x\) and \(a \in \mathbb{R}_{\geq 1}\). For example, \(\text{signpow}(x, 2) = x |x|\).
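
A minimal pure-Python reference for this definition (the mathematical function only, not Gurobi's implementation):

```python
import math

def signpow(x, a):
    """Signed power: sign(x) * |x|**a, for a >= 1."""
    if x == 0:
        return 0.0
    return math.copysign(abs(x) ** a, x)
```

For example, signpow(-3, 2) evaluates to -9, matching signpow(x, 2) = x|x|; unlike x**2, the signed power is monotone and odd.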

Extended Log Output#

The model statistics reported at the beginning of the log (Header) now include the model sense and the number of non-zero linear objective coefficients.

If you interrupt a MIP optimization and then resume it by calling the optimize method again, the final log output that displays the number of processed nodes, simplex iterations, runtime and work spent has slightly changed in Gurobi 13. In prior versions, the log output only showed the time and work spent on the most recent optimize call. With Gurobi 13, this log line displays the total, accumulated time and work spent on solving the given MIP model, and it adds another log line to display the time and work spent just on the most recent optimize call.

For example, consider a case in which you set a time limit of 2 seconds and call the optimize method on a MIP model twice. This way, the optimizer will run at most 4 seconds in total, 2 seconds for each of the two optimize calls. In prior versions of Gurobi, the final log line after the second solve could have looked like this:

Explored 5274 nodes (75223 simplex iterations) in 2.00 seconds (0.67 work units)

It would have shown the total, accumulated number of explored nodes and simplex iterations since the initial optimize call, but only the time and work spent on the second optimize call.

With Gurobi 13, you would get the following log output:

Explored 5274 nodes (75223 simplex iterations) in 4.01 seconds (1.05 work units)
Most recent optimization runtime was 2.00 seconds (0.67 work units)

The first line shows total, accumulated values for all statistics: node counts, iteration counts, time and work. The additional log line shows the time and work spent on the second optimize call.

Ignore Parameter Settings for Tuning#

With the new parameter TuneIgnoreSettings, you can now specify parameter settings that the tuner should skip during its run. This is particularly useful when continuing an interrupted tuning process: by providing the parameter settings already tested in the previous run, the tuner avoids re-evaluating them. To support this, the tuner writes a parameter file at the end of each tuning run, listing all tested parameter configurations. The default name for this file is tune-all.prm.

Branching Priority and Multiple MIP starts in Tuner#

In addition to MIP starts, the tuner now considers branching priorities. These can be provided using an ATTR file or by setting the variable attribute BranchPriority. If multiple MIP starts are given, the tuner now uses all of them. For more details, see the respective sections of the Parameter Tuning Tool.

Control Parameter Inheritance#

When working with Concurrent Environments or Multiobjective Environments, the new parameter InheritParams controls whether parameters from a main environment should be inherited. This is useful, for example, when tuning multi-objective models.

Multi-objective Attributes#

After solving a multi-objective model, you can retrieve information about the different optimization passes. The attribute NumObjPasses gives the number of optimization passes that were conducted in the last solve. For each optimization pass that was processed, the following attributes can be queried. Use the parameters ObjPassNumber or ObjNumber to specify the optimization pass you’re interested in:

Attribute name          Short description

ObjPassNIterCount       Number of simplex iterations in the selected optimization pass
ObjPassNMipGap          MIP gap for the selected optimization pass
ObjPassNNodeCount       Number of explored nodes in the selected optimization pass
ObjPassNObjBound        Objective bound for the selected optimization pass
ObjPassNObjVal          Objective value for the selected optimization pass
ObjPassNOpenNodeCount   Number of unexplored nodes in the selected optimization pass
ObjPassNRuntime         Runtime for the selected optimization pass
ObjPassNStatus          Status for the selected optimization pass
ObjPassNWork            Deterministic work for the selected optimization pass

In addition, the attribute ObjNPass gives the index of the optimization pass of the objective function specified with parameter ObjNumber.

Barrier Optimization Status before starting Crossover#

When solving an LP with the barrier algorithm and crossover, the new attribute BarStatus returns the solution status of the barrier optimizer before starting crossover. This can help in interpreting the solution vectors that can be accessed via the BarX and BarPi attributes.

Changes to gurobipy#

  • The behaviour of the global methods setParam and resetParams, which use the default environment, has changed. In previous versions, these functions applied parameter changes to any Model objects found in the __main__ namespace of a Python script. This was inconsistent with the behaviour of other environments and with Model objects stored within other data structures. These functions no longer affect already created Model objects.

  • Model.optimize, Model.optimizeAsync, and Model.computeIIS now all accept an optional wheres argument. The optional argument allows you to specify a list of where flags for which the callback should be invoked.

  • A callback function can now be provided to Model.tune. This callback functionality allows the tuner to be programmatically terminated from a callback. See Callbacks in the Tuner for details.

  • The Global Interpreter Lock (GIL) is now released when starting an environment. This avoids a potential deadlock in multithreaded Python code when an environment takes some time to start (for example, when a job is queued in a compute server).

  • A running optimization can now be gracefully interrupted in Jupyter notebooks on Windows.

  • A LinExpr.linTerms method has been added which iterates over the individual terms of a LinExpr expression object.

  • New methods QuadExpr.linTerms and QuadExpr.quadTerms have been added which iterate over the individual linear and quadratic terms, respectively, of a QuadExpr expression object.

  • New methods Model.getQ and Model.getQCMatrices have been added, which query quadratic objective terms and quadratic constraint terms, returning scipy.sparse representations.

  • The loadModel function has been added, which allows Model objects to be built directly from input data without creating Var or Constr objects.

  • Model.getAttr and Model.setAttr can now be called for array attributes without the need to pass modeling objects. The performance of both methods has been improved.

Changes to MATLAB API#

  • The solution pool return field xn has been renamed to poolnx; see the gurobi function.

Changes to R API#

  • The solution pool return component xn has been renamed to poolnx; see the gurobi function.

Changes to C++ API#

  • GRBException now inherits from std::runtime_error, which allows you to catch Gurobi exceptions via standard library types.

Changes to JSON solution file format#

  • The fields Xn and PoolObjVal in the JSON solution file format have been renamed to PoolNX and PoolNObjVal, respectively. See the JSON solution format section.

Other Notable Changes#

Deprecated functionality#

If you are upgrading from a previous version of Gurobi, we recommend first running your code with Gurobi 12 and warnings enabled to catch deprecations in gurobipy. Fixing these deprecated usages will help maintain compatibility with Gurobi 13 and later versions. Warnings can be enabled by running your code with the -X dev or -W default flags. See the Python Development Mode or warnings package documentation for further details.

In Gurobi 13, the following usage is deprecated and will be removed in a future version:

Removal of deprecated functionality#

Removal of the interactive shell#

The interactive shell has been removed from the Gurobi installation. You can achieve similar functionality by installing gurobipy in any Python environment and running from gurobipy import * when you first start the Python interpreter. While such wildcard imports may be convenient for interactive use of the Python interpreter, we recommend using the pattern

import gurobipy as gp

with gp.Env() as env, gp.Model(env=env) as model:
    pass

in optimization applications.

Removals from gurobipy#

A number of deprecated functions have been removed from gurobipy:

  • The help() function has been removed. Use Python’s built-in help function instead for information on gurobipy methods and classes from within the Python interpreter.

  • The deprecated models() function has been removed.

  • The deprecated system() function has been removed; use os.system instead.