Solver Parameters to Manage Numerical Issues

Reformulating a model may not always be possible, or it may not completely resolve numerical issues. When you must solve a model that has numerical issues, some Gurobi parameters can be helpful. We discuss these now, in descending order of relevance.

Presolve

Gurobi presolve algorithms are designed to make a model smaller and easier to solve. However, in some cases, presolve can contribute to numerical issues. The following Python code can help you determine if this is happening. First, read the model file and print summary statistics for the presolved model:

import gurobipy as gp

m = gp.read('gurobi.rew')
p = m.presolve()
p.printStats()

If the numerical range looks much worse than the original model, try the parameter Aggregate=0:

m.reset()
m.Params.Aggregate = 0
p = m.presolve()
p.printStats()

If the resulting model is still numerically problematic, you may need to disable presolve completely by setting the parameter Presolve=0, then repeat the steps above:

m.reset()
m.Params.Presolve = 0
p = m.presolve()
p.printStats()

If the statistics look better with Aggregate=0 or Presolve=0, you should further test these parameters. For a continuous (LP) model, you can test them directly. For a MIP, you should compare the LP relaxation with and without these parameters. The following Python commands create three LP relaxations: the model without presolve, the model with presolve, and the model with Aggregate=0:

m = gp.read('gurobi.rew')
r = m.relax()
r.write('gurobi.relax-nopre.rew')
p = m.presolve()
r = p.relax()
r.write('gurobi.relax-pre.rew')
m.reset()
m.Params.Aggregate = 0
p = m.presolve()
r = p.relax()
r.write('gurobi.relax-agg0.rew')

With these three files, use the techniques mentioned earlier to determine if Presolve=0 or Aggregate=0 improves the numerics of the LP relaxation.

Finally, if Aggregate=0 helps numerics but makes the model too slow, try AggFill=0 instead.

Choosing the Right Algorithm

Gurobi Optimizer provides two main algorithms to solve continuous models and the continuous relaxations of mixed-integer models: barrier and simplex.

The barrier algorithm is usually fastest for large, difficult models. However, it is also more numerically sensitive. Moreover, even when the barrier algorithm converges, the crossover algorithm that usually follows it can stall due to numerical issues.

The simplex method is often a good alternative, since it is generally less sensitive to numerical issues. To use dual simplex or primal simplex, set the Method parameter to 1 or 0, respectively.

Note that, in many optimization applications, not all problem instances have numerical issues. Thus, choosing simplex exclusively may sacrifice the performance benefits of the barrier algorithm on numerically well-behaved instances. In such cases, you should use the concurrent optimizer, which runs multiple algorithms simultaneously and returns the solution from the first one to finish. The concurrent optimizer is the default for LP models, and can be selected for MIP by setting the Method parameter to 3 or 4.

For detailed control over the concurrent optimizer, you can create concurrent environments, where you can set specific algorithmic parameters for each concurrent solve. For example, you can create one concurrent environment with Method=0 and another with Method=1 to use primal and dual simplex simultaneously. Finally, you can use concurrent optimization with multiple distinct computers using distributed optimization. On a single computer, the different algorithms run on multiple threads, each using different processor cores. With distributed optimization, independent computers run the separate algorithms, which can be faster since the computers do not compete for access to memory.

Making the Algorithm Less Sensitive

When all else fails, try the following parameters to make the algorithms more robust:

ScaleFlag, ObjScale (All models)

It is always best to reformulate a model yourself. However, for cases when that is not possible, these two parameters provide some of the same benefits. Set ScaleFlag=2 for aggressive scaling of the coefficient matrix. ObjScale rescales the objective row; a negative value will use the largest objective coefficient to choose the scaling. For example, ObjScale=-0.5 will divide all objective coefficients by the square root of the largest objective coefficient.

NumericFocus (All models)

The NumericFocus parameter controls how the solver manages numerical issues. Settings 1-3 increasingly shift the focus towards more care in numerical computations, which can impact performance. The NumericFocus parameter employs a number of strategies to improve numerical behavior, including the use of quad precision and a tighter Markowitz tolerance. It is generally sufficient to try different values of NumericFocus. However, when NumericFocus helps numerics but makes everything much slower, you can try setting Quad to 1 and/or MarkowitzTol to larger values such as 0.1 or 0.5.

NormAdjust (Simplex)

In some cases, the solver can be more robust with different values of the simplex pricing norm. Try setting NormAdjust to 0, 1, 2 or 3.

BarHomogeneous (Barrier)

For models that are infeasible or unbounded, the default barrier algorithm may have numerical issues. Try setting BarHomogeneous=1.

CrossoverBasis (Barrier)

Setting CrossoverBasis=1 takes more time but can be more robust when creating the initial crossover basis.

GomoryPasses (MIP)

In some MIP models, Gomory cuts can contribute to numerical issues. Setting GomoryPasses=0 may help numerics, but it may make the MIP more difficult to solve.

Cuts (MIP)

In some MIP models, various cuts can contribute to numerical issues. Setting Cuts=1 or Cuts=0 may help numerics, but it may make the MIP more difficult to solve.

Tolerance values (FeasibilityTol, OptimalityTol, IntFeasTol) are generally not helpful for addressing numerical issues. Numerical issues are better handled through model reformulation.