Parameter Tuning Tool#
The Gurobi Optimizer provides a wide variety of parameters that allow you to control the operation of the optimization engines. The level of control varies from extremely coarse-grained (e.g., the Method parameter, which allows you to choose the algorithm used to solve continuous models) to very fine-grained (e.g., the MarkowitzTol parameter, which allows you to adjust the tolerances used during simplex basis factorization). While these parameters provide a tremendous amount of user control, the immense space of possible options can present a significant challenge when you are searching for parameter settings that improve performance on a particular model. The purpose of the Gurobi tuning tool is to automate this search.
The Gurobi tuning tool performs multiple solves on your model, choosing different parameter settings for each solve, in a search for settings that improve runtime. The longer you let it run, the more likely it is to find a significant improvement. If you are using a Gurobi Compute Server, you can harness the power of multiple machines to perform distributed parallel tuning in order to speed up the search for effective parameter settings.
The tuning tool can be invoked through two different interfaces. You can either use the grbtune command-line tool, or you can invoke it from one of our programming language APIs. Both approaches share the same underlying tuning algorithm. The command-line tool offers more tuning features; for example, it allows you to provide a list of models to tune, or to specify a list of base settings to try (TuneBaseSettings).
A number of tuning-related parameters allow you to control the operation of the tuning tool. The most important is probably TuneTimeLimit, which controls the amount of time spent searching for an improving parameter set. Other parameters include TuneTrials (which attempts to limit the impact of randomness on the result), TuneCriterion (which specifies the tuning criterion), TuneResults (which controls the number of results that are returned), and TuneOutput (which controls the amount of output produced by the tool).
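As an illustration, several of these parameters can be combined in a single grbtune invocation (the command-line syntax is described in the next section; the model file name and the parameter values shown here are placeholders):
> grbtune TuneTimeLimit=3600 TuneTrials=3 misc07.mps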
Before we discuss the actual operation of the tuning tool, let us first provide a few caveats about the results. While parameter settings can have a big performance effect for many models, they aren’t going to solve every performance issue. One reason is simply that there are many models for which even the best possible choice of parameter settings won’t produce an acceptable result. Some models are simply too large and/or difficult to solve, while others may have numerical issues that can’t be fixed with parameter changes.
Another limitation of automated tuning is that performance on a model can experience significant variations due to random effects (particularly for MIP models). This is the nature of search. The Gurobi algorithms often have to choose from among multiple, equally appealing alternatives. Seemingly innocuous changes to the model (such as changing the order of the constraints or variables), or subtle changes to the algorithm (such as modifying the random number seed), can lead to different choices. Oftentimes, breaking a single tie in a different way can lead to an entirely different search. We’ve seen cases where subtle changes in the search produce 100X performance swings. While the tuning tool tries to limit the impact of these effects, the final result will typically still be heavily influenced by such issues.
The bottom line is that automated performance tuning is meant to give suggestions for parameters that could produce consistent, reliable improvements on your models. It is not meant to be a replacement for efficient modeling or careful performance testing.
Command-Line Tuning#
The grbtune command-line tool provides a very simple way to invoke parameter tuning on a model (or a set of models). You specify a list of parameter=value arguments first, followed by the name of the file containing the model to be tuned. For example, you can issue the following command (in a Windows command window, or in a Linux/Mac terminal window):
> grbtune TuneTimeLimit=100 c:\gurobi1200\win64\examples\data\misc07
(substituting the appropriate path to a model, stored in an MPS or LP file). The tool will try to find parameter settings that reduce the runtime on the specified model. When the tuning run completes, it writes a set of .prm files in the current working directory that capture the best parameter settings that it found. It also writes the Gurobi log files for these runs (in a set of .log files).
You can also invoke the tuning tool through our programming language APIs. That will be discussed shortly.
If you specify multiple model files at the end of the command line, the tuning tool will try to find settings that minimize the total runtime for the listed models.
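For example, a command along the following lines would tune the two listed models together under a single tuning time limit (the file names and the time limit are placeholders):
> grbtune TuneTimeLimit=100 misc07.mps p0033.mps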
Running the Tuning Tool#
The first thing the tuning tool does is to perform a baseline run. The parameters for this run are determined by your choice of initial parameter values. If you set a parameter, it will take the chosen value throughout tuning. Thus, for example, if you set the Method parameter to 2, then the baseline run and all subsequent tuning runs will include this setting. In the example above, you’d do this by issuing the command:
> grbtune Method=2 TuneTimeLimit=100 c:\gurobi1200\win64\examples\data\misc07
For a MIP model, you will note that the tuning tool actually performs several baseline runs, and captures the mean runtime over all of these trials. In fact, the tool will perform multiple runs for each parameter set considered. This is done to limit the impact of random effects on the results, as discussed earlier. Use the TuneTrials parameter to adjust the number of trials performed.
Once the baseline run is complete, the time for that run becomes the time to beat. The tool then starts its search for improved parameter settings. Under the default value of the TuneOutput parameter, the tool prints output for each parameter set that it tries…
Testing candidate parameter set 82...
Method 2 (fixed)
BranchDir 1
DegenMoves 1
MIPFocus 2
Cuts 0
Solving MISC07 with random seed #1 ... runtime 0.41s
Solving MISC07 with random seed #2 ... runtime 0.43s
Summary candidate parameter set 82 (discarded)
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.41s 0.43s - - - -
Progress so far:
baseline: mean runtime 0.67s (parameter set 1, 0 non-defaults)
best: mean runtime 0.22s (parameter set 81, 5 non-defaults)
Total elapsed tuning time 92s (8s remaining, 1 running jobs)
This output indicates that the tool has tried 82 parameter sets so far. For the 82nd set, it changed the values of the BranchDir, DegenMoves, MIPFocus, and Cuts parameters. Note that the Method parameter was fixed to the value 2 on the command line, so this setting appears in every parameter set that the tool tries and is marked as fixed. The first trial, with random seed #1, solved the model in 0.41 seconds, while the second solve took 0.43 seconds. This parameter set is discarded after the second trial because it cannot improve on the best known set. The output displays a small summary table showing the results for each model and each trial, together with the average runtime, the standard deviation, and the maximum. In addition, the output reports the progress so far: the result of the baseline parameter set and the best parameter set found. The latter achieved a mean runtime of 0.22s and was found with parameter set 81, which has 5 non-default parameters. Note that if any trial hits a time limit, the corresponding parameter set is considered to be worse than any set that didn’t hit a time limit. Finally, the output shows the elapsed and remaining tuning time, as well as the number of running jobs.
Tuning normally proceeds until the elapsed time exceeds the tuning time limit. However, hitting CTRL-C will also stop the tool.
When the tuning tool finishes, it prints a summary that could look like the one below.
Tested 88 parameter sets in 99.45s
Total optimization run time for up to 1 concurrent runs: 99.34s
Baseline parameter set: mean runtime 0.83s
Method 2 (fixed)
# Name 0 1 2 Avg Std Dev Max
0 MISC07 1.05s 0.78s 0.65s 0.83s 0.17 1.05s
Improved parameter set 1 (mean runtime 0.23s):
Method 2 (fixed)
BranchDir 1
Heuristics 0
MIPFocus 2
Cuts 0
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.25s 0.23s 0.20s 0.23s 0.02 0.25s
Improved parameter set 2 (mean runtime 0.23s):
Method 2 (fixed)
BranchDir 1
RINS 0
Cuts 0
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.25s 0.21s 0.23s 0.23s 0.02 0.25s
Improved parameter set 3 (mean runtime 0.27s):
Method 2 (fixed)
RINS 0
Cuts 0
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.28s 0.26s 0.26s 0.27s 0.01 0.28s
Improved parameter set 4 (mean runtime 0.45s):
Method 2 (fixed)
RINS 0
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.43s 0.48s 0.45s 0.45s 0.02 0.48s
Wrote parameter files: tune0.prm through tune4.prm
Wrote log files: tune0.log through tune4.log
The summary shows the number of parameter sets it tried, and provides details on a few of the best parameter sets it found. It also shows the names of the .prm and .log files it writes. You can change the names of these files using the ResultFile parameter. If you set ResultFile=model.prm, for example, the tool would write model1.prm through model4.prm and model1.log through model4.log. For each displayed parameter set, the tuning tool prints the parameters used and a small summary table showing the results for each model and each trial, together with the average runtime, the standard deviation, and the maximum.
The number of sets that are retained by the tuning tool is controlled by the TuneResults parameter. The default behavior is to keep the sets that achieve the best trade-off between runtime and the number of changed parameters. In other words, we report the set that achieves the best result when changing one parameter, when changing two parameters, etc. We actually report a Pareto frontier, so for example we won’t report a result for three parameter changes if it is worse than the result for two parameter changes.
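For instance, if you only want the single best parameter set to be reported, you could set TuneResults to 1 (the file name and time limit below are placeholders):
> grbtune TuneResults=1 TuneTimeLimit=100 misc07.mps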
Other Tuning Parameters#
So far, we’ve only talked about using the tuning tool to minimize the time to find an optimal solution. For MIP models, you can also minimize the optimality gap after a specified time limit. You don’t have to take any special action to do this; you just set a time limit. Whenever a baseline run hits this limit, the tuning tool will automatically try to minimize the MIP gap. To give an example, the command
> grbtune TimeLimit=100 c:\gurobi1200\win64\examples\data\glass4
will look for a parameter set that minimizes the optimality gap achieved after 100s of runtime on model glass4. If the tool happens to find a parameter set that solves the model within the time limit, it will then try to find settings that minimize mean runtime.
Tune Criterion#
For models that don’t solve to optimality within the specified time limit, you can gain more control over the criterion used to choose a winning parameter set with the TuneCriterion parameter. This parameter allows you to tell the tuning tool to search for parameter settings that produce the best incumbent solution or the best lower bound, rather than always minimizing the MIP gap.
Output#
You can modify the TuneOutput parameter to produce more or less output. The default value is 2. A setting of 0 produces no output; a setting of 1 only produces output when an improvement is found; a setting of 3 produces a complete Gurobi log for each run performed.
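For example, to obtain a complete Gurobi log for every run performed during tuning, you could add TuneOutput=3 to the command line (the file name and time limit are placeholders):
> grbtune TuneOutput=3 TuneTimeLimit=100 misc07.mps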
MIP Start#
If you would like to use a MIP start with your tuning run, you can include the name of the start file immediately after the model name in the argument list. For example:
> grbtune misc07.mps misc07.mst
You can also use MIP starts when tuning over multiple models; any model that is immediately followed by a start file in the argument list will use the corresponding start. For example:
> grbtune misc07.mps misc07.mst p0033.mps p0548.mps p0548.mst
Multi-objective Models#
If you tune a multi-objective model, you can specify multiple sets of parameter settings via the command-line parameter MultiObjSettings:
> grbtune MultiObjSettings=obj0.prm,obj1.prm,obj2.prm mymultiobj.mps
For the first objective function solve, the first setting (i.e., all the parameters specified in obj0.prm) is applied; for the second objective, the second setting is applied, and so on. When tuning several multi-objective models at once, grbtune will apply the same settings to all models. For the corresponding objective function solve, these settings override the parameter values selected by the tuner for the whole multi-objective model.
Tuning API#
The tuning tool can be invoked from our C, C++, Java, .NET, and Python interfaces. The tool behaves slightly differently when invoked from these interfaces. Rather than writing the results to a set of files, upon completion the tool populates a TuneResultCount attribute, which gives a count of the number of improving parameter sets that were found and retained. The user program can then query the value of this attribute and use the GetTuneResult method to copy any of these parameter sets into a model (using C, C++, Java, .NET, or Python). Once loaded into the model, the parameter set can be used to perform a subsequent optimization, or the list of changed parameters can be written to a .prm file using the appropriate Write routine (from C, C++, Java, .NET, or Python).
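As an illustration, a minimal Python sketch of this workflow might look as follows (the model file name and the time limit are placeholders):
import gurobipy as gp

# Read the model to be tuned (file name is a placeholder)
model = gp.read("misc07.mps")

# Limit the time spent searching for improving parameter sets (illustrative value)
model.Params.TuneTimeLimit = 100

# Run the tuning tool
model.tune()

# If at least one improving parameter set was found, load the best one
# (stored first), optionally save it to a .prm file, and re-optimize
if model.TuneResultCount > 0:
    model.getTuneResult(0)
    model.write("tune.prm")
    model.optimize()

Each retained parameter set can be loaded in the same way by passing a different index (from 0 to TuneResultCount-1) to getTuneResult.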