Parameter Tuning Tool#
The Gurobi Optimizer provides a wide variety of parameters that allow you to control the operation of the optimization process. The level of control varies from extremely coarse-grained (e.g., the Method parameter, which allows you to choose the algorithm used to solve continuous models) to very fine-grained (e.g., the GomoryPasses parameter, which allows you to adjust the number of performed Gomory cut passes). While these parameters provide a tremendous amount of user control, the immense space of possible options can present a significant challenge when you are searching for parameter settings that improve performance on a particular model. The purpose of the Gurobi tuning tool is to automate this search.
Operation#
The Gurobi tuning tool performs multiple solves of your model, choosing different parameter settings for each solve. We recommend setting a time limit (parameter TimeLimit) for a single solve of your model to guide the tuning tool. The tuner then focuses on finding a setting that improves the runtime within that limit or, if the model cannot be solved in time, one that achieves the best result (controlled via TuneCriterion) within the time limit. Additionally, you should set a time limit for the whole tuning run (TuneTimeLimit). The longer you let it run, the more likely it is to find a significant improvement. If you are using a cluster of Gurobi Compute Servers, you can harness the power of multiple machines to perform distributed parallel tuning in order to speed up the search for effective parameter settings.
The main goal is to find settings that minimize the runtime required to find a proven optimal solution. However, a secondary criterion is considered for MIP models that do not solve to optimality within the specified time limit. With the default setting, the MIP gap is minimized. The secondary tuning criterion can be changed via the parameter TuneCriterion.
There are two options for running the tuning tool. The more common usage is via grbtune, the command-line tool. This option also provides a variety of tuning features. As a second option, the tuning tool can also be invoked from our C, C++, Java, .NET, and Python interfaces. While the second option provides fewer features, both approaches share the same underlying tuning algorithm. Their usage is discussed in more detail in the Usage section.
Limitations#
Before we discuss the actual operation of the tuning tool, let us first provide a few caveats about the results. While parameter settings can have a big performance effect for many models, they aren’t going to solve every performance issue. One reason is that there are many models for which even the best possible choice of parameter settings won’t produce an acceptable result. Some models are simply too large and/or complex to solve, while others may have numerical issues that parameter changes can’t fix.
Another limitation of automated tuning is that performance on a model can experience significant variations due to random effects (particularly for MIP models). This is the nature of search. The Gurobi algorithms often have to choose among multiple equally appealing alternatives. Seemingly innocuous changes to the model (such as changing the order of the constraints or variables) or subtle changes to the algorithm (such as modifying the random number seed) can lead to different choices. Oftentimes, breaking a single tie in a different way can lead to an entirely different search. We’ve seen cases where subtle changes in the search produce 100X performance swings. While the tuning tool tries to limit the impact of these effects, the final result will typically still be heavily influenced by such issues.
The bottom line is that automated performance tuning is meant to give suggestions for parameters that could produce consistent, reliable improvements on your models. It is not meant to be a replacement for efficient modeling or careful performance testing.
Usage#
In this section, we will discuss the general usage and the available features of the tuning tool. As mentioned, the command-line tuning tool provides more features than the API usage. Whenever available, we provide guidance for both options. If only the command-line option is discussed, the specific feature is not available in the APIs.
We strongly recommend setting the parameter TimeLimit to provide a time limit for a single solve of the model and the parameter TuneTimeLimit to limit the whole tuning run. If one or both parameters are not set, Gurobi will choose values for these limits. These defaults are not educated guesses for your model but a safety measure to prevent the tuning run from continuing indefinitely.
For the command-line tool grbtune, the general invocation is to specify a list of parameter=value arguments first, followed by the name of the file containing the model to be tuned.
For example, you can issue the following command (in a Windows command window, or in a Linux/Mac terminal window):
> grbtune TuneTimeLimit=1800 TimeLimit=60 data/glass4.mps
(substituting the appropriate path to a model stored in an MPS or LP file). The tool will try to find parameter settings that reduce the runtime on the specified model. When the tuning run completes, it writes a set of .prm files in the current working directory that capture the best parameter settings it found. It also writes the Gurobi log files for these runs (in a set of .log files).
The tuning tool can be invoked from our C, C++, Java, .NET, and Python interfaces. The tool behaves slightly differently when invoked from these interfaces. Rather than writing the results to a set of files, upon completion the tool populates a TuneResultCount attribute, which gives a count of the number of improving parameter sets that were found and retained. The user program can query the value of this attribute and then use the GetTuneResult method to copy any of these parameter sets into a model (using C, C++, Java, .NET, or Python). Once loaded into the model, the parameter set can be used to perform a subsequent optimization, or the list of changed parameters can be written to a .prm file using the appropriate Write routine (from C, C++, Java, .NET, or Python).
The tune example shows how to run the tuner for the Gurobi APIs C, C++, Java, .NET, and Python and how to save the best tuning result in a parameter file.
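For Python users, a minimal sketch of this workflow could look as follows (the model path, time limits, and output file name are illustrative assumptions, not fixed conventions):

import gurobipy as gp

# Read the model to be tuned (path is an illustrative assumption).
model = gp.read("data/glass4.mps")

# Limit a single solve and the whole tuning run.
model.setParam("TimeLimit", 60)
model.setParam("TuneTimeLimit", 1800)

# Run the tuner.
model.tune()

# If improving parameter sets were found, load the best one (index 0)
# into the model and write the changed parameters to a .prm file.
if model.TuneResultCount > 0:
    model.getTuneResult(0)
    model.write("tune.prm")

After getTuneResult has loaded a parameter set, a subsequent optimize call on the same model uses the tuned parameters.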
The number of sets that are retained by the tuning tool is controlled by the TuneResults parameter. The default behavior is to keep the sets that achieve the best trade-off between runtime and the number of changed parameters. In other words, we report the set that achieves the best result when changing one parameter, when changing two parameters, etc. We report a Pareto frontier, so for example, we won’t report a result for three parameter changes if it is worse than the result for two parameter changes.
In the following, we discuss different options for operating the tuning tool:
Control the general operation of the whole tuning process via tuning parameters.
Fix parameters to have the same value in all optimization runs.
Define one or multiple parameter sets the tuner shall start with.
Tuning-Control Parameters#
Review the tuning-related parameters for a complete list of parameters that control the general tuning process. Here, we want to emphasize only some of them; others are discussed below with the respective features. Apart from TuneTimeLimit, all other tuning parameters can be left at their default values if you have no strong preference for specific values.
TuneTimeLimit: As mentioned, this parameter controls the time spent searching for improving parameter sets.
TuneCriterion: Whenever a MIP model cannot be solved to optimality within the specified time limit, a second criterion is considered that can be improved instead. This parameter controls the secondary tuning criterion.
TuneTrials: For a MIP model, you will note that the tuning tool performs multiple runs for each parameter set. This is done to limit the impact of random effects on the results. Use this parameter to adjust the number of trials performed.
TuneMetric: Controls how the results of the individual trials are aggregated into one measure. Possible metrics are the average of all individual results or the maximum value. If multiple models are tuned, the same metric also aggregates the results across models.
TuneTargetMIPGap: If this parameter is set to a value greater than 0, the tuner stops when it finds a setting satisfying this MIP gap. (This only works if the tuning criterion is also MIP gap.)
TuneTargetTime: As soon as the tuner finds a parameter setting such that the model can be solved in less than this target value, the tuning is stopped.
TuneParams: The usage of this parameter is discussed in Tuned Parameters below.
TuneBaseSettings: The usage of this parameter is discussed in Baseline Setting below.
TuneOutput: The usage of this parameter is discussed in Tuning Output below.
Let us consider the example of changing the tuning criterion to the objective value of the best feasible solution:
> grbtune TuneTimeLimit=1800 TuneCriterion=2 TimeLimit=60 data/glass4.mps
Note that the order in which the parameters are set is irrelevant.
Have another look at the tune example. The TuneCriterion parameter can be set in the same way as the TuneTimeLimit parameter.
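As a hedged Python sketch, mirroring the command above (the model path is an illustrative assumption):

import gurobipy as gp

model = gp.read("data/glass4.mps")
model.setParam("TimeLimit", 60)
model.setParam("TuneTimeLimit", 1800)
# Secondary criterion 2: objective value of the best feasible solution.
model.setParam("TuneCriterion", 2)
model.tune()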
Fix Parameters#
Setting any algorithmic parameters, e.g., simplex, barrier, presolve, or MIP parameters, implies that these parameters are fixed for all trials and will not be changed by the tuner. In particular, setting a parameter explicitly to its default value means that the tuner will not change this parameter.
For example, fixing the Method parameter to 2 and the Presolve parameter to its default value for all runs can be done with the following command:
> grbtune TuneTimeLimit=1800 TimeLimit=60 Method=2 Presolve=-1 data/glass4.mps
Here again, the order of the parameters is not relevant.
In the APIs, parameters can be fixed by setting them on the model object, similar to what is done for TimeLimit, TuneTimeLimit, or TuneResults in the tune example. The order is not relevant as long as all parameters are set before the tune call.
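A hedged Python sketch of fixing Method and Presolve before tuning, mirroring the command above (the model path is an illustrative assumption):

import gurobipy as gp

model = gp.read("data/glass4.mps")
model.setParam("TimeLimit", 60)
model.setParam("TuneTimeLimit", 1800)
# Fixed for all trials; the tuner will not change these parameters.
model.setParam("Method", 2)
model.setParam("Presolve", -1)  # explicitly fixed at its default value
model.tune()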
Tuned Parameters#
The tuner considers a certain subset of all Gurobi parameters. This subset is chosen to include the relevant algorithmic parameters that might affect performance. The considered parameters depend on the problem class (LP, MIP, MIQCP) because, e.g., MIP parameters do not influence LP solves. If you need more control over the set of parameters that are tested, you can define your own custom set of parameters the tuner should test. This custom set is then provided as a json file to the tuner via the parameter TuneParams. The tuner will only test parameters that are defined in this json file. It is also possible to restrict parameters to certain values only. This feature is only available for command-line tuning. An example call with a custom parameter set param.json is as follows:
> grbtune TuneTimeLimit=1800 TimeLimit=60 TuneParams=param.json data/glass4.mps
Have a look at the documentation of TuneParams for an example of the file param.json.
Baseline Setting#
The first thing the tuning tool does is perform a baseline run. The baseline run includes all fixed parameters and uses default values for all other parameters. It provides the initial values for runtime and, if available, objective value, lower bound, and MIP gap. These initial values become the ones to beat. If you know a good parameter setting, you can provide it for the baseline run. It is also possible to define several parameter sets the tuner should start with; the first setting is considered the baseline setting. Providing a baseline setting (or a set of parameter sets to start with) can be done via the parameter TuneBaseSettings. Again, this is only available for command-line tuning.
For example, with the following call
> grbtune TuneTimeLimit=1800 TimeLimit=60 Method=2 TuneBaseSettings=base.prm data/glass4.mps
a tuning run is started where Method is set to 2 in all runs. In the first run (baseline), the following parameters are additionally set, i.e., the content of base.prm is:
# Parameter settings
Cuts 2
Heuristics 0.5
The fixed parameters (here Method) can be included in the prm file, but this is not required; they are respected either way.
MIP Starts#
Providing a MIP start for all runs in the tuning is possible for command-line and API tuning.
If you would like to use a MIP start with your tuning run, you can include the name of the start file immediately after the model name in the argument list. For example:
> grbtune TuneTimeLimit=3600 TimeLimit=100 glass4.mps glass4.mst
Instead of MST files, you can also use SOL or ATTR files to provide a MIP start. Additionally, with an ATTR file, other attribute values, e.g., variable hints and branching priorities, can be provided, which are then considered by the tuner.
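In the APIs, one way to supply the start is to load the start file onto the model before calling tune. A hedged Python sketch (file names mirror the command above; using Model.read to attach the start values is an assumption about the most convenient route, not the only one):

import gurobipy as gp

model = gp.read("glass4.mps")
model.read("glass4.mst")  # load the MIP start (MST, SOL, or ATTR file)
model.setParam("TimeLimit", 100)
model.setParam("TuneTimeLimit", 3600)
model.tune()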
Multiple Models#
For command-line tuning, it is possible to tune multiple models in one tuning run. You can simply specify multiple model files. The tuning tool will try to find settings that improve the running time (or, if necessary, the secondary tuning criterion, TuneCriterion) for all models with respect to the defined metric (TuneMetric).
You can also use MIP starts when tuning over multiple models; any model that is immediately followed by a start file in the argument list will use the corresponding start attributes. An example call is:
> grbtune TuneTimeLimit=18000 TimeLimit=100 misc07.mps misc07.mst p0033.mps p0548.mps p0548.mst
Note that the TimeLimit of 100 applies to each of the models. If you fix other algorithmic parameters, e.g., Method to 2, they are likewise fixed for all model files.
Log Output#
You can modify the TuneOutput parameter to produce more or less output. The default value is 2. A setting of 0 produces no output; a setting of 1 only produces output when an improvement is found; a setting of 3 produces a complete Gurobi log for each run performed. In the following, we discuss the default output level.
Assume we have started the tuner as follows:
> grbtune TimeLimit=10 TuneTimeLimit=120 Method=2 misc07.mps
When the baseline parameter setting (the first parameter set) is run, the output starts by showing the time limit for each solve, followed by some information for the baseline run:
Solving model using baseline parameter set with TimeLimit=10s
-------------------------------------------------------------------------------
Testing candidate parameter set 1...
Method 2 (fixed)
Solving MISC07 with random seed #1 ... MIP gap 0.01%
Solving MISC07 with random seed #2 ... MIP gap 0.01%
Solving MISC07 with random seed #3 ... MIP gap 0.01%
Summary candidate parameter set 1
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.97s 0.80s 1.39s 1.05s 0.25 1.39s
-------------------------------------------------------------------------------
Begin tuning (baseline mean runtime 1.05s)...
-------------------------------------------------------------------------------
Here, the Method parameter is included as a fixed parameter in the first solve. The model was solved with three different seeds (the number of trials). For all seed values, the model was solved to optimality (default gap of 0.01%). The output also displays a small summary table showing results for each model (in the example, we have only one) and each trial, together with the average runtime, the standard deviation, and the maximum runtime. If it was not possible to solve the model to proven optimality, the table shows the final MIP gaps or the values of the chosen tuning criterion.
In our example, the mean runtime for the baseline setting is 1.05 seconds, so this is the time to beat. The tuner might discard a newly considered parameter setting before finishing the computation for all seed values if it becomes apparent that it cannot improve on the current best-known set (which is the baseline setting at the beginning). In our example, this happens for parameter set 13. The first two seeds were already too slow, so a third seed was not computed:
-------------------------------------------------------------------------------
Testing candidate parameter set 13...
Method 2 (fixed)
Presolve 2
Solving MISC07 with random seed #1 ... runtime 2.00s
Solving MISC07 with random seed #2 ... runtime 1.23s
Summary candidate parameter set 13 (discarded)
# Name 0 1 2 Avg Std Dev Max
0 MISC07 2.00s 1.23s - - - -
Progress so far:
baseline: mean runtime 1.05s (parameter set 1, 0 non-defaults)
best: mean runtime 1.05s (parameter set 1, 0 non-defaults)
Total elapsed tuning time 54s (66s remaining, 1 running jobs)
-------------------------------------------------------------------------------
At the end of each tested parameter set, you can see a summary of the baseline and the best setting so far, as well as the total elapsed time of the tuning tool and the remaining time. In our example, no improvement over the baseline setting had been found up to and including parameter set 13.
Whenever an improving setting is found, it is noted, and the summary is updated:
-------------------------------------------------------------------------------
Testing candidate parameter set 14...
Method 2 (fixed)
CutPasses 1
Solving MISC07 with random seed #1 ... runtime 1.01s
Solving MISC07 with random seed #2 ... runtime 1.02s
Solving MISC07 with random seed #3 ... runtime 0.90s
Summary candidate parameter set 14
# Name 0 1 2 Avg Std Dev Max
0 MISC07 1.01s 1.02s 0.90s 0.98s 0.06 1.02s
Improvement found:
baseline: mean runtime 1.05s (parameter set 1, 0 non-defaults)
improved: mean runtime 0.98s (parameter set 14, 1 non-defaults)
Total elapsed tuning time 57s (63s remaining, 1 running jobs)
-------------------------------------------------------------------------------
When the tuning tool finishes, it prints a summary, which could look like the one below.
Tune time limit reached
-------------------------------------------------------------------------------
Tested 36 parameter sets in 118.83s
Total optimization run time for up to 1 concurrent runs: 118.43s
Baseline parameter set: mean runtime 1.05s
Method 2 (fixed)
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.97s 0.80s 1.39s 1.05s 0.25 1.39s
Improved parameter set 1 (mean runtime 0.76s):
Method 2 (fixed)
RINS 0
CutPasses 1
# Name 0 1 2 Avg Std Dev Max
0 MISC07 0.70s 0.83s 0.76s 0.76s 0.06 0.83s
Improved parameter set 2 (mean runtime 0.91s):
Method 2 (fixed)
RINS 0
# Name 0 1 2 Avg Std Dev Max
0 MISC07 1.00s 0.85s 0.87s 0.91s 0.07 1.00s
Wrote parameter files: tune0.prm through tune2.prm
Wrote log files: tune0.log through tune2.log
The summary shows the number of parameter sets tried and provides details on a few of the best parameter sets it found. It also shows the names of the .prm and .log files it writes. You can change the names of these files using the ResultFile parameter. If you set ResultFile=model, for example, the tool would write model0.prm through model2.prm and model0.log through model2.log instead. For each displayed parameter set, the tuning tool prints the parameters used and a small summary table showing results for each model and each trial, together with the average runtime, the standard deviation, and the maximum runtime.
You can also get a summary by pressing CTRL-C during tuning. This stops the tool gracefully.