Batch Optimization#

Batch optimization is a feature available with the Gurobi Cluster Manager. It allows a client program to build an optimization model, submit it as a batch request to a Compute Server cluster (through a Cluster Manager), and later check on the status of the request and retrieve the solution. Once a batch is submitted to the Cluster Manager, it is identified through a unique BatchID. The client program (or any other program) can use this ID to query the BatchStatus of the batch (submitted, completed, etc.). Once the batch has completed and a solution is available, the client can retrieve that solution as a JSON string.

This section explains the steps required to perform the various tasks listed above. We’ll use the batchmode.py Python example, which is included with the distribution, to illustrate these steps.

Setting Up a Batch Environment#

Recall that the first step in building an optimization model is to create a Gurobi environment. An environment provides a number of configuration options; among them is an option to indicate where the model should be solved. You can solve a model locally, on a Compute Server, or using a Gurobi Instant Cloud server. If you have a Cluster Manager installed, you also have the option of using batch optimization.

To use batch optimization, you should configure your environment as if you were using a Compute Server through a Cluster Manager. You'll need to set the CSManager parameter to point to your Cluster Manager, and provide a valid UserName and ServerPassword. The difference is that you will also need to set the CSBatchMode parameter to 1. This causes the client to build the model locally and only submit it to the server when the optimizeBatch method is called. This is in contrast to a standard Compute Server job, where the connection to the server is established immediately and the model is actually built on the server.

The following shows how you might set up your environment for batch optimization (in Python):

env = gp.Env(empty=True)
env.setParam('LogFile',        'batchmode.log')
env.setParam('CSManager',      'http://localhost:61080')
env.setParam('UserName',       'gurobi')
env.setParam('ServerPassword', 'pass')
env.setParam('CSBatchMode',    1)

Note that you can also use CSAPIAccessID and CSAPISecret (instead of UserName and ServerPassword) to connect to a Cluster Manager.
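
For example, an API-key-based configuration might look like the following sketch; the access ID and secret shown here are placeholders that you would obtain from your Cluster Manager:

env = gp.Env(empty=True)
env.setParam('LogFile',        'batchmode.log')
env.setParam('CSManager',      'http://localhost:61080')
env.setParam('CSAPIAccessID',  '<your-access-id>')   # placeholder value
env.setParam('CSAPISecret',    '<your-api-secret>')  # placeholder value
env.setParam('CSBatchMode',    1)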

Tagging Variables or Constraints#

Batch optimization separates the process of building a model from the process of retrieving and acting on its solution. For example, you can build your model on one machine, submit a batch request, and then use the resulting BatchID to retrieve the solution on a completely different machine.

Of course, disconnecting a model from its solution introduces a mapping problem: the process that retrieves the solution needs to know how to map the elements of the solution back to the corresponding elements of the model. This is done through tags. When a model is built, the user associates unique strings with the variables and constraints of interest in the model. Solution values are then associated with these strings. If the user doesn’t provide a tag for a model element, no solution information is stored or returned for that element.

You can tag variables (using the VTag attribute), linear constraints (using the CTag attribute), and quadratic constraints (using the QCTag attribute). We should point out that solutions to mixed-integer models don’t contain any constraint information, so constraint tags have no effect for such models.

For details on the information that is available in the solution in different situations, please refer to the JSON solution format section.

Here’s a simple example that tags the first several variables (those with index at most 10) in a model:

# Define tags for some variables in order to access their values later
for count, v in enumerate(model.getVars()):
    v.VTag = "Variable{}".format(count)
    if count >= 10:
        break
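
Linear constraints can be tagged in the same way through the CTag attribute. Here is a minimal sketch, assuming a continuous model (recall that constraint tags have no effect for mixed-integer models):

# Define tags for some linear constraints in order to access their
# solution information (e.g., dual values) later
for count, c in enumerate(model.getConstrs()):
    c.CTag = "Constraint{}".format(count)
    if count >= 10:
        break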

Submitting a Batch Optimization Request#

Once you have built your model and tagged the elements of interest, you are ready to submit your batch request. This is done by invoking the optimizeBatch method (Model.optimizeBatch in Python). This method returns a BatchID string, which is used for later queries. Here’s a simple example:

# Submit batch request
batchID = model.optimizeBatch()

Interacting with Batch Requests#

You can use a BatchID string to ask the Cluster Manager for more information about the corresponding batch. Specifically, you can query the BatchStatus for that batch, and if the batch is complete you can retrieve the computed solution as a JSON string.

Your first step in using a BatchID to gather more information is to create a Gurobi environment that enables you to connect to your Cluster Manager. This is done in the following line of our Python example:

with setupbatchenv().start() as env, gp.Batch(batchID, env) as batch:

The setupbatchenv function creates an environment with the CSManager, UserName, ServerPassword, and CSBatchMode parameters set to appropriate values.

With this environment and our BatchID, we can now create a Batch object (by calling the Batch constructor in the above code segment) that holds information about the batch.

That Batch object can be used to query the BatchStatus:

with setupbatchenv().start() as env, gp.Batch(batchID, env) as batch:

    starttime = time.time()
    while batch.BatchStatus == GRB.BATCH_SUBMITTED:
        # Abort this batch if it is taking too long
        curtime = time.time()
        if curtime - starttime > maxwaittime:
            batch.abort()
            break

        # Wait for two seconds
        time.sleep(2)

        # Update the resident attribute cache of the Batch object with the
        # latest values from the cluster manager.
        batch.update()

        # If the batch failed, we retry it
        if batch.BatchStatus == GRB.BATCH_FAILED:
            batch.retry()

It can also be used to perform various operations on the batch, such as aborting or retrying it.

Once a batch has been completed, you can query the solution and all related attributes for tagged elements in the model by retrieving the associated JSON solution string (or by saving it into a file):

print("JSON solution:")
# Get JSON solution as string, create dict from it
sol = json.loads(batch.getJSONSolution())
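
The complete example below also writes the full solution to a (compressed) file using the writeJSONSolution method:

# Write the full JSON solution string to a file
batch.writeJSONSolution('batch-sol.json.gz')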

By default, the Cluster Manager will keep the solution for the model and other information for a while (the exact retention policy is set by the Cluster Manager). You can ask the Cluster Manager to discard information for a batch by explicitly calling the discard method:

# Remove batch request from manager
batch.discard()

No further queries on that batch are possible after this has been done.

Interpreting the JSON Solution#

Once you have retrieved a JSON solution string, you can use a JSON parser to retrieve solution information for individual variables and constraints. Such a parser isn’t included in the Gurobi library; most programming languages provide one in their standard library or through readily available packages. The appropriate package in Python is (not surprisingly) called json. The following provides a simple example of how this library can be used to parse a JSON solution string and extract a few pieces of solution information:

# Get JSON solution as string, create dict from it
sol = json.loads(batch.getJSONSolution())

# Pretty printing the general solution information
print(json.dumps(sol["SolutionInfo"], indent=4))

Note that you may have to set the parameter JSONSolDetail to 1 to see all relevant solution data, like pool solution values. Consult the JSON solution format description for details.
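
Going one step further, you might iterate over the tagged variables in the parsed dict. The sketch below assumes the solution carries a 'Vars' array whose entries hold the variable tags and the solution value 'X', as described in the JSON solution format section:

# Print the value of each tagged variable (assumes a 'Vars' array whose
# entries carry the tag(s) and the solution value 'X')
for entry in sol.get("Vars", []):
    print(entry.get("VTag"), entry.get("X"))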

A Complete Example#

# This example reads a MIP model from a file, solves it in batch mode,
# and prints the JSON solution string.
#
# You will need a Compute Server license for this example to work.

import sys
import time
import json
import gurobipy as gp
from gurobipy import GRB


# Set up the environment for batch mode optimization.
#
# The function creates an empty environment, sets all necessary parameters,
# and returns the ready-to-be-started Env object to the caller. It is the
# caller's responsibility to dispose of this environment when it's no
# longer needed.
def setupbatchenv():

    env = gp.Env(empty=True)
    env.setParam('LogFile',        'batchmode.log')
    env.setParam('CSManager',      'http://localhost:61080')
    env.setParam('UserName',       'gurobi')
    env.setParam('ServerPassword', 'pass')
    env.setParam('CSBatchMode',    1)

    # No network communication has happened up to this point. This will happen
    # once the caller invokes the start() method of the returned Env object.

    return env


# Print batch job error information, if any
def printbatcherrorinfo(batch):

    if batch is None or batch.BatchErrorCode == 0:
        return

    print("Batch ID {}: Error code {} ({})".format(
        batch.BatchID, batch.BatchErrorCode, batch.BatchErrorMessage))


# Create a batch request for given problem file
def newbatchrequest(filename):

    # Start environment, create Model object from file
    #
    # By using the context handlers for env and model, it is ensured that
    # model.dispose() and env.dispose() are called automatically
    with setupbatchenv().start() as env, gp.read(filename, env=env) as model:
        # Set some parameters
        model.Params.MIPGap = 0.01
        model.Params.JSONSolDetail = 1

        # Define tags for some variables in order to access their values later
        for count, v in enumerate(model.getVars()):
            v.VTag = "Variable{}".format(count)
            if count >= 10:
                break

        # Submit batch request
        batchID = model.optimizeBatch()

    return batchID


# Wait for the final status of the batch.
# Initially the status of a batch is "submitted"; the status will change
# once the batch has been processed (by a compute server).
def waitforfinalstatus(batchID):
    # Wait no longer than one hour
    maxwaittime = 3600

    # Setup and start environment, create local Batch handle object
    with setupbatchenv().start() as env, gp.Batch(batchID, env) as batch:

        starttime = time.time()
        while batch.BatchStatus == GRB.BATCH_SUBMITTED:
            # Abort this batch if it is taking too long
            curtime = time.time()
            if curtime - starttime > maxwaittime:
                batch.abort()
                break

            # Wait for two seconds
            time.sleep(2)

            # Update the resident attribute cache of the Batch object with the
            # latest values from the cluster manager.
            batch.update()

            # If the batch failed, we retry it
            if batch.BatchStatus == GRB.BATCH_FAILED:
                batch.retry()

        # Print information about error status of the job that processed the batch
        printbatcherrorinfo(batch)


def printfinalreport(batchID):
    # Setup and start environment, create local Batch handle object
    with setupbatchenv().start() as env, gp.Batch(batchID, env) as batch:
        if batch.BatchStatus == GRB.BATCH_CREATED:
            print("Batch status is 'CREATED'")
        elif batch.BatchStatus == GRB.BATCH_SUBMITTED:
            print("Batch is 'SUBMITTED")
        elif batch.BatchStatus == GRB.BATCH_ABORTED:
            print("Batch is 'ABORTED'")
        elif batch.BatchStatus == GRB.BATCH_FAILED:
            print("Batch is 'FAILED'")
        elif batch.BatchStatus == GRB.BATCH_COMPLETED:
            print("Batch is 'COMPLETED'")
            print("JSON solution:")
            # Get JSON solution as string, create dict from it
            sol = json.loads(batch.getJSONSolution())

            # Pretty printing the general solution information
            print(json.dumps(sol["SolutionInfo"], indent=4))

            # Write the full JSON solution string to a file
            batch.writeJSONSolution('batch-sol.json.gz')
        else:
            # Should not happen
            print("Batch has unknown BatchStatus")

        printbatcherrorinfo(batch)


# Instruct the cluster manager to discard all data relating to this BatchID
def batchdiscard(batchID):
    # Setup and start environment, create local Batch handle object
    with setupbatchenv().start() as env, gp.Batch(batchID, env) as batch:
        # Remove batch request from manager
        batch.discard()


# Solve a given model using batch optimization
if __name__ == '__main__':

    # Ensure we have an input file
    if len(sys.argv) < 2:
        print("Usage: {} filename".format(sys.argv[0]))
        sys.exit(0)

    # Submit new batch request
    batchID = newbatchrequest(sys.argv[1])

    # Wait for final status
    waitforfinalstatus(batchID)

    # Report final status info
    printfinalreport(batchID)

    # Remove batch request from manager
    batchdiscard(batchID)

    print('Batch optimization OK')

Limitations#

It is currently not possible to optimize models with Multiple Objectives via batch optimization. The Concurrent Optimizer and the Parameter Tuning Tool are also not available in batch mode.