Simple Example
After your cluster has been set up (cluster setup is covered elsewhere in this documentation), you can submit a job or a batch using a programming language API, the command-line tools, or the Cluster Manager Web User Interface. This section provides a few short examples that use the command-line tools; more complete descriptions of the various interfaces and options come in later sections.
Log In to the Cluster
The first step in submitting a job to the cluster is to log in to the Cluster Manager with the grbcluster login command:
> grbcluster login --manager=http://localhost:61080 -u=gurobi
info : Using client license file '/Users/john/gurobi.lic'
Password for gurobi:
info : User gurobi connected to http://localhost:61080, session will expire on 2019-09...
This command indicates that you want to connect to the Cluster Manager running on port 61080 of machine localhost as the gurobi user.
The output from the command first shows that the client license file gurobi.lic, located in the home directory of the user, will be used to store the connection parameters. It then prompts you for the password for the specified user (in a secure manner). After contacting the Cluster Manager, the client retrieves a session token that will expire at the indicated date and time.
Using this approach to logging in removes the need to display the user password or save it in clear text, which improves security. The session token and all of the connection parameters are saved in the client license file, so they can be used by all of the command-line tools (gurobi_cl, grbtune, and grbcluster). When the session token expires, the commands will fail and you will need to log in again.
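If you connect from a program instead of relying on the stored token, the same connection information can be supplied when the Gurobi environment is created. The following is a minimal Python (gurobipy) sketch, not a complete recipe; the manager URL, user name, and password are placeholders, and in practice you would read credentials from a secure location rather than hard-coding them.

import gurobipy as gp

# Minimal sketch: pass the Cluster Manager connection parameters directly
# instead of relying on the token stored by 'grbcluster login'.
# All values below are placeholders.
env = gp.Env(empty=True)
env.setParam("CSManager", "http://localhost:61080")
env.setParam("UserName", "gurobi")
env.setParam("ServerPassword", "...")  # placeholder; do not hard-code real credentials
env.start()

Once the environment has started, models created with it run on the cluster just like jobs submitted from the command line.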
Submitting an Interactive Job
Once you are logged in, you can use gurobi_cl to submit a job:
> gurobi_cl ResultFile=solution.sol stein9.mps
Set parameter CSManager to value "http://server1:61080"
Set parameter LogFile to value "gurobi.log"
Compute Server job ID: 1e9c304c-a5f2-4573-affa-ab924d992f7e
Capacity available on 'server1:61000' - connecting...
Established HTTP unencrypted connection
Using client license file /Users/john/gurobi.lic
Gurobi Optimizer version 11.0.1 build v11.0.1rc0 (linux64)
Copyright (c) 2024, Gurobi Optimization, LLC
...
Gurobi Compute Server Worker version 11.0.1 build v11.0.1rc0 (linux64)
Thread count: 4 physical cores, 8 logical processors, using up to 8 threads
...
Optimal solution found (tolerance 1.00e-04)
Best objective 5.000000000000e+00, best bound 5.000000000000e+00, gap 0.0000%
Compute Server communication statistics:
Sent: 0.002 MBytes in 9 msgs and 0.01s (0.26 MB/s)
Received: 0.007 MBytes in 26 msgs and 0.09s (0.08 MB/s)
The initial log output indicates that a Compute Server job was created,
that the Compute Server cluster had capacity available to run that job,
and that an unencrypted HTTP connection was established with a server in
that cluster. The log concludes with statistics about the communication
performed between the client machine and the Compute Server. Note that the result file solution.sol is also retrieved.
This is an interactive optimization task because the connection with the job must be kept alive and the progress messages are displayed in real time. Also, stopping or killing the command terminates the job.
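The same interactive workflow is available from the programming language APIs. Here is a minimal Python (gurobipy) sketch, under the assumption that grbcluster login has already stored the connection parameters in the client license file so the default environment picks them up; the file names match the command-line example above.

import gurobipy as gp

# Minimal sketch of an interactive Compute Server job from the Python API.
# Assumes 'grbcluster login' already stored the connection parameters.
model = gp.read("stein9.mps")   # read the model; the job is created on the Compute Server
model.optimize()                # optimize remotely; progress messages stream back
model.write("solution.sol")     # write the result file on the client

As with gurobi_cl, interrupting this script terminates the job on the server.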
Submitting a Non-Interactive Job
You can use grbcluster to create a batch (i.e., a non-interactive job):
> grbcluster batch solve ResultFile=solution.sol misc07.mps --download
info : Batch 5d0ea600-5068-4a0b-bee0-efa26c18f35b created
info : Uploading misc07.mps...
info : Batch 5d0ea600-5068-4a0b-bee0-efa26c18f35b submitted with job a9700b72...
info : Batch 5d0ea600-5068-4a0b-bee0-efa26c18f35b status is COMPLETED
info : Results will be stored in directory 5d0ea600-5068-4a0b-bee0-efa26c18f35b
info : Downloading solution.sol...
info : Downloading gurobi.log...
info : Discarding batch data
This command performs a number of steps. First, a batch specification is created and the batch ID is displayed. Then, the model file is uploaded and a batch job is submitted. Once the job reaches the front of the Compute Server queue, it is processed. At that point, the batch is marked as completed, and the result file and the log file are automatically downloaded to the client. By default, the directory where the result files are stored is named after the batch ID. Finally, the batch data is discarded, which allows the Cluster Manager to delete the associated data from its database.
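A batch can also be created from the Python API. The sketch below is one way to do it, assuming the connection parameters are already stored in the client license file; CSBatchMode tells the environment to build the model locally and submit it as a batch, and optimizeBatch returns the batch ID used to retrieve the results later.

import gurobipy as gp

# Minimal sketch: submit a batch from the Python API instead of grbcluster.
# Assumes 'grbcluster login' already stored the connection parameters.
env = gp.Env(empty=True)
env.setParam("CSBatchMode", 1)     # build the model locally, then submit it as a batch
env.start()

model = gp.read("misc07.mps", env=env)
batch_id = model.optimizeBatch()   # uploads the model and returns the batch ID
print("Submitted batch", batch_id)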
This is a non-interactive optimization task because it happens in two distinct phases. The first phase uploads the model to the server and creates a batch. The second waits for the batch to complete and retrieves the result. In general, stopping the client has no effect on a batch once it has been submitted to the Cluster Manager. Our example waits for the completion of the batch, but that's only because we used the --download flag. You could check on the status of the batch and download the results whenever (and wherever) they are needed, since they are stored in the Cluster Manager until they are discarded.
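To illustrate that last point, here is a minimal Python (gurobipy) sketch of checking on a batch and downloading its results later, from any client that can reach the same Cluster Manager. The batch ID is the placeholder value from the example above, and the output file name is arbitrary.

import time
import gurobipy as gp
from gurobipy import GRB

# Minimal sketch: reconnect later, wait for a batch to complete, and
# download its solution. BATCH_ID is a placeholder for the ID reported
# when the batch was submitted.
BATCH_ID = "5d0ea600-5068-4a0b-bee0-efa26c18f35b"

with gp.Env() as env, gp.Batch(BATCH_ID, env) as batch:
    while batch.BatchStatus == GRB.BATCH_SUBMITTED:
        time.sleep(5)
        batch.update()                      # refresh the status from the Cluster Manager
    if batch.BatchStatus == GRB.BATCH_COMPLETED:
        batch.writeJSONSolution("batch-sol.json.gz")   # download the solution
    batch.discard()                         # allow the Cluster Manager to delete the batch data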