Welcome to the developer documentation for SigOpt. If you have a question you can’t answer, feel free to contact us!

SigOpt Experiment Client Calls

sigopt.create_experiment(...)

Creates a new Experiment.

name (string, required): A user-specified name for this experiment.
parameters (list of parameters, required): A list of Parameter objects.
conditionals (list of conditionals, optional): See conditionals.
linear_constraints (list of parameter constraints, optional): See constraints.
metadata (dictionary of {string: value}, optional): Optional user-provided object. See Using Metadata for more information.
metrics (list of metrics, required): A list of Metric definitions to be optimized or stored for analysis. If the list has length one, the standard optimization problem is conducted. The list can have no more than 2 optimized entries and no more than 50 entries in total.
num_solutions (int, optional): The number of (diverse) solutions SigOpt will search for. This feature is only available on select plans, and does not need to be set unless the desired number of solutions is greater than 1. A budget is required if the number of solutions is greater than 1. Categorical parameters are not allowed in multiple-solution experiments.
budget (int, optional): The number of Runs you plan to create for this Experiment. Required when the length of metrics is greater than 1, and optional for a single-metric experiment. Deviating from this value, especially by failing to reach it, may result in suboptimal performance for your experiment.
parallel_bandwidth (int, optional): The number of simultaneous Runs you plan to maintain during this experiment. The default value is 1, i.e., a sequential experiment. The maximum value depends on your plan. Optional, but setting it correctly may improve performance.
type (string, optional): The type of this experiment, one of "offline", "random", or "grid". "offline" experiments use SigOpt's optimizer, "random" executes random search, and "grid" executes grid search.

Example for creating an Experiment:

experiment = sigopt.create_experiment(
  name="Keras Model Optimization (Python)",
  parameters=[
    dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
    dict(name="activation_fn", type="categorical", categorical_values=["relu", "tanh"]),
  ],
  metrics=[dict(name="holdout_accuracy", objective="maximize")],
)
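The optional fields from the parameter table are plain dictionaries and lists as well. Here is a hedged sketch of an experiment definition using them; the exact linear_constraints schema shown (weighted terms compared against a threshold) is an assumption, so consult the constraints documentation for the authoritative shape:

```python
# Hypothetical experiment definition illustrating the optional fields above.
# Field values are illustrative assumptions, not defaults.
experiment_definition = dict(
    name="constrained-example",
    parameters=[
        dict(name="alpha", type="double", bounds=dict(min=0.0, max=1.0)),
        dict(name="beta", type="double", bounds=dict(min=0.0, max=1.0)),
    ],
    # Assumed constraint shape: alpha + beta <= 1.0.
    linear_constraints=[
        dict(
            type="less_than",
            threshold=1.0,
            terms=[dict(name="alpha", weight=1.0), dict(name="beta", weight=1.0)],
        ),
    ],
    metadata=dict(owner="example-user"),
    metrics=[dict(name="loss", objective="minimize")],
    budget=20,
)
```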

Once you’ve created an Experiment, you can loop through it in two ways:

for run in experiment.loop():
  with run:
    # train your model and report metric values here
    ...

or, equivalently:

while not experiment.is_finished():
  with experiment.create_run() as run:
    # train your model and report metric values here
    ...
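To make the contract between the two styles concrete, here is a toy stand-in (not the real sigopt client) that mimics the documented behavior: loop() yields run contexts until the budget is consumed, and is_finished() reports budget exhaustion:

```python
# Toy stand-in for illustration only; NOT the real sigopt client.
class FakeRun:
    # Minimal context manager standing in for a RunContext.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        return False


class FakeExperiment:
    def __init__(self, budget):
        self.budget = budget
        self.runs_created = 0

    def is_finished(self):
        # True once the budget is consumed, per the documented contract.
        return self.runs_created >= self.budget

    def create_run(self):
        self.runs_created += 1
        return FakeRun()

    def loop(self):
        # Yields run contexts until the budget is exhausted.
        while not self.is_finished():
            yield self.create_run()


# Style 1: iterate the loop.
experiment = FakeExperiment(budget=5)
completed = 0
for run in experiment.loop():
    with run:
        completed += 1  # train and report metrics here

# Style 2: poll is_finished() and create runs explicitly.
experiment2 = FakeExperiment(budget=3)
count = 0
while not experiment2.is_finished():
    with experiment2.create_run() as run:
        count += 1  # train and report metrics here
```

Both styles perform the same number of runs; loop() simply packages the is_finished()/create_run() pattern into an iterator.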

sigopt.get_experiment(experiment_id)

Retrieves an existing Experiment.

experiment_id (string, required): The ID of the Experiment to retrieve. Returns the SigOpt Experiment object with that ID.

create_run()

Creates a new Run in the Experiment. Returns a RunContext object to use for tracking Run attributes.

loop()

Starts an Experiment loop. Returns an iterator of RunContext objects, used for tracking attributes of each Run in the experiment. The iterator terminates when the Experiment has consumed its entire budget.

is_finished()

Checks whether the Experiment has consumed its entire budget.

refresh()

Refreshes the Experiment attributes.

get_runs()

Returns an iterator of all the TrainingRuns for an Experiment. Called on an Experiment instance.

get_best_runs()

Returns an iterator of the best TrainingRuns for an Experiment. Called on an Experiment instance.
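To illustrate the iterator contract of get_runs() and get_best_runs(), here is another toy stand-in (not the real sigopt client); the run representation as (name, metric value) pairs and the "best by single maximized metric" selection are assumptions for the sketch:

```python
# Toy stand-in for illustration only; NOT the real sigopt client.
class ToyExperiment:
    def __init__(self, runs):
        # Each run is sketched as a (name, metric_value) pair.
        self._runs = runs

    def get_runs(self):
        # Iterator over all runs, per the documented behavior.
        return iter(self._runs)

    def get_best_runs(self):
        # Iterator over the best run(s); a single maximized metric is
        # assumed for this sketch.
        best_value = max(value for _, value in self._runs)
        return iter([(n, v) for n, v in self._runs if v == best_value])


exp = ToyExperiment([("run-1", 0.81), ("run-2", 0.93), ("run-3", 0.88)])
all_runs = list(exp.get_runs())
best_runs = list(exp.get_best_runs())
```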

archive()

Archives the Experiment. Associated Runs are not archived and can still be found on the Project Runs page. Called on an Experiment instance.