SigOpt Experiment Client Calls
sigopt.create_experiment(...)
Creates a new Experiment.
|Required|Parameter|Description|
|:--|:--|:--|
|Yes|`name`|A user-specified name for this experiment.|
|Yes|`parameters`|A list of Parameter objects.|
|No|`metadata`|An optional user-provided object. See Using Metadata for more information.|
|Yes|`metrics`|A list of Metric definitions to be optimized or stored for analysis. If the list has length one, the standard optimization problem is conducted. The list can have no more than 2 optimized entries and no more than 50 entries in total.|
|No|`num_solutions`|The number of (diverse) solutions SigOpt will search for. This feature is only available on select plans, and does not need to be set unless the desired number of solutions is greater than 1. A budget is required if the number of solutions is greater than 1. Categorical parameters are not allowed in multiple-solution experiments.|
|No|`budget`|The number of Runs you plan to create for this Experiment. Required when the length of `metrics` is greater than 1; optional for a single-metric experiment. Deviating from this value, especially by failing to reach it, may result in suboptimal performance for your experiment.|
|No|`parallel_bandwidth`|The number of simultaneous Runs you plan to maintain during this Experiment. The default value is 1, i.e., a sequential experiment. The maximum value depends on your plan. This field is optional, but setting it correctly may improve performance.|
|No|`type`|The type of this experiment. Experiments can be one of 3 types: "offline", "random", or "grid". "offline" experiments use SigOpt's optimizer, "random" executes random search, and "grid" executes grid search.|
Example for creating an Experiment:
```python
experiment = sigopt.create_experiment(
    name="Keras Model Optimization (Python)",
    type="offline",
    parameters=[
        dict(name="hidden_layer_size", type="int", bounds=dict(min=32, max=128)),
        dict(name="activation_fn", type="categorical", categorical_values=["relu", "tanh"]),
    ],
    metrics=[dict(name="holdout_accuracy", objective="maximize")],
    parallel_bandwidth=1,
    budget=30,
)
```
Once you’ve created an Experiment, you can loop through its Runs in two ways:
```python
for run in experiment.loop():
    with run:
        ...
```

```python
while not experiment.is_finished():
    with experiment.create_run() as run:
        ...
```
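As a sketch of a complete loop body, using the experiment defined above (the `evaluate_model` function is a hypothetical stand-in for your own training and evaluation code):

```python
def evaluate_model(hidden_layer_size, activation_fn):
    # Hypothetical placeholder: train a model with the suggested
    # hyperparameters and return its holdout accuracy.
    return 0.9

for run in experiment.loop():
    with run:
        # Suggested parameter values are available on run.params.
        accuracy = evaluate_model(
            hidden_layer_size=run.params.hidden_layer_size,
            activation_fn=run.params.activation_fn,
        )
        # Report the metric named when the Experiment was created.
        run.log_metric("holdout_accuracy", accuracy)
```

Each pass through the loop creates a Run, evaluates the suggested parameters, and reports the result back to SigOpt until the budget is consumed.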
sigopt.get_experiment(experiment_id)
Retrieves an existing Experiment.
|Required|Parameter|Description|
|:--|:--|:--|
|Yes|`experiment_id`|Returns the SigOpt Experiment object specified by the provided `experiment_id`.|
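For instance (the ID below is a placeholder; a real ID would come from the SigOpt dashboard or a previous `create_experiment` call):

```python
import sigopt

# Retrieve an existing Experiment by its ID (placeholder value shown).
experiment = sigopt.get_experiment("123456")
```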
create_run()
Creates a new Run in the Experiment. Returns a RunContext object to use for tracking Run attributes.
loop()
Starts an Experiment loop. Returns an iterator of RunContext objects, used for tracking the attributes of each Run in the Experiment. The iterator terminates when the Experiment has consumed its entire budget.
is_finished()
Checks whether the Experiment has consumed its entire budget.
refresh()
Refresh the Experiment attributes.
get_runs()
Returns an iterator of all the TrainingRuns for an Experiment. Called on an instance of an Experiment object.
get_best_runs()
Returns an iterator of the best TrainingRuns for an Experiment. Called on an instance of an Experiment object.
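A minimal sketch of inspecting the best Runs; the `id` and `assignments` fields referenced below are assumptions about the shape of the returned TrainingRun objects:

```python
# Iterate over the best Runs found so far and print their
# suggested parameter assignments (field names are assumed).
for run in experiment.get_best_runs():
    print(run.id, run.assignments)
```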
archive()
Archives the Experiment. Associated Runs are not archived and can still be found on the Project Runs page. Called on an instance of an Experiment object.