# Define Parameter Boundaries

Parameters are a crucial part of every SigOpt experiment, defining the domain to be searched. A parameter is one of three types: integer, double, or categorical. While valid choices for categorical parameters are defined by a set of possible inputs, the range of possible values for double and integer parameters is defined by lower and upper bounds.

For example, with the parameters below we will always provide suggestions with assignments `x` and `y` between 0 and 1.

```json
{
  "parameters": [
    {
      "bounds": {
        "max": 1.0,
        "min": 0.0
      },
      "name": "x",
      "type": "double"
    },
    {
      "bounds": {
        "max": 1.0,
        "min": 0.0
      },
      "name": "y",
      "type": "double"
    }
  ]
}
```
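For comparison, an integer parameter is defined by integer-valued bounds, and a categorical parameter by its set of valid values. The snippet below is a sketch following the same schema; the parameter names are invented for illustration, and the exact field names (e.g. `categorical_values`) should be checked against the API reference.

```json
{
  "parameters": [
    {
      "bounds": {
        "max": 100,
        "min": 1
      },
      "name": "num_trees",
      "type": "int"
    },
    {
      "categorical_values": [
        {"name": "gini"},
        {"name": "entropy"}
      ],
      "name": "split_criterion",
      "type": "categorical"
    }
  ]
}
```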

Defining a meaningful domain is an important first step towards a successful application. Sometimes, our users have additional knowledge about how parameters relate to each other and to the optimization problem. Leveraging that knowledge can improve performance and produce better results.

Therefore, in addition to parameter types, we provide several features and recommendations that can help to construct a domain for a specific problem.

### Constraints

Sometimes, a problem requires non-rectangular bounds, e.g., the sum of some double parameters must exceed a specific value. Linear constraints can be used to restrict the parameters to a valid region, and SigOpt will never provide suggestions that violate these constraints.
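For instance, requiring that `x + y` be at least 1 for the parameters defined above might be expressed along the following lines. This is a sketch of the relevant portion of an experiment definition; consult the API reference for the exact `linear_constraints` schema.

```json
{
  "linear_constraints": [
    {
      "type": "greater_than",
      "threshold": 1.0,
      "terms": [
        {"name": "x", "weight": 1},
        {"name": "y", "weight": 1}
      ]
    }
  ]
}
```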

### Failure Regions

If your bounds cannot be specified with linear constraints, or the constraints are not known a priori, e.g., some assignments will always run out of memory or fail to produce a robust solution, your application might benefit from using failed observations. Instead of returning an artificially bad value or manually refining the domain by trial and error, observations can be marked as failures, which allows us to *learn* custom non-rectangular experiment domains.
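For example, if evaluating a suggestion runs out of memory, the corresponding observation can be reported with a failed flag instead of a value. This is a sketch of an observation body, with a placeholder suggestion ID; check the observation schema in the API reference for the exact fields.

```json
{
  "suggestion": "SUGGESTION_ID",
  "failed": true
}
```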

### Logarithmic Parameter Transformations

Some parameters are more naturally described on a logarithmic scale, for example regularization penalties and the learning rate for most machine learning models. A parameter transformation can be used to indicate that a parameter is best searched on the log scale.
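A log-transformed double parameter might be defined as below. This is a sketch assuming a `transformation` field on the parameter; note that the lower bound must be strictly positive for a log transformation to be well-defined.

```json
{
  "bounds": {
    "max": 1.0,
    "min": 1e-8
  },
  "name": "learning_rate",
  "transformation": "log",
  "type": "double"
}
```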

### Further Notes

There is no one-size-fits-all solution here, and our optimization algorithm is designed to adapt to how different parameters behave regardless of transformations. Nevertheless, the way you define your domain will influence the outcome and ultimately you are the best judge for these types of decisions.