Hyperopt

Hyperopt is a popular open-source hyperparameter tuning library. Hyperopt’s job is to optimize a scalar-valued objective function over a set of input parameters to that function. When you use Hyperopt to tune hyperparameters for your machine learning models, you define an objective function that takes the hyperparameters of interest as input and returns a training or validation loss. Inside the objective function, you load the training data, train your model with the hyperparameters it receives, and save model checkpoints periodically, just as in a normal training run. Hyperopt offers two tuning algorithms: Random Search and the Bayesian method Tree of Parzen Estimators (TPE), both of which are more compute-efficient than a brute-force approach such as grid search.
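
The following is a minimal sketch of this workflow. The `fmin`, `tpe`, `hp`, and `Trials` calls are the standard Hyperopt API; the `train_and_evaluate` helper and the two hyperparameters are hypothetical placeholders for your own training code.

```python
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

def objective(params):
    # Load data, train a model with the hyperparameters received from Hyperopt,
    # and return the validation loss to minimize.
    # train_and_evaluate is a hypothetical helper standing in for your training code.
    val_loss = train_and_evaluate(
        lr=params["lr"],
        n_estimators=int(params["n_estimators"]),
    )
    return {"loss": val_loss, "status": STATUS_OK}

# Search space: a learning rate on a log scale and a quantized integer parameter.
search_space = {
    "lr": hp.loguniform("lr", -7, 0),
    "n_estimators": hp.quniform("n_estimators", 50, 500, 10),
}

trials = Trials()
best = fmin(
    fn=objective,
    space=search_space,
    algo=tpe.suggest,   # TPE; use hyperopt.rand.suggest for Random Search
    max_evals=50,
    trials=trials,
)
print(best)
```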

There are two ways to use Hyperopt in a distributed setting:

  • Use distributed Hyperopt with single-machine training algorithms. Specifically, you use the SparkTrials class when calling hyperopt.fmin() and run single-machine training algorithms in the objective function (see the sketch after this list).
  • Use single-machine Hyperopt with distributed training algorithms. Specifically, you use the default base.Trials class when calling hyperopt.fmin() and run distributed training algorithms in the objective function.
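
A minimal sketch of the first pattern is shown below, assuming a cluster where pyspark is available and an active SparkSession exists. SparkTrials distributes the individual trials across the cluster, while the objective function itself (such as the one defined above) still runs single-machine training.

```python
from hyperopt import fmin, tpe, SparkTrials

# Run up to 4 trials concurrently across the Spark cluster.
spark_trials = SparkTrials(parallelism=4)

best = fmin(
    fn=objective,        # the single-machine objective and search_space defined above
    space=search_space,
    algo=tpe.suggest,
    max_evals=100,
    trials=spark_trials,
)
```

For the second pattern, you keep the default Trials object (as in the first sketch) and instead launch a distributed training job inside the objective function.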

See the following sections for detailed demonstrations of these two use cases: