An MLflow Model is a standard format for packaging machine learning models for use in a variety of downstream tools, such as batch inference on Apache Spark or real-time serving through a REST API. The format defines a convention that lets you save a model in different flavors (Python Function, PyTorch, scikit-learn, and so on) that can be understood by different model serving and inference platforms.
Saving, loading, and deploying models
Most models are logged to a tracking server using the
mlflow.<model-type>.log_model(model, ...) API, loaded using the
mlflow.<model-type>.load_model(modelpath) API, and deployed using the tools described below.
See the notebooks in tracking-examples for examples of saving models and the notebooks below for examples of loading and deploying models.
You can also save models locally using the
mlflow.<model-type>.save_model(model, modelpath) API and load them back with load_model. When saving to DBFS, MLflow requires the DBFS FUSE path (/dbfs/...) for
modelpath, not the dbfs:/ URI. For example, if you use the DBFS location
dbfs:/diabetes_models to store diabetes regression models, you must use the model path
modelpath = "/dbfs/diabetes_models/model-%f-%f" % (alpha, l1_ratio)
mlflow.sklearn.save_model(lr, modelpath)
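The URI-to-FUSE-path mapping above can be made explicit with a small helper. The function below is hypothetical (it is not part of MLflow); it only illustrates how a dbfs:/ URI corresponds to the /dbfs mount path that save_model expects.

```python
# Hypothetical helper (not part of MLflow) mapping a dbfs:/ URI to the
# /dbfs FUSE mount path required by mlflow.<model-type>.save_model.
def dbfs_uri_to_fuse_path(uri):
    prefix = "dbfs:/"
    if not uri.startswith(prefix):
        raise ValueError("expected a dbfs:/ URI, got %r" % uri)
    return "/dbfs/" + uri[len(prefix):]

# Build the model path from illustrative hyperparameter values.
alpha, l1_ratio = 0.5, 0.1
modelpath = dbfs_uri_to_fuse_path("dbfs:/diabetes_models") + \
    "/model-%f-%f" % (alpha, l1_ratio)
```

Encoding the hyperparameters in the path, as the example does, keeps saved models from different runs from overwriting each other.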