Enable logging in Azure Machine Learning
The Azure Machine Learning Python SDK lets you enable logging with both the default Python logging package and SDK-specific functionality, both for local logging and for logging to your workspace in the portal. Logs provide real-time information about the application state and can help you diagnose errors or warnings. In this article, you learn how to enable logging in the following areas:
- Training models and compute targets
- Image creation
- Deployed models
Training models and compute target logging
There are multiple ways to enable logging during the model training process, and the following examples illustrate common design patterns. You can log run-related data to your workspace in the cloud by using the `start_logging` function on the `Experiment` class.

```python
from azureml.core import Experiment

exp = Experiment(workspace=ws, name='test_experiment')
run = exp.start_logging()
run.log("test-val", 10)
```
See the reference documentation for the Run class for additional logging functions.
To enable local logging of application state during training, use the `show_output` parameter. Enabling verbose logging lets you see details from the training process as well as information about any remote resources or compute targets. Use the following code to enable logging upon experiment submission.

```python
from azureml.core import Experiment

experiment = Experiment(ws, experiment_name)
run = experiment.submit(config=run_config_object, show_output=True)
```
You can also use the same parameter in the `wait_for_completion` function on the resulting run.
The SDK also supports using the default Python logging package in certain training scenarios. The following example enables a logging level of `INFO` in an `AutoMLConfig` object.

```python
import logging

from azureml.train.automl import AutoMLConfig

automated_ml_config = AutoMLConfig(task='regression',
                                   verbosity=logging.INFO,
                                   X=your_training_features,
                                   y=your_training_labels,
                                   iterations=30,
                                   iteration_timeout_minutes=5,
                                   primary_metric="spearman_correlation")
```
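Because `verbosity` takes a standard level from Python's `logging` package, the levels are plain integers that you can compare; lower values produce more output. A quick illustration using only the standard library:

```python
import logging

# Standard logging levels are plain integers; lower means more verbose.
for name in ("DEBUG", "INFO", "WARNING", "ERROR", "CRITICAL"):
    print(name, getattr(logging, name))
```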
You can also use the `show_output` parameter when creating a persistent compute target. Specify the parameter in the `wait_for_completion` function to enable logging during compute target creation.

```python
from azureml.core.compute import ComputeTarget

compute_target = ComputeTarget.attach(
    workspace=ws, name="example", attach_configuration=config)
compute_target.wait_for_completion(show_output=True)
```
Logging during image creation
Enabling logging during image creation allows you to see any errors during the build process. Set the `show_output` parameter in the `wait_for_deployment` function.

```python
from azureml.core.webservice import Webservice

service = Webservice.deploy_from_image(deployment_config=your_config,
                                       image=image,
                                       name="example-image",
                                       workspace=ws)
service.wait_for_deployment(show_output=True)
```
Logging for deployed models
To retrieve logs from a previously deployed web service, load the service and use the
get_logs() function. The logs may contain detailed information about any errors that occurred during deployment.
from azureml.core.webservice import Webservice # load existing web service service = Webservice(name="service-name", workspace=ws) logs = service.get_logs()
You can also log custom stack traces for your web service by enabling Application Insights, which lets you monitor request/response times, failure rates, and exceptions. Call the `update()` function on an existing web service to enable Application Insights.
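On the service side, emitting those stack traces is plain Python logging: a call to `logging.exception` inside your scoring code records the message together with the full traceback. A minimal sketch, in which the `run` function and its failure are hypothetical placeholders for your scoring logic:

```python
import logging

logger = logging.getLogger("scoring")

def run(data):
    """Hypothetical scoring entry point."""
    try:
        return 1 / data  # placeholder for model inference
    except Exception:
        # Records the message plus the full stack trace at ERROR level
        logger.exception("Scoring request failed for input %r", data)
        return None

run(0)  # logs a ZeroDivisionError traceback instead of raising
```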
See the how-to article for more information on working with Application Insights in the Azure portal.
Python native logging settings
Certain logs in the SDK may contain an error that instructs you to set the logging level to DEBUG. To set the logging level, add the following code to your script.
```python
import logging
logging.basicConfig(level=logging.DEBUG)
```
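With the level set to `DEBUG`, records that the default `WARNING` level would suppress are emitted. The following standard-library sketch routes them to an in-memory stream so you can inspect what a given level produces (`force=True` replaces any handlers that were already configured):

```python
import io
import logging

# Capture log output in memory instead of stderr so it can be inspected
stream = io.StringIO()
logging.basicConfig(stream=stream, level=logging.DEBUG, force=True)

logging.debug("verbose diagnostic detail")
logging.info("normal progress message")

print(stream.getvalue())
```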