I've been trying to deploy an Azure ML online endpoint for inference, but I'm getting the following error:
Code: None
Message: InternalServerError: Internal error. Please see troubleshooting guide, available here: https://aka.ms/oe-tsg#error-internalservererror
Exception Details: (None) InternalServerError: Internal error. Please see troubleshooting guide, available here: https://aka.ms/oe-tsg#error-internalservererror
Unfortunately, the available logs aren't sufficient for troubleshooting, and I cannot get the deployment to succeed.
Here are the deployment files for reference:
Environment file (conda YAML):
name: inference-env
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - joblib
      - azure-identity
      - azure-keyvault-secrets
      - utils
score.py (simplified version, omitting some postgres statements):
import json
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azureml.core.model import Model


def init():
    global model
    global conn

    # Load the model from the Azure ML model registry
    model_path = Model.get_model_path("my-model")

    # Retrieve database credentials from Azure Key Vault
    key_vault_name = os.environ["AZURE_KEY_VAULT_NAME"]
    kv_uri = f"https://{key_vault_name}.vault.azure.net"
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=kv_uri, credential=credential)


def run(raw_data):
    data = json.loads(raw_data)
    description = data["description"]
    return {"status": "success", "embedding": description}
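For what it's worth, the run() logic itself behaves as expected when I exercise it locally with a sample payload (a minimal sketch of the same logic; init() is skipped here since it needs the Azure ML model registry and Key Vault, which aren't available locally):

```python
import json


# Stand-in for run() from score.py above; init() is omitted because it
# depends on the Azure ML model registry and Key Vault.
def run(raw_data):
    data = json.loads(raw_data)
    description = data["description"]
    return {"status": "success", "embedding": description}


# Simulate the JSON body the endpoint would receive.
sample_request = json.dumps({"description": "a red mountain bike"})
print(run(sample_request))
# → {'status': 'success', 'embedding': 'a red mountain bike'}
```

So the failure seems to happen before the scoring code ever runs, during the deployment itself.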