Export and run a trained brain


  • Total time to complete: 15 minutes
  • Active time: 10 minutes
  • Export time: 5 minutes

Use the export brain function to package your trained brain as a containerized service that can run anywhere Docker is installed, including your local machine.

Note

Bonsai export supports the following processor architectures:

  • x64: AMD and Intel 64-bit systems (general-purpose Linux systems)
  • arm64 (v8): 64-bit RISC systems (Raspberry Pi 3+, Linux, newer industrial systems)
  • arm32 (v7): 32-bit RISC systems (Raspberry Pi 1, Raspberry Pi 2, Linux, older industrial systems)

Before you start

  • You must have the Azure CLI installed.
  • You must have Docker installed on your local machine. The community edition of Docker is available for Windows, Linux, and macOS.
  • You must have read/write access to Azure Container Registry (ACR). When you provision a workspace, Bonsai provides a default ACR in the same resource group as the workspace.
  • You must have a completed, fully trained Bonsai brain.

Step 1: Export the brain as a Docker image

  1. Log into the Bonsai UI.
  2. Select the brain and version you want to work with.
  3. Navigate to the Train panel.
  4. Click the Export brain button.
  5. In the dialog box that opens:
    • Provide a name for the exported brain.
    • Select the processor architecture the brain will run on.

Brain export

Screenshot of the Bonsai export brain panel with x64 architecture selected.

An image of the exported brain is packaged as a Docker container, saved to the ACR associated with your workspace, and added to the list of available brains under Exported Brains in the left-hand menu.

Bonsai displays exported brains by the name you assign during export. The full image name in ACR is BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE where:

  • BRAIN_NAME is the human-readable name assigned to the brain during export.
  • VERSION is the brain version you chose to export.
  • OS_TYPE is the operating system set during export.
  • ARCHITECTURE is the processor architecture you selected during export.
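Putting the pieces together is just string composition. A minimal sketch (the example values below are illustrative, not actual tags from your registry):

```python
def image_name(brain_name, version, os_type, architecture):
    """Compose an image tag following the
    BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE pattern described above."""
    return f"{brain_name}:{version}-{os_type}-{architecture}"

# Illustrative values only; the exact OS/architecture strings Bonsai
# writes into the tag may differ in your registry.
print(image_name("cartpole-balance", 2, "linux", "amd64"))
# cartpole-balance:2-linux-amd64
```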

Step 2: Download and run the Docker container

To download your exported brain:

  1. Log into your Azure Container Registry with the Azure CLI:
    az login
    az acr login --name ACR_NAME
    
  2. Fetch the containerized brain image and save it to your local machine:
    docker pull \
      WORKSPACE_NAME.azurecr.io/WORKSPACE_ID/BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE
    
  3. Run the container as a local server:
    docker run              \
      --detach              \
      --publish 5000:5000   \
      --name CONTAINER_NAME \
      WORKSPACE_NAME.azurecr.io/WORKSPACE_ID/BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE
    
  • ACR_NAME: The name of your Azure Container Registry (ACR) instance.
  • WORKSPACE_NAME: The name you gave to your Bonsai workspace when it was provisioned.
  • WORKSPACE_ID: The Azure-assigned resource ID of your Bonsai workspace.
  • CONTAINER_NAME: A human-readable name for the local Docker container.

The --detach flag starts the container in the background without blocking your terminal. The --publish flag maps port 5000 on your local machine to port 5000 inside the container, so the brain is reachable at localhost:5000.

Tip

You can copy the download and run commands with all the placeholders automatically filled in from the Bonsai UI:

  1. Click on the options menu (...) next to the exported brain you want to download.
  2. Select View download info to display the bash commands.

Step 3: Check the status of the container

You can check the status of your local container with the docker ps or docker logs command.

The docker ps command lists the status of all running containers by default.

Containerized brains that are running properly will be listed with the status "Up" and indicate how long the container has been running:

> docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
1310dba590e3        cfe62aa00691        "predict -m /saved_m…"   2 days ago          Up 34 hours         0.0.0.0:5000->5000/tcp   CONTAINER_NAME
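If you want to perform this check from a script, one approach (a sketch, not part of the Bonsai tooling) is to parse the output of docker ps with its --format flag, which prints one line per container:

```python
import subprocess

def is_container_up(name, ps_output=None):
    """Return True if a container with the given name reports an "Up" status.

    ps_output can be injected for testing; otherwise `docker ps` is invoked
    with --format to emit one "name<TAB>status" line per running container.
    """
    if ps_output is None:
        ps_output = subprocess.run(
            ["docker", "ps", "--format", "{{.Names}}\t{{.Status}}"],
            capture_output=True, text=True, check=True,
        ).stdout
    for line in ps_output.splitlines():
        cname, _, status = line.partition("\t")
        if cname == name:
            return status.startswith("Up")
    return False

# Example using captured output (no Docker needed to try this):
sample = "CONTAINER_NAME\tUp 34 hours"
print(is_container_up("CONTAINER_NAME", sample))  # True
```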

Step 4: Talk to the exported brain

Once the container is running, you can communicate with the brain programmatically using the Brain API on port 5000 of localhost (localhost:5000).

For example, assume you have the following definitions in your Inkling file:

inkling "2.0"

using Math
using Goal

# Pole and track constants
const TrackLength = 0.5
const MaxPoleAngle = (12 * Math.Pi) / 180

# State info from the simulator
type SimState {
  CartPosition: number,        # Position of cart in meters
  CartVelocity: number,        # Velocity of cart in meters/sec
  PoleAngle: number,           # Current angle of pole in radians
  PoleAngularVelocity: number, # Angular velocity of the pole in radians/sec
}

# Possible action results
type SimAction {
  # Amount of force in x direction to apply to the cart.
  Command: number<-1 .. 1>
}

# Define a concept graph with a single concept
graph (input: SimState): SimAction {
  concept BalancePole(input): SimAction {
    curriculum {
      source simulator (Action: SimAction): SimState {
        package "Cartpole"
      }

      # The objective of training is expressed as a goal with two
      # objectives: keep the pole from falling over and stay on the track
      goal (State: SimState) {
        avoid `Fall Over`:
          Math.Abs(State.PoleAngle) in Goal.RangeAbove(MaxPoleAngle)
        avoid `Out Of Range`:
          Math.Abs(State.CartPosition) in Goal.RangeAbove(TrackLength / 2)
      }
    }
  }
}

Using the v2 client API, your call would look like:

import requests
import json
import uuid

# General variables
url = "http://localhost:5000"
predictPath = "/v2/clients/{clientId}/predict"
headers = {
  "Content-Type": "application/json"
}

# Set a random UUID for the client.
# The same client ID will be used for every call
myClientId = str(uuid.uuid4())

# Build the endpoint reference
endpoint = url + predictPath.replace("{clientId}", myClientId)

# Set the request variables
requestBody = {
  "state": {
    "CartPosition": 3,
    "CartVelocity": 1.5,
    "PoleAngle": 0.75,
    "PoleAngularVelocity": 0.1
  }
}

# Send the POST request
response = requests.post(
            endpoint,
            data = json.dumps(requestBody),
            headers = headers
          )

# Extract the JSON response
prediction = response.json()

# Access the JSON result: full response object
print(prediction)

# Access the JSON result: all concepts
print(prediction['concepts'])

# Access the JSON result: specific field
print(prediction['concepts']['BalancePole']['action']['Command'])
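If you call the brain repeatedly, the endpoint and body construction above can be factored into a small helper. A sketch (the build_predict_call name is invented here; it is not part of any Bonsai SDK):

```python
import json
import uuid

def build_predict_call(base_url, client_id, state):
    """Return (endpoint, payload) for the v2 predict route shown above.

    The keys in `state` must match the field names of the Inkling
    SimState type the brain was trained with.
    """
    endpoint = f"{base_url}/v2/clients/{client_id}/predict"
    payload = json.dumps({"state": state})
    return endpoint, payload

# Example; no request is actually sent here.
endpoint, payload = build_predict_call(
    "http://localhost:5000",
    str(uuid.uuid4()),
    {
        "CartPosition": 3,
        "CartVelocity": 1.5,
        "PoleAngle": 0.75,
        "PoleAngularVelocity": 0.1,
    },
)
```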

Using the v1 prediction API instead, your call would look like:

import requests
import json

# General variables
url = "http://localhost:5000"
predictionPath = "/v1/prediction"
headers = {
  "Content-Type": "application/json"
}

# Build the endpoint reference
endpoint = url + predictionPath

# Set the request variables
requestBody = {
  "CartPosition": 3,
  "CartVelocity": 1.5,
  "PoleAngle": 0.75,
  "PoleAngularVelocity": 0.1
}

# Send the POST request
response = requests.post(
            endpoint,
            data = json.dumps(requestBody),
            headers = headers
          )

# Extract the JSON response
prediction = response.json()

# Access the JSON result: full response object
print(prediction)

# Access the JSON result: specific field
print(prediction['Command'])
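Note the shape difference from the v2 example: the v1 endpoint takes the state fields at the top level of the request body rather than nested under a state key. A tiny helper makes the conversion explicit (the function name is invented here for illustration):

```python
def v2_state_to_v1_body(v2_body):
    """Flatten a v2-style body ({"state": {...}}) into the flat v1 shape."""
    return dict(v2_body["state"])

v2_body = {"state": {"CartPosition": 3, "CartVelocity": 1.5,
                     "PoleAngle": 0.75, "PoleAngularVelocity": 0.1}}
v1_body = v2_state_to_v1_body(v2_body)
print(v1_body["PoleAngle"])  # 0.75
```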

Tip

The container serves API details for your brain, including your custom request and response objects, at http://localhost:5000/v1/doc/index.html.

Step 5: Clean up

Stop the Docker container when you have finished working with the brain locally.

To stop the brain, use the docker ps command to get the local container ID and docker stop CONTAINER_ID to stop the associated container. You can also stop the container by name with docker stop CONTAINER_NAME.

> docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
1310dba590e3        cfe62aa00691        "predict -m /saved_m…"   2 days ago          Up 34 hours         0.0.0.0:5000->5000/tcp   CONTAINER_NAME

> docker stop 1310dba590e3