Export and run a trained brain

  • Total time to complete: 15 minutes (Windows), 50 minutes (Linux)
  • Active time: 10 minutes
  • Export time: 5 minutes (Windows), 40 minutes (Linux)

Use the export brain function to package your trained brain as a containerized service that can run anywhere Docker is installed, including your local machine.


The Bonsai export function currently supports the following processor architectures:

  • x64: 64-bit AMD and Intel systems (general-purpose Windows and Linux systems)
  • arm64 (v8): 64-bit RISC systems (Raspberry Pi 3+, Linux, newer industrial systems)
  • arm32 (v7): 32-bit RISC systems (Raspberry Pi 1, Raspberry Pi 2, Linux, older industrial systems)

Before you start

  • You must have the Azure CLI installed.
  • You must have Docker installed on your local machine. The community edition of Docker is available for Windows, Linux, and MacOS.
  • You must have read/write access to an Azure Container Registry (ACR). When you provision a workspace, Bonsai provides a default ACR in the same resource group as the workspace.
  • You must have a completed, fully trained Bonsai brain.

Step 1: Export the brain as a Docker image

  1. Log into the Bonsai UI.
  2. Select the brain and version you want to work with.
  3. Navigate to the Train panel.
  4. Click the Export brain button.
  5. In the dialog box that opens:
    • Provide a name for the exported brain.
    • Select the operating system the brain will run on.
    • Select the processor architecture the brain will run on.

Brain export

Screenshot of the Bonsai UI showing the brain export dialog.

An image of the exported brain is packaged as a Docker container, saved to the ACR associated with your workspace, and added to the list of available brains under Exported Brains in the left-hand menu.

Bonsai displays exported brains by the name you assign during export. The full image name in ACR is BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE where:

  • BRAIN_NAME is the human-readable name assigned to the brain during export.
  • VERSION is the brain version you chose to export.
  • OS_TYPE is the operating system you selected during export.
  • ARCHITECTURE is the processor architecture you selected during export.
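As a concrete illustration of the naming scheme, the sketch below assembles a full image name from hypothetical export choices (the brain name, version, OS, and architecture shown are examples only; substitute the values you selected during export):

```python
# All values below are hypothetical; substitute your own export choices.
brain_name = "cartpole"
version = "2"
os_type = "linux"
architecture = "x64"

# Full image name as stored in ACR: BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE
image_name = f"{brain_name}:{version}-{os_type}-{architecture}"
print(image_name)  # cartpole:2-linux-x64
```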

Step 2: Download and run the Docker container

To download your exported brain:

  1. Log into your Azure Container Registry with the Azure CLI:
    az acr login --name WORKSPACE_ID
  2. Fetch the containerized brain image and save it to your local machine:
    docker pull \
      WORKSPACE_ID.azurecr.io/WORKSPACE_NAME/BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE
  3. Run the container as a local server:
    docker run              \
      --detach              \
      --publish 5000:5000   \
      --name CONTAINER_NAME \
      WORKSPACE_ID.azurecr.io/WORKSPACE_NAME/BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE
  • WORKSPACE_NAME: The name you gave to your Bonsai workspace when it was provisioned.
  • WORKSPACE_ID: The Azure-assigned resource ID of your Bonsai workspace.
  • CONTAINER_NAME: A human-readable name for the local Docker container.
  • BRAIN_NAME:VERSION-OS_TYPE-ARCHITECTURE: The full image name assigned during export (see step 1).

The --detach flag starts the container in the background without blocking your terminal. The --publish flag maps port 5000 inside the container to port 5000 on your local machine, so the brain is reachable at localhost:5000.


You can copy the download and run commands with all the placeholders automatically filled in from the Bonsai UI:

  1. Click on the options menu (...) next to the exported brain you want to download.
  2. Select View download info to display the bash commands.

Step 3: Check the status of the container

You can check the status of your local container with the docker ps or docker logs command.

The docker ps command lists all running containers and their status by default.

Containerized brains that are running properly will be listed with the status "Up" and indicate how long the container has been running:

> docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
1310dba590e3        cfe62aa00691        "predict -m /saved_m…"   2 days ago          Up 34 hours         0.0.0.0:5000->5000/tcp   CONTAINER_NAME

Step 4: Talk to the exported brain

Once the container is running, you can communicate with the brain programmatically using the POST /v1/prediction endpoint of the Brain API on port 5000 of localhost (localhost:5000).

JSON request and response objects for the Brain API are customized to each brain based on the state and action definitions in the associated Inkling file, following these templates:

Request object template

{
  'StateName1' : 'Value1',
  'StateName2' : {
    'Subfield1': 'Subvalue1',
    'Subfield2': 'Subvalue2'
  },
  'StateName3' : 'Value3',
  'StateNameN' : 'ValueN'
}

Each StateName field corresponds to an input variable in the Inkling file used to train the brain.

Response object template

{
  'ActionName1' : 'Value1',
  'ActionName2' : 'Value2',
  'ActionName3' : {
    'Subfield1': 'Subvalue1',
    'Subfield2': 'Subvalue2'
  },
  'ActionNameN' : 'ValueN'
}

Each ActionName field corresponds to an output variable in the Inkling file used to train the brain.
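To make the templates concrete, the sketch below builds a request body and decodes a response with Python's standard json module. All field names and values are hypothetical; the real ones come from your Inkling state and action definitions.

```python
import json

# Hypothetical state fields; real names come from the Inkling state type.
state = {
    "StateName1": 1.0,
    "StateName2": {"Subfield1": 0.5, "Subfield2": -0.5},
}
request_body = json.dumps(state)

# Hypothetical response from the brain; real names come from the action type.
response_body = '{"ActionName1": 0.25, "ActionName2": -1.0}'
action = json.loads(response_body)
print(action["ActionName1"])  # 0.25
```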


While it is running, the container serves API documentation for your brain at http://localhost:5000/v1/doc/index.html.

You can use any language and REST library to send requests to the exported brain. For example, to send a request with PowerShell:

$param = @{
    Uri = "http://localhost:5000/v1/prediction"
    Method = "POST"
    Body = '{"StateName1": Value1, "StateName2": Value2}'
    ContentType = "application/json"
}
Invoke-RestMethod @param
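An equivalent request can be sent from Python using only the standard library. This is a sketch with hypothetical state names, assuming the container is listening on localhost:5000 as in step 2:

```python
import json
import urllib.request

def build_prediction_request(state, base_url="http://localhost:5000"):
    """Build a POST request for the Brain API prediction endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/v1/prediction",
        data=json.dumps(state).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def predict(state):
    """Send the request and decode the JSON action returned by the brain."""
    with urllib.request.urlopen(build_prediction_request(state)) as resp:
        return json.load(resp)

# Example call (requires the container from step 2 to be running):
# action = predict({"StateName1": 1.0, "StateName2": 2.0})
```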

Step 5: Clean up

Stop the Docker container when you have finished working with the brain locally.

To stop the brain, use docker ps to look up the local container ID, then run docker stop CONTAINER_ID to stop the associated container.

> docker ps

CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
1310dba590e3        cfe62aa00691        "predict -m /saved_m…"   2 days ago          Up 34 hours         0.0.0.0:5000->5000/tcp   CONTAINER_NAME

> docker stop 1310dba590e3