Import the AdventureWorks sample database to Azure Arc-enabled PostgreSQL Hyperscale

AdventureWorks is a sample OLTP database used in tutorials and examples. It's provided and maintained by Microsoft as part of the SQL Server samples GitHub repository.

An open-source project has converted the AdventureWorks database to be compatible with Azure Arc-enabled PostgreSQL Hyperscale.

This document describes a simple process to get the AdventureWorks sample database imported into your PostgreSQL Hyperscale server group.

Note

As a preview feature, the technology presented in this article is subject to Supplemental Terms of Use for Microsoft Azure Previews.

The latest updates are available in the release notes.

Download the AdventureWorks backup file

Download the AdventureWorks .sql file into your PostgreSQL Hyperscale server group container. In this example, we'll use the kubectl exec command to remotely run a command in the container that downloads the file from GitHub. You could download this file from any location accessible by curl, and you can use the same method to pull in other database backup files. Once the file is in the container, it's easy to create the database and schema, and populate the data.

Run a command like this to download the file. Replace the values of the pod name and namespace name before you run it:

Note

Your container needs Internet connectivity over port 443 to download the file from GitHub.

Note

Use the pod name of the coordinator node of the PostgreSQL Hyperscale server group. Its name is <server group name>c-0 (for example postgres01c-0, where c stands for coordinator node). If you are not sure of the pod name, run the command kubectl get pod.
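For example, to locate the coordinator pod you can list the pods in your namespace and look for the name ending in c-0 (the namespace name below is a placeholder for your environment):

```shell
# List the pods in the namespace that hosts your server group;
# the coordinator pod name ends in "c-0".
kubectl get pod -n <namespace name>
```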

kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres  -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql"

#Example:
#kubectl exec postgres02c-0 -n arc -c postgres -- /bin/bash -c "cd /tmp && curl -k -O https://raw.githubusercontent.com/microsoft/azure_arc/main/azure_arc_data_jumpstart/cluster_api/capi_azure/arm_template/artifacts/AdventureWorks2019.sql"
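To confirm the download succeeded before moving on, you can list the file inside the container. This is a sketch; the pod name and namespace name are placeholders for your environment:

```shell
# Verify the .sql file landed in /tmp inside the coordinator pod.
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- ls -lh /tmp/AdventureWorks2019.sql
```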

Import the AdventureWorks database

Similarly, you can run a kubectl exec command to use the psql CLI tool that is included in the PostgreSQL Hyperscale server group containers to create and load the database.

Run a command like this to create the empty database first, substituting the values of the pod name and the namespace name before you run it.

kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -c 'CREATE DATABASE "adventureworks";'

#Example
#kubectl exec postgres02c-0 -n arc -c postgres -- psql --username postgres -c 'CREATE DATABASE "adventureworks";'
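If you want to check that the empty database was created, you can list the databases in the instance. This is a sketch; the pod name and namespace name are placeholders for your environment:

```shell
# List databases; "adventureworks" should appear in the output.
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -l
```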

Then, run a command like this to import the database, substituting the values of the pod name and the namespace name before you run it.

kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -d adventureworks -f /tmp/AdventureWorks2019.sql

#Example
#kubectl exec postgres02c-0 -n arc -c postgres -- psql --username postgres -d adventureworks -f /tmp/AdventureWorks2019.sql
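As a quick sanity check after the import, you can count the user tables in the new database. This is a sketch; the pod name and namespace name are placeholders for your environment:

```shell
# Count the user tables created by the import (excludes system schemas);
# a successful import returns a non-zero count.
kubectl exec <PostgreSQL pod name> -n <namespace name> -c postgres -- psql --username postgres -d adventureworks -c "SELECT count(*) FROM information_schema.tables WHERE table_schema NOT IN ('pg_catalog', 'information_schema');"
```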

Note: You will not see significant performance benefits from running on Azure Arc-enabled PostgreSQL Hyperscale until you scale out and shard/distribute the data/tables across the worker nodes of your PostgreSQL Hyperscale server group. See Suggested next steps.

Suggested next steps