Hi folks!
The official documentation is quite poor on this topic.
Here is the instruction "Onboarding into serverless compute tier":
https://docs.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#onboarding-into-serverless-compute-tier
with just one phrase:
"A serverless database can be moved into a provisioned compute tier in the same way as moving a provisioned compute database into a serverless compute tier."
OK. Great. Now when you try to run a similar command:
az sql db update ...
you catch an error:
"Azure SQL Data Warehouse can be updated with the command az sql dw update."
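(For a regular, non-DW Azure SQL database the documented move does seem to work with flags like the ones below. This is only my hedged sketch: the resource group, server, and database names are placeholders, and the edition/capacity values are just examples.)

```
az sql db update -g SomeResourceGroupName -s SomeServerName -n SomeDBName \
    --edition GeneralPurpose --compute-model Provisioned \
    --family Gen5 --capacity 4
```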
OK. Let's try the same command using
az sql dw update (instead of
az sql db update ).
In my case it's:
az sql dw update -g SomeResourceGroupName -s SomeServerName -n SomeDBName --edition GeneralPurpose --min-capacity 1 --capacity 4 --family Gen5 --compute-model Provisioned
And..... here you get another error:
az: error: unrecognized arguments: --edition GeneralPurpose --min-capacity 1 --capacity 4 --family Gen5 --compute-model Provisioned
OK. Nice. Let's go to the manual:
Azure CLI Commands - az sql dw update
https://docs.microsoft.com/en-us/cli/azure/sql/dw?view=azure-cli-latest#az_sql_dw_update
And..... voilà! There are no such arguments at all!
The only thing you can set this way is
--service-objective (e.g. DW100, DW1000c), which is not what we need; besides, that kind of scaling up/down can be done in one click in the Azure portal settings anyway.
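So, per that manual page, the only scaling knob looks roughly like this (names are my placeholders, and DW200c is just an example service objective):

```
az sql dw update -g SomeResourceGroupName -s SomeServerName -n SomeDBName \
    --service-objective DW200c
```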
Please add a remark to the above-mentioned documentation paragraph stating that DW/SQL Pool doesn't support such a migration.
Now let's get back to the main topic. If there is no automatic/fast way to move from the serverless to the provisioned tier, then what other options are available?
I can manually create a provisioned DB and copy all the data there. But here more bad news awaits.
AFAIK Azure DWH doesn't support direct cross-DB queries. The only workaround (according to stackoverflow.com) is to set up the Elastic Query feature (just to run it once) OR to create an external table with "a few" steps:
1) CREATE MASTER KEY ENCRYPTION BY PASSWORD
2) CREATE DATABASE SCOPED CREDENTIAL
3) CREATE EXTERNAL DATA SOURCE
4) CREATE EXTERNAL TABLE
5) Deep breath and repeat STEP (4) for each table in the DB
6) Deep breath and copy each table one by one with INSERT-SELECT
7) ???
8) Profit
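For reference, steps (1)-(4) look roughly like this in T-SQL (elastic query syntax). This is my own hedged sketch: the credential name, passwords, server/database names, and the table definition are all placeholders, not values from the docs:

```sql
-- (1) Master key in the target (provisioned) DB
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword123!>';

-- (2) Credential used to connect to the source (serverless) DB
CREATE DATABASE SCOPED CREDENTIAL MigrationCred
WITH IDENTITY = 'someSqlLogin', SECRET = '<LoginPassword123!>';

-- (3) External data source pointing at the source DB
CREATE EXTERNAL DATA SOURCE SourceDb
WITH (TYPE = RDBMS,
      LOCATION = 'someservername.database.windows.net',
      DATABASE_NAME = 'SomeDBName',
      CREDENTIAL = MigrationCred);

-- (4) One external table per source table (columns must match the source)
CREATE EXTERNAL TABLE dbo.SomeTableName_ext (
    Id INT,
    SomeColumn NVARCHAR(100)
)
WITH (DATA_SOURCE = SourceDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'SomeTableName');

-- (6) Copy the data into the local table
INSERT INTO dbo.SomeTableName SELECT * FROM dbo.SomeTableName_ext;
```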
On top of that, I need to keep the same table names, but a local table can't be named exactly the same as an external table, or you get:
"There is already an object named 'SomeTableName' in the database." So that's yet another migration headache requiring intermediate tables and renaming. I didn't find an option to assign a separate schema to the external tables in order to keep the same table names.
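Since steps (4)-(6) have to be repeated per table anyway, a small throwaway generator can at least spare the typing. This is my own hedged sketch, not anything from the docs: the _ext suffix convention, the data source name, and the column lists are all assumptions to adapt:

```python
# Generate per-table CREATE EXTERNAL TABLE and INSERT-SELECT statements.
# Table names and column definitions below are placeholders; replace them
# with the real schema of the source database.
TABLES = {
    "SomeTableName": "Id INT, SomeColumn NVARCHAR(100)",
    "AnotherTable": "Id INT, Amount DECIMAL(18, 2)",
}

def external_table_ddl(table: str, columns: str, data_source: str = "SourceDb") -> str:
    """External table named <table>_ext to avoid the local-name clash."""
    return (
        f"CREATE EXTERNAL TABLE dbo.{table}_ext ({columns})\n"
        f"WITH (DATA_SOURCE = {data_source}, "
        f"SCHEMA_NAME = 'dbo', OBJECT_NAME = '{table}');"
    )

def copy_statement(table: str) -> str:
    """INSERT-SELECT from the external table into the local one."""
    return f"INSERT INTO dbo.{table} SELECT * FROM dbo.{table}_ext;"

if __name__ == "__main__":
    for table, columns in TABLES.items():
        print(external_table_ddl(table, columns))
        print(copy_statement(table))
        print()
```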
But anyway, is this really the easiest/fastest way?
It looks like duct tape. I'm disappointed :(
Best regards,
Vadim.