AZURE SQL MI General Purpose Performance question

Biju Mathew 471 Reputation points
2021-08-30T06:31:15.63+00:00

Hi,
we have provisioned an Azure SQL MI GP with 8 cores and 512 GB reserved space.

I have created a test DB on it, with the data file and log file at 100 MB each.
The DB compatibility level is 150 (SQL Server 2019).

I have created 1 table :

CREATE TABLE [Test_Data] (
[Id] INT IDENTITY,
[Date] DATETIME DEFAULT GETDATE (),
[City] CHAR (25) DEFAULT 'Testing123',
[CityName] CHAR (25) DEFAULT 'Test Test1 Test2');

Then I insert 1,000 rows using the below:

INSERT INTO Test_Data DEFAULT VALUES;
GO 1000

This insert takes close to 2 minutes for 1,000 rows! 2 minutes for 1,000 rows is excessive, in my view.

Just for comparison, on my laptop's local SQL instance the same query runs in 3 seconds.

I do not know what I am missing here.

Anyone, any thoughts please?

thanks

Azure SQL Database

Accepted answer
  1. Oury Ba-MSFT 16,471 Reputation points Microsoft Employee
    2021-08-31T21:38:22.817+00:00

    @Biju Mathew

    The disks you explicitly create for Azure VMs run at their respective provisioned speed, independently of the size of the data files placed on them.
    On the Azure SQL MI General Purpose tier, each database file is placed on the nearest-sized Premium disk, meaning the bigger the file, the bigger the disk (and the faster the I/O) you get.
    The I/O performance of Azure Premium disks depends on the disk size. This is explained in the article I shared in my earlier reply.
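
    The practical consequence is that pre-growing a file can move it onto a larger, faster Premium disk tier. A minimal sketch, assuming a hypothetical database named TestDB with a log file named TestDB_log (the exact size-to-tier boundaries are in the article referenced in this thread):

    ```sql
    -- Grow the log file past a Premium disk size boundary (for example,
    -- above 128 GB) so MI General Purpose places it on a larger disk
    -- with higher IOPS. Database and file names here are hypothetical.
    ALTER DATABASE [TestDB]
    MODIFY FILE (NAME = N'TestDB_log', SIZE = 140000 MB);  -- ~137 GB
    ```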

    Hope that helps.

    Please mark the reply as the accepted answer if it is helpful. Otherwise, let us know how we can better assist.

    Regards,
    Oury


3 additional answers

Sort by: Most helpful
  1. Oury Ba-MSFT 16,471 Reputation points Microsoft Employee
    2021-08-30T21:04:44.827+00:00

    Hi @Biju Mathew, thank you for posting your question on Microsoft Q&A. Consider increasing the size of the data and log files; see "Impact of log file size on General Purpose Managed Instance performance" by Jovan Popovic (Azure SQLDB Managed Instance, Medium).

    Additionally, 1,000 separate transactions is a pretty inefficient way of inserting data; wrap the inserts into a single transaction.
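
    A single-transaction version of the test insert might look like this (a sketch, using the table name from the question):

    ```sql
    -- Same 1,000-row load as the GO 1000 batch, but wrapped in one
    -- explicit transaction, so the commit-time log hardening happens
    -- once instead of once per single-row autocommit transaction.
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    DECLARE @i INT = 0;
    WHILE @i < 1000
    BEGIN
        INSERT INTO Test_Data DEFAULT VALUES;
        SET @i += 1;
    END;
    COMMIT TRANSACTION;
    ```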

    What is your local database's recovery model? There is a huge difference in transaction log impact between the Simple and Full (SQL MI) recovery models.
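
    You can check the recovery model on your local instance with (TestDB here is a hypothetical database name):

    ```sql
    -- Managed Instance databases always use the FULL recovery model;
    -- compare this with the model of your local test database.
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE name = N'TestDB';
    ```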

    Also, General Purpose uses remote storage; comparing it to a local SSD while "hammering" the transaction log with sequential single-row transactions will never be favorable.
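
    One way to see the remote-storage effect directly is the WRITELOG wait, which accumulates on every commit's log flush:

    ```sql
    -- Average log-flush latency since instance start. On General
    -- Purpose this round trip goes to remote Premium storage, so the
    -- average is typically milliseconds rather than the
    -- sub-millisecond times of a local SSD.
    SELECT wait_type, waiting_tasks_count, wait_time_ms,
           wait_time_ms * 1.0 / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = N'WRITELOG';
    ```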

    Regards,
    Oury


  2. Biju Mathew 471 Reputation points
    2021-08-31T08:16:44.493+00:00

    Thanks for this input. I will try increasing the data/log file sizes and see if it makes a difference.

    You say that GP offers remote storage (via Azure Premium disks), so every I/O becomes a network I/O, hence the latency.

    A related question: I am trying to compare this behaviour with how SQL Server on an Azure VM (with Premium managed disks for the data files) behaves. Will it behave the same, i.e. every I/O becomes a network I/O just like in GP? Or are there some optimizations when it comes to SQL on VM storage?

    Thanks


  3. Biju Mathew 471 Reputation points
    2021-09-01T01:14:46.337+00:00

    Thanks @Oury Ba-MSFT. I understand that Azure VM disks perform at their provisioned speed regardless of the file sizes.
    The question was more about how managed Premium disks are attached to Azure VMs.

    Are they attached locally, or do they follow the same remote-storage pattern as in MI GP?