SQL Server Performance Issue

Shreenivasa S Jagatagar 1 Reputation point
2021-01-25T20:05:23.047+00:00

We have many tasks performed using T-SQL stored procedures in SQL Server. These stored procedures were converted from Oracle.

One task, which involves around 50 SQL Server stored procedures (T-SQL), is about 10 times slower in SQL Server than in Oracle: it takes 3 minutes 30 seconds in SQL Server versus around 24 seconds in Oracle.
For example, each INSERT takes 19 milliseconds in SQL Server, whereas in Oracle it takes less than 1 millisecond.
A few temp tables (# tables) are used in these SQL Server procedures as replacements for Oracle global variables (mainly Oracle global collections).
This task reads around 14,000 records and creates 56,000 records.
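As a sketch of the conversion pattern described above (table and column names here are illustrative, not from the actual converted code), an Oracle package-level collection can be emulated with a session-scoped temp table:

```sql
-- Session-scoped temp table standing in for an Oracle global collection.
-- Names are illustrative only.
CREATE TABLE #global_rollup_list (
    rollup_id  numeric(6, 0) NOT NULL,
    rollup_txt varchar(100)  NULL
);

-- Populated once; visible to any procedure called later in the same session.
INSERT INTO #global_rollup_list (rollup_id, rollup_txt)
VALUES (123, 'sample entry');
```

Note that a procedure referencing a temp table it did not create can trigger recompiles, which is one known source of per-call overhead in code converted this way.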
Both the Oracle and SQL Server databases are on the same server: an Intel Xeon Silver 4112 with two virtual processors (2.59 GHz) and 48 GB of RAM, running SQL Server 2014 (SQL Server 2016 was also tried).

The following is a simple INSERT that inserts 1000 records into a test table. It takes more than 150 milliseconds in SQL Server; in Oracle it takes around 30 milliseconds.
CREATE TABLE [dbo].[TESTPERF01](
[CCN] [varchar](6) NOT NULL,
[SVC_CUR_ERROR_ROLLUP_ID] [numeric](6, 0) NULL,
[SVC_ENG_ERROR_ROLLUP_ID] [numeric](6, 0) NULL,
[SVC_SIM_ERROR_ROLLUP_ID] [numeric](6, 0) NULL,
[SVC_ITEM_AGC] [varchar](30) NULL,
[SVC_ITEM_VAT] [varchar](30) NULL,
[SVC_ITEM_STAX] [varchar](30) NULL,
[SVC_ITEM_UTAX] [varchar](30) NULL,
[SVC_ITEM_ETAX] [varchar](30) NULL
)

CREATE PROCEDURE dbo.TEST_INSERT03
@P_PROCESS_ID VARCHAR(8) = ' '
AS

BEGIN

SET NOCOUNT ON

DECLARE @SEQ VARCHAR(6)
,@I INT = 1
,@LV_END_DATE DATETIME
,@LV_START_DATE DATETIME
,@UMSG_TEXT VARCHAR(1000)

SET @LV_START_DATE = SYSDATETIME()

begin transaction TRAN_1;

IF @P_PROCESS_ID = ' '
BEGIN
SET @P_PROCESS_ID = Format( SYSDATETIME(),'HHmmss','en-us')
END

DELETE FROM TESTPERF01

SET @I = 1
WHILE @I <= 1000
BEGIN

SET @SEQ = CAST(@I AS VARCHAR(6))
INSERT INTO TESTPERF01 (
[CCN]
,[SVC_CUR_ERROR_ROLLUP_ID]
,[SVC_ENG_ERROR_ROLLUP_ID]
,[SVC_SIM_ERROR_ROLLUP_ID]
,[SVC_ITEM_AGC]
,[SVC_ITEM_VAT]
,[SVC_ITEM_STAX]
,[SVC_ITEM_UTAX]
,[SVC_ITEM_ETAX]

)
VALUES (
@SEQ
,123
,123
,123
,'AGC'
,'VT1'
,'1'
,'1'
,'1'
);
SET @I = @I + 1
END

commit transaction TRAN_1;
SET @LV_END_DATE = SYSDATETIME()

PRINT CAST( DATEDIFF(MILLISECOND, @LV_START_DATE, @LV_END_DATE) AS VARCHAR(30))
PRINT CAST( DATEDIFF(nanosecond, @LV_START_DATE, @LV_END_DATE) AS VARCHAR(30))

END

In our procedures, BEGIN TRAN is issued at the beginning of the transaction and COMMIT TRAN at the end. (Each individual INSERT is not wrapped in its own BEGIN TRAN/COMMIT TRAN.)

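For context on why the transaction scope matters: each INSERT outside an explicit transaction is its own autocommit transaction and forces a transaction log flush on commit, while wrapping the whole loop in one transaction means a single flush at COMMIT. A minimal sketch of the two patterns, using the test table from the question:

```sql
-- Autocommit: ~1000 separate transactions, one log flush each.
DECLARE @i INT = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO dbo.TESTPERF01 (CCN) VALUES (CAST(@i AS VARCHAR(6)));
    SET @i += 1;
END;

-- Explicit transaction: a single log flush at COMMIT.
BEGIN TRANSACTION;
SET @i = 1;
WHILE @i <= 1000
BEGIN
    INSERT INTO dbo.TESTPERF01 (CCN) VALUES (CAST(@i AS VARCHAR(6)));
    SET @i += 1;
END;
COMMIT TRANSACTION;
```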
I have reviewed the query execution plans, and a set-based approach is used as far as possible.
Could you please advise whether there are any SQL Server setup issues or other possible causes?


5 answers

  1. Jeffrey Williams 1,886 Reputation points
    2021-01-25T21:30:27.637+00:00

    SQL Server does not do well with loops - and looping 1000 times to insert 1000 rows one at a time is going to take some time. If you want better performance for inserts like this, you would be much better off using a set-based batch insert.

    An example to mimic your process would be:

       Drop Table If Exists #testperf01;  
      
     Create Table #testperf01 (  
            [CCN] [varchar](6) Not Null  
          , [SVC_CUR_ERROR_ROLLUP_ID] [numeric](6, 0) Null   
          , [SVC_ENG_ERROR_ROLLUP_ID] [numeric](6, 0) Null   
          , [SVC_SIM_ERROR_ROLLUP_ID] [numeric](6, 0) Null   
          , [SVC_ITEM_AGC] [varchar](30) Null   
          , [SVC_ITEM_VAT] [varchar](30) Null   
          , [SVC_ITEM_STAX] [varchar](30) Null   
          , [SVC_ITEM_UTAX] [varchar](30) Null   
          , [SVC_ITEM_ETAX] [varchar](30) Null   
            );  
      
    Declare @startTime datetime2 = sysdatetime();  
      
     --==== Batch insert 1000 rows  
       With t(n)  
         As (  
     Select t.n   
       From (  
     Values (0), (0), (0), (0), (0), (0), (0), (0), (0), (0)) As t(n)  
            )  
          , iTally (n)  
         As (  
     Select checksum(row_number() over(Order By @@spid))   
       From t t1, t t2, t t3                                -- 1000 rows  
            )  
     Insert Into #testperf01 (  
            CCN  
          , SVC_CUR_ERROR_ROLLUP_ID  
          , SVC_ENG_ERROR_ROLLUP_ID  
          , SVC_SIM_ERROR_ROLLUP_ID  
          , SVC_ITEM_AGC  
          , SVC_ITEM_VAT  
          , SVC_ITEM_STAX  
          , SVC_ITEM_UTAX  
          , SVC_ITEM_ETAX  
            )  
     Select CCN = cast(it.n As varchar(6))  
          , SVC_CUR_ERROR_ROLLUP_ID = 123  
          , SVC_ENG_ERROR_ROLLUP_ID =  123  
          , SVC_SIM_ERROR_ROLLUP_ID = 123  
          , SVC_ITEM_AGC = 'AGC'  
          , SVC_ITEM_VAT = 'VT1'  
          , SVC_ITEM_STAX = '1'  
          , SVC_ITEM_UTAX = '1'  
          , SVC_ITEM_ETAX = '1'  
       From iTally                  it;  
      
    Declare @endDateTime datetime2 = sysdatetime();  
      
     Select @startTime  
          , @endDateTime  
          , ms = datediff(ms, @startTime, @endDateTime)  
          , ns = datediff(ns, @startTime, @endDateTime);  
    

    The first time this runs it will take longer, since the execution plan has to be generated. After that it takes less than 50 ns (on my system) and reports 0 for both ms and ns. It also does not require an explicit transaction, since all rows are inserted in a single statement and therefore in a single transaction.

    With that said - this technique may not work for your specific process as it is all dependent on how the values are generated for each row to be inserted and what it takes to generate those values.


  2. Dan Guzman 9,206 Reputation points
    2021-01-25T22:07:09.477+00:00

    This takes more than 150 milliseconds in SQL Server and in Oracle this takes around 30 milliseconds.

    The proc with 1K singleton inserts runs in 13 milliseconds on my test machine and averages 1-1.5 seconds with 100K rows. I suggest you focus on memory configuration. It may be that max server memory is set too high, so SQL Server is competing with the other processes (e.g. Oracle) running on the same machine.
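    For reference, max server memory can be inspected and capped with `sp_configure`; the 32000 MB value below is purely illustrative and should be sized to leave room for Oracle and the OS on this shared box:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';         -- show current setting
EXEC sp_configure 'max server memory (MB)', 32000;  -- illustrative cap
RECONFIGURE;
```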


  3. Erland Sommarskog 100.8K Reputation points MVP
    2021-01-25T22:21:48.42+00:00

    My first attempt actually took 160 ms, but the next two executions were 6 to 10 ms.

    In addition to Dan's post, check whether you have set up any traces or Extended Events sessions that look at too much data. That can be an excellent way to slow things down, particularly for workloads that run many short statements.
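    A quick way to check for such sessions, using the standard catalog view and DMV:

```sql
-- Active SQL Trace sessions (id = 1 is normally the default trace).
SELECT id, path, status FROM sys.traces;

-- Currently running Extended Events sessions.
SELECT name, create_time FROM sys.dm_xe_sessions;
```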

    And I echo what Jeffrey said. That kind of code may be kosher on Oracle, but it is definitely not how you do it on SQL Server.


  4. Shreenivasa S Jagatagar 1 Reputation point
    2021-01-26T00:00:49.923+00:00

    Thanks Jeffrey.
    OK, this simple INSERT can be converted to batch INSERTs as you have shown.

    In our case, the source records are read (we are testing with 14K records).
    For each source record, the process decides what is written to the target table based on the source transaction type, sub-type, source transaction application, source transaction codes, and source transaction location.
    Based on the source transaction and other parameters, it also decides how many target records are created for each source record.
    For each source record it reads 15 other look-up tables to retrieve the data required for the target table; from each of these look-up tables it can read a maximum of 20 records per source record.

    It updates one main target table and can write a maximum of 20 records to the main target table per source transaction.
    Apart from the main target table, it can write to 16 other target tables, depending on the source transaction.

    In this process there is one T-SQL procedure that writes to the main target table; it takes around 19 milliseconds in SQL Server to insert 1 record, while the same procedure in Oracle takes less than 1 millisecond.
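    One common pattern for this situation, sketched here with illustrative names (the real procedure and column list will differ), is to pass all the rows for a source record to the insert procedure in one call via a table-valued parameter, so the per-call overhead is paid once per batch instead of once per row:

```sql
-- Illustrative table type; the real column list would match the main target table.
CREATE TYPE dbo.MainTargetRows AS TABLE (
    CCN                     varchar(6)    NOT NULL,
    SVC_CUR_ERROR_ROLLUP_ID numeric(6, 0) NULL
);
GO
CREATE PROCEDURE dbo.INSERT_MAIN_TARGET_BATCH
    @rows dbo.MainTargetRows READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- One set-based INSERT for the whole batch instead of one call per row.
    INSERT INTO dbo.TESTPERF01 (CCN, SVC_CUR_ERROR_ROLLUP_ID)
    SELECT CCN, SVC_CUR_ERROR_ROLLUP_ID
    FROM @rows;
END;
```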


  5. MelissaMa-MSFT 24,176 Reputation points
    2021-01-26T07:53:30.423+00:00

    Hi @Shreenivasa S Jagatagar ,

    Welcome to Microsoft Q&A!

    Thanks for your details and update.

    When comparing the performance of the same procedure, it is best to compare under exactly the same environment and conditions: the same tables, row counts, operations, statistics, compilation, memory configuration, disk configuration, traces, and so on. Best of all is to compare the execution plans.

    If you would like to improve the performance of this procedure in T-SQL, you could try the options below:

    • Create indexes on the tables
    • Limit the complexity of the statements
    • Update statistics
    • Disable traces, Extended Events sessions, and so on
    • Try a hash join, merge join, or nested loop join hint in queries that join multiple tables
    • Allocate more physical resources
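    A couple of the suggestions above, sketched against the question's test table (the index name and the hint usage are illustrative):

```sql
-- Create an index to support lookups on CCN.
CREATE INDEX IX_TESTPERF01_CCN ON dbo.TESTPERF01 (CCN);

-- Refresh statistics after large data changes.
UPDATE STATISTICS dbo.TESTPERF01;

-- Join hint syntax (use sparingly; the optimizer's choice is usually right):
-- SELECT ... FROM a INNER HASH JOIN b ON a.key = b.key;
```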

    Best regards
    Melissa

