Hang after C# calls ExecuteReader repeatedly to read large amounts of data

Aspire 81 Reputation points
2020-06-23T12:25:39.047+00:00

1. Create a thread and call ExecuteReader:
string conStr = "server=.;uid=sa;pwd=123456;database=itcastdb";
SqlConnection con = new SqlConnection(conStr);
SqlCommand cmd = con.CreateCommand();
con.Open();
cmd.CommandText = "SELECT TOP 50000 FROM table1 LEFT JOIN table2";
SqlDataReader reader = cmd.ExecuteReader();
(Here reader and con are class members; the SELECT statement queries data from two tables with a left join.)
2. Create another thread and use the reader object to read data from the database:
reader.Read();
Each thread ends once a certain amount of data has been fetched; a new thread is then opened to continue reading from the database through the same reader object.
3. If all queried data has been read:
thread.Abort();
reader.Close();
con.Close();
4. There is a Cancel function to interrupt the reader thread (a consolidated sketch of steps 1-4 follows below):
thread.Abort();
cmd.Cancel();    // SqlDataReader has no Cancel method; cancellation belongs on the SqlCommand
reader.Close();
con.Close();
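
For reference, a minimal cancel-safe sketch of the flow described in steps 1-4, using a single worker thread. The class name, join condition, and row handling are placeholders (not from my real code), and a CancellationToken plus cmd.Cancel() stands in for Thread.Abort():

using System.Data.SqlClient;
using System.Threading;

class BatchReader
{
    private readonly SqlConnection con;
    private readonly SqlCommand cmd;
    private SqlDataReader reader;
    private readonly CancellationTokenSource cts = new CancellationTokenSource();
    private Thread worker;

    public BatchReader(string conStr)
    {
        con = new SqlConnection(conStr);
        cmd = con.CreateCommand();
        // placeholder query: column list and join condition are illustrative only
        cmd.CommandText = "SELECT TOP 50000 * FROM table1 LEFT JOIN table2 ON table1.Id = table2.Id";
    }

    public void Start()
    {
        con.Open();
        reader = cmd.ExecuteReader();          // step 1
        worker = new Thread(ReadLoop);         // step 2: one worker thread reads everything
        worker.Start();
    }

    private void ReadLoop()
    {
        try
        {
            while (!cts.Token.IsCancellationRequested && reader.Read())
            {
                // process the current row here
            }
        }
        catch (SqlException) when (cts.Token.IsCancellationRequested)
        {
            // expected if cmd.Cancel() interrupts a blocking Read()
        }
        finally
        {
            reader.Close();                    // steps 3/4: always release reader and connection
            con.Close();
        }
    }

    public void Cancel()                       // step 4, without Thread.Abort()
    {
        cts.Cancel();                          // stop the loop at the next iteration
        cmd.Cancel();                          // ask SQL Server to stop the running command
        worker.Join();
    }
}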

The problem is that after repeating the above steps many times, ExecuteReader hangs and I have to wait more than 10 minutes for it to recover.

Environment:
the database holds more than 80 million rows;
while the select is running, rows are concurrently being inserted into table1.

Any good suggestions?

Azure SQL Database

1 answer

  1. Mike Ubezzi 2,776 Reputation points
    2020-06-24T19:55:51.093+00:00

    Hi @Aspire, It is possible that you have reached the resource limit of max concurrent workers (requests) or max concurrent sessions for the specific DTU- or vCore-provisioned database. Do you have any information from Log Analytics about what is causing ExecuteReader to hang? For a thorough method of investigating the error, please see: Auditing for Azure SQL Database and Azure Synapse Analytics.
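
    As a quick check (a sketch only; the connection string is a placeholder), you can query sys.dm_db_resource_stats in the user database to see whether worker or session usage is approaching 100%:

    using System;
    using System.Data.SqlClient;

    class ResourceCheck
    {
        static void Main()
        {
            // placeholder connection string: point it at the Azure SQL user database itself
            const string conStr = "Server=tcp:yourserver.database.windows.net;Database=yourdb;User ID=...;Password=...;Encrypt=True";
            using (var con = new SqlConnection(conStr))
            using (var cmd = con.CreateCommand())
            {
                // sys.dm_db_resource_stats keeps roughly one hour of 15-second samples
                cmd.CommandText =
                    "SELECT TOP 20 end_time, max_worker_percent, max_session_percent " +
                    "FROM sys.dm_db_resource_stats ORDER BY end_time DESC";
                con.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("{0}: workers {1}%, sessions {2}%",
                            reader.GetDateTime(0), reader.GetDecimal(1), reader.GetDecimal(2));
                    }
                }
            }
        }
    }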

    Given the amount of detail provided, my assumption is that this is a thread-exhaustion issue: either one of the two resources identified above has reached its maximum, or a thread-pool problem is being created by the way you have implemented the logic for step 2. The following resources might be helpful:

    If that information does not help to improve efficiency and thread usage, then please leverage Log Analytics to capture this issue (if Log Analytics is not already enabled) and then investigate the events to identify the root cause.
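
    As one illustration of what thread optimization could look like here (a sketch under my own assumptions, not your code): reuse a single long-lived task that drains the reader in fixed-size batches, instead of creating and aborting a thread for every batch.

    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Threading.Tasks;

    static class BatchedRead
    {
        // Drains an already-open SqlDataReader on one background task, handing rows
        // to the caller in batches of batchSize; no per-batch thread creation or Abort().
        public static Task DrainAsync(SqlDataReader reader, int batchSize,
                                      Action<List<object[]>> onBatch)
        {
            return Task.Run(() =>
            {
                var batch = new List<object[]>(batchSize);
                while (reader.Read())
                {
                    var row = new object[reader.FieldCount];
                    reader.GetValues(row);
                    batch.Add(row);
                    if (batch.Count == batchSize)
                    {
                        onBatch(batch);                        // hand off a full batch
                        batch = new List<object[]>(batchSize);
                    }
                }
                if (batch.Count > 0)
                    onBatch(batch);                            // final partial batch
            });
        }
    }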

    Regards,
    Mike