StanMalachowski-8077 asked

Max Memory Setting on SQL Server


I am having some very strange behaviour with a two-tier application with a SQL Server 2017 backend. The max server memory setting for SQL is the default, 2147483647 MB, which seems a crazy value, but from what I understand, this version of SQL Server should be managing memory dynamically. We have had a performance problem recently where update transactions from users seem to block for an unusually long time - 30 sec to 2 minutes - when previously this took perhaps 5 seconds.

I have collected high-resolution metrics on the system and cannot spot anything obvious. However, the SQL Server Memory Manager\Free Memory (KB) counter is swinging up and down every few minutes, though the peak-to-trough range is only about 4 GB down to around 120 MB. The system has about 768 GB of memory and 40 cores. It has been difficult, but some direct observations of the slow saves correlate with the troughs in free memory. The system does not appear to be hard faulting (page reads/sec), but page faults continually toggle from tens (i.e. really low) to thousands every 5 seconds or so.

My questions are:

  1. Does max server memory actually need to be set, and what would be a sensible value?

  2. Are there any other metrics I could consider to see if there is a memory management problem?

I have attached an image of the trace for free mem (converted from KB to MB). The circle shows where a user reported slow performance.




mem.png (28.0 KiB)


1 Answer

ErlandSommarskog answered

The theory is that you can leave "max server memory" at its default, and SQL Server will manage the memory for you. That is, it will grab as much as it can, and then yield memory if there is memory pressure on the machine, that is, some other process wants the memory.

In practice, many DBAs always set this value to make sure that the OS has room to breathe. And if there are other services or applications running on the machine, there is every reason to cap SQL Server - even more so if there are multiple instances of SQL Server running on the machine.

If SQL Server is the only thing running on this box, I would probably set max server memory to 700000 (MB), although there is not much science behind that number.
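For instance, capping the instance at 700000 MB could be done with sp_configure like this (the exact value is a judgment call; the idea is to leave headroom for the OS and anything else on the box):

```sql
-- "max server memory" is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap SQL Server at 700000 MB (~700 GB), leaving ~68 GB for the OS
EXEC sp_configure 'max server memory (MB)', 700000;
RECONFIGURE;
```

The change takes effect immediately; no restart is needed.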

You mention page faults. But are they page faults in general, or specifically in the SQL Server process? SQL Server should preferably not page-fault, and there is a cure for this: grant the service account for SQL Server the Windows permission "Lock pages in memory". I think that with this setting it is even more important to set "max server memory".
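You can check both things from inside SQL Server with a sketch like this against the sys.dm_os_process_memory DMV (if locked_page_allocations_kb is 0, "Lock pages in memory" is not in effect):

```sql
-- Memory and page-fault figures for the SQL Server process itself
SELECT physical_memory_in_use_kb,   -- working set of the SQL Server process
       locked_page_allocations_kb,  -- > 0 only when "Lock pages in memory" is active
       page_fault_count             -- faults attributed to this process
FROM sys.dm_os_process_memory;
```

Sampling page_fault_count twice, a few minutes apart, tells you whether the faulting you see at the OS level is coming from SQL Server or from something else on the box.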

But your real problem seems to be that some update operations are taking longer. I don't think these considerations about memory are related to that problem; rather, it is likely to be due to a change in query plans. Are you on SQL 2016 or later, and have you enabled Query Store for the database? In that case you have a wealth of material to dig into.
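As a starting point, here is a sketch: enable Query Store and then look for queries with several plans and a high average duration (YourDb is a placeholder for your database name; avg_duration in the Query Store views is in microseconds):

```sql
-- Turn on Query Store for the database (placeholder name)
ALTER DATABASE YourDb SET QUERY_STORE = ON;
ALTER DATABASE YourDb SET QUERY_STORE (OPERATION_MODE = READ_WRITE);

-- Top queries by average duration, with how many plans each has had
SELECT TOP (20)
       q.query_id,
       qt.query_sql_text,
       COUNT(DISTINCT p.plan_id)     AS plan_count,
       AVG(rs.avg_duration) / 1000.0 AS avg_duration_ms
FROM   sys.query_store_query        AS q
JOIN   sys.query_store_query_text   AS qt ON qt.query_text_id = q.query_text_id
JOIN   sys.query_store_plan         AS p  ON p.query_id       = q.query_id
JOIN   sys.query_store_runtime_stats AS rs ON rs.plan_id      = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY avg_duration_ms DESC;
```

A query with plan_count > 1 and an erratic duration is a classic sign of a plan change; the "Queries With High Variation" and "Regressed Queries" reports in SSMS show the same data graphically.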


Hi Erland,

Thanks very much. What you state is what I have generally felt about max memory. It is normally a number I would see set to do as you describe - to avoid squeezing the operating system for memory. I do actually suspect this is happening, but I am missing a few critical counters, so I will try again. It does seem, however, that MS claims SQL Server should manage this, but their documentation is cryptic.

You may be right about query plans, but the event is not consistent and seems unrelated to load on the system. We are on SQL 2017, so I will look into Query Store, as well as "Lock Pages in Memory".

Thanks again,

