ADO.NET: the misunderstood “Connection Lifetime” managed pooler connection string keyword.
I find that we have not done a good job of explaining the “knobs” used to tweak the managed connection pool (the one used by SqlClient and OracleClient), and the worst knob definitely has to be the “Connection Lifetime” keyword. First of all, the name is bad. Really, really bad. In Whidbey we are renaming it to Load Balance Timeout to make it clearer, but we need to keep backward compatibility, so Connection Lifetime will continue to work. The second problem is that it does something really bad: it throws away connections that should have been pooled.
I have found that it is really hard to explain what this keyword does. Here is some pseudo code for what happens when a connection is closed and handed back to the pool:

    If the time the connection has been open is greater than Connection Lifetime
        Throw the connection away
    Else
        Put the connection back in the pool
This is the _only_ time that we will check Connection Lifetime. We do not check the lifetime when the connection is idle in the pool and we certainly don't use this value to determine how long it can stay in there.
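To make the rule concrete, here is a small Python sketch of that check-in logic. The names (`PooledConnection`, `check_in`) are illustrative only, not the actual SqlClient internals; the point is that the lifetime is compared exactly once, at close time.

```python
import time

class PooledConnection:
    """Stand-in for a physical connection; records when it was opened."""
    def __init__(self, opened_at):
        self.opened_at = opened_at

def check_in(conn, pool, lifetime_seconds, now=None):
    """The Connection Lifetime check runs here, at Close() -- never while
    the connection sits idle in the pool."""
    now = time.time() if now is None else now
    if now - conn.opened_at > lifetime_seconds:
        # The connection has outlived Connection Lifetime: discard it,
        # even though it is a perfectly good connection.
        return False
    pool.append(conn)  # still young enough: back into the pool
    return True

# With Connection Lifetime=120 seconds, a connection closed 130 seconds
# after it was opened is discarded; one opened 60 seconds ago is pooled.
pool = []
old = PooledConnection(opened_at=0)
young = PooledConnection(opened_at=70)
check_in(old, pool, lifetime_seconds=120, now=130)    # discarded
check_in(young, pool, lifetime_seconds=120, now=130)  # pooled
```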
As you can see, we are throwing away perfectly good connections. This will certainly affect performance, since the next time we need a connection we will have to go to the server to create a new one; in effect, you are telling us to turn pooling off once Connection Lifetime has been reached. This is a terrible idea for most applications, and I would highly recommend not using this connection string keyword _unless_ you are connecting to a cluster _and_ you understand the trade-offs.
This brings us to the design discussion: what exactly were we trying to solve when we came up with this knob? Let’s look at this scenario:
You have a cluster with two servers currently up. We start running our middle-tier applications against this cluster and eventually find that the load is too high, so we add a third server and expect the load to be distributed across it. When we look at the load a few minutes later, we see that the third server is sitting idle while the first two servers are overworked. What happened?
Pooling happened. The middle-tier applications are caching their connections and never opening new ones. Connection Lifetime is the knob that allows this scenario to work: by throwing away perfectly good connections we force the middle-tier applications to create new ones, and these new connections will be load balanced onto the new server. It is up to the developer to fine-tune how much performance they are willing to give up versus how important it is for the application to be load balanced.
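If you are in that clustered scenario and accept the trade-off, the keyword goes in the connection string like any other. A hedged example (server and database names here are placeholders; the value is in seconds):

```
Data Source=myClusterServer;Initial Catalog=myDatabase;
Integrated Security=SSPI;Connection Lifetime=120
```

With this string, any connection that has been open for more than 120 seconds is discarded at Close() instead of being pooled, giving new servers in the cluster a chance to pick up the replacement connections.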
Summary: Do not use Connection Lifetime unless you are connected to a cluster and understand the trade offs.
Rambling out. Standard Disclaimer: This post is provided “AS IS” and confers no rights.