SharePoint 2010 Service Application Load Balancer
I’ve recently been asked some questions about the internal load balancer for service applications in SharePoint 2010, which I’ll summarize here.
What is the Service Application Load Balancer?
To simplify the installation of multi-machine server farms, SharePoint provides a basic load balancer that round-robins requests across the Web service endpoints of service applications.
This provides out-of-the-box load balancing and fault tolerance for SharePoint service applications without requiring the administrator to be familiar with the intricacies of external load balancing solutions.
If a deployment requires more advanced load balancing or fault tolerance capabilities, an administrator can opt to use an external load balancer.
How does it work?
A Service Application Proxy typically requests an endpoint for a connected Service Application from the load balancer, which is a software component that executes in the same process and application domain as the proxy.
The load balancer maintains a complete list of available endpoints for each Service Application in a cache, and simply returns the next available endpoint to the proxy in a round-robin fashion.
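Conceptually, the selection step is a plain round-robin over the cached endpoint list. The sketch below is illustrative only; the class and method names are my own, not SharePoint's internal API:

```python
# Illustrative sketch only: names are hypothetical, not SharePoint internals.
class RoundRobinBalancer:
    def __init__(self, endpoints):
        # Cached list of available endpoints for one service application.
        self._endpoints = list(endpoints)
        self._next = 0

    def begin_operation(self):
        # Hand the proxy the next endpoint in the rotation.
        endpoint = self._endpoints[self._next % len(self._endpoints)]
        self._next += 1
        return endpoint

balancer = RoundRobinBalancer(["https://app1:32843/svc", "https://app2:32843/svc"])
print(balancer.begin_operation())  # https://app1:32843/svc
print(balancer.begin_operation())  # https://app2:32843/svc
print(balancer.begin_operation())  # wraps around to https://app1:32843/svc
```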
How does it maintain the list of available service application endpoints?
For communication with service applications that are local to a SharePoint server farm, the load balancer looks into the configuration database to discover the available endpoints for a service application.
In this case, the endpoint cache is updated whenever the list of available service application endpoints in the local configuration database changes. The delay in updating the cache is typically less than 1 minute.
For communication with service applications in a remote server farm (i.e., federated services), the load balancer uses the Topology web service (a.k.a. the “Application Discovery and Load Balancer Service”) to discover the available endpoints for a service application.
In this case, the list of service application endpoints is periodically retrieved from the Topology web service on the remote farm and copied into the local configuration database as an intermediate cache.
This periodic retrieval is performed by the “Application Addresses Refresh” timer job. By default, this timer job executes every 15 minutes.
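The refresh behavior can be sketched as a simple interval-gated cache. The structure and names below are assumptions for illustration; in SharePoint the fetched list is actually written to the local configuration database by the timer job, not held in process memory:

```python
# Sketch of the periodic refresh performed by the "Application Addresses
# Refresh" timer job. Names and structure are illustrative assumptions.
REFRESH_INTERVAL = 15 * 60  # seconds; the timer job's default schedule

class RemoteEndpointCache:
    def __init__(self, query_topology_service):
        # query_topology_service stands in for a call to the remote farm's
        # Topology web service returning its current endpoint list.
        self._query = query_topology_service
        self._endpoints = None
        self._last_refresh = None

    def endpoints(self, now):
        # Re-query the remote farm only when the refresh interval elapses.
        if self._last_refresh is None or now - self._last_refresh >= REFRESH_INTERVAL:
            self._endpoints = self._query()
            self._last_refresh = now
        return self._endpoints

remote = ["https://svc1/topology.svc"]
cache = RemoteEndpointCache(lambda: list(remote))
print(cache.endpoints(now=0))    # initial fetch: one endpoint
remote.append("https://svc2/topology.svc")
print(cache.endpoints(now=60))   # still the stale single-entry cache
print(cache.endpoints(now=900))  # interval elapsed: both endpoints
```

Note the consequence of the interval: changes on the remote farm can take up to one full refresh period to become visible to the client farm.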
How does it handle failures?
The Service Application Proxy may report an endpoint failure to the load balancer, in which case the load balancer will remove the specified endpoint from the round-robin rotation for some period of time.
By default, this period is 10 minutes (configurable using the “BadListPeriod” parameter of the Set-SPTopologyServiceApplicationProxy cmdlet).
If only one endpoint remains, it will not be removed from the rotation, even if a proxy reports a failure for that endpoint.
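Putting the failure handling together: a reported endpoint leaves the rotation for the bad-list period, but the rotation is never allowed to become empty. This is a hedged sketch with hypothetical names; the last rule is approximated by falling back to the full list when everything is bad-listed:

```python
# Sketch of the "bad list" behavior; names are illustrative, not
# SharePoint internals.
BAD_LIST_PERIOD = 10 * 60  # seconds; mirrors the BadListPeriod default

class FailureAwareBalancer:
    def __init__(self, endpoints):
        self._endpoints = list(endpoints)
        self._bad_until = {}  # endpoint -> time it re-enters the rotation
        self._next = 0

    def report_failure(self, endpoint, now):
        # The proxy reported a failed call; bad-list the endpoint.
        self._bad_until[endpoint] = now + BAD_LIST_PERIOD

    def begin_operation(self, now):
        good = [e for e in self._endpoints if self._bad_until.get(e, 0) <= now]
        # Approximates "the last endpoint is never removed": if every
        # endpoint is bad-listed, fall back to the full rotation.
        pool = good or self._endpoints
        endpoint = pool[self._next % len(pool)]
        self._next += 1
        return endpoint

lb = FailureAwareBalancer(["https://app1/svc", "https://app2/svc"])
lb.report_failure("https://app1/svc", now=0)
print(lb.begin_operation(now=1))    # https://app2/svc (app1 is bad-listed)
print(lb.begin_operation(now=2))    # https://app2/svc (only endpoint left)
print(lb.begin_operation(now=601))  # https://app1/svc (period elapsed)
```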
Is the Topology Web service a single point of failure for the load balancer?
No. The Topology web service application proxy is “self-balancing”. In other words, it uses the same round-robin load balancing algorithm to provide fault tolerance for itself.
When an administrator creates a connection to a remote (federated) service application, the connection information includes one of the Topology web service application endpoints (selected at random) from the remote server farm. I’ll call this the “bootstrap” node.
The bootstrap node is used to query the complete list of Topology web service application endpoints, which are then cached in the local SharePoint configuration database and used to round-robin all future requests to the Topology web service application.
This cached list of Topology web service application endpoints is then updated periodically just like any other federated service application.
So, the bootstrap node is only needed one time when an administrator creates a new connection (Service Application Proxy) to a remote service application.
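The self-balancing behavior can be sketched as follows; the bootstrap endpoint seeds the cache exactly once, and every later refresh goes through the round-robin rotation itself. All names here are illustrative assumptions:

```python
# Hypothetical sketch of the self-balancing Topology web service proxy.
class SelfBalancingTopologyProxy:
    def __init__(self, bootstrap_endpoint, query):
        # query(endpoint) stands in for asking the Topology web service at
        # `endpoint` for the remote farm's complete endpoint list.
        self._query = query
        self._cached = query(bootstrap_endpoint)  # bootstrap used only here
        self._next = 0

    def _pick(self):
        endpoint = self._cached[self._next % len(self._cached)]
        self._next += 1
        return endpoint

    def refresh(self):
        # Refreshes round-robin over the cached list, so after bootstrap
        # no single Topology endpoint is a point of failure.
        self._cached = self._query(self._pick())
        return list(self._cached)

farm = ["https://svc1/topology.svc", "https://svc2/topology.svc"]
proxy = SelfBalancingTopologyProxy(farm[0], lambda _endpoint: list(farm))
farm.append("https://svc3/topology.svc")  # the remote farm grows
print(proxy.refresh())  # the cache now holds all three endpoints
```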
What are the limitations of the load balancer?
There is an extreme case in which connectivity to a federated service farm can be lost: replacing all of the servers in the remote farm within a single cache refresh interval (i.e., before the “Application Addresses Refresh” timer job can execute).
To hit this case, you would need to bring down every server hosting the Topology web service application in the service farm at once, and then replace them all with servers that have different addresses. In other words, the entire farm would have to move to a new set of servers in one step rather than gradually over time.
In this case, none of the Topology web service endpoints cached by the client farm would be valid, and the client farm would not be able to acquire the new list of endpoints (because it needs at least one valid endpoint to query for that list).
To avoid this situation, keep at least one of the Topology web servers in the service farm running long enough for client farms to update their cache with at least one of the new Topology web service endpoints; at that point the old server can safely be removed. Alternatively, give at least one of the new servers the same name as one of the servers it replaces.
Another limitation of the Service Application load balancer is that the list of available endpoints is maintained separately in the application domain of each caller, and there is no coordination between callers.
This means that each proxy must independently discover that an endpoint is unavailable for some reason, which may translate to one failure per caller application domain (e.g., client application pool process) when a given server goes down unexpectedly.
This also means that it is theoretically possible for some clients to fall into lock-step and send requests to the same servers in succession, achieving fault tolerance but not the most effective load balancing.
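A tiny illustration of that lock-step effect, assuming two callers whose independent rotations happen to start from the same position over an identically ordered endpoint list:

```python
# Two callers with independent round-robin state but identical endpoint
# order can keep targeting the same server at the same time.
class Rotation:
    def __init__(self, endpoints):
        self._endpoints = list(endpoints)
        self._next = 0

    def pick(self):
        endpoint = self._endpoints[self._next % len(self._endpoints)]
        self._next += 1
        return endpoint

endpoints = ["app1", "app2", "app3"]
caller_a = Rotation(endpoints)  # e.g., one application pool process
caller_b = Rotation(endpoints)  # another, with its own private state
for _ in range(3):
    # Nothing coordinates the two rotations, so they stay in lock-step.
    print(caller_a.pick() == caller_b.pick())  # True every iteration
```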