Dfs and Load Sharing

Dfs load sharing occurs on the client side, in contrast to Windows 2000 Network Load Balancing (formerly called Windows Load Balancing Service, or WLBS), which takes place on the server side. Dfs takes advantage of the share redundancy provided by replica sets to distribute demand.

To take advantage of load sharing, the root and child nodes of a Dfs link must be backed by more than one physical server. Dfs provides a degree of load balancing because clients randomly select the physical server to connect to from the list of replicas returned by the Dfs server. However, Dfs does not:

  • Take into account the number of client sessions maintained by a replica.

  • Take into account the length of sessions maintained by clients.

  • Use a DNS-style round-robin selection criterion.
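The selection behavior described above can be sketched as follows. This is a minimal illustration, not the actual client code; the share names are hypothetical.

```python
import random

def choose_replica(referrals):
    """Pick a replica the way a Dfs client does from a referral list:
    a uniform random choice, with no regard for how many sessions a
    replica already holds, how long those sessions last, or any
    round-robin ordering."""
    return random.choice(referrals)

# Hypothetical replica set backing one Dfs link.
replicas = [r"\\SERVER1\Public", r"\\SERVER2\Public", r"\\SERVER3\Public"]
print(choose_replica(replicas))
```

Because the choice is random rather than load-aware, a busy replica is just as likely to be selected as an idle one; over many clients the demand merely tends to even out statistically.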

If a root or link appears to be overused, you can flush the PKT (partition knowledge table) for a Windows 2000–based client by using Dfsutil. This forces the client to request a new referral. For clients running operating systems other than Windows 2000, you must restart the client to flush its cache.
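For example, a Dfsutil session on the client might look like the following. The switch names shown here are from later Dfsutil releases and may differ in the version shipped with your resource kit; check `dfsutil /?` for the exact syntax.

```
C:\> dfsutil /pktinfo     (display the cached partition knowledge table)
C:\> dfsutil /pktflush    (flush the PKT, forcing a new referral request)
```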

For more information about how Windows 2000–based clients use the referrals from the Dfs server to randomize replica selection, see "How Dfs Works" earlier in this chapter.

Revision Levels and Load Sharing
Dfs clients and servers come in two versions. Clients and servers that are running Windows NT 4.0, Windows 98, and Windows 95 talk revision level 2. Clients and servers that are running Windows 2000 talk revision level 3. Clients and servers negotiate and converse at the highest common Dfs revision level. You can see Dfs revision levels in network packets by using Network Monitor.
This relates to load sharing because Dfs servers send the referral list (the names of the physical shares underlying the requested Dfs namespace) to revision level 2 clients in a fixed order, relying on each client to randomize the list in its own unique order.
When a revision level 3 client accesses the Dfs namespace, the Dfs server performs the randomization process. The referral list is split into two parts, one composed of replicas in the same site as the client and one containing out-of-site replicas. Both halves of the list are shuffled into random order by the server.
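The server-side ordering for revision level 3 clients can be sketched as follows. This is an illustrative model only, assuming hypothetical server and site names; the real Dfs server works from Active Directory site information.

```python
import random

def build_referral_list(replicas, client_site):
    """Sketch of revision level 3 referral ordering: split the replica
    list into same-site and out-of-site halves, shuffle each half
    independently, and return in-site replicas first."""
    in_site = [share for share, site in replicas if site == client_site]
    out_of_site = [share for share, site in replicas if site != client_site]
    random.shuffle(in_site)
    random.shuffle(out_of_site)
    return in_site + out_of_site

# Hypothetical replica set spanning two sites.
replicas = [
    (r"\\HQ1\Public", "Headquarters"),
    (r"\\HQ2\Public", "Headquarters"),
    (r"\\BRANCH1\Public", "Branch"),
]
print(build_referral_list(replicas, "Headquarters"))
```

A client in the Headquarters site always sees the two in-site replicas ahead of the Branch replica, but the order within each half varies from referral to referral, which is what spreads the load among equivalent replicas.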