Geek of All Trades: iSCSI Is the Perfect Fit for Small Environments

Greg Shields

Do you remember the good old days of SCSI? Back then, figuring out your SCSI connections, speeds, acronyms and interfaces practically required a secret decoder ring. Determining whether your server needed Fast SCSI or Ultra SCSI or Ultra2 Wide SCSI or any flavor in-between made working with SCSI disk drives a complicated task.

As a result, more than a few of us threw up our hands in frustration. In those days, we often saved the SCSI work for outside consultants, or found ourselves favoring slower disks that operated under frameworks we understood.

Things have changed a lot since then, thanks to greater levels of standardization. Today, we can find SCSI pretty much everywhere. Our server manufacturers now deliver their equipment with internally preconfigured SATA or serial-attached SCSI (SAS) drives. No more worrying about acronyms, connectors or secret decoder rings. Just add your disks and go.

However, this increasing standardization of direct-attached SCSI still doesn’t get around the fact that its storage devices must be directly attached to a server. Using traditional direct-attached SCSI, there are no clean ways to connect multiple servers to centralized storage over your existing network.

This need for a networkable solution is why iSCSI exists. The “i” in iSCSI replaces SCSI’s multitude of connections with everyday, run-of-the-mill Ethernet cabling. By consolidating SCSI’s per-server storage into a single and shared device, and then connecting servers to storage through your existing network, your small environment can make better use of storage as you discretely provision it to servers and file shares as needed.

Replacing SCSI’s physical connections with copper Ethernet makes managing the physical layer easier. However, correctly incorporating iSCSI into your small environment still requires some techniques and a few extra protocols that may not be immediately obvious. With low cost and easy management, iSCSI can be a perfect fit for the storage needs of the Jack-of-all-Trades IT professional. Read on to learn more so you can use it successfully.

Goodbye Physical Connections, Hello Logical

Understanding iSCSI’s connection potential is best done by examining the options. Take a minute to call up a remote console on one of your favorite servers. Once there, open its iSCSI Initiator administrative tool. This console was given a much-needed facelift in Windows Server 2008 R2, most notably adding the Quick Connect dialog box you’ll see in Figure 1. You can easily create a basic connection between this server and an exposed iSCSI LUN using the Quick Connect dialog box.

Figure 1 The iSCSI Initiator administrative tool.

At this point, a few definitions might be helpful. As in the physical world, every iSCSI connection requires two devices. At your favorite server is what is called the iSCSI Initiator. The iSCSI Initiator represents the “client” that will be requesting services from a set of iSCSI disks.

At the other end is the iSCSI Target. The iSCSI Target has the disks to which you want to connect. It operates as the “server” that services requests from one or more client initiators. Once the connection between a target and an initiator is established, you can discover and initialize one or more LUNs as disk volumes on your server.
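If it helps to see those two roles spelled out, here's a minimal conceptual sketch in Python. The class names are purely illustrative and don't correspond to any real Windows or storage-vendor API:

```python
# Conceptual sketch of the iSCSI roles; illustrative only, not a real API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Lun:
    number: int   # logical unit number exposed by the target
    size_gb: int  # capacity carved out on the storage device

@dataclass
class Target:
    """The 'server' side: owns the disks and services initiator requests."""
    iqn: str  # e.g. a made-up iqn.2011-01.com.example:storage1
    luns: List[Lun] = field(default_factory=list)

@dataclass
class Initiator:
    """The 'client' side: logs in to a target and sees its LUNs as local disks."""
    iqn: str
    sessions: List[Target] = field(default_factory=list)

    def connect(self, target: Target) -> List[Lun]:
        # After login, each exposed LUN can be initialized as a disk volume.
        self.sessions.append(target)
        return target.luns
```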

To use iSCSI, you’ll obviously require a device that supports the iSCSI protocol. This may be an existing SAN on your network. It can also be a regular Windows server that runs an instance of iSCSI Target software.

Microsoft has its own iSCSI Software Target, though that software is intended for use atop Windows Storage Server. Third parties also distribute software that installs as a service on a regular Windows server. That service exposes the server’s direct-attached disks to locations anywhere on your network.

Once installed, the next step always starts at the iSCSI Target. Prior to connecting any “client” computers to this device’s exposed LUNs, you’ll first need to create and expose those LUNs to the network. The details of this process vary greatly depending on your device and software, so consult your iSCSI Target’s manual. At the very least, you’ll have to carve out a quantity of disk space as a LUN, connect that LUN to the proper network and network interfaces, and add any security or authentication options.

Once you’ve completed this first step, creating a basic connection requires only a few clicks. First, enter the IP address or DNS name of the server or device that runs the iSCSI Target software into the Target field shown in Figure 1 and click Quick Connect. If you’ve correctly created and exposed your LUNs to this server, you’ll see them appear in the list of Discovered targets.

Figure 1 shows four discovered targets, three of which have been connected. Discovered targets always appear first in an Inactive state. This ensures that you can connect to them only when you’re ready.
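If you prefer scripting to clicking, Windows also includes the iscsicli.exe command-line tool, which can perform the same discovery and login. Here's a minimal sketch driving it from Python; the portal address and target IQN are placeholders for your own environment:

```python
# Scripted equivalent of the Quick Connect flow, via the built-in iscsicli.exe.
import subprocess

TARGET_PORTAL = "192.168.1.50"                     # placeholder: IP of your iSCSI Target
TARGET_IQN = "iqn.1991-05.com.microsoft:storage1"  # placeholder: an exposed target's IQN

def iscsicli(*args: str) -> str:
    """Run one iscsicli.exe command and return its output."""
    result = subprocess.run(["iscsicli", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# 1. Point the initiator at the portal (what the Discovery tab does).
iscsicli("QAddTargetPortal", TARGET_PORTAL)

# 2. List the targets the portal exposed; new ones appear Inactive.
print(iscsicli("ListTargets"))

# 3. Log in to one target (what the Connect button does).
iscsicli("QLoginTarget", TARGET_IQN)
```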

Figure 2 The Connect to Target wizard.

Select a target and click the Connect button. You’ll see a window similar to Figure 2. For most basic connections, ensure that the top checkbox is marked and click the OK button. Marking the top checkbox instructs the system to automatically restore the connection after every reboot.

There’s also an Advanced button in this window. As you’ll discover in a minute, all but the most basic connections require a set of advanced configurations, such as identifying the Initiator IP and Target portal IP.

For your basic connection, there are two steps remaining to prepare your drive. First, select the iSCSI Initiator Volumes and Devices tab and click the Auto Configure button. This step automatically configures all available devices, further binding them so they’re ready for use at the next system restart.

After this step, you’ll find the disk visible within Disk Management. Simply bring it online, initialize it, and format it (if necessary). Your disk is now available for use, just as if it were a direct-attached disk.
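Those Disk Management steps can be scripted, too, using the built-in diskpart utility. The sketch below is a hedged example only: the disk number is a placeholder, and because partitioning and formatting are destructive, confirm the disk number in Disk Management before running anything like it:

```python
# Bring a newly connected iSCSI disk online and format it via diskpart.
# CAUTION: 'select disk 1' is a placeholder -- verify the disk number in
# Disk Management first, because create/format are destructive.
import os
import subprocess
import tempfile

DISKPART_SCRIPT = """\
select disk 1
online disk noerr
attributes disk clear readonly
create partition primary
format fs=ntfs quick label=iSCSI01
assign
"""

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(DISKPART_SCRIPT)
    script_path = f.name

try:
    # diskpart /s runs the commands in the script file, top to bottom.
    subprocess.run(["diskpart", "/s", script_path], check=True)
finally:
    os.remove(script_path)
```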

MPIO/MCS: Must-Have High Availability and Load Balancing

While connecting iSCSI disks to servers over your existing network is great for pervasive connectivity, your network’s interconnections can and will create points of failure. Someone could accidentally unplug a cable, misconfigure a router, or cause any of the myriad other issues that happen on a traditional network. Thus, any production use of iSCSI really requires redundant connections.

The seemingly easy answer might be to use NIC “teaming” like you do with your production network connections. However, classic NIC teaming for iSCSI connections is neither supported nor considered a best practice. Don’t do it.

Connecting with iSCSI instead leverages its own set of protocols for high availability and load balancing. You’ll also find that iSCSI’s Multipath Input/Output (MPIO) and Multiple Connections per Session (MCS) protocols are superior in many ways to classic NIC teaming, as each offers a greater range of failover and load-balancing capabilities.

MPIO is a much different protocol from MCS. Using MPIO requires a Device-Specific Module (DSM) on the server that runs the iSCSI Initiator. Microsoft includes its own “default” DSM with the Windows OS, installed as the Multipath I/O feature within Server Manager.

Many storage devices can use that DSM with no additional software installation. Others require their own specialized DSM from the manufacturer. Consult your device manufacturer to determine if special driver installation is required or if the in-box Microsoft DSM is acceptable.

MCS requires no such DSM installation to the server. However, in order to use MCS, your storage device must support its protocol. Not all devices are MCS-enabled, which means you’ll need to do a little research to determine which protocol is appropriate for your situation.

While the two protocols differ in their underlying code, managing their multipathing is fairly similar. Both MPIO and MCS provide a way to create multiple, parallel connections between a server and an iSCSI target. Most of what’s needed for either is to specifically identify the NICs and networks you want to use.

Because MCS involves fewer steps than MPIO, I’ll show its setup process; what you learn here will translate well to MPIO. Remember that the earlier basic-connection example attached a server to storage through a single network connection. That connection existed between a single IP address on the server and a single IP address on the storage device.

Figure 3 Two servers, each with four network interfaces, connect with four network interfaces on a storage device.

The “M” in MCS refers to augmenting that basic setup with multiple connections. Each connection pairs a network card and its associated IP address on the initiator with one on the target, and each side can use several. Figure 3 shows how this might look when two servers, each with four network interfaces and associated IP addresses, connect to four network interfaces and IP addresses on the storage device.
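To make that layout concrete, here's a small Python sketch of the NIC-to-portal pairings one of those servers would hold; all of the addresses are invented for illustration:

```python
# Illustrative only: the four NIC-to-portal pairings one server in
# Figure 3 would end up with. All IP addresses are made up.
initiator_ips  = ["10.0.1.11", "10.0.1.12", "10.0.1.13", "10.0.1.14"]
target_portals = ["10.0.1.51", "10.0.1.52", "10.0.1.53", "10.0.1.54"]

# One iSCSI session, four parallel connections: each connection binds a
# local NIC to one portal on the storage device (3260 is the default
# iSCSI port).
connections = list(zip(initiator_ips, target_portals))
for source_ip, portal_ip in connections:
    print(f"connection: {source_ip} -> {portal_ip}:3260")
```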

Figure 4 The MCS configuration console.

To manage MCS, select one of the targets in Figure 1, then click the Properties button, then the MCS button. You’ll see a console that looks similar to Figure 4. The earlier example’s “basic” setup configures a single connection between the Source Portal on the local server and the Target Portal on the storage device.

Figure 5 Advanced Settings for adding a connection.

To add a connection, click the Add button followed by the Advanced button. This brings you to the Advanced Settings console shown in Figure 5. Here, designate the IP address of a second local NIC in the Initiator IP box, along with the IP address of a second remote target portal on your SAN.

If no additional target portal IP addresses are available here, you’ll need to discover them on the main console’s Discovery tab. Repeat this process for each initiator and target portal combination.

MCS Policies Define Behaviors

By creating these multiple connections, you provide more than one path (both physical and logical) through which you can transfer your storage network traffic. These multiple paths function as failover targets should you lose a connection. They can also load balance traffic, adding greater network capacity with each new connection.

Yet with these multiple connections comes the need to define how failover and load balancing should behave. With MCS, you can configure five policies (illustrated in the sketch after this list):

  1. Fail Over Only: This policy does no traffic load balancing. Traffic uses a single path, with the other paths remaining in standby mode until the active path is lost.
  2. Round Robin: This is the simplest policy that includes load balancing. Using this policy, traffic is rotated among available paths in order.
  3. Round Robin with a subset of paths: This policy works similarly to Round Robin, except one or more paths are kept out of load balancing. These paths are used as standbys in the event of a primary path failure.
  4. Least Queue Depth: Also similar to Round Robin, this policy load balances traffic by identifying and using the path with the fewest queued requests.
  5. Weighted Paths: This policy lets you weight paths in situations where some may enjoy greater capacity than others. Traffic is balanced among paths as determined by the assigned weight.
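To make the behavioral differences concrete, here's a small Python simulation of how each policy might choose a path for the next request. The real selection logic lives inside the initiator, so treat this strictly as illustration:

```python
# Toy simulation of the five MCS policies picking a path for the next I/O.
import itertools
import random

paths = ["path-A", "path-B", "path-C", "path-D"]

# 1. Fail Over Only: always the active path; standbys wait for a failure.
def fail_over_only():
    return paths[0]

# 2. Round Robin: rotate through every path in order.
_rr = itertools.cycle(paths)
def round_robin():
    return next(_rr)

# 3. Round Robin with a subset: rotate through the primaries only;
#    the remaining paths stand by for failover.
_rr_subset = itertools.cycle(paths[:2])
def round_robin_subset():
    return next(_rr_subset)

# 4. Least Queue Depth: pick the path with the fewest queued requests.
queue_depth = {p: random.randint(0, 8) for p in paths}  # pretend queue lengths
def least_queue_depth():
    return min(paths, key=queue_depth.get)

# 5. Weighted Paths: distribute traffic according to each path's assigned weight.
weights = {"path-A": 4, "path-B": 2, "path-C": 1, "path-D": 1}
def weighted_paths():
    return random.choices(paths, weights=[weights[p] for p in paths])[0]

for pick in (fail_over_only, round_robin, round_robin_subset,
             least_queue_depth, weighted_paths):
    print(f"{pick.__name__}: {pick()}")
```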

Because MCS operates on a per-session basis, each individual session (and its connections) can have its own MCS policy. Pay careful attention to your selected policy, as it can have a dramatic effect on the overall performance and availability of your storage connections.

The Perfect Fit for Small Environments

There’s a saying about the iSCSI Initiator console—“work from left to right.” Looking back at Figure 1, you can see six different tabs: Targets, Discovery, Favorite Targets, Volumes and Devices, RADIUS, and Configuration.

While many connections don’t require edits to the RADIUS or Configuration settings, creating connections with this console works best when you start with Targets and continue configuring through Volumes and Devices. Conversely, getting rid of a connection involves reversing these steps and working from right to left.

While these extra steps in configuring iSCSI’s high-availability options might seem cumbersome, remember you only need to do them as you’re adding new disks to servers. Once connected, those disks will reconnect with every server reboot and reconfigure automatically should a path fail.

Because of iSCSI’s reliance on existing networking infrastructure, the way it augments traditional SCSI can be a perfect fit for a small environment. An iSCSI SAN is a relatively inexpensive purchase, nowhere near the cost of yesteryear’s refrigerator-sized SAN chassis. And because it doesn’t demand the arcane knowledge other storage media require, iSCSI’s ease of management makes it a great fit for the Jack-of-all-Trades IT professional.

Greg Shields, MVP, is a partner at Concentrated Technology. Get more of Shields’ Jack-of-all-Trades tips and tricks at ConcentratedTech.com.
