Geek of All Trades: The Troubleshooting Singularity

Advances in technology could have a huge impact on how you do your job—and even the very nature of your job itself.

By Greg Shields

Futurist Ray Kurzweil is credited with popularizing the idea of the technological singularity. In his vision of the future, the pace of technological growth becomes essentially instantaneous. Technology becomes capable of developing faster, better and more powerful iterations of itself. This cycle repeats at a pace humanity can no longer predict—or even comprehend.

Other futurists believe singularities have happened before. Generalizing Kurzweil’s original theory to the less dramatic explanation of “a sudden growth in any technology … implied by a long-term pattern of accelerating change,” you can easily see how singularities have already occurred in our past.

The introduction and application of science to medicine during the Dark Ages; both the agricultural and industrial revolutions; the recognition and continued revalidation of Moore’s Law in computing—you could classify all of these as singularities. While none represents the proverbial supercomputer that takes over the world à la Terminator’s Skynet, they all represent fundamental changes in how society approaches and resolves certain problems.

With these concepts as a frame of reference, there is another singularity taking shape right now in IT. That singularity involves an evolution in how we manage our computer systems—or, more specifically, how we fix them when they break.

Moore’s Law predicts a regular doubling of the number of transistors that can fit on a piece of hardware. Related observations suggest that network capacity (Butters’ Law of Photonics) and storage cost per unit of information (Kryder’s Law) are trending at nearly the same rate. Moore was obviously a bright guy, but you have to wonder whether he considered the residual effects his prediction would have on other parts of the IT ecosystem.
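
To get a feel for how quickly that kind of compounding adds up, consider a minimal sketch in Python. The 24-month doubling period is an assumption chosen purely for illustration, not a figure taken from any of these laws:

```python
# A minimal sketch of the compounding described above. The 24-month
# doubling period is an illustrative assumption, not a figure quoted
# from Moore's, Butters' or Kryder's observations.
DOUBLING_MONTHS = 24

def capacity_multiple(years: float, doubling_months: float = DOUBLING_MONTHS) -> float:
    """How many times capacity multiplies after the given number of years."""
    return 2 ** (years * 12 / doubling_months)

for years in (2, 5, 10):
    print(f"After {years:>2} years: roughly {capacity_multiple(years):.0f}x the capacity")
```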

This constantly increasing raw capacity is affecting how we as IT professionals perform our jobs. More transistors mean a greater supply of computing power. Cheaper and larger disks mean more storage into which we can back up environment states. Better networking moves data faster.

Today, you can upload entire servers and desktops into datacenters or even to the cloud at relatively little expense. You can regularly take snapshots of their state with near-continuous granularity, and transfer their UI anywhere with a reasonable network connection.

We’re perfecting a true “layering” of the entire computing experience. We can isolate operating systems, applications, policies, and user state information from each other and seamlessly shift them between devices. Stratifying the computing experience into isolated layers means we can manage each layer individually.

We can also quickly and reliably reconstruct a layer when troubles occur—loading a new set of applications, a copy of the user’s state information, even a completely new OS—rather than wasting time troubleshooting. Bleeding-edge technologies like desktop virtualization create a scenario where the OS can be relocated with ease.

We are essentially approaching a state where it no longer makes economic sense to troubleshoot desktop problems. If there’s an issue with the OS, remotely send a new one. If someone loses a computer in the airport, transfer that machine’s exact state just before the loss to a new laptop. When applications crash, simply delete the old one and stream down another copy.
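
To see why the economics tip toward replacement, run a quick back-of-the-envelope comparison. Every figure in this sketch is a hypothetical assumption chosen for illustration, not data from any study:

```python
# Back-of-the-envelope comparison of fixing a desktop in place versus
# rebuilding it from layered images. All rates and durations below are
# hypothetical assumptions used only to illustrate the argument.
TECH_HOURLY_RATE = 75.0   # assumed fully loaded cost of one technician hour
USER_HOURLY_RATE = 50.0   # assumed cost of one hour of lost user productivity

def incident_cost(tech_hours: float, downtime_hours: float) -> float:
    """Total cost of one incident: technician time plus user downtime."""
    return tech_hours * TECH_HOURLY_RATE + downtime_hours * USER_HOURLY_RATE

# Assumed scenarios: two hours of hands-on troubleshooting versus a
# 20-minute automated rebuild that restores the OS, apps and user state.
troubleshoot = incident_cost(tech_hours=2.0, downtime_hours=2.0)
rebuild = incident_cost(tech_hours=0.25, downtime_hours=20 / 60)

print(f"Troubleshoot in place: ${troubleshoot:,.2f}")
print(f"Automated rebuild:     ${rebuild:,.2f}")
```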

Desktops aren’t the only place where this new approach is taking root. With pervasive high-availability features found in nearly every server product today, even servers aren’t immune. Does your mail server database have a problem? Fail it over to a slightly lagged mirror and roll forward the changes. Is your virtual host about to crash? Live migrate its virtual machines elsewhere and apply a new OS and configuration.

Call it Shields’ Law: “The efficacy of systems troubleshooting is inversely related to the datacenter-wide effects of Moore’s Law.” Your business saves money when you don’t troubleshoot, or at least when you have the technologies in place so you don’t have to. What’s particularly exciting about Shields’ Law is that our industry is further along the curve than many of us realize.

The tools, tactics and technologies to achieve this state already exist. Others are on the near horizon. You can even find a set of troubleshooting elimination solutions within the Microsoft stable of products, some of which you can get at no cost (which represents a fantastic opportunity for even the most budget-less Jack-of-All-Trades IT professionals!).

Figure 1 Layering the Windows OS

We’ve already covered some of these technologies. Last December, we outlined the Microsoft approach for layering the Windows computing experience in the TechNet Magazine article, “A Case for a Layered Approach to Deploying Windows Desktops.”

We segregated the OS from its drivers, updates, applications, configuration changes and personality elements (see Figure 1). The tools that manage each layer stack atop one another. This creates a solution close to the aforementioned seamless shifting of computer instances between devices. Add extra technologies like Microsoft System Center Configuration Manager and third-party options for user state composition, and you get even closer.
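
As a rough mental model of that layering, here is a small, purely illustrative sketch. The Desktop class and its method are hypothetical; only the layer names come from the article:

```python
# A toy model of the layered desktop in Figure 1: each layer can be
# managed (and replaced) independently of the layers around it.
# The class and method names are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class Desktop:
    os_image: str
    drivers: str
    updates: str
    applications: list = field(default_factory=list)
    configuration: dict = field(default_factory=dict)
    personality: dict = field(default_factory=dict)   # user state

    def replace_layer(self, layer: str, value) -> None:
        """Swap out one layer without rebuilding the rest of the stack."""
        setattr(self, layer, value)

# Rebuild only the application layer after a crash; the OS image and the
# user's personality settings are left untouched.
pc = Desktop("Win7-x64", "oem-drivers", "2010-05", ["Office 2010"])
pc.replace_layer("applications", ["Office 2010", "Visio 2010"])
```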

Virtualization at the desktop, streaming applications, storage virtualization, cloud computing and all the other buzzwords you hear are the components of this coming troubleshooting singularity. Plan on it. The fruits of these technologies will soon revolutionize our industry.

Greg Shields, MVP, is a partner at Concentrated Technology. Get more of Shields’ Jack-of-All-Trades tips and tricks at ConcentratedTech.com.