Cloud Computing: Testing the Cloud
Among the many factors to consider when contemplating a move to cloud-based services is the ability of your potential provider to test and monitor its services.
Daniel Joseph Barry
Cloud computing has passed through the stage of hype to become a reality of modern enterprise infrastructure. Managing and operating networks and IT services isn’t getting any easier, and more organizations are realizing the benefits of remote hosting of IT services over local IT management.
Managing IT networks requires a broad set of competencies in a growing number of technologies. Considering the challenge of remaining current on those technologies, it makes sense to centralize those competencies in larger datacenters. Larger datacenters also mean larger installations with high-speed interfaces and the ability to maintain service availability obligations. This requires extensive testing and management capabilities to ensure the requisite levels of service availability.
How will testing and management for cloud services differ from “traditional” IT services? What special challenges do cloud services providers face? What sort of questions do you need to ask a potential cloud services provider?
The Testing Challenge
The first and fundamental challenge of providing cloud services is service availability. If your organization is looking to adopt cloud services rather than maintain local installations, you have to be convinced you can access the services and data you need whenever you need them without experiencing undue delays. The cloud service must look and feel as if it’s local, despite the fact that it’s hosted remotely.
This leads to the second challenge: service assurance. How can your cloud services provider assure timely delivery and even service availability when it doesn’t control the data communication connection between the cloud service and your corporate users? Does the data communication provider have the monitoring infrastructure in place to assure Service Level Agreements (SLAs)? Does your cloud services provider have the monitoring infrastructure in place to assure the services provided? You have to ask your provider these questions.
The final challenge is service efficiency. That challenge encompasses every aspect of efficiency, from cost, space and power savings to efficient, scalable service delivery using virtualization, high-end servers and high-speed interfaces. The infrastructure that monitors this efficiency must follow the same principles.
From a testing perspective, there are a number of layers your cloud services provider will need to address:
- The Wide Area Network (WAN) providing data communication services between your company and the cloud services provider, which is fundamental to service assurance and testing end-to-end service availability.
- The datacenter infrastructure—the servers and the network connecting those servers—which must deliver service availability and uptime, as well as efficient use of resources to ensure service efficiency.
- The monitoring infrastructure in the datacenter—the basis for service assurance, which also needs to operate efficiently.
- The individual servers and monitoring appliances based on servers that must also follow efficiency and availability principles to assure overall service efficiency and service availability.
End-to-end availability is the first service aspect your cloud services provider will need to test. At its most basic level, this involves testing connectivity. It can also involve some specific tests relevant to cloud services, such as latency measurement.
There are several commercial systems available for testing latency in a WAN environment. These are most often used by financial institutions to determine the time it takes to execute financial transactions with remote stock exchanges. Cloud services providers can also use them to test connection latency.
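At its simplest, a connection-latency check can be approximated by timing TCP connection setup to the remote service. The sketch below is a minimal illustration of that idea, not a stand-in for the commercial latency-measurement systems described above; the host and port passed in are placeholders for whatever endpoint your provider exposes.

```python
import socket
import statistics
import time

def tcp_connect_latency(host: str, port: int, samples: int = 5) -> list[float]:
    """Measure TCP connection-setup time to a remote service, in milliseconds.

    Repeated samples smooth out one-off network jitter; the caller can then
    look at the median and worst case rather than a single measurement.
    """
    results = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; close immediately
        results.append((time.perf_counter() - start) * 1000)
    return results

def summarize(latencies: list[float]) -> str:
    """Format the median and maximum latency for an SLA report."""
    return (f"median: {statistics.median(latencies):.1f} ms, "
            f"max: {max(latencies):.1f} ms")
```

Calling `tcp_connect_latency("cloud.example.com", 443)` from a branch office, on a schedule, yields the kind of end-to-end availability and latency record that can be compared against an SLA.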
Typically the cloud services provider does not own the WAN data communication infrastructure. However, using network monitoring and analysis appliances at the datacenter and within your location, your cloud provider can measure the WAN performance to maintain the appropriate service level. The ability to offer performance data in support of agreed SLAs should drive your choice of cloud provider and communication provider.
Network monitoring and datacenter infrastructure analysis are also crucial as cloud services providers rely less on troubleshooting and more on service assurance strategies. In a typical IT network, a reactive strategy is often sufficient: you can troubleshoot issues as they arise, and you can usually tolerate short periods of downtime.
While this approach is acceptable in a corporate network, downtime is a disaster for cloud services providers. If you aren’t confident in your cloud services provider’s ability to ensure service availability, you’ll most likely be quick to find alternatives or even go back to storing and managing your data and applications locally. A well-conceived service-assurance strategy involves constant network performance and service monitoring so the provider can identify issues before they affect users.
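The proactive part of such a strategy often comes down to watching a rolling average of a key metric and raising an alert when it drifts toward the SLA limit, before users notice. A minimal sketch, assuming latency is the metric and the window and threshold are tunable:

```python
from collections import deque

class LatencyMonitor:
    """Alert when rolling-average latency breaches a threshold.

    A degradation trend shows up in the rolling average well before a hard
    outage, giving the provider time to act proactively.
    """

    def __init__(self, window: int = 60, threshold_ms: float = 50.0):
        self.samples = deque(maxlen=window)  # keep only the last `window` samples
        self.threshold_ms = threshold_ms

    def record(self, latency_ms: float) -> bool:
        """Add a sample; return True if the rolling average now breaches the threshold."""
        self.samples.append(latency_ms)
        avg = sum(self.samples) / len(self.samples)
        return avg > self.threshold_ms
```

In practice the alert would feed a ticketing or paging system; here it is just a boolean so the trend logic stays visible.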
Testing Virtual Servers
Virtualization is a major part of most cloud services providers’ infrastructures. The ability to consolidate multiple cloud services onto fewer physical servers provides tremendous efficiency benefits by lowering costs, space and power consumption.
Being able to move virtual machines (VMs) supporting cloud services from one physical server to another also allows for efficient use of resources in matching time-of-day demand. It also expedites reaction to any detected performance issues.
One of the consequences of consolidation is the need for faster interfaces, because more data is being delivered to each server. This in turn requires a data communication infrastructure dimensioned for that bandwidth, and a network-monitoring infrastructure that keeps up with the data rates without losing data. Cloud services providers need to pay particular attention to the throughput performance of network-monitoring and analysis appliances; it’s the only way to ensure they can keep up in the future.
Within the virtualized servers themselves, there are also emerging solutions to assist in monitoring performance. Just as network and application performance-monitoring appliances are available to monitor the physical infrastructure, there are now virtualized versions of these applications for monitoring virtual applications and communication between VMs.
There are also virtual test applications that help cloud providers define a number of virtual ports they can use for load testing in a cloud environment. This is extremely useful for testing whether a large number of users can access a service without having to deploy a large test network.
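The essence of such virtual load testing is spawning many concurrent simulated clients against a service and counting how many succeed. As a rough sketch of the idea (not any particular vendor's test tool), concurrent virtual users can be modeled with coroutines; the probe payload here is an assumed placeholder:

```python
import asyncio

async def virtual_user(host: str, port: int) -> bool:
    """One simulated client: connect, send a probe, await a reply."""
    try:
        reader, writer = await asyncio.open_connection(host, port)
        writer.write(b"ping\n")
        await writer.drain()
        await reader.read(64)  # wait for the service's response
        writer.close()
        await writer.wait_closed()
        return True
    except OSError:
        return False

async def load_test(host: str, port: int, users: int) -> int:
    """Launch `users` virtual clients concurrently; return how many succeeded."""
    results = await asyncio.gather(
        *(virtual_user(host, port) for _ in range(users))
    )
    return sum(results)
```

Because each virtual user is a coroutine rather than a machine on a test network, a single host can simulate hundreds of concurrent accesses, which is precisely the appeal of virtual test ports.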
While virtualization has improved service efficiency, the network-monitoring and analysis infrastructure is still dominated by single-server implementations. In many cases, this is because network-monitoring and analysis appliances require all the processing power they can get. However, there are opportunities to consolidate appliances, especially as servers and server CPUs increase performance on a yearly basis.
There are now solutions for hosting multiple network-monitoring and analysis applications on the same physical server. If the applications are all based on the same OS, intelligent network adapters ensure the data is shared between these applications. These often need to analyze the same data at the same time, but for different purposes. However, there are situations where the applications are based on different OSes. You can still use virtualization to consolidate up to 32 applications onto a single physical server.
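The sharing step those adapters perform amounts to a fan-out: each captured packet is delivered once to every analysis application, so they can all inspect the same traffic for different purposes. A simplified software sketch of that pattern (the real work happens in adapter hardware and drivers, not Python):

```python
import queue

def fan_out(packet_source, consumers: list[queue.Queue]) -> None:
    """Deliver every captured packet to each analysis application's queue.

    The traffic is captured once and shared, rather than captured separately
    by every monitoring appliance.
    """
    for packet in packet_source:
        for q in consumers:
            q.put(packet)
    for q in consumers:
        q.put(None)  # sentinel: capture finished
```

Each consumer (say, a performance monitor and a security analyzer) drains its own queue independently, which mirrors how consolidated appliances analyze the same data at the same time for different purposes.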
By consolidating network-monitoring and analysis appliances, cloud services providers can further improve service efficiency. Testing of cloud services—or, more specifically, testing service assurance, availability and efficiency—will help you separate the amateurs from the professionals in the cloud services arena.
The days of passively hosting VMs on a best-effort basis are gone. Assuring the availability of services using efficient infrastructure and active network monitoring and analysis will ensure that you’ll never look back once you’ve moved to the cloud.
Daniel Joseph Barry is vice president of marketing at Napatech. He has more than 17 years of experience in the IT and telecom industry. Prior to joining Napatech in 2009, he was marketing director at TPACK, a leading supplier of transport chip solutions to the telecom sector. Before that, he was director of sales and business development at optical component vendor NKT Integration (now Ignis Photonyx). He has an M.B.A. and a B.S. in electronic engineering from Trinity College in Dublin, Ireland.