Test Run

Automate Your ASP.NET Web Services Testing

James McCaffrey

Code download available at: TestRun0503.exe (146 KB)


The Web Service and the Web App
The Test Automation
Alternative Approaches

It's no exaggeration to say that Web services are revolutionizing application-to-application communication. Web services are already being used extensively in corporate intranet environments and are making their way into commercial use, too. But because Web services are relatively new, techniques to test Web services programmatically are not widely known. In this column I'll show you a technique to quickly write test automation that verifies Web service functionality. The best way to show you what I mean is with two screenshots. Figure 1 shows a simple but representative ASP.NET Web application that uses a Web service.

Figure 1 Web Application that Uses a Web Service


In this case the Web application uses a Web service that searches a proprietary SQL database of hypothetical products. Searching product names for the letter "i" returns four products. Notice the application is rather crude (no formatting of the product price data for example) just as it would be in a pre-production environment—I'm usually testing a system while it's still in development rather than after it's been released.

Now suppose you want to test the underlying Web service component of this system. Manually testing the service through any of its three interfaces (the Web client, the Web application, or the Web service) would be error-prone and inefficient—not to mention just plain boring. A better approach is to use the powerful capabilities of the .NET environment to programmatically send input to the Web service, capture the response from the service, and then examine the response for an expected value. The console application shown in Figure 2 does exactly this.

Figure 2 Testing the Web Service


Behind the scenes, the test automation reads test case inputs and expected values, sends each input to the Web service under test, grabs the response from the service, compares the actual response value to the expected value to determine a pass or fail result, and writes the result to an XML file. In Figure 2 test case 003 corresponds to manually searching for products with "i" in their name, as shown in Figure 1. The expected result in this test is the number of items returned from the corresponding search.

In the following sections I'll show you the ASP.NET Web service that I'll be testing, briefly examine the ASP.NET Web application that uses the service, and describe in detail the test automation that tests the service. I'll conclude by discussing how you can adapt and extend the code presented here to meet your own needs in a production environment. I think you'll find the ability to write test automation quickly for Web services interesting and a valuable addition to your skill set.

The Web Service and the Web App

Let's examine the underlying Web service. One of the most common uses of Web services is to expose proprietary data to another application. That's what my Web service does, so let's look at its SQL Server™ database. Using a short T-SQL script I created a database test bed, dbProducts, of product information. The database contains a single table, tblProducts, which has columns for a product ID, a product name, and a product unit price. Figure 3 shows the contents of the products table displayed using the SQL Query Analyzer program.

Figure 3 SQL Test Data


When testing any system with a SQL back end, you normally want to create a dedicated database test bed rather than relying on the development or production databases. Here I just have five rows of back-end test data; in a production test environment you will have many more. Database dbProducts also has a stored procedure, usp_GetProducts, that retrieves product information and a SQL login, webServiceLogin, that can access dbProducts and has permission to execute the usp_GetProducts stored procedure. You can see the entire SQL script I used in the code that accompanies this column.
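The full script is in the code download; as a rough sketch it looks something like the following. (The column names other than prodname, which appears later in the test harness code, and the varchar length, which matches the Web method's parameter, are my guesses; the real script also creates the webServiceLogin login and grants it execute permission.)

```sql
create database dbProducts
go
use dbProducts
go
create table tblProducts
(
  prodid int primary key,   -- product ID
  prodname varchar(35),     -- product name (searched by the filter)
  unitprice money           -- product unit price
)
go
create procedure usp_GetProducts
  @filter varchar(35)
as
  select * from tblProducts
  where prodname like '%' + @filter + '%'
go
```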

Now that you've seen the database, let's look at the Web service that exposes the data. Creating a Web service with Visual Studio® .NET is very easy because almost all of the plumbing code is generated for you automatically. After launching Visual Studio I created a new ASP.NET Web service project. I decided to use C# as my language, but the test automation I'll present is independent from the Web service language, so I could have used any .NET-compliant language. After creating the project I added a namespace import so that I could use the SqlDataAdapter class without having to fully qualify it, as shown here:

using System.Data.SqlClient;

Next I added a Web method, GetProducts, to the ProductsService class in the TheWebService namespace (see Figure 4).

Figure 4 GetProducts Web Method

[WebMethod]
public DataSet GetProducts(string filter)
{
  try
  {
    using (SqlConnection sc = new SqlConnection(
      ConfigurationSettings.AppSettings["ConnStr"]))
    {
      SqlCommand cmd = new SqlCommand("usp_GetProducts", sc);
      cmd.CommandType = CommandType.StoredProcedure;
      cmd.Parameters.Add("@filter", SqlDbType.VarChar, 35);
      cmd.Parameters["@filter"].Direction = ParameterDirection.Input;
      cmd.Parameters["@filter"].Value = filter;
      SqlDataAdapter sda = new SqlDataAdapter(cmd);
      DataSet ds = new DataSet();
      sda.Fill(ds);
      return ds;
    }
  }
  catch
  {
    return null;
  }
}

Frequently, the terms Web service and Web method are used interchangeably, but an ASP.NET Web service is really a collection of one or more WebMethods. My single WebMethod accepts a string as an input filter, connects to database dbProducts, and returns a DataSet of product information where input filter is contained in the product name.

Notice that GetProducts retrieves data from the dbProducts database into the DataSet using a SQL stored procedure called usp_GetProducts. My Web service could have retrieved data from dbProducts using an embedded T-SQL statement like this one:

cmd.CommandType = CommandType.Text;
cmd.CommandText = "select * from tblProducts " +
  "where prodname like '%' + @filter + '%'";

However, using a stored procedure instead will generally provide better performance and increased security.

ASP.NET provides a very neat way to test simple ASP.NET Web services manually. If you press the F5 key to start the debugger, Visual Studio .NET will launch Microsoft® Internet Explorer and load a page that lists the methods in the Web service. If you click on a Web method, you'll go to a page that allows you to enter arguments for the method manually, invoke the method, and then view the results (see Figure 5).

Figure 5 Manual Testing in Visual Studio .NET


The ability to test Web services manually through Visual Studio .NET is very useful during development, but for thorough testing I use an automated system like the one in this column. Before I move on to the test automation program, let's take a quick look at the ASP.NET Web application that uses the Web service.

If you look at Figure 1 you'll see that the Web application has a TextBox control where the user enters a product name, a DataGrid control where product data is displayed, and a Label control where miscellaneous messages are displayed. Some of the productivity-enhancing features in .NET are client-side proxies for Web services that make using a WebMethod just like calling a local method. Figure 6 shows the key code that calls the Web service.

Figure 6 Call Web Service Method

private void Button1_Click(object sender, System.EventArgs e)
{
  try
  {
    WebReference.ProductsService ps = new WebReference.ProductsService();
    string filter = TextBox1.Text.Trim();
    DataSet ds = ps.GetProducts(filter);
    DataGrid1.DataSource = ds;
    DataGrid1.DataBind();
    Label3.Text = ds.Tables["Table"].Rows.Count + " items found";
  }
  catch (Exception ex)
  {
    Label3.Text = ex.Message;
  }
}

Figure 7 Calling a Web Service Through Multiple Layers


After adding a Web Reference to my service (which I created in a namespace called "WebReference" instead of accepting the default namespace "localhost"), I instantiated the ProductsService Web service proxy class just as I would any other ordinary class. Now I can call the WebMethod just like I would an ordinary instance method. You can think of the WebMethod component of a system under test as being very much like a local method. And as you'll see in a moment, despite their loosely coupled, distributed nature, testing Web service methods is similar in many respects to traditional API testing. See Figure 7 for a generalized schematic of a typical system that calls a Web service.

The Test Automation

Now let's examine the test case input data and test automation harness that generated the test run that was shown in Figure 2. I decided to store my test case input data in an XML file (see Figure 8). I could have used SQL Server or a text file to store my test case input data, but because Web services are built on XML, it was a natural choice.

Figure 8 Test Case Input Data

<?xml version="1.0" encoding="utf-8" ?>
<cases>
  <testcase>
    <id>001</id>
    <input>widget</input>
    <expected count="1" clue="widget" />
  </testcase>
  <testcase>
    <id>002</id>
    <input>wiidget</input>
    <expected count="0" clue="" />
  </testcase>
  <testcase>
    <id>003</id>
    <input>i</input>
    <expected count="4" clue="widget" />
  </testcase>
  <testcase comment="deliberate failure">
    <id>004</id>
    <input>idg</input>
    <expected count="0" clue="" />
  </testcase>
  <testcase>
    <id>005</id>
    <input>w</input>
    <expected count="3" clue="widget" />
  </testcase>
</cases>

The test case input data is self-explanatory except for the <expected /> element. Each expected element has a count attribute, which holds the number of rows the resulting DataSet object should have for the specified input. For example, in test case 001 if the input is "widget", then the method GetProducts should return a DataSet with exactly 1 row. The clue attribute is additional information that the test automation can use to determine if the correct rows are returned. I'll discuss this in more detail in the next section. Notice I have deliberately included a test case that will fail so that I can make sure my test automation doesn't have a logic error that makes all test cases pass.

The overall test harness plan is very simple but has been used successfully on several medium-sized software projects. In pseudocode the plan is as follows:

read test case data from XML into memory
loop
  fetch a single test case node
  parse out test case input and expected values
  send input argument to GetProducts() Web method
  retrieve the DataSet return value
  if DataSet is consistent with expected values
    write "pass" to XML result node
  else
    write "fail" to XML result node
end loop
write all result data from memory to XML

I implemented this plan using a C# console application. I could just as easily have used another .NET-targeted language like Visual Basic® .NET, and the code I present here will also work in a Windows®-based application or an ASP.NET Web application. The test harness is surprisingly short and is shown in Figure 9. Let's go over it in detail so that you'll be able to modify it to meet your own particular needs.

Figure 9 Test Automation Code

static void Main(string[] args)
{
  if (args.Length != 1)
  {
    Console.WriteLine("usage: test.exe testCases.xml");
    return;
  }
  try
  {
    string id, input, expected, clue;
    DateTime startTime = DateTime.Now;
    Console.WriteLine("\nStart Web Service Test Automation");

    XmlDocument results = new XmlDocument();
    XmlDeclaration xdeclare =
      results.CreateXmlDeclaration("1.0", null, null);
    results.AppendChild(xdeclare);              // declaration
    XmlElement root = results.CreateElement("results");
    results.AppendChild(root);                  // root element
    XmlElement result;

    XmlDocument tests = new XmlDocument();
    tests.Load(args[0]); // path to XML test case data is first arg
    XmlNodeList xnl = tests.SelectNodes("/cases/testcase");

    foreach (XmlNode tcn in xnl)
    {
      id = tcn.ChildNodes.Item(0).InnerText;
      input = tcn.ChildNodes.Item(1).InnerText;
      expected =
        tcn.ChildNodes.Item(2).Attributes.GetNamedItem("count").Value;
      clue =
        tcn.ChildNodes.Item(2).Attributes.GetNamedItem("clue").Value;

      WebReference.ProductsService ps =
        new WebReference.ProductsService();
      DataSet ds = ps.GetProducts(input);

      result = results.CreateElement("result");
      result.SetAttribute("id", id);
      result.SetAttribute("input", input);
      result.SetAttribute("expected", expected);
      result.InnerText =
        IsConsistent(ds, expected, clue) ? " Pass " : " *FAIL* ";
      root.AppendChild(result);
    } // foreach test case node

    results.Save("results.xml");
    Console.WriteLine("End Test Automation");
    DateTime endTime = DateTime.Now;
    TimeSpan ts = endTime - startTime;
    Console.WriteLine(
      "Elapsed time = " + ts.TotalMilliseconds + " milliseconds");
  } // try
  catch (Exception ex)
  {
    Console.WriteLine(ex.Message);
  }
}

After adding a Web reference to the ProductsService service and adding a using statement for the System.Xml and System.Data namespaces, I start by declaring the four local string variables and one DateTime object that my test harness will use:

string id, input, expected, clue;
DateTime startTime = DateTime.Now;
Console.WriteLine("\nStart Web Service Test Automation");

The id, input, expected, and clue variables will hold information parsed from the XML test case input file. I get the start time so I can measure the total elapsed time of the test automation. It's often a good idea to time your test automation because an unusually long or short test run should be investigated. Timing Web service test automation is especially useful because there are generally several network connections involved and they are a potential source of trouble.

The next few lines of code prepare an in-memory data store to hold my test case results:

XmlDocument results = new XmlDocument();
XmlDeclaration xdeclare = results.CreateXmlDeclaration("1.0", null, null);
results.AppendChild(xdeclare);
XmlElement root = results.CreateElement("results");
results.AppendChild(root);
XmlElement result;

The object referenced by results is an XmlDocument from the System.Xml namespace. Just as with my test case input data, I could have written my test results data to a SQL Server database or a text file but I prefer XML here. After creating the document, I write an XML header declaration into it and then I create and append the required XML root element. Finally, I create an XML element named <result> to hold the pass or fail result of each test case. The next part of the test harness does most of the work:

XmlDocument tests = new XmlDocument();
tests.Load(args[0]);
XmlNodeList xnl = tests.SelectNodes("/cases/testcase");
foreach (XmlNode tcn in xnl)
{
  // execute test case and store test result
}
results.Save("results.xml");
Console.WriteLine("End Test Automation");
DateTime endTime = DateTime.Now;
TimeSpan ts = endTime - startTime;
Console.WriteLine(
  "Elapsed time = " + ts.TotalMilliseconds + " milliseconds");

I start by creating a second XmlDocument object, this one to hold the test case input. I use the XmlDocument.Load method to read my test case input file, whose path is provided to the test application as the first command-line argument, into memory. Next I grab all of the XML testcase nodes into an XmlNodeList object using XmlDocument.SelectNodes. Recall that each node looks like the following:

<testcase>
  <id>xxx</id>
  <input>xxx</input>
  <expected count="x" clue="xxx" />
</testcase>

I iterate through the list of test case input nodes with a foreach loop. Because an XmlNodeList object is a collection, I could have also iterated with a for loop like so:

for (int i = 0; i < xnl.Count; ++i) { ... }

Inside the loop I send a test case input to the GetProducts Web method, capture the resulting DataSet object, use the expected value and clue to determine if the DataSet is correct, and write a pass or fail element into the test case result XmlDocument. After all test case input nodes have been processed, I use the XmlDocument.Save method to write the in-memory results to an external file. I finish the test run by determining and displaying the total elapsed time for the test cases.

Inside the foreach loop I begin by using the ChildNodes property to fetch the test case ID, test case input argument, an expected number of rows in the resulting DataSet, and a "clue" that will help me determine if the DataSet holds the correct data in its rows:

id = tcn.ChildNodes.Item(0).InnerText;
input = tcn.ChildNodes.Item(1).InnerText;
expected = tcn.ChildNodes.Item(2).Attributes.GetNamedItem("count").Value;
clue = tcn.ChildNodes.Item(2).Attributes.GetNamedItem("clue").Value;

This logic assumes that each test case input node has exactly the same element and attribute structure, so even if I don't want a clue for a particular test case, I still have to include one as an empty string. (If you want, you can add logic to this part of the test harness to first test whether the various attributes actually exist in order to handle variable-structure test case input.) Once I have the test case inputs, calling the Web service is easy:
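A minimal sketch of that defensive version, using XPath via SelectSingleNode instead of positional ChildNodes indexing (the variable names match the harness code; exactly how you report a malformed node is up to you):

```csharp
// tolerate a missing clue attribute by treating it as empty
XmlNode clueAttr = tcn.SelectSingleNode("expected/@clue");
clue = (clueAttr == null) ? "" : clueAttr.Value;

// a missing count attribute makes the test case unusable, so skip it
XmlNode countAttr = tcn.SelectSingleNode("expected/@count");
if (countAttr == null)
{
  Console.WriteLine("Skipping malformed test case node");
  continue; // inside the foreach loop over testcase nodes
}
expected = countAttr.Value;
```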

WebReference.ProductsService ps = new WebReference.ProductsService();
DataSet ds = ps.GetProducts(input);

I instantiate the ProductsService Web service proxy just as I would any other local object that has a parameterless constructor. Of course, I had to add a Web Reference to the service in my test harness. Then I call the GetProducts WebMethod just like a normal method and save the return value into a DataSet object. Under the covers, the client proxy object makes a SOAP request to the server, where ASP.NET parses that request and invokes the appropriate WebMethod with the specified arguments. With the results returned to the client, I write test case data into the XML test result element I created earlier:

result = results.CreateElement("result");
result.SetAttribute("id", id);
result.SetAttribute("input", input);
result.SetAttribute("expected", expected);

This will produce an element that looks like the following:

<result id="xxx" input="xxx" expected="xxx"></result>

Because XML is so flexible, there are many alternative formats I could have used. The last few lines inside the test case input-controlled loop determine whether the test case passes or fails and add that result to the test case result node:

result.InnerText = IsConsistent(ds, expected, clue) ? " Pass " : " *FAIL* ";
root.AppendChild(result);

I call a helper method IsConsistent to determine if each test case passes or fails. The IsConsistent method examines the result DataSet ds and checks to see if it has the correct number of rows and if the clue string is in the first row. The code for IsConsistent is shown here:

static bool IsConsistent(DataSet ds, string expected, string clue)
{
  if (ds.Tables["Table"].Rows.Count != int.Parse(expected))
    return false;
  if (clue != null && clue.Length > 0)
  {
    if (ds.Tables["Table"].Rows[0]["prodname"]
          .ToString().IndexOf(clue) < 0)
      return false;
  }
  return true;
}

If the actual number of rows in the result DataSet doesn't equal the expected number of rows, the test case fails. Next I check to see if the clue string is contained in the first row of the result DataSet. This does not guarantee that the result DataSet is exactly correct, but it does give me confidence that the result DataSet is probably correct. I'll discuss other possible approaches next.

Alternative Approaches

You can modify and extend the Web service test automation technique I've presented here in many ways. This test automation harness connects to the Web service under test by adding a Web reference in Visual Studio .NET. Some of my colleagues prefer to generate this plumbing code from the command line using the wsdl.exe tool. Wsdl.exe is part of the .NET Framework SDK and can be used to generate code that you can copy and paste into the test automation harness or simply add the generated file to your project. The advantage of using wsdl.exe is that you can fine-tune your Web service connection plumbing if necessary without worrying that Visual Studio .NET will regenerate the file and overwrite your changes.
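A wsdl.exe invocation along these lines generates a C# proxy source file you can add to the test project (the URL is hypothetical and depends on where the service is hosted):

```
wsdl.exe /language:CS /namespace:WebReference /out:ProductsService.cs
    http://localhost/TheWebService/ProductsService.asmx?WSDL
```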

The technique I've presented here reads all test case input data into an in-memory XmlDocument object. Another approach is to use the XmlTextReader class to read one XML test case input node at a time. Using an XmlDocument is simpler but doesn't scale well if you are expecting to have many hundreds of thousands of test cases. Using an XmlTextReader is slightly more complicated but scales well to a large number of test cases.
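A streaming version of the input-reading loop, sketched under the assumption that the file structure matches Figure 8, might look like this:

```csharp
// read one element at a time instead of loading the whole document
XmlTextReader xtr = new XmlTextReader(args[0]);
string id = "", input = "", expected = "", clue = "";
while (xtr.Read())
{
  if (xtr.NodeType == XmlNodeType.Element)
  {
    if (xtr.Name == "id")
      id = xtr.ReadString();        // test case ID
    else if (xtr.Name == "input")
      input = xtr.ReadString();     // input to GetProducts
    else if (xtr.Name == "expected")
    {
      expected = xtr.GetAttribute("count");
      clue = xtr.GetAttribute("clue");
      // <expected> is the last child of <testcase>, so all four
      // values are now available; execute the test case here
    }
  }
}
xtr.Close();
```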

When I present the Web service test automation technique in this column to experienced testers, one of the first things they want to discuss is the IsConsistent method. Recall that IsConsistent accepts a DataSet, an expected number of rows, and a "clue" string to determine if the DataSet is correct or not. The logic in IsConsistent does not guarantee that the actual result DataSet is exactly the same as an expected result DataSet. You can argue that it is better to write a helper method IsEqual(DataSet actual, DataSet expected) that does a deep comparison of two DataSet objects.

This idea is valid, but there are two problems. First, DataSet objects can be very complex, so determining if two of them are exactly the same is often very difficult. A major theme of the technique presented here is that the test automation should be quick and easy to write. The second problem with an IsEqual method is that because of the DataSet complexity you can easily end up with huge test case input data. Ultimately how you choose to compare actual and expected results will depend upon the details of your particular production environment.
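If you do decide to go down that road, a bare-bones IsEqual for the single-table DataSets returned by GetProducts (deliberately ignoring schema details, row ordering, and null handling) might start out like this:

```csharp
static bool IsEqual(DataSet actual, DataSet expected)
{
  DataTable a = actual.Tables["Table"];
  DataTable e = expected.Tables["Table"];
  if (a.Rows.Count != e.Rows.Count ||
      a.Columns.Count != e.Columns.Count)
    return false;
  for (int i = 0; i < a.Rows.Count; ++i)      // compare cell by cell
    for (int j = 0; j < a.Columns.Count; ++j)
      if (a.Rows[i][j].ToString() != e.Rows[i][j].ToString())
        return false;
  return true;
}
```

Even this sketch shows the second problem mentioned above: to use it, every test case would have to carry a full expected DataSet rather than a row count and a clue.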

Let me remind you that this technique is just one small part of Web services testing. You also need to test your Web services through any clients of the service, and you should thoroughly and separately test any database functionality on which the service relies. Furthermore, the technique presented here only tests Web service functionality. You'll need to test for performance, security, load capacity, and all the other types of software testing. One of the advantages of using software test automation is that it takes care of the time-consuming mundane testing that has to be done, which frees up time for you to test tricky and unusual product scenarios.

Note that I've only included minimal error checking in the Web service, the Web application, and the test automation. In a production environment you'll want to use the try/catch/finally mechanism liberally to prevent your automation from stopping in mid-run. In general your test automation will be running unattended and you can expect to find problems. Also, I only have a few rows of data in the test bed database and only a few test cases—in a production environment you'll want to add lots more data.
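One simple hardening step is to wrap each test case body in its own try/catch, so that a single bad case is recorded as a failure instead of ending the run. A sketch, reusing the variables from Figure 9:

```csharp
foreach (XmlNode tcn in xnl)
{
  result = results.CreateElement("result");
  try
  {
    // parse inputs, call GetProducts, and check the
    // resulting DataSet exactly as in Figure 9
    result.InnerText =
      IsConsistent(ds, expected, clue) ? " Pass " : " *FAIL* ";
  }
  catch (Exception ex)
  {
    // record the exception but keep the test run going
    result.InnerText = " *FAIL* (" + ex.Message + ")";
  }
  root.AppendChild(result);
}
```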


One of the reasons ASP.NET Web services have quickly gained acceptance by the software development community is that the programming model allows you to instantiate Web services and call WebMethods just like you would local objects and methods. As you've seen here, a consequence of this consistency is that testing Web service methods is much like traditional API testing.

The use of Web services is certainly going to increase. The release of Visual Studio 2005 will include the .NET Framework 2.0, which will deliver significant performance, productivity, and security enhancements to ASP.NET Web services. My colleagues and I have successfully used the test automation techniques presented here with Visual Studio 2005 so you'll be ready for it, too.

Send your questions and comments for James to testrun@microsoft.com.

James McCaffrey works for Volt Information Sciences Inc., where he manages technical training for software engineers working at Microsoft. He has worked on several Microsoft products including Internet Explorer and MSN Search. James can be reached at jmccaffrey@volt.com or v-jammc@microsoft.com.