Chapter 2: Preparing for "Longhorn"

Karsten Januszewski
John deVadoss
Microsoft Corporation

November 2003
Sample updated June 2004

Applies to:
   Longhorn Community Technical Preview releases (both PDC 2003 and WinHEC 2004)
   Note: The downloadable sample code is based only on the Longhorn Community Technical Preview, WinHEC 2004 Build (Build 4074)

Contents of Chapter 2

Understanding the Impetus for the SOA Model
Understanding SOAs
Understanding Services
Understanding Service Design
Understanding Service Interfaces
Developing Services with the .NET Framework
Understanding Managed Code
Conclusion
For Further Reading

When planning a distributed application that will take advantage of "Longhorn," software designers and developers need to consider both the application's architecture and the way it is coded. Chapter 2 focuses on these two points.

The first section explains the motivation for a Service-Oriented Architecture (SOA) and outlines its fundamental tenets. Software designers should take advantage of the principles behind service-oriented architectures to be better prepared to develop applications for "Longhorn." The second section discusses managed code, which is the preferred execution model for applications that will run on "Longhorn."

In brief:

  • In the SOA model, applications are structured as a composite of different services. Each service is an autonomous entity that is discoverable and accessible in a standards-friendly manner, which means it is both platform-neutral and programming language-neutral. Consumers use service registries (either public or private) to look up and bind to services. This means that the location of the service is immaterial.
  • Managed code is code targeted for the .NET Framework's common language runtime (CLR). Managed code provides the metadata needed by the CLR to handle tasks such as automatic memory management, code access security, and multi-language integration.
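The registry/lookup/bind pattern in the first bullet can be sketched in a few lines. This is a hypothetical, in-memory illustration of the concept, not any real registry API; the names `ServiceRegistry`, `publish`, `lookup`, and `bind` are invented for the example.

```python
# Hypothetical sketch of the service registry pattern described above.
# All names are illustrative, not part of any real API.

class ServiceRegistry:
    """An in-memory stand-in for a public or private service registry."""
    def __init__(self):
        self._services = {}

    def publish(self, name, endpoint):
        # A provider registers its service under a well-known name.
        self._services[name] = endpoint

    def lookup(self, name):
        # A consumer discovers the service by name; its actual
        # location (the endpoint) is immaterial to the consumer's code.
        return self._services[name]


def bind(endpoint):
    # In a real system this would return a proxy that speaks the
    # service's wire protocol; here it simply returns a callable.
    def invoke(message):
        return endpoint(message)
    return invoke


# Usage: publish, discover, and invoke without hard-coding a location.
registry = ServiceRegistry()
registry.publish("InvoiceService",
                 lambda msg: {"status": "accepted", "echo": msg})

proxy = bind(registry.lookup("InvoiceService"))
response = proxy({"invoice_id": 42})
```

Because the consumer holds only a name and a proxy, the provider can move or be replaced without any change to consumer code, which is the point of the registry indirection.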

Using the principles behind SOAs and managed code not only prepares companies to migrate to "Longhorn" in the long run, but delivers immediate benefits as well. The SOA model can solve many of today’s business process integration problems in a cost-effective, interoperable, and standards-friendly way. Managed code makes developers more productive because they no longer have to worry about the infrastructure that environments such as Win32 and COM require.

Understanding the Impetus for the SOA Model

Customers alter their business models all the time, either by introducing new solutions in-house or by adding business capabilities. Changes in customers' business models require flexibility in application architecture.

Such flexibility does not exist in today’s monolithic systems. Monolithic systems, by definition, are tightly coupled: each of the subsystems that comprise the greater application is not only semantically aware of its surrounding subsystems, but is physically bound to those subsystems at compile time and run time. Replacing key pieces of functionality in reaction to a change in the business model, or distributing an application as individual business capabilities, is simply not possible.

A monolithic system is not flexible enough to meet our customers’ changing business needs for the following reasons:

  • Monolithic applications are fundamentally resistant to change.
  • Applications aren’t designed to be integrated with other applications.
  • Integration is not a first-class citizen and is usually addressed after the fact.
  • Business concepts and business logic are often duplicated and can be inconsistent.
  • One domain’s definition of an entity may differ from another’s.

Building a monolithic, tightly integrated application is not sufficient for our next generation applications. We want to move away from the rigidity present in a monolithic architecture and towards a more flexible architecture that will support customer change—an architecture based on the notion of autonomous services and applications resulting from the composition of those services, an architecture that makes integration intrinsic to the application model and does so in a standards-friendly way.

Understanding SOAs

Applications are expanding beyond the scope of a single system running on a single device and bounded by a single organization. In response, the industry at large, and Microsoft in particular, have expanded beyond object-oriented component designs for application development and deployment. They have enlarged the application development portfolio with services.

Services are discrete units of application logic that expose message-based interfaces that can be accessed across a network (although a network is not a requirement). SOAs permit very flexible deployment strategies; rather than requiring that all data and logic reside on a single computer, the service model allows applications to leverage networked computational resources.

The SOA approach has gained favor with companies building standards-based service interfaces for new and existing applications. These interfaces enable applications to interoperate or be combined into new, composite applications.

Applications built using an SOA are inherently integration-aware because they are actually collections of distinct services. Essentially, SOA is about:

  • Providing internal and external services to consumers via a standards-based, published, and discoverable set of contracts and interfaces.
  • Elevating the abstraction of these services for maximum code reuse. If consumers are not aware of the implementation behind the service, they can bind to services that continue to evolve and improve over time. Additionally, these services can be tuned and customized in the field to meet our customers’ needs.
  • Providing a clear integration model for the system, both internally and externally. Since the application or solution itself will be inherently a set of integrated services, integration complexity from other sources is greatly reduced.
  • Providing the foundation for global class applications.

Understanding Services

Services are discrete units of application logic that expose message-based interfaces suitable for access across a network. Typically, services provide both the business logic and the state management relevant to the problem they are designed to solve. When designing services, the goal is to effectively encapsulate the logic and data associated with real-world processes, making intelligent choices about what to include and what to implement as separate services.

State manipulation is governed by business rules. Business rules are relatively stable algorithms, such as the method in which an invoice is totaled from an item list, and are typically implemented as application logic.

Services are governed by policy. Policies are less static than business rules and may be regional or customer-specific. Policies are typically driven from lookup tables at run time. A more complete definition of services might be, "Services are network-capable units of software that implement logic, manage state, communicate via messages, and are governed by policy."
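The distinction between a stable business rule and a run-time policy can be made concrete. The sketch below is purely illustrative: the function names and the tax-rate table are invented for the example, but it shows the shape of the idea, with the rule coded as application logic and the policy driven from a lookup table that can change without redeploying the service.

```python
# Illustrative sketch: business rules as stable application logic,
# policy as data looked up at run time. All names are hypothetical.

# Business rule: how an invoice is totaled (a relatively stable algorithm).
def total_invoice(items):
    return sum(qty * price for qty, price in items)

# Policy: regional tax rates, driven from a lookup table. In a real
# service this table would live in a database or configuration store
# and could be tuned per region or per customer in the field.
TAX_POLICY = {"US-WA": 0.065, "US-OR": 0.0, "DE": 0.19}

def invoice_total_with_tax(items, region):
    subtotal = total_invoice(items)
    rate = TAX_POLICY.get(region, 0.0)
    return round(subtotal * (1 + rate), 2)
```

Changing a tax rate alters the service's behavior without touching the totaling algorithm, which is why policies are kept in data rather than code.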

Services interact by exchanging messages. In the service model, the service is defined purely by the messages it will accept and produce, including the sequencing requirements for those messages. Successful routing of messages between services is a complex process that is best handled by a messaging infrastructure shared across the services an organization exposes.

When two services interact, messages are sent back and forth between them. Both sides need to know exactly what they should send, how they should send it, what to expect, and how to expect it. It is not only necessary to define the messages that can be sent, the format the messages should be in, and how they can be sent, but it is also important to specify the sequence in which these messages should be sent. A contract is the definition, or binding agreement, of all that is sent between two services and of how everything is sent.
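A contract, then, pins down both the message formats and the allowed ordering. The toy model below is an assumption-laden sketch (the message names, fields, and transition table are invented); real contracts are expressed in standards such as WSDL, but the validation logic illustrates what a contract must capture.

```python
# Toy model of a contract: which messages may be exchanged, which
# fields each must carry, and the order they may be sent in.
# All message names and fields are hypothetical.

CONTRACT = {
    "messages": {
        "SubmitOrder": {"order_id", "items"},
        "OrderAccepted": {"order_id"},
        "ShipNotice": {"order_id", "tracking"},
    },
    # Sequencing: which message may follow which (None = conversation start).
    "sequence": {
        None: {"SubmitOrder"},
        "SubmitOrder": {"OrderAccepted"},
        "OrderAccepted": {"ShipNotice"},
    },
}

def validate(previous, name, body):
    """Check a message against the contract: known type, required
    fields present, and legal position in the sequence."""
    if name not in CONTRACT["messages"]:
        raise ValueError(f"unknown message {name}")
    missing = CONTRACT["messages"][name] - body.keys()
    if missing:
        raise ValueError(f"{name} missing fields: {missing}")
    if name not in CONTRACT["sequence"].get(previous, set()):
        raise ValueError(f"{name} may not follow {previous}")
    return name  # becomes 'previous' for the next message
```

Both sides of a conversation can run the same check, which is what makes the contract a binding agreement rather than one party's documentation.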

Understanding Service Design

It's common practice in distributed application design to partition an application into components providing presentation, business, and data services. Components that perform similar types of functions can be grouped into layers, which in many cases are organized in a stacked fashion so that a given component uses the functionality provided by other components both in its own layer and in the layers below it to perform its work.

Note This guide uses the term "layer" when referring to a component type, and the term "tier" to refer to physical distribution patterns.

This partitioned view of an application can also be applied to services. A service-oriented solution can be seen as being composed of multiple services, each communicating with the others by passing messages. Conceptually, the services can be seen as "components" of the overall solution. Internally, however, each service is made up of software components, just like any other application.

Each service has its own data sources, business logic, and optionally, user interfaces. The internal design of a service might be no different from a traditional three-tier application. You can choose to build and expose a service that has no user interface directly associated with it—it is designed only to be invoked by other services and aggregators. Each service encapsulates its own data and manages atomic transactions with its own data sources.

The layers are merely logical groupings of the software components that comprise the application or service. They help differentiate between the different kinds of tasks performed by the components, making it easier to design re-usability into the solution. Within each of these logical layers, there are a number of discrete component types grouped into sub-layers, each performing a specific kind of task. By identifying the generic kinds of component that exist in most solutions, a meaningful map of an application or service can be constructed, and this map can form the basis of a blueprint for your design.

Understanding Service Interfaces

Services expose their functionality to other services through service interfaces. Service interfaces are to services what object interfaces are to objects: an object interface defines what a method call should look like, and a service interface defines what a message should look like. There are formats to describe both types of interfaces. For example, you can use the Web Services Description Language (WSDL) to define service interfaces in much the same way you can use the Interface Definition Language (IDL) in COM.

If an object wants to use another object, it needs to know that object's interface. Likewise, to write code that uses a service, you must know what the service's interface looks like. Tools are available that use either the implementation of the service or the description of its interface to generate code that can communicate with the service. For the programmer who wants to use a service, there is little difference between the two.

Although the service interface is similar to an object interface, there is an important difference: the service interface does not adhere to the instancing model. An object instantiates another object and then calls its interface; the caller can make several calls to the same object instance. The act of one object calling another remotely in this way is generally referred to as a remote procedure call (RPC). A service consumer, by contrast, simply sends messages to a service; when it sends multiple messages, the infrastructure may route them to different physical locations, although the consumer usually does not notice any difference.
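The instancing difference described above can be sketched as follows. This is a deliberately simplified illustration (the `MessagingInfrastructure` class and host names are invented): in the service model, each message is routed independently and may be handled by a different physical host, with no instance identity visible to the sender.

```python
# Hypothetical sketch: in the service model each message is routed
# independently; the sender holds no reference to a particular instance.

import itertools

class MessagingInfrastructure:
    """Routes each message to any of several interchangeable hosts."""
    def __init__(self, hosts):
        self._hosts = itertools.cycle(hosts)

    def send(self, message):
        host = next(self._hosts)  # may differ from one message to the next
        return f"{host} handled {message}"

infra = MessagingInfrastructure(["hostA", "hostB"])
first = infra.send("msg1")   # routed to one host
second = infra.send("msg2")  # routed to another; the sender cannot tell
```

An RPC caller, by contrast, holds a reference to one instance for the duration of the conversation, which is exactly the coupling the message model avoids.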

Service interfaces describe all the information needed to exchange messages with the service: the functions the service offers as well as the parameters and return values of those functions. You may think of the request or the response as a verb and the parameters as the subject; often these subjects are referred to as documents.

Because the consumer and the service communicate with each other, both can send and receive messages. Thus, the consumer must also be a service, and the service interfaces must define both inbound and outbound messages. The service interfaces come in matching pairs; the simplest matching interface is offered by a traditional proxy that is capable of sending requests and receiving responses.

Developing Services with the .NET Framework

The .NET Framework is an integral Windows operating system component that provides a comprehensive platform for SOA by embracing SOAP and XML at its core. It simplifies the development, deployment, and management of services. It is designed to fulfill the following objectives:

  • To provide a consistent programming environment whether code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.
  • To provide a code-execution environment that minimizes software deployment and versioning conflicts.
  • To provide a code-execution environment that promotes safe execution of code, including code created by an unknown or semi-trusted third party.
  • To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.
  • To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.
  • To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

Understanding Managed Code

"Longhorn" will expose APIs that are a completely managed set of class libraries built with the .NET Framework. These APIs expose functionality that covers the full range of interactions with the operating system, including the user interface, graphics, storage, communications, documents, and multimedia. The majority of "Longhorn"-based applications can be entirely written using these new "Longhorn" managed APIs. Developing managed code applications now positions your company to take advantage of "Longhorn" once it is released and also offers immediate advantages.

As we stated, .NET applications are compiled into managed code, which is packaged into functional units known as assemblies. These assemblies are identified with either a .dll or .exe extension, but they are very different from Win32 dynamic-link libraries or executable files.

Assemblies are made up of the following components:

  • Assembly metadata
  • Type metadata
  • Microsoft Intermediate Language (MSIL) instructions
  • Resources

The assembly metadata is also known as the assembly manifest. It describes the assembly and contains such information as the name of the assembly, the version number, the culture used by the assembly (information such as language, currency, and number formatting), and a list of all files that make up the assembly.

Type metadata describes the types within the assembly. For example, if the type is a method, then the type metadata contains such information as the method name, return type, number of arguments and their types, and access level. A property, on the other hand, would reference the get and set methods, which in turn would contain type information.
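A rough analogy, rather than the CLR API itself, makes the value of type metadata concrete: Python's `inspect` module reads a function's signature at run time much as the CLR reads a method's type metadata. The function below is invented purely for the illustration.

```python
# Analogy only: inspecting a signature at run time, as the CLR does
# with a method's type metadata. The function 'total' is hypothetical.

import inspect

def total(amount: float, currency: str = "USD") -> str:
    return f"{amount:.2f} {currency}"

sig = inspect.signature(total)
param_names = list(sig.parameters)   # the method's argument names
return_type = sig.return_annotation  # the declared return type
```

It is this kind of self-description that lets tools, debuggers, and the runtime itself reason about code without access to its source.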

MSIL is the intermediate language used by the .NET platform. It is a CPU-independent instruction set that looks similar to assembly language but has some important differences. Basically, the MSIL code tells the CLR what to do (which commands to execute) and the metadata tells it how to do it (which interfaces and calling conventions to use). One key characteristic is that MSIL is object-oriented, with the restriction of single class inheritance (although multiple inheritance of interfaces is allowed). The common type system (CTS) used by the CLR is the key to the .NET Framework's ability to be language-neutral; the related Common Language Specification (CLS) defines the subset of the type system that every .NET language must support. All code, no matter what the programming language, uses this type system. MSIL is the key to the .NET Framework's platform independence: all that is required for an assembly to run on a non-Windows platform is for the CLR to be ported to that platform.

Resources include such things as string tables, cursors, and images. These can be stored either in an assembly or in a .resources file.

In addition to language-neutrality and platform independence, managed code has other immediate benefits. We will discuss some of these.

Code Verification

Code verification is conducted before actually running the application. This process walks through the code, ensuring that it is safe to run. For example, it checks for things such as invalid pointer references and array indexes. The goal is to ensure that the code won't crash, and that if an error does occur, the CLR can handle it gracefully by throwing an exception.
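The practical effect of verification can be illustrated by analogy in Python, which is also a managed environment in this respect: an out-of-range array index surfaces as a structured, catchable exception rather than a crash or memory corruption. The helper below is invented for the illustration.

```python
# Analogy: a managed runtime turns an invalid array index into an
# exception the program can handle, instead of a crash.
# 'safe_element' is a hypothetical helper for this illustration.

def safe_element(items, index):
    try:
        return items[index]
    except IndexError:
        # The runtime caught the fault; the program recovers gracefully.
        return None

inside = safe_element([1, 2, 3], 1)    # a valid access
outside = safe_element([1, 2, 3], 99)  # caught and handled, not a crash
```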

Code Access Verification

The code access verification process walks through the code and checks that it has permission to execute. The goal is to try to stop malicious attacks on the user's computer. The CLR contains a list of actions that it can grant permission to perform, and the assembly contains a list of permissions that it requires to run. If the CLR can grant all the permissions, the assembly runs without any problems. If the CLR can't grant all the permissions, it runs what it can but generates exceptions for those cases that don't have permission to execute.
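The grant/demand interaction described above can be modeled in a few lines. This is a toy model, not the real CLR security API: the permission names and the `demand` function are invented to show the shape of the check.

```python
# Toy model of code access security: the runtime grants a set of
# permissions, code demands what it needs, and an unmet demand raises
# a security exception. All names are hypothetical.

class SecurityException(Exception):
    pass

# Permissions the runtime has decided to grant this assembly.
GRANTED = {"FileRead", "UIAccess"}

def demand(permission):
    if permission not in GRANTED:
        raise SecurityException(f"{permission} was not granted")

def read_config():
    demand("FileRead")       # granted: the call proceeds
    return "config contents"

def write_registry():
    demand("RegistryWrite")  # not granted: raises SecurityException
    return "wrote registry"
```

Code that stays within its grants runs normally, while only the specific operations lacking permission fail, which matches the behavior described above.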

Automatic Memory Management

Automatic memory management (often called garbage collection) allows the runtime to detect managed objects that the application is no longer accessing and remove them from the managed heap. After removing unused objects, the garbage collector also compacts the heap, keeping the application's working set as small as possible and maximizing hardware cache performance.
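Python's garbage collector offers a rough analogy to the CLR's (the details differ: CPython relies primarily on reference counting, and this sketch does not show heap compaction). The `weakref` below observes an object without keeping it alive, so we can watch it be reclaimed once the last strong reference is dropped.

```python
# Analogy: once no references to an object remain, the managed
# runtime reclaims it automatically. The 'Order' class is hypothetical.

import gc
import weakref

class Order:
    pass

order = Order()
probe = weakref.ref(order)          # observes without keeping alive
alive_before = probe() is not None  # still reachable here

del order                           # drop the last strong reference
gc.collect()                        # prompt collection (refcounting in
                                    # CPython typically frees it at once)
alive_after = probe() is not None   # the object has been reclaimed
```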

Deployment and Maintainability

As explained earlier, every .NET assembly contains a manifest with a complete set of relevant metadata. No registration is required to deploy a managed application. Because this metadata identifies the other assemblies the code depends on, a simple xcopy is usually all that's required for deployment.

Web Services

Using managed code gives you access to the Web services libraries provided by the .NET Framework. The CLR allows you to interact with Web services as classes and methods, abstracting the complexities of SOAP and WSDL.

Conclusion

Organizations can prepare for the "Longhorn" release by developing applications that implement an SOA and by using managed code. The benefits of an SOA include improved availability and interoperability, as well as increased developer productivity. The .NET Framework was specifically designed to deliver software as a service. By providing an extensive class library and a platform-independent run time, it provides all the pieces necessary to develop and deploy Web services.

Managed code allows developers to take advantage of the .NET Framework and to develop interoperable, platform-independent services. In addition, developers no longer need to worry about such issues as memory management, and their code is automatically checked to see that it will run safely and securely.

For Further Reading

See the Microsoft patterns & practices book Application Architecture for .NET: Designing Applications and Services.

Continue to Chapter 3: Recommendations for Managed and Unmanaged Code Interoperability.

© 2003 Microsoft Corporation. All rights reserved.

Microsoft, Win32, Windows, Windows NT, Windows Server, WinFX, and ActiveX are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.