Chapter 19: Physical Tiers and Deployment
For more details of the topics covered in this guide, see Contents of the Guide.
- Distributed and Nondistributed Deployment
- Distributed Deployment Patterns
- Performance Patterns
- Reliability Patterns
- Security Patterns
- Scale Up and Scale Out
- Network Infrastructure Security Considerations
- Manageability Considerations
- Relevant Design Patterns
- Additional Resources
Application architecture designs exist as models, documents, and scenarios. However, applications must be deployed into a physical environment where infrastructure limitations may negate some of the architectural decisions. Therefore, you must consider the proposed deployment scenario and the infrastructure as part of your application design process. This chapter describes the options available for deployment of different types of applications, including distributed and nondistributed styles; ways to scale the application; and guidance and patterns for performance, reliability, and security issues. By considering the possible deployment scenarios for your application as part of the design process, you prevent a situation where the application cannot be deployed successfully, or fails to perform to its design requirements because of technical infrastructure limitations.
Choosing a deployment strategy requires design tradeoffs. For example, there might be protocol or port restrictions, or specific deployment topologies not supported in your target environment. Identify your deployment constraints early in the design phase to avoid surprises later; involve members of your network and infrastructure teams to help with this process. When choosing a deployment strategy:
- Understand the target physical environment for deployment.
- Understand the architectural and design constraints based on the deployment environment.
- Understand the security and performance impacts of your deployment environment.
Distributed and Nondistributed Deployment
When creating your deployment strategy, first determine if you will use a distributed or a nondistributed deployment model. If you are building a simple intranet application for your organization that will be accessed by a finite set of users, consider a nondistributed deployment. If you are building a more complex application that you must optimize for scalability and maintainability, consider a distributed deployment.
In a nondistributed deployment, all of the functionality and layers reside on a single server except for data storage functionality, as shown in the example in Figure 1.
This approach has the advantage of simplicity and minimizes the number of physical servers required. It also minimizes the performance impact inherent when communication between layers must cross physical boundaries between servers or server clusters. Keep in mind that by using a single server, even though you minimize communication performance overhead, you can hamper performance in other ways. Because all of your layers share resources, one layer can negatively affect all of the other layers when it is under heavy utilization. In addition, the servers must be generically configured and designed around the strictest of operational requirements, and must support the peak usage of the largest consumers of system resources. The use of a single tier reduces your overall scalability and maintainability because all the layers share the same physical hardware.
In a distributed deployment, the layers of the application reside on separate physical tiers. Tiered distribution organizes the system infrastructure into a set of physical tiers to provide specific server environments optimized for specific operational requirements and system resource usage. It allows you to separate the layers of an application on different physical tiers, as shown in the example in Figure 2.
A distributed approach allows you to configure the application servers that host the various layers to best meet the requirements of each layer. However, the primary driver for optimizing component deployment is to match each component's resource consumption profile to an appropriate server, which means that a direct mapping of layers to tiers is often not the ideal distribution strategy.
Multiple tiers enable multiple environments. You can optimize each environment for a specific set of operational requirements and system resource usage. You can then deploy components onto the tier that most closely matches their resource needs to maximize operational performance and behavior. The more tiers you use, the more deployment options you have for each component. Distributed deployment provides a more flexible environment where you can more easily scale out or scale up each physical tier as performance limitations arise, and when processing demands increase. However, keep in mind that adding more tiers adds complexity, deployment effort, and cost.
Another reason for adding tiers is to apply specific security policies. Distributed deployment allows you to apply more stringent security to the application servers; for example, by adding a firewall between the Web server and the application servers, and by using different authentication and authorization options.
Performance and Design Considerations for Distributed Environments
Distributing components across tiers can reduce performance because of the cost of remote calls across physical boundaries. However, distributing components can improve scalability opportunities, improve manageability, and reduce costs over time. Consider the following guidelines when designing an application that will run on a physically distributed infrastructure:
- Choose communication paths and protocols between tiers to ensure that components can securely interact with minimum performance degradation. Take advantage of asynchronous calls, one-way calls, or message queuing to minimize blocking when making calls across physical boundaries.
- Consider using services and operating system features such as distributed transaction support and authentication that can simplify your design and improve interoperability.
- Reduce the complexity of your component interfaces. Highly granular interfaces (chatty interfaces) that require many calls to perform a task work best when located on the same physical machine. Interfaces that make only one call to accomplish each task (chunky interfaces) provide the best performance when the components are distributed across separate physical machines. However, where you must support in-process calls as well as calls from other physical tiers, you may consider implementing a highly granular interface for in-process calls and a façade for use from other physical tiers that wraps the calls to provide a chunky interface.
- Consider separating long-running critical processes from other processes that might fail by using a separate physical cluster, and determine your failover strategy. For example, Web servers typically provide plenty of memory and processing power, but may not have robust storage capabilities (such as RAID mirroring) that can be replaced rapidly in the event of a hardware failure.
- Determine how best to plan for the addition of extra servers or resources that will increase performance and availability.
- When layers communicate across physical boundaries, you must consider how you will manage state across tiers, as this will affect scalability and performance. Choices for state management typically include:
- Stateless. All the state required will be provided when calling into a tier. This tends to be more scalable, but often requires the client to supply state information.
- Stateful. State will be stored or recovered for each client request. This tends to require more resources and is therefore a less scalable solution, but it is often convenient because it does not require the client to track and provide state information.
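The chatty-versus-chunky interface guidance above can be sketched in code. The following illustration (in Python, with hypothetical class and field names) wraps a fine-grained, chatty interface in a façade that exposes a single coarse-grained call better suited to crossing a physical tier boundary:

```python
# Illustrative sketch only: a chatty interface for in-process use,
# wrapped by a chunky facade for cross-tier callers. All names and
# fields are hypothetical.

class CustomerService:
    """Chatty interface: fine-grained calls, efficient in-process but
    costly when each call is a network round trip."""
    def __init__(self, store):
        self._store = store

    def get_name(self, customer_id):
        return self._store[customer_id]["name"]

    def get_address(self, customer_id):
        return self._store[customer_id]["address"]

    def get_balance(self, customer_id):
        return self._store[customer_id]["balance"]

class CustomerFacade:
    """Chunky facade: one remote-friendly call returns everything."""
    def __init__(self, service):
        self._service = service

    def get_customer_summary(self, customer_id):
        # One round trip instead of three when crossing a physical tier.
        return {
            "name": self._service.get_name(customer_id),
            "address": self._service.get_address(customer_id),
            "balance": self._service.get_balance(customer_id),
        }

store = {42: {"name": "Ada", "address": "1 High St", "balance": 10.0}}
facade = CustomerFacade(CustomerService(store))
summary = facade.get_customer_summary(42)
```

In-process callers can continue to use the granular interface directly, while remote tiers call only the façade.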
Recommendations for Locating Components within a Distributed Deployment
When designing a distributed deployment, you must determine which logical layers and components you will put into each physical tier. In most cases, you will place the presentation layer on the client or on the Web server; the service, business, and data layers on the application server; and the database on its own server. In some cases, you may want to modify this pattern. Consider the following guidelines when determining where to locate components in a distributed environment:
- Only distribute components where necessary. Common reasons for implementing distributed deployment include security policies, physical constraints, shared business logic, and scalability.
- If the business components are used synchronously by the presentation components, deploy the business components on the same physical tier as the presentation components in order to maximize performance and ease operational management.
- Do not locate presentation and business components on the same tier if there are security implications that require a trust boundary between them. For example, you might want to separate business and presentation components in a rich client application by placing the presentation components on the client and the business components on the server.
- Deploy service agent components on the same tier as the code that calls the components, unless there are security implications that require a trust boundary between them.
- Deploy business components that are called asynchronously together with workflow components on a separate physical tier from the other layers where possible.
- Deploy business entities on the same physical tier as the components that use them.
Distributed Deployment Patterns
Several common patterns represent application deployment structures found in most solutions. When it comes to determining the best deployment solution for your application, it helps to first identify the common patterns. Once you have a good understanding of the different patterns, you then consider scenarios, requirements, and security constraints to choose the most appropriate pattern.
Client/Server Deployment
This pattern represents a basic structure with two main components: a client and a server. In this scenario, the client and server will usually be located on two separate tiers. Figure 3 represents a common Web application scenario where the client interacts with a Web server.
Figure 3. A common Web application scenario
Consider the client/server pattern if you are developing a client that will access an application server, or a stand-alone client that will access a separate database server.
N-Tier Deployment
The n-tier pattern represents a general distribution pattern where components of the application are separated across one or more servers. Commonly, you will choose a 2-tier, 3-tier, or 4-tier pattern as described in the following sections. While you will often locate all of the components of a layer on the same tier, this is not always the case. Layers do not have to be confined to a single tier—you can partition workloads across multiple servers if required. For example, you may decide to have side-by-side tiers that contain different aspects of your business logic.
2-Tier Deployment
Effectively, this is the same physical layout as the client/server pattern. It differs mainly in the way that the components on the tiers communicate. In some cases, as shown in Figure 4, all of the application code is located on the client, and the database is located on a separate server. The client makes use of stored procedures or minimal data access functionality on the database server.
Figure 4. 2-tier deployment with all the application code located on the client
Consider the 2-tier pattern if you are developing a client that will access an application server, or a stand-alone client that will access a separate database server.
3-Tier Deployment
In a 3-tier design, the client interacts with application software deployed on a separate server, and the application server interacts with a database that is located on another server, as shown in Figure 5. This is a very common pattern for most Web applications and Web services, and sufficient for most general scenarios. You might implement a firewall between the client and the Web/App tier, and another firewall between the Web/App tier and the database tier.
Figure 5. 3-tier deployment with the application code on a separate tier
Consider the 3-tier pattern if you are developing an intranet-based application where all servers are located within the private network, or an Internet-based application where security requirements do not prevent you from implementing business logic on the public-facing Web or application server.
4-Tier Deployment
In this scenario, shown in Figure 6, the Web server is physically separated from the application server. This is often done for security reasons, where the Web server is deployed into a perimeter network and accesses the application server located on a different subnet. In this scenario, you might implement a firewall between the client and the Web tier, and another firewall between the Web tier and the application or business logic tier.
Figure 6. 4-tier deployment with the Web code and the business logic on separate tiers
Consider the 4-tier pattern if security requirements dictate that business logic cannot be deployed to the perimeter network, or you have application code that makes heavy use of resources on the server and you want to offload that functionality to another server.
Web Application Deployment
Consider using distributed deployment for your Web applications if security concerns prohibit you from deploying your business logic on your front-end Web server. Use a message-based interface for your business layer, and consider using the TCP protocol with binary encoding to communicate with the business layer for best performance. You should also consider using load balancing to distribute requests so that they are handled by different Web servers, avoid server affinity when designing scalable Web applications, and design stateless components for your Web application. See the section "Performance Patterns" later in this chapter for more details.
Rich Internet Application Deployment
A distributed architecture is the most likely scenario for deployment because rich Internet application (RIA) implementations move presentation logic to the client. If your business logic is shared by other applications, consider using distributed deployment. Also, consider using a message-based interface for your business logic.
Rich Client Application Deployment
In an n-tier deployment, you can locate presentation and business logic on the client, or locate only the presentation logic on the client. Figure 7 illustrates the case where the presentation and business logic are deployed on the client.
Figure 7. Rich client with the business layer on the client tier
Figure 8 illustrates the case where the business and data access logic are deployed on an application server.
Figure 8. Rich client with the business layer on the application tier
Performance Patterns
Performance deployment patterns represent proven design solutions to common performance problems. When considering a high performance deployment, you can scale up or scale out. Scaling up entails improvements to the hardware on which you are already running. Scaling out entails distributing your application across multiple physical servers to distribute the load. When planning to use a scale out strategy, you will generally make use of a load balancing strategy. This is usually referred to as a load-balanced cluster or, in the case of Web servers, a Web farm. The following sections describe these patterns. For more information about choosing when and how to scale up or scale out, see the section "Scale Up and Scale Out" later in this chapter.
Load-Balanced Cluster
You can install your service or application onto multiple servers that are configured to share the workload, as shown in Figure 9. This type of configuration is known as a load-balanced cluster.
Figure 9. A load-balanced cluster
Load balancing scales the performance of server-based programs, such as a Web server, by distributing client requests across multiple servers. Load-balancing technologies, commonly referred to as load balancers, receive incoming requests and redirect them to a specific host if necessary. The load-balanced hosts concurrently respond to different client requests, even multiple requests from the same client. For example, a Web browser might obtain the multiple images within a single Web page from different hosts in the cluster. This distributes the load, speeds up processing, and reduces the response time.
Depending on the routing technology used, the load balancer may detect failed servers and remove them from the routing list to minimize the impact of a failure. In simple scenarios, routing may be on a round-robin basis, where a DNS server hands out the addresses of individual servers in rotation. Figure 10 illustrates a simple Web farm (a load-balanced cluster of Web servers) where each server hosts all of the layers of the application except for the data store.
Figure 10. A simple Web farm
Load-balanced clusters are most scalable and efficient if they do not have to track and store information between each client request; in other words, if they are stateless. If they must track state, then you may need to use affinity and session techniques.
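The round-robin routing and failed-server removal behavior described above can be sketched as follows. This is an illustrative toy, not a real load-balancing product, and the host names are hypothetical:

```python
class RoundRobinBalancer:
    """Minimal round-robin dispatcher that skips hosts marked as failed,
    modeling how a load balancer removes unhealthy servers from rotation."""
    def __init__(self, hosts):
        self._hosts = list(hosts)
        self._failed = set()
        self._next = 0

    def mark_failed(self, host):
        # A health check (heartbeat, probe) would call this on failure.
        self._failed.add(host)

    def mark_healthy(self, host):
        self._failed.discard(host)

    def route(self):
        live = [h for h in self._hosts if h not in self._failed]
        if not live:
            raise RuntimeError("no healthy hosts in the cluster")
        host = live[self._next % len(live)]
        self._next += 1
        return host

lb = RoundRobinBalancer(["web1", "web2", "web3"])
first, second = lb.route(), lb.route()   # web1, then web2
lb.mark_failed("web3")                   # web3 drops out of rotation
after_failure = [lb.route() for _ in range(4)]
```

After `web3` is marked failed, requests rotate only between the two remaining hosts until the server is restored to the pool.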
Affinity and User Sessions
Applications may rely on the maintenance of session state between requests from the same client. A Web server, for example, may need to keep track of user information between requests. A Web farm can be configured to route all requests from the same user to the same server—a process known as affinity—in order to maintain state where this is stored in memory on the Web server. However, for increased availability and reliability, you should use a separate session state store with a Web farm to remove the requirement for affinity. During development, if you are using Internet Information Services (IIS) 6.0 or later, you can configure IIS to operate in Web Garden mode to help ensure correct session state handling within your application as you develop it.
In ASP.NET, you must also configure all of the Web servers to use a consistent encryption key and method for ViewState encryption where you do not implement affinity. You should also enable affinity for sessions that use Secure Sockets Layer (SSL) encryption where the system supports this feature, or use a separate cluster for SSL requests.
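The separate session state store approach can be sketched as follows, with a simple in-memory dictionary standing in for a real out-of-process state service; because every server reads and writes the same store, any server in the farm can handle any request without affinity:

```python
class SharedSessionStore:
    """Stand-in for an out-of-process session store (for example, a
    state service or database) shared by every server in the farm."""
    def __init__(self):
        self._sessions = {}

    def save(self, session_id, state):
        self._sessions[session_id] = dict(state)

    def load(self, session_id):
        return dict(self._sessions.get(session_id, {}))

class WebServer:
    """Hypothetical farm member; holds no session state in memory."""
    def __init__(self, name, store):
        self.name = name
        self._store = store

    def handle(self, session_id, key, value):
        state = self._store.load(session_id)
        state[key] = value
        self._store.save(session_id, state)
        return state

store = SharedSessionStore()
farm = [WebServer("web1", store), WebServer("web2", store)]
# Requests from the same user can land on different servers.
farm[0].handle("sess-1", "cart", ["book"])
state = farm[1].handle("sess-1", "user", "ada")
```

Here the second request lands on a different server yet still sees the cart written by the first, which is exactly what removing the affinity requirement buys you.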
Application Farms
As with Web servers and Web farms, you can also scale out your business and data layers if they reside on different physical tiers from the presentation layer by using an application farm. Requests from the presentation tier are distributed to each server in the farm so that each has approximately the same load. You may decide to separate the business layer components and the data layer components on different application farms, depending on the requirements of each layer and the expected loading and number of users.
Reliability Patterns
Reliability deployment patterns represent proven design solutions to common reliability problems. The most common approach to improving the reliability of your deployment is to use a failover cluster to ensure the availability of your application, even if a server fails.
Failover Cluster
A failover cluster is a set of servers that are configured so that if one server becomes unavailable, another server automatically takes over for the failed server and continues processing. Figure 11 shows a failover cluster.
Figure 11. A failover cluster
Install your application or service on multiple servers that are configured to take over for one another when a failure occurs. The process of one server taking over for a failed server is commonly known as failover. Each server in the cluster has at least one other server in the cluster identified as its standby server.
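The standby takeover behavior can be sketched as a toy active/standby pair. Real cluster services coordinate through heartbeats, quorum, and shared storage, all of which this illustration omits; the node names are hypothetical:

```python
class FailoverCluster:
    """Toy active/standby pair: requests go to the active node until a
    missed heartbeat triggers failover to the standby."""
    def __init__(self, active, standby):
        self._active = active
        self._standby = standby
        self._active_up = True

    def active_node(self):
        # Clients always ask the cluster which node is serving requests.
        return self._active if self._active_up else self._standby

    def heartbeat_missed(self):
        # Failover: the standby takes over for the failed active node.
        self._active_up = False

cluster = FailoverCluster("app1", "app2")
before = cluster.active_node()   # "app1" while healthy
cluster.heartbeat_missed()
after = cluster.active_node()    # "app2" after failover
```

The key property is that callers address the cluster, not an individual server, so the takeover is transparent to them.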
Security Patterns
Security patterns represent proven design solutions to common security problems. The impersonation/delegation approach is a good solution when you must flow the context of the original caller to downstream layers or components in your application. The trusted subsystem approach is a good solution when you want to handle authentication and authorization in upstream components and access a downstream resource with a single trusted identity.
Impersonation/Delegation
In the impersonation/delegation authorization model, resources, and the types of operations (such as read, write, and delete) permitted on each one, are secured using Windows Access Control Lists (ACLs) or the equivalent security features of the targeted resource (such as tables and procedures in SQL Server). Users access the resources using their original identity through impersonation, as illustrated in Figure 12. Bear in mind that this approach may require a domain account, which makes it unattractive in some scenarios.
Figure 12. The impersonation/delegation authorization model
Trusted Subsystem
In the trusted subsystem (or trusted server) model, users are partitioned into application-defined, logical roles. Members of a particular role share the same privileges within the application, and access to operations (typically expressed by method calls), rather than to networked resources, is authorized based on the role membership of the caller. Roles, analyzed and defined at application design time, are used as logical containers that group together users who share the same security privileges or capabilities within the application. The middle-tier service uses a fixed identity to access downstream services and resources, as illustrated in Figure 13.
Figure 13. The trusted subsystem (or trusted server) model
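A minimal sketch of the trusted subsystem model follows. The roles, users, operations, and service identity are all hypothetical; the point is that the middle tier authorizes callers by role membership, while the downstream resource trusts only the fixed service identity:

```python
# Hypothetical roles, users, and identities for illustration only.
ROLE_PERMISSIONS = {"Manager": {"read", "write"}, "Clerk": {"read"}}
USER_ROLES = {"ada": "Manager", "bob": "Clerk"}

SERVICE_IDENTITY = "svc-orders"  # fixed identity for downstream access

def database_query(identity, operation):
    # The downstream resource trusts the service identity, never the
    # original caller's identity.
    if identity != SERVICE_IDENTITY:
        raise PermissionError("caller is not the trusted service identity")
    return f"{operation} ok"

def handle_request(user, operation):
    # The middle tier authorizes by role membership, then calls
    # downstream using its own fixed, trusted identity.
    role = USER_ROLES.get(user)
    if role is None or operation not in ROLE_PERMISSIONS[role]:
        raise PermissionError(f"{user} may not {operation}")
    return database_query(SERVICE_IDENTITY, operation)

result = handle_request("ada", "write")
```

A clerk attempting a write would be rejected at the middle tier, before any downstream call is made; the database never needs per-user ACLs.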
Multiple Trusted Service Identities
In some situations, you might require more than one trusted identity. For example, you might have two groups of users, one who should be authorized to perform read/write operations and the other read-only operations. The use of multiple trusted service identities provides the ability to exert more granular control over resource access and auditing, without having a large impact on scalability. Figure 14 illustrates the multiple trusted service identities model.
Figure 14. The multiple trusted service identities model
Scale Up and Scale Out
Your approach to scaling is a critical design consideration. Whether you plan to scale out your solution through a load-balanced cluster or a partitioned database, you must ensure that your design supports the option you choose. In general, when you scale your application you can choose between, and combine, two basic approaches: scale up (get a bigger box) and scale out (get more boxes).
With the scale up approach, you add hardware such as processors, RAM, and network interface cards (NICs) to your existing servers to support increased capacity. This is a simple option and can be cost-effective up to a certain level because it does not introduce additional maintenance and support costs. However, any single points of failure remain, which is a risk. In addition, beyond a certain threshold, adding more hardware to the existing servers may not produce the desired results, and getting the last 10 percent of theoretical performance from a single machine through upgrades can be very expensive.
For an application to scale up effectively, the underlying framework, runtime, and computer architecture must scale up as well. When scaling up, consider which resources are limiting application performance. For example, if it is memory bound or network bound, adding CPU resources will not help.
With the scale out approach, you add more servers and use load balancing and clustering solutions. In addition to handling additional load, the scale out scenario also mitigates hardware failures. If one server fails, there are additional servers in the cluster that can take over the load. For example, you might have multiple load-balanced Web servers in a Web farm that host the presentation and business layers. Alternatively, you might physically partition your application’s business logic, and use a separate load-balanced middle tier for that logic while hosting the presentation layer on a load-balanced front tier. If your application is I/O constrained and you must support an extremely large database, you might partition your database across multiple database servers. In general, the ability of an application to scale out depends more on its architecture than on the underlying infrastructure.
Considerations for Scaling Up
Scaling up with additional processor power and increased memory can be a cost-effective solution. This approach also avoids introducing the additional management cost associated with scaling out and using Web farms and clustering technology. You should look at scale up options first and conduct performance tests to see whether scaling up your solution meets your defined scalability criteria and supports the necessary number of concurrent users at an acceptable performance level. You should have a scaling plan for your system that tracks its observed growth.
Designing to Support Scale Out
If scaling up your solution does not provide adequate scalability because you reach CPU, I/O, or memory thresholds, you must scale out and introduce additional servers. Consider the following practices in your design to ensure that your application can be scaled out successfully:
- Identify and scale out bottlenecks. Shared resources that cannot be scaled up any further often represent a bottleneck. For example, you might have a single SQL Server instance that is accessed by multiple application servers. In this case, partitioning the data so that it can be served by multiple SQL Server instances will allow your solution to scale out. If you anticipate that your database server may become a bottleneck, an initial design that includes data partitioning can save a significant amount of effort later.
- Define a loosely coupled and layered design. A loosely coupled, layered design with clean, remotable interfaces is easier to scale out than one that uses tightly coupled layers with chatty interactions. A layered design will have natural clutch points, making it ideal for scaling out at the layer boundaries. The trick is to find the right boundaries. For example, business logic may be relocated more easily to a load-balanced middle-tier application server farm.
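The data-partitioning idea above can be sketched as a deterministic routing function that maps a partition key to one of several database instances. The instance names are hypothetical, and a real system would also need a strategy for rebalancing when instances are added:

```python
import hashlib

# Hypothetical database instances serving the partitioned data.
SHARDS = ["sql-a", "sql-b", "sql-c"]

def shard_for(partition_key):
    """Deterministically map a partition key (for example, a customer ID)
    to one database instance, using a stable hash so every application
    server computes the same mapping."""
    digest = hashlib.sha1(str(partition_key).encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# All reads and writes for one customer land on the same instance,
# spreading the overall load across the set of servers.
assignments = {cid: shard_for(cid) for cid in range(6)}
```

Because the mapping is a pure function of the key, no central lookup service is required, though that choice makes resharding more disruptive than a directory-based scheme would be.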
Design Implications and Tradeoffs
You should consider aspects of scalability that may vary by application layer, tier, or type of data. Identify the tradeoffs required so that you are aware of where you have flexibility and where you do not. In some cases, scaling up and then out with Web or application servers might not be the best approach. For example, even though you could use an 8-processor server, economics would probably drive you to use a set of smaller servers instead of one large one.
On the other hand, scaling up and then out might be the right approach for your database servers, depending on the role of the data and how the data is used. There are limitations on the number of servers that you can load balance or fail over, and additional issues such as how you partition the database will affect the process. In addition, apart from technical and performance considerations, you must also take into account operational and management implications, and the related total cost of ownership (TCO).
Typically, you optimize the price and performance within the boundaries of any other constraints. For example, using four 2-processor Web or application servers may be optimal when you evaluate price and performance compared with using two 4-processor servers. However, you must also consider other constraints, such as the maximum number of servers you can locate behind a particular load-balancing infrastructure, and power consumption or space constraints within the data center.
Also consider using virtualized servers to implement server farms and for hosting services. This approach can help you to balance performance and cost while obtaining maximum resource usage and return on investment.
The use of stateless components, such as those you may implement in a Web front end with no in-process state, means that you can produce a design that better supports both scaling up and scaling out. To achieve a stateless design, it is likely that a number of design tradeoffs will be required in your application, but the benefits in terms of scalability generally outweigh the disadvantages.
Data and Database Partitioning
If your application uses a very large database and you anticipate an I/O bottleneck, ensure that you design for database partitioning up front. Moving to a partitioned database later usually results in a significant amount of costly rework, and often requires a complete redesign of the database. Partitioning provides several benefits, including the ability to restrict queries to a single partition (thereby limiting the resource usage to only a fraction of the data), and the ability to engage multiple partitions (thereby achieving greater parallelism and superior performance because you can have more disks working to retrieve the data).
However, in some situations, multiple partitions may not be appropriate and could have a negative impact. For example, some operations that use multiple disks might be performed more efficiently with concentrated data.
When considering the impact of partitioning data storage on your deployment scenarios, the decisions depend largely on the type of data. The following list summarizes the relevant factors:
- Static, reference, and read-only data. For this type of data, you can easily maintain many replicas in the appropriate locations if this improves performance and scalability. It has minimal impact on design and can usually be driven by optimization considerations. Consolidating several logically separate and independent databases on one database server may or may not be appropriate, even if you have the disk capacity, and distributing replicas closer to the consumers of that data may be an equally valid approach. However, be aware that whenever you replicate, you have a loosely synchronized system that requires mechanisms to maintain the appropriate synchronization.
- Dynamic (often transient) data that is easily partitioned. This is data relevant to a particular user or session, such as a shopping cart, where the data for user A is not related in any way to the data for user B. This data is slightly more complicated to handle than static, read-only data, but you can still optimize and distribute it quite easily because this type of data can be partitioned. There are no dependencies between the groups, right down to the individual user level. The important aspect of this data is that you do not query across partitions. For example, you query for the contents of user A's shopping cart but do not query all carts that contain a particular item. Note that, if subsequent requests can come to different Web or application servers, all of these servers must be able to access the relevant partition.
- Core data. This is the main case where the scale up, then out approach usually applies. Generally, you do not want to hold this type of data in many places because of the complexity of keeping it synchronized. This is the classic case in which you would typically want to scale up as far as you can (ideally, remaining as a single logical instance with suitable clustering), and only consider partitioning and distributing the data when scaling out is the only option. Advances in database technology, such as distributed partitioned views, have made partitioning much easier; although you should do so only when it is necessary. The decision is rarely prompted by the database growing to too large a size, but is more often driven by other considerations such as who owns the data, the geographic usage distribution, proximity to the consumer, and availability.
- Delay-synchronized data. Some data used in applications does not have to be synchronized immediately, or even at all. A good example of this is retail store data such as “Users who bought X also bought Y and Z.” This data is mined from the core data, but need not be updated in real time. Designing strategies that move data from core to partitionable (dynamic), and then to static, is a key factor in building highly scalable applications.
For information on patterns for moving and replicating data, see "Data Movement Patterns" at http://msdn.microsoft.com/en-us/library/ms998449.aspx.
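The partitioning approach described above for dynamic, per-user data can be sketched as follows. This is a minimal illustration in Python (the guide itself contains no code); the partition store names and the hash-based routing scheme are assumptions, not something the guide prescribes:

```python
# Sketch: route per-user session data (such as a shopping cart) to a stable
# partition, so that any Web or application server that receives a subsequent
# request can locate the same partition. No query ever spans partitions.
import hashlib

# Illustrative partition names; in practice these would identify separate
# data stores or database instances.
PARTITIONS = ["cart-store-0", "cart-store-1", "cart-store-2", "cart-store-3"]

def partition_for_user(user_id: str) -> str:
    """Map a user ID deterministically to one partition."""
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return PARTITIONS[int(digest, 16) % len(PARTITIONS)]

# Every server computes the same mapping, so a request for user "alice" that
# lands on a different server still reaches the same partition.
assert partition_for_user("alice") == partition_for_user("alice")
```

Because the mapping depends only on the user ID, there is no shared routing state to synchronize between servers, which is what makes this class of data easy to scale out.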
Network Infrastructure Security Considerations
Make sure that you understand the network structure provided by your target environment, and understand the baseline security requirements of the network in terms of filtering rules, port restrictions, supported protocols, and so on. Recommendations for maximizing network security include:
- Identify how firewalls and firewall policies are likely to affect your application’s design and deployment. Firewalls should be used to separate Internet-facing applications from the internal network, and to protect the database servers. Firewalls only allow communication through specifically configured ports and, therefore, can block some protocols and prevent the use of some communication options. This includes authentication, such as Windows authentication, between the Web server and an application or database server behind the firewall.
- Consider which protocols, ports, and services can access internal resources from the Web servers in the perimeter network or from rich client applications. Identify the protocols and ports that the application design requires, and analyze the potential threats that occur from opening additional ports or using nonstandard protocols.
- Communicate and record any assumptions made about network and application layer security, and the security functions each component will handle. This ensures that security controls and policies are not overlooked when both the development and the network team assume that the other team is addressing the issue.
- Identify the security defenses, such as firewalls, packet filters, and hardware security systems, that your application relies on the network to provide, and ensure that these defenses are in place.
- Consider the implications of a change in network configuration, and how this will affect security.
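One practical way to act on the firewall and port recommendations above is to verify, before deployment, that the ports the design requires are actually reachable through the firewalls between tiers. The following Python sketch illustrates this; the host names and port numbers are illustrative assumptions:

```python
# Sketch: a pre-deployment check that the ports the application design
# requires are reachable through the intervening firewalls.
import socket

def port_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, and DNS failures
        return False

# Illustrative endpoints: a database server and an application server behind
# the firewall that the Web tier must be able to reach.
required = [("db-server.internal", 1433), ("app-server.internal", 8080)]
blocked = [(h, p) for (h, p) in required if not port_reachable(h, p)]
# 'blocked' lists the endpoints that firewall policy (or the network) denies.
```

A check like this makes the assumptions about open ports explicit and testable, which supports the recommendation to record and communicate them between the development and network teams.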
Manageability Considerations
The choices you make when deploying an application affect the capabilities for managing and monitoring it. Take into account the following recommendations:
- Deploy components of the application that are used by multiple consumers in a single central location, such as a server or application farm that is available to all applications, to avoid duplication.
- Ensure that data is stored in a location where backup and restore facilities can access it.
- Components that rely on existing software or hardware (such as a proprietary network that can only be established from a particular computer) must be deployed on the computer that hosts that software or hardware.
- Some libraries and adaptors cannot be deployed freely without incurring extra cost, or may be charged on a per CPU basis; therefore, you may want to centralize these features to minimize costs.
- Groups within an organization may own a particular service, component, or application that they must manage locally.
- Monitoring tools such as System Center Operations Manager require access to physical machines to obtain management information, and this may impact deployment options.
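Where a monitoring tool cannot be given direct access to a physical machine, a common alternative is for the application to expose its own health information over HTTP for an agent to poll. The following is a hedged sketch in Python, not something the guide specifies; the endpoint path and response format are assumptions:

```python
# Sketch: expose a simple HTTP health endpoint that a monitoring agent can
# poll to obtain state and health information about the application.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

# Start the endpoint on an ephemeral port and poll it once, as a monitoring
# agent performing an HTTP probe might.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/health"
status = json.load(urllib.request.urlopen(url))
server.shutdown()
```

This is one way to reconcile centralized monitoring with deployment constraints that prevent agents from running on every machine.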
Relevant Design Patterns
Key patterns are organized into the categories Deployment, Manageability, Performance and Reliability, and Security, as shown in the following list. Consider using these patterns when making design decisions for each category.
Deployment
- Layered Application. An architectural pattern where a system is organized into layers.
- Three-Layered Services Application. An architectural pattern where the layers are designed to maximize performance while exposing services that other applications can use.
- Tiered Distribution. An architectural pattern where the layers of a design can be distributed across physical boundaries.
- Three-Tiered Distribution. An architectural pattern where the layers of a design are distributed across three physical tiers.
- Deployment Plan. Describes the processes for mapping logical layers onto physical tiers, taking into account constraints imposed by the infrastructure.
Manageability
- Adapter. An object that supports a common interface and translates operations between the common interface and other objects that implement similar functionality with different interfaces.
- Provider. A component that exposes an API that is different from the client API, in order to allow any custom implementation to be seamlessly plugged in. Many applications that provide instrumentation expose providers that can be used to capture information about the state and health of your application and the system hosting the application.
Performance and Reliability
- Server Clustering. A distribution pattern where multiple servers are configured to share the workload and appear to clients as a single machine or resource.
- Load-Balanced Cluster. A distribution pattern where multiple servers are configured to share the workload. Load balancing provides both improvements in performance by spreading the work across multiple servers, and reliability where one server can fail and the others will continue to handle the workload.
- Failover Cluster. A distribution pattern that provides a highly available infrastructure tier to protect against loss of service due to the failure of a single server or the software that it hosts.
Security
- Brokered Authentication. Authenticate against a broker, which provides a token to use for authentication when accessing services or systems.
- Direct Authentication. Authenticate directly against the service or system that is being accessed.
- Impersonation and Delegation. The process of assuming a different identity on a temporary basis so that a different security context or set of credentials can be used to access a resource, and where a service account is allowed to access a remote resource on behalf of another user.
- Trusted Subsystem. The application acts as a trusted subsystem to access additional resources. It uses its own credentials instead of the user’s credentials to access the resource.
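The Load-Balanced Cluster pattern described above can be sketched in a few lines of Python. This is a minimal, illustrative model under stated assumptions (round-robin distribution, server names, and a stand-in transport function are all invented for the sketch), not an implementation the guide prescribes:

```python
# Sketch: requests rotate round-robin across the servers in a cluster; a
# server that fails is skipped so the remaining servers continue to handle
# the workload, illustrating the reliability aspect of the pattern.
import itertools

class LoadBalancedCluster:
    def __init__(self, servers):
        self._ring = itertools.cycle(servers)
        self._count = len(servers)

    def dispatch(self, request, send):
        """Try each server in turn; fail only if every server is down."""
        for _ in range(self._count):
            server = next(self._ring)
            try:
                return send(server, request)
            except ConnectionError:
                continue  # this server failed; fall through to the next one
        raise RuntimeError("all servers in the cluster are unavailable")

cluster = LoadBalancedCluster(["web-1", "web-2", "web-3"])

def send(server, request):
    """Stand-in transport where server web-2 is down."""
    if server == "web-2":
        raise ConnectionError(server)
    return f"{server} handled {request}"

result = cluster.dispatch("req-1", send)  # handled by web-1
```

To clients, the cluster appears as a single resource: they call `dispatch` without knowing which server handles the request or which servers have failed.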
For more information on the Layered Application, Three-Layered Services Application, Tiered Distribution, Three-Tiered Distribution, and Deployment Plan patterns, see "Deployment Patterns" at http://msdn.microsoft.com/en-us/library/ms998478.aspx.
For more information on the Adapter pattern, see Chapter 4, "Structural Patterns" in Gamma, Erich, Richard Helm, Ralph Johnson, and John Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison Wesley Professional, 1995.
For more information on the Provider pattern, see "Provider Model Design Pattern and Specification, Part 1" at http://msdn.microsoft.com/en-us/library/ms972319.aspx.
For more information on the Server Clustering, Load-Balanced Cluster, and Failover Cluster patterns, see "Performance and Reliability Patterns" at http://msdn.microsoft.com/en-us/library/ms998503.aspx.
For more information on the Brokered Authentication, Direct Authentication, Impersonation and Delegation, and Trusted Subsystem patterns, see "Web Service Security" at http://msdn.microsoft.com/en-us/library/aa480545.aspx.
Additional Resources
The following resources will be useful when designing a deployment strategy:
- For more information on authorization techniques, see "Designing Application-Managed Authorization" at http://msdn.microsoft.com/en-us/library/ms954586.aspx.
- For more information on deployment scenarios and considerations, see "Deploying .NET Framework-based Applications" at http://msdn.microsoft.com/en-us/library/ms954585.aspx.
- For more information on design patterns, see "Enterprise Solution Patterns Using Microsoft .NET" at http://msdn.microsoft.com/en-us/library/ms998469.aspx.