ADFS on Azure VMs
Updated 11/2014 after some new feature releases in Azure VM space:
While recently working on some ADFS deployments on Azure, I learned a lot. I thought I would share those learnings here — some of them are already documented, and some are not documented clearly.
Possible deployments of ADFS on Azure:
1. All ADFS servers, ADFS proxies, and DCs on Azure.
   - Option 1: ADFS servers behind a load-balanced VIP, with the ADFS proxies pointing to the VIP.
   - Option 2: ADFS proxies pointing to the DIP (dedicated internal IP) of the ADFS servers in a one-to-one relationship.
   - Option 3 (RECOMMENDED): Load balance the DIPs of the ADFS servers, and make the ADFS proxies point to the load-balanced internal IP of the ADFS service.
2. Hybrid setup with ADFS servers, ADFS proxies, and DCs distributed across the on-premises and Azure sites, with Azure set up as either the primary site or DR.
3. ADFS proxies on Azure pointing to load-balanced ADFS servers on-premises through the site-to-site VPN.
- It is recommended to have 2 instances each of the ADFS server and ADFS proxy roles for redundancy, especially when the Azure site is hosting the primary/full ADFS deployment. The 2 instances of the same role should be part of the same Availability Set.
- ADFS servers require communication with DCs. These DCs can be set up on Azure, or ADFS can query the on-premises DCs over the site-to-site VPN link.
- DCs in the Azure site to support ADFS:
- When extending the existing forest to Azure, make sure you have at least 1 DC of each domain in the forest in the Azure site.
- If it satisfies your application's objective, you can also have a separate forest hosted on Azure, with or without a trust with the on-premises forest.
- It's preferable to have the GC and DNS roles on the DC too. This reduces the traffic going to the on-premises GCs for evaluating universal group membership.
- All the ADFS servers and DCs need to be part of the same virtual network; otherwise they will not be able to communicate with each other using their internal DIPs (the dedicated DHCP-leased IP address assigned to each VM).
- Windows Azure-provided DNS does not meet the advanced name-resolution needs of Windows Server AD DS. For example, it does not support dynamic SRV records. Name resolution is a critical configuration item for DCs and domain-joined clients: DCs must be capable of registering resource records and resolving other DCs' resource records. Hence it's best to add the DNS role to one of the DCs in Azure and add that DNS server's IP to the VNet configuration.
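As a sketch, the VNet's DNS server list can be updated with the Azure Service Management PowerShell module; the file path, VNet name, and addresses below are placeholders for your environment:

```powershell
# Export the current virtual network configuration to a file:
Get-AzureVNetConfig -ExportToFile "C:\temp\vnetconfig.xml"

# Edit the exported XML: add the DC's IP under <DnsServers>, e.g.
#   <DnsServer name="AzureDC01" IPAddress="10.0.1.4" />
# and reference that name in the relevant <VirtualNetworkSite> section.
# Then re-import the configuration:
Set-AzureVNetConfig -ConfigurationPath "C:\temp\vnetconfig.xml"
```

VMs pick up the new DNS setting on restart, so plan a reboot of the ADFS servers after changing the VNet DNS configuration.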
- Move the DCs in Azure to a separate AD site created to represent Azure. Map the subnet in the VNet (virtual network) to that AD site, so that the ADFS servers in the same VNet prefer the local DC.
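A minimal sketch of this with the ActiveDirectory PowerShell module; the site name, subnet, and DC name are examples only:

```powershell
# Create an AD site representing the Azure VNet, map the VNet subnet
# to it, and move the Azure DC into that site:
New-ADReplicationSite -Name "Azure-Site"
New-ADReplicationSubnet -Name "10.0.1.0/24" -Site "Azure-Site"
Move-ADDirectoryServer -Identity "AzureDC01" -Site "Azure-Site"
```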
- The Azure site hosting the ADFS setup can have either RWDCs or RODCs.
- Since RODCs are meant for sites which are not secure, they may not ideally qualify for deployment on Azure, BUT one may still consider them from a cost-saving perspective: RODCs do not replicate any data out, resulting in less egress traffic and hence less cost.
- RODCs, if deployed, also need to be configured to cache the passwords of all users who will be authenticated via ADFS; this is to prevent downtime of ADFS authentication when the site-to-site VPN is down. Now, if your application or ADFS needs to authenticate every user in the domain, and for that same reason you are planning to cache all user passwords on the RODC, you are better off with an RWDC.
- While ADFS can authenticate against an RODC if the user's password is cached on it, the ADFS server will still need access to a writable DC when there is a configuration change in ADFS, such as a token-signing certificate rollover.
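To allow the RODC to cache passwords, the relevant users must be covered by its password replication policy. A sketch using the built-in policy group; the "ADFS-Users" group name is hypothetical:

```powershell
# Permit password caching on all RODCs for members of a (hypothetical)
# group containing the users authenticated via ADFS:
Add-ADGroupMember -Identity "Allowed RODC Password Replication Group" -Members "ADFS-Users"
```

Note that a password is only cached after the user's first authentication against the RODC (unless prepopulated, e.g. with `repadmin /rodcpwdrepl`), so a fresh VPN outage can still block first-time users.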
- When using RWDCs, you may have to configure the site link, replication interval, change notification, and compression to suit your objectives, e.g. lower latency or reduced traffic.
- When deploying a DC on an Azure VM, don't assign a static IP address to it from within the guest OS; instead, you can reserve the dedicated internal IP assigned to the DC via the PowerShell cmdlet Set-AzureStaticVNetIP. A tip to ensure that the dynamic IP sticks with the VM is to shut down the VM from within the OS and not from the Azure portal. Shutting down any VM from the portal will de-allocate the resources assigned to it, including the dynamic IP.
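A sketch of reserving the DC's internal IP with Set-AzureStaticVNetIP; the service, VM, and VNet names and the address are placeholders:

```powershell
# Check that the address is available (or already held by this VM) in the VNet:
Test-AzureStaticVNetIP -VNetName "ContosoVNet" -IPAddress "10.0.1.4"

# Reserve the internal IP for the DC's VM:
Get-AzureVM -ServiceName "contoso-dc" -Name "AzureDC01" |
    Set-AzureStaticVNetIP -IPAddress "10.0.1.4" |
    Update-AzureVM
```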
- Ensure that an additional data disk (with host caching set to NONE) is added to the DC's VM, and store the NTDS database and SYSVOL content on it. This is to prevent data corruption in case of recovery, as a data disk with no caching ensures that all committed transactions are written to the database/disk.
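A sketch of attaching such a disk; the size, label, and names are examples only:

```powershell
# Attach a new empty data disk with host caching disabled:
Get-AzureVM -ServiceName "contoso-dc" -Name "AzureDC01" |
    Add-AzureDataDisk -CreateNew -DiskSizeInGB 20 -DiskLabel "NTDS" -LUN 0 -HostCaching None |
    Update-AzureVM
```

After initializing and formatting the disk inside the guest, point the AD DS installation at it for the database, log files, and SYSVOL paths.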
- As a security best practice, once the site-to-site VPN is set up and we can access the Azure DC using its DIP, we should remove all endpoints exposed via the public IP of the VM.
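For example, removing the default management endpoints; the endpoint names below are the portal defaults — verify yours first with Get-AzureEndpoint:

```powershell
# Remove the public Remote Desktop and PowerShell endpoints from the DC's VM:
Get-AzureVM -ServiceName "contoso-dc" -Name "AzureDC01" |
    Remove-AzureEndpoint -Name "Remote Desktop" |
    Remove-AzureEndpoint -Name "PowerShell" |
    Update-AzureVM
```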
- DCs on-premises to support ADFS:
- With the ADFS servers and ADFS proxies on the Azure site, if we don't have a local DC in the same Azure VNet, the ADFS servers have to contact the on-premises DCs for authentication, LDAP queries, the secure channel, etc. In this scenario we are looking at a high amount of egress traffic, leading to more cost, and the VPN link becomes a single point of failure.
- Some may want to use the Azure site only for the ADFS proxies, while the ADFS servers and DCs stay on-premises. This can help if you want intranet users to get a single sign-on experience while authenticating against the internal load-balanced ADFS servers using Windows Integrated Authentication.
ADFS-specific recommendations (mainly for deployment option 1, i.e. the full ADFS setup in Azure):
- The ADFS proxies should be part of the same cloud service, giving them a common VIP (virtual public IP), and should have a load-balanced endpoint for TCP 443. The ADFS proxy VMs should have their own Availability Set too.
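A sketch of adding the load-balanced endpoint, run once per proxy VM; the service, VM, and load-balancer set names are placeholders:

```powershell
# Add proxy VM to a load-balanced set on TCP 443 with a TCP health probe:
Get-AzureVM -ServiceName "contoso-adfsprx" -Name "ADFSPRX01" |
    Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 `
        -LBSetName "ADFSProxyLB" -ProbePort 443 -ProbeProtocol tcp |
    Update-AzureVM
```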
- The public name of the ADFS service, for example sts.contoso.com, should be configured as a CNAME pointing to the public DNS name of the cloud service, for example adfsprx.cloudapp.net. We should not map the ADFS service name to the VIP address of the ADFS proxies, as the VIP can change during the lifetime of the cloud service, and there is no option to reserve the VIP assigned to the ADFS proxy cloud service either.
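A hypothetical example, assuming the contoso.com public zone happens to be hosted on a Windows DNS server; most registrars and DNS hosters expose an equivalent CNAME option in their own tools:

```powershell
# Map sts.contoso.com as an alias of the cloud service's public DNS name:
Add-DnsServerResourceRecordCName -ZoneName "contoso.com" -Name "sts" `
    -HostNameAlias "adfsprx.cloudapp.net"
```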
- Ideally the ADFS proxies should resolve the ADFS service name directly to the backend ADFS servers. This can be done using host files or a split-DNS configuration:
- Host file on the ADFS proxy servers, mapping the ADFS service name to the VIP of the ADFS servers.
- The challenge here is that the proxies will not be able to reach the ADFS service if the VIP of the backend ADFS servers changes. Hence this requires you to monitor the health of the ADFS servers and the VIP assigned to them, and manually change the mapping on the proxy to redirect traffic to the correct ADFS farm.
- Host file on the ADFS proxy servers, mapping the ADFS service name to the DIP of an individual ADFS server — basically a one-to-one mapping between ADFS proxy and ADFS server.
- The challenge here is that when one of the ADFS servers is unresponsive, the proxy server receiving a portion of the external requests will try to forward them to the mapped, unresponsive ADFS server and fail. Hence this requires you to monitor the health of the ADFS servers and manually change the mapping on the proxy to redirect traffic to a working ADFS server.
- Host file on the ADFS proxies, pointing the ADFS service name to the load-balanced internal IP of the ADFS servers (RECOMMENDED).
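A sketch of the recommended option using the internal load balancer (one of the 2014-era feature additions mentioned at the top); all names and addresses are placeholders:

```powershell
# Add an internal load balancer to the ADFS servers' cloud service,
# with a static internal IP from the VNet subnet:
Add-AzureInternalLoadBalancer -ServiceName "contoso-adfs" -InternalLoadBalancerName "ADFS-ILB" `
    -SubnetName "ADFSSubnet" -StaticVNetIPAddress "10.0.2.100"

# Add a load-balanced TCP 443 endpoint behind the ILB on each ADFS server:
Get-AzureVM -ServiceName "contoso-adfs" -Name "ADFS01" |
    Add-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 `
        -LBSetName "ADFS-LBSet" -ProbePort 443 -ProbeProtocol tcp `
        -InternalLoadBalancerName "ADFS-ILB" |
    Update-AzureVM
```

Each proxy's hosts file then maps the service name to the ILB address, e.g. `10.0.2.100  sts.contoso.com`.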
- The ADFS servers can also be assigned a VIP with a load-balanced endpoint (TCP 443). This can be a security concern, as via the VIP we would be exposing the domain-joined ADFS servers to the internet.
- An ACL can be configured on the ADFS servers' load-balanced VIP endpoint to allow only traffic from the proxy servers, securing the ADFS servers.
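A sketch of such an ACL; the proxy VIP address and all names are hypothetical, and adding a permit rule creates an implicit deny for all other sources:

```powershell
# Build an ACL permitting only the ADFS proxies' public VIP:
$acl = New-AzureAclConfig
Set-AzureAclConfig -AddRule -ACL $acl -Action Permit -RemoteSubnet "23.101.0.10/32" `
    -Order 1 -Description "ADFS proxy VIP"

# Apply the ACL to the 443 endpoint on each ADFS server:
Get-AzureVM -ServiceName "contoso-adfs" -Name "ADFS01" |
    Set-AzureEndpoint -Name "HTTPS" -Protocol tcp -LocalPort 443 -PublicPort 443 -ACL $acl |
    Update-AzureVM
```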
- When the ADFS proxies talk to the backend ADFS farm via the load-balanced VIP, they do so over the internet. This means that even though the ADFS servers and proxies are in the same VNet, the ADFS proxy VMs may attract an extra charge for the egress traffic to the backend ADFS servers.
- In a setup where the whole ADFS deployment is in the cloud, it's recommended that you configure IE and the web proxy so that internal users (connected to the corporate network) contact the ADFS service via the external AD FS proxy endpoints instead of the internal AD FS endpoints. Using only external endpoints removes the VPN connection as a single point of failure for internal user authentication to Office 365. It also means that additional authentication prompts may occur, because ADFS treats all authentication requests as coming from users on the internet.
- The alternative to the recommended approach above is to direct internal AD FS traffic over the VPN to the ADFS servers, either directly or through the use of a DNS record.
- If the primary site for ADFS is Azure, then ensure that the primary ADFS role is on an Azure VM.
- The ADFS setup should be an ADFS farm using WID (Windows Internal Database).
- Windows Azure Active Directory and Office 365 support several security token services, including AD FS, Shibboleth Identity Provider (http://technet.microsoft.com/en-us/library/jj205456.aspx), and other third-party providers (http://technet.microsoft.com/en-us/library/jj679342.aspx). While it may be feasible to deploy Shibboleth or other third-party providers on Virtual Machines, these providers haven't been tested by Microsoft for the above deployments.
- ADFS on Azure can have multiple relying parties (RPs). These can include Office 365, a claims-based application hosted on another VM in Azure, a claims-based application registered with Azure Active Directory under the federated domain, a claims-based application hosted externally, etc.
Guidelines for Deploying Windows Server Active Directory on Windows Azure Virtual Machines
Office 365 Adapter: Deploying Office 365 Single Sign-On using Windows Azure
Hope this helps,