Friday Mail Sack: Guest Reply Edition
Hi folks, Ned here again. This week we talk about:
- CA migration from 1 to 2 tier
- ADAM/ADLDS P2V ABC 123
- Managing AGPM security filters
- Multiple IIS App pools and Kerberos
- AGPM multi-domain comparison
- ADUC domain password weirdness
- DFSR deletion conflict handling
- Stale account deletion ad nauseam
- AD PowerShell, Get-Acl, and the missing objects that aren’t missing
- SMB tuning, DFSR, and Asynchronous Credits
- Other stuff
Let's gang up.
We plan to migrate our Certificate Authority from single-tier online Enterprise Root to two-tier PKI. We have an existing smart card infrastructure. TechNet docs don’t really speak to this scenario in much detail.
1. Does migration to a 2-tier CA structure require any customization?
2. Can I keep the old CA?
3. Can I create a new subordinate CA under the existing CA and take the existing CA offline?
[Provided by Jonathan Stephens, the Public Keymaster - Editor]
We covered this topic in a blog post that should answer many of your questions: http://blogs.technet.com/b/askds/archive/2010/08/23/moving-your-organization-from-a-single-microsoft-ca-to-a-microsoft-recommended-pki.aspx.
Aside from that post, you will also find the following information helpful: http://blogs.technet.com/b/pki/archive/2010/06/19/design-considerations-before-building-a-two-tier-pki-infrastructure.aspx.
To your questions:
- While you can migrate an online Enterprise Root CA to an offline Standalone Root CA, that probably isn't the best decision in this case with regard to security. Your current CA has issued all of your smart card logon certificates, which may have been fine when that was all you needed, but it certainly doesn't comply with best practices for a secure PKI. The root CA of any PKI should be long-lived (20 years, for example) and should only issue certificates to subordinate CAs. In a 2-tier hierarchy, the second tier of CAs should have much shorter validity periods (5 years, for example) and is responsible for issuing certificates to end entities. In your case, I'd strongly consider setting up a new PKI and migrating your organization over to it. It is more work at the outset, but it is a better decision long term.
- You can keep the currently issued certificates working by publishing a final, long-lived CRL from the old CA. This is covered in the first blog post above. This would allow you to slowly migrate your users to smart card logon certificates issued by the new PKI as the old certificates expired. You would also need to continue to publish the old root CA certificate in the AD and in the Enterprise NTAuth store. You can see these stores using the Enterprise PKI snap-in: right-click on Enterprise PKI and select Manage AD Containers. The old root CA certificate should be listed in the NTAuthCertificates tab, and in the Certificate Authorities Container tab. Uninstalling the old CA will remove these certificates; you'll need to add them back.
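The "final, long-lived CRL" step above can be sketched with certutil on the old CA. This is a minimal illustration, not a prescription: the 10-year period is an example value only, so pick a validity that outlives the longest-lived certificate the old CA ever issued.

```
:: Run on the old CA. The 10-year CRL validity below is an example value.
certutil -setreg CA\CRLPeriodUnits 10
certutil -setreg CA\CRLPeriod "Years"

:: Restart the CA service so the new registry values take effect...
net stop certsvc && net start certsvc

:: ...then publish the final CRL.
certutil -crl
```

Do this before you uninstall the old CA; once the CA is gone, you can no longer sign a new CRL for it.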
- You can't take an Enterprise CA offline. An Enterprise CA requires access to Active Directory in order to function. You can migrate an Enterprise CA to a Standalone CA and take that offline, but, as I've said before, that really isn't the best option in this case.
Are there any known issues with P2V'ing ADAM/AD LDS servers?
[Provided by Kim Nichols, our resident ADLDS guru'ette - Editor]
No problems as far as we know. The same rules apply as when P2V'ing DCs or other roles: make sure you clean up old drivers, and decommission the physical machines as soon as you are reasonably confident the virtual ones are working. Never let them run simultaneously. All the "I should have had a V-8" stuff.
Considering how simple it is to create an AD LDS replica, it might be faster and "cleaner" to create a new virtual machine, install and replicate AD LDS to it, then rename the guest and throw away the old physical machine (assuming AD LDS was its only role, naturally).
[Provided by Fabian Müller, our clever German PFE - Editor]
When using production delegation in AGPM, we can grant permissions for editing Group Policy Objects in the production environment. But these permissions are written to all deployed GPOs, not just specific ones. GPMC makes it easy to set "READ" and "APPLY" permissions on a GPO, but I cannot find a security filtering switch in AGPM. So how can we manage security filtering on group policies without setting the same ACL on all deployed policies?
Ok, granting "READ" and "APPLY" permissions (that is, managing security filtering) in AGPM is not that obvious to find. Do it like this in the AGPM Change Control pane:
- Check out the appropriate Group Policy Object and provide a brief overview of the planned changes in the "comments" window, e.g. "Add important security filtering ACLs for group XYZ, dude!"
- Edit the checked-out GPO
- At the top of the Group Policy Management Editor, click "Action" –> "Properties"
- Switch to the "Security" tab and provide your settings for security filtering:
- Close the Group Policy Management Editor and Check-in the policy (again with a good comment)
- Once everything is done, you can safely "Deploy" the just-edited GPO; the security filter is now in place in production:
Note 1: Be aware that you won't find any information about the security filtering change in the AGPM history of the edited Group Policy Object; nothing in the HTML reports refers to security filtering changes. That's why you should provide a good explanation of your changes during the check-out and check-in phases:
Note 2: Be careful with “DENY” ACEs using AGPM – they might get removed. See the following blog for more information on that topic: http://blogs.technet.com/b/grouppolicy/archive/2008/10/27/agpm-3-0-doesnt-preserve-all-the-deny-aces.aspx
I have one Windows Server 2003 IIS machine with two web applications, each in its own application pool. How can I register SPNs for each application?
[This one courtesy of Rob Greene, the Abominable Authman - Editor]
There are a couple of options for you here.
1. You could address each web site on the same server with a different host name. Then you can add the specific HTTP SPN to each application pool account as needed.
2. You could address each web site with a unique port assignment on the web server. Then you can add the specific HTTP SPN with the port attached, like http/myweb.contoso.com:88.
3. You could use the same account to run all the application pools on the same web server.
NOTE: If you choose option 1 or 2, you have to be careful about Internet Explorer behaviors. If you choose a unique host name per web site, make sure you use HOST (A) records in DNS; if you use CNAME records instead, you will need to put a registry key in place on all workstations. If you choose a unique port for each web site, you will need to put a registry key in place on all workstations so that they send the port number in the TGS SPN request.
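For options 1 and 2, SPN registration can be sketched with setspn. The host names and CONTOSO\svcPool* accounts below are placeholders for illustration, not names from the original question:

```
:: Option 1 - unique host name per web site, SPN on each app pool account
setspn -A http/app1.contoso.com CONTOSO\svcPool1
setspn -A http/app2.contoso.com CONTOSO\svcPool2

:: Option 2 - same host name, unique port per web site
setspn -A http/myweb.contoso.com:8080 CONTOSO\svcPool1
setspn -A http/myweb.contoso.com:8090 CONTOSO\svcPool2

:: Verify what is registered on an account
setspn -L CONTOSO\svcPool1
```

Note that -A adds the SPN without checking for duplicates; on newer versions of setspn, -S does a duplicate check first and is the safer choice.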
Comparing AGPM-controlled GPOs within the same domain is no problem at all – but if the AGPM server serves more than one domain, how can I compare GPOs hosted in different domains using an AGPM difference report?
[Again from Fabian, who was really on a roll last week - Editor]
Since AGPM 4.0, we provide the ability to export and import Group Policy Objects using AGPM. What you have to do is:
- To export one of the GPOs from domain 1…:
- … and import the *.cab to domain 2 using the AGPM GPO import wizard (right-click on an empty area in AGPM Contents—> Controlled tab and select “New Controlled GPO…”):
- Now you can simply compare those objects using difference report:
[Woo, finally some from Ned - Editor]
When I use the Windows 7 (RSAT) version of AD Users and Computers to connect to certain domains, I get the error "unknown user name or bad password". However, when I use the XP/2003 adminpak version, there are no errors for the same domains. There's no way to enter a domain or password.
ADUC in Vista/2008/7/R2 does some group membership and privilege checking when it starts that the older ADUC never did. You’ll get the logon failure message for any domain you are not a domain admin in, for example. The legacy ADUC is probably broken for that account as well – it’s just not telling you.
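If you need to run the newer ADUC against a domain where your logged-on account has no rights, one workaround is to launch it with alternate credentials via runas with the /netonly switch (CONTOSO\admin below is a placeholder account, not one from the question):

```
runas /netonly /user:CONTOSO\admin "mmc dsa.msc"
```

With /netonly, the supplied credentials are used only for network access: the console starts locally under your own account but authenticates to the remote domain as the specified one.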
I have 2 servers replicating with DFSR, and the network cable between them is disconnected. I delete a file on Server1, while the equivalent file on Server2 is modified. When the cable is re-connected, what is the expected behavior?
Last writer wins, even when the "write" is a modification of an ostensibly deleted file. If the file was deleted first on server 1 and modified later on server 2, it replicates back to server 1 with the modifications once the network reconnects. If the deletion had happened later than the modification, that "last write" would win and the file would be deleted from the other server once the network resumed.
More info on DFSR conflict handling here http://blogs.technet.com/b/askds/archive/2010/01/05/understanding-dfsr-conflict-algorithms-and-doing-something-about-conflicts.aspx
Is there any automatic way to delete stale user or computer accounts? Something you turn on in AD?
Nope, not automatically; you have to create a solution that detects account age and then disables or deletes the stale accounts. This is a very dangerous operation - make sure you understand what you are getting yourself into before you touch anything in bulk.
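As a starting point, here is a report-first sketch using the AD PowerShell module. The 90-day window is an arbitrary example threshold; review the report carefully, and disable accounts long before you ever consider deleting them:

```powershell
# Requires the ActiveDirectory RSAT module. Report only - nothing is changed.
Import-Module ActiveDirectory

# Find user accounts with no logon in the last 90 days (example threshold)
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
    Select-Object Name, SamAccountName, LastLogonDate |
    Export-Csv .\stale-users.csv -NoTypeInformation

# After careful review, disable (don't delete!) the candidates:
# Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly |
#     Disable-ADAccount -WhatIf
```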
Whenever I try to use the PowerShell cmdlet Get-Acl against an object in AD, I always get an error like "Cannot find path ou=xxx,dc=xxx,dc=xxx because it does not exist". But it does!
After you import the ActiveDirectory module, but before you run your commands, change your location to the AD: drive. Get-Acl won't work until you switch to that magical "Active Directory drive".
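In concrete terms (reusing the placeholder DN from the question):

```powershell
Import-Module ActiveDirectory    # importing the module creates the AD: PSDrive
Set-Location AD:                 # or simply: cd AD:
Get-Acl "ou=xxx,dc=xxx,dc=xxx"   # DN paths now resolve against the AD provider
```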
I've read the Performance Tuning Guidelines for Windows Server, and I wonder if all the SMB server tuning parameters (AsyncCredits, MinCredits, MaxCredits, etc.) also work (or help) for DFSR. Also, do you know what the limit is for SMB Asynchronous Credits? The document doesn't say.
Nope, they won't have any effect on DFSR – it does not use SMB to replicate files. SMB is only used by DFSMGMT.MSC if you ask it to create a replicated folder on another server during replicated folder setup. More info here:
Configuring DFSR to a Static Port - The rest of the story - http://blogs.technet.com/b/askds/archive/2009/07/16/configuring-dfsr-to-a-static-port-the-rest-of-the-story.aspx
That AsynchronousCredits SMB value does not have a true maximum, other than the fact that it is a DWORD and cannot exceed 4,294,967,295 (i.e. 0xffffffff). Its default value on Windows Server 2008 and 2008 R2 is 512; on Vista/7, it's 64.
As KB938475 (http://support.microsoft.com/kb/938475) points out, adjusting these defaults comes at the cost of paged pool (Kernel) memory. If you were to increase these values too high, you would eventually run out of paged pool and then perhaps hang or crash your file servers. So don't go crazy here.
There is no "right" value to set - it depends on your installed memory, whether you are running 32-bit or 64-bit Windows (if 32-bit, I would not touch this value at all), the number of clients you have connecting, their usage patterns, etc. I recommend increasing this in small doses and testing the performance - for example, doubling it to 1024 would be a fairly prudent first test.
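If you do decide to test a higher value, a hedged sketch of the change follows. It assumes the LanmanServer\Parameters registry location described in the Performance Tuning Guidelines; back up the key first, and note that the Server service must be restarted for the change to apply:

```powershell
# Double the default AsynchronousCredits (512 -> 1024) as a cautious first test
$key = 'HKLM:\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters'
Set-ItemProperty -Path $key -Name AsynchronousCredits -Value 1024 -Type DWord

# Restart the Server service so the new value takes effect
Restart-Service -Name LanmanServer -Force
```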
Happy Birthday to all US Marines out there, past and present. I hope you're using Veterans Day to sleep off the hangover. I always assumed that's why they made it November 11th, not that whole WW1 thing.
Also, happy anniversary to Jonathan, who has been a Microsoft employee for 15 years. In keeping with tradition, he brought in 15 pounds of M&Ms for the floor, which, in case you're wondering, fills a salad bowl. Which around here, means:
Two of the most awesome things ever – combined:
A great baseball story about Lou Gehrig, Kurt Russell, and a historic bat.
Off to play some Battlefield 3. No wait, Skyrim. Ah crap, I mean Call of Duty MW3. And I need to hurry up as Arkham City is coming. It's a good time to be a PC gamer. Or Xbox, if you're into that sorta thing.
Have a nice weekend folks,
- Ned "and Jonathan and Kim and Fabian and Rob" Pyle