Large AD database? Probably not this large...

Over the last few months there has been a series of threads in regard to max <fill in the item here...there have been many> in a database. These items have ranged from database size to # of objects and other such things. I figured, after the latest thread over on activedir.org, I'd do a little testing and put some numbers behind it so we could say "we have done this" and not "the system should do this."

What should this testing accomplish?

First, raw DB size. Gotta create a big DB or it probably doesn't matter.

Next, # of objects. For my testing, this was the real metric I was interested in. As mentioned over on ActiveDir (I would provide a link to the thread but I can’t seem to get the mail archives to work right now…I’ll try and provide one later), there is a theoretical max # of objects in the lifetime of a database which is, all said and done, 2^31 objects. I wanted to shoot for this. After all, Dean asked what error you would get, and I didn’t know. :)

I wrote a tool which started banging against an ADAM SP1 x64 instance. It created pretty small objects, as I wanted to reduce the amount of time this test took. My objects looked like this:

    dn: cn=leafcontX,cn=parentcontY,cn=objectsZ,ou=objdata
    changetype: add
    objectclass: container

(Of course, sub in values for X, Y and Z as appropriate)

I had it use anywhere from 16 to 40 threads for this work depending upon the phase of the import, and I simply wrapped ldifde to do the actual imports…I figured, there is a well-tested tool for this, why not let it do most of the hard work?
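For illustration, here is a minimal sketch of the kind of LDIF generator described above. The function name and batching are hypothetical (the real tool sharded the work across those 16-40 threads); it just emits add records matching the template and leaves the import itself to ldifde.

```python
def make_ldif(x_range, y, z):
    """Emit LDIF add records for cn=leafcontX under one parent container."""
    records = []
    for x in x_range:
        records.append(
            f"dn: cn=leafcont{x},cn=parentcont{y},cn=objects{z},ou=objdata\n"
            "changetype: add\n"
            "objectclass: container\n"
        )
    return "\n".join(records)

# Write one batch file, then hand it to ldifde for the import, e.g.:
#   ldifde -i -f batch_0.ldf -s localhost:389
print(make_ldif(range(3), y=0, z=0))
```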

Next, I got my hands on a test box (thx EEC!), put it on a SAN, installed ADAM, and away I went.

Along the way, we did a few other perf tests (looking at increased checkpoint depths and the like) so it added a bit of time to the import. However, after about a month, I had nearly filled my 2TB partition:

06/08/2006 10:41 AM 2,196,927,299,584 adamntds.dit

I created just shy of 2^31 objects. When I went to create that next object (done here by hand in LDP to illustrate the error)…

***Calling Add...

ldap_add_s(ld, "cn=sample1,OU=ObjData", [1] attrs)

Error: Add: Operations Error. <1>

Server error: 000020EF: SvcErr: DSID-0208044C, problem 5012 (DIR_ERROR), data -1076

If you look up -1076, you’ll find it is JET_errOutOfAutoincrementValues (from esent98.h). Woo hoo! I ran out of DNTs.
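Working backward from the numbers above, the average on-disk cost of these deliberately tiny objects is easy to estimate (a slight underestimate, since the count was just shy of 2^31 rather than exactly 2^31):

```python
# Final DIT size from the directory listing, divided by the DNT ceiling.
DB_BYTES = 2_196_927_299_584   # adamntds.dit
MAX_DNT = 2**31                # lifetime object budget of one database

per_object = DB_BYTES / MAX_DNT
print(round(per_object))       # roughly 1 KB per minimal container object
```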

With this DB in hand, it was time to find out what else works and what else does not…

- Promotion of a replica fails. This makes perfect sense….it tries to create a couple of objects in the config NC, and that fails.

- Create of an NC fails. Again, to be expected, this task consumes DNTs.

- I ran esentutl /ms. It chugged for nearly 30 seconds, but worked perfectly.

- I also ran esentutl /k to make sure the DB did not have any physical corruption, but also to just see how long that took. :)

- Other standard tasks (kicking off garbage collection, online defrag, restarting the service, etc.) all worked perfectly.

- Search works like a champ. Sure it takes a good bit of I/O for most interesting searches, but that’s to be expected, of course.

It is worth noting that anything which failed did so gracefully. There were no nastygrams in my event logs either.

So for those of you who are worrying….you can sleep well at night now. We have actually exhausted the DNT space, and the system handles it just fine.

A fun stat…..from the esentutl /ms output:

    Name           Type  ObjidFDP  PgnoFDP  PriExt  Owned     Available

<EFleis – snip to save some space>

    nc_guid_Index  Idx   25        43       1-m     10870892  5

That owned number is in pages. That’s right, my NC_GUID index is 82.9GB…bigger than most databases. :)
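The 82.9GB figure falls straight out of the page count, given that AD/ADAM use 8KB ESE pages:

```python
# "Owned" in esentutl /ms output is a page count; ESE pages here are 8 KB.
PAGE_BYTES = 8 * 1024
owned_pages = 10_870_892       # nc_guid_Index row above

size_gb = owned_pages * PAGE_BYTES / 2**30
print(f"{size_gb:.1f} GB")     # prints "82.9 GB"
```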

While there were no major issues, we (Brett was looking at this too) did hit a few bumps along the way, and Brett was kind enough to write a few ESE tools for me to help monitor how we were doing. I'll outline all of these things over the next few days as I have time to write them up. I'll also provide more clarity around the specifics of what we did and saw as we went along.