Testing when "the code is the documentation"
Throughout my testing career, I've often heard this dreaded statement. Whatever the causes behind it, I think there will always be times when you just won't get any formal documentation or specification for how a piece of software is supposed to work. Not to mention the equally (or perhaps more) evil times when the documentation is completely obsolete and no longer reflects reality.
When I run into this, my work commonly falls into a pattern consisting of four phases.
Investigate the code
Here I spend some time up front getting an understanding of what the software currently does and how it behaves. A good chunk of that time goes to reading through and reviewing the code itself. I will also typically write a very basic (straight-line) test to make it easier to trace through the code in a debugger, which gives me a better feel for the program's flow and organization.
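As a sketch of what such a straight-line test might look like: a single happy-path call with no branching, so stepping through it in a debugger follows one predictable flow. The function `parse_record` and its input format are hypothetical stand-ins for whatever undocumented code is being investigated.

```python
def parse_record(line: str) -> dict:
    # Stand-in for the real, undocumented function under investigation.
    name, value = line.split("=", 1)
    return {"name": name.strip(), "value": value.strip()}

def test_parse_record_happy_path():
    # One straight-line scenario: no loops, no branches, easy to step through.
    record = parse_record("timeout = 30")
    assert record["name"] == "timeout"
    assert record["value"] == "30"

if __name__ == "__main__":
    test_parse_record_happy_path()
    print("straight-line test passed")
```

Setting a breakpoint at the top of the test and stepping inward gives a guided tour of the call path without any test-framework machinery getting in the way.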
Write the tests
Once I have some understanding of what the code is supposed to do, I start cranking out test cases. Throughout the investigation, and while I'm writing and running the tests, I'm also communicating with the developer (and the PM, if there is one) to answer specific questions about the behavior.
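A minimal sketch of how those tests can pin down observed behavior while flagging the open questions for the developer. `normalize_path` and its rules are hypothetical; the point is that each test records what the code currently does, and the comments mark what still needs a human answer.

```python
def normalize_path(path: str) -> str:
    # Stand-in for undocumented code: collapses repeated separators
    # and strips any trailing slash (except for the root).
    while "//" in path:
        path = path.replace("//", "/")
    return path.rstrip("/") or "/"

def test_collapses_duplicate_separators():
    assert normalize_path("a//b///c") == "a/b/c"

def test_strips_trailing_slash():
    # Observed behavior, not specified behavior: is a trailing slash
    # ever significant? Question for the developer/PM before filing a bug.
    assert normalize_path("a/b/") == "a/b"

def test_root_is_preserved():
    assert normalize_path("///") == "/"

if __name__ == "__main__":
    test_collapses_duplicate_separators()
    test_strips_trailing_slash()
    test_root_is_preserved()
    print("all behavior tests passed")
```

Tests written this way double as a record of the questions asked and the answers received, which pays off in the documentation phase later.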
Resolve the disputes
After I've written a chunk of tests, there are undoubtedly a number of areas where a bug may or may not exist, depending on your point of view. Since I've written my tests based on my conception of how the software should behave (and the developer has done likewise with the code), there will be areas where my test expects one behavior but the program does something else. Without a specification to arbitrate, this can become a heated battle. Since the developer has already written the code (and is undoubtedly working on something else by now), they will be reluctant to go back and change it on the whim of a tester. This is where your persuasion and diplomacy skills come into play, as you need to build a case and argue on behalf of the customer.
Document the findings
After the tests are finished and the stability of the software has solidified, I typically go back and provide some form of documentation. At the very least, I make sure to document specific oddities in comments alongside my tests. Depending on the area, I may also provide some higher-level information to co-workers. For example, after working with the (undocumented) custom file system used on the Xbox, I created a basic table outlining its characteristics compared to other common file systems, including information useful to my colleagues such as maximum filename length, maximum file size, and so on. At the other end of the documentation scale, you could always go and write the specification yourself, which would definitely help improve the maintainability of the system.
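At the lightweight end of that scale, the observed characteristics can live right next to the tests, along with a note on how each one was determined. Everything here is a hypothetical placeholder (the names, the numbers, and the evidence strings), not real data from any particular file system.

```python
# Hypothetical example of recording observed (not specified) limits
# alongside the tests, so the evidence survives with the numbers.
OBSERVED_LIMITS = {
    # key: (observed value, how it was determined)
    "max_filename_length": (42, "create succeeded at 42 chars, failed at 43"),
    "max_path_depth": (16, "mkdir failed at depth 17"),
    "case_sensitive": (False, "open('FOO') returned file created as 'foo'"),
}

def describe_limits(limits: dict) -> str:
    """Render the observed limits as a small plain-text table for co-workers."""
    lines = [f"{name}: {value}  ({evidence})"
             for name, (value, evidence) in limits.items()]
    return "\n".join(lines)

if __name__ == "__main__":
    print(describe_limits(OBSERVED_LIMITS))
```

Keeping the "how it was determined" column forces you to distinguish measured behavior from assumption, which is exactly the distinction a future specification writer will need.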