Test Automation as Documentation
A novel way to document your system that actually works.
How do you document units, modules, libraries, and APIs in your system, so that people know how to use them? Note that for the purposes of this discussion we are not referring to code comments, which are mainly directed at the team constructing and maintaining the module, but rather to client documentation for those who will use it.
While there is general agreement in the field that modules, APIs, and systems should be documented, there is much less agreement on what form the documentation should take and what level of documentation is enough. And even the most well-intentioned documentation efforts sooner or later end up being a struggle against entropy. If you document too lightly, the documentation cannot serve its purpose: developers refrain from using modules that are not well documented, resulting in wasted effort, duplicate code, and increased maintenance costs. On the other hand, documentation needs maintenance as well, and if it is heavy, it imposes its own costs. There is a temptation to think of the documentation as static and, as a result, to underestimate the cost associated with it. But in reality, documentation cannot be any more static than the code that underlies the module. As a result, teams often make the mistake of creating heavy documentation once, then learn from the painful experience of maintaining it that the effort does not pay off. In the end, most teams tend to document too lightly, especially since the direct cost of doing so is often borne by their clients instead.
Let's for a moment accept the axiom that documentation will not be static, and see what requirements we need to satisfy in order to make both the clients and the maintainers of a module efficient:
- As a client, I want to read only the parts of the documentation that are relevant to my needs.
- As a client, I want to be notified when something that is relevant to me has been updated.
- As a client, I want an example to follow when I am building my system on top of the module.
- As a client, I want a trivial way of learning how to initialize a module (this is mainly the purpose of the typical “Hello World” example).
- As a maintainer, I want to invest as little extra effort in the documentation as possible.
- As a maintainer, I want to have an easy way of detecting when the documentation is out of sync with the functionality.
- As a maintainer, I want to have an easy way of versioning the documentation with the implementation.
A scan of the above list should make one realization stand out: the issue is not so much the effort needed to adjust the relevant documentation, but rather the effort needed to identify what has changed. Most of the maintenance changes made to modules that have been in production for a while tend to be bug fixes, changes to non-functional requirements (e.g., better performance or improved logging), or additions to functionality. Of those, only added functionality needs to be documented. Changes to the desired behavior of existing functionality can happen, but they happen at a significantly reduced rate. The issue then becomes the following:
- As a client, do I need to re-read the documentation on every release, to see if something of interest has changed? (corresponds to item 2 above). Release notes save some of the unnecessary work, but do not make this problem go away: every time there is a change in a module of interest, or in one of its dependencies, the behavior could have changed.
- As a maintainer, do I now have to scan the entire documentation to see if something needs to be adjusted? I am likely to spend a lot of time only to verify that nothing needs to be adjusted (corresponds to item 6 above).
In essence, we need a way of automatically detecting when a meaningful behavior change has occurred. And hence the unconventional suggestion: this is exactly what test automation is for! I would propose that the independent documentation should be generic enough to give the big picture. This is unlikely to ever change, and if the big picture ever does change, the change will be significant enough that it is practically impossible to miss. The detailed documentation on how to use a module in practice, however, should really just be left to the actual automated tests. A developer, armed with the big picture, can simply scan the module's test suite for a use case that looks similar to what (s)he has in mind. The test set-up already contains the right steps to initialize the module, and the test case itself provides a detailed example of how to accomplish the task at hand. Even the exception conditions are captured in negative test cases, provided the test automation is complete. Of course, this proposal assumes a certain level of coverage in the test automation, but any behavior that is important enough to need documentation should really be tested for in the automated tests.
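To make this concrete, here is a minimal sketch of a test suite doing double duty as documentation. It uses pytest against a hypothetical `payments` module; `PaymentClient`, its methods, and `InsufficientFundsError` are illustrative names, not an existing library.

```python
import pytest

# Hypothetical module under test; the names below are placeholders.
from payments import PaymentClient, InsufficientFundsError


@pytest.fixture
def client():
    # The fixture doubles as the "Hello World": it shows a new consumer
    # exactly how to initialize the module...
    client = PaymentClient(api_key="test-key", sandbox=True)
    yield client
    # ...and the tear-down shows the required cleanup.
    client.close()


def test_charge_a_card(client):
    # A complete usage example that a client can copy almost verbatim.
    receipt = client.charge(card="4242-4242-4242-4242", amount_cents=1500)
    assert receipt.status == "captured"
    assert receipt.amount_cents == 1500


def test_charge_rejects_insufficient_funds(client):
    # Negative cases document the exception conditions a caller must handle.
    with pytest.raises(InsufficientFundsError):
        client.charge(card="4000-0000-0000-9995", amount_cents=1_000_000)
```

Let's examine how this proposal matches up with the requirements from above: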
- As a client, I want to read only the parts of the documentation that are relevant to my needs: The client accomplishes this by reading the big-picture documentation, plus the test cases that are relevant. (S)he may have to scan the description of each test case the first time to identify the relevant ones, but this should only be a one-time cost.
- As a client, I want to be notified when something that is relevant to me has been updated: The client can accomplish this by assembling a test suite of the test cases that are of interest, and checking whether any have changed on each release. If so inclined, (s)he can even run the test cases and make sure they still succeed (they always should; otherwise the maintainer has failed in their duties). (S)he can even go a step further: if (s)he depends on behavior that is not “documented” (i.e., there is no automated test for it), (s)he can actually write an automated client test. This way, if the maintainer of the module at some point changes that behavior, the client will find out on release (a minimal sketch of such a client-side test follows this list).
- As a client, I want an example to follow when I am building my system on top of the module: this is trivially provided by the test case itself.
- As a client, I want a trivial way of learning how to initialize a module (this is mainly the purpose of the typical “Hello World” example): this is also trivially provided in the set-up methods of the test cases. Note that cleanup is similarly provided in the tear-down methods.
- As a maintainer, I want to invest as low an amount of extra effort in the documentation as possible: The extra effort is now limited to the big-picture documentation, which should be quite small.
- As a maintainer, I want to have an easy way of detecting when the documentation is out of sync with the functionality: trivially provided, since any relevant change in behavior will show up as a test failure.
- As a maintainer, I want to have an easy way of versioning the documentation with the implementation: only an issue for the big-picture documentation which practically never changes (you are versioning your test cases along with your code, right?).
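As promised above, here is a minimal sketch of a client-written "pin" test that codifies a behavior the client relies on but that has no automated test upstream. The `textutils.slugify` dependency is hypothetical; substitute whatever module and behavior your system actually depends on.

```python
# Hypothetical dependency; this import is a placeholder for your real one.
from textutils import slugify


def test_slugify_collapses_whitespace_as_our_routing_assumes():
    # Our URL routing assumes that runs of whitespace collapse to a single
    # dash. If a future release of the dependency changes that behavior,
    # the failure shows up here, on our side, at release time.
    assert slugify("hello   brave  world") == "hello-brave-world"
```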
Note that there are glimpses of this technique in REST API specification frameworks (e.g., Swagger), specifically in their tendency to use a single specification to generate both the documentation and the tests. The technique, however, is much more broadly applicable, and can be used all the way from unit tests to the user interface. In fact, APIs are the one area where the technique tends to add the least value, since API behavior tends to change infrequently (think backwards compatibility with external clients). Even so, these toolkits make little use of the power of the proposed technique: in reality you still have to write fairly elaborate descriptions as part of the “documentation” portion of the specification, and the specification cannot serve as an example skeleton.