The reason this is a complex issue is that system documentation serves different functions for different stakeholders, and these agendas can conflict. There is also the inherent problem that, in a developing system, documentation goes out of date almost as fast as you can write it.
On a quick count, the various stakeholders use documentation for the following purposes:
- To prove they are producing something of value
- To get client ‘sign-off’ on the requirements
- To minimise the verbal brief to engineers (engineers usually prefer reading to talking!)
- To understand the business requirements
- To have a ‘job sheet’ for their daily work
- To prove that their implementation satisfies the requirements
- To clarify technical details such as which database table should hold which data
- To understand another engineer’s implementation after the fact
- To have a record of the system that we’ve agreed to build
- To represent part of their intellectual property
- To help with system ‘portability’ if they need to change developers
- To track progress of deliverables against commitments
- To know when the project is ‘complete’
- To see how something should be implemented and write the test plan
- To compare intended implementation with actual implementation and spot gaps
- To identify bugs or issues where implementation is not as intended
Whew! So you can see that the poor old documentation is a servant of many masters and it is certainly hard to keep everyone happy. As Agile practitioners we respect the Agile Manifesto's principle of measuring progress in terms of working software, but as a commercial agency we respect the need for clients to understand what they are buying and for the staff team to have a clear brief. At the same time we need to make the whole enterprise as efficient as possible so that clients are not paying unnecessary overheads.
How to square the circle
Most of all, we keep our documentation in line with Agile best practice as far as possible, meaning that, with one exception, we attach all documentation artifacts to a user story. The exception is the BRD (Business Requirements Document), which is our first written record of what the client wants.
The analyst writes the BRD as soon as they feel that the client has sufficiently articulated a business need that can be addressed as a discrete project. In SSA world this usually corresponds to a requirement for a web application or a mobile app. Or it might correspond to a programme of development with a single unifying theme, although in the latter case further BRDs will be created as individual modules of development take shape.
The BRD should be written as a starting point for any audience member – any role in the above list can read the BRD and should know, pretty much, what the resulting project is trying to achieve. In particular we try to cover five things:
- Business need – high level description of the business “pain point”
- User roles – who are the roles in the business system
- As-is process – what is the business currently doing, often using process flowcharts
- To-be process – what does the analyst recommend should be done to improve
- Solutioning – high level description of the expected solution, often using a few UI wireframes
This document should ‘set the scene’.
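As a rough illustration, the five sections above could be captured in a simple skeleton generator. The section names come from the list above; the generator itself is a hypothetical sketch, not SSA's actual template.

```python
# Hypothetical sketch: generate a BRD skeleton covering the five
# sections listed above. Section names follow the text; everything
# else is an illustrative assumption.
BRD_SECTIONS = [
    "Business need",
    "User roles",
    "As-is process",
    "To-be process",
    "Solutioning",
]

def brd_skeleton(project_name):
    """Return a plain-text BRD outline for the analyst to fill in."""
    lines = [f"BRD: {project_name}", ""]
    for section in BRD_SECTIONS:
        lines.append(f"{section}:")
        lines.append("  (to be completed by the analyst)")
        lines.append("")
    return "\n".join(lines)

print(brd_skeleton("Customer portal"))
```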
Elaborating the story
The rest of the documentation we write is attached to a user story and user stories are grouped under epic stories in the usual Agile fashion. Specifically, against a user story we write:
The story description
In the usual format: As a [user type] I can [do an action] so that [I get a business benefit]
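To make the template concrete, here is a minimal sketch that renders the format above; the function name and the example values are hypothetical, chosen only for illustration.

```python
# Hypothetical helper rendering the standard user-story format.
# The function name and example values are illustrative assumptions.
def story_description(user_type, action, benefit):
    return f"As a {user_type} I can {action} so that {benefit}"

print(story_description(
    "registered customer",
    "download my past invoices",
    "I can keep my own financial records",
))
# → As a registered customer I can download my past invoices
#   so that I get a business benefit: keeping my own financial records
```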
The acceptance criteria
This is usually a bulleted or numbered list that corresponds to how the story should be implemented to satisfy the client. I’ve gone into more detail about Acceptance Criteria here.
The test checklist
Again, a bulleted or numbered list that tells the tester what should be tested. This is like a lightweight test script, and we judge the right level of detail on a case-by-case basis. It is usually a collaborative job, with the analyst writing the "happy path" (i.e. the feature working correctly) plus one or more "failure cases". Then the engineer who implemented the feature reviews the test checklist and expands it if there are code coverage gaps. In other words, the engineer needs to be happy that the cases on the checklist fully 'cover' the various paths through the code.
We've found that annotating the test checklist with PASSED/FAILED tags lets it double as the script output. Admittedly this is a cheap and cheerful approach to testing, suitable for Agile web app development, but in more critical scenarios a more structured approach is needed, with the accompanying overheads.
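The checklist-with-tags idea can be sketched in a few lines of code. This is a hypothetical model, assuming a simple list of items that start untested and are later tagged PASSED or FAILED; none of the names reflect SSA's actual tooling.

```python
# Hypothetical sketch: a test checklist whose items are tagged
# PASSED/FAILED, so the same document doubles as the script output.
# All class and method names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    description: str
    result: str = "UNTESTED"  # later tagged "PASSED" or "FAILED"

@dataclass
class TestChecklist:
    story: str
    items: list = field(default_factory=list)

    def add(self, description):
        self.items.append(ChecklistItem(description))

    def record(self, index, passed):
        self.items[index].result = "PASSED" if passed else "FAILED"

    def report(self):
        return "\n".join(f"[{i.result}] {i.description}" for i in self.items)

checklist = TestChecklist("As a customer I can reset my password")
# Analyst writes the happy path plus a failure case...
checklist.add("Happy path: reset link emailed and new password accepted")
checklist.add("Failure case: unknown email address shows a friendly error")
# ...and the tester records the outcomes.
checklist.record(0, True)
checklist.record(1, False)
print(checklist.report())
```

The point of the design is that there is only one artifact: the brief the tester works from and the record of results are the same document.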
With a diverse team of engineers, many working remotely, good communication is critical, so we try to make visuals for anything that can be drawn. A picture is definitely worth a thousand words when briefing a software engineer!
Enough is enough
In summary, we call this the 'elaboration' of the story, and the litmus test is asking both the analyst and the engineer whether they have enough information to proceed. If yes, then we go ahead in the knowledge that clarifications and queries will arise, but they'll be in the manageable 5-10% bracket.
Ditto for UAT. The client will no doubt give feedback, but if the above documentation is agreed up-front, the feedback can be managed within the contingency built into the process. Find out more about the SSA project process on our website.