Turn your documentation writers into full-time software testers!

Do you want to increase your number of in-house testers for free? It’s hardly alchemy, but a minimal time investment can turn your documentation writers into full-time software testers.

Or does that sound too good to be true? Customer-facing organisations that create instructional materials on using their software already have individuals performing many of the same tasks as manual testers. Minimal training can enable these same documentation teams to perform effective functional, exploratory, and usability testing.

For overworked test teams, that means more people testing.

Meanwhile, documentation teams avoid constantly reworking materials they have already created, because defects are resolved before they are reflected in their guides and videos. Together, that spells software with fewer bugs, and an all-round better user experience.

Making the case for the Docu-tester!

This article compares the skills required to perform basic manual testing with those already possessed by documentation teams. It makes the case that providing minimal training to documentation teams can enable them to perform effective testing, both exploratory and functional. Finally, it argues for including documentation teams fully in the sprint cycle, in a ‘shift left, shift right’ approach that facilitates pre-release user feedback.

Where skills overlap

Documentation teams already perform many of the same tasks as manual testers, requiring only minimal additional training to test. This is reflected in the simplified comparison below.

To document a system, I must:

  • understand the steps a user is expected to perform, to achieve a given expected result
  • perform the steps myself, entering data to follow end-to-end journeys through the system
  • document the process as I go, producing written step-by-step guides and videos.

To test a system I must:

  • understand how a system will be exercised by users, and the associated expected results
  • enter data to perform these user actions, comparing the actual results to the expected
  • classify and log defects where the actual results deviate from the expected.

In practical terms, steps one and two are equivalent. The major difference, therefore, is the output – whereas documentation teams produce written guides and videos, testers produce tickets and bug cards.

Though documentation teams do not formally log defects, they frequently perform the thinking behind them.

They often find themselves returning to development teams to ask questions like, ‘I cannot find this, should it be here?’, or ‘I followed these steps, but I get this result’.

Documentation teams are therefore already testing the systems they document, and are reporting bugs via email, chat, or in person.

They are furthermore logging their actions as they go, with screenshots, text, and video. In other words, they are creating what could be highly detailed bug reports, setting out the exact steps required to reproduce a defect in their environment.

Showing documentation teams where to log these detailed defect reports will equip them with testing skill number three described above.

Meanwhile, brief training in classifying defects will sharpen documentation teams’ eye for bugs of various types, helping them detect more defects before systems reach the end user.

Where is ‘docu-testing’ most effective?

Documentation teams will, of course, not be suited to every type of testing. They are not, for instance, already performing back-end or API testing.

Some of the types of tests that documentation teams are already executing are considered below, with explanations as to why documentation teams are readily equipped to perform them.

Functional testing

The documentation task par excellence is creating clear and concise guides on how to use applications.

These guides used to be housed in complete user manuals, enclosed in CD cases; today, they more frequently take the form of online ‘knowledge base’ articles and videos, crammed full of screenshots.

A typical process for creating such documentation runs as follows: I am provided with a video demonstrating a just-developed feature, or an alert to a user story that has just been developed.

This should tell me how, why, and when an end-user would use the new functionality. I then act as the user, performing the steps they would, creating written descriptions and screenshots as I go.

This process essentially converts user stories into test cases, where the ‘how-to’ steps are equivalent to test steps. These are executed as the guide is being written, collecting screenshots along the way.
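
To make the equivalence concrete, below is a minimal sketch of a documented how-to re-cast as a test case. The guide title, steps, and expected results are entirely hypothetical, purely for illustration:

    # A minimal sketch of how a documented 'how-to' maps onto a test case.
    # The guide, its steps, and the expected results are all hypothetical.
    HOW_TO_GUIDE = {
        "title": "Create a monthly report",
        "steps": [
            ("Open the Reports tab", "The Reports screen is displayed"),
            ("Click 'New report'", "The report wizard opens"),
            ("Select 'Monthly' and click 'Run'", "The report is generated"),
        ],
    }

    def as_test_case(guide):
        """Re-cast each documented step as a (test step, expected result) pair."""
        return [{"step": action, "expected": expected}
                for action, expected in guide["steps"]]

    for case in as_test_case(HOW_TO_GUIDE):
        print(f"Step: {case['step']}\n  Expect: {case['expected']}")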

Writing instructional guides is therefore similar to functional testing – insofar as the actions exercised against the system are concerned. In particular, it is ‘happy path’, end-to-end functional testing: documentation teams act as users are intended, with the aim of documenting the complete journeys through the system that users are expected to perform.

Docu-testing, in this regard, is therefore better suited to functional smoke testing just before a release than to exhaustive functional testing.

This end-to-end functional testing will sometimes throw up functional defects. The video or user story provided by development describes how the system should work; when I do not see these expected results, it might be a bug.

Some typical questions documentation teams might return to development include:

  • I followed the steps in your video, but when I clicked this, nothing happened. Have I done something wrong, or could it be the browser I am using?
  • I have set up my files in this way, but when I click ‘run’, I get this result. Have I configured something incorrectly?

Such questions will often be posed via email or chat. Training documentation teams to instead log their questions in systems such as JIRA will enable them to act as manual functional testers. This will introduce all the benefits of defect tracking and reporting provided by testing tools, in roughly the same time already spent sending emails or firing off questions.
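
As an illustration, a minimal sketch of logging such a question directly as a JIRA bug, rather than sending an email, might look as follows. It assumes a JIRA Cloud instance; the URL, credentials, project key, and defect details are placeholders:

    # A sketch of logging a documentation finding as a JIRA defect.
    # Assumes a JIRA Cloud instance; the URL, credentials, and project
    # key below are hypothetical placeholders.
    import requests

    JIRA_URL = "https://your-domain.atlassian.net"
    AUTH = ("docs.writer@example.com", "api-token")  # email + API token

    issue = {
        "fields": {
            "project": {"key": "DOCS"},        # hypothetical project key
            "issuetype": {"name": "Bug"},
            "summary": "Nothing happens when clicking 'Run' on the report screen",
            "description": ("Followed the steps in the feature video.\n"
                            "1. Opened the Reports tab\n"
                            "2. Clicked 'Run'\n"
                            "Expected: report is generated. Actual: no response.\n"
                            "Environment: Firefox 115 / Windows 11."),
        }
    }

    response = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=issue, auth=AUTH)
    response.raise_for_status()
    print("Logged defect:", response.json()["key"])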

UI & usability testing

The overall understanding of a system that is ultimately distilled into concise user guides or articles is relatively comprehensive. I must understand what a user is expected to do, and also why the system has been developed this way.

Documentation writers therefore act as fairly unique stakeholders: they adopt the persona of a new user, but with subject matter expertise of the overall system. However, this understanding is not deeply technical, and does not extend ‘under the hood’ to the back-end systems.

Acting just as a new user would, documentation writers are well-placed to spot usability defects. They might struggle to find the location of a button, feature, or menu, for instance, or might notice when a screen is configured confusingly or is too busy.

This might not be immediately obvious to testers and developers who have seen that same screen hundreds of times, and who understand the technical reason why it is set up that way. The same applies to the terminology used to label features and buttons, which can often be fairly idiosyncratically chosen by developers or requirements gatherers.

The temptation as a documentation writer is to assume that it’s one’s poor understanding of the system that has created the confusion.

However, if something is not immediately obvious or clear, it’s potentially a usability error that will impact end-users in the same way as the documentation writer.

This is especially true for UIs. Documentation teams need to instruct users on where to find buttons and features, and must document the journey through panes, panels, pop-ups and screens. They will directly experience convoluted or over-engineered processes or hard-to-find buttons.

Typical questions that documentation teams feed back to developers and business analysts include:

  • this button is named ‘X’; in most other tools I’ve used, the equivalent feature has been called ‘Y’. I assumed that this button would do ‘Y’
  • would it make sense to include this button on this tab of the main menu, in addition to where it is now?
  • I find myself having to frequently switch between these two windows in order to perform this task. Would it be quicker and more convenient for users to have one view?

Exploratory testing

Gaining the understanding of a system needed to document it often requires the same playful, cogitative curiosity exercised by exploratory testers.

For comprehensive documentation, I need to understand and document every button and every screen, and will frequently click buttons for no reason other than to find out what they do.

This process often exercises combinations of buttons or features in unusual or unexpected ways. It can throw up defects similar to those found during exploratory testing, and questions asked of development in return can include:

  • I clicked off this screen using the highlighted button, and lost what I had been editing. Should we have a pop-up warning?
  • when I clicked these six buttons, in this order, these buttons were disabled. Is this intentional? Why?
  • when I open these three windows at once, this one renders weirdly.

The response in return is often ‘good spot’, reflecting how documentation teams are again well placed to incidentally identify bugs where others might not look.

Regression testing

Documentation needs to be kept up-to-date as systems change, covering new features but also making sure that screenshots and instructions reflect the current system.

This means capturing new screenshots and video after something has changed, following the functional steps exercised when the original documentation was written.

However, I cannot simply begin with the exact screen or location in the system that has been updated, and must re-execute the steps required to reach the point of the change.

Maintaining documentation is therefore substantially comparable to functional regression testing, and can throw up the sorts of functional defects discussed above, as well as integration errors created by changes made in development.
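
Those re-executed steps are also natural candidates for light scripting. Below is a minimal sketch of refreshing a guide’s screenshots by replaying documented steps, assuming Selenium is installed; the URLs and page names are hypothetical:

    # A sketch of re-executing documented steps to refresh screenshots
    # after a change. Assumes Selenium; the URLs below are hypothetical.
    from selenium import webdriver

    STEPS = [  # (screenshot name, documented location in the system)
        ("login", "https://app.example.com/login"),
        ("reports", "https://app.example.com/reports"),
    ]

    driver = webdriver.Chrome()
    try:
        for name, url in STEPS:
            driver.get(url)                        # replay the documented step
            driver.save_screenshot(f"{name}.png")  # refresh the guide's image
    finally:
        driver.quit()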

The case for in-sprint documentation

These are just some of the ways in which documentation writers are performing many of the same tasks that testers perform against a system.

A few easy-to-implement steps can align documentation and QA efforts, effectively creating ‘free’ full-time test equivalents.

These steps might include:

  • training documentation teams on where to log the defects they find. This might already be done via email and chat; ideally, it will be performed directly in test systems like JIRA
  • encouraging documentation teams to use a range of environments when documenting systems. If documentation teams use different browsers and operating systems, then this is a quick win in diversifying the configurations that systems are tested against
  • including documentation teams in sprint meetings and planning sessions.

The rewards for QA and development include:

  • better built-upon requirements. As documentation teams work from user stories and system designs, converting them into step-by-step instructions and videos, they are essentially building on the requirements. This helps to avoid technical debt, keeping systems well documented for future development and on-boarding
  • detection of otherwise unfound bugs. More testing should mean more assurance, while documentation teams furthermore provide a fairly unique persona while testing. They can be more likely to spot bugs that might be found by non-technical users, but not by the tech-savvy testers and developers who create and validate systems.

The relatively small time investment in attending sprint meetings and properly logging defects offers further rewards for documentation teams:

  • less frustrating and time-consuming rework. Changes to existing functionality usually require updates to documentation, laboriously updating screenshots and instructions across potentially vast user guides. Identifying defects before they are developed and reflected in documentation reduces the need to make such changes later
  • easier documentation maintenance. Bringing documentation teams into sprint cycles and planning meetings keeps them up-to-date on changes being made to systems, before they are made. They can therefore identify and plan for the changes that will need to be made to documentation. With documentation teams further using ALM and test tools, documentation can be linked to given user stories, with alerts set up as the system changes (a sketch of one such alert follows this list). The result is less out-of-date documentation piling up, improving the end-user experience
  • better upfront understanding of the system. Closer communication and collaboration with those who design and develop systems provides insights into how applications have been developed, and why. This reduces the need to ask questions and await feedback when producing documentation, speeding up the day-to-day work of documentation teams
  • it’s rewarding! Documentation teams possess a good overall understanding of the systems they work with; however, it is often exercised only after the fact. Providing input during the design phase and seeing the results in the developed systems we work with is rewarding, no matter how small the impact.
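
As a sketch of the alerts mentioned above, a small webhook receiver could flag guides for review whenever a linked user story changes. This assumes JIRA webhooks are configured to POST issue updates to the endpoint; the story keys and guide names are hypothetical:

    # A sketch of alerting documentation owners when a linked story changes.
    # Assumes a JIRA webhook POSTs issue updates here; the story-to-guide
    # mapping is a hypothetical illustration.
    from flask import Flask, request

    app = Flask(__name__)

    DOC_LINKS = {"PROJ-101": "how-to-create-reports"}  # story -> guide

    @app.route("/jira-webhook", methods=["POST"])
    def on_issue_updated():
        event = request.get_json()
        key = event["issue"]["key"]
        if key in DOC_LINKS:
            print(f"Story {key} changed: review the '{DOC_LINKS[key]}' guide")
        return "", 204

    if __name__ == "__main__":
        app.run(port=5000)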

These benefits are achievable with a small time investment by documentation teams. By simply letting documentation teams know where to input the questions they are already asking, you can effectively create ‘free’ full-time test equivalents!

Tom Pryce, manager, Curiosity Software

 
