Testing of Right Decision Service (RDS) toolkits

A key part of the development process for Right Decision Service toolkits is repeated testing and validation. This SOP outlines the three testing processes that knowledge managers and product owners for RDS toolkits will carry out to confirm that their toolkit is fit for purpose and clinically safe.

Functional testing

This involves planning, assembling and running carefully chosen test cases to ensure that decision support system (DSS) functionality and content delivery operate as intended. You will usually want at least 5 users to run through the test cases, to capture issues arising from different user behaviours and assumptions.

Test cases are based on scenarios that check whether the intended functionality of your software operates correctly. A test case is a set of steps to be carried out to check whether the functionality works as expected. The number and variety of test cases should be proportionate to the complexity of the toolkit. Toolkits comprising purely static content are lower risk than those with interactive functionality such as question and answer sequences.

Examples of test cases, in increasing order of complexity:

 

Example test case ID 1

Step 1. Enter the correct URL for the homepage of the DSS toolkit.

Result: Check that the homepage loads with tiles representing the required sections of the website.
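If you later choose to automate test cases like this one (see the note on automated test cases further down), a minimal sketch in Python using Selenium might look like the following. The URL and the CSS selector used to identify the homepage tiles are placeholders and will depend on how your own toolkit is built.

```python
# Minimal sketch of an automated version of example test case ID 1.
# The URL and the ".tile" CSS selector are placeholders - adapt them to your toolkit.
from selenium import webdriver
from selenium.webdriver.common.by import By

TOOLKIT_HOMEPAGE = "https://example.org/your-toolkit/"  # placeholder URL

driver = webdriver.Chrome()
try:
    # Step 1: enter the URL for the homepage of the DSS toolkit.
    driver.get(TOOLKIT_HOMEPAGE)

    # Result: check that the homepage loads with tiles for the required sections.
    tiles = driver.find_elements(By.CSS_SELECTOR, ".tile")
    assert len(tiles) > 0, "No section tiles found on the homepage"
    print(f"Test case 1 passed: {len(tiles)} tiles found")
finally:
    driver.quit()
```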

 

Example test case ID 2 – using visual pathway tool

Step 1. Click on link to access a visual pathway.

Step 2. Click on the “I” information icon within a node in the pathway.

Result: Check that the information panel loads with all text visible and formatted as intended.

Note: for toolkits that use static content it is often not realistic to carry out test cases for every page of content, and you may choose to test just a cross-section of content types.

 

Example test case ID 3 – using question and answer tool

Step 1. Enter the correct URL to navigate to the Lower Urinary Tract Assessment Tool for people over 65.

Step 2. Click on the “Get Started” button.

Result: Check that the page loads with the question “Is a urinary catheter present?”

Step 3. Select the “Yes” radio button.

Result: Check that the page loads with the correct list of symptoms for patients with a urinary catheter in place.

 

Example test case ID 4 – using clinical calculator

Step 1. Enter the correct URL to navigate to the NEWS2 calculator (Early Warning Score calculator).

Steps 2-9: These steps prompt selection of specific ranges or values from lists for respiratory rate, oxygen saturation scale, oxygen saturation, breathing with air or oxygen, systolic blood pressure, pulse, consciousness, and temperature.

Result: Check that the page loads with the correct risk score and recommendations for the values selected.

 

For algorithmic DSS, such as those produced by the Question and Answer tool and clinical calculators, you will often need to work through many permutations of input values to produce a complete set of test cases. You can ask the Right Decision Service team for help and advice. In some cases, it may be advisable to seek technical support to carry out automated test cases.
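As a rough illustration of what working through the permutations can look like, the sketch below uses Python's itertools.product to enumerate combinations of input bands for a calculator-style tool. The parameter names and bands are illustrative placeholders, not the actual NEWS2 inputs; the expected score and recommendation for each combination should come from the authoritative guidance the calculator is built on.

```python
# Sketch: enumerating permutations of input bands to build a set of test cases
# for an algorithmic DSS such as a clinical calculator.
# The parameter names and bands below are illustrative placeholders only.
from itertools import product

parameter_bands = {
    "respiratory_rate": ["<=8", "9-11", "12-20", "21-24", ">=25"],
    "oxygen_saturation": ["<=91", "92-93", "94-95", ">=96"],
    "air_or_oxygen": ["air", "oxygen"],
    "consciousness": ["alert", "not alert"],
}

# Each combination of bands becomes one test case. The expected score and
# recommendation for each case should be taken from the source guidance and
# then compared with what the tool actually displays.
test_cases = [dict(zip(parameter_bands, values))
              for values in product(*parameter_bands.values())]

print(f"{len(test_cases)} test cases generated")  # 5 x 4 x 2 x 2 = 80 here
```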

A spreadsheet listing your test cases and results of testing is a good way to document your functional testing and to quickly identify any issues.  An example of test cases for a Question and Answer tool is embedded below.
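As a minimal, self-contained sketch of such a spreadsheet, the snippet below writes a test case log to a CSV file that can be opened in Excel; the column names and the single example row are suggestions only.

```python
# Sketch: a simple CSV test case log that can be opened as a spreadsheet and
# used to record the outcome of each functional test.
import csv

# Column names are suggestions only - adapt them to your toolkit and process.
fieldnames = ["test_case_id", "steps", "expected_result",
              "actual_result", "pass_fail", "tester", "notes"]

rows = [
    {"test_case_id": 1,
     "steps": "Enter the URL for the toolkit homepage",
     "expected_result": "Homepage loads with tiles for each required section"},
]

with open("functional_test_cases.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    for row in rows:
        # Columns not supplied (actual result, pass/fail, tester, notes) are
        # left blank for the tester to complete during the test run.
        writer.writerow(row)
```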

Test case template example

Note that the core functionality of the underpinning Right Decision Service platform – e.g. search and browse – has already been tested at development stage. Your testing should focus on the content and functionality you create using the RDS tools.

Usability testing

Usability testing aims to:

  • Identify problems in the design of the product or service
  • Uncover opportunities to improve
  • Learn about the target user’s behaviour and preferences.

End-users are provided with instructions to carry out specific tasks. Their actions and perceptions are recorded as they perform the tasks.

You can carry out usability testing in person, sitting beside the user as they work through tasks, and asking them to talk through their actions and thoughts as they go.  Or you can conduct usability testing remotely, for example via MS Teams.

Another option is to carry out unfacilitated testing, using an online questionnaire where users can record their experiences and their evaluation of the system's usability. The System Usability Scale (SUS) and the Mobile App Rating Scale (MARS) are validated tools for usability testing.
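For reference, the System Usability Scale is scored from ten items, each rated 1 (strongly disagree) to 5 (strongly agree). A minimal sketch of the standard SUS calculation is shown below; the example responses are invented for illustration.

```python
# Sketch: calculating a System Usability Scale (SUS) score from one user's
# responses to the ten SUS items (each rated 1-5).
def sus_score(responses):
    """Standard SUS scoring: odd-numbered items contribute (response - 1),
    even-numbered items contribute (5 - response); the total is multiplied
    by 2.5 to give a score out of 100."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each between 1 and 5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Example responses, invented for illustration.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```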

Usability testing does not need to involve a lot of time or resource. Nielsen advises that 70-80% of usability problems can usually be identified by only 5 users, depending on the complexity of the software.

You can find out more about usability testing at the UK Government usability testing website and at the Nielsen Norman Group user experience website.

End-user acceptance testing

User acceptance testing (UAT) involves testing software in the real world by its intended audience. It is often the last stage in the development process before going live with your decision support tool.

Key stages in user acceptance testing include:

  1. Identify and define real-world test scenarios. These should be based on an understanding of user needs and the context in which users will use the decision support tools. You may also want to encourage users to carry out their own testing using their own scenarios, as they may uncover real-world situations you have not thought of or prioritised.
  2. Select the testing team. You may invite only a select number of end users to test the software, or you may open up testing to more participants by offering a free trial over the web.
  3. Test and document. End-users should be provided with a template or online form to enable them to carry out the testing and log any potential bugs or other issues. The template or form should prompt the user to enter details that will help you reproduce the error, e.g. which browser, device or operating system they are testing on, which links or downloads did not work, and any error messages they encountered (see the sketch after this list).
  4. Update the software, retest and sign off. The development team analyses the testing results, resolves any bugs, makes the agreed changes and then retests. Once the software meets the users' criteria, you can sign off on the changes.
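As a rough sketch of the kind of fields a UAT issue log template or form might capture (the field names are suggestions only, not part of the RDS platform), one record could look like this:

```python
# Sketch: fields a UAT issue log template or online form might capture.
# Field names are suggestions only - adapt them to your toolkit and process.
from dataclasses import dataclass

@dataclass
class UatIssue:
    tester: str
    date: str
    browser: str            # e.g. "Chrome 116"
    device: str             # e.g. "iPhone 12", "NHS laptop"
    operating_system: str   # e.g. "Windows 10", "iOS 16"
    page_or_link: str       # where the problem occurred
    steps_to_reproduce: str
    error_message: str      # any message displayed, verbatim
    severity: str           # e.g. "blocking", "minor", "cosmetic"

# Invented example entry for illustration.
issue = UatIssue(
    tester="Test user", date="2023-09-04", browser="Chrome 116",
    device="NHS laptop", operating_system="Windows 10",
    page_or_link="Question and answer sequence, question 2",
    steps_to_reproduce="Selected 'Yes'; the next page did not load",
    error_message="", severity="blocking",
)
print(issue)
```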

In practice, for simpler toolkits it may be pragmatic to combine the tasks for usability and user acceptance testing into one exercise, as illustrated in this sample user testing guide.

 

Editorial Information

Last reviewed: 04/09/2023

Next review date: 30/04/2024

Author(s): Ann Wales.

Version: 1.0

Author email(s): Ann.Wales3@nhs.scot.

Approved By: Healthcare Improvement Scotland Evidence Directorate Senior Management Team

Reviewer name(s): Ann Wales.