
In the technology world, quality assurance exists to ensure the highest standards of digital quality. Yet, all too often, quality is sacrificed for speed as organisations rush new products to market without taking the necessary steps or applying critical safeguards. In this article, Rob Mason, CTO of Applause, outlines a framework that internal teams can use to achieve comprehensive digital quality and deliver exceptional user experiences.
Today, organisations tend to rely on a variety of measurements and KPIs to assess digital quality across multiple dimensions. Despite this, most quality assurance (QA) teams still have an incomplete picture of how well they’re performing. To address this gap, we’ve conducted several studies over the past few years analysing trends in software development, testing and QA to benchmark the state of digital quality. This research gave us the launch pad we needed to introduce frameworks that help organisations improve overall digital quality while accelerating releases and boosting efficiency.
These frameworks allow organisations and QA teams to benchmark their current capabilities and map out an improvement plan. Although it’s widely acknowledged that digital quality should be at the core of QA and testing processes, only a select few teams have established cultures where quality is thoroughly embedded into the DNA of their organisation. For example, we found that less than one-third of organisations have comprehensive documentation in place for test cases and test plans, even though most QA teams recognise the value of clearly defined test methodologies, accurate documentation, testing and feedback throughout the software development lifecycle (SDLC).
Achieving the Goal of Digital Quality
When it comes to test cases, the average rate of failure is around 20%, which leaves room for improvement on each release and test cycle, especially when new code and features are introduced. Quite often, organisations are so focused on releasing new products quickly that they don’t take time to set themselves up for long-term success. Practices like keeping code clean, writing good test cases, documenting test run results and leveraging data to focus efforts allow development and QA teams to be more effective and efficient. Ideally, digital quality should focus less on finding defects and more on creating systems and processes at all stages of development that prevent them from occurring at all.
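As a simple illustration of the failure-rate figure mentioned above, the metric can be computed directly from documented test run results. The record structure and test case names below are hypothetical, intended only as a sketch of how a team might track this per cycle:

```python
# Hypothetical test-run records: (test_case_id, passed) pairs for one cycle.
def failure_rate(results):
    """Return the fraction of test cases that failed in a cycle."""
    if not results:
        return 0.0
    failures = sum(1 for _, passed in results if not passed)
    return failures / len(results)

cycle = [("login-001", True), ("checkout-002", False),
         ("search-003", True), ("profile-004", True), ("cart-005", False)]
rate = failure_rate(cycle)
print(f"Failure rate this cycle: {rate:.0%}")  # 2 of 5 failed -> 40%
```

Tracking this number release over release is what makes the 20% benchmark actionable: a rising rate after new features land is a signal to revisit test design before shipping.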
Ensuring that digital quality can be applied across every aspect of developing digital experiences is difficult to achieve and even more difficult to maintain. But it’s a goal worth pursuing; having a framework that charts each stage of development – from coding to test cases and finally release – will help organisations assess their progress toward achieving comprehensive digital quality.
Functional Testing Frameworks
We can look at this through the prism of functional testing, the goal of which is to ensure that the software works according to specifications and is in line with user expectations. A ‘functional testing framework’ should cover four different stages of digital quality, spanning the most basic level that we can refer to as ‘Digital Quality Emergence’, through to the most sophisticated – ‘Digital Quality Excellence’. To illustrate this, the four stages are outlined below.
- Digital Quality Emergence: A situation characterised by a lack of consistent systems, processes and documentation. Even though individual team members may have their own methods and documentation, the organisation has no consistent methodology or approach to quality. Testing activities and processes might include ‘dogfooding’ (whereby employees and their family and friends are enlisted to test), running tests without documenting test cases or test run results, or even skipping test case documentation altogether.
- Digital Quality Essentials: This level refers to organisations in the early stages of defining and documenting processes and procedures, while establishing some consistency and structure around testing. Individual teams may have their own processes, but efforts are often still siloed. Organisations at this level may focus on documenting test cases for feature-based tests, while also ensuring that test cases are written clearly. They may also test releases in pre-production, perform exploratory tests for new features/app changes, and automate frequently executed/rarely changed tests.
- Digital Quality Expansion: Organisations at the ‘expansion’ level have clear processes and a broad range of testing types and reporting in place, reinforced by a focus on coverage, scalability and efficiency across the organisation. Testing activities and processes for this stage would include testing user acceptance and UX for new features/app changes, leveraging test automation for repetitive tests (i.e., reviewing and updating automation scripts regularly while also documenting test cases/suites for all features), and crucially, measuring quality KPIs with data and reporting.
- Digital Quality Excellence: ‘Excellence’ is the point where quality is embedded in the organisation’s DNA and built into all products and experiences from end to end. Examples of testing practices at this level include testing throughout the SDLC, in-sprint and in staging/pre-production; incorporating the voice of the customer into product design and development; and maintaining a strong test case management process. Organisations at this level also review and refine testing processes regularly and use reports to analyse trends. Teams that have achieved ‘Excellence’ stay attuned to a variety of metrics to continuously improve digital quality.
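The four stages above lend themselves to a simple self-assessment: check which practices are consistently in place, and the highest stage whose practices are all covered is roughly where the organisation sits. The sketch below is a hypothetical reading of the framework, not an official scoring method, and the practice labels are invented shorthand for the activities described in each stage:

```python
# Hypothetical mapping of each maturity stage to its characteristic practices.
STAGE_PRACTICES = {
    "Essentials": {"documented feature test cases", "pre-production testing",
                   "exploratory testing", "automation of stable tests"},
    "Expansion": {"UX and user-acceptance testing", "maintained automation suites",
                  "test cases documented for all features", "KPI reporting"},
    "Excellence": {"in-sprint testing", "voice-of-customer input",
                   "test case management process", "regular process reviews"},
}

def current_stage(practices):
    """Return the highest stage whose practices are all in place.

    An organisation with no consistent practices is at 'Emergence'.
    """
    achieved = "Emergence"
    for stage in ("Essentials", "Expansion", "Excellence"):
        if STAGE_PRACTICES[stage] <= practices:  # all required practices present
            achieved = stage
        else:
            break  # stages build on each other, so stop at the first gap
    return achieved
```

Treating the stages as cumulative reflects the framework’s intent: an organisation can’t meaningfully measure KPIs (Expansion) without the documented test cases (Essentials) that feed them.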
A Culture of Continuous Improvement
While these frameworks can dramatically improve QA results, digital quality is more than just a set of guidelines – it’s about investing in a process of continuous improvement. For this reason, QA teams should consistently revisit each process to review why it’s in place, whether it’s still necessary and whether it can be improved. A focus on building repeatable, scalable systems and processes will lay the foundation for great digital quality. But constant testing and feedback collection are also integral to this process. The earlier you start, the easier it is to adjust, adapt and evolve.
Additionally, teams need to use a combination of metrics to assess the effectiveness of their QA programs and overall digital quality. Simply put, you can’t improve what you can’t measure. Some examples include tracking device coverage and test cases by area, the percentage change in quality issues month over month or year over year, the time required to complete a regression suite, and so on. Factor in pre-defined KPIs, and organisations have a better readout of where to improve their development strategy, test coverage and frequency, and more.
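Two of the metrics just mentioned can be derived straight from ordinary test-run logs. The sketch below assumes a hypothetical log format (the field names and figures are invented) and computes regression suite duration alongside the change in quality issues between periods:

```python
# Hypothetical run logs; field names and timestamps are assumed for illustration.
from datetime import datetime

runs = [
    {"suite": "regression", "start": "2024-05-01T09:00", "end": "2024-05-01T13:30"},
    {"suite": "regression", "start": "2024-06-01T09:00", "end": "2024-06-01T12:45"},
]

def suite_hours(run):
    """Duration of one suite run, in hours."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(run["end"], fmt) - datetime.strptime(run["start"], fmt)
    return delta.total_seconds() / 3600

durations = [suite_hours(r) for r in runs]
print(f"Latest regression run: {durations[-1]:.2f} h "
      f"({durations[-1] - durations[0]:+.2f} h vs. previous)")

# Month-over-month change in logged quality issues (hypothetical counts).
issues = {"2024-05": 120, "2024-06": 96}
months = sorted(issues)
pct_change = (issues[months[-1]] - issues[months[0]]) / issues[months[0]] * 100
print(f"Quality issues {months[-1]} vs. {months[0]}: {pct_change:+.1f}%")
```

Even a readout this simple answers the questions the framework poses: is the regression suite getting faster, and are defect counts trending in the right direction?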
Although most organisations focus predominantly on functional testing, our studies show that companies are increasing investment in areas like exploratory testing, accessibility and inclusive design, payments, localisation, and more recently, user experience (UX) and artificial intelligence (AI). The framework outlined above can be applied to each of these disciplines to improve digital quality throughout design, development and deployment.
Regardless of whether organisations have well-established quality systems in place or not, these types of frameworks – functional or otherwise – are designed to help them improve and innovate. They shed light on existing capabilities and deficiencies, and provide benchmarks for determining which areas will deliver the most significant improvements to quality. Users are constantly developing higher expectations for software apps across industries, especially with the rise in AI adoption. As technologies continue to evolve, it’s critical to invest in the activities and infrastructure that allow organisations to adapt while maintaining quality at speed and scale.
For media enquiries, please get in touch with vaishnavi.nashte@testassociates.co.uk



