New Report takes an in-depth look at European software testing projects

A new report written by the Judging Panel from the 2016 European Software Testing Awards analyses past and current entries into the Awards programme, pulling out key trends and learnings.

The European Software Testing Awards celebrate companies and individuals who have accomplished significant achievements in the software testing and quality assurance market. The programme was launched four years ago and has gone from strength to strength, with numerous complex and fascinating testing projects submitted.

Thousands of entries have been submitted over the years, each running to as many as 3,000 words and detailing project criteria, the challenges faced, overall management strategies, and the formulas and methodologies used to deliver each successful project.

In order to give back to the wider testing community, the Editor of TEST Magazine, Cecilia Rehn, decided to put together this report with the assistance of the Awards Judges, all senior testing managers, heads or directors, who submitted detailed research pieces covering their findings for each category.

Each chapter in The European Software Testing Summit Report delves into current and historical trends and developments across different aspects of software testing and QA, from project challenges to individual roles. A few key takeaways are outlined below.

Functional and non‑functional test automation

Test automation projects were split into functional and non‑functional categories for the first time in 2016, but clear parallels can be drawn across the group as a whole. Automation is a clear objective and challenge for testing departments across different sectors. Whether focused on performance and load testing or on larger functional test automation projects, the top entries showed evidence of CI, CD and DevOps principles. Automation is also shown to be disrupting testing departments by transforming roles and tester skill sets.

Maturation of open source

A growing trend revealed in this year’s project‑based entries is the accepted use of open source tooling, often put in place to complement existing vendor solutions. As is evident in this year’s functional test automation projects, open source and commercial tools frequently co‑exist and can be found working in the same environments.

Open source tooling was not as commonplace in earlier years, but the shift is now clearly towards selecting fit‑for‑purpose tools that support agile adoption, CI and CD, rather than relying solely on large multi‑purpose toolkits.

Abundance of agile

Agile has come up in almost every category, and historic trends confirm a boost in software testing and QA projects utilising, or at least striving for, agile delivery in 2016. In the Agile category itself, there has been a growing number of project submissions, with a significant number detailing smaller proof‑of‑concept projects, underscoring how challenging a full transition to agile can be in large organisations.

In terms of the sector‑based categories, we can see a clear shift in agile adoption from previous years. All industry sectors have been hit by fierce digital competitors, and are finding that agile and lean operations are key to staying competitive and achieving speed to market. Across the categories, mention of DevOps initiatives is growing, but this is still rare.

Mobile branches out

No other category emphasises the globalisation effect more than mobile testing. This year’s projects describe the multi‑device, multi‑browser, multi‑variant, multi‑network conundrum as the norm, and it is expected that software be tested against different, global conditions. The testing projects from the gaming sector, which often detailed mobile apps, all shared the sense that extensive cross‑platform testing is the standard in 2016, whereas in earlier years this was not always a given.

Changing roles

The chapters relating to individual and team entries, such as graduate testers, testing teams and testing managers/management teams, all highlight similar trends. Testing roles appear to be changing; roles are becoming more diverse and technical in nature. The graduate entries indicate higher education requirements, but also that organisations are recruiting across a wide range of talent pools to find future testers. New management techniques have emerged, and testing managers are expected to cover a range of skill sets. The emphasis on communication and stakeholder management remains ever‑present, as business demands become embedded in test and QA objectives. Testing teams appear to be moving into development teams, especially as the push for agile delivery methods continues.

The full European Software Testing Summit Report can be downloaded here.

Methodology

The European Software Testing Awards has always intended to be an annual celebration of the best projects in the industry, and to recognise those businesses, teams, and individuals leading the way. The European Software Testing Summit Report further supports that ethos by providing additional learning and value.

The chapters and analysis have been written by the 2016 European Software Testing Awards Judges after objectively analysing past and present, project‑based entries into the Awards programme. The Judging Panel, which is made up of senior Heads of Testing and Directors of QA from a wide range of end user organisations, was tasked with analysing the entries to determine key industry trends, similarities and differences. Entries were also assessed against industry best practices.

Entries from previous years were juxtaposed with the 2016 entries to determine any patterns and trends over time.

The chapters do not single out individual entries, as all entries are submitted to the programme in confidence, and this is meant to be an overview.

All chapter texts in the Report have been edited for clarity and length.

Accompanying graphs

The accompanying graphs are from The European Software Testing Benchmark Report 2016. They are based on surveys completed by senior testing managers across Europe, posted online throughout 2016. The full survey results can be found online: www.softwaretestingnews.co.uk/survey

Survey questions have been supplied by TEST Magazine’s 2016 Editorial Board, which is made up of: Rod Armstrong, Programme Quality Manager, EasyJet; Riel Carol, Head of Test, YouView TV Limited; Iain McCowatt, Head of Testing for the Treasury Function, High Street Bank; Asia Shahzad, QA Manager, Hotels.com; and Jim Woods, Director of Development Services, Sega West.

About the publisher

31 Media is a multi-platform media company that serves the IT sector. As a business that consistently endeavours to produce top quality products, 31 Media has grown to become one of the market-leading IT publishers and event organisers in the UK. The core business of 31 Media consists of: publishing, events, online media, research, executive networking and recruitment.

Special thanks

31 Media would like to extend its thanks to the 2016 European Software Testing Awards Judging Panel who took time out of their busy schedules to judge, critique and write about the entries.

The findings in the Report were also presented at The European Software Testing Summit, held on 16 November 2016. The Summit would not have been possible without the support of sponsors: Amdocs, CA Technologies and Perfecto.

Written by Cecilia Rehn
