Challenges for mobile testing in the enterprise

Eran Kinsbruner, Mobile Evangelist, Perfecto, highlights the key takeaways from the 2016
TEST Focus Groups.

There could not be a more challenging time to be in app development or testing. With the constant and rapid proliferation of new devices, operating systems, and mobile browsers, testing has to account for an astonishing number of combinations and scenarios. For enterprise teams moving to agile development and integrating automated testing in their process, it can be difficult to devise a test strategy that delivers great user experiences across all platforms.

To explore some of these topics and learn more about what testers are facing today, we recently held three roundtable discussions at the 2016 TEST Focus Groups, a one‑day event for professionals in the testing industry. During our roundtables, we focused on strategies and challenges specific to mobile testing and cross‑platform support and testing. These are the key takeaways and insights from our discussions.

To automate or not to automate

The percentage of organisations doing test automation continues to grow, but major challenges still need to be addressed. Every organisation approaches release cadence differently, based on its own needs and obstacles: the needs of a long‑established company with a large customer base are quite different from those of a small, hungry start‑up. During the roundtable, it was revealed that release schedules run anywhere from bi‑monthly to multiple releases every day, with one participant sharing that her company ships hundreds of releases a day.

That said, many of the larger organisations that are doing multiple releases every day aren’t running automated tests against all of them. It’s a great idea in theory to be constantly testing if you’re practicing continuous deployment, but for many, 100% automation is simply unrealistic.

Some companies take a different approach, putting off automated testing for as long as they can in favour of direct feedback from end users. They also justify the delay on the grounds that features are still in flux, so setting up automation feels like wasted effort. This approach is a gamble, but many organisations take it precisely because they practise continuous delivery. Their logic: if a bug can be fixed in under five minutes, where's the harm?

Device coverage and user condition testing

Another common problem occurs when a company transitions to agile development: automation is introduced as part of the team's deliverables while a large backlog of legacy systems remains. And while teams are trying to set up automation on the new systems, they simply don't have the resources to work through that backlog.

And don’t forget about testing for outdated devices or browsers – just because users should move on doesn’t mean they will (or can). For example, it’s not unheard of for users in the banking or federal sectors to still be using company‑issued BlackBerrys or Internet Explorer 7 for security reasons. You can’t tell users to upgrade their device when it’s against company policy to use anything else; you simply have to make sure old devices and outdated software are supported.

Participants also shared their frustration that it’s not possible to include everything in their regression tests. They would all like to find a way to eliminate manual regression testing and free their testers to focus purely on exploratory testing. And yet, many companies are approaching testing from a lean UX perspective, where everything is focused on the end user.

That’s one reason that testing for real user conditions based on an understanding of the organisation’s target personas is so important. When used well, persona traits and real user condition testing can provide test coverage for a large percentage of a user base — creating a huge value for the entire business.
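To make the idea concrete, here is a minimal sketch of persona‑driven testing. The persona names, device traits, and the stand‑in test step are all hypothetical illustrations, not anything discussed at the roundtables: the point is simply that each persona's real user conditions (device, network, interruptions) parameterise the same test flow.

```python
# Hypothetical persona traits: device, network, and interruption behaviour.
PERSONAS = {
    "commuter":    {"device": "iPhone SE", "network": "3G",   "interruptions": True},
    "office_user": {"device": "Pixel 7",   "network": "WiFi", "interruptions": False},
}

def run_checkout_flow(device, network, interruptions):
    # Stand-in for real test steps; returns a summary of the conditions exercised.
    summary = f"checkout on {device} over {network}"
    if interruptions:
        summary += " with SMS interruptions"
    return summary

# Drive one run of the same flow per persona, collecting results.
results = {name: run_checkout_flow(**traits) for name, traits in PERSONAS.items()}
```

In a real suite the stand‑in function would be replaced by actual UI steps, with the network and interruption conditions injected by the test environment rather than passed as strings.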

Of course, setting up devices to cover every user condition, such as network latency and text‑message interruptions, can be tricky, especially when you consider that many organisations are testing between 10 and 15 different browsers across mobile and desktop. But new approaches, such as using Perfecto’s Continuous Quality Lab to set up and test real user conditions, can help cut down on these laborious tasks.
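One way to keep a 10–15‑combination matrix manageable is to rank browser/platform pairs by usage share and test the highest‑coverage ones first. The sketch below uses invented, illustrative share numbers (and assumes browser and platform shares are independent, which real traffic data would refine):

```python
from itertools import product

# Illustrative (not real) usage shares for browsers and platforms.
browsers = {"Chrome": 0.55, "Safari": 0.25, "IE7": 0.03}
platforms = {"Android": 0.50, "iOS": 0.35, "Desktop": 0.15}

# Rank every browser/platform pair by its estimated joint share of users.
matrix = sorted(
    ((b, p, b_share * p_share)
     for (b, b_share), (p, p_share) in product(browsers.items(), platforms.items())),
    key=lambda row: row[2],
    reverse=True,
)

# Test the highest-coverage combinations first.
top_five = matrix[:5]
```

Replacing the invented numbers with actual market‑share data turns this into a simple, repeatable way to decide which combinations make the cut each release.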


We learned a lot about the state of the industry at the TEST Focus Groups sessions. Many of the testing challenges that enterprises encounter are the very same issues that we focus on solving at Perfecto. That’s why, each quarter, we release The Digital Test Coverage Index, which aggregates data from testing on 5000 devices in the Perfecto CQ Lab with market share data to give readers benchmarks for determining the right mix of devices, operating systems and form factors for testing.

We’d like to thank everyone who participated in the roundtable discussions. It was a great conversation with many important takeaways.

Edited for web by Jordan Platt