The lasting impact of application response times

Kevin Surace, CEO, Appvance Inc., discusses rising expectations for application response times.

In a world of instant gratification, application transaction times have a lasting impact on user and brand perception, as well as worker productivity. Smart companies strive for transaction times under 1 second measured end to end: through the server, across the internet, through the browser and JavaScript engine, and to the user’s eyes. For each second of delay, 7% of users get distracted and wander off.
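
One way to see where a transaction spends its time is to measure it in the browser itself. The sketch below uses the standard Navigation Timing API against the 1-second budget above; the split into server/network and client time is a rough approximation of the path just described, not exact accounting.

```typescript
// A rough sketch: break a page transaction into server/network time,
// client-side (parse + JavaScript) time, and total time to the user's eyes.
window.addEventListener("load", () => {
  // setTimeout(0) lets loadEventEnd be populated before we read it.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType(
      "navigation"
    ) as PerformanceNavigationTiming[];

    const serverAndNetworkMs = nav.responseStart - nav.requestStart;
    const clientMs = nav.loadEventEnd - nav.responseStart; // parse, JS, render
    const totalMs = nav.loadEventEnd - nav.startTime;      // startTime is 0 here

    console.log(
      `server+network ${serverAndNetworkMs.toFixed(0)} ms, ` +
        `client ${clientMs.toFixed(0)} ms, total ${totalMs.toFixed(0)} ms`
    );
    if (totalMs > 1000) console.warn("Over the 1-second transaction budget");
  }, 0);
});
```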

A few years ago, website designers and application developers aimed for transaction times under 10 seconds. Back in the HTML 1.0 days, load testing verified only whether a site would crash under load. There was no client-side code and no smartphone, sites ran on slower servers, and productivity increased simply because computers were being used in the workplace at all.

Time limits determined by human perceptual abilities

As early as 1968, user experience professionals identified three time limits, determined by human perceptual abilities, that app developers should keep in mind. Jakob Nielsen restated these levels of computer response in his 1993 book Usability Engineering (essentially pre-web), as follows; a sketch applying them in code appears after the list:

0.1 second: about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.

1.0 second: about the limit for the user’s flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.

10 seconds: about the limit for keeping the user’s attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish. Users should be given feedback indicating when the computer expects to be done.
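
These limits translate directly into feedback policy in client code. The sketch below is one illustrative way to apply them: the thresholds are Nielsen’s, while the helper names and UI hooks are hypothetical and would be replaced by any toolkit’s equivalents.

```typescript
// Feedback keyed to Nielsen's limits: under ~1 s, no special feedback;
// past 1 s, a spinner; past 10 s, a progress indicator telling the user
// when the computer expects to be done.
async function withFeedback<T>(work: Promise<T>): Promise<T> {
  const spinnerTimer = setTimeout(showSpinner, 1_000);             // 1.0 s limit
  const progressTimer = setTimeout(showProgressIndicator, 10_000); // 10 s limit
  try {
    return await work;
  } finally {
    clearTimeout(spinnerTimer);
    clearTimeout(progressTimer);
    hideFeedback(); // remove whatever was shown
  }
}

// Hypothetical UI hooks; substitute your toolkit's spinner and progress bar.
function showSpinner(): void { console.log("spinner on"); }
function showProgressIndicator(): void { console.log("progress indicator on"); }
function hideFeedback(): void { console.log("feedback cleared"); }

// Example use (fetchReport is a stand-in for any slow operation):
// const rows = await withFeedback(fetchReport());
```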

Increased expectations

While perceptual abilities have not improved, expectations have. In the 2000s, people began investigating the correlation between the speed of a website and the loss of customers. Yahoo found that a 400-millisecond improvement in the speed of its home page increased page views by 9%. Firefox shaved 2.2 seconds off its average page load time and increased download conversions by 15.4%. Shopzilla cut load time from 7 seconds to 2 seconds, increasing page views by 25% and revenue by 7–12%. One Google experiment on response times increased the number of search results per page from 10 to 30, with a corresponding increase in page load time from 400 ms to 900 ms; the result was a 25% drop-off in first-result-page searches. Adding a checkout icon (a shopping cart) to search results made responses 2% slower (an added 8 ms), with a corresponding 2% drop in searches per user. Similar results are now showing up in mobile apps, where users expect near-instantaneous response times.

Users have become accustomed to sub-1-second response times. Driven by long-term efforts from leaders such as Google, Amazon, Facebook and others, companies aim to answer any request in as close to 100 ms as possible, which is near-instantaneous to the user. By focusing on the user experience (not just server response), companies can tune server code and architecture, as well as client-side code, to deliver rapid responses. This has begun to condition the world to believe that instantaneous delivery is a reasonable expectation for any request. If Google can search the world’s largest database and return results in a few hundred milliseconds, why is it acceptable for a corporate transaction to take 6 seconds?

Setting the bar high

The bar should be set at 100 ms. One second is too long: by then users have lost their train of thought and started to wander off. A 1-second delay has been linked to a 7% loss in conversions and a 16% decrease in customer satisfaction. If the largest players can drive toward 100 ms, anyone can. Their methods have required full beginning-to-end performance validation (i.e. from the UX through the back end) at every build (true agile) and a set of goals to keep bringing transaction times down at every release (hourly or daily). Nothing gets added to the code or architecture that could compromise instantaneous response. Every day is an exercise in driving down transaction times, as a matter of cult-like behaviour.
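
One way to make that validation concrete is a performance gate in the build pipeline. The sketch below is a minimal illustration, not the method any of these companies actually uses; the endpoint, sample count and budget are assumptions, and real validation would also drive full use cases through the UI.

```typescript
// A minimal per-build performance gate (Node 18+, built-in fetch).
const ENDPOINT = "https://staging.example.com/checkout"; // hypothetical
const BUDGET_MS = 100; // the bar discussed above
const SAMPLES = 20;

async function p95LatencyMs(): Promise<number> {
  const times: number[] = [];
  for (let i = 0; i < SAMPLES; i++) {
    const start = performance.now();
    await fetch(ENDPOINT);
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(0.95 * (times.length - 1))];
}

p95LatencyMs().then((p95) => {
  console.log(`p95: ${p95.toFixed(0)} ms (budget ${BUDGET_MS} ms)`);
  if (p95 > BUDGET_MS) process.exit(1); // fail the build
});
```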

Continuous integration should also be continuous improvement. At each build, QA or DevOps should verify the transaction times of all major use cases and drive them down. If a corporation has 100,000 workers using a common application whose transaction time improves by 2 seconds, it can save some US$14 million per year. How about improving every transaction in every application? Millions of dollars could be saved in worker time, truly affecting the bottom line. It should be a C-suite imperative, giving a lot of smart developers, DevOps and QA teams plenty to work on in the coming year.
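
That figure is plausible on a back-of-envelope basis. In the sketch below, only the worker count and the 2-second saving come from the text; the transaction volume, working days and loaded hourly cost are assumptions chosen for illustration.

```typescript
// Back-of-envelope check of the US$14 million claim.
const workers = 100_000;
const secondsSavedPerTransaction = 2;
const transactionsPerWorkerPerDay = 50; // assumption
const workingDaysPerYear = 230;         // assumption
const loadedCostPerHourUSD = 22;        // assumption

const hoursSavedPerYear =
  (workers * secondsSavedPerTransaction * transactionsPerWorkerPerDay *
    workingDaysPerYear) / 3_600;
const savingsUSD = hoursSavedPerYear * loadedCostPerHourUSD;
console.log(`~US$${(savingsUSD / 1e6).toFixed(1)} million per year`); // ≈ 14.1
```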
