Testing times for the banking sector

Richard Lowe, Financial Services Account Director, SQS, argues that IT failures in the banking world are now commonplace. While some are more visible than others, they will keep happening as long as the banking industry continues to ignore simple quality assurance policies. 

In January this year, Sainsbury’s Bank suffered a widespread system failure, leaving customers unable to use their credit cards. They were quick to voice their anger on Twitter and the story was picked up by the mainstream press.

The issue at Sainsbury’s Bank came just two months after RBS was fined some £56m for a similar IT failure in 2012 that saw 6.5 million customers of NatWest, RBS and Ulster Bank unable to make payments for as long as three weeks. The fault, related to a botched software upgrade, prevented customers from using online banking, properly checking their account details, making mortgage payments and receiving salaries.

The failures are a worrying sign of things to come, and prove that the growing pressure on complex IT systems can and will affect consumers. Banks desperately want to avoid this at a time when customer confidence is already at a low point, and industry regulation is increasing.

It is easy to point the finger at legacy systems that have been adapted and patched up over decades of mergers and acquisitions (M&A). But is it time for the industry to start looking at the problem from a new angle? Just because something is old doesn’t mean it can’t be effective; what is required is a deeper understanding of its capabilities and performance under differing levels of load, stress and circumstance. With an integrated quality assurance approach, businesses can weigh up the benefits of implementing new systems against those of functionally rich systems that have been developed over many years of IT change. The gaps identified can then be assessed against the business process requirements in order to make the right decision and create a fit-for-purpose platform that benefits both the bank and its customers.

How to prevent a repetition

While the exact cause of the system failures at the two banks above is unknown, it is possible they could have been prevented by a more thorough and substantial quality assurance process. A change of culture and mindset is needed. Every team needs to tick its own boxes and pass work on to the next team only when it is satisfied the code is correct and secure. This should happen throughout the software development life cycle (SDLC), all the way through to completion. There is often little end-to-end ownership from conception to production, not just in the financial services industry but across the wider business community, and the problem is frequently accentuated in businesses that have seen substantial M&A activity.

Quality gates need to be tightened, implemented and adhered to, even when the pressure of meeting deadlines tempts teams to trade away a product’s reliability. Those gates also need to be drawn in accordance with the needs of the business (i.e. with the end goal in mind), rather than imposed via an ‘ivory tower’ approach, which leaves them open to being bypassed in the name of a greater business imperative. All of this is best achieved by aligning quality assurance with the needs of the business.
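As a rough illustration of what a quality gate can look like in practice, the sketch below expresses a stage hand-over check as an automated script. The metrics and thresholds are illustrative assumptions rather than prescribed values; real gates would be agreed with the business.

```python
# Minimal sketch of an automated quality gate, run at the end of each SDLC stage.
# Metric names and thresholds are illustrative assumptions, not prescribed policy.

from dataclasses import dataclass


@dataclass
class StageMetrics:
    test_pass_rate: float         # fraction of tests passing, 0.0-1.0
    requirement_coverage: float   # fraction of business requirements with a passing test
    open_critical_defects: int


def quality_gate(metrics: StageMetrics) -> bool:
    """Return True only if the stage may hand over to the next team."""
    checks = [
        metrics.test_pass_rate >= 0.98,
        metrics.requirement_coverage >= 0.90,
        metrics.open_critical_defects == 0,
    ]
    return all(checks)


if __name__ == "__main__":
    stage = StageMetrics(test_pass_rate=0.99, requirement_coverage=0.85, open_critical_defects=0)
    if not quality_gate(stage):
        raise SystemExit("Quality gate failed: do not promote this release.")
```

The point of scripting the gate is that it cannot quietly be waived under deadline pressure; any exception has to be an explicit, visible decision.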

Testing should happen from the very start of any new project and be carried out by sector-specific testers – beginning with static testing of business requirements. Not only will the testers have an impact on the overall build by understanding thoroughly what needs to be tested, but any faults found at this stage will be much easier to fix, resulting in cost savings and avoiding unnecessary rework. This is known in the industry as ‘shift left’. The practice requires accountability and increased communication with all stakeholders to ensure the project stays in scope. In our experience of testing in the financial services market, the typical cost savings from the shift-left approach are in the 20%-25% range.
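To make the ‘shift left’ idea concrete, the sketch below captures a hypothetical business requirement – a daily card spending limit – as an executable acceptance test written before the payment logic itself. The names, figures and rule are assumptions for illustration only, not any bank’s actual policy.

```python
# Illustrative 'shift left' example: a (hypothetical) business requirement captured
# as an executable acceptance test before the payment code is fully written.

import unittest

DAILY_CARD_LIMIT_GBP = 2000  # assumed figure taken from the hypothetical requirement


def authorise_payment(amount_gbp: float, spent_today_gbp: float) -> bool:
    """Placeholder implementation; written after the tests, per shift-left practice."""
    return spent_today_gbp + amount_gbp <= DAILY_CARD_LIMIT_GBP


class DailyLimitRequirement(unittest.TestCase):
    def test_payment_within_daily_limit_is_authorised(self):
        self.assertTrue(authorise_payment(amount_gbp=150, spent_today_gbp=1000))

    def test_payment_breaching_daily_limit_is_declined(self):
        self.assertFalse(authorise_payment(amount_gbp=1500, spent_today_gbp=900))


if __name__ == "__main__":
    unittest.main()
```

Writing the requirement down in this form exposes ambiguities (is the limit inclusive? per card or per account?) while they are still cheap to resolve.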

Testing needs to begin before development, right at the inception stage of a product, and requires board-level buy-in from the start. However, testing a product in isolation often ignores the risks posed by integration with other systems. A thorough change impact assessment should be part of any early testing effort, to understand the risks to other legacy systems as well as the risks created by integration with them. For example, a core online banking solution may have been thoroughly tested in isolation, but unless the risks of connecting it to the institution’s payment gateways are considered, the business processes that the solution serves are still at risk.
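One lightweight way to surface this integration risk early is a contract-style test in which the banking module is exercised against a stub of the payment gateway, so interface assumptions (units, currency codes) are checked long before a live connection exists. The sketch below is a simplified illustration; the class names and the gateway contract are assumptions, not any real institution’s interface.

```python
# Sketch of an integration-level check: a banking-side function is exercised
# against a stubbed payment gateway. All names and the contract are hypothetical.

class StubPaymentGateway:
    """Stands in for the real gateway; records what the banking module sends it."""

    def __init__(self):
        self.submitted = []

    def submit(self, payment: dict) -> dict:
        # The (assumed) gateway contract: amounts in minor units, ISO 4217 currency code.
        assert isinstance(payment["amount_minor_units"], int), "gateway expects integer pence"
        assert len(payment["currency"]) == 3, "gateway expects an ISO 4217 code"
        self.submitted.append(payment)
        return {"status": "ACCEPTED"}


def make_online_payment(gateway, amount_gbp: float) -> str:
    """Hypothetical banking-side function whose gateway integration we want to test."""
    payment = {"amount_minor_units": round(amount_gbp * 100), "currency": "GBP"}
    return gateway.submit(payment)["status"]


if __name__ == "__main__":
    gateway = StubPaymentGateway()
    assert make_online_payment(gateway, 49.99) == "ACCEPTED"
    print("contract check passed:", gateway.submitted)
```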

The problems with legacy systems

This brings us to the danger of an unfounded loyalty to legacy IT systems. This inbuilt fear of change has a huge impact on businesses’ ability to keep up with the pace of IT change. Technology is now considered the backbone of a successful business strategy in today’s fast-paced, competitive environment, and the IT department may in fact be the bottleneck for change when it comes to keeping up with consumer and market demand. Whilst we don’t advocate ripping out legacy systems and simply replacing them with a shinier modern alternative without consideration, it is important to truly understand older IT systems in order to answer accurately whether there is a genuine need to modernise.

For years now, the IT department has had a bad reputation when it comes to collaborating with the rest of the business, and the recent high-profile software failures at Sainsbury’s Bank and RBS have done little to alter this view. The reality is that the sheer pace of IT development means that IT departments are struggling to keep up, and they resort to upgrading legacy systems in a bid to adapt while adhering to budget and time constraints. This hand-to-mouth approach means that cracks start to appear. The business objective of speed to market is too often prioritised to the detriment of the IT department’s ability to undertake the appropriate due diligence and testing. A stringent quality regime needs to be at the heart of this process, to ensure that any system changes happen for the right reasons and with little or no risk to the business.

Testing times

Where testing verifies that specific requirements are met, quality assurance seeks to deliver a fit-for-purpose solution: ensuring that the requirements are correct to begin with, and working with the project team to build a solution that meets them.

As quality assurance is part of the implementation and testing phase, it needs to be independent of the testing performed throughout the development phase. Ideally, it will be carried out by different people with specific skillsets. In a way, quality assurance aims to pre-empt testing, and testing aims to prove whether quality assurance was effective.

The rising cost of compliance and risk mitigation

Higher standards are now required. Unless there is a holistic quality assurance approach, the risk of failure will only increase. With the cost of non-compliance in the form of regulatory fines running into the millions, and the cost of corrective measures potentially adding millions more, effective quality assurance has to be a focus for all financial services organisations. The intangible damage that IT failures can do to consumer confidence also needs to be considered.

With the increase in internet-facing functionality and mobility features, legacy systems require frequent code changes and upgrades to implement security measures and keep them compliant with regulatory requirements. However, the legacy nature of an application can make these changes complicated to implement and test compared with a newer platform. Legacy kit needs continual security updates from the vendor, and the organisation must apply these in a controlled manner, including regression testing, to ensure no breaches occur. The issue tends to be the downtime often required for security updates and whether the business can accommodate it, which is difficult when access to banking systems is required 24/7 from a plethora of devices.
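A controlled patch window of the kind described above can be scripted so that the regression suite is a hard gate and a rollback path always exists. The sketch below assumes hypothetical command-line tools (backup_system, apply_patch, run_regression_suite, restore_system) standing in for an organisation’s real deployment and test tooling; it is an illustration of the sequence, not a specific product’s workflow.

```python
# Rough sketch of applying a vendor security patch in a controlled manner:
# snapshot, patch, run the regression suite, and roll back on any failure.
# The commands invoked here are hypothetical placeholders for real tooling.

import subprocess


def run(cmd: list[str]) -> bool:
    """Run a command and report success; stands in for real deployment tooling."""
    return subprocess.run(cmd, check=False).returncode == 0


def apply_security_patch(patch_id: str) -> None:
    if not run(["backup_system", "--tag", f"pre-{patch_id}"]):
        raise RuntimeError("could not snapshot the system; aborting patch window")

    if not run(["apply_patch", patch_id]):
        run(["restore_system", "--tag", f"pre-{patch_id}"])
        raise RuntimeError(f"patch {patch_id} failed to install; snapshot restored")

    # The regression suite guards against the patch breaking existing behaviour.
    if not run(["run_regression_suite", "--suite", "core-banking"]):
        run(["restore_system", "--tag", f"pre-{patch_id}"])
        raise RuntimeError(f"regression failures after {patch_id}; snapshot restored")
```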

The banking industry should guard against the tendency to reduce an objective or a requirement to a list of checkboxes, and should keep the original objective of the regulation in mind. This is best done with a testing team that not only has the skills to assess a system for compliance, but also the domain knowledge required to understand the regulation and its impact on the business.

While it is impossible to get away from the issues that banks are currently facing, one of the root causes of these problems can be identified: a lack of understanding of legacy IT systems. Customers start to feel the pain when organisations work outside the parameters of their systems and add extra components without investigating the potential pitfalls in performance. Expectations of legacy IT systems need to be managed, as they are unlikely to cope with the new customer demands and the product diversification needed for banks to remain competitive in a highly regulated market.
