Test Environment Management

How an international energy and services company wrested control of 1400 test environments

The IT team of an international energy and services company is charged with continuous delivery of business innovation and rock-solid application stability for over 25 million customers. They strive for agility and DevOps efficiencies, but they do so in a highly challenging ecosystem. The enterprise IT portfolio is enormously complex, with thousands of moving parts.

Software projects encompass tightly-coupled system architectures, geographically dispersed teams, customised third-party projects, and stringent regulatory requirements. Comprehensive tests of new projects and applications require highly specific test environments. The team zeroed in on management of these test environments as a major source of inefficiency. Every month they struggled to answer even the most basic of questions: “Do we have enough test environments to supply this change plan?”

From this humble beginning, they implemented a range of best practices for managing test environments that not only help them bring value to market faster but also give them greater confidence in application quality. The team has even been nominated for a Computing DevOps Excellence Award to highlight their work implementing a DevOps transformation for their core system of record, field service applications used by thousands of remote technicians, and websites for millions of customers.

Foundation for shift-left testing

Given the scale and significance of their applications, they could not compromise quality for speed of delivery. Faster release cycles had to co-exist with a reduction in production incidents, as defects and outages were discouraging end-user adoption of new tools and services. They analysed their end-to-end test processes and realised they were discovering defects too late in the test cycle, significantly impacting application quality.

To address this challenge, the QA team wanted to create a foundation for shift-left testing, and outlined two key elements needed in their environment planning for each project:

  • Define test stages and the criteria required to promote code to the next test environment.
  • Capture all the requirements for test environments at project kickoff to ensure configuration accuracy.

In addition to application and component version tracking, requirements for test environments might also include data needs, whether an environment can be shared with other projects, and any third-party code that needs to be incorporated. Once captured, these requirements are replicated for each stage of the pipeline, so each test platform is fit for purpose.
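As an illustration, the kind of requirements record captured at kickoff and then replicated per stage might look like the following sketch. The field names, stage names, and data structures are hypothetical, not taken from the team's actual tooling:

```python
from dataclasses import dataclass

# Hypothetical sketch: requirements captured once at project kickoff,
# then replicated for every pipeline stage. Field names are illustrative.
@dataclass(frozen=True)
class EnvRequirements:
    project: str
    app_versions: dict       # component name -> required version
    data_needs: str          # e.g. "baseline 5000 accounts"
    shareable: bool          # may other projects share the environment?
    third_party_code: tuple  # vendor deliverables to incorporate

STAGES = ("functional", "regression", "performance", "final-inspection")

def plan_environments(reqs: EnvRequirements) -> dict:
    """Replicate the kickoff requirements across each test stage so
    every platform in the pipeline is provisioned to the same spec."""
    return {stage: reqs for stage in STAGES}
```

Capturing the record once and copying it per stage is what keeps the configurations aligned as code moves down the pipeline.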

Establishing both governance and configuration stability for test phases helped them reduce complex triage scenarios as code progressed into more complex test environments, which was especially important since QA timelines were always compressed towards the end of a project.

Multi-stage test strategy

The team created a multi-stage test strategy that became the foundation of their delivery pipeline:

Stage 1: Functional testing: To isolate defects in new code as early in the delivery pipeline as possible, functional tests are run in a localised environment with no checks against any other projects.

Stage 2: Regression testing: New code is incorporated into the production baseline. Test teams share environments in early stages and can test against each other.

Stage 3: Performance testing: Once regression issues are sorted out, the performance of the new code is stress tested along with a stress test of the entire system.

Stage 4: Final inspection.

Stage 5: Go live.
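The stage gates in this strategy could be sketched as a series of exit criteria, along these lines. The stage names come from the strategy above, but the specific pass criteria (defect counts, latency thresholds) are invented for illustration:

```python
# Hypothetical stage gates: a build must satisfy each stage's exit
# criteria before promotion to the next, more complex environment.
# The criteria below (defect counts, p95 latency) are illustrative.
STAGES = [
    ("functional",       lambda r: r.get("defects_open", 1) == 0),
    ("regression",       lambda r: r.get("baseline_tests_passed", False)),
    ("performance",      lambda r: r.get("p95_response_ms", 9999) <= 500),
    ("final-inspection", lambda r: r.get("signed_off", False)),
]

def promote(results: dict) -> str:
    """Walk the pipeline; return 'go-live' if every gate passes,
    otherwise name the stage where promotion stops."""
    for stage, gate in STAGES:
        if not gate(results.get(stage, {})):
            return f"blocked at {stage}"
    return "go-live"
```

Encoding gates this way makes the promotion criteria explicit and auditable, rather than a judgment call made under end-of-project time pressure.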

Managing heterogeneous test environments

Their tightly-coupled architecture consists of SAP at the core. Non-core applications comprise a range of remote services for field technicians (installing meters or home meter inspections, for example) and website apps for home and business customers.

SAP is the only application that has been deployed to the cloud, with the remaining apps located on-prem in VMs and bare metal instances. Code released to production is incorporated into the golden copy for SAP, and those changes are fed back into all test environments so they always align with production. In addition, their Environment Data Services group ensures test environments have the correct volume of database records. As a baseline, the SAP environment holds 5000 accounts by default; if a project needs more or different data, this team creates the appropriate volume and/or type of records.

Cost control

With over 1400 test environments and almost 2000 components, containing costs is a constant concern. Spin-up and spin-down of SAP instances is based on historical usage and consumption rates. This demand-based deployment model helps them save money by identifying projects that could efficiently share an environment, or environments that can be spun down to avoid unnecessary charges. By evaluating previous usage reports of the cloud-based SAP environment, they can validate consumption against invoices received from their cloud vendor and cross-charge to the correct internal project.
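A toy version of that demand-based check might flag low-usage environments from historical consumption data. The threshold, environment names, and usage figures here are made up for the sake of the example:

```python
# Illustrative spin-down check: average daily usage below a threshold
# marks an environment as a candidate for sharing or spin-down.
# The 2-hours/day threshold is an assumed figure, not the team's policy.
def spin_down_candidates(usage_hours: dict, threshold: float = 2.0) -> list:
    """usage_hours maps environment name -> list of daily usage hours;
    returns names whose average usage falls under `threshold`."""
    return sorted(
        env for env, hours in usage_hours.items()
        if hours and sum(hours) / len(hours) < threshold
    )
```

Running such a report against the cloud vendor's invoice period is what lets consumption be reconciled and cross-charged per project.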

Streamlining bookings and change requests

The Environment Delivery Assurance team processes over 50 booking and change requests for test environments per month. Scheduling, resolving conflicts, and tracking complex configuration items and cross-project dependencies weren't feasible using spreadsheets. To ensure fast-moving test teams weren't waiting for an environment to become available, or running tests on an incorrect configuration, they consolidated the tracking and management of all environments. When booking requests arrive, the Environment Delivery Assurance team has a centralised view of which environments are available, as well as the correct application set and the respective configurations. These efficiencies save time and allow them to focus on application testing rather than on trying to resolve test environment issues.
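At its core, conflict detection over a centralised booking record reduces to an interval-overlap check, roughly like this. The booking schema and environment names are hypothetical:

```python
from datetime import date

# Hypothetical overlap check against a centralised booking ledger:
# a new request clashes with any booking for the same environment
# whose date range intersects the requested one.
def find_conflicts(bookings: list, env: str, start: date, end: date) -> list:
    """Return existing bookings for `env` overlapping [start, end]."""
    return [
        b for b in bookings
        if b["env"] == env and b["start"] <= end and start <= b["end"]
    ]
```

An empty result means the request can be scheduled as asked; a non-empty one means the team must resolve the clash or check whether the environment is flagged as shareable.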

Hardware lab for end-user journey testing and…time travel?

With solid practices in place, they turned their attention to the hardware labs used to test end user journeys. Government regulated smart meters have been installed for every gas and electricity customer, and quality of device firmware is validated by testing customer usage scenarios prior to any firmware updates. And because firmware projects generate data used for R&D, new versions need to be tested on actual end devices.

Their comprehensive test suites represent any and all customer scenarios for end-user journey testing. For example: When firmware is updated, can the communication hub still receive the meter read? Can a new customer still be brought online? The team has even implemented what they call 'time travel' scenarios to validate user journeys that might occur in the future. For example: Is billing still accurate when there is a change to a business customer contract? Does new firmware support the loss of a customer? How long will it take to execute a change in the mode of payment next year, given projected changes in system load?
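In test code, such 'time travel' usually means parameterising the system clock so a scenario can execute as if on a future date, rather than waiting for that date to arrive. A minimal sketch, in which the tariff rates and the change date are entirely invented:

```python
from datetime import date

# Illustrative 'time travel' scenario: billing takes the effective date
# as a parameter, so a test can run as if a future tariff were live.
# The rates and the 2026 change date are hypothetical examples.
def monthly_bill(kwh_used: float, on_date: date) -> float:
    rate = 0.30 if on_date >= date(2026, 4, 1) else 0.25
    return round(kwh_used * rate, 2)
```

A future-dated assertion then checks that billing stays accurate on both sides of the change, which is exactly the kind of question the team's time-travel scenarios are designed to answer.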

Alongside smart meters in the hardware lab, the IT team manages in-home displays, communication hubs, electric meters, gas meters, firmware…over 1200 artifacts to track, allocate, and schedule for testing purposes. Now configuration metadata is centralised to provide managers a consolidated view for accurate and timely provisioning of all lab components. In addition, the same audit trail and change history available for IT releases is also available for the hardware lab.

Delivering quality code, faster

Wresting control of 1400 complex test environments plus a large end-user hardware lab was no small task. With the right processes and tools in place, the environment management team has become a centre of excellence and a model of organisation and efficiency. Better yet, they have confidence in their delivery of correctly configured testing environments on time, resulting in higher quality code releases day after day.

About the author: Lenore Adam has over 15 years of experience in product management and marketing for enterprise hardware and software product lines. She is a Sr. Product Marketing Manager for Plutora Continuous Delivery Management in the San Francisco Bay Area.