Capacity planning in DevOps

It’s not OK for a release to fail, explains Peter Duffy, CTO, Sumerian. Truly agile DevOps demands capacity planning insights.

With the growth of DevOps comes a fundamentally different approach to software delivery. In contrast to the traditional method of releasing large, sporadic code updates, the almost uninterrupted cycle of delivery enabled by DevOps has dramatically changed user expectations. The key challenge, however, is to ensure that this release model does not negatively impact the operational (production) environment and, in turn, the user experience.

Continuous deployment to legacy physical servers, a virtualised environment or a public cloud is indeed possible – although more focus has been given to virtual and cloud platforms, and automating deployment to physical environments can be more challenging.

A common misconception is that it is okay for a release to fail – the argument being that in a world of continuous delivery, one can either back out a change or quickly deploy another release. As soon as a mobile element is introduced, however, app store deployment needs to be factored in, along with the approval process timeframe. Not so continuous now and not without risk.

Successfully implementing a DevOps model requires giving thought to the three standard components of all businesses – people, process and technology.

People

In a traditional operating model, developers work in a development environment, release code to QA teams using one or more test environments, and operations teams eventually deploy it into one or more production environments. By the time code hits production, the development team is typically busy working on one or two iterations further ahead.

In a DevOps model, these teams need to work much more closely together, which demands tighter collaboration and planning – a necessity given the shorter release cycles. It is also crucial that each team understands the others’ operating environments, and that the same monitoring tools are used across development, test and production. This provides the starting point for understanding the future impact of development changes on the production environment, and so supports the goal that a release has no adverse impact.

Process

Processes are all about structuring workflows so that they are efficient and consistently deliver the right outcomes. The adoption of best practice guidance and frameworks like ITIL has transformed the IT operations and ITSM landscape from the (not so) organised chaos portrayed in ‘The IT Crowd’ into lean, user-focused, self-service delivery organisations.

DevOps is very reliant on process automation to deliver agility. Automation certainly can make processes more efficient and faster to execute, but automating a process where you have little visibility of the potential outcome can lead to disaster. When automating process workflows it’s important to integrate a good information flow – ensuring that your processes are measuring and surfacing the right information at each workflow stage.
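
To make that concrete, here is a purely illustrative sketch – in Python, with hypothetical stage names, metrics and thresholds – of a workflow stage that surfaces its measurements and gates the next step on them:

    # Hypothetical sketch of an automated workflow stage that surfaces its
    # measurements and gates on them before the workflow continues.
    def run_stage(name, action, checks):
        """Run a pipeline stage, publish its metrics, and gate on the results."""
        metrics = action()
        print(f"[{name}] metrics: {metrics}")  # surface the information
        failed = [check for check in checks if not check(metrics)]
        if failed:
            raise RuntimeError(f"{name}: gating checks failed, halting the workflow")
        return metrics

    def load_test():
        # In a real pipeline these figures would come from the shared monitoring tooling.
        return {"p95_response_ms": 180, "cpu_per_txn": 0.052}

    run_stage(
        "performance-test",
        load_test,
        checks=[lambda m: m["p95_response_ms"] < 250, lambda m: m["cpu_per_txn"] < 0.06],
    )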

Technology

DevOps is a movement that uses technology to deliver technological solutions more efficiently into production environments. Much of the focus to date has been on the power of technology to automate core processes.

But as discussed in the previous sections, it is just as important to use technology to understand how your solution behaves as it moves between environments; that way you can predict at an early stage how it will ultimately behave in production.

DevOps and capacity planning

If the ultimate goal of DevOps is to deliver software in a more agile manner while avoiding problems in the operational environment, then we need to have a DevOps focus on the operational requirements. These include things like security, availability and capacity.

The closer collaboration between development, QA and operations promoted by DevOps offers an opportunity for organisations to significantly improve their capacity planning.

An understanding of the footprint that existing deployed application components have on the production infrastructure is a prerequisite for managing that infrastructure. But how do you model the impact of deploying a changed component into that environment? Using predictive analytics, the data collected in the test environment can be analysed to create component models that compare the relative resource requirements of the old and new instances.
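
As a purely illustrative sketch – the figures and the use of Python are assumptions, not a description of any particular tool – such a component model can be as simple as comparing the average per-transaction resource cost of the old and new builds as measured in the test environment:

    # Illustrative only: hypothetical per-transaction CPU samples collected by the
    # shared monitoring tooling in the test environment, old build vs. new build.
    import statistics

    old_cpu_per_txn = [0.041, 0.039, 0.044, 0.040, 0.042]
    new_cpu_per_txn = [0.052, 0.050, 0.055, 0.051, 0.053]

    # The 'component model' here is simply the relative resource requirement:
    # how much more (or less) CPU the new instance needs per unit of work.
    relative_requirement = statistics.mean(new_cpu_per_txn) / statistics.mean(old_cpu_per_txn)
    print(f"New component needs {relative_requirement:.2f}x the CPU of the old one")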

These component models can then be used to construct a ‘what-if’ scenario model, to predict in advance the expected resource requirements when the new component is deployed. This can further be overlaid with growth projections to ensure that enough resource exists to support business plans (or to determine what additional resource will be required).
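
Continuing the hypothetical sketch above, the ‘what-if’ calculation combines the component model with current production consumption and a growth projection; all of the numbers below are invented for illustration:

    # Illustrative what-if scenario: will the new component still fit once
    # business growth is taken into account? All figures are hypothetical.
    current_prod_cpu_cores = 24.0    # cores the component consumes in production today
    relative_requirement = 1.27      # new/old ratio from the component model
    monthly_growth = 0.05            # projected 5% transaction growth per month
    horizon_months = 6
    provisioned_cores = 40.0         # capacity currently available

    projected = current_prod_cpu_cores * relative_requirement * (1 + monthly_growth) ** horizon_months
    print(f"Projected demand in {horizon_months} months: {projected:.1f} cores")
    if projected > provisioned_cores:
        print(f"Shortfall of {projected - provisioned_cores:.1f} cores - plan additional capacity")
    else:
        print(f"Headroom of {provisioned_cores - projected:.1f} cores remains")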

By continuing to apply predictive analytics to resource consumption in the production environment, production teams will be able to identify potential threats to service and take action before they become service-affecting. This information can then be used to close the loop by helping development teams identify long-term code problems.
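
As a minimal illustration of the idea – assuming daily utilisation samples from production monitoring, and using a simple linear trend where real predictive analytics would apply richer models – even a straight-line extrapolation can flag how soon a threshold will be breached:

    # Illustrative trend extrapolation over hypothetical daily CPU utilisation
    # samples; requires Python 3.10+ for statistics.linear_regression.
    import statistics

    daily_cpu_utilisation = [0.58, 0.60, 0.61, 0.63, 0.66, 0.68, 0.71]
    threshold = 0.85  # utilisation level treated as a threat to service

    days = list(range(len(daily_cpu_utilisation)))
    slope, intercept = statistics.linear_regression(days, daily_cpu_utilisation)

    if slope > 0:
        days_to_breach = (threshold - daily_cpu_utilisation[-1]) / slope
        print(f"At the current trend the {threshold:.0%} threshold is reached "
              f"in roughly {days_to_breach:.0f} days - time to involve the dev team")
    else:
        print("Utilisation is flat or falling; no capacity threat detected")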

By powering up your DevOps processes with insight from this continuous analysis of capacity data across the triumvirate of development, QA and operations, you can ensure true agility rather than just speed.


Edited for web by Cecilia Rehn.
