Turning the 2024 State of DevOps into your 2025 Playbook for DevOps Excellence

The DevOps Research and Assessment (DORA) report is an annual study that provides data-driven insights into what drives high performance in technology organizations. DORA originated as a team at Google, focused on increasing velocity, performance, and collaboration in DevOps.

Cortex | January 30, 2025

Drawing on feedback from over 39,000 professionals globally, the 2024 report categorizes the practices and metrics that define high-performing teams into two higher-level groups. The first is software delivery throughput, measured through change lead time, deployment frequency, and failed deployment recovery time. The second is software delivery stability, measured through change failure rate and amount of change rework.

Through the lens of these high-level measurements, this year’s report investigates two popular current trends: AI adoption and platform engineering investments. Both can boost productivity and performance, and both come with potential side effects, such as reduced delivery stability and increased change size and complexity.

In this post, we show how DORA report insights can be turned into a plan for DevOps excellence in 2025. We’ll cover metrics for success, strategic AI use, platform optimization, resilient teams, and continuous improvement, and we’ll provide a checklist to help guide you as you make your plan.

Success is rooted in well-tuned metrics

When setting up your DevOps team for success in 2025, the first step is making sure that you can identify improvements and opportunities as they happen, which means choosing and monitoring useful, well-tuned metrics. The DORA report uses two key areas when evaluating companies: throughput and stability.

Throughput metrics

High-performing teams deliver value to their customers quickly. For throughput, the DORA report tracks the three dimensions below, and high-performing teams stand out in all three:

  • Change lead time: How long does it take for a change to get delivered to your users, after development work is complete? Elite teams deliver changes to production in under one day, many times faster than the multiple-month timelines that low performers tend to have.

  • Deployment frequency: How often are updates shipped? Elite teams will ship multiple updates per day, while low performers may see months between updates and releases.

  • Recovery time: If something goes wrong during the deployment process, how quickly do you recover? Elite performers measure recovery time in minutes, while low performers can take several weeks.
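As a rough illustration, all three throughput metrics can be derived from a simple deployment log. The record fields and numbers below are hypothetical; adapt them to whatever your CI/CD tooling actually emits:

```python
from datetime import datetime, timedelta

# Hypothetical deployment records: when the change was code-complete (merged),
# when it reached production, and (for failed deploys) when service recovered.
deployments = [
    {"merged": datetime(2025, 1, 6, 9), "deployed": datetime(2025, 1, 6, 14), "recovered": None},
    {"merged": datetime(2025, 1, 6, 10), "deployed": datetime(2025, 1, 7, 11), "recovered": None},
    {"merged": datetime(2025, 1, 7, 8), "deployed": datetime(2025, 1, 7, 9),
     "recovered": datetime(2025, 1, 7, 9, 45)},  # failed deploy, recovered in 45 min
]

# Change lead time: from code complete (merge) to production deploy.
lead_times = [d["deployed"] - d["merged"] for d in deployments]
median_lead_time = sorted(lead_times)[len(lead_times) // 2]

# Deployment frequency: deploys per day over the observed window.
days = (max(d["deployed"] for d in deployments)
        - min(d["deployed"] for d in deployments)).days or 1
deploys_per_day = len(deployments) / days

# Failed deployment recovery time: deploy to recovery, for failed deploys only.
recovery_times = [d["recovered"] - d["deployed"] for d in deployments if d["recovered"]]

print(median_lead_time, deploys_per_day, recovery_times)
```

The key decision is where the lead-time clock starts; here it is approximated by merge time, which should match however your team defines "development work complete".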

Stability metrics

High-performing teams don’t only move quickly; they also make sure to break very little in the process. Measuring how stable and reliable your releases are is as important as measuring delivery speed. The DORA team looks at two stability metrics:

  • Change failure rate: What percentage of your team's changes cause issues for your users once they reach production? For elite teams, this number is usually below 5%, while for low-performing teams it can be as high as 40%.

  • Rework rate: What percentage of your releases, or of the work that goes into each release, is unplanned? How much of your work fixes bugs or problems in work that was considered finished? These measurements are closely correlated with, and follow the same distribution as, the change failure rate.
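Both stability metrics reduce to simple ratios over a change log. A minimal sketch, with entirely hypothetical records:

```python
# Hypothetical change records: each change either shipped cleanly, caused a
# production issue, or was unplanned rework on work previously considered finished.
changes = [
    {"id": 1, "caused_incident": False, "is_rework": False},
    {"id": 2, "caused_incident": True,  "is_rework": False},
    {"id": 3, "caused_incident": False, "is_rework": True},
    {"id": 4, "caused_incident": False, "is_rework": False},
]

# Change failure rate: share of changes that degrade service once in production.
change_failure_rate = sum(c["caused_incident"] for c in changes) / len(changes)

# Rework rate: share of changes that are unplanned fixes to prior work.
rework_rate = sum(c["is_rework"] for c in changes) / len(changes)

print(f"change failure rate: {change_failure_rate:.0%}, rework rate: {rework_rate:.0%}")
```

The hard part in practice is not the arithmetic but the labeling: deciding consistently what counts as an incident-causing change and what counts as rework.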

For each of these metrics, consider whether you already have a means of measuring them (or good approximations) for your existing systems and teams. Is there anything else you might need to measure to capture your teams' throughput and stability? What does it mean for your team to complete development work on a change? When do you consider a change to be completely delivered to your users?

It’s worth noting that the DORA report’s classification of high-performance teams is intentionally skewed towards rapid delivery. The report acknowledges this, highlighting that there is often a tradeoff between speed and stability, and that the tradeoff might change with team and organizational priorities. Reaching elite status across categories is likely self-selecting: companies that stay in business and succeed long enough can eventually afford to be elite on every measurement. So while it is much more difficult to measure, it’s probably true that the best-performing and most elite teams are the ones that can adapt fastest.

If performance metrics are skewed towards speed, then elite improvement is likely more important than elite performance, and prioritization is key. Once you have a plan for which metrics to track, and how they are relevant to your teams and customers, consider how changes along each might align with your organizational goals for 2025. This should highlight which changes are improvements, how to prioritize them, and what checkpoints and thresholds to set to trigger a re-evaluation.

Leveraging AI for strategic advantage

AI is a game-changer for DevOps teams in 2025, and the pressure to adopt it is tremendous. Many teams feel that if they do not move quickly enough, they are liable to be left behind. The possibilities for improvement, such as enhancing code quality, improving documentation, and increasing test coverage, are exciting. But the DORA report reminds us that adopting AI comes with challenges, and a thoughtful, strategic approach is key.

The benefits

It’s difficult to stay on top of all of the AI-powered changes that are available for teams to adopt. The DORA team groups benefits into three rough categories:

  • Writing, rewriting, and optimizing code

  • Debugging and summarizing complex code and systems

  • Automating and accelerating repetitive tasks, freeing up time for higher-value work

These benefits boost productivity and engineers' overall satisfaction with their work. Choosing and maintaining a good list of AI tools and AI-based processes to adopt is challenging. Consider setting up metrics for each of these three groups in advance, and tracking them throughout. High-level metrics to track include time saved, the quality of improvements, and the volume of work produced automatically.
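One way to start is to log each AI-assisted task against the three benefit categories and roll the numbers up. Everything below (field names, categories, numbers) is an illustrative assumption, not a prescribed schema:

```python
from collections import defaultdict

# Hypothetical log of AI-assisted tasks, tagged with a DORA-style benefit category.
ai_tasks = [
    {"category": "code",       "minutes_saved": 30, "accepted": True},
    {"category": "code",       "minutes_saved": 15, "accepted": False},
    {"category": "debugging",  "minutes_saved": 45, "accepted": True},
    {"category": "automation", "minutes_saved": 60, "accepted": True},
]

summary = defaultdict(lambda: {"tasks": 0, "minutes_saved": 0, "accepted": 0})
for t in ai_tasks:
    s = summary[t["category"]]
    s["tasks"] += 1
    s["accepted"] += t["accepted"]
    # Only count time saved for suggestions the engineer actually accepted.
    s["minutes_saved"] += t["minutes_saved"] if t["accepted"] else 0

for category, s in sorted(summary.items()):
    acceptance = s["accepted"] / s["tasks"]
    print(f"{category}: {s['tasks']} tasks, {s['minutes_saved']} min saved, "
          f"{acceptance:.0%} accepted")
```

Acceptance rate is a useful proxy for quality here: a tool that produces lots of output that engineers reject is adding review load, not saving time.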

The challenges

With great power come… well, more than a few headaches. The high volume of work that can be produced with AI creates several measurable challenges:

  • Distractions and lost focus: AI frees up time and energy, leading to a much higher volume of productive work being finished much faster. It’s important to make sure that organizational inertia doesn’t fill the extra time with “busy work”, such as spurious meetings, “bike-shed” conversations, and arbitrary processes. Double down on tasks that genuinely add value, such as product innovation and user experience improvements.

  • Delivery instability: AI-powered engineers can generate much larger change sets, much faster. Large volumes of changes and large change sets increase risks and are correlated with system instability.

  • Trust issues: AI-generated outputs have inherent problems that are becoming familiar to the mainstream: hallucinations, subtle errors, and inconsistent results make AI-powered systems difficult to trust. Taking time to understand the tools and build trust in them is essential.

Successfully adopting AI means balancing the benefits and challenges. There are many strategies and tools you might use to do that, and the work is still evolving. Ultimately, though AI may sound like magic, it is just a tool like any other tool in your arsenal. Carefully integrating it into your workflows while moving quickly is challenging, but with thoughtful metrics and a flexible, adaptable approach, you can harness the benefits and avoid the pitfalls. The key is to become more intentional, more agile, and more focused on driving real value to your customers.

Optimizing platform engineering

Platform engineering is seeing deep, industry-wide focus and adoption. Its goal is to create tools and workflows that make developers’ lives easier and more productive. Internal developer platforms (IDPs) streamline repetitive tasks and enable teams to focus on building great software. The 2024 DORA report highlights that the productivity boost and team performance benefits from platform engineering can come at the cost of slower throughput and increased complexity if systems aren’t thoughtfully designed.

The benefits

Adopted correctly, IDPs can become a core component of your transformational journey. They drive data-informed decisions, a core recommendation from the DORA team, by continuously tracking and reporting on application and service health. For the companies DORA investigated, IDP adoption improved individual productivity by 8% on average, and team productivity by 10%. Companies adopting IDPs also reported significant improvements in software delivery metrics and operational efficiency.

The challenges

Challenges with IDP adoption can impact throughput and stability. An inefficient adoption process can add spurious automation and unintended handoffs, reducing delivery speed by 8%. In general, adopting new platforms can increase instability as a side effect of changes, and reduce reliability by as much as 14%.

Making IDPs work well for you

Platforms in general, and IDPs in particular, are a powerful way to improve the developer experience and organizational outcomes. They can be a crucial part of making your organization data-informed. DORA highlights some high-level guiding principles to make adoption optimal, including:

  • User-centric design. Your developers are customers of the IDP product and should be treated as such. Platforms should be designed to reduce friction and offer intuitive, self-service workflows.

  • Developer independence. Focus on empowering engineers to perform tasks autonomously, avoiding handoffs and escalations into platform engineering teams.

  • Feedback loops. Regularly collect feedback from your users (your developers), using the same tools as for any other product: issue trackers, surveys, interviews, and related instrumentation. Ensure the platform evolves with their needs, and apply the same product iteration process as you would for external-facing products.

  • Thoughtful metrics. Track throughput, stability, and user satisfaction, and correlate them with IDP-related changes to identify which changes are causing improvements and which are causing issues. Orient metrics around product and service performance, and use them to support and empower engineers. Avoid using IDP metrics to measure individual, team, or line-of-business performance.
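A lightweight way to apply the last principle is to compare throughput and stability metrics before and after an IDP-related change. The numbers below are made up purely for illustration:

```python
# Hypothetical weekly metrics, before and after an IDP workflow change.
before = {"deploys_per_week": 12, "failure_rate": 0.08}
after  = {"deploys_per_week": 15, "failure_rate": 0.06}

def pct_change(old, new):
    """Relative change, as a fraction of the old value."""
    return (new - old) / old

for metric in before:
    delta = pct_change(before[metric], after[metric])
    # More deploys is good; a higher failure rate is bad.
    improved = (delta > 0) == (metric == "deploys_per_week")
    print(f"{metric}: {delta:+.0%} ({'improved' if improved else 'regressed'})")
```

This kind of before/after snapshot is only suggestive; a real evaluation would control for other concurrent changes and look at longer windows.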

Keeping a focus on user needs and an eye on performance metrics is core to ensuring your platform improves as an enabler for your team through 2025.

Team resilience: leadership and organizational stability

At its core, DevOps is about people, not tools and metrics. Building resilient teams, especially in a rapidly changing operational environment, requires balancing transformational leadership with organizational stability. The 2024 DORA report highlights that strong leadership and clear priorities directly correlate with reduced burnout, improved team performance, and continued innovation in a fast-changing environment.

Great leaders drive productivity and satisfaction. They foster collaboration and trust within teams, encourage innovation and adaptability in workflows, and maintain a shared, cohesive vision that keeps teams aligned in the face of rapid change. Coupled with stable priorities and well-defined goals, they can keep team burnout at bay and ensure quality outcomes. Teams can thrive and drive meaningful work without feeling overwhelmed by shifting objectives and unexpected firefights. This is crucial, and it will support the rapid changes driven by AI adoption and continuous platform refinement.

When outlining a plan for 2025, consider including a few additional high-level objectives:

  • Transformational leadership training: Invest in appropriate leadership training programs to make sure leaders have the required skills and that approaches across the company are consistent.

  • Clarify priorities: Define clear, suitably high-level goals for 2025. Work diligently to balance flexibility with clarity and avoid mid-year overhauls. Ensure teams have clear ways to align their contributions with these goals.

  • Foster predictability: Limit major shifts in focus. Avoid stagnation by defining clear decision frameworks, where changes and improvements are made according to predictable processes and guidelines.

  • Balance transformation and stability: Encourage teams to embrace continuous improvement and innovation. Lead with consistency, clear communication, and confident leadership that provides a stable foundation for continuous change.

Leaders who are equipped with the right tools and are focused on providing a stable base for exceptional transformation will create the environment you’ll need to thrive in 2025. With the right support, teams can become the resilient backbone for DevOps excellence through rapid changes. Investing in their well-being will pay dividends across the board.

DevOps success: continuous improvement

The 2024 report emphasizes that the best-performing teams have an outstanding ability to improve and adapt to change. They continuously refine their processes through research, data-driven insights, and iterative adjustments. Continuous improvement is the key to lasting success, whether you are tackling AI trust and adoption, platform engineering challenges, or leadership dynamics. The most adaptable teams are backed by a careful, data-informed approach, rooted in judicious tracking of metrics and applied with a rigorous experimental mindset.

Keeping improvements ongoing and vibrant requires support and encouragement. If you and your team treat challenges as opportunities for learning and exploration, have an open culture around metrics and feedback, and keep changes small enough, change becomes a practice. You and your team can remain agile and tackle whatever 2025 has in store.  

Leveraging an IDP to make the 2024 report actionable

IDPs are game changers for organizations that aim to become, or remain, continuously self-improving. They can help unify data, streamline workflows, and power the experimental feedback loops that drive DevOps excellence. An IDP can turn the insights from the 2024 report into actionable strategies for 2025.

Look for an IDP to provide the following kinds of support:

  • Centralized metrics: Aggregate your data, from across all of your operational and engineering systems, into a single source of truth.

  • Measure progress and success: Scorecards or dashboards that track transformation and help highlight project completion and success.

  • Actionable insights: Use reports and scorecards to assess your current state and track changes and progress against goals, hypotheses, and experiments.

  • Automation: Workflows, alerts, reports, and other tracking and reporting work should not take time from your engineering teams and their leadership. An IDP will automate most of this for you.

An IDP like Cortex will empower your teams with the insights, visibility, and low-friction workflows required to turn ambitious transformational goals into measurable outcomes for 2025.
