
The pocket guide to engineering metrics

Want to monitor and improve your engineering team’s performance and align your output with business objectives? Start here.

Lauren Craigie | November 20, 2023


Businesses today have more access to information about their products and engineering teams than ever before, and the push to be data-driven is also at an all-time high. Engineering metrics can provide actionable insights that help accelerate technology and business impact.

In this article, we will explore what makes an effective engineering metric, common categories and examples of metrics suitable for software development teams, best practices for implementing a metrics-driven approach, and how to strategically choose metrics that connect to wider business objectives. Whether you are just starting to adopt metrics or looking to mature your use of them, this guide will outline how to construct a framework that enables continuous improvement and drives results that matter for your business.

What are engineering metrics?

Engineering metrics are measurements of engineering processes and output. Collecting, monitoring, and reviewing engineering metrics is the basis of data-driven decision making in engineering organizations. Engineering metrics are often tied to key performance indicators (KPIs), a category of metrics used to evaluate the success or progress toward organizational goals.

Engineering metrics can generally be grouped into one of four categories:

  • Product: Measure the quality and performance of the product itself. Examples include system uptime, response time, error rates, customer satisfaction, etc.

  • Process: Evaluate the efficiency and effectiveness of engineering processes like development, testing, deployment, and incident response. Cycle time, lead time, deployment frequency, and defect rates are process metrics.

  • Project: Track progress and performance of specific projects and initiatives. Velocity, burndown metrics, and cost metrics would fall under project metrics.

  • People: Provide insights into the productivity, skills, and health of engineering teams. Examples include team velocity and morale scores.

Benefits of engineering metrics

Organizations have many ways of making decisions. Sometimes it’s based on a manager’s intuition or understanding of what’s worked in the past. Sometimes the CEO gets a tip from a friend about a new industry trend and pivots the company in a different direction. Although these methods have their place, we can be more scientific in the way we make decisions. We have the ability to collect large amounts of data, run analyses, and use the results for decision making based on facts and evidence.

Tracking the right metrics provides a number of benefits that directly translate to improved outcomes. Metrics serve as a diagnostic tool to identify what's working, what needs adjusting, and how processes can be improved in your team or across the organization.

  • Product excellence: One of the key benefits of implementing engineering metrics is improving product quality and performance, also known as product excellence. Metrics provide visibility into the health of the product that directly enables teams to improve code quality and reliability.

  • Process optimization: Tracking engineering metrics helps you to increase efficiency and profitability. Data can guide improvements to cost-effectiveness and resource allocation by comparing costs with a service’s utility. The metrics can inform improvements in development processes as well.

  • Business alignment: Engineering metrics can help with aligning engineering with business objectives and improving cross-team communication. They give business teams insight into engineering practices as they provide a reliable and consistent way to report progress. Using metrics to communicate helps cross-functional teams find a shared language.

  • Visibility for the team: Metrics make priorities and objectives clear for engineering teams and individual engineers, so that they can understand what outcomes really matter and why their work is important. Metrics unlock autonomy and ownership.

With clearer visibility into team productivity, product quality, development velocity and more, organizations gain actionable data to make smarter decisions that accelerate technology and business impact. This data-driven approach steers engineering in the right direction, helping instill a high-performance culture focused on meaningful results that ladder up to wider business objectives.

Multiple metrics can also be combined to yield a bigger-picture view. Team productivity and product quality, for example, are not easily measured or understood along one dimension alone. Analyzing multiple metrics also helps you understand which levers to pull to improve these higher-level goals. In short, a well-designed system of engineering metrics enables the continuous improvement of products and processes needed to deliver excellent products consistently.

14 engineering metrics to consider implementing

The important metrics for your organization will depend on a number of factors, such as your specific business needs, maturity, and team size. Different disciplines, such as software, hardware, and systems engineering, will often require different metrics. In general, here are some engineering metrics to consider tracking:

  • Code churn: Code churn measures the frequency of code changes by counting the number of lines of code that are added, deleted, and modified within a codebase during a certain time period. High churn indicates volatility, which impacts quality and stability due to increased risk of introducing bugs.

  • Code coverage: This metric tells you how much of a given set of code is covered by certain kinds of tests, mainly unit and integration tests. Higher code coverage is good practice for code hygiene and often speaks to code quality, but it’s not sufficient to prevent all bugs and errors.

  • Cost: Cost can encompass both labor costs and the costs of servers and services. It can provide insight into tradeoffs with other metrics, for example if an improvement in another metric is worth the increased associated cost.

  • Customer satisfaction: Tracking customer satisfaction sometimes requires some creativity, but it’s important to understand how other metrics affect customer satisfaction, and how customer satisfaction affects revenue. Some forms of engagement can be used as a proxy, or you can build in ways for customers to tell you their level of satisfaction with a rating or survey.

  • Cumulative flow: Cumulative flow is a metric that shows the amount of work in progress throughout the development process. It tracks the number of items in each state of the development process over time. In an agile context, these states typically include backlog, in progress, testing, and done.

  • Defect rate: Sometimes referred to as the defect density, the defect rate measures the number of defects for a given amount of code. It measures the quality of the code by counting the number of issues or bugs present, and it monitors the overall quality and stability of code being produced.

  • Morale: Engineering morale tracks how satisfied team members are with their work and work environment. Morale tends to drop when tech debt increases and code quality decreases, so understanding the attitude of the team can indicate a lot about the health of the codebase. You can gather this data in surveys and ask about different components of satisfaction, including an employee’s attitudes toward the company direction, their engineering managers, team culture, their work environment, their tools, tech debt, and their individual contributions.

  • Mean time between failures (MTBF): MTBF measures the average uptime between service outages. It provides understanding of the frequency of failures impacting users, which directly impacts the perceived product quality.

  • Pull requests raised versus reviewed (PRs raised/reviewed): This metric tracks how many pull requests are opened versus how many are reviewed and merged in a code repository. If many pull requests are opened but not reviewed in a timely manner, it indicates that reviewers are overwhelmed or review bandwidth is insufficient. This can serve as a proxy for velocity since it reflects development throughput.

  • Release burndown: This metric tracks remaining work across the product timeline (longer than a sprint). It communicates release predictability, and can be measured as the number of remaining agile story points for the release per day.

  • Rework ratio: Rework ratio measures the amount of time and effort spent redoing work that had already been considered complete. It’s the time spent on rework divided by the total development time. A higher ratio can indicate one of many inefficiencies, such as excessive rework caused by defects, changes in requirements, poor code quality, or technical debt.

  • Server uptime: Service availability is often represented by server uptime, and it’s generally a critical metric for DevOps teams. The phrase “five 9s” describes a high target for this metric: availability 99.999% of the time. Server uptime is contingent on other factors such as server capacity and change failure rates.

  • Sprint burndown: In the context of an agile sprint, burndown tracks remaining work through iterations and communicates the sprint progress. It’s measured by counting the remaining points per day, and it can help inform a team’s process efficiency and productivity.

  • Velocity: Velocity is a throughput metric that measures the amount of work a team completes in a set time period. In agile, this would often be expressed as story points per sprint. It’s used to inform engineering productivity.

Some of these metrics overlap, and the list is certainly not exhaustive! Work with your product managers and engineering leaders to generate a list of potentially relevant metrics for your team given your business KPIs.
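
Several of the metrics above reduce to simple arithmetic. The following sketch illustrates defect rate, rework ratio, and the downtime budget implied by a “five 9s” availability target; the function names and sample numbers are invented for illustration, not a standard:

```python
# Illustrative calculations for three of the metrics above.
# Function names and sample numbers are made up for this sketch.

def defect_rate(defect_count: int, kloc: float) -> float:
    """Defects per thousand lines of code (defect density)."""
    return defect_count / kloc

def rework_ratio(rework_hours: float, total_dev_hours: float) -> float:
    """Share of development time spent redoing 'completed' work."""
    return rework_hours / total_dev_hours

def downtime_budget_minutes(availability: float, days: int = 365) -> float:
    """Allowed downtime in minutes per period for an availability target."""
    return (1 - availability) * days * 24 * 60

print(defect_rate(12, 48.0))             # 0.25 defects per KLOC
print(rework_ratio(30, 400))             # 0.075, i.e. 7.5% of time on rework
print(downtime_budget_minutes(0.99999))  # "five 9s" allows ~5.26 minutes/year
```

The last line makes the “five 9s” target concrete: at 99.999% availability, a service may be down for only about five minutes in an entire year.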

DORA metrics

The DevOps Research and Assessment (DORA) program, now run by Google Cloud, spent years researching the metrics that best capture the performance of engineering teams. After collecting data from tens of thousands of engineers, the program settled on four key metrics, later adding a fifth. Used together, these metrics capture the overall performance of a DevOps team.

  • Deploy frequency: This metric measures how often a team successfully releases to production, on average, over a given time period. Distinct from the volume of code shipped, deploy frequency is considered high when there are multiple releases per day.

  • Change lead time: Often used interchangeably with cycle time, change lead time is the time it takes for a code commit to be deployed to production. It reflects how efficiently and frequently a team can release.

  • Change failure rate (CFR): Similar to, but distinct from, the defect rate, the CFR is the percentage of deployments that result in a failure, such as downtime or a rollback. Lower rates signal higher quality and stability.

  • Mean time to recovery (MTTR): MTTR measures how long it takes an organization to recover from a production failure. Related to server uptime, MTTR captures how long it takes to detect and resolve an issue, measured from the start of the incident.

  • Reliability: Added to the list of DORA metrics in 2021, reliability covers a few concepts including availability, latency, performance, and scalability. It judges the ability of a team to meet or exceed their own reliability targets.

More information is available on the DORA website.
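
The first four DORA metrics can be computed from basic deployment and incident records. Here is a minimal sketch assuming toy in-memory records; the field names and sample data are invented for illustration:

```python
from datetime import datetime, timedelta

# Toy deployment records; the schema here is illustrative, not a standard.
deploys = [
    {"committed": datetime(2023, 11, 1, 9), "deployed": datetime(2023, 11, 1, 15), "failed": False},
    {"committed": datetime(2023, 11, 2, 10), "deployed": datetime(2023, 11, 3, 10), "failed": True},
    {"committed": datetime(2023, 11, 4, 8), "deployed": datetime(2023, 11, 4, 12), "failed": False},
]
# Incident records as (start, resolved) pairs.
incidents = [
    (datetime(2023, 11, 3, 10), datetime(2023, 11, 3, 11, 30)),
]

days_observed = 7

# Deploy frequency: successful releases per day over the observed window.
deploy_frequency = len(deploys) / days_observed

# Change lead time: average commit-to-deploy duration.
lead_times = [d["deployed"] - d["committed"] for d in deploys]
change_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that failed.
change_failure_rate = sum(d["failed"] for d in deploys) / len(deploys)

# MTTR: average time from incident start to resolution.
mttr = sum((end - start for start, end in incidents), timedelta()) / len(incidents)

print(deploy_frequency, change_lead_time, change_failure_rate, mttr)
```

In practice, these records would come from your deployment pipeline and incident tracker rather than hand-written lists, but the arithmetic is the same.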

Best practices for incorporating engineering metrics into your business

After deciding on the right metrics, you have to incorporate using and reviewing metrics into daily business and engineering processes. Consider the following best practices for implementing a data-driven culture.

  • Ensure the metrics support your business objectives: Metrics have to be tied to something relevant to the business. Engineering teams don’t work in a vacuum, and their work and outcomes should be relevant to the organization’s bottom line. Everyone wants to see their work making an impact on the business.

  • Establish benchmarks: Collecting metrics is only the first part of the equation to operating as a data-driven organization. Next, you need to measure these metrics against a set of benchmarks. This allows teams to make changes and see the outcomes to learn and improve over time. The chosen benchmarks should be related to business outcomes and goals. Metrics are means, not an end.

  • Iterate on metrics as needed: Even with the best preparation, you likely won’t get the right metrics on the first try. Metrics should be reviewed regularly and coupled with processes to identify and address any issues. As teams and products evolve, the metrics collected and considered may need to adapt and change. Different metrics might be relevant for a new product with low traffic and few features, compared to the same product when it’s mature. When first adding metrics to your team, start small and plan to iterate and add metrics over time.

  • Automate data collection: To achieve the required volume and specificity, data needs to be collected automatically. Some metrics, such as those measuring server uptime, may already be collected by a cloud provider, but some metrics will require an engineering team to add code and a collection pipeline to capture the data and make it usable.

  • Focus on quality over quantity: Tracking too many metrics can lead to data overload and lack of actionable insights. Carefully curate the most meaningful metrics aligned to team goals. Start with 3 to 5 metrics, and add more over time. Depending on the size and complexity of the team, a good number of metrics could be between 5 and 15.

  • Continuously monitor metrics: Review metrics at every team meeting so individuals and teams stay aligned on current status and objectives. Measuring metrics once and never revisiting them won’t drive real change, since metrics help you understand the effects of change over time. Make the metrics visible and easily accessible; use a shared dashboard so that the different engineering and business teams are viewing the same numbers.
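
The benchmarking and monitoring practices above can be sketched as a simple automated check that compares current values against targets. This is a minimal illustration; the metric names and thresholds are hypothetical, not a prescribed set:

```python
# Hypothetical benchmark check; metric names and targets are illustrative.
benchmarks = {
    "deploy_frequency_per_day": ("min", 1.0),  # at least one deploy per day
    "change_failure_rate": ("max", 0.15),      # at most 15% failed changes
    "code_coverage": ("min", 0.80),            # at least 80% coverage
}

def flag_regressions(current: dict) -> list:
    """Return the names of metrics that miss their benchmark."""
    misses = []
    for name, (direction, target) in benchmarks.items():
        value = current.get(name)
        if value is None:
            continue  # metric not collected yet; skip rather than fail
        if (direction == "min" and value < target) or \
           (direction == "max" and value > target):
            misses.append(name)
    return misses

print(flag_regressions({
    "deploy_frequency_per_day": 0.4,
    "change_failure_rate": 0.1,
    "code_coverage": 0.85,
}))
# -> ['deploy_frequency_per_day']
```

A job like this, run on a schedule and wired to a shared dashboard or alerting channel, keeps the review loop continuous rather than a one-time exercise.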

How to pick and track engineering metrics

There is a lot of hype around being a data-driven organization, and it’s important to recognize that using metrics incorrectly can be more detrimental than not using metrics at all. Choosing the right metrics matters as much as tracking them accurately.

Metrics must have a purpose. Even highly technical engineering metrics provide limited value on their own unless they are tied to specific business goals and outcomes. The purpose of measuring engineering metrics is to provide visibility into the capabilities that drive important business results. Metrics without business context may quantify activity but fail to provide meaningful insight.

For example, tracking deployment frequency and lead time for changes makes it possible to release new features faster. But faster delivery is only useful if it helps achieve business objectives like increasing customer signups, reducing churn, or boosting revenue.

Similarly, monitoring MTTR and change failure rate improves system stability and uptime. However, the impact of better uptime is only felt if it translates to outcomes like higher customer satisfaction, more sales conversions, or lower operational costs.

Before choosing engineering metrics, make sure your business goals are clearly defined and are communicated to the engineering team. Engineers can work more effectively when they understand how their work fits into the larger organization, and ultimately will be one of the best resources for selecting the right metrics. Listen to our on-demand webinar about measuring and growing developer productivity.

Monitoring metrics with Cortex

Setting up metrics can be challenging. Frequently, engineers must build systems to collect data, set up pipelines, and create reports manually or with SQL dashboards and tools. Metrics tracked manually in spreadsheets suffer from errors and gaps, and manual tracking can’t scale to the volume and frequency required to track progress across multiple metrics. Cloud providers like AWS offer dashboards that surface some real-time metrics, but these aren’t easily readable or accessible by non-engineers.

Cortex Scorecards provide an easy way to get insight into many of your targeted metrics by integrating with existing services, some of which your organization may already have up and running. With Scorecards, metrics become easy to capture and visible to stakeholders. To learn more about Cortex Scorecards, check out the following article: Top three scorecards every organization needs for operational efficiency.

Going beyond Scorecards, Cortex makes communicating and enforcing organizational priorities easier with Initiatives. Cortex Initiatives help align organizational priorities across teams by bundling scorecard rules with a deadline. This enables tracking service compliance against those rules to see if the service is meeting the established benchmark. To learn more, read Cortex Initiatives: When scorecards need a deadline.

Interested in learning more about how Cortex can help you manage metrics? Schedule a demo today.

Moving ahead with metrics

Engineering metrics empower technology organizations to base decisions on data instead of assumptions. They provide the visibility needed to identify opportunities for improvement. With the right metrics framework in place, one that relates to overarching KPIs, engineering teams can systematically drive impactful results that boost technology capabilities and business outcomes. A data-driven culture fueled by meaningful metrics accelerates innovation and strategic progress.
