
Introducing Custom Metrics

We launched Custom Metrics in Eng Intelligence to enable teams to build any measure of team health that matches their unique definition of engineering excellence.

Lauren Craigie | October 29, 2024

In 2024, there’s no shortage of tools to help engineering teams track team health, productivity, or efficiency. But the problem with those solutions has been twofold. First, most only track “output” metrics, with no insight into what’s causing teams to ship more slowly or resolve fewer incidents. Second, these tools lack flexibility in data inputs and metric definitions, preventing teams from building their own unique measures of excellence.

Cortex’s Eng Intelligence solution was designed to close the gap between input and output metrics, and with today’s announcement of Custom Metrics, we’re also making it possible for every organization to decide for itself what matters most. Want to track deployment frequency alongside escalation rate? Done. Need to keep a close eye on how incident frequency trends against code coverage? Easy. Thanks to 50+ out-of-the-box integrations and the ability to bring in custom data from any source, there’s no limit to what you can define.

How it works

Custom Metrics builds on a standing Cortex principle: high flexibility without unscalable overhead. If you’re pulling a metric from a “known” source (that is, any of our 50+ first-class integrations) and using fields Cortex already understands, computing a new metric is as easy as writing a CQL query. If Cortex doesn’t yet know about the field(s) from the source system, you can simply post the data you’d like to feed into Eng Intelligence via our API.

Example of three custom metrics added to pre-defined fields in Eng Intelligence

CQL method

CQL allows you to perform SQL-like operations on Cortex data, such as joins between services and incident logs, to calculate highly specific insights. With the CQL option in Eng Intelligence, you can join data across tools like Jira and GitHub and perform operations like rolling averages or percentile calculations, all of which can be visualized as trends over time.
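This post doesn’t show CQL syntax itself, so as a rough illustration only, here is the shape of that kind of computation sketched in Python with pandas: a join between exported PR data and ticket data, followed by a time-based rolling average. The column names, the 7-day window, and the exports themselves are assumptions made up for the sketch, not real Cortex or integration fields.

    import pandas as pd

    # Hypothetical exports: PR cycle times from GitHub and ticket ownership from Jira.
    # Column names and values are illustrative only.
    prs = pd.DataFrame({
        "ticket_key": ["ENG-1", "ENG-2", "ENG-3", "ENG-4"],
        "merged_at": pd.to_datetime(["2024-10-01", "2024-10-03", "2024-10-07", "2024-10-10"]),
        "open_to_merge_hours": [12.0, 30.0, 8.0, 20.0],
    })
    tickets = pd.DataFrame({
        "ticket_key": ["ENG-1", "ENG-2", "ENG-3", "ENG-4"],
        "team": ["payments", "payments", "platform", "payments"],
    })

    # Join the two sources, then compute a 7-day rolling average of PR cycle time for one team.
    joined = prs.merge(tickets, on="ticket_key")
    payments = joined[joined["team"] == "payments"].set_index("merged_at").sort_index()
    rolling_avg = payments["open_to_merge_hours"].rolling("7D").mean()
    print(rolling_avg)

With CQL, the equivalent join and rolling calculation happens inside Cortex, so the result can be charted directly in Eng Intelligence instead of living in a one-off script.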

API method

To set up a metric programmatically, Cortex’s API provides endpoints to define your custom metrics and sources. For example, you can create a metric to track "Code Churn" by pre-calculating it outside Cortex and then using the API to ingest the values.

Detail view of the API method of adding a metric in Eng Intelligence
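As a minimal sketch of that flow (and only a sketch: the endpoint path, payload shape, and metric key below are assumptions for illustration, not Cortex’s documented API), a pre-calculated value could be pushed with a short script like this:

    import datetime
    import requests

    CORTEX_API_TOKEN = "YOUR_API_TOKEN"  # Cortex API token
    BASE_URL = "https://api.getcortexapp.com"  # adjust to your Cortex API host

    def compute_code_churn(lines_added: int, lines_deleted: int, total_loc: int) -> float:
        # "Code Churn" pre-calculated outside Cortex: lines touched as a share of the codebase.
        return (lines_added + lines_deleted) / total_loc

    def push_metric(entity_tag: str, metric_key: str, value: float) -> None:
        # Hypothetical ingestion endpoint; check the Cortex API docs for the real path and payload.
        response = requests.post(
            f"{BASE_URL}/api/v1/eng-intel/metrics/{metric_key}/values",
            headers={"Authorization": f"Bearer {CORTEX_API_TOKEN}"},
            json={
                "entityTag": entity_tag,
                "value": value,
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            },
            timeout=30,
        )
        response.raise_for_status()

    if __name__ == "__main__":
        churn = compute_code_churn(lines_added=1200, lines_deleted=800, total_loc=50000)
        push_metric("payments-service", "code-churn", churn)

Run on a schedule (a nightly CI job, for example), a script like this keeps the custom metric trending alongside the pre-baked ones.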

Pre-baked metrics

Of course, Eng Intelligence also supports several pre-baked metrics, so you can get started right away. PagerDuty, GitHub, GitLab, Jira, and your deployment data are all used to calculate things like PR open to close time, incidents opened, deploy change failure rate, and more—all available out of the box once you enable these integrations in Cortex. 

Detailed view of pre-baked metrics in Cortex

For everything else, there’s Custom Metrics!

Use cases and example metrics

Whether you’re focused on optimizing complex workflows, tracking nuanced code quality signals, trying to pinpoint bottlenecks in your on-call rotation, or just trying to understand how tech debt is growing relative to your feature velocity, Custom Metrics lets you define, measure, analyze, and visualize exactly what you care about.

While there’s no limit to what you can create, here are a few examples of what’s possible:

  • Escalation Rate by Service Ownership

    • Measure how often incidents are escalated between teams or service owners. This can help identify gaps in ownership clarity and incident response inefficiencies.

    • Integrations: Pull data from PagerDuty or OpsGenie combined with Cortex’s service catalog to match incidents with service ownership.

  • Refactoring Effort vs. Feature Development Ratio

    • Track the proportion of engineering effort spent on refactoring and tech debt reduction compared to new feature development. A high ratio might indicate that foundational issues are slowing down the team.

    • Integrations: Use Linear to track refactoring tickets and combine this data with GitHub PR comments to quantify the time spent.

  • Failed Deployments as a Percentage of Total Deployments

    • Go beyond tracking deployment frequency by monitoring the quality of deployments. This metric highlights how often deployments fail or cause regressions (see the sketch after this list).

    • Integrations: Integrate with your CI/CD systems (e.g., Jenkins, GitHub Actions) and observability tools like Datadog to measure deployment outcomes.

  • Time Spent in “Blocked” State

    • Measure how much time tickets spend in a blocked state to identify workflow or process issues that prevent smooth development.

    • Integrations: Track this using project management tools like Jira, Shortcut (formerly Clubhouse), or Asana and combine it with Git data to see if pull request reviews are the cause.

  • Incident Recurrence Rate

    • Calculate the percentage of incidents that recur for the same services or code areas. This metric helps spot areas of code that are inherently unstable or being repeatedly patched rather than truly fixed.

    • Integrations: Use data from PagerDuty or Datadog along with GitHub commit history to track repeated issues.

  • Review Influence Score

    • Measure how often specific team members’ code reviews lead to improvements in code quality. This can help identify strong reviewers or areas for mentorship.

    • Integrations: Analyze PR comments and code diffs from GitHub or GitLab to see how code changes evolve based on reviewer feedback.

  • Percent of Successful Builds

    • Track the percentage of successful builds to identify stability and reliability issues in your CI/CD pipeline.

    • Integrations: Connect CI/CD tools such as Jenkins, GitHub Actions, CircleCI, or GitLab. Add monitoring tools like Datadog or New Relic to track performance-related build signals.

  • Number of flaky tests

    • Track the number of tests that fail inconsistently without code changes to surface areas of the codebase or tests that are unreliable and need fixing.

    • Integrations: Use data from issue trackers that aren’t pre-baked in Cortex, like Linear, if flaky tests are being documented and tracked as technical debt.

  • Average time for pipeline build

    • Measure the average duration of your CI/CD pipeline runs to spot higher build times that could signal inefficiencies.

    • Integrations: Use CI/CD platforms like Jenkins to gather data on how long builds are taking. Combine with observability tools like Prometheus to monitor performance metrics that could affect build times (e.g., resource usage on build machines).

  • Code Coverage Metrics

    • Track the percentage of your codebase covered by automated tests—an essential metric for maintaining code quality.

    • Integrations: Use static code analysis and coverage tools like SonarQube or Codecov to measure and report on code coverage.
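To make one of these concrete, here is a minimal sketch of the failed-deployment percentage described above, following the same pre-calculate-then-ingest pattern as the earlier API example. The export format and field names are assumptions for illustration; in practice the outcomes would come from your CI/CD system (Jenkins, GitHub Actions, and so on).

    from dataclasses import dataclass

    @dataclass
    class Deployment:
        service: str
        succeeded: bool

    def failed_deployment_percentage(deployments: list[Deployment]) -> float:
        # Failed deployments as a share of total deployments, in percent.
        if not deployments:
            return 0.0
        failures = sum(1 for d in deployments if not d.succeeded)
        return 100.0 * failures / len(deployments)

    # Example outcomes exported from a CI/CD system (values are made up for illustration).
    recent = [
        Deployment("payments-service", succeeded=True),
        Deployment("payments-service", succeeded=False),
        Deployment("payments-service", succeeded=True),
        Deployment("payments-service", succeeded=True),
    ]

    value = failed_deployment_percentage(recent)  # 25.0
    # push_metric("payments-service", "failed-deploy-pct", value)  # reuse the hypothetical helper above

Once the value is ingested, it can be charted and trended in Eng Intelligence alongside deployment frequency and the other pre-baked metrics.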

Why Cortex Eng Intelligence with Custom Metrics?

Unlike other engineering intelligence solutions like Jellyfish or LinearB, Cortex tracks both output metrics (typically pertaining to developer activities) and input metrics (typically pertaining to software, tooling, or ecosystem health). Together, these views provide a much richer picture of your progress toward engineering excellence, as well as the means of setting goals and iterative paths forward for each, all without leaving a central system of record.

Additionally, while some IDPs have followed suit and now offer basic pre-defined engineering metrics from the integrations they support, only Cortex provides both pre-defined fields and the ability to add any metric from any source, and to visualize it in any manner via our robust plugin architecture.

Ready to build your own?

Using Cortex Custom Metrics is straightforward, whether you prefer to automate via the API or get creative in CQL. Get started today and start measuring what really matters. Looking for some inspiration? Get in touch with our team!

Talk to an expert today