I was recently cruising LinkedIn and saw Abi Noda from GetDX reference a paper Meta wrote in 2022 on improving code review time. Meta’s focus on this metric was sparked by a developer survey, but what they did as a result is even more interesting (at least to me!). Meta developed a “NudgeBot” to alert reviewers to take action on stale diffs. And it worked; both time to review and time in review went down.
So my question is, why stop there? Code reviews are just one part of a much larger software development lifecycle riddled with opportunities to “nudge” developers into alignment without creating noise. And with insight into all the details of your software state, Internal Developer Portals are a fantastic path to scaling a program like this.
Recapping Meta's study
Meta surveyed developers about their top pain points and learned that code review time had room for improvement. In response, they built NudgeBot to identify "stale" diffs, meaning those idle for more than 24 hours, and ping their reviewers. They A/B tested the bot across 30,000 engineers, and the results were clear (a bare-bones sketch of this kind of bot follows the list):
- Time spent in review decreased by 6.8%.
- Time to the first reviewer action improved by 9.9%.
- There was no statistically significant drop off in quality of reviews.
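To make the mechanism concrete, here's a minimal Python sketch of a NudgeBot-style check. It assumes a GitHub repository and a Slack incoming webhook; the repo name, environment variables, and threshold are placeholders, and this is my illustration of the idea, not Meta's actual implementation (which ran on their internal tooling).

```python
import os
from datetime import datetime, timezone

import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]            # placeholder: a GitHub API token
SLACK_WEBHOOK_URL = os.environ["SLACK_WEBHOOK_URL"]  # placeholder: a Slack incoming webhook
REPO = "your-org/your-repo"                          # placeholder repository
STALE_AFTER_HOURS = 24                               # the staleness threshold from the paper

def find_stale_prs() -> list[dict]:
    """Return open PRs with no activity in the last STALE_AFTER_HOURS hours."""
    resp = requests.get(
        f"https://api.github.com/repos/{REPO}/pulls",
        headers={"Authorization": f"Bearer {GITHUB_TOKEN}"},
        params={"state": "open"},
    )
    resp.raise_for_status()
    now = datetime.now(timezone.utc)
    stale = []
    for pr in resp.json():
        # `updated_at` is a rough proxy for activity: comments, pushes, and
        # labels all bump it, so this slightly undercounts truly stale diffs.
        updated = datetime.fromisoformat(pr["updated_at"].replace("Z", "+00:00"))
        if (now - updated).total_seconds() / 3600 > STALE_AFTER_HOURS:
            stale.append(pr)
    return stale

def nudge(pr: dict) -> None:
    """Ping the PR's requested reviewers in a shared Slack channel."""
    reviewers = ", ".join(r["login"] for r in pr["requested_reviewers"]) or "team"
    text = f"Nudge: <{pr['html_url']}|{pr['title']}> has been idle 24h+ (reviewers: {reviewers})."
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}).raise_for_status()

if __name__ == "__main__":
    for pr in find_stale_prs():
        nudge(pr)
```

Run something like this on a daily schedule and you have the core loop: detect staleness, notify the right people.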
NudgeBots for everything?
Meta's experiment showed us that a well-timed nudge worked wonders for incentivizing developer action. Of course, NudgeBot’s functionality was laser-focused on code review delays.
But what if we had a nudging mechanism for things like code coverage, vulnerability SLOs, package upgrades, containerization efforts… you see where I’m going with this.
So, should we just build a bunch of NudgeBots? That approach comes at a cost:
- Maintenance considerations: Each bot would require ongoing upkeep and tuning to remain effective, especially as workflows change.
- Integration considerations: Multiple bots can lead to fragmented toolchains, requiring constant coordination and integration between systems, especially when sources change.
- Scaling considerations: The model you'd use to evaluate, say, code coverage doesn't apply to most other use cases, which means you're building a new rule set for every bot.
- Reporting considerations: Meta had a team focused on this experiment, including measuring outcomes. Without a way to report on progress at scale, a nudging program could run indefinitely without demonstrating much impact.
In short, the NudgeBot was a success, but creating similar tools for every potential bottleneck would be a logistical challenge—hard to build, harder to maintain, and likely a drain on engineering resources.
How can IDPs scale an SDLC improvement program?
Internal Developer Portals (IDPs) were designed to centralize data from across your ecosystem—providing a live view of software and team state across multiple dimensions. Cortex’s IDP goes one step further, putting that data to work by enabling users to create active Scorecards of best practices or target metrics fed from any source.
Here’s how Cortex helps:
- Centralize everything: Cortex's 60+ integrations, plus support for custom data, mean the platform is always aware of what's happening in your services, resources, APIs, pipelines—whatever components make up the software you build.
- Build custom scorecards: Define your own set of best practices, expectations, or target metrics using any slice of data from across your stack. Combine security, observability, and documentation requirements into a Maturity Scorecard. Slice off all your compliance requirements into a Compliance Scorecard. Set simple flags for package upgrades in a Migration Scorecard. Target everyone and everything, or specific teams and entities. Whether you're enforcing a specific code review SLA or setting thresholds for technical debt, Cortex Scorecards can be adapted completely to your goals and environment.
- Set up automatic nudges: Set notification windows in Cortex so your team knows when a goal is a long-term nice-to-have… or an immediate need that should be prioritized above all else. Alert in Slack, Teams, or via email, and build inclusion or exclusion lists by team or component type. Because if it's not relevant, it's noise. (A tool-agnostic sketch of this pattern follows this list.)
- Auto-track alignment: Eliminate stand-ups and status meetings for anything from engineering excellence initiatives to critical EOL deadlines. Out-of-the-box reporting lets you chart progress and identify outliers automatically.
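Strip away the platform, and the pattern behind Scorecards plus nudges is simple to sketch. The Python below is purely illustrative; the rule names, data shapes, and thresholds are invented for this example, and this is not Cortex's actual configuration syntax.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    check: Callable[[dict], bool]  # predicate over an entity's synced integration data
    nudge_after_days: int          # notification window before a failure triggers an alert

# Hypothetical rules pulling from several sources into a single scorecard.
RULES = [
    Rule("Code coverage >= 80%", lambda e: e["sonarqube"]["coverage"] >= 80, 7),
    Rule("No critical vulnerabilities", lambda e: e["snyk"]["critical_vulns"] == 0, 1),
    Rule("On-call rotation defined", lambda e: e["pagerduty"]["has_rotation"], 14),
]

def rules_to_nudge(entity: dict, failing_for_days: dict[str, int]) -> list[str]:
    """Failing rules whose notification window has elapsed for this entity."""
    return [
        r.name
        for r in RULES
        if not r.check(entity)
        and failing_for_days.get(r.name, 0) >= r.nudge_after_days
    ]

# An example entity as it might look after syncing from integrations.
service = {
    "sonarqube": {"coverage": 73},
    "snyk": {"critical_vulns": 2},
    "pagerduty": {"has_rotation": True},
}
# Coverage has been failing for 3 days, the vulnerability rule for 2 days.
print(rules_to_nudge(service, {"Code coverage >= 80%": 3, "No critical vulnerabilities": 2}))
# -> ['No critical vulnerabilities'] (coverage is still inside its 7-day window)
```

The hard part isn't the evaluation loop; it's keeping the entity data fresh across dozens of sources, which is exactly what an IDP is for.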
What can you track and improve with Cortex?
Because Scorecards enable you to set any goal using data from anywhere, there's no real limit on use cases. A production readiness Scorecard, for example, can pull from multiple tools at once.
Here are 10 examples that might spark something for your team:
1. Time to Review (Code Review Cycle Time)
- Track: Time taken from a pull request (PR) being opened to its approval or rejection (a measurement sketch follows this list).
- Improve: Reduce the time between submitting a PR and receiving feedback to improve developer velocity.
- Potential Integrations or Custom Sources:
- GitHub, GitLab, Bitbucket: To track PR creation, review, and merge times.
- Jira: For issue tracking linked to PRs, giving insight into review cycle times.
- Slack, Microsoft Teams: To notify team members about pending reviews and improve turnaround times.
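As a quick illustration of the metric itself, this sketch computes hours-to-first-review for a single PR using GitHub's REST API. The repo name and token are placeholders.

```python
from datetime import datetime

import requests

GITHUB_TOKEN = "ghp_..."           # placeholder token
REPO = "your-org/your-repo"        # placeholder repository
HEADERS = {"Authorization": f"Bearer {GITHUB_TOKEN}"}

def parse(ts: str) -> datetime:
    """GitHub timestamps look like 2024-01-15T09:30:00Z."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def hours_to_first_review(pr_number: int) -> float | None:
    """Hours from PR creation to its earliest submitted review, if any."""
    base = f"https://api.github.com/repos/{REPO}/pulls/{pr_number}"
    pr = requests.get(base, headers=HEADERS).json()
    reviews = requests.get(f"{base}/reviews", headers=HEADERS).json()
    submitted = [parse(r["submitted_at"]) for r in reviews if r.get("submitted_at")]
    if not submitted:
        return None  # still waiting on a first review
    return (min(submitted) - parse(pr["created_at"])).total_seconds() / 3600
```

Aggregate this across PRs and teams and you have the same signal Meta optimized for.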
2. Production Readiness
- Track: Whether all software is ready for production, and whether it stays aligned to standards thereafter.
- Improve: Ensure consistent production readiness expectations and reduce red tape and time to production.
- Potential Integrations or Custom Sources:
- GitHub, GitLab, Bitbucket: To verify repository standards like required reviews and passing checks.
- Jira: To confirm no blocking issues remain open before release.
- SonarQube: To check code coverage percentages.
- Docs linked: To ensure SLOs are defined and rollback procedures are clear.
3. Service Ownership & On-Call Information
- Track: Ownership of software components or applications.
- Improve: Increase visibility of ownership and who's on-call.
- Potential Integrations or Custom Sources:
- Google, Okta: To sync ownership information.
- PagerDuty, Opsgenie: To integrate on-call and service ownership information.
4. Documentation
- Track: Presence of documentation for each component.
- Improve: Standardize and increase documentation coverage.
- Potential Integrations or Custom Sources:
- Confluence, Notion, GitHub Wiki: To track service documentation.
- Swagger, Postman: For API documentation and service contracts.
5. Deployment Frequency
- Track: Number of deployments or releases over time (a measurement sketch follows this list).
- Improve: Identify CI/CD pipeline bottlenecks.
- Potential Integrations or Custom Sources:
- Jenkins, CircleCI: To track deployment frequency.
- Kubernetes, AWS Lambda: For tracking actual deployment occurrences.
- Argo CD, Spinnaker: For deployment orchestration visibility.
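For illustration, here's a small sketch that approximates deployment frequency by counting successful builds of a deploy job per ISO week via Jenkins' JSON API. The host, job name, and credentials are placeholders, and the build-equals-deployment assumption won't hold for every pipeline.

```python
from collections import Counter
from datetime import datetime, timezone

import requests

JENKINS_URL = "https://jenkins.example.com"  # placeholder host
JOB = "deploy-production"                    # hypothetical deploy job name

# Jenkins' JSON API returns just the fields we ask for via `tree`.
# (By default `builds` covers recent builds only; `allBuilds` gives full history.)
resp = requests.get(
    f"{JENKINS_URL}/job/{JOB}/api/json",
    params={"tree": "builds[timestamp,result]"},
    auth=("user", "api-token"),              # placeholder credentials
)
resp.raise_for_status()

# Count successful builds per ISO week as a proxy for deployment frequency.
per_week = Counter()
for build in resp.json()["builds"]:
    if build["result"] == "SUCCESS":
        when = datetime.fromtimestamp(build["timestamp"] / 1000, tz=timezone.utc)
        year, week, _ = when.isocalendar()
        per_week[f"{year}-W{week:02d}"] += 1

for week, count in sorted(per_week.items()):
    print(week, count)
```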
6. Incident Response Time
- Track: Time to respond to and resolve incidents.
- Improve: Optimize incident management.
- Potential Integrations or Custom Sources:
- PagerDuty, Opsgenie, FireHydrant, incident.io: For tracking on-call incidents and response times.
- Slack, Microsoft Teams: For communication and collaboration during incidents.
- Datadog, New Relic: For monitoring and alerting on system performance.
7. Runbook Availability & Usage
- Track: Availability and usage of runbooks for troubleshooting incidents and performing operational tasks.
- Improve: Ensure runbooks are up-to-date, easy to access, and widely used during incidents or routine operations.
- Potential Integrations or Custom Sources:
- Confluence, Notion, Google Docs: For storing and managing runbooks.
- PagerDuty, Opsgenie: To link runbooks with incidents and track usage during incident resolution.
- Slack, Microsoft Teams: For quick access to runbooks during incidents and to facilitate collaboration.
- GitHub, GitLab: For versioning and tracking updates to runbooks.
8. Error Rates & Service Reliability
- Track: Error rates and reliability metrics (a query sketch follows this list).
- Improve: Improve time to act on service reliability issues, and set thresholds for issues and resolution time.
- Potential Integrations or Custom Sources:
- Datadog, New Relic, Sentry: For error tracking and performance monitoring.
- Grafana, Prometheus: To visualize and track reliability metrics over time.
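If your metrics live in Prometheus, a reliability check can be one PromQL query away. This sketch asks Prometheus' HTTP API for the 5xx error rate over the last five minutes; the host is a placeholder, and `http_requests_total` is a conventional metric name, not a universal one.

```python
import requests

PROM_URL = "https://prometheus.example.com"  # placeholder Prometheus host

# PromQL: fraction of requests returning 5xx over the last 5 minutes.
QUERY = (
    'sum(rate(http_requests_total{status=~"5.."}[5m]))'
    " / sum(rate(http_requests_total[5m]))"
)

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": QUERY})
resp.raise_for_status()
result = resp.json()["data"]["result"]
error_rate = float(result[0]["value"][1]) if result else 0.0
print(f"5xx error rate over the last 5m: {error_rate:.2%}")
```

Compare the result against a threshold in a scorecard rule and you've turned a dashboard into a standard.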
9. Security & Compliance Alignment
- Track: Security compliance and vulnerability details.
- Improve: Ensure ongoing compliance and reduce vulnerability remediation time.
- Potential Integrations or Custom Sources:
- SonarQube, Snyk: To set standards for code coverage, vulnerability volumes, and remediation times.
10. Tech Debt & Service Maturity
- Track: Technical debt (e.g., outdated libraries).
- Improve: Prioritize efforts to reduce tech debt.
- Potential Integrations or Custom Sources:
- SonarQube, Snyk: For identifying outdated dependencies and security risks.
- GitHub, GitLab, Bitbucket: To analyze commit history and detect aging codebases.
- Jira, Trello: To track technical debt issues and prioritize resolutions.
Conclusion
Nudges work if you can 1) target who receives them, and when, and 2) measure the impact. Cortex lets you scale a nudging program that keeps teams aligned to any standard of excellence without burdening developers.
Do you have a current or upcoming initiative that would benefit from real-time monitoring and active alignment? Schedule a custom demo today.