
Stephan Schmidt - December 2, 2025

Metrics and KPIs for Engineering

What metrics to choose and how to make them work


Everyone is talking about engineering metrics. I do think you should focus on impact, impact, impact, but CTOs are forced to talk about metrics, so here we go.

We first need to start with a small side note on vision and strategy - which should be the foundation of everything you do. Many people get vision and strategy wrong. They confuse an idea for a product with vision and they confuse writing down what they want to do with a strategy. Neither is what they think it is.

Vision is a golden place in the future, and it talks about your customers, not about you. Vision points outward. Strategy is your plan for how to get there. Strategy points inward. An example:

  • Vision: Make people see the view from Mt. Everest to clear their minds
  • Strategy:
    1. Things to achieve: Arriving at Mt. Everest, Basecamp I, Basecamp II, Summit
    2. Things to have: Icepick, tent, oxygen tank

People think metrics are some magical ingredient to success. They are not. Metrics are just numbers. There are two kinds of metrics, with two different goals:

  1. For your boss so they know you’re not wasting money and get the company the most for the money
  2. For you so you can manage your department

Goal: something to reach.

KPI: a check that you're doing well and are on track.

KPIs (Key Performance Indicators):

  • Engine working as expected
  • Meters gained last hour
  • Pulse
  • Oxygen level

The "K" stands for key: if you have 10 KPIs, not all of them can be key.

OKRs (Objectives and Key Results):

  • OKRs go neatly with strategy
  • Objectives:
    • Reach Basecamp II
    • Buy an icepick
  • Objective first, then Key Results - the biggest mistake is writing Key Results first
  • Second biggest mistake: making the Objective a number - the Key Results are the numbers
  • OKRs are about aligning the company: top-down OKRs
  • Numbers like revenue are results, not objectives. Numbers do not align: "10% revenue growth" means marketing, sales, product & tech can each do whatever they think is the right thing to increase revenue

Development productivity is hard to measure:

  • We don’t know what value is, and lines of code certainly aren’t it
  • Productivity metrics are easy to game, e.g. halving quality to double output
  • Non functional requirements become more important when the company grows: Engineering is about edge cases, scaling and long term maintainability
  • People neglect every other metric when you put too much focus on some metrics

Development metrics:

  • Time to market (from idea to feature flag activated)
  • Bugs / Developer / Sprint

Store everything in a data warehouse so you can correlate it with other metrics (e.g. revenue). Put in git commits, file changes, CI/CD runs, local build times, CI/CD build times, deployments, releases, feature flag activations, etc. CTOs hardly utilize the company’s data warehouse at all!
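A minimal sketch of this idea, using SQLite as a stand-in for the warehouse and a hypothetical single events table (the schema, event types, and team names here are my assumptions, not a prescribed model):

```python
import sqlite3

# Hypothetical schema: one generic events table. In practice this
# would live in the company's warehouse (BigQuery, Snowflake, ...).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE engineering_events (
        event_time TEXT,   -- ISO timestamp
        event_type TEXT,   -- 'commit', 'ci_run', 'deployment', 'flag_activated', ...
        team       TEXT,
        detail     TEXT    -- commit hash, pipeline id, flag name, ...
    )
""")

# Example events; the span from idea to 'flag_activated' is time to market.
rows = [
    ("2025-11-03T09:00:00", "commit",         "checkout", "a1b2c3"),
    ("2025-11-03T09:12:00", "ci_run",         "checkout", "pipeline-481"),
    ("2025-11-03T10:05:00", "deployment",     "checkout", "release-2025.45"),
    ("2025-11-04T08:30:00", "flag_activated", "checkout", "new-payment-flow"),
]
conn.executemany("INSERT INTO engineering_events VALUES (?, ?, ?, ?)", rows)

# Once everything is in one table, "deployments per team" (and later
# joins against revenue tables) is just a query.
for team, deployments in conn.execute("""
    SELECT team, COUNT(*) FROM engineering_events
    WHERE event_type = 'deployment' GROUP BY team
"""):
    print(team, deployments)  # checkout 1
```

The design choice that matters is the single generic event stream: every new signal (feature flags, build times) is one more event type, not a new pipeline.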

Two ways to get metrics:

  • Measuring (e.g. compile times)
  • Surveys (CTOs do not use surveys enough)

The difficulty of comparing metrics: as soon as you create numbers, people will compare them. But numbers have no context; when creating them, all context is stripped. One team has 58 PRs/week, the other has 10 PRs/week. Which team is better? The second? The second team works on a complex, brownfield backend integration project; the first works on a greenfield frontend project with no integrations.

Developer Metrics Catalog

Developer Productivity Metrics

  • Developer Engagement: Measures how excited and stimulated developers feel about their work. Closely linked to productivity—“happy developers are productive developers”—and serves as a counterbalance to speed-focused metrics.
  • Ease of Delivery: A qualitative measure tracking how easy or difficult developers feel it is to do their jobs, covering cognitive load and feedback loops.
  • Perceived Productivity: Self-reported measure of how productive developers believe they are.
  • Weekly Time Loss: The average percentage of developer time lost to obstacles in the work environment. Can be translated into dollar costs for business leaders.
  • Focus Time (Deep Work): Measures the amount of uninterrupted time engineers have (e.g., “Average Number of Days with Sufficient Focus Time” or “Weekly Focus Time per Engineer”).
  • Developer NSAT: Net User Satisfaction (#satisfied − #dissatisfied) measures how happy developers are overall with development systems.
  • Developer CSAT: Customer Satisfaction, a score derived from surveys measuring satisfaction with specific tools and services developers use.
  • Developer Experience Index: Aggregate score based on multiple metrics, used directionally rather than as a target.
  • Adoption Rate: Tracks how many developers are actively using a specific tool or service, or adhering to established standards. Especially useful for AI tools (Cursor, Claude, Gemini CLI, …)
  • Time to First Release: Time for a developer on a new machine to set up and start the environment, write code, push to git, get through the CI/CD build, and deploy to production.
  • Time to 1st and 10th PR: A velocity metric used to measure onboarding speed for new hires.
  • Weekly PRs per Developer: A quantitative volume metric tracking the number of Pull Requests submitted per engineer.
  • Design Docs Generated per Engineer: Tracks how frequently developers write design docs before meaningful work begins, to reduce total project time.
  • Experiment Velocity: Measures learning speed by tracking how many experiments are started, stopped, and their positive hit rates.
  • % of PRs Under 250 Lines: A quality metric tracking code chunk size, encouraging smaller, more manageable peer reviews.
  • Speed, Ease, and Quality Framework: Three dimensions used to evaluate productivity that exist in tension with one another, helping surface potential tradeoffs. Each dimension uses specific metrics depending on what’s being measured.
  • SPACE Metrics: A framework (Satisfaction, Performance, Activity, Communication, Efficiency) used to categorize various productivity measurements.
  • Developer Build Time (P50/P90): Measures the time developers spend waiting for local builds to finish during development.
  • Code Reviewer Response Time: Measures the time (in business hours) it takes for code reviewers to respond to a code review update.
  • Post-Commit CI Speed: Measures the time (in minutes) it takes for a commit to get through the Continuous Integration (CI) pipeline.
  • CI Determinism: Measures the likelihood that a test suite’s result is valid and not a “flake” (the opposite of test flakiness).
  • Deployment Success Rate: Measures the percentage of deployments to production that succeed without failure.
  • Change Failure Rate: A DORA metric measuring the percentage of changes/releases that result in degraded service or require remediation (hotfix, rollback, patch).
  • Time to Restore Services: How long it takes to recover from a service impairment or outage.
  • Lead Time: The time from code commit to production deployment.
  • Time to Market: The time from idea to feature flag activation.
  • Deployment Frequency: How often code is deployed to production.
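Two of the metrics above, NSAT and Change Failure Rate, reduce to simple arithmetic. A minimal sketch with made-up survey and release numbers (real inputs would come from your surveys and deployment/incident records):

```python
# Made-up survey responses for illustration.
survey = ["satisfied", "satisfied", "neutral", "dissatisfied", "satisfied"]

# Developer NSAT: #satisfied minus #dissatisfied
# (often also normalized by total responses).
satisfied = survey.count("satisfied")
dissatisfied = survey.count("dissatisfied")
nsat = satisfied - dissatisfied
print(nsat)  # 2

# Change Failure Rate (DORA): share of releases needing remediation.
releases = 40  # releases this quarter (made up)
failed = 3     # needed a hotfix, rollback, or patch (made up)
change_failure_rate = failed / releases
print(f"{change_failure_rate:.1%}")  # 7.5%
```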

Developer Quality Metrics

  • Bug Frequency: The number of bugs per developer per time unit (e.g. per week or per sprint).
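The bug-frequency metric is a simple ratio. A sketch assuming a hypothetical bug-tracker export where each bug is tagged with the sprint it was found in, and a constant team size (both assumptions for illustration):

```python
from collections import Counter

# Hypothetical export: one entry per bug, tagged with its sprint.
bug_sprints = ["S24", "S24", "S24", "S25", "S25"]
team_size = 5  # assumed constant across sprints

bugs_per_sprint = Counter(bug_sprints)
for sprint, count in sorted(bugs_per_sprint.items()):
    per_dev = count / team_size
    print(sprint, per_dev)  # S24 0.6, then S25 0.4
```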

About me: Hey, I'm Stephan. I coach CTOs, drawing on 40+ years of software development and 25+ years of engineering management experience. I've coached and mentored 80+ CTOs and founders. I've founded 3 startups, with 1 nice exit. I help CTOs and engineering leaders grow, scale their teams, gain clarity, lead with confidence, and navigate the challenges of fast-growing companies.

Join 5000 CTOs and engineering managers for weekly insights from CTO Coach Stephan Schmidt - just me, no ads, no sponsorships, free.