CareerClimb
Self Review
Performance Review
Review Season
Impact
Quantify
March 1, 2026 · 8 min read

How to quantify impact in a self-review without metrics

The advice is easy: quantify your impact. Put numbers in your self-review. Show what changed because of your work.

But if you spent the last six months on a database migration, an internal tooling overhaul, or reducing on-call noise, you probably don't have a conversion rate to cite. You don't know how your work moved the company's revenue. The calibration panel is going to read your review and see "improved system reliability" and you're going to get a Meets All when you deserved Exceeds.

This isn't just an infrastructure engineer problem. It affects most engineers who work on platform teams, tooling, reliability, developer experience, data pipelines, or other behind-the-scenes work. The work matters. It's just harder to explain why it matters.

Here's how to quantify (or at least specifically describe) work that doesn't have an obvious dollar sign attached.

Why numbers matter in calibration

Before the frameworks, a quick reminder of why this matters. When your manager takes your self-review into a calibration room full of other managers, they need something to say out loud. "She improved system reliability" loses to "He reduced p99 latency from 800ms to 120ms" every time, because the second one is specific enough to defend under questioning.

The managers who win calibration arguments aren't the ones whose engineers had the most impressive projects. They're the ones with the most specific claims. Give your manager a specific, defensible number and you've made their job easier.

For more on how calibration works and why your self-review is your manager's script, read how performance review calibration works.

Framework 1: Time saved

Time is always measurable, even when money isn't. Ask yourself: before your change, how long did this task take? After your change, how long does it take now?

This works for:

  • CI/CD pipeline optimizations ("reduced build time from 22 minutes to 8 minutes")
  • Developer environment setup ("automated onboarding setup that reduced new hire ramp-up from 3 days to 4 hours")
  • Deployment processes ("eliminated manual deployment steps that saved the team roughly 2 hours per release")
  • Monitoring improvements ("reduced mean time to detection on incidents from 40 minutes to 6 minutes")
  • Review process changes ("standardized PR review checklist that reduced back-and-forth cycles from 4 to 1.5 on average")

You don't always need precise tracking to estimate this. You know roughly how long things took before. If you don't, ask teammates. Even directional estimates ("reduced this from hours to minutes") are more useful in calibration than no number at all.
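Turning a per-run saving into a calibration-ready number is back-of-the-envelope arithmetic. A minimal sketch, using the build-time example above (the run counts and durations are made-up placeholders, not real data):

```python
# Rough estimate of engineer-hours saved by a CI speedup.
# All inputs are hypothetical placeholders -- substitute your own numbers.

def hours_saved(before_min, after_min, runs_per_week, weeks=26):
    """Engineer-hours saved over a review period (default: half a year)."""
    saved_per_run = before_min - after_min           # minutes saved per build
    total_minutes = saved_per_run * runs_per_week * weeks
    return total_minutes / 60

# Build time cut from 22 to 8 minutes, assuming ~150 builds per week:
print(round(hours_saved(22, 8, 150)))  # prints 910
```

Even if the run count is a guess, "roughly 900 engineer-hours over the half" is a far stronger calibration line than "made CI faster."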

Framework 2: Error rates and reliability

Every bug prevented, every incident avoided, every regression caught before production has measurable impact, even if you're not tracking it in a dashboard.

Useful metrics:

  • Flaky test rate ("reduced flaky test rate from 18% to 3%, eliminating roughly 40 false CI failures per week")
  • Change failure rate ("our deployment change failure rate dropped from 12% to 2% after implementing staged rollouts")
  • On-call incident volume ("on-call incidents in the payments service dropped from 15 per month to 3 after the alerting overhaul")
  • Error rates in logs ("reduced 500 errors on the search endpoint from ~200/day to under 5/day")
  • P1/P2 incident count ("zero P1 incidents for 4 months following the database failover improvements")

The on-call incident reduction is particularly strong evidence because it's directly visible to other engineers. If your manager is building your calibration packet, they can verify it, and teammates will confirm it.
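Change failure rate, used in one of the examples above, is just a ratio over your deploy history. A sketch assuming a simple list of deploy outcomes (the outcome labels and counts are invented; adapt them to whatever your deploy log actually records):

```python
# Change failure rate from a list of deploy outcomes.
# "failed" here means any deploy that triggered a rollback, hotfix, or incident.

def change_failure_rate(outcomes):
    """Fraction of deploys that caused a failure."""
    failures = sum(1 for o in outcomes if o == "failed")
    return failures / len(outcomes)

before = ["ok"] * 88 + ["failed"] * 12   # 12% before staged rollouts
after  = ["ok"] * 98 + ["failed"] * 2    # 2% after

print(f"{change_failure_rate(before):.0%} -> {change_failure_rate(after):.0%}")
# prints 12% -> 2%
```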

Framework 3: Adoption and usage

If you built something (a library, a tool, an internal platform, a new API), who uses it and how much?

Adoption metrics work well for:

  • Internal tooling ("deployed to 15 teams within 2 months of launch")
  • Libraries and SDKs ("adopted by 8 of 12 product teams for auth handling, replacing 3 divergent custom implementations")
  • APIs and services ("now handles 2.3M requests per day across 6 dependent services")
  • Developer experience improvements ("89% of engineers surveyed rated the new local development setup as significantly faster")
  • Documentation ("most-viewed internal wiki page for the month following publication, with 3 teams crediting it for faster onboarding")

If you don't have dashboard metrics for adoption, even a count of Slack messages asking for help with the thing you documented, before versus after, is still a real number.

Framework 4: Scope of unblocking

Infrastructure and platform work often creates value by unblocking other teams. That value is real. Make the dependency chain explicit.

  • "My migration of the authentication service unblocked three product teams who had been blocked for six weeks, enabling two new features to launch in Q2."
  • "The data pipeline refactor I led cleared a critical dependency for the ML team's ranking model experiment, which had been paused for two months."
  • "Resolved the certificate rotation issue that had been preventing the mobile team from deploying to production. Their release was delayed 18 days; they shipped within 48 hours of my fix."

These statements don't require revenue numbers. They require you to know what you unblocked and how long the block lasted. If you tracked this during the cycle, even informally, you have the evidence. If you didn't, reach out to the teams you unblocked and ask what the delay cost them. They'll tell you.

Framework 5: Performance and capacity

Any metric touching latency, throughput, cost, or scale is quantifiable.

  • Latency: "Reduced p99 latency on the recommendations endpoint from 400ms to 85ms"
  • Throughput: "New batching logic increased event processing throughput from 10K to 90K events per second"
  • Infrastructure cost: "Identified and removed idle compute resources that reduced monthly cloud costs by approximately $8,000"
  • Scale: "Migrated the job scheduling service to support 10x the previous load limit, enabling the platform's expansion to enterprise customers"
  • Database performance: "Query optimization reduced average read latency from 1.2s to 180ms, which eliminated a category of timeout errors in the checkout flow"

The infrastructure cost metric is frequently overlooked. If you found and fixed something wasteful, someone knows the before and after numbers. Check with your infrastructure team or manager.

Framework 6: Team velocity and process

Did you change how the team works, not just what the team shipped? That's measurable too.

  • Cycle time: "Established a shared definition of done and PR review process that reduced average PR cycle time from 4.5 days to 1.8 days"
  • Deployment frequency: "We went from deploying twice a week to deploying on-demand after the feature flag infrastructure I built, roughly 3x more frequent"
  • Ramp-up time: "Mentored two new engineers through their first production launches. Both were fully independent within 8 weeks, versus the 4-5 month ramp we'd seen previously."
  • Code review throughput: "My standardized review guidelines reduced average comments per PR from 12 to 4 while maintaining quality, making reviews faster for everyone"

Team velocity improvements often require you to collect numbers yourself. Start now for next cycle: note cycle times before and after a process change, track how long PRs sit before review, ask teammates for their impression of ramp-up time.
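If you'd rather not eyeball cycle time, averaging it from a handful of opened/merged timestamps is a few lines. A sketch with invented dates; in practice you would pull the timestamps from your Git host's API:

```python
# Average PR cycle time (opened -> merged) from timestamp pairs.
# The PR data below is hypothetical example data.
from datetime import datetime

def avg_cycle_days(prs):
    """Mean days from PR opened to PR merged."""
    total = sum((merged - opened).total_seconds() for opened, merged in prs)
    return total / len(prs) / 86400  # seconds per day

prs = [
    (datetime(2026, 1, 5, 9), datetime(2026, 1, 9, 9)),    # 4 days
    (datetime(2026, 1, 12, 9), datetime(2026, 1, 17, 9)),  # 5 days
]
print(f"{avg_cycle_days(prs):.1f} days")  # prints 4.5 days
```

Run it on a sample of PRs from before and after your process change and you have the two numbers the claim needs.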

Framework 7: Risk reduction

Work that prevents bad things from happening is hard to credit in reviews because the bad thing didn't happen. Make the potential consequence explicit.

  • "Identified and patched a SQL injection vulnerability before it was exploited. The affected endpoint processed payment data for 400K users."
  • "Completed the SOC 2 logging requirements six weeks before the audit, eliminating the risk of a compliance gap that would have delayed our enterprise contract."
  • "Migrated 12TB of user data to the new infrastructure before end of quarter, removing the risk of a $40K/month legacy contract extension."
  • "Caught a data inconsistency in the billing logic during code review that, if shipped, would have undercharged approximately 3,000 enterprise customers."

Risk reduction statements work best when they name the thing that didn't happen and the scale of the avoided problem. "Caught a billing bug" is weak. "Caught a billing bug that would have affected 3,000 enterprise customers" is calibration material.

Framework 8: User and developer experience

Developer experience improvements are harder to measure but not impossible. Surveys, adoption data, and qualitative feedback can all serve as evidence.

  • Developer satisfaction: "Post-launch survey showed 78% of engineers rated the new CI setup as significantly faster than the previous one"
  • Tool adoption: "The internal debugging tool I shipped was voluntarily adopted by 22 of 30 engineers on the platform team within 6 weeks"
  • Support ticket reduction: "Improved error messages in the SDK reduced developer support requests by roughly 60% in the quarter following the release"
  • Documentation quality: "The new API reference I wrote reduced 'how do I do X' questions in the dev-tools Slack channel from roughly 8/week to 1-2/week"

For developer experience work, the signal is sometimes the absence of noise. Track the before: how often do people ask for help with this thing? After your improvement, check again.

What to do when you genuinely have no numbers

Some work is difficult to quantify even with these frameworks. If you led a major architectural refactor that improved maintainability but didn't yet produce measurable velocity improvements, or if you contributed to exploratory research that hasn't shipped anything, you may not have numbers. If your cycle was genuinely rough and the challenge is framing work that didn't go as planned, what to write in your self-review when you missed your goals addresses that specific situation.

In those cases, be specific about scope and ownership. Who was involved? What decisions were made? What complexity was managed? What would the alternative have looked like? Calibration reviewers are asking whether your work was above your expected level. Understanding what "Exceeds Expectations" actually means in that room helps you frame scope evidence even without hard numbers.

"Led the service decomposition design for the monolith-to-microservices migration. Defined service boundaries for 8 services, built consensus among 4 engineering teams, and produced the technical design doc the migration team is working from. The migration is now underway with a clear path." That's not a number. It's still specific enough to defend in calibration.

The version to avoid is the vague one: "Contributed to the architectural planning process for the migration." That gives your manager nothing to say in the calibration room.

Read the complete guide to writing your software engineer self-review for more on structuring impact statements, including how to connect them to level-appropriate scope claims.

Start tracking now, not in three weeks

Every framework here requires data. Some of it you can reconstruct from logs, commit history, or Slack archives. But the numbers that really land are the ones you recorded close to the event.

The p99 latency stat is compelling because it's precise. You can only write it precisely if you looked at the before number when you started the work and the after number when you finished. If you wait until review season, the dashboards may not go back far enough, the tickets will be closed, and the specific numbers will be gone.

Pick one thing you're working on right now. Note the baseline. When it ships, note the result. That's two minutes of documentation that becomes a calibration-ready impact statement in six months.
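The capture habit can be as simple as appending one line to a file. A sketch of that idea (the filename `wins.jsonl` and the field names are arbitrary conventions, not a prescribed format):

```python
# Append-only log of before/after numbers, one JSON object per line.
import json
import datetime

def log_win(path, project, metric, before, after):
    """Record a baseline/result pair the moment you know it."""
    entry = {
        "date": datetime.date.today().isoformat(),
        "project": project,
        "metric": metric,
        "before": before,
        "after": after,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Two minutes now; a calibration-ready impact statement at review time.
log_win("wins.jsonl", "ci-speedup", "build time (min)", 22, 8)
```

Any format works; what matters is that the before number exists somewhere other than a dashboard that expires.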


The engineers who write specific self-reviews aren't the ones who tracked everything obsessively. They're the ones who got into the habit of capturing the before and after when something mattered. CareerClimb helps you log those wins in real time, so when review season opens, the numbers are already there. Download CareerClimb
