February 25, 2026 · 10 min read

How performance review calibration actually works

Here's something most engineers don't know: submitting your self-review is not the last step in the performance review process. It's the first.

After you hit submit, your manager takes what you wrote into a room full of other managers. They argue over who gets which rating. There's a budget. Somebody loses. Your self-review is the script your manager uses to fight for you; if it's thin, they walk into that room with nothing useful to say.

This process is called calibration, and almost every major tech company runs some version of it. Understanding how it works doesn't just change how you think about review season. It changes how you write your self-review, how you communicate with your manager all year, and how you build your promotion case.

What calibration is

Calibration is a structured meeting (or series of meetings) where managers align on performance ratings across their teams. The goal is to reduce bias, to make sure a "Meets Expectations" at Google means roughly the same thing whether you're on a search team in Mountain View or an infrastructure team in New York.

In practice, it's a negotiation. And like any negotiation, the person with the most preparation and the most credibility tends to win.

How it varies by company

The mechanics differ, but the core dynamic is consistent: your manager defends your rating against a room of peers, and there's a finite budget for how many people get each tier.

Company | Calibration structure | Rating budget | What managers argue from
Google | Panel calibration; manager presents each report to a calibration committee | Yes. Limited "Superb" and "CEE" slots | Peer reviews, self-review, manager assessment
Meta | PSC (Performance Summary Cycle) calibration; manager + calibration manager review each level's ratings | Yes. "Exceeds" and "Redefines" are capped | Self-review, peer feedback, manager write-up
Amazon | OLR (Organizational Leadership Review) each quarter; annual perf calibration in Jan | Yes. Forced distribution at higher ratings | Leadership Principle evidence, manager narrative
Microsoft | Rewards calibration in September; manager reviews across team | Yes. Calibrated against peers in same band | Manager assessment, some peer input
Stripe | Semi-annual calibration; smaller calibration groups | Less formal, but budget still exists | Manager write-up, peer feedback

For a full breakdown of how any one company runs its cycle, the Google performance review guide and the Uber performance review guide both walk through how calibration connects to compensation and promotion decisions at those companies.

What happens in the room

The details vary by company, but the general sequence looks like this.

Before the meeting: your manager builds a case

Managers come into calibration having already written a short narrative for each person they're evaluating. At most companies, this narrative draws heavily from your self-review. If you gave your manager detailed evidence of your impact, they have material to work with. If you gave them a vague list of tasks, they have to fill in the blanks themselves.

This is why the quality of your self-review matters beyond just checking a box. You're essentially ghostwriting your manager's pitch.

During the meeting: the negotiation

Managers present their team's ratings one by one. Others in the room push back. Common pushback:

  • "That impact sounds like L5 work, not L6"
  • "Three people on your team are claiming the same project as their top contribution"
  • "The bar for 'Exceeds' this cycle should be [specific example]. How does this person compare?"

The managers who win these arguments are the ones who can get specific. The managers who lose are the ones who resort to "she's just really great" without evidence.

Software engineers on r/ExperiencedDevs frequently describe the frustration of finding out their manager didn't advocate for them in calibration. One common thread: "My manager told me I was doing great all year, but said they 'didn't push back' when my rating got downgraded in calibration. I had no idea that was even a thing."

The budget constraint

At most tech companies, each rating tier has a cap. The number of people who can receive "Exceeds Expectations" or its equivalent is not unlimited. It's a percentage of the team, decided before the meeting starts.

This means that in a strong cohort, good work might not be enough. Your manager has to argue that your work was stronger than someone else's. That argument needs specifics: project impact, scope expansion, problems you solved that others couldn't. Understanding what "Exceeds Expectations" actually requires helps clarify what kind of evidence survives that challenge.

On Team Blind, where engineers are verified by company email, the rating budget is one of the most common sources of frustration. Google engineers regularly report that ratings of CEE (Consistently Exceeds Expectations) and above are informally capped, and that managers with more influence in the room tend to protect their reports' ratings more effectively than newer managers.

After the meeting: what changes

Ratings sometimes get moved in calibration. An engineer expecting a strong rating gets adjusted down because their peer's contribution to the same project looked similar and someone had to give way. An engineer who might have gotten a middling rating gets a bump because their manager made a compelling case.

In some companies, you'll never know exactly what was said. The rating you see is the post-calibration result. If the outcome was worse than expected, what to do after a bad performance review covers how to respond constructively and build a stronger case for the next cycle.

Why your self-review is your manager's ammunition

Most engineers write their self-review to satisfy a form. They list what they worked on, check the boxes, and submit. That's a missed opportunity.

Your manager needs three things from your self-review to defend your rating:

  • A clear statement of the impact you had, not just what you built, but what it meant for the team or product
  • Evidence of scope: did you work at your level, above it, or below it?
  • A narrative that holds up under questioning from people who don't know you

If your self-review doesn't give them those things, they either improvise or they don't fight very hard.

Here's what the difference looks like in practice:

Self-review entry | How it lands in calibration
"Worked on the authentication service refactor" | No impact. What did the refactor change? What broke before that doesn't break now?
"Collaborated with the frontend and backend teams" | Not a contribution. Every engineer collaborates. What did you specifically contribute?
"Improved query performance across several endpoints" | Vague. By how much? Which endpoints? What was the effect on latency or user experience?
"Led the migration of the user data pipeline to the new infrastructure, reducing p99 latency from 800ms to 120ms" | Specific, measurable, attributable. This is what a manager can say out loud in a meeting.

The last entry is ammunition. The first three are things your manager has to hope nobody questions.

What managers actually look for when preparing for calibration

When your manager sits down to write their narrative before the meeting, these are the questions they're trying to answer:

  • What is the clearest proof that this person worked at their expected level or above it?
  • If someone challenges this rating, what's my best example?
  • How does this person's impact compare to others in the same band I'm also arguing for?
  • Is there anything in this person's self-review that would confuse the calibration panel?

Your job is to make the first two questions easy to answer and the last two questions irrelevant.

What calibration means for promotion

At most tech companies, promotion decisions are separate from the review cycle but heavily influenced by it. A strong calibration rating builds the case. A middle-of-the-road rating, especially two or three cycles in a row, raises questions about readiness. If you've already been passed over, what to actually do after getting passed over for promotion covers the recovery steps in detail.

Some companies tie promotions directly to a review cycle rating. At Meta, for example, a "Greatly Exceeds" or "Redefines" in a PSC cycle is often a prerequisite for promotion discussions. At Google, strong calibration ratings are evidence that goes into your promotion packet.

At Amazon, calibration feeds the OLR process each quarter, where talent health and trajectory are discussed even outside of formal reviews.

This is why calibration isn't just about this cycle's rating. It's about the narrative that builds over multiple cycles. Each cycle is an opportunity to strengthen your case, or to let it drift.

How to prepare before the next review opens

You have more influence over calibration than you think, and most of it happens during the cycle, not during the three weeks you spend writing your self-review.

  1. Document your wins in real time. The specifics that matter in calibration come from your own logs. "In Q2, I reduced deployment time by 40% by consolidating the pipeline" is a statement you can only make if you tracked it when it happened.
  2. Have regular conversations with your manager about what a strong rating looks like. Not just "how am I doing" conversations, but "what would make a strong case for X rating this cycle" conversations. Do this at the start of each cycle, not two weeks before reviews are due.
  3. Understand the level expectations you're being evaluated against. The thing that gets engineers downgraded in calibration is doing solid work at their current level when the room is asking whether they're operating above it. Know the bar explicitly.
  4. Write your self-review as a brief for your manager, not a personal summary. Every line you write should be usable ammunition. Vague is worse than nothing.
  5. Ask your manager directly what they need from your self-review. "I want to make sure what I write is helpful to you in calibration. What do you need?" Most managers will tell you.

The engineers who consistently get strong calibration results aren't the ones who work the hardest. They're the ones who document continuously, communicate clearly, and give their managers real material to work with. CareerClimb helps you build that documentation habit all year, so when the self-review window opens, you're not scrambling.

Download CareerClimb
