Google performance review process for software engineers

You've been at Google for 18 months. Your tech lead thinks your work is solid. Your manager says you're "on a good trajectory." Then perf results come back: Consistently Meets Expectations. Again.
That's the Google performance review experience for a lot of engineers who are doing genuinely good work but not playing the system correctly. The process rewards specific behaviors, and if you don't know what those behaviors are, you can spend two or three cycles being surprised.
This guide covers how Google's review cycle actually works: the timeline, the ratings, the calibration process, and what current and former Googlers say separates the engineers who advance from the ones who plateau.
Google's performance review structure at a glance
Google runs two full review cycles per year, referred to as H1 and H2. These are separate, full cycles, not a mid-year check-in and a year-end review.
| Cycle | Perf period | Self-review window | Results |
|---|---|---|---|
| H1 | January through June | ~late June to mid-July | August |
| H2 | July through December | ~late October to mid-November | January/February |
Each cycle includes a self-review, peer reviews (you select nominators; your manager approves the list), and a manager assessment. Calibration happens after submissions close.
Google's leveling structure for software engineers
Understanding your level is essential to understanding what "good" looks like in Google's review system. The expectations at each level are different, and calibration explicitly evaluates whether you're operating at, above, or below the expectations for your band.
| Level | Title | Typical time at level | Promotion trigger |
|---|---|---|---|
| L3 | Software Engineer | 1-3 years | Demonstrating L4 scope consistently |
| L4 | Software Engineer | 2-4 years | Owning projects end-to-end; L5 scope emerging |
| L5 | Senior Software Engineer | 3-6 years | Leading cross-team work; direct impact on product |
| L6 | Staff Software Engineer | 4+ years | Influencing org-level direction; often requires "Superb" |
| L7+ | Senior Staff and above | Variable | Organizational impact, leadership at scale |
Promotions above L6 typically require committee review and are distinct from the standard cycle.
The ratings scale: what they actually mean
Google's official ratings scale has four tiers:
- Needs Improvement (NI): performance is below the expected bar for the level
- Consistently Meets Expectations (CME): solid performance at the expected level
- Consistently Exceeds Expectations (CEE): operating clearly above the expected level for the role
- Superb: exceptional impact; only a small number of engineers receive this in any cycle
CME is a passing grade, not a celebration. For most engineers aiming at promotion within a reasonable timeline, CEE is the target. Superb is reserved for contributions that visibly moved the needle, at significant scale, on something Google cares about.
What the ratings budget looks like in practice
Google does not publish official rating distributions. Anecdotally, Blind discussions from verified Google engineers suggest that the majority of engineers receive CME in a given cycle, CEE is competitive, and Superb is rare.
In recurring Team Blind discussions, Google engineers report that a single CEE often does not move the promotion conversation much. The repeated anecdotal pattern is that two or three consecutive CEEs in the same level band, alongside a strong packet, tend to trigger L4→L5 promotion discussions.
Understanding what calibration committees actually look for in an "Exceeds" candidate helps explain why CEE requires more than solid execution.
How the review cycle actually works
Phase 1: Self-review and peer nomination (weeks 1-3 of the window)
You open the review tool (Google Grow or the equivalent internal system) and write your self-review. You also nominate peers who will write feedback about you. Your manager approves or adjusts the list.
Your peer nominators matter. Choose people who saw your work directly: the engineer who relied on your API, the PM whose launch you supported, the TL who watched you debug the production incident at 2am. Avoid nominating people who will write generic positive feedback. Calibration reviewers can spot vague peer reviews. For a practical guide on what specific, evidence-based peer feedback actually looks like, read how to write peer feedback that works in calibration.
Your self-review should address three things: what you built or contributed, what impact it had on the team or product, and whether you operated at or above your level band. The last point is the most commonly missed.
Phase 2: Peer reviews (weeks 2-4 of the window)
Your nominated peers write their feedback in parallel. You'll also be writing feedback for any peers who nominated you as a reviewer.
Peer reviews feed your calibration packet alongside your self-review and manager assessment. Strong peer reviews are specific and behavioral. "She consistently drove clarity on ambiguous projects" is useful. "Great engineer, always helpful" is not.
The peer review you write for others also reflects on you. Calibration reviewers sometimes look at the quality of feedback a candidate gives as a signal of their judgment and communication level.
Phase 3: Manager assessment
Your manager writes their own assessment of your performance, separate from your self-review. This is the narrative they'll defend in calibration.
At this stage, your relationship with your manager becomes more critical than almost anything else. If your manager understands what you did this cycle and why it matters, they can build a strong argument. If they're vague on the details, they'll default to a middle-of-the-road assessment that survives calibration without requiring a fight.
Regular manager 1:1s where you explicitly share what you're working on and what it's producing are not just good practice; they're how you build the foundation for a strong manager assessment before the window even opens.
Phase 4: Calibration
After submissions close, your manager takes your complete packet into a calibration committee. At Google, calibration typically involves multiple managers reviewing one another's teams and discussing whether ratings are appropriate relative to the level expectations and compared to peers.
This is where your written self-review either helps or hurts. The calibration panel includes people who don't know you. They're reading your packet cold, looking for evidence that your stated impact is credible and that the proposed rating is defensible.
Ratings do get changed in calibration, in both directions. An engineer who wrote a vague self-review might see a CEE become a CME because the panel couldn't find a compelling reason to defend the higher rating. An engineer with a specific, evidence-backed review sometimes gets bumped when their manager advocates forcefully.
A pattern that shows up consistently in Blind threads from verified Googlers: managers who win calibration arguments are the ones who can point to a specific sentence from the engineer's self-review. The managers who struggle are the ones who have to fall back on "they're really solid" without specifics to back it up.
Phase 5: Results and feedback
Results typically come via the review tool, followed by a 1:1 with your manager to discuss. Calibration discussions are confidential, so you generally won't know what was said in the room, who argued for or against your rating, or how close a decision was.
What you do get is the rating and your manager's summary assessment. Use the post-results conversation to understand what would have produced a different outcome and what to focus on in the next cycle.
What the handbook says vs. what actually happens
There's a version of the Google performance process that lives in internal documentation and manager training. Then there's what experienced Googlers actually observe.
| "Official" version | What Googlers actually report |
|---|---|
| Peer nominators are approved by your manager to ensure balance | In practice, the people who write the best feedback for you are the people you've invested in most. Your peer list reflects your relationships. |
| Calibration ensures fair and consistent ratings | Calibration is influenced by manager advocacy skills and existing relationships in the room. A senior manager who's well-regarded carries more weight than a new manager. |
| CME means you're performing well at your level | CME for two or more consecutive cycles at the same level typically signals to the promotion process that you're not ready to advance. It's a baseline, not a positive signal. |
| Promotion is based on demonstrated impact | Promotions also require a sponsor, a promotion packet, and often a committee vote. Strong impact without a sponsor rarely produces a promotion above L5. |
| Feedback should address specific behaviors | Many engineers receive vague feedback (e.g., "could improve communication") with no specifics. Asking your manager to help you interpret vague feedback is normal and expected. |
The practical takeaway: the process is fair in structure and imperfect in execution, like most human systems at scale. The engineers who navigate it best are the ones who work with the system rather than waiting for it to work for them.
Common mistakes Google engineers make
- Writing a task list instead of an impact narrative. "I worked on the search relevance pipeline, the auth service, and three cross-team initiatives" is not a self-review. What changed because of your work?
- Skipping level-appropriate scope claims. That leaves the judgment to the calibration panel, which is asking: is this person operating at L4 or L5? If you don't address it, your manager has to improvise.
- Nominating peers who will write generic reviews. "Fantastic to work with" doesn't survive calibration. Choose people who saw specific work you did.
- Waiting until the review window opens to think about the cycle. This is the most common mistake: the engineers who write strong self-reviews logged wins in real time, while the ones who scramble are working from memory.
- Treating your manager's assessment as their problem. If your manager doesn't know what you did this cycle, the calibration packet will reflect that gap.
- Reading a single CME as a stable place to sit. CME is a passing grade; if you want to advance, you need a plan to build a CEE-level case. For comparison, Meta's PSC process enforces a much more explicit timeline, automatically starting a check-in clock after a single below-bar rating.
- If you've been passed over despite strong ratings, how to recover from a failed promotion attempt covers the steps to reposition your case.
What engineers who got promoted at Google actually did
Across Blind threads, Reddit discussions in r/ExperiencedDevs, and career forum posts from verified Googlers, some consistent patterns show up among engineers who moved through the levels.
They built scope before asking for the title
The engineers who advance don't wait for permission to work at the next level. They identify what L5 work looks like from their L4 position and start doing it. By the time the promotion conversation starts, they've been operating above the bar for at least two cycles.
They maintained a live brag document all year
Not a list of tasks. A document of impact statements. Every week or two, they added a line: what they shipped, what changed because of it, what they unblocked. When self-review season opened, they had a database to draw from, not a memory to excavate.
They had an explicit promotion conversation with their manager early
Not in the week before perf, but at the start of the cycle. "What does a CEE look like for me this period? What would make my case undeniable?" Engineers who got promoted typically had this conversation multiple times across multiple cycles. They weren't surprised by the bar.
They invested in peer relationships deliberately
Not to be liked. To be known. Calibration depends on peer feedback. The engineers who got strong feedback chose collaborators and then actually collaborated visibly: driving clarity on shared projects, owning the cross-team communication, being the person others mentioned when asked who made their work easier.
Timeline expectations for promotion at Google
The following is an approximate timeline based on patterns reported by Google engineers. Individual results vary based on manager, team, level, and available promotion budget.
| Level jump | Typical range | What usually precedes it |
|---|---|---|
| L3 to L4 | 12-18 months | Consistent ownership, L4 scope beginning to show |
| L4 to L5 | 18-30 months | 2+ CEE cycles, end-to-end project ownership, often cross-team impact |
| L5 to L6 | 24-48 months | Demonstrated org-level influence, strong sponsor, sometimes requires Superb |
| L6 to L7 | 3-6+ years | Large-scale organizational impact, senior leadership support |
Promotions are also influenced by team and org budgets in any given cycle. Being ready is necessary but not sufficient on its own. Timing matters.
Google runs two perf cycles per year. That's two opportunities to build your case, two calibration rounds, two shots at putting evidence in front of a panel that decides your trajectory. CareerClimb helps you track your wins throughout the cycle so you're always prepared, not scrambling when the window opens. Download CareerClimb