What to Do After a Bad Performance Review

You got a bad performance review. The rating landed: Meets Expectations when you thought you'd hit Exceeds, or maybe something lower still. Now you're trying to figure out what actually happened and what to do about it.
The first instinct is usually to do something fast. Argue the rating. Update your resume. Go quiet and check out for the rest of the cycle. Almost none of those responses actually help. What does help is treating this like the information problem it usually is, and then taking specific, directed action.
Before you decide what to do, though, it's worth understanding what actually produced the rating, because most bad reviews aren't what they seem.
Why a bad review feels wrong (and often is)
The frustration is legitimate. The system implied a deal: do your job well, get recognized for it. Bad reviews feel like a violation of that deal.
The research on how ratings work complicates the story. A study by Scullen, Mount, and Goff, published in the Journal of Applied Psychology in 2000, found that 62–65% of the variance in performance ratings reflected the personal tendencies of the individual rater, not what the employee actually did. Only about 25% tracked actual job performance. Deloitte cited this research when it abolished its annual review system entirely.
That's not an argument that ratings are arbitrary. It's a reminder that the person writing the review carries most of the variance. How recently they saw your work, what was competing for their attention, and how well they understood what you were building all shape the number more than any objective record of your output does.
What happens to your work before anyone evaluates it
By the time a manager writes your review, they're reconstructing a six-to-twelve-month period from memory. Hermann Ebbinghaus's research on human memory, published in 1885 and replicated consistently since, showed that people forget roughly 70% of new information within 24 hours without reinforcement. Your manager has the same memory constraints you do. Work you shipped in Q1 that wasn't made visible again at review time essentially disappears.
Recency bias in performance ratings is well-documented. The last six to eight weeks before reviews carry disproportionate weight. A difficult late-cycle period, a canceled project, or a quiet stretch during review prep can pull a strong year down significantly.
On Team Blind, this pattern comes up in company after company. Engineers describe reviewing their output, then looking at what ended up in their manager's write-up: only the most recent work, the most visible work, and whatever stuck in memory for reasons unrelated to actual impact.
What's actually going on in most bad reviews
There's a distinction worth drawing: a bad review caused by a performance problem is different from a bad review caused by an evidence problem.
Performance problems are about the quality of your work. Evidence problems are about whether the system had the information it needed to evaluate that work accurately.
Most bad reviews at strong tech companies are the second type. Not all. Some engineers get feedback that their work genuinely wasn't at the expected level, and that deserves to be taken seriously. But a significant portion of bad reviews happen to engineers who did meaningful work and had no system for making it legible to the people evaluating it.
The calibration process compounds this. At most tech companies, your manager doesn't decide your rating alone. They take your case into a room with other managers and advocate for you against a finite budget of high ratings. That advocacy depends on how much specific, concrete evidence they have. If the evidence is thin, the default outcome is the median.
Understanding how calibration actually works changes how you read a bad rating. It's often less about what you did and more about what your manager could articulate in a room you weren't in.
The part that matters now
You can't change the rating that just landed. You can change what happens next. The engineers who recover fastest from bad reviews are the ones who figure out the real cause and address it specifically, rather than working harder on the same invisible track.
What to actually do
Sit with the feedback before you react to the rating
There's a difference between the rating and the feedback. The rating is a number. The feedback is the actual information.
Most engineers focus on the number and skim the written commentary. That's backwards. The written feedback, even when vague, usually contains something usable. What did your manager say was missing? Which examples did they choose to include, and which did they leave out?
Read it twice before reacting. Write down the specific phrases that felt wrong or surprising, separately from the ones that felt fair. This separates signal from noise before you decide what to do.
Request a debrief with specific questions
The post-review 1:1 is the most important conversation you'll have this cycle. Most engineers treat it as a formality and leave with nothing useful.
Bring specific questions:
- What would have made this a stronger review cycle for you?
- Which contributions did you weigh most heavily, and which were less visible to you?
- What's the one area where you need to see different work from me in the next cycle?
Write down the answers during the conversation. Send a follow-up summary afterward. This creates shared clarity on what "ready" looks like, before the criteria shift six months from now.
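The follow-up summary doesn't need to be long. Something like this works (the wording here is illustrative, not a script):
"Thanks for walking through the review with me. My takeaway: the gap is cross-team scope, and the strongest case for next cycle is owning the migration end to end. Let me know if I've misread anything."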
Engineers on Team Blind describe this conversation as a turning point:
"I stopped trying to relitigate the rating and started asking: 'What do you need to see?' Once I had a specific answer, I had something to work toward. Vague ratings with vague feedback just produce more of the same."
Diagnose whether the problem is performance or evidence
After the debrief, you should be able to answer one question: was this review a reflection of the quality of my work, or a reflection of how much of my work was actually visible?
If the feedback points to genuine skill gaps (technical depth, scope of ownership, cross-team influence), the path forward involves different work, not just more documentation.
If the feedback points to things your manager didn't see or couldn't articulate, the problem is visibility. The path forward is the same quality of work made more legible: regular written summaries of progress and impact, explicit connections between your output and team-level goals, more active communication about what you're building and why it matters.
Both are fixable. Conflating them leads engineers to either document busily without closing a real skill gap, or keep doing strong work while staying invisible.
Start building evidence now, not next review season
Whatever the diagnosis, the one universal response to a bad review is to start logging contributions immediately.
Not the week before reviews open. Now.
Every week, write down one or two things you shipped with measurable impact. Be specific: the latency that dropped, the incident that didn't happen, the tool three other teams started using. If you can't quantify it, describe the scope. If the scope was narrow, describe what you learned.
This document serves two people: you, when you're writing your self-review, and your manager, when they walk into calibration needing specific examples to defend your rating. You're building the evidence for a conversation that happens without you in the room.
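A single entry doesn't need to be polished. Something like this is enough (the specifics are illustrative):
"Week of March 3: Shipped retry logic for the payments queue; failed-job rate dropped from roughly 4% to under 1%. Wrote the runbook on-call used during Tuesday's incident."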
How to stop being invisible at work goes deeper on this pattern: why strong work stays invisible and the specific habits that close the gap before the next review cycle.
What engineers who recovered actually did
On Team Blind and r/ExperiencedDevs, engineers who recovered from bad reviews share a consistent pattern.
They asked for specific, written feedback and followed up in writing. They diagnosed whether the problem was performance-based or visibility-based before deciding what to change. They started a running log immediately after the review, updated weekly rather than reconstructed at the next review season. Several asked their manager directly: "What's the strongest case I can give you for next cycle?" and treated the answer as a project brief.
The engineers who stayed stuck followed a different path: they worked harder on the same invisible track, stayed quiet out of embarrassment or frustration, and expected the next cycle to produce a different result without changing the inputs.
A bad performance review is a data point, not a verdict. What you do with it matters more than the rating itself. If you had been actively building toward a promotion and this review interrupted that, how to recover from a failed promotion attempt covers the diagnostic work that applies when both a review and a promotion bid went the wrong way in the same cycle. And if your company is going through layoffs or restructuring at the same time, a low rating makes the urgency even sharper. The warning signs and the playbook for protecting yourself apply doubly when you're already on shaky ground.
Your next review cycle doesn't have to look like this one
CareerClimb logs your wins weekly, builds your evidence over time, and helps your manager walk into calibration with everything they need to fight for you. Download CareerClimb