How to write peer feedback for performance reviews

When your manager walks into calibration with a stack of peer feedback, most of it is useless. "Great collaborator." "Strong technical skills." "Always willing to help." These phrases appear in dozens of submissions, they don't distinguish anyone, and no calibration manager can cite them to defend a rating.
You've probably received feedback like that too, and noticed it didn't tell you anything. The goal of this guide is to make you better at writing peer feedback: not because it's a morally important task (though it is), but because specific, evidenced peer feedback is the kind that actually moves ratings, gets cited in calibration, and gives your peers the honest signal they need to grow.
What managers do with peer feedback in calibration
Understanding how peer feedback is used changes how you write it.
In most review systems, peer feedback doesn't directly set a rating. Your manager reads it, looks for patterns, and uses it to support or challenge the rating they're inclined to give. If five peers independently describe the same strength (say, "she identifies problems no one else saw"), that becomes credible calibration evidence. If five peers write vague positives about someone, the feedback tells the manager nothing new.
This means the useful question to ask when writing feedback isn't "what's nice to say about this person?" It's "what specific thing happened that my manager wouldn't know about from their own observations?"
Your manager sees your peer's code reviews. They don't necessarily see how your peer handled a disagreement in a design review, mentored a junior engineer through a hard debugging session, or quietly unblocked a dependency that was blocking two other teams. If you were there and your manager wasn't, that's what your feedback should capture.
For more on how calibration rooms work, read how performance review calibration actually works.
The structure that works
Good peer feedback has four parts. They don't need to be labeled. They can flow as prose. But each part needs to be present.
The situation: what was the context? Name the project, the incident, the design review, the quarter. "During the Q3 migration" or "in the authentication service refactor" is specific enough. "Recently" is not.
The behavior: what did your peer actually do? Not personality traits, not intentions. Observable actions. "She wrote the initial design doc, ran three rounds of review with stakeholders from two other teams, and revised it twice based on their input" is behavior. "She's a great communicator" is a label.
The impact: what changed? For calibration, this is the most important part. "The design doc became the foundation for the team's implementation plan and reduced scope disagreement during sprint planning for the next six weeks" is impact. "It was really helpful" is not.
Growth area (optional): not every piece of feedback needs a criticism. But if you have a specific, constructive observation, include it. "One area I'd suggest developing: the initial estimates were off by about 3x, which affected our sprint planning. Building in a spike to prototype before estimating would help" is useful. "Could improve estimation" is not.
Before and after
Here's the same observation written two ways:
Weak: "She's a great collaborator and always helps when asked."
Strong: "When I was blocked on the payments service timeout issue in November, she spent three hours working through it with me, identified that the root cause was in the retry logic (not the timeout config I'd been focused on), and helped me write a test that would have caught it earlier. I would have lost another two days without that session."
The first version could describe anyone. The second version describes one person at one time doing one specific thing. Only the second is useful in calibration.
Another example:
Weak: "Strong technical skills, especially in system design."
Strong: "In the October architecture review, he proposed a caching approach that reduced the estimated infrastructure cost by about $12K per month and was simpler to implement than the approach the team had been planning. The proposal required him to understand two systems he hadn't worked on before, which he did on his own in about a week."
How to handle feedback for underperformers
This is the part engineers avoid, and the avoidance is what makes 360 feedback systems less useful than they could be.
If your peer has genuine weaknesses, the honest, specific version is better for everyone. Not because criticism is good, but because vague positive feedback on someone who's struggling doesn't help them, doesn't help their manager, and inflates calibration signals across the whole org.
The key is framing around observable behavior and impact, not personality or effort. You don't know how hard someone is trying. You do know what happened in the projects you worked on together.
Specific and constructive: "In the three PRs I reviewed from his work on the batch processing system, the error handling was incomplete in ways that caused two production incidents. After the second incident, I shared some patterns I use for error handling in that type of code, and the next PR was significantly better."
Vague and unhelpful: "Sometimes code quality could be better."
The first version gives the engineer's manager something specific to work with. It also gives the engineer a specific pattern to address. The second version is true but useless.
If the situation is serious enough that you're concerned about the relationship, keep the focus on the work: "I want to give useful feedback rather than just positive feedback. The area I'd flag is..."
How to handle feedback for overperformers
The temptation when someone is excellent is to write superlatives. "The best engineer I've worked with." "Completely changed how our team operates." These feel meaningful, but they're not specific enough to cite in calibration.
Strong feedback for a high performer looks like strong feedback for anyone else: specific situations, specific behaviors, specific impact. The difference is that you have more impressive material to work with.
"She identified that our alerting setup was creating false positives that were training the on-call rotation to ignore pages. She redesigned the alert thresholds across 12 services, which she wasn't asked to do, and on-call pages dropped from roughly 15 per week to 2-3 in the month after the change. This work wasn't on anyone's roadmap. She found it, scoped it, and shipped it."
That's a calibration-ready EE argument. Your manager can cite it, and when the other managers in the room push back, there's specific evidence to answer with. This is how peer feedback actually helps someone. If you want to understand what an "Exceeds Expectations" case requires in calibration, the same principles apply: specificity and scope evidence are what distinguish an EE argument from a Meets argument.
When you haven't worked closely with someone
Don't write feedback you can't support. It's better to write a shorter, accurate response than a longer one built on limited knowledge.
If you've had limited interaction: "My direct collaboration with him has been limited this cycle, so I can only speak to what I've observed in team meetings. There, his debugging intuition stands out: he consistently asks the question that moves the investigation forward."
If you genuinely haven't worked with someone at all: it's acceptable to say so. "I haven't worked directly with her this cycle and don't have enough direct observation to provide useful feedback." That's more honest and more useful than invented praise.
What to avoid: writing positive feedback because declining feels awkward. Generic feedback that you can't support with evidence undermines the credibility of your other feedback.
Reciprocity and relationships
Engineers sometimes worry that honest feedback will damage professional relationships. The concern is real and worth taking seriously, but it helps to distinguish useful critical feedback from harmful critical feedback.
Useful critical feedback is specific, about observable behavior, focused on impact, and constructive: "This area affected the team's sprint planning. Here's what I'd suggest."
Harmful critical feedback is personal, about intent or character, and punishing: "He doesn't care about quality."
The first kind rarely damages relationships among engineers who understand that growth requires honest signal. The second kind is unprofessional regardless of whether it's accurate.
If you're writing feedback for someone you'll keep working with, and you have critical observations, it's worth asking yourself: have you shared this directly with them during the cycle? The best feedback conversation often happens before review season, not in a form submitted to your manager. The review is a summary, not a surprise.
At Netflix, peer feedback is signed. Your name is attached to everything you write. The Netflix performance review guide explains why that changes how engineers approach the 360 cycle there.
The time investment
Writing strong peer feedback takes longer than writing vague positive summaries. For a full review cycle, you might write feedback for 3-8 people, and doing it well might take 30-45 minutes per person.
That's worth it if you're writing feedback for people who genuinely affected your work. It's less worth it if you're writing feedback for people you barely know.
A useful heuristic: only nominate peers you can write specific, evidenced feedback about. If you're struggling to think of a concrete example for someone, that's a signal that they shouldn't be on your feedback list, or that you should be honest about the limited scope of what you observed.
Read the complete guide to writing your software engineer self-review for more on how self-assessments work in the same calibration context as peer feedback.
The engineers who write specific, evidenced peer feedback tend to receive it too. It sets a norm. CareerClimb helps you track the moments worth capturing as they happen, so when peer feedback season opens, you're working from real notes, not reconstructed impressions. Download CareerClimb



