How Promotion Panels Work at Big Tech Companies

You hit submit on your self-review and wait. Weeks pass. Then the result comes back: not this cycle.
What happened between submit and that message is the part most engineers never see. At every major tech company, your promotion case goes through some form of panel, committee, or calibration group. The structure varies. The dynamic is the same: a room full of people who mostly don't know you will spend a few minutes deciding whether you move up.
Understanding the specific system at your company changes how you prepare. A Google promotion committee reads packets cold. An Amazon Organizational Leadership Review (OLR) starts with silent reading of a narrative document. Meta's Performance Summary Cycle (PSC) calibrates by level with peer managers comparing cases. Microsoft's talent discussions involve "aunts and uncles" who challenge your manager's pitch.
These are not interchangeable processes. Each one rewards different kinds of evidence and gives your manager a different amount of influence over the outcome.
How each company's panel works
Google: the promotion committee
Google's promotion system was originally modeled after the university tenure process. Before 2022, an independent committee of senior engineers and managers who had never worked with the candidate reviewed the promotion packet and voted. The candidate's direct manager had no vote. Peer reviews carried weight equal to or greater than the manager's statement.
In May 2022, Google rolled out Googler Reviews and Development (GRAD) and shifted promotions to be primarily determined by management through group calibration. Under the current system, your manager writes a next-level assessment, summarizes peer feedback (the committee no longer sees raw feedback), and presents your case to a calibration group of peer managers.
Google's process puts unusual weight on written evidence, but committee dynamics still decide the close calls. Susnata Basak, a former Google engineer who served on promotion committees, described how strong voices can tilt a borderline case: if several committee members with similar opinions speak first, the decision usually goes their way unless someone on the other side pushes back hard. A split vote means borderline, and borderline almost always means no.
Promotion windows: March (main cycle) and September (off-cycle). In 2023, Google announced it was planning for fewer promotions into L6 and above following the January layoffs of roughly 12,000 employees.
Meta: calibration by level
Meta runs its PSC once per year. The process is structured around level-based calibration: all E3s are discussed together, all E4s together, and so on. Your manager presents your case with a proposed rating to a group of peer managers and team leads who compare contributions across teams.
Who is in the room:
- Your manager — the primary advocate. If your manager does not support your promotion, your chances are close to zero.
- Peer managers and team leads — comparing achievements across teams for consistency
- Senior individual contributors — providing technical perspective on scope and depth
- Directors and leadership — reviewing aggregated ratings at a higher level
Each case gets roughly three to five minutes. The manager gives a one-to-two-minute verbal summary, followed by questions and comparison to previously discussed engineers. After initial ratings are set, they go up the chain for VP approval, where budget constraints can force ratings up or down.
Meta does not use a formal forced curve, but managers are required to rate 10-16% of their reports at Meets Most (MM) or below. The practical effect is that the top ratings — Greatly Exceeds (GE) and Redefines Expectations (RE) — are capped, which means promotion cases are competing for limited slots just like everywhere else.
The terminal-level problem: E5 is the terminal level at Meta. Only about 15% of Meta engineers go beyond E5 to E6. Promotion to senior requires demonstrating impact at a significantly larger scale, and the bar rises every cycle as more engineers compete for fewer spots.
Amazon: the promo doc and OLR
Amazon's promotion process centers on a written narrative document. The promo doc is a comprehensive case built on STAR-format narratives (Situation, Task, Action, Result) mapped to Amazon's 16 Leadership Principles. For SDE1 to SDE2, the doc runs five or more pages. For SDE2 to SDE3, expect fifteen or more.
Originally the manager wrote the promo doc without the employee's input. That has changed. The individual now writes their own narrative with manager guidance. Amazon even has an internal tool called Doc Bar Raiser Reviewer, where volunteer reviewers pressure-test promo docs before they go to the OLR.
What happens in the OLR:
- The meeting begins with silent reading of a six-page narrative outlining the agenda
- Managers present employees with proposed ratings
- Executives calibrate ratings across the organization
- Other managers and Bar Raisers challenge or support individual candidates
- Anyone in the meeting can deny a promotion
Bar Raisers are Amazon's quality control layer: highly experienced employees who act as impartial evaluators. Their job is to make sure promotions only go to people who actually raise the performance bar. The OLR runs twice per year.
The hidden rating system: Amazon's external-facing Forte system gives qualitative ratings like Exceeds High Bar and Meets High Bar. But the internal OLR ratings are more granular — Top Tier (TT, roughly 20%), High Value 3 (HV3, roughly 15%), High Value 2 (HV2, roughly 25%), High Value 1 (HV1, roughly 35%), and Least Effective (LE, roughly 5%). You generally need TT or HV3 to get promoted. Employees may receive positive feedback through Forte while their OLR rating tells a different story entirely.
Microsoft: talent discussions and "aunts and uncles"
Microsoft moved away from formal stack ranking years ago, but the calibration process still involves comparison. Managers prepare a one-page summary for each employee using data from Connects (Microsoft's ongoing feedback documents) and Perspectives (peer reviews). These summaries are presented in annual talent discussions.
The escalating approval chain:
- Up to L62 — direct manager has primary authority
- L62 to L64 — skip-level manager (typically an L67 Group Engineering Manager) must approve
- L65 and above — VP or CVP approval required, with demonstrated cross-organizational impact
The "aunts and uncles" system is where Microsoft's process gets interesting. When your manager presents your case at L63 and above, their peers — other managers at the same level — actively challenge the recommendation. These peer managers have no stake in your success and every reason to protect the integrity of the bar. If they don't find the case convincing, it stalls.
On Team Blind, a verified Microsoft employee described the pattern: "I've been at L63 for almost 4 years. I'm told I'm 'basically ready for L64.' Then rewards come and suddenly it's headcount, calibration, timing — the same excuses every cycle." Cross-band promotions at Microsoft (62 to 63, 64 to 65) typically take two to three years, and the L63-to-L64 jump is where many engineers get stuck indefinitely.
What panelists are actually evaluating
The specific criteria vary, but across companies the panel is not asking whether you're a good engineer. Everyone being discussed is a good engineer. The questions are structural:
- Can the presenter articulate what this person did at the next level? Not generally good work — specific examples of scope, complexity, and impact that match the leveling bar. If the manager can't answer follow-up questions with concrete details, the case weakens in real time.
- Does anyone else in the room recognize this person? When another manager says "I've seen her work — she drove that cross-team initiative effectively," the case gains a second voice. When only the presenting manager speaks and the room is silent, they're asking strangers to trust their word alone.
- Does the candidate meet the bar across all dimensions? Big tech companies evaluate across multiple areas — technical depth, leadership, communication, scope. Excelling in one but lagging in another is one of the most common rejection reasons.
- Is there budget for this promotion right now? Data from Pave, analyzing 245,000 employees across over 1,000 companies, shows the average annual promotion rate in tech is about 14%. At companies with declining headcount, it drops to 11.5%. The structural ceiling is real, and it shifts with economic conditions.
Why borderline cases almost always lose
The Pragmatic Engineer's analysis of calibration systems found that committee-driven decisions are almost always conservative. Most "maybe" cases become a "no." All it takes is one person voicing a concern for the group to become skeptical.
Here is what pushes a case from "promote" to "borderline," and why borderline almost always resolves to rejected:
Impact framed as activity, not outcomes. "Built the new API endpoint" is activity. "Designed and shipped the API gateway that reduced integration time from weeks to days" is an outcome. Panels filter for the second. The first sounds like the current level doing their job.
Strongest evidence too recent. If your biggest project shipped last month, the panel doesn't have enough signal to assess lasting impact. Panels want sustained evidence over multiple quarters, not a final sprint before the window closes.
Your manager can't survive the follow-up questions. Other panelists will probe: "How is this different from what an L5 would do?" "Who else was involved?" "What was the scope of the decision-making?" If the manager doesn't have specific answers prepared, the case deflates in the room. Making sure your manager has that language ready matters more than most engineers realize.
Behavioral concerns surface. Susnata Basak described a Google case where an engineer had strong impact and scope but consistently refused tasks that wouldn't help his promotion case, forcing teammates to pick up less visible work. The committee voted unanimously against promotion. Technical strength does not override concerns about how you operate on a team.
No second voice in the room. This is the quiet killer. When your presenting manager is the only person who can speak to your work, the case rests on one person's credibility. When a second or third manager corroborates with firsthand experience, the case becomes much harder to reject.
How to make your case land when you're not there
You cannot be in the room. But you can shape what's in the room before the door closes.
Give your manager the material, not just the relationship. Your manager needs to defend your case under time pressure with specific, outcome-framed evidence. A vague relationship where they "know you're doing great work" does not survive a five-minute panel slot. Concrete wins with metrics, scope descriptions, and business impact do. Building the documented case your manager can take into that room is the single best thing you can do before review season.
Build recognition beyond your direct chain. Cross-team projects, technical reviews where other managers see your output, and work that touches adjacent teams all create the second voice your case needs. Not networking for its own sake. The goal is making your work visible to the people who will be in the room when your name comes up.
Understand the specific system you're in. Google's committee rewards written artifacts and quantified impact above everything else. Meta's calibration rewards your manager's persuasiveness and your rating trajectory. Amazon's OLR rewards detailed narratives mapped to Leadership Principles. Microsoft's tiered approval system rewards skip-level relationships at higher levels. Optimizing for the wrong system wastes effort. How calibration works at your company is the foundation — understanding what the promotion panel specifically needs to see is the next step.
Ask what the panel objected to last time. If you've been through a cycle and didn't get through, the most valuable question is what specifically the panel said. Not "you're not ready yet" — that's not actionable. Which evidence was insufficient? What level gap did they identify? What would they need to see next time? Push until you get something concrete you can build toward.
The panel meets without you. The case it evaluates was built before the meeting started. Every engineer who's gotten through a tough promotion panel describes the same realization: they stopped treating the process as something that would work itself out, and started treating the months before it as the actual game. And the verdict isn't just pass or fail — the strength of the room's conviction carries consequences for comp, perception, and the next cycle that most engineers never see.
The promotion panel meets without you. CareerClimb makes sure your case is ready when it does — the app helps you capture wins as they happen, frame them as business outcomes, and build the documented evidence your manager needs to make your case land in that room. Download CareerClimb and start building your case before the next panel convenes.