CareerClimb

March 5, 2026 · 10 min read

Amazon performance review process for software engineers

Amazon's performance review process is unlike most other tech companies. The Leadership Principles aren't a cultural backdrop; they're scored criteria that directly influence your rating. The stack ranking is real, enforced by fixed distribution targets. And the review system itself has gone through significant changes: in 2025, Amazon moved to a new Forte-based format requiring engineers to submit specific accomplishments rather than broad self-assessments.

If you're preparing for an Amazon review cycle or trying to understand why your ratings are what they are, this guide covers the full process: how the Organizational Leadership Review (OLR) works, the rating tiers and their distributions, what Forte expects from you, how Leadership Principles are evaluated, and what separates a strong Amazon self-review from a weak one.

Amazon's review structure

Amazon runs two OLR cycles per year, typically in Q1 and Q3. Each cycle includes:

  1. Self-review submission through Forte
  2. Peer feedback collection
  3. Manager calibration
  4. OLR committee discussion
  5. Final rating and compensation decisions

The OLR is Amazon's version of the calibration process. Your manager builds your case and presents it to a committee of peer managers. The committee's discussions determine final ratings across the org, with fixed distribution targets enforced. Your manager is your sole advocate in that room.

For context on how calibration rooms work and why your self-review is your manager's script, read how performance review calibration actually works.

Amazon's levels

| Level | Title | Notes |
| --- | --- | --- |
| L4 | SDE I | New grad entry point |
| L5 | SDE II | Mid-level; the majority of SDEs |
| L6 | SDE III (Senior) | Significant promotion bar |
| L7 | Principal SDE | Org-level scope expected |
| L8 | Sr. Principal SDE | Company-wide scope |
| L10 | Distinguished Engineer | Top of the individual contributor track |

The L5 to L6 promotion is the most talked-about bar at Amazon, equivalent to the L5 to L6 jump at Google. At L6 and above, Amazon expects engineers to drive outcomes beyond their immediate team. The Google performance review guide covers how that equivalent bar shows up in Google's calibration process.

Rating tiers and distribution

Amazon uses a five-tier stack-ranked system with fixed distribution percentages enforced for teams above a certain size:

| Rating Tier | % of Engineers | What It Means |
| --- | --- | --- |
| Top Tier (TT) | ~20% | Exceptional performance and LP adherence |
| Highly Valued 3 (HV3) | ~15% (est.) | Strong performance, above most peers |
| Highly Valued 2 (HV2) | ~25% (est.) | Solid performance, meets a high bar consistently |
| Highly Valued 1 (HV1) | ~35% (est.) | Meets expectations, reliable contributor |
| Least Effective (LE) | ~5% | Below the bar; typically triggers an improvement plan |

The stack ranking is real and enforced. If a strong performance year produces calibration results in HV1 instead of HV2, that's not necessarily a reflection of your work. It can reflect the distribution math applied across your org. Engineers who join high-performing teams sometimes find it harder to land HV2 or above simply because the competition is denser.
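To make the distribution math concrete, here's a minimal sketch of how fixed percentages translate into per-tier headcounts. The percentages are the rough estimates from the table above, not official targets, and real calibration rounds across sub-teams rather than applying one clean formula:

```python
# Approximate rating-tier targets (estimated percentages; Amazon does not
# publish exact numbers, and they vary by org and cycle).
TARGETS = {
    "Top Tier": 0.20,
    "HV3": 0.15,
    "HV2": 0.25,
    "HV1": 0.35,
    "Least Effective": 0.05,
}

def tier_headcount(org_size: int) -> dict[str, int]:
    """Expected number of engineers landing in each tier for an org of a given size."""
    return {tier: round(org_size * pct) for tier, pct in TARGETS.items()}

# For an org of 100 engineers, roughly 5 will receive Least Effective
# and roughly 35 will land in HV1, regardless of absolute performance.
print(tier_headcount(100))
```

The takeaway: in an org of 20, the math forces roughly one LE and seven HV1 ratings every cycle, which is why the same work can rate differently on different teams.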

The Leadership Principles are evaluated separately. Only 5% of engineers receive a "role model" grade on LP adherence, regardless of overall performance.

The Forte system

In 2025, Amazon formalized the Forte system for self-reviews. The old format included broad questions about your strengths and contributions. Forte replaced that with a specific requirement: submit 3-5 accomplishments, each showing concrete impact on a project, goal, or initiative.

Each accomplishment should cover what you did, what the result was, and optionally what risks you took or what innovation was involved (Amazon explicitly counts unsuccessful innovation as a positive signal, not a negative one).

The Forte submission feeds directly into what your manager says in OLR calibration. Vague accomplishments give your manager nothing to cite. Specific ones with numbers, scope, and clear LP connection give them the material to build your case.

Leadership Principles in the review

Amazon's 16 Leadership Principles aren't just culture; they're a scored dimension of your OLR review. Starting in the mid-2025 cycle, LP adherence became a formal criterion alongside performance and potential, combining into an Overall Value (OV) score.

The principles most commonly cited in strong engineering reviews:

Customer Obsession: not just building what was asked, but investigating whether it was the right thing to build.

"Before starting the rebuild, I ran three rounds of user interviews with 12 internal teams, which changed the feature set significantly and avoided a $150K rebuild we would have had to redo in Q3."

Ownership: taking responsibility beyond your assigned scope.

"When the incident revealed a gap in our alerting coverage, I didn't wait for a ticket. I audited all 40 services in our domain and fixed the coverage gap across the board."

Invent and Simplify: finding simpler solutions to complex problems, especially ones that eliminate complexity permanently.

"Replaced 8,000 lines of legacy retry logic with a 200-line implementation using standard backoff patterns, reducing on-call incidents related to that system by 90%."

Dive Deep: engaging with technical details rather than delegating investigation.

"When latency spiked in the recommendations service, I traced the root cause to a specific query pattern introduced three months earlier. Most of the team had assumed it was a network issue."

Deliver Results: shipping things that matter, on time, even through obstacles.

"Despite the scope change in week 6, we delivered the migration by the Q2 deadline by cutting three non-critical features that we had flagged as optional in the original design doc."

You don't need to explicitly name the LP in your Forte submission. But you should write your accomplishments so that your manager can clearly map them to principles. If every accomplishment is just "shipped feature X," there's nothing to cite under Ownership or Dive Deep.

The OV score and what it means

The OV score combines three dimensions:

  • Leadership Principle adherence (the LP-specific evaluation)
  • Performance (output and impact against your goals)
  • Potential (trajectory, growth, and expected future contribution)

The OV score influences where you land in the distribution and what happens to your compensation and promotion eligibility. Engineers in HV1 are typically at risk of stagnation in total compensation. Engineers in HV3 and TT are the pool from which promotions are drawn.

Writing strong Forte accomplishments

The format that works: What you did, what changed as a result, and how large the impact was.

Weak: "Led the migration of our billing service to the new infrastructure."

Stronger: "Led the migration of our billing service to the new infrastructure under an aggressive deadline, completing it three weeks before the contractual end-of-support date. The migration affected 4 downstream services and 8M active accounts. No service interruptions during cutover."

The difference isn't length. It's specificity. The second version gives your manager two things to say in OLR: the deadline pressure (Deliver Results) and the scale of impact.

On writing for LPs specifically: the accomplishment doesn't need to say "I demonstrated Ownership here." It should just describe what you did with enough detail that the LP connection is obvious. "I identified the problem before anyone else knew it existed and fixed it end-to-end without being asked" is Ownership. You don't need to label it.

For more on structuring impact statements, read the complete guide to writing your software engineer self-review.

How reviews connect to promotions

Amazon promotions are manager-driven. Your manager nominates you, builds your promotion case, and presents it in the same OLR calibration process. There is no promotion committee separate from OLR: the calibration process is the promotion process.

The bar for each level is defined by what Amazon expects of engineers already at that level. To get promoted to L6, you need to demonstrate that you've been consistently operating at L6 scope for a sustained period, not just occasionally. "Ready now" isn't enough. You need evidence across multiple cycles.

Timing matters. Promotion nominations happen within the OLR cycle, and an HV3 or TT rating is typically a prerequisite. Getting HV1 in back-to-back cycles makes a promotion case very difficult to build, regardless of tenure. If you've been nominated and the committee didn't approve it, how to recover from a failed promotion attempt walks through what to do next.

Common mistakes in Amazon reviews

Writing at the wrong level of abstraction. "Contributed to Q3 roadmap deliveries" is not a Forte accomplishment. Three specific projects with outcomes are.

Treating all accomplishments as equal. Amazon's review process rewards depth on a few high-impact items more than breadth across many small ones. If you have eight things you could mention, pick the three or four with the largest scope and document those thoroughly.

Forgetting to write for LP alignment. If your manager can't point to specific LP evidence in your Forte submission, they'll struggle to argue for high LP scores in calibration. Every accomplishment should have at least one clear LP it demonstrates.

Leaving out the context. "Reduced service latency by 40%" is weaker than "Reduced service latency on the checkout service by 40% (from 850ms p99 to 510ms), which eliminated the leading category of customer-facing timeout errors during peak traffic." The second version tells the calibration committee why it mattered.

Ignoring unsuccessful work. Amazon explicitly values risk-taking, even when it fails. If you led an initiative that didn't pan out but generated learnings, put it in your Forte review. Bias for Action and Invent and Simplify apply to attempts, not just wins. Writing a self-review after a rough cycle covers how to frame cancelled projects and missed goals honestly without letting them undermine your case.

The Least Effective rating

An LE rating typically triggers a performance improvement plan (PIP). Amazon doesn't have a formal up-or-out timeline like Meta (which has specific months at level before action), but engineers receiving LE or repeated HV1 ratings in competitive orgs face real pressure to either improve or transition out.

The LE distribution is fixed at 5%, which means in any review cycle, approximately 1 in 20 engineers will receive it regardless of absolute performance. In a high-performing team, HV1 may be genuinely strong work. In a different org, the same output might be HV2 or HV3. Stack ranking makes team context matter.


Amazon's review process rewards engineers who do high-impact work, document it in terms of Leadership Principles and specific outcomes, and give their managers strong material to work from in calibration. CareerClimb helps you log wins and LP-connected evidence in real time, so when Forte opens, you're working from documentation rather than memory. Download CareerClimb
