Apple performance review process for software engineers

You scored a 9. Three Exceeds Expectations (EE) ratings across Teamwork, Innovation, and Results. Your manager gave you a strong review summary. You're at Individual Contributor Technical 4 (ICT4), Apple's senior engineer tier, and by any measure, you had a good cycle.
Then promotion season passed without a conversation. Same level. No change.
This pattern shows up repeatedly in Apple engineer discussions on Team Blind: the review score and the promotion track are not the same thing. Engineers who treat them as equivalent spend years wondering why strong performance isn't moving them forward. This guide covers how Apple's performance review cycle actually works, what the ratings signal in practice, and what separates the engineers who advance from those who accumulate strong reviews at the same level for years.
Apple's performance review structure at a glance
| Element | Details |
|---|---|
| Review frequency | Annual |
| Comp decisions | September / October, at fiscal year end |
| Rating system | Three axes, each scored 1–3; composite total 3–9 |
| Self-review tool | MyPage (Apple internal); 2,500-character limit |
| Peer feedback | Automatic from immediate teammates + up to 5 you request |
| Promotion decision | Manager calibration; no formal nomination required |
Apple's ICT levels for software engineers
Apple's engineering ladder uses the ICT system. The scope expectations at each level are distinct, and the gap between some levels is harder to cross than it first appears.
| Level | Title | Notes |
|---|---|---|
| ICT2 | Software Engineer | Entry level; typically recent graduates |
| ICT3 | Software Engineer | Mid-level; 2–4 years at Apple |
| ICT4 | Senior Software Engineer | Terminal level for many; 10+ year tenures are common |
| ICT5 | Staff Software Engineer | Requires demonstrated cross-team scope and influence |
| ICT6 | Principal Software Engineer | Organizational impact; very few reach this level |
ICT4 is where most Apple engineers land long-term. Apple's culture genuinely supports staying there for years, and most engineers who are there aren't trying to move past it. But if ICT5 is your goal, understanding why the jump is harder than it looks is where to start.
The ratings system: three axes, one composite score
Apple rates performance on three dimensions each cycle. Every axis receives a score of 1, 2, or 3:
| Score | Rating |
|---|---|
| 3 | Exceeds Expectations |
| 2 | Meets Expectations |
| 1 | Needs Improvement |
The three scores combine into a composite ranging from 3 to 9. Here's what the composites actually signal:
| Composite | What it typically means |
|---|---|
| 9 | Exceptional; rare in any given cycle |
| 8 | Very strong |
| 7 | Above average; typical for high performers |
| 6 | Average; where most engineers land |
| 5 | Warning territory; may trigger a formal performance conversation |
| 4 or below | High risk of a formal exit process |
The three axes Apple scores engineers on:
| Axis | What it covers |
|---|---|
| Teamwork | Collaboration quality, process improvement, support within and across teams |
| Innovation | New solutions identified, opportunities surfaced in your org, contribution to technical direction |
| Results | Delivery quality and timeliness, measurable impact on user experience and product |
Getting Meets Expectations (ME) on all three keeps you clear of a formal performance process. Scoring Needs Improvement (NI) on any single axis is a meaningful signal. Consistently landing EE everywhere is a strong review, but it is not, by itself, a promotion case. That last point trips up a lot of ICT4 engineers who are doing genuinely good work.
How the review cycle actually works
Self-review: the 2,500-character constraint changes how you write
Apple engineers submit their self-reviews through MyPage, an internal tool with a hard limit of 2,500 characters. That's fewer than 500 words to summarize a full performance cycle across three axes.
That constraint changes how you have to write. No room for warm-up paragraphs or general statements about approach. Every sentence needs specific evidence of impact.
The difference between a self-review that survives calibration and one that doesn't comes down to this: does it say what happened, or does it say what you did? "Led the search backend migration" describes activity. "Led the search backend migration, cutting p99 latency by 30% and clearing a cross-team dependency that had blocked another team for two quarters" gives your manager something concrete to defend. The goal is to make the manager's job easier when they're in a room with other managers explaining why your rating is what it is.
For more on how to structure evidence-based self-reviews, the software engineer self-review guide covers the approach in detail.
Peer feedback: the part most engineers underinvest in
Your immediate teammates are automatically asked for feedback about you; this happens without any action on your part. You can also request feedback from up to five peers outside your immediate team.
Most engineers miss this one. Your manager weighs peer feedback heavily when forming your rating. The automatic feedback from your team is fixed. The five external peers you select are your choice, and that choice is strategic.
Apple engineers on Team Blind and career forums keep describing the same mistake: requesting feedback from whoever is easiest to ask, rather than whoever saw the most relevant work. Generic positive feedback ("great collaborator, always helpful") does not give a calibration committee specific material to work with.
The guide on writing peer feedback that actually holds up in calibration explains what specific, evidenced feedback looks like, and why the people you nominate should be writing in exactly that format.
The feedback you receive is loosely anonymized when shared with you. Engineers report being able to identify who wrote what from writing style and content. What matters more: the manager reading your packet before calibration is not working from anonymized data.
Manager assessment and calibration
After self-reviews and peer feedback close, your manager writes their own assessment. Calibration follows.
Apple's calibration is a meeting where managers discuss ratings across their teams. One structural aspect worth knowing: formal nomination is not required for promotion. The calibration committee can advance an engineer without explicit advocacy from their manager, if the committee independently believes the case is there.
In practice, having a manager who understands your specific work and can articulate concrete outcomes is the more reliable path. The managers who hold their ground in calibration are the ones who can point to specific evidence from your materials. The ones who struggle fall back on general statements about strong performance.
Understanding what happens in calibration meetings, and how ratings get adjusted in either direction, is covered in the performance review calibration guide.
Results and compensation
Review outcomes arrive in September or October alongside compensation decisions. Annual salary increases are tied directly to your composite score, with engineers reporting increases in the 0–8% range. Restricted Stock Unit (RSU) refreshers are also decided in this window.
Promotion decisions and compensation adjustments happen in the same period but are separate conversations. A strong review improves your refresh. Promotion requires a different discussion about scope.
What the process looks like officially vs. what engineers report
| Official framing | What Apple engineers describe in practice |
|---|---|
| Three axes rated independently | Manager perception shapes scoring, especially on Innovation, which has no clean metric |
| Peer feedback is anonymized | Engineers frequently identify who wrote what from writing style and content |
| No formal nomination needed for promotion | Promotion still almost always requires active manager advocacy in the calibration room |
| Review reflects current-cycle performance | ICT5 decisions are really about demonstrated cross-team scope, not this cycle's scores alone |
The system is fair in structure and manager-dependent in execution. Engineers who navigate it best tend to work with that reality rather than waiting for the process to recognize them on its own.
Common mistakes Apple engineers make
- Writing the self-review as a task list. "Led the payments API refactor" is a task. "Led the payments API refactor, reducing p99 latency by 45% and enabling the checkout team to launch a feature they'd been blocked on for a quarter" is evidence. The second version is what your manager uses in the calibration room. Given the 2,500-character limit, every sentence has to earn its place.
- Treating the peer feedback list as an afterthought. Many engineers request feedback from the same people each cycle because it's convenient. If those people write vague reviews, that pattern repeats into calibration. Before submitting your list, think about who actually saw your most significant work this cycle, especially any cross-team impact, and whether they're on it.
- Assuming strong scores translate to promotion progress. A composite of 7 or 8 means you're performing well at your current level. It does not mean you're operating at the next level. ICT5 requires cross-team scope and influence, not just doing ICT4 work at a high standard. Calibration committees know the difference.
- Reconstructing the self-review from memory. With 2,500 characters and no filler allowed, engineers who write strong self-reviews are the ones who kept a running log throughout the cycle: a notes document, a weekly summary, anything capturing what shipped and what changed. Self-review time becomes editing. Engineers who start from memory are compressing months of work into a single sitting.
- Skipping the explicit promotion conversation with your manager. If ICT5 is your goal, that needs to be a direct, documented discussion, ideally at the start of the cycle. What does ICT5-level work look like on this team? What's missing from what you're doing now? If that conversation hasn't happened, the calibration room is not where it starts.
What engineers who advanced at Apple actually did
They built cross-team scope before asking for the title
The engineers who reached ICT5 didn't wait to be assigned cross-team work. They identified where their team's work connected to other orgs and volunteered to own that coordination. By the time any promotion conversation happened, they had 12–18 months of cross-team impact behind them. The promotion discussion was confirming what they'd already been doing, not arguing for a bet on the future.
They tracked impact throughout the year
Given the 2,500-character limit, the engineers with the strongest self-reviews started tracking from the beginning of the cycle. A simple running notes document, updated weekly or after major milestones, capturing what shipped and what it changed. At self-review time, the question became "which evidence to include," not "what can we still remember?"
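The running log doesn't need tooling, but if you want to automate it, a minimal sketch might look like the following. The file name and entry format are arbitrary choices, not anything Apple-specific; the only number taken from the process itself is the 2,500-character MyPage limit, which the helper uses to check how much budget a self-review draft has left.

```python
from datetime import date

# Hard limit reported for Apple's MyPage self-review field.
MYPAGE_CHAR_LIMIT = 2500


def log_win(path: str, summary: str, impact: str) -> None:
    """Append a dated entry to a running wins log (file name is arbitrary)."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"{date.today().isoformat()} | {summary} | {impact}\n")


def chars_remaining(draft: str) -> int:
    """How much of the 2,500-character budget a self-review draft has left."""
    return MYPAGE_CHAR_LIMIT - len(draft)


# Example: log a win in the evidence format the article recommends
# (what happened and what it changed, not just the activity).
log_win(
    "wins.log",
    "Led search backend migration",
    "p99 latency down 30%; cleared a two-quarter cross-team blocker",
)

draft = "Led the search backend migration, cutting p99 latency by 30%."
print(chars_remaining(draft))
```

Updating the log weekly turns self-review season into an editing pass: pick the strongest entries, trim them against the character budget, and drop the rest.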
They chose peer reviewers deliberately
Rather than defaulting to their closest colleagues, they thought about who had seen work that demonstrated ICT5-relevant scope: engineers from adjacent teams whose projects they'd unblocked, Product Managers (PMs) whose launches they'd influenced, Tech Leads (TLs) who'd watched them drive cross-team alignment. The peer list shapes the evidence your manager has going into calibration.
They made ICT5 goals explicit with their manager early
Multiple engineers who reached ICT5 describe having direct conversations about scope at the start of the cycle. Not "I want to get promoted," but "what would ICT5-level work look like for me on this team, and where are the gaps in what I'm doing now?" That framing made it possible to spend the cycle closing specific gaps rather than hoping the right work would surface on its own.
Timeline expectations for Apple engineers
The following reflects patterns described by Apple software engineers. Individual timelines vary by team, manager, org headcount, and available slots at the target level.
| Level jump | Typical range | What usually precedes it |
|---|---|---|
| ICT2 to ICT3 | 1–2 years | Consistent delivery, independent ownership of defined work |
| ICT3 to ICT4 | 2–3 years | Technical depth, team-level impact, growing ownership scope |
| ICT4 to ICT5 | Highly variable | Cross-team scope and influence; many engineers spend 5–10+ years at ICT4 |
| ICT5 to ICT6 | 4+ years | Organizational impact, influence on technical direction at org scale |
ICT4 to ICT5 is where the standard timeline breaks down. The engineers who move through it faster are the ones who understood early that the ICT5 bar is about scope, not about performing ICT4 work at a higher level. If you've been passed over after building what you believed was a strong case, how to recover from a failed promotion attempt covers the steps for reassessing and rebuilding.
Apple's emphasis on demonstrated scope before the promotion conversation is similar to how Netflix approaches leveling. The Netflix performance review guide describes a parallel pattern where E5 engineers who reach E6 had typically been operating at that scope for 12 to 18 months before any formal discussion began.
Apple runs one review cycle per year. That's one window to put documented evidence in front of the calibration committee. CareerClimb helps you track your wins and impact throughout the cycle so you're not compressing a year's worth of work into 2,500 characters at the last minute. Download CareerClimb