CareerClimb
March 5, 2026 · 10 min read

Microsoft Connects performance review for engineers

Microsoft's performance review process has changed more dramatically over the past decade than that of most large tech companies. In late 2013, shortly before Satya Nadella took over as CEO, Microsoft eliminated the stack ranking system that had defined its review culture for years. Stack ranking graded employees on a forced curve: some percentage had to be rated poorly regardless of actual performance, which engineers widely described as creating a culture of competition rather than collaboration.

The replacement is Connects: a semi-annual, journal-based review system designed around continuous feedback rather than point-in-time assessments. Whether you're preparing for your first Connects cycle or trying to get more out of an existing one, this guide covers how the system works, what's actually evaluated, how calibration happens, and what strong performance documentation looks like at Microsoft.

How Connects works

Connects runs on a semi-annual cadence, with formal cycles typically mid-year (around January) and year-end (around July, aligned with Microsoft's fiscal year, which ends June 30). Between those cycles, the expectation is ongoing check-ins with your manager rather than waiting for a single review event.

The core mechanics:

Engineers maintain a private performance journal inside Microsoft's HR tools. The journal is yours: you control what you write, and you choose what to share with your manager. Its purpose is to capture accomplishments, reflections, and growth in real time rather than reconstructing everything from memory at review time.

Your manager also writes their own assessment of your work. This isn't just a reflection of your journal; it's a separate evaluation that draws from their direct observations, peer feedback, and how your work fits into the broader team context.

Before each formal cycle, Microsoft collects 360 feedback through a tool called Perspectives. You can nominate peers, partners from other teams, and stakeholders to give feedback on your impact and work. The feedback is structured around behavioral questions rather than free-form ratings, and it's intended to be anonymous.

The formal Connects discussion is then a 1:1 conversation with your manager where the journal, the 360 feedback, and your manager's assessment all come together. This isn't where ratings are finalized. Calibration happens after, in manager sessions where your manager presents your case alongside their other direct reports.

For background on how calibration rooms work and what your manager needs to defend your rating, read how performance review calibration actually works.

Microsoft's engineering levels

Level | Title | Notes
59–60 | Software Engineer (SDE) | Entry level, post-grad
61–62 | Software Engineer II | Mid-level; most junior engineers
63–64 | Senior Software Engineer | Senior; significant scope expected
65–67 | Principal Software Engineer | Org-level scope and influence
68–69 | Partner | Cross-org and company-level scope
70+ | Distinguished Engineer / Technical Fellow | Top of the IC track

The 62-to-63 transition, from Software Engineer II to Senior, is the most commonly discussed promotion bar among Microsoft engineers. It requires demonstrating Senior-level scope and impact in your current role, not just hitting tenure milestones.

The three evaluation dimensions

Microsoft's Connects system evaluates three core dimensions of your work. Understanding these matters because your self-assessment, your journal, and your manager's write-up all map to this framework.

Results is the first and most heavily weighted dimension. This is about the impact you delivered: what shipped, what changed, what improved. Strong Results documentation is specific and outcome-focused. Not "led the X project" but "led the X project, which reduced API latency by 35% and unblocked two downstream teams that had been waiting on this dependency for six weeks." In calibration, managers can defend a Results story that has concrete evidence; they can't do much with one that doesn't.

Growth is about your personal and professional development over the cycle. This dimension rewards engineers who are actively building new skills, taking on stretch work, and investing in their own trajectory. Common evidence: "took ownership of system design for the first time on this project," "completed the distributed systems course and applied the concepts in the caching layer redesign," or "volunteered to lead the team's incident retrospectives, which surfaced two process improvements now adopted by the org." Growth isn't just about skills acquired; it's about growth that had impact.

Inclusion and Culture is the third dimension. This covers how you contribute to team health, psychological safety, and collaborative dynamics. Engineers who find this dimension uncomfortable should know that documenting these contributions isn't bragging; the guide "Self-promotion at work is not bragging" explains the distinction. Evidence here: "ran structured onboarding sessions for two new engineers joining the team," "called out a problematic pattern in a design review in a way that changed how we run reviews going forward," or "volunteered for interview panels across four quarters despite a heavy project load." Some engineers underestimate this dimension. In calibration, managers presenting a strong Inclusion narrative get credit that purely technical reviews don't capture.

Perspectives: how 360 feedback works

Microsoft's Perspectives tool collects anonymous 360 feedback before each Connects cycle. You nominate your reviewers from a pool of peers, collaborators, and stakeholders. Your manager may also add reviewers independently.

The feedback is structured: reviewers respond to behavioral prompts about your collaboration, impact on shared work, and technical contributions. Because it's anonymous, you can't predict exactly what's been said before your Connects discussion, but the patterns in the feedback inform your manager's assessment.

A few things that influence Perspectives quality: engineers who write specific, evidence-based peer feedback get taken more seriously than those who write vague endorsements. "She delivered the auth service migration on time with minimal disruption" is more useful to calibrators than "great to work with." If you're nominating peers, choose people who can speak specifically to work you've done with them in the current cycle, not just people who like you. The guide on how to write peer feedback that actually helps in calibration covers the structure in detail.

How calibration works at Microsoft

Despite the elimination of formal stack ranking, Microsoft engineers consistently report that calibration still operates with informal distribution expectations. Your manager doesn't have to rank you relative to peers on paper, but the calibration sessions where managers align ratings across an org tend to produce outcomes that roughly follow a curve.

In practice, this means strong written documentation matters as much at Microsoft as at companies with explicit distribution targets. Your manager is your advocate in calibration. If your journal entries are vague or your Connects discussion covered things informally but didn't produce specific evidence, your manager will struggle to make a compelling case in the room.

The documentation question is especially important for engineers whose work is less visible. Infrastructure engineers, platform teams, and anyone who spends cycles on reliability or maintenance rather than user-facing features need to be especially deliberate about translating their work into outcomes that non-technical managers in the calibration room can understand.

Writing your performance journal

The journal works best as a running log rather than a document written in the week before your Connects cycle. Engineers who maintain it throughout the cycle end up with material that's specific, dated, and evidenced. Engineers who write it in the three days before the deadline produce reconstructed summaries that lack the precision calibration needs.

What to capture in real time: specific decisions you made and why, outcomes from projects as they ship, feedback received from stakeholders and peers, and work you did that wasn't formally scoped to you but that you took on because you saw the need. The last category is especially valuable for Growth and Results documentation.

When you're preparing for the formal Connects discussion, structure your journal entries around the three dimensions rather than chronologically. Under Results, summarize your highest-impact work with specific before/after states. Under Growth, identify two or three ways you stretched beyond what your current scope required. Under Inclusion, name specific contributions to team health or collaborative culture.

Read the complete guide to writing your software engineer self-review for more on structuring impact statements that work in calibration.

How Connects connects to promotions and compensation

Promotions at Microsoft are semi-annual, aligned with Connects cycles. The process requires your manager to build a promotion packet, typically including peer letters, evidence that you're operating above your current level, and your own documented impact. The packet goes through calibration and requires approval above your direct manager. (Stripe runs a structurally similar cycle with its own quota dynamics; Stripe's promotion and review process covers how that plays out in practice.)

The key condition: you need to be consistently demonstrating impact at the next level before the promotion is approved, not just occasionally. Microsoft's promotion bar is about sustained above-level scope, not one impressive quarter.

Compensation decisions follow Connects ratings through what Microsoft calls the reward review process. Your manager proposes comp adjustments based on your rating and the budget allocated to their team, and those proposals go through a calibration and approval process. Engineers report that high ratings (Results and Growth rated at or above expectations) correlate with meaningful bonus and RSU outcomes; ratings in the middle of the range produce more modest adjustments; low ratings risk minimal or no equity refreshes.

Common mistakes in Connects reviews

Writing journal entries that are too general. "Worked on the payments platform migration" is not a journal entry. "Led the database schema migration for the payments platform, affecting 4M active accounts. We completed the migration with zero downtime by using a staged rollout approach I designed. The cutover took 3 hours versus the 12-hour maintenance window originally planned." That's a journal entry.

Underweighting the Inclusion dimension. Engineers who document only Results and Growth, and leave Inclusion sparse, often lose ground in calibration to engineers with slightly weaker Results but stronger across-the-board evidence. A strong Inclusion narrative makes your manager's argument about you more complete.

Nominating peers who can't speak to recent work. If your strongest peer relationships are with people from a project two years ago, those are the wrong people to nominate for Perspectives. Nominate people who were directly involved in your current cycle's highest-impact work and who can write specific, evidenced feedback.

Treating the journal as a summary document. The journal's value is that it captures evidence close to the event, when details are fresh and dashboards still show the right data. If you write it as a summary in week 12, you've lost the precision that makes calibration arguments stick.

Skipping the Connects discussion preparation. The formal 1:1 with your manager before calibration is where you can shape the narrative they'll carry into the room. Come prepared with your three strongest Results examples, your clearest Growth story, and the Inclusion contributions you want credited. Your manager can only argue for what they know.


Microsoft's shift from stack ranking to Connects improved the experience for most engineers. But the work of documenting your impact in real time didn't go away; it just changed form. CareerClimb helps you build a running record of your work across the Connects dimensions, so when review season opens, your journal is already there.

Download CareerClimb
