Code Reviews - The Road to Learning

Here's an unpopular opinion: the code review is the single most valuable learning tool available to a working software engineer, and most teams treat it like a rubber stamp.

I've been on teams where PRs sat for a week, got a thumbs up with no comments, and got merged. I've been on teams where reviews were so brutal that people dreaded opening GitHub notifications. Both are failures. Neither is what code review is actually for.

Done right, a code review is a 15-30 minute investment that teaches the reviewer something, teaches the author something, and makes the codebase better. Done right, consistently, across a team — it builds shared understanding, catches security issues early, and creates engineers who are genuinely better at their craft.

Let me show you what "done right" actually looks like.

What Code Review Is Actually For

Before the how, let's be clear on the why — because most people are fuzzy on it.

Code review is not primarily for catching bugs. Your test suite does that. Code review is not for proving you're smarter than the author. Code review is not for enforcing your personal style preferences.

Code review is for:

  1. Knowledge transfer — ensuring the team understands what was built and why.
  2. Catching logic errors, edge cases, and security issues that tests don't cover.
  3. Maintaining consistency with the team's established patterns and standards.
  4. Mentorship — growing junior engineers through feedback that explains reasoning, not just outcome.

When you sit down to review a PR with these goals in mind, the whole thing feels different. You're not policing, you're collaborating. That mindset shift changes the quality of everything you write.

How to Give a Good Review

Be Kind, Be Specific, Explain Why

These three things together are the foundation of a review that actually helps someone.

Be kind. The person on the other end of your review is a human being who spent time on this. Even if the code has problems — especially if the code has problems — lead with the assumption that they made reasonable decisions with the information they had.

Be specific. Vague feedback is useless at best and demoralizing at worst.

Bad: This function is too complex.

Better: This function is doing three different things — parsing the input, validating the data, and writing to the database. If we split it into separate functions, we get better testability and the next person reading this code will understand it faster.

Explain why. This is the one most people skip and it's the most important for learning. If you just say "change X to Y," the author may comply without understanding. If you explain why Y is better than X, they carry that knowledge into every future function they write.
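To make the "doing three things" comment above concrete, here's a minimal sketch of the refactor it suggests. The names (`parseOrder`, `validateOrder`) and the order format are hypothetical, invented for illustration; the point is that each step becomes independently testable:

```typescript
// Hypothetical refactor: a createOrder() that parsed, validated, and wrote
// to the database in one body becomes three small, single-purpose functions.
type Order = { sku: string; qty: number };

// Step 1: parsing only — turns raw input into a structured value.
function parseOrder(raw: string): Order {
  const [sku, qty] = raw.split(":");
  return { sku, qty: Number(qty) };
}

// Step 2: validation only — rejects bad data, no I/O.
function validateOrder(order: Order): Order {
  if (!order.sku) throw new Error("missing sku");
  if (!Number.isInteger(order.qty) || order.qty <= 0) throw new Error("bad qty");
  return order;
}

// Step 3 (saveOrder) would do the database write; omitted here since the
// first two steps are the ones you can now unit-test without a database.
```

Each function can be tested in isolation, and the next reader sees the three responsibilities at a glance.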

Use Prefixes to Set Context

A technique I picked up from a senior engineer years ago and have used ever since: prefix your comments with intent.

  • nit: — Minor style thing, take it or leave it, won't block merge.
  • suggestion: — I think this could be better, but I won't fight you on it.
  • question: — I'm genuinely confused, help me understand this.
  • issue: — This needs to be addressed before we merge.
  • security: — This is a security concern, needs discussion.

This practice eliminates ambiguity. The author knows exactly how urgent each comment is and can prioritize accordingly. It also stops reviewers from treating every nitpick like a blocking concern.

Ask Questions, Don't Just Make Statements

Sometimes you don't understand why the author made a choice. Before you flag it as wrong, ask.

Why did you choose to handle the error here rather than propagating it up? I want to understand the intent before suggesting a change.

Half the time, they had a good reason you didn't see. The other half, the question prompts them to realize they didn't think it through — and they'll fix it themselves without you having to dictate the solution.

How to Receive a Review

The ego thing is real. You spent hours on this code. Someone just left eleven comments on it. Your first reaction might be defensive. That's human. But here's what matters: defensive responses kill the learning.

Say thank you. Not performatively. But a quick "good catch, I missed that" builds trust and signals you're a collaborator, not someone defending territory.

Ask for clarification when something is unclear. Not sarcastically. Genuinely. "Can you say more about what you mean by X?" is always appropriate.

You don't have to accept every suggestion. If you disagree with a comment, you can say so — with reasoning. "I considered that approach, but went this way because Y. Am I missing something?" This turns a disagreement into a discussion, which is often where the best learning happens.

Resist the urge to explain yourself defensively in every response. Some comments are just right. "Fixed, good catch." is a complete response.

What to Look For Beyond Syntax

Junior reviewers tend to look at code style and obvious bugs. Senior reviewers look at different things entirely. Here's what I focus on:

Logic and Edge Cases

  • What happens when this input is null?
  • What happens when this list is empty?
  • What happens when this service call fails?
  • What happens at the boundary values — zero, max int, empty string?

These are the errors that make it to production. Tests catch the happy path. Reviews catch the edges.
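Those review questions map directly onto code. Here's a tiny sketch (the `average` function is hypothetical, not from any real codebase) showing the edge cases a reviewer would probe:

```typescript
// A reviewer's edge-case questions, answered explicitly in code:
// null input? empty list? The naive version would return NaN (0 / 0)
// or throw on null — exactly the bugs that reach production.
function average(values: number[] | null | undefined): number {
  if (values == null) return 0;        // null/undefined input
  if (values.length === 0) return 0;   // empty list: avoid 0 / 0 = NaN
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```

Whether returning 0 is the right behavior is itself a review conversation; what matters is that the edge cases are handled deliberately rather than by accident.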

Readability and Maintainability

  • Will someone unfamiliar with this code understand it six months from now?
  • Are variable names clear, or do I have to read three lines to understand what d is?
  • Is this comment explaining what the code does (redundant) or why it does it (valuable)?

Architecture and Coupling

  • Does this change introduce dependencies that are going to be painful later?
  • Is this logic in the right place, or is it going to create duplication?
  • Are we solving the symptom or the root problem?

Security-Focused Review

This is where I spend extra attention, because security issues in code review are infinitely cheaper to fix than security issues in production.

Things I always look for:

Input validation. Is every external input being validated before use? User data, API responses, file contents — treat it all as hostile until proven otherwise.
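As a sketch of what "hostile until proven otherwise" looks like in practice — the function name, field, and bounds here are all hypothetical:

```typescript
// Treat the raw value as untrusted: it may be a string, a number,
// or garbage. Validate type and range before anything downstream uses it.
function parseAge(raw: unknown): number {
  const n =
    typeof raw === "string" ? Number(raw.trim())
    : typeof raw === "number" ? raw
    : NaN;
  if (!Number.isInteger(n) || n < 0 || n > 130) {
    throw new RangeError(`invalid age: ${String(raw)}`);
  }
  return n;
}
```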

Authentication and authorization checks. Is this endpoint protected? Is the authorization check in the right place? I've caught missing auth checks in code reviews that would have been significant vulnerabilities in production.

Secret handling. Is anything sensitive being logged? Are credentials being hardcoded? Are secrets being exposed in error messages?

SQL and injection risks. Is user input going directly into a query? Are we using parameterized queries?
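The parameterized version is worth seeing side by side with the vulnerable one. This sketch builds the query text and values separately, in the `$1` placeholder style used by libraries like node-postgres; the function and table names are hypothetical:

```typescript
// The reviewer's question: is user input going directly into the query?
// BAD (injection risk):
//   const sql = `SELECT * FROM users WHERE id = '${userId}'`;
// GOOD: the SQL text is a constant; user input travels only in the
// values array, which the driver sends separately from the query text.
function buildFindUserQuery(userId: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE id = $1", values: [userId] };
}
```

In a review, spotting string interpolation inside a query string is often a one-glance catch once you know to look for it.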

Dependency changes. New dependencies deserve scrutiny. What are we adding? Is it maintained? Does it have known CVEs?

A quick example of a security-focused comment done well:

security: This endpoint doesn't appear to have an authorization check — it's validating authentication (the JWT is checked) but I don't see where we're verifying the user has permission to access this specific resource. Can you confirm the authz check is happening somewhere in the chain, or should we add it here?

Building a Review Culture

Individual skill matters, but culture matters more. Here's what distinguishes teams where code review actually works:

Set expectations about response time. A PR that sits for five days gets stale. The author loses context. Aim for a first review within 24 hours on business days. It doesn't have to be thorough — even a quick pass to acknowledge the PR and flag anything urgent is valuable.

Keep PRs small. A PR with 2,000 lines of changes is not reviewable. It gets rubber-stamped because no one can actually hold it in their head. A PR with 200 lines gets real review. If your PRs are consistently large, it's a process problem, not just a review problem.

Make it psychologically safe to ask questions. If junior engineers feel embarrassed to ask "why" in a review, you've lost the mentorship value entirely. Lead by example — ask questions openly, including when you review code from people less senior than you.

Separate style enforcement from review. Linters and formatters exist precisely so reviewers don't have to spend mental energy on tabs vs spaces. Automate the style checks. Save the review for things machines can't catch.
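As one way to wire that up — a minimal sketch assuming GitHub Actions and an npm project where `lint` and `format:check` scripts already exist (both names are placeholders for whatever your project uses):

```yaml
# Hypothetical CI workflow: style checks run on every PR,
# so no reviewer ever has to comment on formatting.
name: style
on: [pull_request]
jobs:
  style:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run lint          # e.g. eslint .
      - run: npm run format:check  # e.g. prettier --check .
```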

An Example of Bad vs. Good

Same code, different reviews:

Bad review comment:

This is wrong. Use a map instead.

Good review comment:

suggestion: If we use a Map here instead of a nested loop, we drop this from O(n²) to O(n) — which probably doesn't matter at current data sizes, but the map version is also clearer to read since the intent is "look up by ID." Happy to pair on it if the refactor is unclear.

One is a verdict. The other is a conversation. One leaves the author feeling judged. The other leaves them with understanding they'll carry forward.
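For completeness, here's roughly what the refactor in that good comment looks like. The domain (orders and users) is invented for illustration:

```typescript
type User = { id: number; name: string };
type Order = { userId: number; total: number };

// Before: a nested loop — for each order, scan every user. O(n²).
// After: build a Map keyed by id once, then look up each order's
// user in O(1). O(n) overall, and the intent ("look up by ID") is
// visible in the code rather than buried in a loop.
function attachNames(orders: Order[], users: User[]): string[] {
  const byId = new Map(users.map((u) => [u.id, u]));
  return orders.map((o) => byId.get(o.userId)?.name ?? "unknown");
}
```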

That's the difference between a review culture that grows engineers and one that just frustrates them.


Takeaway: The next time you sit down to review a PR, ask yourself: "Is this author going to learn something from what I write?" If the answer is no, dig deeper. And the next time you receive a review, try responding to every comment with curiosity rather than defense. The learning is in the discomfort.