Mindful Code Reviews
A considerable part of my day as an engineer involves giving or receiving code-reviews. For many of my colleagues, these reviews are our most frequent form of conversation; in that sense, code-reviews are the currency of engineering thought. That’s why I believe that culture is both reflected in and informed by how teams go through the code-review process.
This post is about code-reviews; specifically, it’s about my experience changing my code-review style. This is not about how everyone does or should do code-reviews, nor does it have amazing numbers to prove that any technique is objectively better. The only clear action item I hope to inspire is a pair of questions: Does my code-review process do the best it can to help us learn, grow, and work together to the best of our ability? If not, what might help me get better?
Default code reviews restrict creativity, morale, and enthusiasm
Use light-touch suggestions and lead with inquiry to get the best out of your colleagues and build a culture of learning
Use some handy techniques to make the process easier
Explore the cognitive neuroscience basis for why this works
The default style of code-reviews I’ve practiced has one major goal - to maintain the quality of the code-base. A few engineers initially architect a solution; the team then expands, and those engineers guide others towards doing the right thing as defined by the conventions or structure laid out. For example:
"This won’t work with X"
"The code here uses the X pattern; you’ll need to do Y instead of this"
Often, these few engineers are outnumbered by the larger, newer team working on the project, and the review feedback gets progressively shorter:
"Change X to Y"
"Do X"
It’s really important to note that these are not dysfunctional or nasty reviews - they are legitimate, well-intentioned, and respectful comments. What’s more, they represent concise, clear, and actionable feedback.
As time went on, though, I realized that these well-intentioned statements might be suboptimal, and that there were many ways to make my process better.
First, I realized that many of my reviews came from a place of certainty: I tended to assume a known ideal state in my code-reviews. This ideal state was derived from a static mental model of what the project ought to do. In fast-moving engineering organizations with growing teams, there is frequently new information available about downstream requirements, the underlying technologies, or new approaches in the field. As a result, the ideal state is always in flux.
By presenting my perspectives as objective truth, I ran the risk of drowning out new information. Without understanding the newer axioms that the reviewee was operating with, we would continue to reason along parallel lines that never converged. This could result either in standoffs (where both the reviewee and I felt frustrated with the change) or in complacency (where my outdated ideal state persisted). Eventually, when the reviewee convinced me about their idea, I would say something like:
Of course, that makes a lot of sense! I didn’t realize this is what you were going for earlier. It seems like we were both attempting to solve for the same thing!
Second, I realized that a lot of my code-review feedback was telling my colleagues exactly what to do. In the pursuit of clarity, I would present concise and actionable feedback - without much context. Much of it ended up reading like a directive that left little room for autonomy. Sometimes I would see the enthusiasm drain from a reviewee as I picked away at small conventions, and a quote I once read would come to mind: if you make the idea 5% better and the motivation 20% worse, you’ve just ensured the end product is 15% worse.
Third, I realized I was “spoiling” the best part of software engineering for them - solving a puzzle. Let’s say that during a code-review, I identify an edge case that might cause a function to error out. I would typically point out the edge case with some examples:
I don’t think the code works for all cases - for example, when foo is {1, 3} and bar is {3, 5}, wouldn’t trying to remove 5 from foo cause a KeyError?
This would lead to a response like:
Oh duh! You’re right - I can’t believe I missed that! :facepalm:
Something was off: the reason most of us become software engineers is to solve puzzles, and yet right at the moment the puzzle was solved, the reviewee was having a facepalm moment.
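For concreteness, here’s a minimal, hypothetical sketch of the kind of function such a comment might be about - the function and its names are invented for illustration, not real code from a review:

```python
def remove_all(foo: set, bar: set) -> set:
    """Remove every element of bar from foo (hypothetical code under review)."""
    for item in bar:
        foo.remove(item)  # set.remove raises KeyError when item is missing
    return foo

print(remove_all({1, 2, 3}, {2, 3}))  # fine: prints {1}
print(remove_all({1, 3}, {3, 5}))     # KeyError: 5, because bar is not a subset of foo
```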
After much reading and many iterations, I landed on a different style of feedback during code-reviews. This style has a different aim: to engage the reviewee in a conversation so that, with our powers combined, we can unlock cool new solutions. (Full disclosure: I find this style not just more effective but also more fun.)
There are three major differences between the two styles, and they correspond to the three areas of improvement identified earlier:
Questions instead of answers
Suggestions instead of directives
Hints instead of solutions
When I started using the language of suggestions, an interesting thing happened - instead of accepting my feedback about conventions as ipso facto true, engineers often started asking “Why?”. So I started saying “Consider doing X because of Y”. That led to one of three possibilities:
“Alright, that sounds reasonable.”
“What if I do Z instead? That seems to achieve the same thing and is simpler.”
“Wait, I don’t understand - looking at these other parts of the code it doesn’t seem like Y is a valid assumption.”
This meant that instead of just staying the same, our code quality improved as more minds contributed to making it simpler. Furthermore, our design kept evolving over time since we kept questioning the underlying assumptions that we designed for.
Using hints led to a subtle change in the exchange reported earlier. When I noticed that certain edge cases were not being accounted for, I started posing questions instead of answers:
Have you considered how the two parameters might interact with each other? Are there any assumptions we’re making about bar?
If the hint was appropriately vague given the context, the response would read something like:
Ah, I just found the bug! I’m assuming that bar is a subset of foo! I’ll add a test for this and push up a fix.
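As an illustration, the fix and test the reviewee describes might look something like this, continuing the hypothetical remove_all sketch from earlier:

```python
def remove_all(foo: set, bar: set) -> set:
    """Remove every element of bar that is present in foo."""
    for item in bar:
        foo.discard(item)  # unlike remove, discard is a no-op when item is missing
    return foo

def test_remove_all_when_bar_is_not_a_subset():
    assert remove_all({1, 3}, {3, 5}) == {1}
```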
This technique was also useful when recommending that code fit a known design pattern.
“Reading this code reminded me of a design pattern I’ve come across: [link.to.pattern]. Do you see any similarities with this case?”
The technique of asking questions instead of answers was especially useful in making those difficult code-reviews easier - the ones that involve code that seems obviously inappropriate for whatever reason, and the reviewee just doesn’t get it. The key here is acknowledging that if a smart, rational, and well-meaning person has come up with a solution that seems so obviously inappropriate to me, there must be some information mismatch.
Instead of recommending “This doesn’t work with X”, I would ask:
“Did you consider how this would interact with X?”, or
“I’m curious about your thoughts on how to resolve this with the related assumptions made in X.”
The key piece here is that I presented what I was thinking about, and showed interest in what the other person was thinking about without making a judgement on the solution. The idea is to first explore if the axioms we’re reasoning with are correct, before jumping into the outcome of the reasoning.
Being aware of all these considerations and doing insightful code reviews takes effort and time. Any activity that takes effort, time, and collaboration tends to lead to moments of frustration. On our team we acknowledge that the struggle is real, and practice a few other tactical things to make the code review process a lot smoother.
One way to lessen the load for the reviewer is to break up changes into smaller logical bites. Let’s say one of my tasks involved reimplementing how we serialized data into our Elasticsearch cluster. The new way reused a bit of code from the existing way, modified some other code, and also had a bunch of new code. Instead of asking for a code-review on a mammoth diff with a thousand lines added, removed, and modified, I could break this up into smaller diffs:
Break out reused code into a shared module
Implement the new way, leaving the old way and bindings untouched
Switch the bindings to the new way, and delete all the old code
Each of these smaller changes is self-contained with a clear agenda; the reviewer carries less cognitive load, and it’s easier to find three smaller blocks of time than one mammoth block. As a bonus, smaller changes end up being easier to test and ship as well.
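To make the staging concrete, here’s a rough, hypothetical sketch of the shape each diff might take - the function names and fields are invented for illustration, not the actual serialization code:

```python
from datetime import datetime, timezone

# Diff 1: break the logic both serializers need out into a shared helper.
def normalize_document(record: dict) -> dict:
    """Cleanup shared by the old and new serializers."""
    return {key: value for key, value in record.items() if value is not None}

# Diff 2: implement the new serializer; the old one and its call sites stay untouched.
def serialize_v2(record: dict) -> dict:
    doc = normalize_document(record)
    doc["indexed_at"] = datetime.now(timezone.utc).isoformat()
    return doc

# Diff 3: switch the bindings to the new serializer and delete the old code.
def index_record(record: dict) -> dict:
    return serialize_v2(record)  # previously called the old serializer

if __name__ == "__main__":
    print(index_record({"title": "hello", "stale_field": None}))
```

In a real codebase each stage would land as its own review (and likely its own module), but the overall shape is the same.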
Another technique that’s worked well for us is self-reviews: the reviewee goes through the changes first, either calling out obvious fixes or flagging the specific pieces they’d like feedback to focus on. It also helps to mention when some code has simply moved to a different file or been renamed.
The final technique that our team uses is for reviews that have already gone through a couple of rounds, or have reached an impasse. In such cases, it can be frustrating for the implementer, the reviewer, or both to coordinate asynchronously over comments. Instead, we frequently pull up a chair (or a screen-share) and engage in a pair-review, where the implementer talks through the changes while making them and points out which parts of the feedback seem confusing. This usually leads to a pair-programming session, followed by almost instant sign-off on the change.
Let’s change gears a little and dig deeper into the three recommendations about providing code-review feedback. The mindful code-review techniques above are all derived from research in cognitive neuroscience. That research uncovers some interesting things about how our brains process everyday situations, and what that implies for how we collaborate with each other. The three major learnings I’d like to touch upon are the positive feelings we humans associate with certainty, autonomy, and insights.
The craving for certainty is attributed to our evolutionary history: gaining deterministic knowledge about one’s surroundings was likely correlated with either eating well or not being eaten. Then there’s our energy-efficient brain, which tries to take costly, slow “thinking” and transform it into inexpensive, fast “routines” so we can do the same thing faster and with less effort.
In workplaces, this need for certainty manifests as a desire to learn and form a static mental model of the project that is then used as the definitive lens to view other contributions. Further, we’re more prone to doing this when our energy reserves are low - so squeezing a final discussion in right before lunch is possibly not going to encourage the best collaboration.
Moving away from certainty by asking questions is a technique called Leading by Inquiry, from the Harvard Negotiation Project1. It has a long and storied history, though, and is the basis of the Socratic Method employed by the famous Greek philosopher as early as the 5th century BCE. Another easy way to move away from certainty and encourage thought is to present the reasoning - the Why? - along with my recommendation. Letting people into the first principles behind the reasoning encourages them to think and contribute differing views.
Control over one’s surroundings - or even the perception of control over one’s surroundings - has been shown to correlate with lower levels of stress and related health problems, higher self-reported levels of happiness, higher levels of motivation, and better performance at work2.
The perception of control is an especially interesting piece: it essentially means that having some amount of control over the process makes a huge difference. Think of this as the difference between the following:
Please make this change and let me know, and I’ll approve.
We’ll need to change this - would you like to make that change now, or ticket it as a follow-up task immediately after?
Giving people the opportunity to own their decision allows them to choose what fits them best at that moment, and in that way gives them control over their immediate actions.
When we solve a puzzle - or any task that involves non-linear thinking - there’s a particular moment when we are aware of the solution. This moment of insight is frequently characterized by deeply positive emotions. When we arrive at such insights ourselves - an aha! moment - we report significantly higher levels of positivity than when we are given the insight by someone else - an a-doh! moment3. What’s worse, that knowledge can’t be unlearned, so that experience’s positive potential is lost forever; the easiest analogy to this is being given a spoiler to your favorite TV show.
One lesson I have learned on this journey is that this isn’t a new set of tools designed to coerce or influence my colleagues into achieving what I previously wanted - the code in my head, written by them. To truly be at ease with this process, I had to embrace the possibility that sometimes things just don’t work out: my hints would be too complex, or my colleague would start from the same first principles and come to different conclusions. At that point, it was helpful for me to remember the Pythonic adage: “We’re all consenting adults.”
I’d ask myself what the worst possible outcome was - and how likely it was to happen. In most cases, the worst would be that the code wasn’t to my preference. In other cases, the worst might be a constant-factor increase in memory usage or running time in exchange for better readability; in those cases I’d ask my colleague to document the tradeoffs inline and move ahead.
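Documenting a tradeoff inline can be as lightweight as a comment next to the code in question; here’s a small, hypothetical example of what I mean:

```python
def active_user_emails(users: list[dict]) -> list[str]:
    # Tradeoff: the intermediate list costs a little extra memory versus a single
    # pass, but the two steps read more clearly and our batches are small.
    # Revisit if this ever has to handle large inputs.
    active = [user for user in users if user.get("active")]
    return [user["email"] for user in active]
```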
Sometimes I find it genuinely difficult (or time consuming) to step away from my perspective and present impartial feedback. In such cases, I’ve found that it works best to declare my perspective clearly as an opinion, and give my colleague the space to take or leave this opinion. Authenticity is key to the process, and providing one side of the argument while admitting that I haven’t given thought to the other side helps maintain the spirit of collaboration. Typing it out in so many words also helps me articulate the trade-off: I’m prioritizing urgency somewhere else over being a more helpful collaborator.
Alright, that was a lot of things. To recap:
Code-review feedback improvements:
Ask questions instead of giving answers
Give suggestions instead of directives
Give hints instead of solutions
Code-review process improvements:
Break large changes into smaller chunks to review and ship independently
Use self-reviews as a way to make the reviewer’s job easier
Team up for pair-reviews to get a change over the line
We’ve discussed a bunch of techniques to make it easier to collaborate on code reviews. It’s important to note here that people do respond to different communication styles in varied ways. It’s possible that an experienced engineer with lots of context and confidence is more comfortable with a direct style than a junior engineer who’s finding their feet in the industry. My personal belief is that by understanding some truths that apply to all of our brains, we can make it easier for diverse teams to collaborate better.
My code-review process evolved organically, and I don’t have metrics showing its effectiveness. I’ll instead call out the potential impact each of us could have on engineering culture.
As I’ve used the techniques above, I’ve frequently seen colleagues come up with better solutions than I’d presumed possible. This sort of humbling experience makes it easier to lead with inquiry and apply these techniques the next time around. A positive code-review experience makes it more likely that engineers enter the next code-review with a learning mindset, which in turn makes it easier to have another positive experience. In an industry where change is constant, such a culture of learning is a core competitive advantage. Imagine the compounding effect on your organization of just a small tweak to this high-frequency process!
At Nerdwallet, one of the responsibilities of every engineer is to help other engineers perform better - and the more senior an engineer grows, the greater their responsibility to coach those around them to perform at their best. If you’re interested in joining us on this journey, check out our careers page!