Code Review, SR+ … but why?

I wanted to take some time to talk about code review. Let me start off by explaining what “code review” is, or rather what it means in the context of this blog. Code review is the act of looking at someone’s code in order to evaluate it. Code that is in review is often referred to as a patch. The purpose of a patch is to fix a bug, add functionality, or improve performance. Once the patch passes review it is staged/added to the core of the project. The reason behind the review is simple: does the code do what it says it is supposed to do? It is important to note that every project has review requirements. For example, the popcorn-js project I am working on has the following requirements:

  1. Ensure the code follows the style guide and is appropriate
  2. Ensure the code passes lint
  3. Ensure main tests pass on multiple browsers: test/index.html
  4. Ensure new tests were added to test the new functionality and that they pass (see the sketch after this list)
  5. Ensure that other tests such as parser or plugin tests that are affected by the new code also pass
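To make requirement 4 a little more concrete, here is a minimal sketch of what a new test might look like. Popcorn-js’ unit tests are written with QUnit, and this sketch assumes a test page that contains a video element with the id “video” (the kind of setup test/index.html provides). The reviewDemo plugin, its timings, and the assertions are made up purely for illustration; they are not part of the project.

    // Hypothetical QUnit test for a patch; the plugin name, option values
    // and timings below are made up for illustration only.
    asyncTest( "reviewDemo fires start and end in order", 2, function() {

      var popcorn = Popcorn( "#video" ),
          fired = [];

      // Throwaway plugin so the example is self-contained
      Popcorn.plugin( "reviewDemo", {
        start: function( event, options ) {
          fired.push( "start" );
        },
        end: function( event, options ) {
          fired.push( "end" );
        }
      });

      popcorn.reviewDemo({ start: 1, end: 2 });

      // Check the results once playback has moved past the end time
      popcorn.exec( 3, function() {
        equal( fired[ 0 ], "start", "start callback fired first" );
        equal( fired[ 1 ], "end", "end callback fired after start" );
        start();
      });

      popcorn.play();
    });

A reviewer who actually ran a test like this could then note in the review exactly which browsers it passed on, which is precisely the kind of documentation I talk about below.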

Looking at these requirements, a patch for the popcorn-js project has to fix/add the functionality it is meant to fix/add, it has to include tests, and it has to follow the style guide. If the patch you are looking at is missing any of these, the review simply fails. However, what happens when it passes? Lately I have been seeing short and sweet review comments: “Super Review (SR)+”. But what does this mean exactly? Did you follow the review requirements? Do you even know they exist?

When a patch fails review, the reviewer always states the reason for the failure. This is obvious, since the problem has to be outlined before it can be fixed. Is it too much to expect the same courtesy for a passing review? After all, the way a patch was tested is significant. I am not saying that the person reviewing the patch is not to be trusted. I am, however, pointing out that there is merit in documenting how a review was done. If a review is not properly documented, it will be unofficially re-reviewed by the person responsible for staging the patch, simply because the person staging the new code wants to ensure that nothing breaks in the process.

I am aware that the person staging usually checks that nothing is broken, but there is a major difference here. Looking back at the popcorn-js project and its review requirements, you will notice that the project has core unit tests as well as parser and plugin tests. Typically, after something has been staged, the core unit tests, including the main demos, would be run. The plugin and parser tests, however, would not. From a release engineer’s perspective, proper review documentation saves a lot of time. Let me provide an example of good review documentation based on popcorn-js’ requirements:

SR+, code looks good

No lint errors

Unit tests passing on (Vista) Firefox, Chrome and Safari

This patch affects the googleMap plugin. I verified that all unit tests/demos using this plugin work as expected on the browsers mentioned above.

Notice that I am not writing a whole paragraph. Point-form notes are all you really need to let the appropriate people know what you did and why the review passed. I hope you keep this in mind when doing a review.

 
