
How to have good code reviews?



Our LabVIEW test tools team is a fairly new organization, and we're just starting to get into code reviews. Earlier this week I held one for an application I developed that is nearing release. I felt it went okay, but due to time constraints it was fairly shallow, and the few deep dives into the code were extremely narrow. It would not have uncovered a bug I found last week that resulted in a robot crashing into a fixture.

For those that do code reviews,

  • How often/at what stages do you have code reviews? Our projects may run anywhere from 1 week to 6 months of development time with the majority on the shorter end of the scale. I think this particular code review was much later than it should have been.
  • How much time do you budget for code reviews? 2% of total project time? 20%?
  • How much participation do you expect from other developers? Do you find reviews more productive when conducted 1-on-1 or in a small group (4-6 developers total) setting?
  • How much code do you try to cover? Ideally a code review would cover all the code and at the end all participants understand it as well as the developer; however, that simply is not practical in our environment where each test tool is typically owned by a single developer. Business realities dictate the review focus on 'critical' sections, which leads to the question...
  • How do you decide what code is 'critical' and how much does application architecture dictate critical code? For an application with several asynchronous threads the message timing could be critical. Do other design patterns suggest different places to look for critical code?


I like the comic.

As for useful reviews, my team hasn't really established a formal process yet, as we are fairly small. However, one thing our embedded developers (who use an Agile development process) do with their code reviews, which I think makes sense, is to exempt code that was pair programmed. The reasoning is that if the code was pair programmed, multiple sets of eyes have already looked at it. This can save time during code reviews.

We have used pair programming on parts of our LabVIEW projects and found it to be helpful, especially when working on challenging pieces of code.


Hi,

We have been doing code reviews, which we call QCPs (Quality Peer Reviews), for a long time now on all our new software and software changes.

Our process is based on one we use site-wide for C++ and C (30+ developers) and LabVIEW (6 developers). I have used this same basic process for many years at different companies.

It is done as a formal, traceable process, built into our own web-based tools that we use to control ClearCase, our source control tool.

The web-based tool we created acts as both a record-keeping system (who the reviewer was, a log of review comments and developer responses) and a process machine, controlling when a QCP (Quality Peer Review) needs to be done.
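To make the record-keeping side a bit more concrete, here is a minimal sketch of the kind of record such a tool might keep per review. It is in Python (LabVIEW diagrams don't translate to text), and the names and fields are invented for illustration; our actual web tool is not built this way.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ReviewComment:
    reviewer: str
    text: str
    developer_response: str = ""   # the developer must answer every comment
    resolved: bool = False

@dataclass
class QCPRecord:
    branch: str                    # the ClearCase branch under review
    developer: str
    reviewer: str
    level: str                     # e.g. Maint, High, Detail, Demo (see the levels below)
    opened: datetime = field(default_factory=datetime.now)
    comments: List[ReviewComment] = field(default_factory=list)
    passed: bool = False           # set only when the reviewer signs the QCP off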

  • When do we do a QCP
All software, whether bug fixes or new features, is written on separate ClearCase branches. Each ClearCase branch is atomic, i.e. it contains all the changes for that bug fix or feature. The developer codes and tests on that branch, and at the point the developer is happy that the code is ready to go into the ClearCase mainline (i.e. ready for everyone else to use or to be released), a QCP is done. So ALL software is peer reviewed before it is added into the main code stream; this is controlled by our web tool, which will not allow a merge back into main until a QCP has been passed.
Note that a branch could have one file on it or many. Typically a branch is kept small in scope; even for major new software we try to build it up a bit at a time, with a different branch for each piece. This is partly to make the peer review of a branch more manageable. In LabVIEW we only really get big branches when we have major recompile issues.
A branch's status goes: development -> ready for QCP -> passed QCP (or back to development).
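Purely as an illustration of that gate (the real enforcement lives in the web tool and ClearCase; the function names below are made up), the branch life cycle amounts to a small state machine in which the merge back to main is simply refused until the branch has passed its QCP:

# Branch states: development -> ready for QCP -> passed QCP (or back to development)
ALLOWED_TRANSITIONS = {
    "development":   {"ready_for_qcp"},
    "ready_for_qcp": {"passed_qcp", "development"},  # pass, or fail back to development
    "passed_qcp":    set(),                          # terminal: merged to main, branch closed
}

def set_branch_state(current: str, new: str) -> str:
    if new not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal branch transition: {current} -> {new}")
    return new

def merge_to_main(branch_state: str) -> None:
    # The web tool refuses the merge unless the QCP has been passed.
    if branch_state != "passed_qcp":
        raise PermissionError("merge blocked: this branch has not passed a QCP")
    # ... the ClearCase merge would happen here ...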
  • QCP levels

OK, we all live in the real world: not all software changes are equal, and sometimes pragmatic decisions are needed ("FIX THIS NOW"), so we have different levels of QCP. Our QCP levels are:

None - very rarely used; must be accompanied by a good reason for this type of review, e.g. a save due to a recompile only.

Maint - just a quick check over the changes made.

High - examine the changes made and their context.

Detail - much the same as above; possibly we do not need this option.

Demo - show the reviewer the change in action.

The key here is that the information is recorded (so there is traceability) and the developer has to think about it. I would say that 99.9% of reviews are done as High/Detail.
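If it helps, the levels boil down to something like the following. This is only an illustrative Python sketch; the enum and the check are invented, not how our tool actually stores them:

from enum import Enum

class QCPLevel(Enum):
    NONE = "None"      # very rare; must come with a written justification
    MAINT = "Maint"    # quick check over the changes made
    HIGH = "High"      # examine the changes and their context (our usual level)
    DETAIL = "Detail"  # effectively the same as High
    DEMO = "Demo"      # show the reviewer the change in action

def validate_level(level: QCPLevel, justification: str) -> None:
    # A "None" review is only accepted when a good reason is recorded alongside it.
    if level is QCPLevel.NONE and not justification.strip():
        raise ValueError("a 'None' QCP level requires a recorded justification")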

  • What happens in a review

First off, during a review it is the developer's responsibility to satisfy the reviewer's comments or questions; after all, it is the reviewer who passes or fails a QCP. If a QCP passes, the code is merged into the mainline and the branch is closed. If a review fails, the branch goes back to a development state and the developer needs to address all the reviewer's comments.
A reviewer will do the following for all reviews except the None type.
1) Load all the relevant top-level VIs and ensure that a) there are no broken run arrows, and b) they close with no saves needed.
2) Run the VI Analyser on all the files on the branch. We have this automated, with the Analyser config spec set to our custom requirements. All failures are visually examined.
We have a document which deals with the analyser results: which must be fixed, which should be fixed (or explained away), which we can ignore, and in what circumstances.
We use the VI Analyser as a style guide tool.
3) The reviewer will use the OpenG compare to disk tool (modified to work with LabVIEW 8.2) to visually look at the code changes between the current version on main and the ones on the branch under review.
For a High/Detail review (our normal review type is High), the reviewer will also look at the wider context of the changes, how they fit into the general scheme of things, and how they meet the requirements of the change or bug fix.
Any comments the reviewer makes are logged.
If the reviewer is not happy with the code, the branch is set back to development status and the comments are mailed to the developer, who must then explain/justify the code or modify it. Being a smallish team, the reviewer and developer will often sit together during a review, so typically when a review has failed they have already agreed on what corrective action is required to pass next time, but the comments and replies are always recorded.
Following a failed peer review, the developer makes their changes, retests (well, the code has changed), and then resubmits the changes to the original reviewer for a new QCP; hopefully it will pass this time :-)
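Pulled together, the reviewer's pass over a branch looks roughly like the checklist below. This is only a Python-flavoured sketch: every helper function is a made-up stand-in for a manual check or for a tool driven from inside LabVIEW (the VI Analyser, the OpenG compare), and the pass/fail decision stays with the reviewer, not with a script.

# All helpers below are hypothetical stand-ins for manual checks and LabVIEW-side tools.
def top_level_vis(branch): return []
def has_broken_run_arrow(vi): return False
def needs_save_on_close(vi): return False
def run_vi_analyser(branch, config): return []
def compare_branch_to_main(branch): return []

def review_branch(branch):
    """Collect the reviewer's findings for one branch; the QCP verdict is a human decision."""
    findings = []

    # 1) Load the relevant top-level VIs: no broken run arrows, and they close with no saves needed.
    for vi in top_level_vis(branch):
        if has_broken_run_arrow(vi) or needs_save_on_close(vi):
            findings.append(f"{vi}: broken run arrow or unsaved changes on load")

    # 2) Run the VI Analyser with our custom config spec; each failure is examined and classified
    #    (must fix / should fix or explain away / can ignore) per our results document.
    findings.extend(run_vi_analyser(branch, config="team_style_spec"))

    # 3) Diff each VI on the branch against the current version on main; for High/Detail reviews
    #    also weigh the wider context and the requirements of the change or bug fix.
    findings.extend(compare_branch_to_main(branch))

    return findings   # logged in the web tool; the reviewer then passes or fails the QCP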
  • Other comments

I would like to get some automated unit testing built into this system, but it will be a while before that happens.

Having done QCPs for many years, I can honestly say I would not develop software in ANY language without a peer review process of some sort.

The value of getting somebody else to view your code cannot be overstated, if only from a simple style point of view. When you KNOW somebody else is going to look at your code with a critical eye, you tend to write it differently.

Nearly every code review results in a valid comment that will improve the code. This is NOT to say that the original developer was poor or lazy, just that the reviewer takes a different view. In some cases our peer review has stopped code going into our mainline that would have been a major problem if it had been released.

Finally, I did see a really good NI webcast, Performing Technical Code Reviews to Improve LabVIEW Code Quality, but although I found a link to it at the bottom of the page, it seems to have since been removed.

I hope this makes some sense and is of some use

cheers

Dannyt

