Our LabVIEW test tools team is a fairly new organization, and we're just starting to get into code reviews. Earlier this week I held one for an application I developed that is nearing release. I felt it went okay, but due to time constraints it was fairly shallow, and the few deep dives into the code were extremely narrow. It would not have uncovered a bug I found last week that resulted in a robot crashing into a fixture.
For those who do code reviews:
How often/at what stages do you have code reviews? Our projects may run anywhere from 1 week to 6 months of development time, with the majority on the shorter end of the scale. I think this particular code review happened much later in the project than it should have.
How much time do you budget for code reviews? 2% of total project time? 20%?
How much participation do you expect from other developers? Do you find reviews more productive when conducted one-on-one or in a small-group setting (4-6 developers total)?
How much code do you try to cover? Ideally a code review would cover all the code, and at the end all participants would understand it as well as the developer does. However, that simply is not practical in our environment, where each test tool is typically owned by a single developer. Business realities dictate that the review focus on 'critical' sections, which leads to the question...
How do you decide what code is 'critical', and how much does the application architecture dictate which code is critical? For an application with several asynchronous threads, the message timing could be critical. Do other design patterns suggest different places to look for critical code?
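To make the timing point concrete, here is a contrived sketch of the general shape of hazard I have in mind, written in Python since LabVIEW diagrams don't paste as text. All the names here are hypothetical, not our actual code:

```python
# Hypothetical sketch of a message-timing hazard between two
# asynchronous loops: a motion loop that consumes commands and a
# status loop that reports whether the fixture is out of the way.
import queue
import threading
import time

commands = queue.Queue()
fixture_clear = threading.Event()  # set once the fixture is reported clear

def motion_loop():
    while True:
        cmd = commands.get()
        if cmd == "stop":
            break
        # BUG: nothing guarantees the status message arrives before the
        # move command, so whether this check passes depends on timing.
        if cmd == "move" and not fixture_clear.is_set():
            print("CRASH: moved while fixture still in place")
        elif cmd == "move":
            print("OK: fixture clear, moving")

def status_loop():
    time.sleep(0.1)        # the "fixture clear" status arrives late
    fixture_clear.set()

threading.Thread(target=status_loop).start()
worker = threading.Thread(target=motion_loop)
worker.start()
commands.put("move")       # processed before the status update -> crash
commands.put("stop")
worker.join()
```

In a review, I'd want the discussion to land on exactly that ordering assumption between the two loops; that's the kind of 'critical' section I'm asking how to identify up front.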