2007-12-19

Code Reviews

My team is currently investigating code review strategies. As it turns out, this is a very difficult thing to establish for a development team. There are an amazing number of ways a team can perform code reviews, ranging from "All code must go through a handful of 'committers'" and "All your code must be on a branch before review" to "I trust you, so go ahead and commit".

Today, you can do a lot with static code analysis using tools such as CheckStyle and FindBugs, and a lot with unit tests and test coverage using tools like Cobertura, Emma, and Clover. I've even gone to the point of creating my own static analysis tools to deal with things that can't be covered by other tools. At some point, though, human eyes will need to look at the code. Static analysis isn't going to be able to cover everything.
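As a rough sketch of the kind of custom check I mean (a hypothetical illustration, not the actual tool I wrote), here is a small standalone scanner that walks a source tree and flags a project-specific violation, such as direct use of System.out instead of a team logging facade:

import java.io.BufferedReader;
import java.io.File;
import java.io.FileReader;
import java.io.IOException;

// Hypothetical custom check: report any line in a .java file that calls
// System.out directly, pushing developers toward the team's logging facade.
public class BannedCallCheck {

    private static final String BANNED = "System.out";

    public static void main(String[] args) throws IOException {
        File root = new File(args.length > 0 ? args[0] : "src");
        scan(root);
    }

    // Recursively visit every .java file under the given directory.
    private static void scan(File file) throws IOException {
        if (file.isDirectory()) {
            File[] children = file.listFiles();
            if (children == null) {
                return;
            }
            for (File child : children) {
                scan(child);
            }
        } else if (file.getName().endsWith(".java")) {
            check(file);
        }
    }

    // Print a warning for each line that contains the banned call.
    private static void check(File file) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader(file));
        try {
            String line;
            int lineNumber = 0;
            while ((line = reader.readLine()) != null) {
                lineNumber++;
                if (line.contains(BANNED)) {
                    System.err.println(file.getPath() + ":" + lineNumber
                            + " uses " + BANNED + "; use the logging facade instead.");
                }
            }
        } finally {
            reader.close();
        }
    }
}

A check like this can be wired into the build so violations surface before a human reviewer ever looks at the code.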

My current organization is part agile and part waterfall. In order to handle the audit process, we have to follow certain rules, and one of those rules is to create Use Case Design documents that are handed to the developer. This can be a good thing, but it can be very misleading when it comes to code reviews.

When doing a code review against the Use Case Design document, you typically follow the sequence diagram and verify that the code matches it. This is good for validating that the code has the correct functionality. It doesn't, however, deal with the possibility that the developer strayed off the path and modified code elsewhere (even code belonging to someone else's use case).

So the question is: how can you organize code reviews that are fluid enough not to slow down the development process, but still guarantee that everybody is coding to the team's standards and not straying too far off the path?
