Even if it sounds silly, I have a hard time reviewing code.
How do you guys do it?
For me, on the GitHub page, I am able to spot formatting/naming issues, but I cannot follow the application flow. With 100 browser tabs open I spend more time finding the line in question than looking through the code. So is it best practice to check out the code, review it in the IDE I am familiar with, and then note the line and comment on GitHub?
Additionally, without trying it out myself, I have a hard time saying with much confidence that another implementation would be a better fit.
In general, is it part of a review to argue over naming? As an example:
Code reviews can be tricky. Ideally you're looking for any and all issues that might arise, but in practice that's difficult unless you're very closely familiar with both the code being modified and the surrounding context (this is why it's valuable to have at least one reviewer who's an "expert" in the code you're modifying, though this can be difficult in practice).
Basically I will look for anything and everything that might obviously affect code quality, and if I can, I'll try to dig into logic flow. Naming is absolutely fair game in code reviews, IMHO - I try not to be pedantic about it, but sometimes names are unclear or even a bit misleading (a made-up example is below). Since it can be difficult to spot issues in review, companies I've worked at always have a fully separate QA step following review - review answers the question of "does this look good," and QA (hopefully) answers the question of "is this actually good." Automated testing (unit and integration) is generally far better at catching logic errors than reviewers are, in my experience.
While I'll try to somewhat track logic flow in reviews, I don't see them as the best tool for catching bugs - if the author spent 20+ hours in that area of the code and missed a bug, it's pretty unlikely that most reviewers will catch it after looking over the code for 10-30 minutes. What reviewers can do effectively is judge how understandable the code is to other developers and offer suggestions for improvement. I view reviews and QA/tests as complementary - they serve different purposes.
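On the naming point, here's the kind of thing I mean - a completely made-up example, not from any real codebase:

```python
# Made-up example: the name says "process", but the function only filters.
# A reader has to open the body (or guess) to know what comes back.
def process_users(users):
    return [u for u in users if u.is_active and not u.is_banned]

# A name that states the behavior lets call sites read correctly on their own.
def active_unbanned_users(users):
    return [u for u in users if u.is_active and not u.is_banned]
```

Neither version is wrong, which is exactly why I'd phrase that kind of comment as a suggestion rather than a blocker.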
Fair to comment on naming (especially in public-facing APIs... super important). Fair to download and run it. (Actually, in the midst of browsing and merging code online, I think "actually running it" gets overlooked sometimes.)
Fair to ask questions about things... in fact, especially for new reviewers, that's sometimes the best way to start a conversation. "Why is this like this?" versus "I think this is wrong." And design-by-committee is only reasonably effective when we are pointing at running code. (Whiteboard design-by-committee is nearly impossible because we each read the whiteboard a little differently, but code is ground truth.)
As to process, it might be a personal thing in the end. I can model a lot of code in my head and can do reasonable code reviews with 4-5 tabs open in the browser... but sometimes that fails me and I have to check out the source. At my day job, the codebase is still relatively unfamiliar to me, so I even use IntelliJ so I can bounce around super fast.
This is a good question. I have taken part in some code reviews in my own work, and I also find it hard to identify issues quickly. Usually I can only point out the more obvious problems (code style, naming, whether a function call makes sense, etc.) and offer some personal opinions.
More often, we rely on automated testing. But even then there may still be latent issues, and they are only discovered after the project has been running for a long time under certain circumstances...
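To be concrete about what I mean by automated testing, here is a small invented example - the helper function and the edge case are hypothetical, but this is the kind of check that catches logic errors a reviewer skims past:

```python
import unittest

def split_batches(items, size):
    """Split items into batches of at most `size` elements (hypothetical helper)."""
    if size <= 0:
        raise ValueError("size must be positive")
    return [items[i:i + size] for i in range(0, len(items), size)]

class SplitBatchesTest(unittest.TestCase):
    def test_keeps_the_partial_last_batch(self):
        # The kind of edge case that is easy to miss when only reading a diff.
        self.assertEqual(split_batches([1, 2, 3, 4, 5], 2), [[1, 2], [3, 4], [5]])

    def test_rejects_non_positive_size(self):
        with self.assertRaises(ValueError):
            split_batches([1], 0)

if __name__ == "__main__":
    unittest.main()
```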
I work for a large US company in a heavily regulated industry, and it's the exact same situation there - and some of those issues caused incidents that resulted in hefty fines.
No review system is ever going to be perfect and the cost in time/materials/manpower/whatever goes up exponentially as it nears perfection. So there has to be some logical cut-off. You are already going to have to deal with bugs that escape review anyway. Just try to catch them as early as possible.
For an open source project, code review is about reducing maintenance burden almost more than anything else. For example, the team is just as likely to reject 1000 lines of perfect-but-complicated code if it only serves one user out of 1000... because that code is a maintenance burden for no real benefit. (And if the need to "put it in the engine" points to a design flaw in the engine - it should have offered some flexibility there - then that is the real target.) But mostly this means: don't break APIs when possible, simplify when possible, etc... all while adding to the "goodness".
As a rule of thumb, for any type of review you have to do in your life, whether code, documents, whatever... a good technique is to find at least 3 things you'd want to change. For any creation of more than minimal size, we may have to look a little, but we can almost always find 3 things. Force yourself to... then decide if those things are worth mentioning.
If you find three things really easily, the change needs a deeper look, because there are potentially lots more things that need to change.
If you struggle to find even three things, and the ones you find are not worth mentioning, then that's a pretty solid review.