APB-Code review
Code and Design review
Keywords: Code review sessions - conduct and expectations
Abstract
This page presents the conduct of and expectations for code and design review sessions.
Prerequisites
This unit has no prerequisites.
Objectives
Code reviews are intended to ...
- ... improve everyone's familiarity with the contents;
- ... practice critical analysis of the material;
- ... practice giving constructive feedback in a professional context;
- ... improve communication skills.
Deliverables
Your participation as a reviewer in the Code review sessions is worth up to 10 marks per panel, for a maximum of 4 x 10 = 40 marks.
Contents
We call our review sessions "Code reviews" for simplicity; however, the purpose is not only to review code, but also the design, contents, and form of the submitted material.
Code reviews are done for the course credit of the reviewers. The units will already have been marked by the instructor before the review, and critical reviews will not affect that mark, so there is no need to be protective of your peers for that reason. The best way to help your peers is to provide a high-quality review with creative, constructive feedback. A well-prepared, thoughtful, knowledgeable review will be immensely valuable for your peers. And since you, as the reviewer, will be evaluated for credit, I hope that you will put your heart into it.
Principles
- Units are not to be modified until after the code review. Do not edit, upload, commit, push, merge, or otherwise modify any material you have submitted for credit until after the review is done.
- We will use four weeks for reviews and schedule five to six reviews per class session, with four reviewers each. This gives everyone approximately five minutes for their review contributions.
- Everyone will review four units.
- Typically, everyone will participate in one review per session, and never in more than two.
Schedule
Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4 |
October 18 | RPR-Data-Reshape/Split/Merge | Yuhan Zhang | Ricardo Ramnarine | Vicky Lu | Albert Cui | Kevin Lin |
October 18 | RPR-Data-Imputation | Gregory Huang | Xiao Wang | Walter Nelson | Yuqing Zou | Jenny Yin |
October 18 | EDA-MOD-Nonlinear | Denitsa Vasileva | Yuhan Zhang | Ian Shi | Alana Man | Farzan Taj |
October 18 | EDA-DR-MDS | Ricardo Ramnarine | Gregory Huang | Adriel Martinez | Marcus Chiam | Truman Wang |
October 18 | APB-ML-Features | Xiao Wang | Denitsa Vasileva | Zhifan Wu | Dominick Han | Hari Sharma |
Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4 |
October 25 | EDA-MOD-Linear | Vicky Lu | Yuqing Zou | Gregory Huang | Leon Xu | Yuhan Zhang |
October 25 | EDA-MOD-Generalized | Walter Nelson | Albert Cui | Ricardo Ramnarine | Truman Wang | Farzan Taj |
October 25 | EDA-MOD-Logistic | Ian Shi | Vicky Lu | Kevin Lin | Xiao Wang | Jenny Yin |
October 25 | EDA-MOD-Assessment | Adriel Martinez | Walter Nelson | Zhifan Wu | Denitsa Vasileva | Alana Man |
October 25 | APB-ML-Feature_importance | Yuqing Zou | Ian Shi | Marcus Chiam | Dominick Han | Hari Sharma |
October 25 | APB-ML-Measuring_performance | Albert Cui | Adriel Martinez | Gregory Huang | Leon Xu | Yuhan Zhang |
Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4 |
November 1 | EDA-Visualization | Zhifan Wu | Dominick Han | Denitsa Vasileva | Xiao Wang | Yuqing Zou |
November 1 | EDA-CLU-Density_based | Alana Man | Kevin Lin | Hari Sharma | Ian Shi | Vicky Lu |
November 1 | EDA-CLU-Distribution_based | Marcus Chiam | Zhifan Wu | Leon Xu | Truman Wang | Farzan Taj |
November 1 | EDA-CLU-Mutual_information | Dominick Han | Alana Man | Jenny Yin | Albert Cui | Adriel Martinez |
November 1 | EDA-Graph_data_mining | Kevin Lin | Marcus Chiam | Denitsa Vasileva | Ricardo Ramnarine | Walter Nelson |
Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4 |
November 15 | APB-ML-Deep_neural_networks | Jenny Yin | Hari Sharma | Yuqing Zou | Adriel Martinez | Dominick Han |
November 15 | APB-ML-h2o | Farzan Taj | Leon Xu | Walter Nelson | Marcus Chiam | Ricardo Ramnarine |
November 15 | APB-ML-RWeka | Truman Wang | Jenny Yin | Albert Cui | Zhifan Wu | Yuhan Zhang |
November 15 | APB-ML-Support_Vector_Machines | Hari Sharma | Farzan Taj | Gregory Huang | Alana Man | Kevin Lin |
November 15 | APB-ML-Random_forests | Leon Xu | Truman Wang | Ian Shi | Vicky Lu | Xiao Wang |
Preparation
- The entire class is expected to have worked through all learning units by the time of the first review, to become familiar with the context. This may be done relatively quickly.
- The entire class is expected to have worked through, in detail, the learning units scheduled for review before the respective session, so that everyone can follow the discussion.
- The reviewers of a learning unit must have worked through the material in detail, made themselves knowledgeable about the context and background, and prepared their review contributions (see below). I expect reviewers to come to class very well prepared.
During the review
- Code will not be presented by the author (unfortunately we don't have enough time), but the reviewers may ask some initial questions for clarification. Other than that, familiarity with the unit is assumed.
- Reviewers will comment on issues. Ideally, reviewers will make specific suggestions for improvement, but it is better to point out a weakness, even if you don't quite know how to address it, than to remain silent. Once it is pointed out, others may have useful ideas. Of course, if you note particular strengths of the unit, that is also welcome.
- Issues for discussion could include:
- Suggestions for making the objectives of the unit clearer.
- Suggestions for how the unit could (realistically) contribute better to the course objectives.
- Improvements to how the unit integrates with others (but without introducing unnecessary dependencies).
- Constructive critique of design decisions.
- Improvements to examples to better illustrate the concepts.
- Addressing any unstated assumptions.
- Identifying significant dependencies that could become obstacles to refactoring.
- Flagging where the material lacks rigour or is factually incorrect.
- Improvements to form and layout.
- Identifying code that does not conform to coding style.
- Identifying code that exemplifies poor practice ("anti-patterns", "design smell", "code smell").
- Improvements to comments (remember, this is teaching code, not production code).
- Flagging where the sample code might not be robust against faulty input.
- Flagging where the sample code might not be safe against overwriting user data (a brief illustration of both points follows this list).
- Suggestions for how the tasks could be made more meaningful, or where additional tasks might be useful.
- Flagging where sample solutions are missing.
- Sample solutions need to support those learners who have the greatest difficulties with the material. Are the submitted solutions suitable for this purpose? Are they sufficiently commented? What can be improved?
- Any other issues ...
- During the review, reviewers take notes of responses and comments.
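To make the points about robustness and data safety concrete, here is a minimal, hypothetical R sketch; the function name and the choice of CSV output are illustrative only and not taken from any unit. It shows the kind of input validation and overwrite protection a reviewer might look for in sample code, or flag when it is absent.

# Hypothetical helper: write a data frame to a file without accepting
# malformed input and without silently overwriting existing user data.
safeWriteCSV <- function(x, path) {
  # Validate input before doing any work.
  if (!is.data.frame(x)) {
    stop("'x' must be a data frame.")
  }
  if (!is.character(path) || length(path) != 1 || nchar(path) == 0) {
    stop("'path' must be a single, non-empty character string.")
  }
  # Refuse to overwrite user data; the caller must remove the file first.
  if (file.exists(path)) {
    stop("File \"", path, "\" already exists; refusing to overwrite it.")
  }
  write.csv(x, file = path, row.names = FALSE)
  invisible(path)   # return the path invisibly, for convenience
}

# Example use:
# safeWriteCSV(head(iris), "iris_sample.csv")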
Overall, be mindful that code review is a socially sensitive activity, and that the primary objective is not to point out errors, but to improve the entire "team".
After the review
- After the review, on the same day, the reviewers summarize their issues and proposals on the "Talk" page of the unit (briefly, in point form);
- Once all suggestions are in, the unit author begins revisions.
- It is not mandatory that the revisions follow the reviewers' suggestions. Authors need to consider comments carefully, but apply their own best judgement. In the end, the reviewers are responsible for their reviews, but the author is responsible for their unit.
A final note
I hope that in your career you will find yourself in a workplace where peer review is a regular part of your activities. This can contribute tremendously to better outcomes, more transparent and more meaningful work, and more cohesive teams. When that time comes, your skills as a reviewer will be evaluated implicitly, even if neither you, your supervisor, nor your project lead realizes it. You will be prepared.
Notes
If in doubt, ask! If anything about this learning unit is not clear to you, do not proceed blindly but ask for clarification. Post your question on the course mailing list: others are likely to have similar problems. Or send an email to your instructor.
About ...
Author:
- Boris Steipe <boris.steipe@utoronto.ca>
Created:
- 2017-10-13
Modified:
- 2017-10-13
Version:
- 1.0
Version history:
- 1.0 New unit
This copyrighted material is licensed under a Creative Commons Attribution 4.0 International License.