APB-Code review

Revision as of 19:31, 26 January 2018 by Boris (talk | contribs)

Code and Design Review

(Code review sessions - conduct and expectations)


 


Abstract:

This page explains conduct of and expectations for code and design reviews.


Objectives:
Code reviews are intended to ...

  • ... improve everyone's familiarity with the contents;
  • ... practice critical analysis of the material;
  • ... practice giving constructive feedback in a professional context;
  • ... improve communication skills.

 


Deliverables:

Your participation as a reviewer in the Code review sessions will be worth 10 marks per panel, for a maximum total of 4 × 10 marks.


Contents

We call our review sessions "Code reviews" for simplicity; however, the purpose is not only to review code, but also the design, content, and form of the submitted material.

Code reviews are done for course credit of the reviewers. The units will already have been marked by the instructor before the review, and critical reviews will not affect that mark, so there is no need to be protective of your peers for that reason. The best way to help your peers is to provide a high-quality review with creative, constructive feedback. A well-prepared, thoughtful, knowledgeable review will be immensely valuable for your peers. And since you as the reviewer will be evaluated for credit, I hope that you will put your heart into it.

Principles
  • Units are not to be modified until after the code review. Do not edit, upload, commit, push, merge, or otherwise modify any material you have submitted for credit until after the review is done.
  • We will use four weeks for reviews. We will schedule five to six reviews per class session, with four reviewers. This will give everyone approximately five minutes for their review contributions.
  • Everyone will review four units.
  • Typically everyone will participate in one review per session, and never in more than two.


 

Schedule

Date       | Unit                         | Author            | Reviewer 1        | Reviewer 2       | Reviewer 3        | Reviewer 4
October 18 | RPR-Data-Reshape/Split/Merge | Yuhan Zhang       | Ricardo Ramnarine | Vicky Lu         | Albert Cui        | Kevin Lin
October 18 | RPR-Data-Imputation          | Gregory Huang     | Xiao Wang         | Walter Nelson    | Yuqing Zou        | Jenny Yin
October 18 | EDA-MOD-Nonlinear            | Denitsa Vasileva  | Yuhan Zhang       | Ian Shi          | Alana Man         | Farzan Taj
October 18 | EDA-DR-MDS                   | Ricardo Ramnarine | Gregory Huang     | Adriel Martinez  | Marcus Chiam      | Truman Wang
October 18 | APB-ML-Features              | Xiao Wang         | Denitsa Vasileva  | Zhifan Wu        | Dominick Han      | Hari Sharma


Date       | Unit                         | Author            | Reviewer 1        | Reviewer 2       | Reviewer 3        | Reviewer 4
October 25 | EDA-MOD-Linear               | Vicky Lu          | Yuqing Zou        | Gregory Huang    | Leon Xu           | Yuhan Zhang
October 25 | EDA-MOD-Generalized          | Walter Nelson     | Albert Cui        | Ricardo Ramnarine | Truman Wang      | Farzan Taj
October 25 | EDA-MOD-Logistic             | Ian Shi           | Vicky Lu          | Kevin Lin        | Xiao Wang         | Jenny Yin
October 25 | EDA-MOD-Assessment           | Adriel Martinez   | Walter Nelson     | Zhifan Wu        | Denitsa Vasileva  | Alana Man
October 25 | APB-ML-Feature_importance    | Yuqing Zou        | Ian Shi           | Marcus Chiam     | Dominick Han      | Hari Sharma
October 25 | APB-ML-Measuring_performance | Albert Cui        | Adriel Martinez   | Gregory Huang    | Leon Xu           | Yuhan Zhang


Date       | Unit                         | Author            | Reviewer 1        | Reviewer 2       | Reviewer 3        | Reviewer 4
November 1 | EDA-Visualization            | Zhifan Wu         | Dominick Han      | Denitsa Vasileva | Xiao Wang         | Yuqing Zou
November 1 | EDA-CLU-Density_based        | Alana Man         | Kevin Lin         | Hari Sharma      | Ian Shi           | Vicky Lu
November 1 | EDA-CLU-Distribution_based   | Marcus Chiam      | Zhifan Wu         | Leon Xu          | Truman Wang       | Farzan Taj
November 1 | EDA-CLU-Mutual_information   | Dominick Han      | Alana Man         | Jenny Yin        | Albert Cui        | Adriel Martinez
November 1 | EDA-Graph_data_mining        | Kevin Lin         | Marcus Chiam      | Denitsa Vasileva | Ricardo Ramnarine | Walter Nelson


Date        | Unit                           | Author       | Reviewer 1  | Reviewer 2    | Reviewer 3      | Reviewer 4
November 15 | APB-ML-Deep_neural_networks    | Jenny Yin    | Hari Sharma | Yuqing Zou    | Adriel Martinez | Dominick Han
November 15 | APB-ML-h2o                     | Farzan Taj   | Leon Xu     | Walter Nelson | Marcus Chiam    | Ricardo Ramnarine
November 15 | APB-ML-RWeka                   | Truman Wang  | Jenny Yin   | Albert Cui    | Zhifan Wu       | Yuhan Zhang
November 15 | APB-ML-Support_Vector_Machines | Hari Sharma  | Farzan Taj  | Gregory Huang | Alana Man       | Kevin Lin
November 15 | APB-ML-Random_forests          | Leon Xu      | Truman Wang | Ian Shi       | Vicky Lu        | Xiao Wang


 

Preparation

  • The entire class is expected to have worked through all learning units by the time of the first review, to become familiar with the context. This may be done relatively quickly.
  • The entire class is expected to have worked through the learning units that are scheduled for review in detail before a review session, so that everyone can follow the discussion.
  • The reviewers of a learning unit have worked through the material in detail, have made themselves knowledgeable about the context and background, and have prepared their review contributions (see below). I expect that reviewers will come to class very well prepared, and that they consider the unit with reference to the expectations set out in the [rubrics for learning units, and the general expectations for design, code and documentation].


 

During the review

  • Code will not be presented by the author (unfortunately we don't have enough time), but the reviewers may ask some initial questions for clarification. Other than that, familiarity with the unit is assumed.
  • Reviewers will comment on issues. Ideally, reviewers will make specific suggestions for improvement, but it is better to point out a weakness, even if you don't quite know how to address it, than to remain silent. Once it is pointed out, others may have useful ideas. Of course, if you note particular strengths of the unit, that is also welcome.
  • Issues for discussion could include:
    • Suggestions to make the objectives of the unit clearer.
    • Suggestions for how the unit could (realistically) contribute better to the course objectives.
    • Improvements to integrating the unit with others (but without introducing unnecessary dependencies).
    • Constructive critique of design decisions.
    • Improvements to examples to better illustrate the concepts.
    • Addressing any unstated assumptions.
    • Identifying significant dependencies that could become obstacles to refactoring.
    • Flagging where the material lacks rigour or is factually incorrect.
    • Improvements to form and layout.
    • Identifying code that does not conform to the coding style.
    • Identifying code that exemplifies poor practice ("anti-patterns", "design smells", "code smells").
    • Improvements to comments (remember, this is teaching code, not production code).
    • Flagging where the sample code might not be robust against faulty input.
    • Flagging where the sample code might not be safe against overwriting user data.
    • Suggestions for how the tasks could be made more meaningful, or where additional tasks might be useful.
    • Flagging where sample solutions are missing.
    • Sample solutions need to support those learners who have the greatest difficulties with the material. Are the submitted solutions suitable for this purpose? Are they sufficiently commented? What can be improved?
    • Any other issues ...
  • During the review, reviewers take notes of responses and comments.
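Two of the criteria above, robustness against faulty input and safety against overwriting user data, lend themselves to a concrete illustration. The sketch below is hypothetical (the function name and file path are invented for this example, and the course's sample code is in R rather than Python), but the defensive pattern it shows is the kind of thing a reviewer would look for:

```python
import os

def save_results(values, path):
    """Write numeric results to 'path', one value per line, defensively."""
    # Robustness: reject faulty input up front instead of failing downstream.
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("values must all be numeric")
    # Safety: never silently overwrite existing user data.
    if os.path.exists(path):
        raise FileExistsError("refusing to overwrite existing file: %s" % path)
    with open(path, "w") as fh:
        fh.write("\n".join(str(v) for v in values))
```

A reviewer might flag sample code that skips either check: without the type test, a bad element surfaces as a confusing error far from its cause; without the existence test, rerunning a script can destroy a learner's earlier output.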


 

Overall, be mindful that a code review is a socially sensitive situation, and that the primary objective is not to point out errors, but to improve the entire "team".


 

After the review

  • After the review, on the same day, the reviewers summarize their issues and proposals on the "Talk" page of the unit (briefly, in point form).
  • Once all suggestions are in, the unit author begins revisions.
  • It is not mandatory that the revisions follow the reviewers' suggestions. Authors need to consider comments carefully, but apply their own best judgement. In the end, the reviewers are responsible for their reviews, but the author is responsible for their unit.


 

A final note

I hope that in your career you will find yourself in a workplace where peer-review is a regular part of your activities. This may contribute tremendously to better outcomes, more transparent and more meaningful work, and more cohesive teams. When that time comes, your skills as reviewers will be evaluated implicitly, although perhaps neither you, your supervisor, nor your project lead might realize this. You will be prepared.


 

Notes

Further reading, links and resources

Urs Enzler's "Clean Code Cheatsheet" (at planetgeek.ch). Oriented towards OO developers, but it expresses sound principles that apply by analogy.


 




 

If in doubt, ask! If anything about this learning unit is not clear to you, do not proceed blindly but ask for clarification. Post your question on the course mailing list: others are likely to have similar problems. Or send an email to your instructor.



 

About ...
 
Author:

Boris Steipe <boris.steipe@utoronto.ca>

Created:

2017-10-13

Modified:

2017-10-13

Version:

1.0

Version history:

  • 1.0 New unit

This copyrighted material is licensed under a Creative Commons Attribution 4.0 International License. Follow the link to learn more.