<div id="BIO">
+
<div id="ABC">
  <div class="b1">
+
<div style="padding:5px; border:4px solid #000000; background-color:#e19fa7; font-size:300%; font-weight:400; color: #000000; width:100%;">
 
Code and Design Review
 
Code and Design Review
  </div>
+
<div style="padding:5px; margin-top:20px; margin-bottom:10px; background-color:#e19fa7; font-size:30%; font-weight:200; color: #000000; ">
 
+
(Code review sessions - conduct and expectations)
  {{Vspace}}
+
</div>
 
 
<div class="keywords">
 
<b>Keywords:</b>&nbsp;
 
Code review sessions - conduct and expectations
 
 
</div>
 
</div>
  
{{Vspace}}
+
{{Smallvspace}}
 
 
 
 
__TOC__
 
 
 
{{Vspace}}
 
 
 
 
 
{{LIVE}}
 
 
 
{{Vspace}}
 
  
  
</div>
+
<div style="padding:5px; border:1px solid #000000; background-color:#e19fa733; font-size:85%;">
<div id="ABC-unit-framework">
<div style="font-size:118%;">
<b>Abstract:</b><br />

<section begin=abstract />
<!-- included from "./components/APB-Code_review.components.txt", section: "abstract" -->
This page explains the conduct of, and expectations for, code and design reviews.
<section end=abstract />

</div>

{{Vspace}}

<!-- ============================ -->
<hr>
<table>
<tr>
<td style="padding:10px;">
<b>Objectives:</b><br />
Code reviews are intended to ...
* ... improve everyone's familiarity with the contents;
* ... practice critical analysis of the material;
* ... practice giving constructive feedback in a professional context;
* ... improve communication skills.
</td>
<td style="padding:10px;">
&nbsp;
</td>
</tr>
</table>
<!-- ============================  -->
<hr>
<b>Deliverables:</b><br />
<section begin=deliverables />
Your participation as reviewers in the Code review sessions will be worth 8 marks in each panel for a total of 4 x 8 marks maximum.
<section end=deliverables />
<!-- ============================  -->
</div>

{{Smallvspace}}

{{SLEEP}}

{{Smallvspace}}

__TOC__

{{Vspace}}

</div>
<div id="BIO">
 
 
== Contents ==
<!-- included from "./components/APB-Code_review.components.txt", section: "contents" -->
 
  
 
We call our review sessions "Code reviews" for simplicity; however, the purpose is not only to review '''code''', but also the '''design''', '''contents''', and '''form''' of the submitted material.

Code reviews are done for course credit '''of the reviewers'''. The units will already have been marked by the instructor before the review. Critical reviews will not affect that mark, so there is no need to hold back on criticism to protect your peers. The best way to help your peers is to provide a high-quality review with creative, constructive feedback. A well-prepared, thoughtful, knowledgeable review will be immensely valuable for your peers. And since '''you''' as the reviewer will be evaluated for credit, I hope that you will put your heart into it.

; Principles:
* Units are not to be modified until after the code review. Do not edit, upload, commit, push, merge, or otherwise modify any material you have submitted for credit until after the review is done.
* We will use four weeks for reviews. We will schedule five to six reviews per class session, with four reviewers each. This will give everyone approximately five minutes for their review contributions.
* Everyone will review four units.
* Typically everyone will participate in one review at every session, and never more than two.

{{Vspace}}

===Schedule===

<table>
<tr class="sh"><td>Date</td><td>Title</td><td>Author</td><td>Reviewer 1</td><td>Reviewer 2</td><td>Reviewer 3</td><td>Reviewer 4</td></tr>
<tr class="s1"><td>October 10</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 10</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 10</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 10</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 10</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
</table>
  
  
 
<table>
<tr class="sh"><td>Date</td><td>Title</td><td>Author</td><td>Reviewer 1</td><td>Reviewer 2</td><td>Reviewer 3</td><td>Reviewer 4</td></tr>
<tr class="s1"><td>October 17</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 17</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 17</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 17</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 17</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
</table>
 
  
 
<table>
<tr class="sh"><td>Date</td><td>Title</td><td>Author</td><td>Reviewer 1</td><td>Reviewer 2</td><td>Reviewer 3</td><td>Reviewer 4</td></tr>
<tr class="s1"><td>October 24</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 24</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 24</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 24</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 24</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
</table>
 
  
 
<table>
<tr class="sh"><td>Date</td><td>Title</td><td>Author</td><td>Reviewer 1</td><td>Reviewer 2</td><td>Reviewer 3</td><td>Reviewer 4</td></tr>
<tr class="s1"><td>October 31</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 31</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 31</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s2"><td>October 31</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
<tr class="s1"><td>October 31</td><td>TBD</td><td>'''NN'''</td><td>NN</td><td>NN</td><td>NN</td><td>NN</td></tr>
</table>
  
  
{{Vspace}}

===Preparation===

*The entire class is expected to have familiarized themselves with all submitted packages before the first review, to establish context.
*The entire class is expected to have installed and assessed the packages that are scheduled for review at each review session, so that everyone can follow the discussion and contribute (a brief sketch of such preparation follows below this list).
*The designated reviewers of a package have worked through the material in detail, have made themselves knowledgeable about the context and background, and have prepared their review contributions (see below). I expect that reviewers will come to class very well prepared, and that they consider the package with reference to the expectations set out in the [http://steipe.biochemistry.utoronto.ca/abc/index.php/ABC-Rubrics evaluation rubrics for software design, code and documentation].
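By way of orientation, preparing a package for review might look roughly like the sketch below. This is a generic outline only, not a prescribed workflow; <tt>examplePkg</tt> and <tt>user/examplePkg</tt> are placeholders, not an actual submission.

<pre>
# A minimal sketch of preparing a package for review. "examplePkg" and
# "user/examplePkg" are placeholders, not an actual submission.

# Install the submitted package from its GitHub repository
# (requires the devtools package: install.packages("devtools") ).
devtools::install_github("user/examplePkg")

# Load it and skim its documentation and vignettes ...
library(examplePkg)
help(package = "examplePkg")
browseVignettes("examplePkg")

# ... then run the standard package checks on a local clone of the source,
# and note any ERRORs, WARNINGs or NOTEs as points for the discussion.
devtools::check("path/to/examplePkg")
</pre>

Whether you install with devtools, from a source tarball, or by any other route does not matter; the point is to have actually installed, run, and assessed the package before the session.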
  
 
{{Vspace}}
  
 
===During the review===

{{Smallvspace}}

Reviews proceed in three rounds: first, the lead reviewers ask their most important prepared questions in turn; second, the reviewers lead a more general round of discussion; finally, the discussion is opened to the entire class. Each review will take approximately twenty minutes.

{{Smallvspace}}

*Code will not be presented by the author (unfortunately we don't have enough time), but the reviewers may ask some initial questions for clarification. Other than that, familiarity with the package is assumed.
*Reviewers will comment on issues, focussing on the '''importance, visual appeal and utility''' of the package. Ideally, reviewers will make specific suggestions for improvement, but it is better to point out a weakness, even if you don't quite know how to address it, than to remain silent. Once it is pointed out, others may have useful ideas. Of course, if you note particular strengths of the package, that is also welcome.
 
* Issues for discussion could include:
** Suggestions to make the objectives of the tool more clear.
** Improvements to integrating the package with existing packages (but without introducing unnecessary dependencies).
** Constructive critique of software design decisions.
** Improvements to examples to better illustrate the concepts.
** Addressing any unstated assumptions.
** Identifying significant dependencies that could become obstacles to refactoring.
** Flagging where the material lacks rigour or is factually incorrect.
** Improvements to form and layout.
** Identifying code that does not conform to coding style.
** Identifying code that exemplifies poor practice ("anti-patterns", "design smell", "code smell").
** Improvements to comments.
** Improvements to visuals.
** Flagging where the sample code might not be robust against faulty input (for a small example of this and the next point, see the sketch after this list).
** Flagging where the sample code might not be safe against overwriting user data.
** Any other issues ...
*During the review, reviewers take notes of responses and comments.
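To make the last two code-related points concrete, here is a small, entirely hypothetical fragment of the kind a reviewer might flag, together with a more defensive alternative. The function and its purpose are invented for illustration and are not taken from any actual submission.

<pre>
# Hypothetical example - invented for illustration, not from a real submission.

# As submitted: no input validation, and an existing file is silently overwritten.
writeScores <- function(x, path) {
  write.csv(x, path)
}

# As a reviewer might suggest: fail early on faulty input, and never
# overwrite the user's data unless explicitly asked to.
writeScores <- function(x, path, overwrite = FALSE) {
  stopifnot(is.data.frame(x),
            nrow(x) > 0,
            is.character(path), length(path) == 1)
  if (file.exists(path) && !overwrite) {
    stop("File \"", path, "\" already exists. Set overwrite = TRUE to replace it.")
  }
  write.csv(x, path, row.names = FALSE)
}
</pre>

Noting the problem together with a concrete suggestion like this is far more useful to the author than a general remark that error handling could be improved.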

{{Vspace}}

Overall, be mindful that code review is a sensitive social issue, and that the primary objective is not to point out errors, but to improve the entire "team".

{{Vspace}}

==After the review==
  
* After the review, '''on the same day''', the reviewers summarize their issues and proposals on the "Talk" page of the package synopsis on the Student Wiki (briefly, in point form).
* Once all suggestions are in, the package author begins revisions.
* It is not mandatory that the revisions follow the reviewers' suggestions. Authors need to consider comments carefully, but apply their own best judgement. In the end, the reviewers are responsible for their reviews, but the author is responsible for their package.

{{Vspace}}

==A final note==

I hope that in your career you will find yourself in a workplace where peer review is a regular part of your activities. This may contribute tremendously to better outcomes, more transparent and more meaningful work, and more cohesive teams. When that time comes, your skills as reviewers will be evaluated implicitly, although perhaps neither you, your supervisor, nor your project lead might realize this. You will be prepared.

{{Vspace}}

== Further reading, links and resources ==

* Urs Enzler's "Clean Code Cheatsheet" (at planetgeek.ch): oriented towards OO developers, but it expresses sound principles that apply by analogy.

<!--
{{#pmid: 19957275}}
-->
 
{{Vspace}}

== Notes ==
<!-- included from "./data/ABC-unit_components.txt", section: "notes" -->

<references />
  
 
{{Vspace}}

</div>
<div id="ABC-unit-framework">

{{Vspace}}

<div class="about">
<b>Author:</b><br />
:Boris Steipe &lt;boris.steipe@utoronto.ca&gt;
<b>Created:</b><br />
:2017-10-13
<b>Modified:</b><br />
:2018-09-12
<b>Version:</b><br />
:1.1
<b>Version history:</b><br />
*1.1 2018 updates
*1.0 New unit
</div>

<!-- included from "./data/ABC-unit_components.txt", section: "ABC-unit_footer" -->

{{CC-BY}}

[[Category:Applied_Bioinformatics]]
{{INTEGRATOR}}
{{SLEEP}}
{{EVAL}}

</div>
</div>

<!-- [END] -->
