APB-Code review

Code and Design review

Keywords: Code review sessions - conduct and expectations

Abstract

This page presents the conduct of, and expectations for, code and design reviews.

Prerequisites

This unit has no prerequisites.


 


Objectives

Code reviews are intended to ...

  • ... improve everyone's familiarity with the contents;
  • ... practice critical analysis of the material;
  • ... practice giving constructive feedback in a professional context;
  • ... improve communication skills.


 


Deliverables

Each code-review session results in written feedback: after the session, each reviewer posts a summary of their issues and proposals on the unit's "Talk" page (see "After the review" below).


 


Contents

We call our review sessions "code reviews" for simplicity; however, the purpose is not only to review the code, but also the design, contents, and form of the submitted material.

Code reviews are done for course credit of the reviewers. The units will already have been marked by the instructor before the review, and critical reviews will not affect that mark, so there is no need to be protective of your peers. The best way to help your peers is to provide a high-quality review with creative, constructive feedback. A well-prepared, thoughtful, knowledgeable review will be immensely valuable for your peers. And since you, as the reviewer, will be evaluated for credit, I hope that you will put your heart into it.


 

Schedule

  • Units are not to be modified until after the code review. Do not edit, upload, commit, push, merge, or otherwise modify any material you have submitted for credit until after the review is done.
  • We will use four weeks for reviews. We will schedule five to six reviews per class session, with four reviewers each. This gives everyone approximately five minutes for their review contributions.
  • Everyone will review four units.
  • Typically, everyone will participate in one review at every session, and never in more than two. (A sketch of how such assignments could be generated follows the tables below.)


 

Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4
October 18 | RPR-Data-Reshape/Split/Merge | Yuhan Zhang | Ricardo Ramnarine | Vicky Lu | Albert Cui | Kevin Lin
October 18 | RPR-Data-Imputation | Gregory Huang | Xiao Wang | Walter Nelson | Yuqing Zou | Jenny Yin
October 18 | EDA-MOD-Nonlinear | Denitsa Vasileva | Yuhan Zhang | Ian Shi | Alana Man | Farzan Taj
October 18 | EDA-DR-MDS | Ricardo Ramnarine | Gregory Huang | Adriel Martinez | Marcus Chiam | Truman Wang
October 18 | APB-ML-Features | Xiao Wang | Denitsa Vasileva | Zhifan Wu | Dominick Han | Hari Sharma

Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4
October 25 | EDA-MOD-Linear | Vicky Lu | Yuqing Zou | Gregory Huang | Leon Xu | Yuhan Zhang
October 25 | EDA-MOD-Generalized | Walter Nelson | Albert Cui | Ricardo Ramnarine | Truman Wang | Farzan Taj
October 25 | EDA-MOD-Logistic | Ian Shi | Vicky Lu | Kevin Lin | Xiao Wang | Jenny Yin
October 25 | EDA-MOD-Assessment | Adriel Martinez | Walter Nelson | Zhifan Wu | Denitsa Vasileva | Alana Man
October 25 | APB-ML-Feature_importance | Yuqing Zou | Ian Shi | Marcus Chiam | Dominick Han | Hari Sharma
October 25 | APB-ML-Measuring_performance | Albert Cui | Adriel Martinez | Gregory Huang | Leon Xu | Yuhan Zhang

Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4
November 1 | EDA-Visualization | Zhifan Wu | Dominick Han | Denitsa Vasileva | Xiao Wang | Yuqing Zou
November 1 | EDA-CLU-Density_based | Alana Man | Kevin Lin | Hari Sharma | Ian Shi | Vicky Lu
November 1 | EDA-CLU-Distribution_based | Marcus Chiam | Zhifan Wu | Leon Xu | Truman Wang | Farzan Taj
November 1 | EDA-CLU-Mutual_information | Dominick Han | Alana Man | Jenny Yin | Albert Cui | Adriel Martinez
November 1 | EDA-Graph_data_mining | Kevin Lin | Marcus Chiam | Denitsa Vasileva | Ricardo Ramnarine | Walter Nelson

Date | Unit | Author | Reviewer 1 | Reviewer 2 | Reviewer 3 | Reviewer 4
November 15 | APB-ML-Deep_neural_networks | Jenny Yin | Hari Sharma | Yuqing Zou | Adriel Martinez | Dominick Han
November 15 | APB-ML-h2o | Farzan Taj | Leon Xu | Walter Nelson | Marcus Chiam | Ricardo Ramnarine
November 15 | APB-ML-RWeka | Truman Wang | Jenny Yin | Albert Cui | Zhifan Wu | Yuhan Zhang
November 15 | APB-ML-Support_Vector_Machines | Hari Sharma | Farzan Taj | Gregory Huang | Alana Man | Kevin Lin
November 15 | APB-ML-Random_forests | Leon Xu | Truman Wang | Ian Shi | Vicky Lu | Xiao Wang
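To make the constraints above concrete, here is a minimal sketch, in R (the course language), of how reviewer assignments like those in these tables could be drawn at random, so that no author reviews their own unit and no one reviews more than one unit per session. The student names and the rejection-sampling approach are illustrative assumptions, not the method actually used to build this schedule.

# Draw nRev reviewers per unit for one session from a shuffled class
# list; redraw until no author is assigned to their own unit.
assignReviewers <- function(authors, students, nRev = 4) {
  stopifnot(nRev * length(authors) <= length(students))
  repeat {
    pool <- sample(students)                    # shuffle the whole class
    # one block of nRev names per unit; each name occurs only once in
    # the pool, so no one reviews more than one unit in this session
    blocks <- split(pool[seq_len(nRev * length(authors))],
                    rep(seq_along(authors), each = nRev))
    # accept only if no author landed in their own reviewer block
    if (!any(mapply(function(a, b) a %in% b, authors, blocks))) {
      names(blocks) <- authors
      return(blocks)
    }
  }
}

students <- sprintf("student%02d", 1:24)        # hypothetical class of 24
assignReviewers(students[1:5], students)        # five units, four reviewers each

Balancing assignments over all four weeks, so that everyone reviews exactly four units, would add one more constraint, but the rejection idea stays the same.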


 

Preparation

  • The entire class is expected to have worked through all learning units by the time of the first review, to become familiar with the context. This may be done relatively quickly.
  • The entire class is expected to have worked through the learning units that are scheduled for review in detail before a review session, so that everyone can follow the discussion.
  • The reviewers of a learning unit have worked through the material in detail, have made themselves knowledgeable about the context and background, and have prepared their review contributions (see below). I expect that reviewers will come to class very well prepared.


 

During the review

  • Code will not be presented by the author (unfortunately we don't have enough time), but the reviewers may ask some initial questions for clarification.
  • Reviewers will comment on issues. Ideally, reviewers will make specific suggestions for improvement, but it is better to point out a weakness, even if you don't quite know how to address it, than to remain silent. Once it is pointed out, others may have useful ideas. Of course, if you note particular strengths of the unit, that is also welcome.
  • Issues for discussion could include:
    • Suggestions to make the objectives of the unit clearer.
    • Suggestions for how the unit could (realistically) contribute better to the course objectives.
    • Improvements to integrating the unit with others (but without introducing unnecessary dependencies).
    • Constructive critique of design decisions.
    • Improvements to examples, to better illustrate the concepts.
    • Addressing any unstated assumptions.
    • Identifying significant dependencies that could become obstacles to refactoring.
    • Flagging where the material lacks rigour or is factually incorrect.
    • Improvements to form and layout.
    • Identifying code that does not conform to the coding style.
    • Identifying code that exemplifies poor practice ("anti-patterns", "design smells", "code smells"); a refactoring sketch follows this list.
    • Improvements to comments (remember, this is teaching code, not production code).
    • Flagging where the sample code might not be robust against faulty input (see the input-validation sketch below).
    • Flagging where the sample code might not be safe against overwriting user data (see the overwrite-safety sketch below).
    • Suggestions for how the tasks could be made more meaningful, or where additional tasks might be useful.
    • Flagging where sample solutions are missing.
    • Sample solutions need to support those learners who have the greatest difficulties with the material. Are the submitted solutions suitable for this purpose? Are they sufficiently commented? What can be improved?
  • During the review, reviewers take notes of the responses to the issues they raised and of any comments on them.
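To illustrate the "code smell" item above, here is a hypothetical R fragment (not taken from any unit) showing copy-pasted logic, a magic number, and a comment that merely restates the code, followed by one possible refactoring whose comment explains intent, which is what teaching code needs.

# Before: duplicated logic, an unexplained constant, and a comment that
# restates the code instead of explaining it.
x1 <- c(2, 5, 9)
x2 <- c(10, 40, 70)
x1 <- (x1 - min(x1)) / (max(x1) - min(x1)) * 100  # multiply by 100
x2 <- (x2 - min(x2)) / (max(x2) - min(x2)) * 100  # multiply by 100

# After: one named function; the comment now explains *why*.
rescale <- function(x, scale = 100) {
  # Map x linearly onto [0, scale]; scale = 100 gives percent-like
  # values that are easy to compare across data sets.
  (x - min(x)) / (max(x) - min(x)) * scale
}
x1 <- rescale(c(2, 5, 9))
x2 <- rescale(c(10, 40, 70))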


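For the robustness item, a minimal sketch of the kind of input validation a reviewer might ask for; the function and its checks are hypothetical, not code from a reviewed unit.

validatedMean <- function(x) {
  # Guard clauses fail early with informative messages, instead of
  # silently propagating NULL, empty, or non-numeric input.
  if (is.null(x))     stop("validatedMean: \"x\" is NULL.")
  if (!is.numeric(x)) stop("validatedMean: \"x\" must be numeric, not ",
                           class(x)[1], ".")
  if (length(x) == 0) stop("validatedMean: \"x\" has length zero.")
  if (any(is.na(x)))  warning("validatedMean: NA values were removed.")
  mean(x, na.rm = TRUE)
}

validatedMean(c(1.2, 3.4, NA))  # 2.3, with a warning
# validatedMean("a")            # stops with an informative error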
 
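And for overwrite safety, one way sample code can avoid clobbering user data, again with hypothetical names.

safeWriteCsv <- function(df, path) {
  # Refuse to overwrite: an existing file must be removed explicitly.
  if (file.exists(path)) {
    stop("safeWriteCsv: \"", path, "\" exists. Remove it first, ",
         "or choose another name.")
  }
  write.csv(df, file = path, row.names = FALSE)
}

results <- data.frame(unit = c("A", "B"), score = c(0.91, 0.87))
safeWriteCsv(results, "reviewScores.csv")    # writes the file
# A second identical call stops with an error; nothing is clobbered.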

Overall, be mindful that code review is a sensitive social issue, and that the primary objective is not to point out errors, but to improve the entire "team".


 

After the review

  • After the review, on the same day, the reviewers summarize their issues and proposals on the "Talk" page of the unit (briefly, in point form).
  • Once all suggestions are in, the unit author begins revisions.
  • It is not mandatory that the revisions follow the reviewers' suggestions. Authors need to consider comments carefully, but apply their own best judgement. In the end, the reviewers are responsible for their reviews, but the author is responsible for their unit.


 

A final note

I hope that in your career you will find yourself in a workplace where peer review is a regular part of your activities. It can contribute tremendously to better outcomes, more transparent and more meaningful work, and more cohesive teams. When that time comes, your skills as a reviewer will be evaluated implicitly, although perhaps neither you, nor your supervisor, nor your project lead will realize it. You will be prepared.


 


 


Further reading, links and resources

  • Anti-pattern (Wikipedia)
  • Design smell (Wikipedia)
  • Code smell (Wikipedia)
  • Urs Enzler's "Clean Code Cheatsheet" (at planetgeek.ch): https://www.planetgeek.ch/wp-content/uploads/2014/11/Clean-Code-V2.4.pdf . Oriented towards OO developers, but it expresses sound principles that apply by analogy.


 


Notes

If in doubt, ask! If anything about this learning unit is not clear to you, do not proceed blindly but ask for clarification. Post your question on the course mailing list: others are likely to have similar problems. Or send an email to your instructor.
About ...

Author:
  Boris Steipe <boris.steipe@utoronto.ca>
Created:
  2017-10-13
Modified:
  2017-10-13
Version:
  1.0
Version history:
  • 1.0 New unit

This copyrighted material is licensed under a Creative Commons Attribution 4.0 International License. Follow the link to learn more.