FND-CSC-Test driven development

Test Driven Development

Abstract:

Test Driven Development (TDD) is a powerful methodology that helps you write correct code. Unit testing is its cornerstone; integration testing ensures that the units work together.


Objectives:
This unit will ...

  • ... introduce the concept of Test Driven Development;
  • ... introduce unit testing and integration testing, two cornerstones of building robust software for reproducible results.

Outcomes:
After working through this unit you ...

  • ... begin to use TDD in your own practice;
  • ... have enough context to learn more about unit testing and integration testing in practice.

Deliverables:

  • Time management: Before you begin, estimate how long it will take you to complete this unit. Then, record in your course journal: the number of hours you estimated, the number of hours you worked on the unit, and the amount of time that passed between start and completion of this unit.
  • Journal: Document your progress in your Course Journal. Some tasks may ask you to include specific items in your journal. Don't overlook these.
  • Insights: If you find something particularly noteworthy about this unit, make a note in your insights! page.

  • Prerequisites:
    This unit builds on material covered in the following prerequisite units:

    Evaluation

    Evaluation: NA

    This unit is not evaluated for course marks.

    Contents

    Test Driven Development

    TDD is a development methodology that ensures code actually does what it is meant to do. In TDD, we define our software goals and devise a test (or battery of tests) for each goal. Initially, all tests fail; as we develop, we make them pass one by one. As we continue development

    • we think carefully about how to break the project into components (units) and how to structure them. Each unit does one thing, only one thing, and has no side effects;
    • we discipline ourselves to watch out for unexpected input, edge- and corner cases and unwarranted assumptions;
    • we can be confident that later changes do not break what we have done earlier - because our tests keep track of the behaviour.

    Note that TDD is not meant to be a method to find bugs - although it does that too. It is a design process. It helps you make your requirements explicit, and structure your code well. The key contribution of TDD is that it prompts developers to clearly identify testable behaviour of their code.
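    The cycle can be sketched in a few lines. The example below uses Python for brevity (the testthat package in R supports the same pattern); the function and its behaviour are purely illustrative:

```python
# TDD sketch: the tests are written first and initially fail,
# then the implementation is written until they pass.

def gc_content(seq):
    """Return the fraction of G and C characters in a DNA sequence."""
    if not seq:
        raise ValueError("empty sequence")
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

# These assertions existed before the function body did;
# at that point, all of them failed.
assert gc_content("GCGC") == 1.0
assert gc_content("ATAT") == 0.0
assert abs(gc_content("ATGC") - 0.5) < 1e-12
```

    The point is the order of events: the assertions defined the goal first, and the implementation was written to satisfy them.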


     

    Typically testing is done at several levels:

    • During the initial development phases, unit testing continuously checks the behaviour of the system's software units.
    • As the code base progresses, code units are integrated and begin interacting via their interfaces - we begin integration testing. Interfaces can be specified as "contracts" that define the conditions and obligations of an interaction. Typically, a contract will define the precondition, postcondition and invariants of an interaction. Focussing on these aspects of system behaviour is also called design by contract. The primary task of integration testing is to verify that contracts are accurately and completely fulfilled.
    • Finally, validation tests confirm that the code executes correctly - just like a positive control in a lab experiment.
    • However: code may be correct but still unusable in practice. Without performance testing, the development cannot be considered complete.
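    As a sketch of design by contract, the following hypothetical Python function makes its precondition and postcondition explicit as runtime assertions (all names and the behaviour are illustrative):

```python
def normalize(values):
    """Scale a non-empty list of non-negative numbers so they sum to 1."""
    # Precondition: the caller is obliged to supply valid input.
    assert len(values) > 0, "precondition: input must be non-empty"
    assert all(v >= 0 for v in values), "precondition: values must be non-negative"
    total = sum(values)
    assert total > 0, "precondition: at least one value must be positive"
    result = [v / total for v in values]
    # Postcondition: the function is obliged to deliver the promised property.
    assert abs(sum(result) - 1.0) < 1e-9, "postcondition: result must sum to 1"
    return result

print(normalize([1, 1, 2]))  # [0.25, 0.25, 0.5]
```

    An integration test then verifies that both sides of such a contract are fulfilled at the interface.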

    Testing supports maintenance. When you find a bug, write a test that fails because of the bug, then fix the bug, and with great satisfaction watch your test pass. Should anything of the sort happen again, your test will notice. Also, be mindful that you may have made the same type of error elsewhere in your code. Search for these cases, write tests and fix them too.
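    In a Python sketch of that workflow, with a hypothetical off-by-one bug already fixed, the regression test remains in the suite:

```python
def mean_of_window(data, start, end):
    """Mean of data[start..end], inclusive.

    A hypothetical earlier version used data[start:end], silently
    dropping the last element; the regression test below exposed it.
    """
    window = data[start:end + 1]   # the fix: include index `end`
    return sum(window) / len(window)

# Regression test, written to fail against the buggy version:
assert mean_of_window([1, 2, 3, 4], 0, 3) == 2.5
# It stays in the test suite so the bug cannot silently return.
```
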

    That said, one of the strongest points of TDD is that it supports refactoring! Work in a cycle: Write tests → Develop → Refactor. This allows you to get something working quickly, then adopt more elegant / efficient / maintainable solutions - and as you do that, your tests ensure you do not break functionality by mistake.


     

    Unit testing

    Unit tests focus on the individual code units - the basic functions. They ensure that each function

    • matches its specifications;
    • handles erroneous input gracefully;
    • fails gracefully when it can't recover.

    In addition, automated unit tests

    • ensure that ongoing development does not break existing functions.

    To be most useful, unit tests test only one function's behaviour, and not its integration with other functions. That is because functions need to be reconfigured when code is refactored, and tests that inadvertently test two or more functions, or the dependence of a function on some specific input, create dependencies. If you succeed in identifying such dependencies explicitly, and adding them to your integration tests instead, you will have come a long way towards developing maintainable and extensible code. More on this topic in Steve Sanderson's blog post, linked below.

    Unit tests usually employ a "testing framework" (e.g. the testthat package in R) that supports writing simple statements which compare observed behaviour with expected values, and that can be executed automatically.
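    For instance, with Python's built-in unittest framework (the expectation functions of R's testthat are analogous), such statements look like this; the function under test is made up for illustration:

```python
import unittest

def reverse_complement(seq):
    """Return the reverse complement of a DNA string."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    try:
        return "".join(pairs[base] for base in reversed(seq.upper()))
    except KeyError as err:
        # Fail gracefully on erroneous input, with an informative error.
        raise ValueError(f"invalid nucleotide: {err}") from None

class TestReverseComplement(unittest.TestCase):
    def test_matches_specification(self):
        # Compare observed behaviour with the expected value.
        self.assertEqual(reverse_complement("ATGC"), "GCAT")

    def test_rejects_erroneous_input(self):
        # The function should refuse invalid input rather than guess.
        with self.assertRaises(ValueError):
            reverse_complement("ATXG")

# Run the whole suite automatically with:  python -m unittest <module>
```
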


     

    Integration testing

    In practice, unit tests and integration tests are written in the same frameworks, but they have very different goals and should be clearly separated, since integration tests need to change when code is reconfigured. The goal of integration tests is to ensure the integrity of the interfaces that have been defined between functions or code modules - e.g. the structure and validity of R objects that are passed between functions, or input/output files. They may also test the collaboration of functions with small, synthetic data sets that give known-to-be-correct results.

    Ideally, integration tests reflect the architecture of the project, e.g. the data-flow, or workflow. Of course, this is a dialectic relationship, and designing integration tests can do as much for the architecture design as designing unit tests can do for designing a function. For example, designing and implementing integration tests ensures that the states of a workflow are properly exposed, observable, and testable.
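    A minimal sketch of such a test in Python (the function names and data are invented for illustration): two units collaborate via a defined interface, and a small synthetic data set with a known-to-be-correct result exercises that interface:

```python
def parse_records(text):
    """Unit 1: turn 'name,value' lines into a list of (name, float) tuples."""
    records = []
    for line in text.strip().splitlines():
        name, value = line.split(",")
        records.append((name.strip(), float(value)))
    return records

def total_by_name(records):
    """Unit 2: sum values per name; expects the structure parse_records emits."""
    totals = {}
    for name, value in records:
        totals[name] = totals.get(name, 0.0) + value
    return totals

# Integration test: a tiny synthetic input with a known-correct result,
# exercising the interface between the two units.
synthetic = "a,1\nb,2\na,3"
result = total_by_name(parse_records(synthetic))
assert result == {"a": 4.0, "b": 2.0}
```

    Note that the test exercises the hand-off between the two units, not the internals of either one; those belong in the unit tests.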


     

    Further reading, links and resources

    Writing Great Unit Tests: Best and Worst Practices - http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/ Sanderson's blog post discusses the distinction between unit tests, integration tests, and the "dirty hybrids" in between.
    Architecture Centric Integration Testing - a blog post by Drs. Elberzhager and Naab of the Fraunhofer Institute for Experimental Software Engineering.

    Notes


     


    About ...
     
    Author:

    Boris Steipe <boris.steipe@utoronto.ca>

    Created:

    2017-08-05

    Modified:

    2017-10-15

    Version:

    1.0

    Version history:

    • 1.0 First live version
    • 0.1 First stub

    This copyrighted material is licensed under a Creative Commons Attribution 4.0 International License. Follow the link to learn more.