Revision as of 04:24, 27 September 2015

Software Development
(In a small-scale research context)


It is not hard to argue that the creation of software is the greatest human cultural achievement to date. But writing software well is not easy, and much sophisticated methodology has been proposed for software development, primarily addressing the needs of large software companies and enterprise-scale systems. Certainly: once software development becomes the task of teams, and systems grow larger than what one person can confidently remember, failure is virtually guaranteed if the task can't be organized in a structured way.

But our work often does not fit this paradigm, because in the bioinformatics lab the requirements change quickly. The reason is obvious: most of what we produce in science are one-off solutions. Once an analysis runs, we publish the results, and we move on. There is limited value in running an analysis over and over again. However, this does not mean we can't profit from applying the basic principles of good development practice. Fortunately that is easy. There actually is only one principle.

Make implicit knowledge explicit.

Everything else follows.




 


Collaborate

Making project goals explicit and making progress explicit are crucial, so that everyone knows what's going on and what their responsibilities are. Collaboration these days is distributed, and online:

  • Schedule regular face-to-face meetings. If you can't be in the same room, Google Hangouts may work (up to ten people). Old-time developers often use IRC chat rooms.
  • A wiki is obviously a good way to structure, share and collaboratively edit information. Alternatively, the information you need to share could go into your GitHub repository.
  • Trello appears to be a nice tool to distribute work-packages and keep up to date with discussions, especially if your "team" is distributed.
  • I like Kanbanery for my own time-management, but it can also be adapted to project workflows.


Plan

The planning stage involves defining the goals and endpoints of the project. We usually start out with a vague idea of something we would like to achieve. We need to define:

  • where we are;
  • where we want to be;
  • and how we will get there.

For an example of a plan, refer to the 2015 BCB420 Class Project. There, we lay out a plan in three phases: Preparation, Implementation and Results. This is generic: the preparation phase implies an analysis of the problem, which focusses on what will be accomplished, independent of how this will be done. The result of the analysis can be a requirements document (see the ABP Requirements template) or a less formal collection of goals.

The most important achievement of the plan is to break down the project into manageable parts and define the Milestones that characterize the completion of each part.


 

Design

In the design phase, we focus on the architecture of the system that fulfils the requirements. By architecture we mean the components, their interfaces and behaviour. Typically this will involve some modelling and there are different ways to model a system.

  • Structural modelling describes the components and interfaces. The components are typically pieces of software, the interfaces are "contracts" that describe how information passes from one piece to another. Structural models include the Data model that captures how data reflects reality and how reality changes the data in our system;
  • Behaviour modelling describes the state changes of our system, how it responds to input and how data flows through the system. In data-driven analysis, the data flow model may capture most of what is important about the system.

Typically, several different types of models may contribute to understanding a system; in practice, dataflow diagrams are particularly well suited to the workflow-centric systems that we commonly encounter in bioinformatics.

SPN (Structured Process Notation) is one way to define dataflow diagrams for data-driven analysis in bioinformatics.
[Figure: SPN icons integrated to describe a workflow that annotates protein structure domains at the residue level.]


 


Develop

In the development phase, we actually build our system. It is a misunderstanding to believe that most time will be spent in this phase. Designing a system well is hard. Building it, if it is well designed, is easy. Building it if it is poorly designed is probably impossible.

A number of development methodologies and philosophies have been proposed, and they go in and out of fashion. In this course we will work with a combination of TDD (Test Driven Development) and Literate Programming.

Literate Programming

Literate programming is the idea that software is best described in a natural language, focussing on the logic of the program, i.e. the why of the code, not the what. The goal is to ensure that model, code, and documentation become a single unit, and that all this information is stored in one and only one location. The product should be consistent between its described goals and its implementation, seamless in capturing the process from start (data input) to end (visualization, interpretation), and reversible (between analysis, design and implementation).

In literate programming, narrative and computer code are kept in the same file. This source document is typically written in Markdown or LaTeX syntax and includes the programming code as well as text annotations, tables, formulas etc. The supporting software can weave human-readable documentation from this, or tangle executable code. Literate programming with both Markdown and LaTeX is supported by R Studio, and this makes the R Studio interface a useful development environment for this paradigm. It is also possible to edit source files with a different editor and process them in base R with the Sweave() and Stangle() functions, or with the knitr package; in our context we will use R Studio because it conveniently integrates the functionality we need.

For exercises on knitr, RMarkdown and LaTeX, follow this link.
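To make the weave/tangle idea concrete, here is a small, self-contained sketch (assuming the knitr package is installed; the document content and all file names are made-up placeholders): it writes a tiny R Markdown source to a temporary file, then tangles the executable code out of it and weaves a report.

```r
# Sketch of the weave/tangle round trip with knitr.
# The literate source mixes narrative and one code chunk.
library(knitr)

src <- tempfile(fileext = ".Rmd")
writeLines(c(
  "## Toy analysis",
  "",
  "The narrative explains *why*; the chunk below holds the code.",
  "",
  "```{r mean-example}",
  "x <- c(1, 2, 3, 4)",
  "mean(x)",
  "```"
), src)

# "Tangle": extract just the executable R code ...
codeFile <- purl(src, output = tempfile(fileext = ".R"), quiet = TRUE)
cat(readLines(codeFile), sep = "\n")

# ... "weave": run the code and produce a human-readable report.
report <- knit(src, output = tempfile(fileext = ".md"), quiet = TRUE)
```

Both products come from the same single source file, which is exactly the "one and only one location" principle above.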


Test Driven Development

TDD is meant to ensure that code actually does what it is meant to do. In practice, we define our software goals and devise a test (or battery of tests) for each. Initially, all tests fail. As we develop, the tests succeed. As we continue development

  • we think carefully about how to break the project into components and structure them;
  • we discipline ourselves to watch out for unexpected input, edge- and corner cases and unwarranted assumptions;
  • we can be confident that later changes do not break what we have done earlier - because our tests keep track of the behaviour.

For an exercise in Test Driven Development, follow this link.
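The cycle can be sketched in base R with stopifnot() as a minimal test harness. The function revComp() and its intended behaviour are purely illustrative assumptions, not part of the course material:

```r
# 1. Write the tests first - they define what "done" means.
testRevComp <- function() {
  stopifnot(revComp("ATGC") == "GCAT")   # normal input
  stopifnot(revComp("") == "")           # edge case: empty string
  stopifnot(is.na(revComp("ATZX")))      # corner case: invalid characters
}

# 2. Running testRevComp() now fails: revComp() does not exist yet.

# 3. Write just enough code to make the tests pass.
revComp <- function(s) {
  if (s == "") return("")
  bases <- rev(strsplit(s, "")[[1]])
  map <- c(A = "T", C = "G", G = "C", T = "A")
  if (any(!(bases %in% names(map)))) return(NA)  # refuse unexpected input
  paste(map[bases], collapse = "")
}

# 4. Re-run the tests: silence means success.
testRevComp()
```

Note how writing the tests first forced us to decide up front how invalid input is handled.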

Typically testing is done at several levels:

  • During the initial development phases unit testing continuously checks the function of the software units of the system.
  • As the code base progresses, code units are integrated and begin interacting via their interfaces. These interfaces can be specified as "contracts" that define the conditions and obligations of an interaction. Typically, a contract will define the precondition, postcondition and invariants of an interaction. These can be verified by tests.
  • Final tests verify the code and validate its correct execution - just like a positive control in a lab experiment.
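Such a contract can be made explicit and verifiable directly in code. A sketch in R (the function name and the specific checks are illustrative assumptions):

```r
# Executable contract checks: precondition, postcondition, invariant.
normalizeScores <- function(x) {
  # precondition: the caller must supply non-constant, complete numeric data
  stopifnot(is.numeric(x), length(x) > 1, !any(is.na(x)), diff(range(x)) > 0)

  result <- (x - min(x)) / (max(x) - min(x))

  # postcondition: the function guarantees values scaled into [0, 1] ...
  stopifnot(all(result >= 0), all(result <= 1))
  # ... and an invariant: the length of the input is preserved.
  stopifnot(length(result) == length(x))
  result
}

normalizeScores(c(2, 4, 6))   # 0.0 0.5 1.0
```

The conditions and obligations of the interaction are now documented where they matter most: in the interface itself.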


 


Fail Safe or Fail Fast?

Testing for correct input is a crucial task for every function, and R especially goes out of its way to coerce input to the type that is needed. This makes functions fail safe. Do consider the opposite philosophy however: "fail fast", i.e. deliberately producing "fragile" code. You must test whether input is correct, but a good argument can be made that incorrect input should not be silently fixed; instead, the function should stop the program and complain loudly and explicitly about what went wrong. This - once again - makes implicit knowledge explicit: it helps the caller of the function understand how to pass correct input, and it prevents code from executing on wrong assumptions. In fact, failing fast may be the real fail safe.
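The contrast can be shown in a few lines of R (both function names are made up for illustration):

```r
# Fail safe: coerce whatever arrives and carry on.
meanSafe <- function(x) mean(as.numeric(x))
# meanSafe(c("1", "2", "three")) returns NA with only a warning -
# the error passes silently and may surface much later.

# Fail fast: stop loudly and explicitly at the point of the problem.
meanFast <- function(x) {
  if (!is.numeric(x)) {
    stop("meanFast: expected a numeric vector, got ", class(x))
  }
  mean(x)
}
# meanFast(c("1", "2", "three")) stops immediately with a clear message;
# meanFast(c(1, 2, 3)) returns 2 as expected.
```

The fail-fast version trades a little caller convenience for an immediate, explicit diagnosis.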

 

Code

Here is a small list of miscellaneous best-practice items for the phase when actual code is being written:

  • Be organized. Keep your files in well-named folders and give your file names some thought.
  • Use version control.
  • Use an IDE (Integrated Development Environment). Syntax highlighting and code autocompletion are nice, but good support for debugging - especially stepping through code, examining variables, and setting breakpoints and conditional breakpoints - is essential for development.
  • Design your code to be easily extensible and only loosely coupled. Your requirements will change frequently; make sure your code is modular and nimble to change as well.
  • Design reusable code. This may include standardized interface conventions and separating options and operands well.
  • DRY (Don't repeat yourself): create functions or subroutines for tasks that need to be repeated.
  • KISS (Keep it simple): resist the temptation for particularly "elegant" language idioms and terse code.
  • Comment your code. I can't repeat that often enough. Code is read very much more often than it is written. Unfortunately (for you) the one most likely to have to read and understand your convoluted code is you yourself, half a year later. So do yourself the favour to explain what you are thinking. Not what the code does - that is readable from the code itself - but why you do something the way you do.
  • Be consistent.
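The DRY rule in particular is easy to demonstrate. A hypothetical R sketch (the task and function name are made up):

```r
# Repetitive - the same computation is written out twice:
#   gcA <- sum(strsplit(seqA, "")[[1]] %in% c("G", "C")) / nchar(seqA)
#   gcB <- sum(strsplit(seqB, "")[[1]] %in% c("G", "C")) / nchar(seqB)

# DRY - the repeated task becomes one function, fixed and documented
# in a single place:
gcContent <- function(s) {
  bases <- strsplit(toupper(s), "")[[1]]
  sum(bases %in% c("G", "C")) / length(bases)
}

gcContent("ATGCGC")   # 4 of 6 bases are G or C: about 0.667
```

If the computation later needs a fix (say, handling ambiguity codes), it is changed in exactly one place.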


 

Deploy and Maintain

In our context, deployment may mean a single run of discovery and maintenance may be superfluous as the research agenda moves on.

But this does not mean we should ignore best practice in scientific software development: simple, but essential aspects like using version control for our code, using IDEs, writing test cases for all code functions etc. These aspects are very well covered in the open source Software Carpentry project and courses. Free, online, accessible and to the point. Go there and learn.


 


 

Notes


 

Further reading and resources

Practice
  • git cheat sheet: http://www.alexkras.com/getting-started-with-git/
  • 19 git tips for everyday use: http://www.alexkras.com/19-git-tips-for-everyday-use/
Concepts
Architecture modeling. A quite useful overview of systems modeling, part of the Microsoft Visual Studio documentation.
  • Kim Waldén and Jean-Marc Nerson: Seamless Object-Oriented Software Architecture: Analysis and Design of Reliable Systems, Prentice Hall, 1995.
Sandve et al. (2013) Ten simple rules for reproducible computational research. PLoS Comput Biol 9:e1003285. (pmid: 24204232) [ PubMed ] [ DOI ]

Altschul et al. (2013) The anatomy of successful computational biology software. Nat Biotechnol 31:894-7. (pmid: 24104757) [ PubMed ] [ DOI ]
Article in Nature Biotechnology; note that "successful" here is meant to imply widely used. David Baker's Rosetta package is not mentioned, for example. Nevertheless: good insights in this.

Peng (2011) Reproducible research in computational science. Science 334:1226-7. (pmid: 22144613) [ PubMed ] [ DOI ]
Computational science has led to exciting new developments, but the nature of the work has exposed limitations in our ability to evaluate published findings. Reproducibility has the potential to serve as a minimum standard for judging scientific claims when full independent replication of a study is not possible.