
Assignment for Week 2
Scenario, Labnotes, R-functions, Databases, Data Modeling


Note! This assignment is currently active. All significant changes will be announced on the mailing list.

 
 

Parts labelled as "TBC" are in progress and will be made available as they are being completed.


Concepts and activities (and reading, if applicable) for this assignment will be topics on next week's quiz.



 

The Scenario

I have introduced the concept of "cargo cult science" in class. The "cargo" in Bioinformatics is to understand biology. This includes understanding how things came to be the way they are, and how they work. Both relate to the concept of function of biomolecules, and the systems they contribute to. But "function" is a rather poorly defined concept and exploring ways to make it rigorous and computable will be the major objective of this course. The realm of bioinformatics contains many kingdoms and duchies and shires and hidden glades. To find out how they contribute to the whole, we will proceed on a quest. We will take a relatively well-characterized protein that is part of a relatively well-characterized process, and ask what its function is. We will examine the protein's sequence, its structure, its domain composition, its relationship to and interactions with other proteins, and through that paint a picture of a "system" that it contributes to.

Our quest will revolve around a transcription factor that plays an important role in the regulation of the cell cycle: Mbp1 is a key component of the MBF complex (Mbp1/Swi6) in yeast. This complex regulates gene expression at the crucial G1/S-phase transition of the mitotic cell cycle and has been shown to bind to the regulatory regions of more than a hundred target genes. It is therefore a DNA binding protein that acts as a control switch for a key cellular process.

We will start our quest with information about the Mbp1 protein of Baker's yeast, Saccharomyces cerevisiae, one of the most important model organisms. Baker's yeast is a eukaryote that has been studied genetically and biochemically in great detail for many decades, and it is easily manipulated with high-throughput experimental methods. But each of you will use this information to study not Baker's yeast, but a related organism. You will explore the function of the Mbp1 protein in some other species from the kingdom of fungi, whose genome has been completely sequenced; thus our quest is also an exercise in model-organism reasoning: the transfer of knowledge from one, well-studied organism to others.

It's reasonable to hypothesize that such central control machinery is conserved in most if not all fungi. But we don't know. Many of the species that we will be working with have not been characterized in great detail, and some of them are new to our class this year. And while we know a fair bit about Mbp1, we probably don't know very much at all about the related genes in other organisms: whether they exist, whether they have similar functional features and whether they might contribute to the G1/S checkpoint system in a similar way. Thus we might discover things that are new and interesting. This is a quest of discovery.

Here are the steps of the assignment for this week:

  1. We'll need to explore what data is available for the Mbp1 protein.
  2. We'll need to pick a species to adopt for exploration.
  3. We'll need to define what data we want to store and design a data model.

However, before we head off into the Internet: have you thought about how to document such a "quest"? How will you keep notes? Obviously, computational research proceeds with the same best-practice principles as any wet-lab experiment. We have to keep notes, ensure our work is reproducible, and that our conclusions are supported by data. I think it's pretty obvious that paper notes are not very useful for bioinformatics work. Ideally, you should be able to save results, and link to files and Webpages.


 

Keeping Labnotes

Consider it a part of your assignment to document your activities in electronic form. Here are some applications you might think of - but (!) disclaimer, I myself don't use any of these (yet) (except the Wiki of course).

  • Evernote - a web hosted, automatically syncing e-notebook.
  • Nevernote - the Open Source alternative to Evernote.
  • Google Keep - if you have a Gmail account, you can simply log in here. Grid-based. Seems a bit awkward for longer notes. But of course you can also use Google Docs.
  • Microsoft OneNote - this sounds interesting and even though I have had my share of problems with Microsoft products, I'll probably give this a try. Syncing across platforms, being able to format contents and organize it sounds great.
  • The Student Wiki - of course. You can keep your course notes with your User pages.

Are you aware of any other solutions? Let us know!

Keeping such a journal will be helpful, because the assignments are integrated over the entire term, and later assignments will make use of earlier results. But it is also excellent practice for "real" research. Expand the section below for details - written from a Wiki perspective but generally applicable.


 

Remember you are writing a lab notebook—not a formal lab report: a point-form record of your actual activities. Write such documentation as notes to your (future) self.


Create a lab-notes page as a subpage of your User space on the Student Wiki.


For each task:

  • Write a header and give it a unique number.
This is useful so you can refer to the header number in later text. Obviously, you should "hard-code" the number and not use the Wiki's automatic section numbering scheme, since the numbers should be stable over time, not change when you add or delete a section. It may be useful to add new contents at the top, so you don't have to scroll to the bottom of the page every time you add new material. This does not have to be in strict chronological order, like we would have it in a paper notebook. It may be advantageous to give different subprojects their own page, or at least order them on one page. Just remember that things that are on the same page are easy to find.
  • State the objective.
In one brief sentence, restate what your task is supposed to achieve.
  • Document the procedure.
Note what you have done, as concisely as possible but with sufficient detail. I am often asked: "What is sufficient detail"? The answer is easy: detailed enough so that someone can reproduce what you have done. In practice that guy will often be you, yourself, in the future. I hope that you won't be constantly cursing your past-self because of omissions!
  • Document your results.
You can distinguish different types of results -
    • Static data does not change over time and it may be sufficient to note a reference to the result. For example, there is no need to copy a GenBank record into your documentation, it is sufficient to note the accession number or the GI number, or better, to link to it.
    • Variable data can change over time. For example the results of a BLAST search depend on the sequences in the database. A list of similar structures may change as new structures get solved. In principle you want to record such data, to be able to reproduce at a later time what your conclusions were based on. But be selective in what you record. For example you should not paste the entire set of results of a BLAST search into your document, but only those matches that were important for your conclusions. Indiscriminate pasting of irrelevant information will make your notes unusable.
    • Analysis results
The results of sequence analyses, alignments etc. in general get recorded in your documentation. Again: be selective. Record what is important.
  • Note your conclusions.
An analysis is not complete unless you conclude something from the results. (Remember what we said about "Cargo Cult Science". If there is no conclusion, your activities are quite pointless.) Are two sequences likely homologues, or not? Does your protein contain a signal-sequence or does it not? Is a binding site conserved, or not? The analysis provides the data. In your conclusion you provide the interpretation of what the data means in the context of your objective. Were you expecting a signal-sequence but there isn't one? What could that mean? Sometimes your assignment task in this course will ask you to elaborate on an analysis and conclusion. But this does not mean that when I don't explicitly mention it, you can skip the interpretation.
  • Add cross-references.
Cross-references to other information are super valuable as your documentation grows. It's easy to see how to format a link to a section of your Wiki-page: just look at the link under the Table of Contents at the top. But you can also place "anchors" for linking anywhere on an HTML page: just use the following syntax: <span id="{some-label}"></span> for the anchor, and append #{some-label} to the page URL. Try this here: (http://steipe.biochemistry.utoronto.ca/abc/Assignment_2#tf) .
  • Use discretion when uploading images
I have enabled image uploading with some reservations; we'll see how it goes. You must not:
  • upload images that are irrelevant for this course;
  • upload copyrighted images;
  • upload any images that are larger than 500 kb. I may silently remove large images when I encounter them.
Moreover, understand that any of your uploaded images may be deleted at any time. If they are valuable to you, keep backups on your own machine.
  • Prepare your images well
Don't upload uncompressed screen dumps. Save images in a compressed file format on your own computer. Then use the Special:Upload link in the left-hand menu to upload images. The Wiki will only accept .jpeg or .png images.
  • Use the correct image types.
In principle, images can be stored uncompressed as .tiff or .bmp, or compressed as .gif, .jpg or .png.
  • .gif is useful for images with large, monochrome areas and sharp, high-contrast edges, because the LZW compression algorithm it uses works especially well on such data.
  • .jpg (or .jpeg) is preferred for images with shades and halftones, such as the structure views you should prepare for several assignments. JPEG has excellent application support and is the most versatile general-purpose image file format currently in use.
  • .tiff (or .tif) is preferred for archiving master copies of images in a lossless fashion; use LZW compression for TIFF files if your system/application supports it.
  • .png is an open-source alternative for lossless, compressed images. Application support is growing but still variable.
  • .bmp is not preferred for really anything: it is bloated in its (default) uncompressed form and primarily used only because it is simple to code and ubiquitous on Windows computers.
Image dimensions and resolution
Stereo images should have equivalent points approximately 6cm apart. It depends on your monitor how many pixels this corresponds to. The dimensions of an image are stated in pixels (width x height). My notebook screen has a native display resolution of 1440 x 900 pixels/23.5 x 21 cm. Therefore a 6cm separation on my notebook corresponds to approximately 260 pixels. However on my desktop monitor, 260 pixels is 6.7 cm across. And on a high-resolution iPad display, at 227 ppi (pixels per inch), 260 pixels are just 2.9 cm across. For the assignments: adjust your stereo images so they are approximately at the right separation and are approximately 500 to 600 pixels wide. Also, scale your molecules so they fill the available window and - if you have depth cueing enabled - move them close to the front clipping plane so the molecule is not just a dim blob, lost in murky shadows.
Considerations for print (manuscripts etc.) are slightly different: for print output you can specify the output resolution in dpi (dots per inch). A typical print resolution is about 300 dpi: 6 cm separation at 300dpi is about 700 pixels. Print images should therefore be about three times as large in width and height as screen images.
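
If you want to double-check this arithmetic for your own screen, the conversion is simply: pixels = (separation in cm / 2.54) x resolution in ppi. Here is a minimal R sketch; the ppi values are illustrative assumptions, not measurements of any particular monitor.

# Convert a stereo separation given in cm to pixels, for a display
# with a given resolution in pixels per inch (1 inch = 2.54 cm).
stereoPixels <- function(separation_cm, ppi) {
    round((separation_cm / 2.54) * ppi)
}

stereoPixels(6, 110)   # a typical desktop monitor: about 260 pixels
stereoPixels(6, 227)   # a high-resolution tablet display: about 536 pixels
stereoPixels(6, 300)   # print output at 300 dpi: about 709 pixels
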
Preparation of stereo views
When assignments ask you to create molecular images, always create stereo views.
Keep your images uncluttered and expressive
Scale the molecular model to fill the available space of your image well. Orient views so they illustrate a point you are trying to make. Emphasize residues that you are writing about with a contrasting colouring scheme. Add labels, where residue identities are not otherwise obvious. Turn off side-chains for residues that are not important. The more you practice these small details, the more efficient you will become in the use of your tools.
If you have technical difficulties, post your questions to the list and/or contact me.



 

Data Sources

SGD - a Yeast Model Organism Database

Yeast happens to have a very well maintained model organism database - a Web resource dedicated to Saccharomyces cerevisiae. Where such resources are available, they are very useful for the community. For the general case however, we need to work with one of the large, general data providers - the NCBI and the EBI. But in order to get a sense of the type of data that is available, let's visit the SGD database first.

Task:
Access the information page on Mbp1 at the Saccharomyces Genome Database.

  1. Browse through the Summary page and note the available information: you should see:
    • information about the gene and the protein;
    • Information about its roles in the cell, curated at the Gene Ontology database;
    • Information about knock-out phenotypes; (Amazing. Would you have imagined that this is a non-essential gene?)
    • Information about protein-protein interactions;
    • Regulation and expression;
    • A curators' summary of our understanding of the protein. Mandatory reading.
    • And key references.
  2. Access the Protein tab and note the much more detailed information.
    • Domains and their classification;
    • Sequence;
    • Shared domains;
    • and much more...

You will notice that some of this information relates to the molecule itself, and some of it relates to its relationship with other molecules. Some of it is stored at SGD, and some of it is cross-referenced from other databases. And we have textual data, numeric data, and images.

How would you store such data to use it in your project? We will work on this question at the end of the assignment.

 


 

If we were working on yeast, most data we need is right here: curated, kept current and consistent, referenced to the literature and ready to use. But you'll be working on a different species and we'll explore the much, much larger databases at the NCBI for this. The upside is that most of this kind of information is also available for your species. The downside is that we'll have to integrate information from many different sources "by hand".


 

NCBI databases

The NCBI (National Center for Biotechnology Information) is the largest international provider of data for genomics and molecular biology. With its annual budget of several hundred million dollars, it organizes a challenging program of data management at the largest scale, it makes its data freely and openly available over the Internet, worldwide, and it runs significant in-house research projects.

Let us explore some of the offerings of the NCBI that can contribute to our objective of studying a particular gene in an organism of interest.


 

Entrez

Task:
Remember to document your activities.

  1. Access the NCBI website at http://www.ncbi.nlm.nih.gov/ [1]
  2. In the search bar, enter mbp1 and click Search.
  3. On the resulting page, look for the Protein section and click on the link. What do you find?


The result page of your search in "All Databases" is the "Global Query Result Page" of the Entrez system. If you follow the "Protein" link, you get taken to the more than 450 sequences in the NCBI Protein database that contain the keyword "mbp1". But when you look more closely at the results, you see that the result is quite non-specific: searching only by keyword retrieves an Arabidopsis protein, bacterial proteins, a Saccharomyces protein (perhaps one that we are actually interested in), Maltose Binding Proteins - and much more. There must be a more specific way to search, and indeed there is. Time to read up on the Entrez system.


Task:

  1. Navigate to the Entrez Help Page and read about the Entrez system, especially about:
    1. Boolean operators,
    2. wildcards,
    3. limits, and
    4. filters.
  2. You should minimally understand:
    1. How to search by keyword;
    2. How to search by gene or protein name;
    3. How to restrict a search to a particular organism.

Don't skip this part: you don't need to know the options by heart, but you should know they exist and how to find them. It would be great to have a synopsis of the important fields for reference, wouldn't it? Why don't you go and create one: I have put a template page on the Student Wiki (A synopsis of Entrez codes). Editors welcome!


Keyword and organism searches are pretty universal, but apart from that, each NCBI database has its own set of specific fields. You can access the keywords via the Advanced Search interface of any of the database pages.


 

Protein Sequence

 

Task:
With this knowledge we can restrict the search to proteins called "Mbp1" that occur in Baker's Yeast. Return to the Global Search page and in the search field, type:

Mbp1[protein name] AND
"Saccharomyces cerevisiae"[organism]


This should find one and only one protein. Follow the link into the protein database: since this is only one record, the link takes you directly to the result: CAA98618.1—a data record in GenBank flat file format[2]. The database identifier CAA98618.1 tells you that this is a record in the GenPept database. There are actually several, identical versions of this sequence in the NCBI's holdings. A link to "Identical Proteins" near the top of the record shows you what these are:


Some of the sequences represent duplicate entries of the same gene (Mbp1) in the same strain (S288c) of the same species (S. cerevisiae). In particular:


  • there are three GenBank records (CAA52271.1, CAA98618.1 and DAA11800.1); these are archival entries, submitted by independent yeast genome research projects;
  • there is an entry in the RefSeq database: NP_010227.1. This is the preferred entry for us to work with. RefSeq is a curated, non-redundant database which solves a number of problems of archival databases. You can recognize RefSeq identifiers – they always look like NP_12345.1, NM_12345.1, XP_12345.1, NC_12345.1 etc. This reflects whether the sequence is protein, mRNA or genomic, and inferred or obtained through experimental evidence. The RefSeq ID NP_010227.1 actually appears twice, once linked to its genomic sequence, and once to its mRNA.
  • there is a SwissProt sequence P39678.1[3]. This link is kind of a big deal. It's a cross-reference into UniProt, the huge protein sequence database maintained by the EBI (European Bioinformatics Institute), which is the NCBI's counterpart in Europe. SwissProt entries have the highest annotation standard overall and are expertly curated. Many Web services that we will encounter work with UniProt IDs (e.g. P39678.1), rather than RefSeq IDs. Until recently the two databases did not link to each other, mostly for reasons of funding politics. It's great to see that this divide has now been overcome.


  • Finally, there are four entries of the same sequence in different yeast strains. These don't have to be identical, they just happen to be. Sometimes we find identical sequences in quite divergent species. Therefore I would not actually consider EIW11153.1, AJU86440.1, AJU58508.1, and AJU61971.1 to be identical proteins, although they have the same sequence.


Note all the .1 suffixes of the sequence identifiers. These are version numbers. Two observations:

  1. It's great that version numbers are now used throughout the NCBI database. This is good database engineering practice because it's really important for reproducible research that updates to database records are possible, but recognizable. When working with data you always must provide for the possibility of updates, and manage the changes transparently and explicitly. Proper versioning should be a part of all data models.
  2. When searching, or for general use, you should omit the version number, i.e. use NP_010227 or P39678, not NP_010227.1 or P39678.1. This way the database system will resolve the identifier to the most current, highest version number (unless you want an older version, of course). See the sketch below.
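
For example, the following minimal R sketch retrieves the current version of the Mbp1 RefSeq record in FASTA format through the NCBI's E-utilities interface (efetch). It assumes you have an Internet connection; note that the identifier is passed without a version suffix.

# Fetch a RefSeq protein record in FASTA format via NCBI E-utilities.
# Because no version suffix is given, the most recent version of the
# record is returned.
eFetchFASTA <- function(id) {
    url <- paste("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/",
                 "efetch.fcgi?db=protein&id=", id,
                 "&rettype=fasta&retmode=text", sep="")
    return(readLines(url))
}

mbp1FASTA <- eFetchFASTA("NP_010227")
head(mbp1FASTA)   # the FASTA header line, followed by sequence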


Task:

  1. Note down the RefSeq ID and the UniProt (SwissProt) ID in your journal.
  2. Follow the link to the RefSeq entry NP_010227.1.
  3. Explore the page and follow these links (note the contents in your journal):
    1. Under "Analyze this Sequence": Identify Conserved Domains
    2. Under "Protein 3D Structure": See all 3 structures...
    3. Under "Pathways for the MBP1 gene": Cell cycle - yeast
    4. Under "Related information" Proteins with Similar Sequence

As we see, this is a good start page to explore all kinds of databases at the NCBI via cross-references.


 

PubMed

Arguably one of the most important databases in the life sciences is PubMed and this is a good time to look at PubMed in a bit more detail.


Task:

  1. Return back to the MBP1 RefSeq record.
  2. Find the PubMed links under Related information in the right-hand margin and explore them.
    1. The first one (PubMed) will take you to records that cite the sequence record;
    2. The second one (PubMed (RefSeq)) will take you to articles that relate to the Mbp1 gene or protein;
    3. The third one (PubMed (Weighted)) applies a weighting algorithm to find broadly relevant information - an example of literature data mining. PubMed (Weighted) appears to give a pretty good overview of systems-biology type, cross-sectional and functional information.

But none of these searches finds all Mbp1-related literature.

  1. On any of the PubMed pages open the Advanced query page and study the keywords that apply to PubMed searches. These are actually quite important and useful to remember. Make yourself familiar with the section on Search field descriptions and tags in the PubMed help document (in particular [DP], [AU], [TI], and [TA]), how you use the History to combine searches, and the use of AND, OR, NOT and brackets. The same tags also work in scripted queries (see the sketch after this list). Understand how you can restrict a search to reviews only, and what the link to Related citations... is useful for.
  2. Now find publications with Mbp1 in the title. In the result list, follow the links for the two Biochemistry papers, by Taylor et al. (2000) and by Deleeuw et al. (2008). Download the PDFs, we will need them later.
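
As a small aside: the field tags above also work when you query PubMed from a script, through the E-utilities esearch interface. This is only a sketch; it assumes an Internet connection, and the hit counts will of course change as the literature grows.

# Query PubMed with Entrez field tags via the E-utilities esearch
# interface. The XML response contains the hit count and the PMIDs.
pubMedSearch <- function(term) {
    url <- paste("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/",
                 "esearch.fcgi?db=pubmed&term=",
                 URLencode(term, reserved=TRUE), sep="")
    return(readLines(url))
}

hits <- pubMedSearch("Mbp1[TI] AND Biochemistry[TA]")
hits[grep("<Count>|<Id>", hits)]   # hit count and PubMed IDs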


Now, we were actually trying to find related proteins in a different species. Our next task is therefore to decide what that species should be.


 

Choosing YFO (Your Favourite Organism)

In this section we create a lottery to assign species at (pseudo) random to students. We'll try the following procedure.

  1. First, I create a list of suitable species.
  2. Then, we put this list into the body of an R function.
  3. The function picks one of the species at random - but to make sure this process is reproducible, we'll set a seed for the random number generator. Obviously, everyone has to use a different seed, or else everyone would end up getting the same species assigned.
  4. Thus we'll use your Student Number as the seed. This is an integer, so it can be used, and it's unique to each of you. The choice is then random, reproducible and unique.

You may notice that this process does not guarantee that everyone gets a different species, and that all species are chosen at least once. I don't think doing that is possible in a "stateless" way (i.e. I don't want to have to remember who chose what species), given that I don't know all of your student numbers. But if anyone can think of a better solution, that would be neat.

Is it possible that all of you end up working on the same species anyway? Indeed. That's the problem with randomness. But it is not very likely.
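
If you want to put a number on "not very likely", a quick simulation will do. The sketch below assumes about 100 available species and a class of 60 students; both numbers are illustrative.

# Estimate collision probabilities in the species lottery by
# simulating many "classes" of independent random picks.
set.seed(112358)
nSpecies  <- 100      # assumption: approximate size of the species list
nStudents <- 60       # assumption: approximate class size
nTrials   <- 10000

anyShared <- logical(nTrials)
for (i in 1:nTrials) {
    picks <- sample(1:nSpecies, nStudents, replace=TRUE)
    anyShared[i] <- any(duplicated(picks))
}

mean(anyShared)   # fraction of simulated classes in which at least two
                  # students share a species: close to 1, like the
                  # birthday paradox ...
(1/nSpecies)^(nStudents - 1)   # ... but the chance that ALL students get
                               # the same species is vanishingly small.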


What about the "suitable species" though? Where do they come from? For the purposes of the course "quest", we need species

  • that actually have transcription factors that are related to Mbp1;
  • whose genomes have been sequenced; and
  • for which the sequences have been deposited in the RefSeq database, NCBI's unique sequence collection.


Task:


To prepare such a list of species, I have searched the NCBI's RefSeq database for proteins whose sequences are similar to the KilA-N domain of Mbp1 (this is the most highly conserved part of the protein, which binds DNA) and compiled the names of organisms that contain them...

These are the steps of the process:


(1) Compiled a list of genome-sequenced fungi from information on the NCBI genome browser page by selecting Eukaryota / Fungi ... and downloading the entire list of species as a text document. An excerpt of the first lines of the document is shown here:
#Organism/Name            Kingdom    Group  SubGroup        Size (Mb)
Aciculosporium take       Eukaryota  Fungi  Ascomycetes     58.8364
Agaricus bisporus         Eukaryota  Fungi  Basidiomycetes  32.6144
Ajellomyces capsulatus    Eukaryota  Fungi  Ascomycetes     46.124 
Ajellomyces dermatitidis  Eukaryota  Fungi  Ascomycetes     75.4047
[...]


(2) Performed a BLAST search with the Mbp1 KilA-N domain sequence, with the search restricted to the refseq_protein database and an organism selection of "Fungi".
(3) In the header of the BLAST results page, there is a link to [Taxonomy reports]. This contains a list of all hits, sorted by species. I copied the lineage report to a separate file.


(4) Ran the following R script:
# prepareSpeciesList.R
#
# From a list of genome-sequenced species, find those that
# have KilA-N domain containing proteins in RefSeq.
# 

setwd(BCH441DIR)  

if (!require(ore, quietly=TRUE)) { # Oniguruma Regular Expressions
    install.packages("ore")
    library(ore)
}

# read output of search at NCBI genomes
#Organism/Name,Kingdom,Group,SubGroup,Size (Mb),Chrs,Organelles,Plasmids,Assemblies
#"Absidia idahoensis",Eukaryota,"Fungi","Other Fungi",-,-,-,-,-
#"Aciculosporium take",Eukaryota,"Fungi","Ascomycetes",58.8364,-,-,-,1
#"Acremonium chrysogenum",Eukaryota,"Fungi","Ascomycetes",28.5905,-,1,-,1
#"Agaricus bisporus",Eukaryota,"Fungi","Basidiomycetes",32.6144,-,-,-,2
# etc.

sequencedFungi <- read.csv("genomes_overview.csv",
                            stringsAsFactors=FALSE)
head(sequencedFungi)  # Check what we have!
tail(sequencedFungi)


# Run BLASTp on RefSeq with KilA-N:                      
# >Yeast Mbp1 KilA-N domain (AA 19..93 of NP_010227)
# IHSTGSIMKRKKDDWVNATHILKAANFAKAKRTRILEKEVLKETHEKVQG 
# GFGKYQGTWVPLNIAKQLAEKFSVY
# E value cutoff 0.01
# Copy and save Lineage Report from Taxonomy report of results.
#. . . . . . Vanderwaltozyma polyspora DSM 70294 ............  143   4 hits
#. . . . . . Candida glabrata CBS 138 .......................  140   3 hits
#. . . . . . Tetrapisispora phaffii CBS 4417 ................  140   4 hits
#. . . . . . Naumovozyma dairenensis CBS 421 ................  139   3 hits
#. . . . . . Kazachstania africana CBS 2517 .................  133   5 hits
# etc.


hits <- readLines("lineageReport.txt", n = -1)

for (i in 1:length(hits)) { # get first two words from line
    hits[i] <- ore.search("(\\w+\\s+\\w+)", hits[i])$match	
}

# choose only hits from sequenced fungi
fungi <- unique(hits[hits %in% sequencedFungi[,1]])

# extract genus and keep only one representative of genus
fungi <- cbind(fungi, rep("", length(fungi)))
for (i in 1:nrow(fungi)) {
    fungi[i,2] <- ore.search("^\\w+", fungi[i,1])$match	
}
selected <- fungi[!duplicated(fungi[, 2]), 1]

selected <- selected[order(selected)]
selected

# remove reference species
reference <- c("Saccharomyces cerevisiae",
               "Aspergillus nidulans",
               "Candida glabrata",               # ref.: albicans
               "Neurospora tetrasperma",         # ref.: crassa
               "Schizosaccharomyces japonicus",  # ref.: pombe
               "Ustilago maydis")
               
selected <- selected[!(selected  %in% reference)]             
                
# add binomial codes
biCode <- function(s) { 
	substr(s, 4, 5) <- substr(strsplit(s,"\\s+")[[1]][2], 1, 2)
	return (toupper(substr(s, 1, 5)))
}

for (i in 1:length(selected)) {
    selected[i] <- paste(selected[i],
                         " (",
                         biCode(selected[i]),
                         ")",
                         sep="")	
}

# print output
cat(paste(selected, collapse="\n"))

# [End]


This process, with its mix of Web services, programmed post-processing, and a bit of manual cleanup, is a fairly typical example of gathering and collating information across different data sources.

Here is R code to assign the species:

Task:


  • Read, try to understand and then execute the following R-code.
pickSpecies <- function(ID) {
	# this function randomly picks a fungal species
	# from a list. It is seeded by a student ID. Therefore
	# the pick is random, but reproducible.
	
	# first, define a list of species:
	Species <- c(
"Agaricus bisporus (AGABI)",
"Arthrobotrys oligospora (ARTOL)",
"Arthroderma benhamiae (ARTBE)",
"Aureobasidium subglaciale (AURSU)",
"Auricularia delicata (AURDE)",
"Batrachochytrium dendrobatidis (BATDE)",
"Baudoinia panamericana (BAUPA)",
"Beauveria bassiana (BEABA)",
"Bipolaris sorokiniana (BIPSO)",
"Blastomyces dermatitidis (BLADE)",
"Botrytis cinerea (BOTCI)",
"Capronia epimyces (CAPEP)",
"Chaetomium thermophilum (CHATH)",
"Cladophialophora yegresii (CLAYE)",
"Clavispora lusitaniae (CLALU)",
"Coccidioides immitis (COCIM)",
"Colletotrichum graminicola (COLGR)",
"Coniophora puteana (CONPU)",
"Coniosporium apollinis (CONAP)",
"Coprinopsis cinerea (COPCI)",
"Cordyceps militaris (CORMI)",
"Cryptococcus neoformans (CRYNE)",
"Cyphellophora europaea (CYPEU)",
"Dactylellina haptotyla (DACHA)",
"Debaryomyces hansenii (DEBHA)",
"Dichomitus squalens (DICSQ)",
"Endocarpon pusillum (ENDPU)",
"Eremothecium gossypii (EREGO)",
"Eutypa lata (EUTLA)",
"Exophiala aquamarina (EXOAQ)",
"Fibroporia radiculosa (FIBRA)",
"Fomitiporia mediterranea (FOMME)",
"Fonsecaea pedrosoi (FONPE)",
"Fusarium pseudograminearum (FUSPS)",
"Gaeumannomyces graminis (GAEGR)",
"Glarea lozoyensis (GLALO)",
"Gloeophyllum trabeum (GLOTR)",
"Heterobasidion irregulare (HETIR)",
"Histoplasma capsulatum (HISCA)",
"Kazachstania africana (KAZAF)",
"Kluyveromyces lactis (KLULA)",
"Komagataella pastoris (KOMPA)",
"Laccaria bicolor (LACBI)",
"Lachancea thermotolerans (LACTH)",
"Leptosphaeria maculans (LEPMA)",
"Lodderomyces elongisporus (LODEL)",
"Magnaporthe oryzae (MAGOR)",
"Malassezia globosa (MALGL)",
"Marssonina brunnea (MARBR)",
"Metarhizium robertsii (METRO)",
"Meyerozyma guilliermondii (MEYGU)",
"Microsporum gypseum (MICGY)",
"Millerozyma farinosa (MILFA)",
"Moniliophthora roreri (MONRO)",
"Myceliophthora thermophila (MYCTH)",
"Naumovozyma dairenensis (NAUDA)",
"Nectria haematococca (NECHA)",
"Neofusicoccum parvum (NEOPA)",
"Neosartorya fischeri (NEOFI)",
"Ogataea parapolymorpha (OGAPA)",
"Paracoccidioides brasiliensis (PARBR)",
"Penicillium rubens (PENRU)",
"Pestalotiopsis fici (PESFI)",
"Phanerochaete carnosa (PHACA)",
"Pneumocystis murina (PNEMU)",
"Podospora anserina (PODAN)",
"Postia placenta (POSPL)",
"Pseudocercospora fijiensis (PSEFI)",
"Pseudogymnoascus destructans (PSEDE)",
"Pseudozyma hubeiensis (PSEHU)",
"Puccinia graminis (PUCGR)",
"Punctularia strigosozonata (PUNST)",
"Pyrenophora teres (PYRTE)",
"Rasamsonia emersonii (RASEM)",
"Rhinocladiella mackenziei (RHIMA)",
"Scheffersomyces stipitis (SCHST)",
"Schizophyllum commune (SCHCO)",
"Sclerotinia sclerotiorum (SCLSC)",
"Serpula lacrymans (SERLA)",
"Setosphaeria turcica (SETTU)",
"Sordaria macrospora (SORMA)",
"Spathaspora passalidarum (SPAPA)",
"Stereum hirsutum (STEHI)",
"Talaromyces marneffei (TALMA)",
"Tetrapisispora phaffii (TETPH)",
"Thielavia terrestris (THITE)",
"Tilletiaria anomala (TILAN)",
"Togninia minima (TOGMI)",
"Torulaspora delbrueckii (TORDE)",
"Trametes versicolor (TRAVE)",
"Tremella mesenterica (TREME)",
"Trichoderma virens (TRIVI)",
"Trichophyton rubrum (TRIRU)",
"Tuber melanosporum (TUBME)",
"Uncinocarpus reesii (UNCRE)",
"Vanderwaltozyma polyspora (VANPO)",
"Verticillium alfalfae (VERAL)",
"Wallemia mellicola (WALME)",
"Wickerhamomyces ciferrii (WICCI)",
"Yarrowia lipolytica (YARLI)",
"Zygosaccharomyces rouxii (ZYGRO)",
"Zymoseptoria tritici (ZYMTR)"
		)
	set.seed(ID)                  # seed the random number generator
	choice <- sample(Species, 1)  # pick a random element
	return(choice)
}
  • Execute the function pickSpecies() with your student ID as its argument. Example:
 > pickSpecies(991234567)
[1] "Coccidioides immitis (COCIM)"
  • Note down the species name and its five letter label on your Student Wiki user page. Use this species whenever this or future assignments refer to YFO.


 

Selecting "your" gene

Task:

  1. Back at the Mbp1 protein page follow the link to Run BLAST... under "Analyze this sequence".
  2. This allows you to perform a sequence similarity search. You need to set two parameters:
    1. As Database, select Reference proteins (refseq_protein) from the drop down menu;
    2. In the Organism field, type the species you have selected as YFO and select the corresponding taxonomy ID.
  3. Click on Run BLAST to start the search.

This should find a handful of genes, all of them in YFO. If you find none, or hundreds, or they are not all in the same species, you did something wrong. Ask on the mailing list and make sure to fix the problem.

  1. Note the results in your Journal.


 

Data Storage

Now that we have a better sense of what our data is, we need to consider ways to model and store it. Let's talk about storage first.


Any software project requires modelling on many levels - data-flow models, logic models, user interaction models and more. But all of these ultimately rely on a data model that defines how the world is going to be represented in the computer for the project's purpose. The process of abstraction of data entities and defining their relationships can (and should) take up a major part of the project definition, often taking several iterations until you get it right. Whether your data can be completely described, consistently stored and efficiently retrieved is determined to a large part by your data model.

Databases can take many forms, from memories in your brain, to shoe-cartons under your bed, to software applications on your computer, or warehouse-sized data centres. Fundamentally, these all do the same thing: collect information and make it available.

Let us consider collecting information on APSES-domain transcription factors in various fungi, with the goal of being able to compare them. Let's specify this as follows:

Store data on APSES-domain proteins so that we can
  • cross reference the source databases;
  • study if they have the same features (e.g. domains);
  • and compare the features.

The underlying information can easily be retrieved for a protein from its RefSeq or UniProt entry.


Text files

A first attempt to organize the data might be simply to write it down in a large text file:

name: Mbp1
refseq ID: NP_010227
uniprot ID: P39678
species: Saccharomyces cerevisiae
taxonomy ID: 4932
sequence:
MSNQIYSARYSGVDVYEFIHSTGSIMKRKKDDWVNATHILKAANFAKAKR 
TRILEKEVLKETHEKVQGGFGKYQGTWVPLNIAKQLAEKFSVYDQLKPLF 
DFTQTDGSASPPPAPKHHHASKVDRKKAIRSASTSAIMETKRNNKKAEEN 
QFQSSKILGNPTAAPRKRGRPVGSTRGSRRKLGVNLQRSQSDMGFPRPAI 
PNSSISTTQLPSIRSTMGPQSPTLGILEEERHDSRQQQPQQNNSAQFKEI 
DLEDGLSSDVEPSQQLQQVFNQNTGFVPQQQSSLIQTQQTESMATSVSSS 
PSLPTSPGDFADSNPFEERFPGGGTSPIISMIPRYPVTSRPQTSDINDKV 
NKYLSKLVDYFISNEMKSNKSLPQVLLHPPPHSAPYIDAPIDPELHTAFH 
WACSMGNLPIAEALYEAGTSIRSTNSQGQTPLMRSSLFHNSYTRRTFPRI 
FQLLHETVFDIDSQSQTVIHHIVKRKSTTPSAVYYLDVVLSKIKDFSPQY 
RIELLLNTQDKNGDTALHIASKNGDVVFFNTLVKMGALTTISNKEGLTAN 
EIMNQQYEQMMIQNGTNQHVNSSNTDLNIHVNTNNIETKNDVNSMVIMSP 
VSPSDYITYPSQIATNISRNIPNVVNSMKQMASIYNDLHEQHDNEIKSLQ 
KTLKSISKTKIQVSLKTLEVLKESSKDENGEAQTNDDFEILSRLQEQNTK 
KLRKRLIRYKRLIKQKLEYRQTVLLNKLIEDETQATTNNTVEKDNNTLER 
LELAQELTMLQLQRKNKLSSLVKKFEDNAKIHKYRRIIREGTEMNIEEVD 
SSLDVILQTLIANNNKNKGAEQIITISNANSHA    
length: 833
KilA-N domain: 21-93
Ankyrin domains: 369-455, 505-549

...

... and save it all in one large text file and whenever you need to look something up, you just open the file, look for e.g. the name of the protein and read what's there. Or - for a more structured approach, you could put this into several files in a folder.[4] This is a perfectly valid approach and for some applications it might not be worth the effort to think more deeply about how to structure the data, and store it in a way that it is robust and scales easily to large datasets. Alas, small projects have a tendency to grow into large projects and if you work in this way, it's almost guaranteed that you will end up doing many things by hand that could easily be automated. Imagine asking questions like:

  • How many proteins do I have?
  • What's the sequence of the KilA-N domain?
  • What percentage of my proteins have an Ankyrin domain?
  • Or two ...?

Answering these questions "by hand" is possible, but tedious.
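
You can of course script such lookups, but every new question needs its own little throw-away parser. A minimal sketch, assuming the notes were saved verbatim as "proteinNotes.txt" in exactly the "key: value" layout shown above:

# Ad-hoc parsing of the flat "key: value" notes file.
notes <- readLines("proteinNotes.txt")

sum(grepl("^name:", notes))                     # how many proteins?
grep("^Ankyrin domains:", notes, value=TRUE)    # which Ankyrin annotations exist?

# fraction of proteins with an Ankyrin annotation (assuming the line is
# only written when a domain is actually present):
sum(grepl("^Ankyrin domains:", notes)) / sum(grepl("^name:", notes))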

Spreadsheets

Data for three yeast APSES domain proteins in an Excel spreadsheet.


Many serious researchers keep their project data in spreadsheets. Often they use Excel, or an alternative like the free OpenOffice Calc, or Google Sheets, both of which are compatible with Excel and have some interesting advantages. Here, all your data is in one place, easy to edit. You can even do simple calculations - although you should never use Excel for statistics[5]. You could answer What percentage of my proteins have an Ankyrin domain? quite easily[6].

There are two major downsides to spreadsheets. For one, complex queries need programming. There is no way around this. You can program inside Excel with Visual Basic. But you might as well export your data so you can work on it with a "real" programming language. The other thing is that Excel does not scale very well. Once you have more than a hundred proteins in your spreadsheet, you can see how finding anything can become tedious.

However, the fact that Excel was built for business applications and designed for use by office assistants does not mean it is intrinsically unsuitable for our domain. It's important to be pragmatic, not dogmatic, when choosing tools: choose according to your real requirements. Sometimes "quick and dirty" is just fine, because quick.


 

R

R can keep complex data in data frames and lists. If we do data analysis with R, we have to load the data first. We can use any of the read.table() functions for structured data, read lines of raw text with readLines(), or slurp in entire files with scan(). But we could also keep the data in an R object in the first place that we can read from disk, analyze, modify, and write back. In this case, R becomes our database engine.

# Sample construction of an R database table as a dataframe

# Data for the Mbp1 protein
proteins <- data.frame(  
    name     = "Mbp1",
    refSeq   = "NP_010227",
    uniProt  = "P39678",
    species  = "Saccharomyces cerevisiae",
    taxId    = "4392",
    sequence = paste(
                    "MSNQIYSARYSGVDVYEFIHSTGSIMKRKKDDWVNATHILKAANFAKAKR",
                    "TRILEKEVLKETHEKVQGGFGKYQGTWVPLNIAKQLAEKFSVYDQLKPLF",
                    "DFTQTDGSASPPPAPKHHHASKVDRKKAIRSASTSAIMETKRNNKKAEEN",
                    "QFQSSKILGNPTAAPRKRGRPVGSTRGSRRKLGVNLQRSQSDMGFPRPAI",
                    "PNSSISTTQLPSIRSTMGPQSPTLGILEEERHDSRQQQPQQNNSAQFKEI",
                    "DLEDGLSSDVEPSQQLQQVFNQNTGFVPQQQSSLIQTQQTESMATSVSSS",
                    "PSLPTSPGDFADSNPFEERFPGGGTSPIISMIPRYPVTSRPQTSDINDKV",
                    "NKYLSKLVDYFISNEMKSNKSLPQVLLHPPPHSAPYIDAPIDPELHTAFH",
                    "WACSMGNLPIAEALYEAGTSIRSTNSQGQTPLMRSSLFHNSYTRRTFPRI",
                    "FQLLHETVFDIDSQSQTVIHHIVKRKSTTPSAVYYLDVVLSKIKDFSPQY",
                    "RIELLLNTQDKNGDTALHIASKNGDVVFFNTLVKMGALTTISNKEGLTAN",
                    "EIMNQQYEQMMIQNGTNQHVNSSNTDLNIHVNTNNIETKNDVNSMVIMSP",
                    "VSPSDYITYPSQIATNISRNIPNVVNSMKQMASIYNDLHEQHDNEIKSLQ",
                    "KTLKSISKTKIQVSLKTLEVLKESSKDENGEAQTNDDFEILSRLQEQNTK",
                    "KLRKRLIRYKRLIKQKLEYRQTVLLNKLIEDETQATTNNTVEKDNNTLER",
                    "LELAQELTMLQLQRKNKLSSLVKKFEDNAKIHKYRRIIREGTEMNIEEVD",
                    "SSLDVILQTLIANNNKNKGAEQIITISNANSHA",
                    sep=""),
    seqLen   = 833,
    KilAN    = "21-93",  
    Ankyrin  = "369-455, 505-549",
    stringsAsFactors = FALSE)

# add data for the Swi4 protein
proteins <- rbind(proteins,
                  data.frame(  
    name     = "Swi4",
    refSeq   = "NP_011036",
    uniProt  = "P25302",
    species  = "Saccharomyces cerevisiae",
    taxId    = "4392",
    sequence = paste(
                    "MPFDVLISNQKDNTNHQNITPISKSVLLAPHSNHPVIEIATYSETDVYEC",
                    "YIRGFETKIVMRRTKDDWINITQVFKIAQFSKTKRTKILEKESNDMQHEK",
                    "VQGGYGRFQGTWIPLDSAKFLVNKYEIIDPVVNSILTFQFDPNNPPPKRS",
                    "KNSILRKTSPGTKITSPSSYNKTPRKKNSSSSTSATTTAANKKGKKNASI",
                    "NQPNPSPLQNLVFQTPQQFQVNSSMNIMNNNDNHTTMNFNNDTRHNLINN",
                    "ISNNSNQSTIIQQQKSIHENSFNNNYSATQKPLQFFPIPTNLQNKNVALN",
                    "NPNNNDSNSYSHNIDNVINSSNNNNNGNNNNLIIVPDGPMQSQQQQQHHH",
                    "EYLTNNFNHSMMDSITNGNSKKRRKKLNQSNEQQFYNQQEKIQRHFKLMK",
                    "QPLLWQSFQNPNDHHNEYCDSNGSNNNNNTVASNGSSIEVFSSNENDNSM",
                    "NMSSRSMTPFSAGNTSSQNKLENKMTDQEYKQTILTILSSERSSDVDQAL",
                    "LATLYPAPKNFNINFEIDDQGHTPLHWATAMANIPLIKMLITLNANALQC",
                    "NKLGFNCITKSIFYNNCYKENAFDEIISILKICLITPDVNGRLPFHYLIE",
                    "LSVNKSKNPMIIKSYMDSIILSLGQQDYNLLKICLNYQDNIGNTPLHLSA",
                    "LNLNFEVYNRLVYLGASTDILNLDNESPASIMNKFNTPAGGSNSRNNNTK",
                    "ADRKLARNLPQKNYYQQQQQQQQPQNNVKIPKIIKTQHPDKEDSTADVNI",
                    "AKTDSEVNESQYLHSNQPNSTNMNTIMEDLSNINSFVTSSVIKDIKSTPS",
                    "KILENSPILYRRRSQSISDEKEKAKDNENQVEKKKDPLNSVKTAMPSLES",
                    "PSSLLPIQMSPLGKYSKPLSQQINKLNTKVSSLQRIMGEEIKNLDNEVVE",
                    "TESSISNNKKRLITIAHQIEDAFDSVSNKTPINSISDLQSRIKETSSKLN",
                    "SEKQNFIQSLEKSQALKLATIVQDEESKVDMNTNSSSHPEKQEDEEPIPK",
                    "STSETSSPKNTKADAKFSNTVQESYDVNETLRLATELTILQFKRRMTTLK",
                    "ISEAKSKINSSVKLDKYRNLIGITIENIDSKLDDIEKDLRANA",
                    sep=""),
    seqLen   = 1093,
    KilAN    = "56-122",  
    Ankyrin  = "516-662",
    stringsAsFactors = FALSE)
    )

# how many proteins?
nrow(proteins)

#what are their names?
proteins[,"name"]

# how many do not have an Ankyrin domain?
sum(proteins[,"Ankyrin"] == "")
    
# save it to file
save(proteins, file="proteinData.Rda")

# delete it from memory
rm(proteins)

# check...
proteins  # ... yes, it's gone


# read it back in:
load("proteinData.Rda")

# did this work?
sum(proteins[,"seqLen"])   # 1926 amino acids

# add another protein: Phd1
proteins <- rbind(proteins,
                  data.frame(  
    name     = "Phd1",
    refSeq   = "NP_012881",
    uniProt  = "P39678",
    species  = "Saccharomyces cerevisiae",
    taxId    = "4392",
    sequence = paste(
                    "MPFDVLISNQKDNTNHQNITPISKSVLLAPHSNHPVIEIATYSETDVYEC",
                    "MYHVPEMRLHYPLVNTQSNAAITPTRSYDNTLPSFNELSHQSTINLPFVQ",
                    "RETPNAYANVAQLATSPTQAKSGYYCRYYAVPFPTYPQQPQSPYQQAVLP",
                    "YATIPNSNFQPSSFPVMAVMPPEVQFDGSFLNTLHPHTELPPIIQNTNDT",
                    "SVARPNNLKSIAAASPTVTATTRTPGVSSTSVLKPRVITTMWEDENTICY",
                    "QVEANGISVVRRADNNMINGTKLLNVTKMTRGRRDGILRSEKVREVVKIG",
                    "SMHLKGVWIPFERAYILAQREQILDHLYPLFVKDIESIVDARKPSNKASL",
                    "TPKSSPAPIKQEPSDNKHEIATEIKPKSIDALSNGASTQGAGELPHLKIN",
                    "HIDTEAQTSRAKNELS",
                    sep=""),
    seqLen   = 366,
    KilAN    = "209-285",  
    Ankyrin  = "",    # No ankyrin domains annotated here
    stringsAsFactors = FALSE)
    )

# check:
proteins[,"name"]                #"Mbp1" "Swi4" "Phd1"
sum(proteins[,"Ankyrin"] == "")  # Now there is one...
sum(proteins[,"seqLen"])         # 2292 amino acids

# [END]


 

The third way to use R for data is to connect it to a "real" database:

  • a relational database like mySQL, MariaDB, or PostgreSQL;
  • an object/document database like MongoDB;
  • or even a graph-database like Neo4j.

R "drivers" are available for all of these. However all of these require installing extra software on your computer: the actual database, which runs as an independent application. If you need a rock-solid database with guaranteed integrity, industry standard performance, and scalability to even very large datasets and hordes of concurrent users, don't think of rolling your own solution. One of the above is the way to go.


 

MySQL and friends

A "Schema" for a table that stores data for APSES domain proteins. This is a screenshot of the free MySQL Workbench application.

MySQL is a free, open relational database that powers some of the largest corporations as well as some of the smallest laboratories. It is based on a client-server model. The database engine runs as a daemon in the background and waits for connection attempts. When a connection is established, the server process establishes a communication session with the client. The client sends requests, and the server responds. One can do this interactively, by running the client program /usr/local/mysql/bin/mysql (on Unix systems). Or, when you are using a program such as R, Python, Perl, etc. you use the appropriate method calls or functions—the driver—to establish the connection.

These types of databases use their own language to describe actions: SQL - which handles data definition, data manipulation, and data control.

Just for illustration, the Figure above shows a table for our APSES domain protein data, built as a table in the MySQL workbench application and presented as an Entity Relationship Diagram (ERD). There is only one entity though - the protein "table". The application can generate the actual code that implements this model on a SQL compliant database:


CREATE TABLE IF NOT EXISTS `mydb`.`proteins` (
  `name` VARCHAR(20) NULL,
  `refSeq` VARCHAR(20) NOT NULL,
  `uniProt` VARCHAR(20) NULL,
  `species` VARCHAR(45) NOT NULL COMMENT '	',
  `taxId` VARCHAR(10) NULL,
  `sequence` BLOB NULL,
  `seqLen` INT NULL,
  `KilA-N` VARCHAR(45) NULL,
  `Ankyrin` VARCHAR(45) NULL,
  PRIMARY KEY (`refSeq`))
ENGINE = InnoDB


This looks at least as complicated as putting the model into R in the first place. Why then would we do this, if we need to load the data into R for analysis anyway? There are several important reasons.

  • Scalability: these systems are built to work with very large datasets and optimized for performance. In theory R has very good performance with large data objects, but not so when the data becomes larger than what the computer can keep in memory all at once.
  • Concurrency: when several users need to access the data potentially at the same time, you must use a "real" database system. Handling problems of concurrent access is what they are made for.
  • ACID compliance. ACID describes four aspects that make a database robust. These are crucial for situations in which you have only partial control over your system or its input, and they would be quite laborious to implement for your hand-built R data model:
    • Atomicity: Atomicity requires that each transaction is handled "indivisibly": it either succeeds fully, with all requested elements, or not at all.
    • Consistency: Consistency requires that any transaction will bring the database from one valid state to another. In particular any data-validation rules have to be enforced.
    • Isolation: Isolation ensures that any concurrent execution of transactions results in the exact same database state as if transactions would have been executed serially, one after the other.
    • Durability: The Durability requirement ensures that a committed transaction remains permanently committed, even in the event that the database crashes or later errors occur. You can think of this like an "autosave" function on every operation.

All the database systems I have mentioned above are ACID compliant[7].
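
To make Atomicity a little more concrete, here is a toy sketch that reuses the SQLite file from the previous example (this is an illustration only; the table and values are the ones defined above).

# Sketch: a transaction either succeeds as a whole or not at all.
db <- dbConnect(RSQLite::SQLite(), "proteinData.sqlite")

dbBegin(db)                          # start a transaction
ok <- try({
    dbExecute(db, "UPDATE proteins SET seqLen = 834 WHERE name = 'Mbp1'")
    dbExecute(db, "UPDATE proteins SET seqLen = -1  WHERE name = 'Swi4'")
    stop("pretend that something went wrong at this point")
}, silent=TRUE)

if (inherits(ok, "try-error")) {
    dbRollback(db)    # neither UPDATE is kept ...
} else {
    dbCommit(db)      # ... or both are
}

dbGetQuery(db, "SELECT name, seqLen FROM proteins")   # values are unchanged
dbDisconnect(db)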



 

Data modelling

As you have seen above, the actual specification of a data model in R or as a sequence of SQL statements is quite technical and not well suited for getting the overview of the model's main features that we need for its design. We therefore introduce a modelling convention: the Entity-Relationship Diagram (ERD). These are semi-formal diagrams that show the key features of the model. Currently we have only a single table defined, with a number of attributes.

If we think a bit about our model and its intended use, it should become clear that there are a number of problems. They have to do with efficiency, and internal consistency.

Problems include:

  • We don't have a unique identifier. The name "Mbp1" could appear more than once in our table. The database IDs for RefSeq and UniProt are unique (up to versions) but they mean something else and that can be very confusing.
  • The relationship between the species name and the taxonomy ID does not depend on the gene. In fact nothing stops us from entering different values in different gene records, and that would make our database inconsistent.
  • We can't guarantee that the length of the sequence is correct; we might have made an error while updating. Since seqLen depends on the contents of sequence, it is redundant to store it separately.
  • A major issue is that we can't easily accommodate other features in our model. What about AT-hooks, disordered segments, phosphorylation sites, coiled-coil domains? ... (It may be that I know something you don't know, yet.) And what about different versions of the KilA-N domain? Different databases annotate slightly different boundaries.
  • The way we have treated the Ankyrin domain ranges so far is really awkward. We should be able to represent more than one domain in our model.
An ERD Diagram for our data model so far, the "attributes" for our Protein table are shown.
Problems with our data model.


A first set of changes.
Unique identifier
Every entity in our data model should have its own, unique identifier. Typically this will simply be an integer that we should automatically increment with every new entry. Automatically. We have to be sure we don't make a mistake.
Move species/tax_id to separate table
If the relationship between two attributes does not actually depend on our protein, we move them to their own table. One identifier remains in our protein table. We call this a "foreign key". The relationship between the two tables is drawn as a line, and the cardinalities of the relationship are identified. "Cardinalities" means: how many entities of one table can be associated with one entity of the other table. Here, 0, n on the left side means: a given tax_ID does not have to actually occur in the protein table i.e. we can put species in the table for which we actually have no proteins. There could also be many ("n") proteins for one species in our database. On the right hand side 1 means: there is exactly one species annotated for each protein. No more, no less.
Remove redundant data

This is almost always a good idea. It's usually better just to compute seqLen or similar from the data. The exception is if something is expensive to compute and/or used often. Then we may store the result in our data model, while making our procedures watertight so that we always store the correct values.

These are relatively easy repairs. Treating the domain annotations correctly requires a bit more surgery.
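
Here is what these easy repairs could look like if we stay with R data frames as our "tables". This is only a sketch: the column names and the one-row taxonomy table are illustrative, and it assumes the proteins data frame from above.

# (1) every protein gets its own integer id;
# (2) species/taxId move to their own table, and taxId stays behind in
#     the protein table as a foreign key;
# (3) seqLen is no longer stored, but computed when needed.

taxonomyTable <- data.frame(taxId   = 4932,
                            species = "Saccharomyces cerevisiae",
                            stringsAsFactors=FALSE)

proteinTable <- data.frame(id       = 1:nrow(proteins),
                           name     = proteins$name,
                           refSeq   = proteins$refSeq,
                           uniProt  = proteins$uniProt,
                           taxId    = 4932,
                           sequence = proteins$sequence,
                           stringsAsFactors=FALSE)

nchar(proteinTable$sequence)    # sequence lengths, computed on the fly

# species names are recovered by joining on the foreign key:
merge(proteinTable, taxonomyTable, by="taxId")[ , c("name", "species")]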

An improved model of protein features. We store features in a table where we can describe them in general terms. We then create a separate table that annotates the start and end amino acid of each feature for a specific protein.

It's already awkward to work with a string like "21-93" when we need integer start and end values. We can parse them out, but it would be much more convenient if we could store them directly. But something like "369-455, 505-549" is really terrible. First of all it becomes an effort to tell how many domains there are in the first place, and secondly, the parsing code becomes quite involved. And that creates opportunities for errors in our logic and bugs in our code. And finally, what if we have more features that we want to annotate? Should we have attributes like KilA-N start, KilA-N end, Ankyrin 01 start, Ankyrin 01 end, Ankyrin 02 start, Ankyrin 02 end, Ankyrin 03 start, Ankyrin 03 end... No, that would be absurdly complicated and error prone. There is a much better approach that solves all three problems at the same time. Just like with our species, we create a table that describes features. We can put any number of features there, even slightly different representations of canonical features from different data sources. Then we create a table that stores every feature occurrence in every protein. We call this a junction table and this is an extremely common pattern in data models. Each entry in this table links exactly one protein with exactly one feature. Each protein can have 0, n features. And each feature can be found in 0, n proteins.
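
Continuing the data-frame sketch from above, the feature table and the junction table could look like this. The IDs and descriptions are illustrative; the start/end values are the ones annotated for our three yeast proteins.

# a table that describes features in general terms ...
featureTable <- data.frame(featureId   = 1:2,
                           name        = c("KilA-N", "Ankyrin"),
                           description = c("DNA-binding domain",
                                           "protein-protein interaction module"),
                           stringsAsFactors=FALSE)

# ... and a junction table: one row per feature occurrence in a protein
proteinFeatureTable <- data.frame(
    proteinId = c( 1,   1,   1,   2,   2,   3),  # 1: Mbp1, 2: Swi4, 3: Phd1
    featureId = c( 1,   2,   2,   1,   2,   1),  # 1: KilA-N, 2: Ankyrin
    start     = c(21, 369, 505,  56, 516, 209),
    end       = c(93, 455, 549, 122, 662, 285))

# how many Ankyrin domains does Mbp1 (proteinId 1) have?
sum(proteinFeatureTable$proteinId == 1 & proteinFeatureTable$featureId == 2)

# list all annotated features together with their names:
merge(proteinFeatureTable, featureTable, by="featureId")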

With this simple schematic, we obtain an excellent overview about the logical structure of our data and how to represent it in code. Such models are essential for the design and documentation of any software project.



Time to put this into practice: design your own data model.

Task:

  • Use your imagination about what kind of data you would like to store to study the "function" of a protein, specifically the transcription factors in YFO that are related to Mbp1.
  • Write down what you would like to store.
  • Sketch a relational data model for your data. Put it on paper, or print it out. Bring it to class for Tuesday's quiz.


 

That is all.


 

Links and resources

 


Footnotes and references

  1. If you find this URL hard to remember, consider the acronyms:
    ncbi.nlm.nih.gov
    NCBI: National Center for Biotechnology Information
    NLM: National Library of Medicine
    NIH: National Institutes of Health
    GOV: the US GOVernment top-level domain
  2. If there had been more than one match, you would have gotten a list of results, as before.
  3. Actually the "real" SwissProt identifier would be patterned like MBP1_YEAST. P39678 is the corresponding UniProt identifier.
  4. Your operating system can help you keep the files organized. The "file system" is a database.
  5. For real: Excel is miserable and often wrong on statistics, and it makes horrible, ugly plots. See here and here why Excel problems are not merely cosmetic.
  6. At the bottom of the window there is a menu that says "sum = ..." by default. This provides simple calculations on the selected range. Set the choice to "count", select all Ankyrin domain entries, and the count shows you how many cells actually have a value.
  7. For a list of relational Database Management Systems, see here.


 

Ask, if things don't work for you!

If anything about the assignment is not clear to you, please ask on the mailing list. You can be certain that others will have had similar problems. Success comes from joining the conversation.