
Assignment for Week 6
Function


Note! This assignment is currently inactive. Major and minor unannounced changes may be made at any time.

 
 

Concepts and activities (and reading, if applicable) for this assignment will be topics on next week's quiz.




 

Introduction

 

In this assignment we will first download a number of APSES domain-containing sequences into our database, and we will automate the process. Then we will annotate them with domain data: first manually, and then we will automate this too. Next we will extract the APSES domains from our database according to the annotations. Finally we will align them and visualize domain conservation in the 3D model to study which parts of the protein are conserved.


 

Downloading Protein Data From the Web

In Assignment 3 we created a schema for a local protein sequence collection, and implemented it as an R list. We added sequences to this database by hand, but since the information should be cross-referenced and available based on a protein's RefSeq ID, we should really have a function that automates this process. It is far too easy to make mistakes and enter erroneous information otherwise.
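
To recall the idea, such a list-based collection might look like the minimal sketch below. This is an illustration only: the actual schema from Assignment 3 has more elements, and the column names here are merely representative.

# A minimal sketch of a list-based protein "database".
# Illustrative only - the real schema has more elements.
db <- list(
    version = "1.0",
    protein = data.frame(refSeqID = character(),
                         name     = character(),
                         taxID    = numeric(),
                         sequence = character(),
                         stringsAsFactors = FALSE)
)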


Task:
Work through the following code examples.

# To begin, we load some libraries with functions
# we need...

# httr sends and receives information via the http
# protocol, just like a Web browser.
if (!require(httr, quietly=TRUE)) { 
	install.packages("httr")
	library(httr)
}

# NCBI's eUtils send information in XML format; we
# need to be able to parse XML.
if (!require(XML, quietly=TRUE)) {
	install.packages("XML")
	library(XML)
}

# stringr has a number of useful utility functions
# to work with strings. E.g. a function that
# strips leading and trailing whitespace from
# strings.
if (!require(stringr, quietly=TRUE)) {
	install.packages("stringr")
	library(stringr)
}


# We will walk through the process with the refSeqID
# of yeast Mbp1
refSeqID <- "NP_010227"


# UniProt.
# The UniProt ID mapping service supports a "RESTful
# API": responses can be obtained simply via a Web
# browser's request. Such requests are commonly sent
# via the GET or POST verbs that a Webserver responds
# to when a client asks for data. GET requests are
# visible in the URL of the request; POST requests
# are not directly visible: they are commonly used
# to send the contents of forms, or when transmitting
# larger, complex data items. The UniProt ID mapping
# service can accept long lists of IDs, thus using the
# POST mechanism makes sense.

# R has a POST() function as part of the httr package.

# It's very straightforward to use: just define the URL
# of the server and send a list of items as the 
# body of the request.

# uniProt ID mapping service
URL <- "http://www.uniprot.org/mapping/"
response <- POST(URL, 
                 body = list(from = "P_REFSEQ_AC",
                             to = "ACC",
                             format = "tab",
                             query = refSeqID))

response

# If the query is successful, tabbed text is returned,
# and we capture the fourth element as the requested
# mapped ID.
unlist(strsplit(content(response), "\\s+"))

# If the query can't be fulfilled because of a problem
# with the server, a Web page is returned. But the server
# status is also returned and we can check the status code.
# I have lately gotten many "503" status codes: Server Not
# Available...

if (response$status_code == 200) { # 200: OK
	uniProtID <- unlist(strsplit(content(response), "\\s+"))[4]
	if (is.na(uniProtID)) {
		warning(paste("UniProt ID mapping service returned NA.",
		              "Check your RefSeqID."))
	}
} else {
	uniProtID <- NA
	warning(paste("No uniProt ID mapping available:",
	              "server returned status",
	              response$status_code))
}

uniProtID  # Let's see what we got...
           # This should be "P39678"
           # (or NA if the query failed)
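
Since we may want to map IDs more than once, the request and status check can be bundled into a small convenience function. The sketch below is my own; the function name refSeqToUniProt() is ad hoc and not part of dbUtilities.R.

# Bundle the POST request and status check into one call.
# Hypothetical helper - not part of dbUtilities.R.
refSeqToUniProt <- function(refSeqID) {
    response <- POST("http://www.uniprot.org/mapping/",
                     body = list(from = "P_REFSEQ_AC",
                                 to = "ACC",
                                 format = "tab",
                                 query = refSeqID))
    if (response$status_code != 200) {
        warning(paste("Server returned status",
                      response$status_code))
        return(NA)
    }
    unlist(strsplit(content(response), "\\s+"))[4]
}

refSeqToUniProt("NP_010227")  # expect "P39678"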


Next, we'll retrieve data from the various NCBI databases.

It has become unreasonably difficult to screenscrape the NCBI site since the actual page contents are dynamically loaded via AJAX. This may be intentional, or just overengineering. NCBI offers a subset of their data via the eUtils API, and that works well enough, but some of the data that is visible to a Web browser is not served to a program.

The eutils API returns data in XML format. Have a look at the following URL in your browser to see what that looks like:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=protein&term=NP_010227


# In order to parse such data, we need tools from the 
# XML package. 

# First we build a query URL...
eUtilsBase <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/"


# Then we assemble a URL that will search for the
# unique, NCBI-internal identifier, the GI number,
# for our refSeqID...
URL <- paste(eUtilsBase,
             "esearch.fcgi?",     # ...using the esearch program
                                  # that finds an entry in an
                                  # NCBI database
             "db=protein",
             "&term=", refSeqID,
             sep="")
# Copy the URL and paste it into your browser to see
# what the response should look like.
URL

# To fetch a response in R, we use the function htmlParse()
# with our URL as its argument.
response <- htmlParse(URL)
response

# This is XML. We can take the response apart into
# its individual components with the xmlToList function.

xmlToList(response)

# Note how the XML "tree" is represented as a list of
# lists of lists ...
# If we know exactly what element we are looking for,
# we can extract it from this structure:
xmlToList(response)[["body"]][["esearchresult"]][["idlist"]][["id"]]

# But this is not very robust: it would break with the
# slightest change that the NCBI makes to their response -
# and the NCBI changes things A LOT!

# Somewhat more robust is to specify the type of element
# we want - it's the text contained in an <id>...</id>
# element - and use the XPath XML parsing language to
# retrieve it.

# getNodeSet() lets us fetch tagged contents; we then
# convert the nodes to strings by applying
# toString.XMLNode() to them...

node <- getNodeSet(response, "//id/text()")
unlist(lapply(node, toString.XMLNode))  # "6320147 "

# We will be doing this a lot, so we write a function
# for it...
node2string <- function(doc, tag) {
    # An extractor function for the contents of elements
    # between given tags in an XML response. The contents
    # of all matching elements are returned in a vector
    # of strings.
	path <- paste("//", tag, "/text()", sep="")
	nodes <- getNodeSet(doc, path)
	return(unlist(lapply(nodes, toString.XMLNode)))
}

# using node2string() ...
GID <- node2string(response, "id")
GID

# The GI is the pivot for all our data requests at the
# NCBI. 

# Let's first get the associated data for this GI
URL <- paste(eUtilsBase,
             "esummary.fcgi?",
             "db=protein",
             "&id=",
             GID,
             "&version=2.0",
             sep="")
response <- htmlParse(URL)
URL
response

taxID <- node2string(response, "taxid")
organism <- node2string(response, "organism")
taxID
organism


# Next, fetch the actual sequence
URL <- paste(eUtilsBase,
             "efetch.fcgi?",
             "db=protein",
             "&id=",
             GID,
             "&retmode=text&rettype=fasta",
             sep="")
response <- htmlParse(URL)
URL
response

fasta <- node2string(response, "p")
fasta

seq <- unlist(strsplit(fasta, "\\n"))[-1] # Drop the first element;
                                          # it is the FASTA header.
seq


# Next, fetch the cross-reference to the NCBI Gene
# database
URL <- paste(eUtilsBase,
             "elink.fcgi?",
             "dbfrom=protein",
             "&db=gene",
             "&id=",
             GID,
             sep="")
response <- htmlParse(URL)
URL
response

geneID <- node2string(response, "linksetdb/id")
geneID

# ... and the actual Gene record:
URL <- paste(eUtilsBase,
             "esummary.fcgi?",
             "&db=gene",
             "&id=",
             geneID,
             sep="")
response <- htmlParse(URL)
URL
response

name <- node2string(response, "name")
genome_xref <- node2string(response, "chraccver")
genome_from <- node2string(response, "chrstart")[1]
genome_to <- node2string(response, "chrstop")[1]
name
genome_xref
genome_from
genome_to

# So far so good. But since we need to do this a lot,
# we need to roll all of this into a function.

# I have added the function to the dbUtilities code
# so you can update it easily.

# Run:

updateDbUtilities("55ca561e2944af6e9ce5cf2a558d0a3c588ea9af")

# If that is successful, try these three testcases

myNewDB <- createDB()
tmp <- fetchProteinData("NP_010227") # Mbp1p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB

tmp <- fetchProteinData("NP_011036") # Swi4p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB

tmp <- fetchProteinData("NP_012881") # Phd1p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB


This new fetchProteinData() function seems to be quite convenient. I have compiled a set of APSES domain proteins for ten fungal species and loaded the 48 proteins' data into an R database in a few minutes. This "reference database" will be automatically loaded for you with the next dbUtilities update. Note that it will be recreated every time you start up R. This means two things: (i) if you break something in the reference database, it's not a problem; (ii) if you store your own data in it, it will be lost. In order to add your own genes, you need to make a working copy for yourself.
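
To see how such a function hangs together, here is a minimal sketch that chains the eUtils steps above into a single call. This is a reconstruction for illustration, assuming node2string() as defined earlier; the actual fetchProteinData() in dbUtilities.R is more complete.

# Sketch: chain esearch and esummary into one lookup.
# Illustration only - not the real fetchProteinData().
fetchProteinSketch <- function(refSeqID) {
    base <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/"
    doc <- htmlParse(paste(base, "esearch.fcgi?db=protein&term=",
                           refSeqID, sep=""))
    GID <- node2string(doc, "id")
    doc <- htmlParse(paste(base, "esummary.fcgi?db=protein&id=",
                           GID, "&version=2.0", sep=""))
    list(refSeqID = refSeqID,
         GID      = GID,
         taxID    = node2string(doc, "taxid"),
         organism = node2string(doc, "organism"))
}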


Computer literacy

Digression - some musings on computer literacy and code engineering.

It's really useful to get into a consistent habit of giving your files a meaningful name. The name should include something that tells you what the file contains, and something that tells you the date or version. I give versions major and minor numbers, and - knowing how much things always change - I write major version numbers with a leading zero, e.g. 04, so that they will be correctly sorted by name in a directory listing. The same goes for dates: always write YYYY-MM-DD to ensure proper sorting.
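
For instance, one way to build such a name in R (a trivial sketch; the name "myDB" is arbitrary):

# "%02d" pads the major version with a leading zero.
sprintf("myDB.%02d.%s.RData", 4, format(Sys.Date(), "%Y-%m-%d"))
# e.g. "myDB.04.2015-11-17.RData"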

On the topic of versions: creating the database with its data structures and the functions that operate on them is an ongoing process, and changes in one part of the code may have important consequences for another part. Imagine I made a poor choice of a column name early on: changing that would need to be done in every single function of the code that reads or writes or analyzes data. Once the code reaches a certain level of complexity, organizing it well is just as important as writing it in the first place. In the new update of dbUtilities.R, a database has a $version element, and every function checks that the database version matches the version for which the function was written. Obviously, this also means the developer must provide tools to migrate contents from an older version to a newer version. And since migrating can run into trouble and leave all data in an inconsistent and unfixable state, it's a good time to remind you to back up important data frequently. Of course you will want to save your database once you've done any significant work with it. And you will especially want to save the databases you create for your Term Project. But you should also (and perhaps more importantly) save the script that you use to create the database in the first place. And on that note: when was the last time you made a full backup of your computer's hard-drive? Too long ago? I thought so.
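
Such a check might look like the following sketch. The name checkDBversion() is hypothetical; the actual mechanism in dbUtilities.R may differ.

# Guard clause that database functions could call first.
# Hypothetical sketch - not the actual dbUtilities.R code.
checkDBversion <- function(db, expected) {
    if (is.null(db$version) || db$version != expected) {
        stop(paste("Database version", db$version,
                   "does not match expected version", expected))
    }
}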

Back up your hard-drive now.


If, at the time of next week's quiz, your last backup is less than two days old, you will receive a 0.5 mark bonus.


New Database

Here is some sample code to work with the new database, enter new protein data for YFO, save it and load it again when needed.


# You don't need to load the reference database refDB. If
# everything is set up correctly, it gets loaded at startup.
# (Just so you know: you can turn off that behaviour if you
# ever should want to...)


# First you need to load the newest version of dbUtilities.R

updateDButilities("7bb32ab3d0861ad81bdcb9294f0f6a737b820bf9")

# If you get an error: 
#    Error: could not find function "updateDButilities"
# ... then it seems you didn't do the previous update.

# Try getting the update with the new key but the previous function:
# updateDbUtilities()
#
# If that function is not found either, confirm that your ~/.Rprofile
# actually loads dbUtilities.R from your project directory.

# As a desperate last resort, you could uncomment
# the following piece of code and run the update
# without verification...
#
# URL <- "http://steipe.biochemistry.utoronto.ca/abc/images/f/f9/DbUtilities.R"
# download.file(URL, paste(PROJECTDIR, "dbUtilities.R", sep=""), method="auto")
# source(paste(PROJECTDIR, "dbUtilities.R", sep=""))
#
# But be cautious: there is no verification. You need to
# satisfy yourself that this "file from the internet" is
# what it should be before source()'ing it...


# After the file has been source()'d,  refDB exists.
ls(refDB)


# check the contents of refDB:
refDB$protein$name
refDB$taxonomy


# list refSeqIDs for Saccharomyces cerevisiae genes.
refDB$protein[refDB$protein$taxID == 559292, "refSeqID"]


# To add some genes from YFO, I proceed as follows.
# Obviously, you need to adapt this to your YFO
# and the sequences in YFO that you have found
# with your PSI-BLAST search.

# Let's assume my YFO is the fly agaric (Amanita muscaria)
# and its APSES domain proteins have the following IDs
# (these are not RefSeq IDs, by the way, and thus unlikely
# to be found in UniProt) ...
# KIL68212
# KIL69256
# KIL65817
#


# First, I create a copy of the database with a name that
# I will recognize to be associated with my YFO.
amamuDB <- refDB


# Then I fetch my protein data ...
tmp1 <- fetchProteinData("KIL68212")
tmp2 <- fetchProteinData("KIL69256")
tmp3 <- fetchProteinData("KIL65817")


# ... and if I am satisfied that it contains what I
# want, I add it to the database.
amamuDB <- addToDB(amamuDB, tmp1)
amamuDB <- addToDB(amamuDB, tmp2)
amamuDB <- addToDB(amamuDB, tmp3)


# Then I make a local backup copy. Note the filename and
# version number  :-)
save(amamuDB, file="amamuDB.01.RData")
 

# Now I can explore my new database ...
amamuDB$protein[amamuDB$protein$taxID == 946122, "refSeqID"]


# ... but if anything goes wrong, for example 
# if I make a mistake in checking which
# rows contain taxID 946122 ... 
amamuDB$protein$taxID = 946122

# Oops ... what did I just do wrong?
#      ... what happened instead?

amamuDB$protein$taxID


# ... I can simply recover from my backup copy:
load("amamuDB.01.RData")    
amamuDB$protein$taxID


 

Task:

Create your own version of the protein database by adding all the genes from YFO that you have discovered with the PSI-BLAST search for the APSES domain. Save it.
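
A sketch of how this could go, assuming your YFO sequence identifiers are collected in a character vector. The IDs below are placeholders; replace them with the identifiers you actually found.

# Placeholder IDs - substitute your own PSI-BLAST hits.
myIDs <- c("XXX00001", "XXX00002")

myDB <- refDB                     # start from the reference database
for (id in myIDs) {
    tmp <- fetchProteinData(id)   # inspect tmp before adding!
    myDB <- addToDB(myDB, tmp)
}
save(myDB, file="myDB.01.RData")  # versioned backup, as above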


 


TBC


 



 



Ask if things don't work for you!

If anything about the assignment is not clear to you, please ask on the mailing list. You can be certain that others will have had similar problems. Success comes from joining the conversation.


