BIO Assignment Week 6

In this assignment we will perform a few computations with coordinate data in PDB files, look more in depth at domain annotations, and compare them across different proteins related to yeast Mbp1. We will write a function in '''R''' to help us plot a graphic of the comparison, and collaborate to share data for our plots. But first we need to go through a few more '''R''' concepts.

In this assignment we will also download a number of APSES-domain-containing sequences into our database - and we will automate the process. Then we will annotate them with domain data: first manually, and then, once more, automatically. Next we will extract the APSES domains from our database according to the annotations. And finally we will align them, and visualize domain conservation in the 3D model to study which parts of the protein are conserved.
  
  
 
 
 
 
  
==Downloading Protein Data From the Web==
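Before diving in, it helps to picture the goal: a function that takes a RefSeq ID, fetches the associated data, and appends a validated entry to our database. Here is a minimal sketch of such an interface - the name <code>addProtein</code> and the entry layout are hypothetical placeholders; the real functions (<code>fetchProteinData()</code>, <code>addToDB()</code>) are developed in this assignment:

<source lang="R">
# Hypothetical sketch of the interface we are working towards.
# A real version would fetch and validate the data for the
# RefSeq ID instead of just storing it.
addProtein <- function(db, refSeqID) {
    newEntry <- list(refSeqID = refSeqID)   # placeholder entry
    db[[length(db) + 1]] <- newEntry        # append to the database
    return(db)
}

myDB <- list()
myDB <- addProtein(myDB, "NP_010227")
length(myDB)        # 1
myDB[[1]]$refSeqID  # "NP_010227"
</source>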
  
  
In [[BIO_Assignment_Week_3|Assignment 3]] we created a schema for a local protein sequence collection, and implemented it as an '''R''' list. We added sequences to this database by hand, but since the information should be cross-referenced and available based on a protein's RefSeq ID, we should really have a function that automates this process. It is far too easy to make mistakes and enter erroneous information otherwise.


== Stereo vision ==
  
{{task|
 
Continue with your stereo practice.
 
 
Practice at least ...
 
* two times daily,
 
* for 3-5 minutes each session.
 
 
* Measure your interocular distance and your fusion distance as explained '''[http://biochemistry.utoronto.ca/steipe/abc/students/index.php/Stereo_vision_data here on the Student Wiki]''' and add it to the table.
 
}}
 
 
Keep up your practice throughout the course. '''Once again: do not go through your practice sessions mechanically. If you are not making steady progress, contact me so we can help you get on the right track.'''
 
 
== Programming '''R''' code  ==
 
 
First, we will cover essentials of '''R''' programming: the fundamental statements that are needed to write programs–conditional expressions and loops–and how to define ''functions'' that allow us to reuse code. But let's start with a few more data types of '''R''' so we can use these concepts later on: matrices, lists and data frames.
 
 
 
{{task|
 
Please begin by working through the short [http://biochemistry.utoronto.ca/steipe/abc/index.php/R_tutorial#Matrices '''R''' - tutorial: matrices] section and the following sections on "Lists" and "Data frames".
 
 
Note that what we have done here is just the bare minimum on vectors, matrices and lists. The concepts are very generally useful, and there are many useful ways to extract subsets of values. We'll come across these in various parts of '''R''' sample code. But this will all be quickly forgotten unless you read the provided code examples line by line, make sure you understand every single statement, and '''ask''' if you are not clear about the syntax. Use it or lose it!
 
}}
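For quick reference, here are a few of the subsetting idioms those tutorial sections cover (the values are made up for illustration):

<source lang="R">
m <- matrix(1:6, nrow=2)   # a 2 x 3 matrix, filled column-wise
m[2, 3]                    # a single element: 6
m[ , 2]                    # an entire column: 3 4

l <- list(id = "NP_010227", length = 833)
l$id                       # a list element, accessed by name

df <- data.frame(aa = c("K", "R", "D"), charge = c(1, 1, -1))
df[df$charge > 0, "aa"]    # rows selected by a condition: "K" "R"
</source>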
 
 
 
'''R''' is a full-featured programming language, and to be one it needs the ability to manipulate '''control flow''', i.e. the order in which instructions are executed. By far the most important control flow statement is the '''conditional branch''': an instruction to continue execution at alternative points of the program, depending on the presence or absence of a condition. Using such conditional branch instructions, two main types of programming constructs can be realized: ''conditional expressions'' and ''loops''.
 
 
=== Conditional expressions ===
 
 
The template for a conditional expression in '''R''' is:
 
 
<source lang="rsplus">
 
if (<expression 1>) {
  <statement 1>
} else if (<expression 2>) {
  <statement 2>
} else {
  <statement 3>
}
 
 
</source>
 
 
...where both the <code>else if (...) { ... }</code> and the <code>else { ... }</code> blocks are optional. Note that <code>else</code> must follow the closing brace of the preceding block on the same line, otherwise the interpreter considers the statement complete and stumbles over the dangling <code>else</code>. We have encountered this construct previously, when we assigned the appropriate colors for amino acids in the frequency plot:
 
 
<source lang="rsplus">
 
if      (names(logRatio[i]) == "F") { barColors[i] <- hydrophobic }
 
else if (names(logRatio[i]) == "G") { barColors[i] <- plain }
 
  [... etc ...]
 
else                                { barColors[i] <- plain }
 
 
</source>
 
 
==== Logical expressions ====
 
 
We have to consider the <code>&lt;expression&gt;</code> in a bit more detail: anything that is, produces, or can be interpreted as a Boolean <code>TRUE</code> or <code>FALSE</code> value can serve as the expression in a conditional statement.
 
  
 
{{task|1=
Here are some examples. Copy the code to an '''R''' script, predict what will happen in each line and try it out:

<source lang="rsplus">
# A Boolean constant is interpreted as is:
if (TRUE)   {print("true")} else {print("false")}
if (FALSE)  {print("true")} else {print("false")}

# The strings "true" and "false" are coerced to their
# Boolean equivalents but - contrary to some other
# programming languages - arbitrary non-empty or empty
# strings are not interpreted.
if ("true")        {print("true")} else {print("false")}
if ("false")       {print("true")} else {print("false")}
if ("widdershins") {print("true")} else {print("false")}
if ("")            {print("true")} else {print("false")}

# All non-zero, defined numbers are TRUE
if (1)    {print("true")} else {print("false")}
if (0)    {print("true")} else {print("false")}
if (-1)   {print("true")} else {print("false")}
if (pi)   {print("true")} else {print("false")}
if (NULL) {print("true")} else {print("false")}
if (NA)   {print("true")} else {print("false")}
if (NaN)  {print("true")} else {print("false")}
if (Inf)  {print("true")} else {print("false")}

# Functions can return Boolean values
affirm <- function() { return(TRUE) }
deny   <- function() { return(FALSE) }
if (affirm()) {print("true")} else {print("false")}
if (deny())   {print("true")} else {print("false")}

# N.B. coercion of Booleans into numbers can be done
# as well, and is sometimes useful: consider ...
a <- c(TRUE, TRUE, FALSE, TRUE, FALSE)
a
as.numeric(a)
sum(a)

# ... or coercing the other way ...
as.logical(-1:1)
</source>
}}

''The script for [[#Downloading Protein Data From the Web|downloading protein data from the Web]] begins by loading some libraries with functions we need:''

<source lang="R">
# To begin, we load some libraries with functions
# we need...

# httr sends and receives information via the http
# protocol, just like a Web browser.
if (!require(httr, quietly=TRUE)) {
  install.packages("httr")
  library(httr)
}
</source>
 
 
 
==== Logical operators ====
 
 
 
To actually write a conditional statement, we have to be able to '''test a condition''' and this is what logical operators do. Is something '''equal''' to something else? Is it less? Does something exist? Is it a number?
 
 
 
{{task|1=
 
 
 
Here are some examples. Again, predict what will happen ...
 
 
 
<source lang="rsplus">
 
TRUE            # Just a statement.
 
 
 
#  unary operator
 
! TRUE          # NOT ...
 
 
 
# binary operators
 
FALSE > TRUE    # GREATER THAN ...
 
FALSE < TRUE    # ... these are coerced to numbers
 
FALSE < -1       
 
0 == FALSE      # Careful! == compares, = assigns!!!
 
 
 
"x" == "u"      # using lexical sort order ...
 
"x" >= "u"
 
"x" <= "u"
 
"x" != "u"
 
"aa" > "u"      # ... not just length, if different.
 
"abc" < "u" 
 
 
 
TRUE | FALSE    # OR: TRUE if either is true
 
TRUE & FALSE    # AND: TRUE if both are TRUE
 
 
 
# equality and identity
 
?identical
 
a <- c(TRUE)
 
b <- c(TRUE)
 
a; b
 
a == b
 
identical(a, b)
 
 
 
b <- 1
 
a; b
 
a == b
 
identical(a, b)  # Aha: equal, but not identical
 
 
 
 
 
# some other useful tests for conditional expressions
 
?all
 
?any
 
?duplicated
 
?exists
 
?is.character
 
?is.factor
 
?is.integer
 
?is.null
 
?is.numeric
 
?is.unsorted
 
?is.vector
 
</source>
 
}}
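Two of these are worth a quick demonstration: <code>any()</code> and <code>all()</code> collapse a logical vector into the single <code>TRUE</code> or <code>FALSE</code> that a conditional expression needs:

<source lang="R">
v <- c(1, 3, 5, 7)
all(v > 0)   # TRUE: every comparison is TRUE
any(v > 6)   # TRUE: at least one comparison is TRUE
any(v < 0)   # FALSE: no comparison is TRUE

if (all(v > 0)) { print("all elements are positive") }
</source>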
 
 
 
 
 
=== Loops ===
 
 
 
Loops allow you to repeat tasks many times over. The template is:
 
 
 
<source lang="rsplus">
 
for (<name> in <vector>) {
  <statement>
}
</source>
 
 
{{task|1=
Consider the following: again, copy the code to a script, study it, predict what will happen and then run it.

<source lang="rsplus">
# simple for loop
for (i in 1:10) {
  print(c(i, i^2, i^3))
}

# Compare execution times: one million square roots from a vector ...
n <- 1000000
x <- 1:n
y <- sqrt(x)

# ... or done explicitly in a for-loop
for (i in 1:n) {
  y[i] <- sqrt(x[i])
}
</source>

''If'' you can achieve your result with an '''R''' vector expression, it will be faster than using a loop. But sometimes you need to do things explicitly, for example if you need to access intermediate results.
}}

''Continuing the script for [[#Downloading Protein Data From the Web|downloading protein data]]:''

<source lang="R">
# NCBI's eUtils send information in XML format; we
# need to be able to parse XML.
if (!require(XML, quietly=TRUE)) {
  install.packages("XML")
  library(XML)
}

# stringr has a number of useful utility functions
# for working with strings, e.g. a function that
# strips leading and trailing whitespace from
# strings.
if (!require(stringr, quietly=TRUE)) {
  install.packages("stringr")
  library(stringr)
}

# We will walk through the process with the RefSeq ID
# of yeast Mbp1.
refSeqID <- "NP_010227"
</source>
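Returning to the loop-versus-vector comparison: <code>system.time()</code> makes it quantitative. A minimal sketch - absolute timings will differ on your machine:

<source lang="R">
n <- 1000000
x <- 1:n

# vectorized: a single call, executed in compiled code
tVec <- system.time(yVec <- sqrt(x))

# explicit loop: n interpreted iterations
yLoop <- numeric(n)
tLoop <- system.time(for (i in 1:n) { yLoop[i] <- sqrt(x[i]) })

all.equal(yVec, yLoop)   # identical results ...
tVec["elapsed"]          # ... but very different run times
tLoop["elapsed"]
</source>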
 
  
Here is an example to play with loops: a password generator. Passwords are a '''pain'''. We need them everywhere, they are frustrating to type and to remember, and since cracking programs are getting smarter, they become more and more likely to be broken. Here is a simple password generator that creates random strings with consonant/vowel alternations. These are melodic and easy to memorize, but roughly as '''strong''' as an 8-character, fully random password that uses all characters of the keyboard, such as <code>)He.{2jJ</code> or <code>#h$bB2X^</code> (which is pretty much unmemorizable). The former is taken from 20<sup>7</sup> * 6<sup>7</sup> &asymp; 4*10<sup>14</sup> possibilities, the latter from 94<sup>8</sup> &asymp; 6*10<sup>15</sup> possibilities. High-end, GPU-supported {{WP|Password cracking|password crackers}} can test about 10<sup>9</sup> passwords a second; the passwords generated by this little algorithm would thus take on the order of 4*10<sup>5</sup> seconds (several days) to crack<ref>That's assuming the worst case, in which the attacker knows the pattern with which the password is formed, i.e. the number of characters and the alphabet that we chose from. But note that there is an even worse case: if the attacker had access to our code and to the seed of our random number generator. When the random number generator starts up, a new seed is generated from system time, thus the possible space of seeds can be devastatingly small. And even if a seed is set explicitly with the <code>set.seed()</code> function, that seed is a 32-bit integer and thus can take only a bit more than 4*10<sup>9</sup> values, many orders of magnitude less than the password complexity we thought we had. It turns out that the code may be a much greater vulnerability than the password itself. Keep that in mind. <small>Keep it secret. <small>Keep it safe.</small></small></ref>. This is probably good enough to deter a casual attack.

{{task|1=
Copy, study and run ...
<source lang="rsplus">
# Suggest memorizable passwords
# Below we use the functions:
?nchar
?sample
?substr
?paste
?print

# define a string of consonants ...
con <- "bcdfghjklmnpqrstvwxz"
# ... and a string of vowels
vow <- "aeiouy"

for (i in 1:10) {  # ten sample passwords to choose from ...
    pass <- rep("", 14)  # make an empty character vector
    for (j in 1:7) {     # seven consonant/vowel pairs to be created ...
        k   <- sample(1:nchar(con), 1)  # pick a random index for consonants ...
        ch  <- substr(con, k, k)        # ... get the corresponding character ...
        idx <- (2*j) - 1                # ... compute the index at which to put the consonant ...
        pass[idx] <- ch                 # ... and put it in the right spot

        # same thing for the vowel, but coded with fewer
        # intermediate assignments of results to variables
        k <- sample(1:nchar(vow), 1)
        pass[(2*j)] <- substr(vow, k, k)
    }
    print( paste(pass, collapse="") )  # collapse the vector into a string and print
}
</source>
}}

''Continuing the protein data download: with the packages loaded and the RefSeq ID defined, we map it to a UniProt ID.''

<source lang="R">
# The UniProt ID mapping service supports a "RESTful
# API": responses can be obtained simply via a Web
# browser's request. Such requests are commonly sent
# via the GET or POST verbs that a Webserver responds
# to when a client asks for data. GET requests are
# visible in the URL of the request; POST requests
# are not directly visible, they are commonly used
# to send the contents of forms, or when transmitting
# larger, complex data items. The UniProt ID mapping
# service can accept long lists of IDs, thus using the
# POST mechanism makes sense.

# R has a POST() function as part of the httr package.
# It's very straightforward to use: just define the URL
# of the server and send a list of items as the
# body of the request.

# UniProt ID mapping service
URL <- "http://www.uniprot.org/mapping/"
response <- POST(URL,
                 body = list(from = "P_REFSEQ_AC",
                             to = "ACC",
                             format = "tab",
                             query = refSeqID))

response
</source>
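As a sanity check on the password-space arithmetic above, the sizes of the two search spaces can be computed directly:

<source lang="R">
con <- "bcdfghjklmnpqrstvwxz"
vow <- "aeiouy"
nchar(con)^7 * nchar(vow)^7   # consonant/vowel passwords: ~3.6e+14
94^8                          # 8 fully random printable characters: ~6.1e+15
</source>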
 
  
''Continuing the protein data download: we check the status code and capture the mapped ID.''

<source lang="R">
# If the query is successful, tabbed text is returned,
# and we capture the fourth element as the requested
# mapped ID.
unlist(strsplit(content(response), "\\s+"))

# If the query can't be fulfilled because of a problem
# with the server, a Web page is returned. But the server
# status is also returned, and we can check the status
# code. I have lately gotten many "503" status codes:
# Server Not Available...

if (response$status_code == 200) {  # 200: OK
    uniProtID <- unlist(strsplit(content(response), "\\s+"))[4]
    if (is.na(uniProtID)) {
        warning(paste("UniProt ID mapping service returned NA.",
                      "Check your RefSeqID."))
    }
} else {
    uniProtID <- NA
    warning(paste("No UniProt ID mapping available:",
                  "server returned status",
                  response$status_code))
}

uniProtID  # Let's see what we got...
           # This should be "P39678"
           # (or NA if the query failed)
</source>

=== Functions ===

Finally: functions. Functions look very much like the statements we have seen above. The template looks like:

<source lang="rsplus">
<name> <- function(<parameters>) {
   <statements>
}
</source>

In this statement, the function is assigned to the ''name'' - any valid name in '''R'''. Once it is assigned, the function can be invoked with <code>name()</code>. The parameter list (the values we write into the parentheses following the function name) can be empty, or hold a list of variable names. If variable names are present, you need to enter the corresponding parameters when you execute the function. These assigned variables are available inside the function, and can be used for computations. This is called "passing the variable into the function".

You have encountered a function to choose YFO names. In this function, your Student ID was the parameter. Here is another example to play with: a function that calculates how old you are. In days. This is neat - you can celebrate your 10,000th birth'''day''' - or so.
{{task|1=
 
 
Copy, explore and run ...
 
 
 
;Define the function ...
 
<source lang = "rsplus">
 
# A lifedays calculator function
 
 
 
myLifeDays <- function(date = NULL) { # give "date" a default value so we can test whether it has been set
 
    if (is.null(date)) {
 
        print ("Enter your birthday as a string in \"YYYY-MM-DD\" format.")
 
        return()
 
    }
 
    x <- strptime(date, "%Y-%m-%d") # convert string to time
 
    y <- format(Sys.time(), "%Y-%m-%d") # convert "now" to time
 
    diff <- round(as.numeric(difftime(y, x, unit="days")))
 
    print(paste("This date was", diff, "days ago."))
 
}
 
 
</source>
  
;Use the function (example):
 
<source lang = "rsplus">
 
  myLifeDays("1932-09-25")  # Glenn Gould's birthday
 
</source>
 
}}
 
  
Here is a good opportunity to play and practice programming: modify this function to accept a second argument. When a second argument is present (e.g. 10000), the function should print the calendar date on which the input date will be that many days in the past. Then you could use it to know when to celebrate your 10,000<sup>th</sup> lifeDay, or your 777<sup>th</sup> anniversary day, or whatever.

Enjoy.

Next, we'll retrieve data from the various NCBI databases.

It has become unreasonably difficult to screenscrape the NCBI site, since the actual page contents are dynamically loaded via AJAX. This may be intentional, or just overengineering. While NCBI offers a subset of their data via the eUtils API, and that works well enough, some of the data that is available to the Web browser's eyes is not served to a program.
  
The eUtils API returns data in XML format. Have a look at the following URL in your browser to see what that looks like:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=protein&term=NP_010227

==The PDB==

; Search for GO and EC numbers at the PDB ...

The search options in the PDB structure database are as sophisticated as those at the NCBI. For now, we will try a simple keyword search to get us started.

{{task|
# Visit the RCSB PDB website at http://www.pdb.org/
# Briefly orient yourself regarding the database contents and its information offerings and services.
# Enter <code>Mbp1</code> into the search field.
# In your journal, note down the PDB IDs for the three ''Saccharomyces cerevisiae'' Mbp1 transcription factor structures your search has retrieved.
# Click on one of the entries and explore the information and services linked from that page.
}}
 
  
&nbsp;

''Continuing the protein data download, we query NCBI's eUtils:''

<source lang="R">
# In order to parse such data, we need tools from the
# XML package.

# First we build a query URL...
eUtilsBase <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/"

# Then we assemble a URL that will search for the
# unique, NCBI-internal identifier - the GI number -
# for our refSeqID...
URL <- paste(eUtilsBase,
             "esearch.fcgi?",     # ... using the esearch program
                                  # that finds an entry in an
                                  # NCBI database
             "db=protein",
             "&term=", refSeqID,
             sep="")
# Copy the URL and paste it into your browser to see
# what the response should look like.
URL

# To fetch a response in R, we use the function htmlParse()
# with our URL as its argument.
response <- htmlParse(URL)
response
</source>

== CDD domain annotation ==

In the last assignment, you followed a link to '''CDD Search Results''' from the [http://www.ncbi.nlm.nih.gov/protein/NP_010227 RefSeq record for yeast Mbp1] and briefly looked at the information offered by the NCBI's Conserved Domain Database, a database of ''Position Specific Scoring Matrices'' that embody domain definitions. Rather than access precomputed results, you can also search CDD with sequences: assuming you have saved the YFO Mbp1 sequence in FASTA format, this is straightforward. If you did not save this sequence, return to [[BIO_Assignment_Week_3|Assignment 3]] and retrieve it again.

{{task|1=
# Access the [http://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml '''CDD database'''] at http://www.ncbi.nlm.nih.gov/Structure/cdd/cdd.shtml
# Read the information. CDD is a superset of various other databases' domain annotations, as well as NCBI-curated domain definitions.
# Copy the YFO Mbp1 FASTA sequence, paste it into the search form and click '''Submit'''.
## On the result page, click on '''View full result'''.
## Note that there are a number of partially overlapping ankyrin domain modules. We will study ankyrin domains in a later assignment.
## Also note that there may be blocks of sequence colored cyan in the sequence bar. Hover your mouse over the blocks to see what these blocks signify.
## Open the link to '''Search for similar domain architecture''' in a separate window and study it. This is the '''CDART''' database. Think about what these results may be useful for.
## Click on one of the ANK superfamily graphics and see what the associated information looks like: there is a summary of structure and function, links to specific literature, and a tree of the relationships of related sequences.
}}
 
  
''Continuing the protein data download: we parse the GI number out of the eUtils response.''

<source lang="R">
# This is XML. We can take the response apart into
# its individual components with the xmlToList function.
xmlToList(response)

# Note how the XML "tree" is represented as a list of
# lists of lists ...
# If we know exactly what element we are looking for,
# we can extract it from this structure:
xmlToList(response)[["body"]][["esearchresult"]][["idlist"]][["id"]]

# But this is not very robust; it would break with the
# slightest change that the NCBI makes to their response,
# and the NCBI changes things A LOT!

# Somewhat more robust is to specify the type of element
# we want - it's the text contained in an <id>...</id>
# element - and use the XPath XML parsing language to
# retrieve it.

# getNodeSet() lets us fetch tagged contents; we extract
# the text by applying toString.XMLNode() to it...
node <- getNodeSet(response, "//id/text()")
unlist(lapply(node, toString.XMLNode))  # "6320147 "
</source>

== SMART domain annotation ==

The [http://smart.embl-heidelberg.de/ SMART database] at the EMBL in Heidelberg offers an alternative view on domain architectures. I personally find it more useful for annotations because it integrates a number of additional features. You can search by sequence, or by accession number, and that raises the question of how to retrieve a database cross-reference from an NCBI sequence identifier to a UniProt sequence ID:

===ID mapping===

{{task|
<!-- Yeast:  NP_010227 ... P39678 -->
# Access the [http://www.uniprot.org/mapping/ UniProt ID mapping service] at http://www.uniprot.org/mapping/
# Paste the RefSeq identifier for YFO Mbp1 into the search field.
# Use the menu to choose '''From''' ''RefSeq Protein'' and '''To''' ''UniProtKB AC''&ndash;the UniProt Knowledge Base Accession number.
# Click on '''Map''' to execute the search.
# Note the ID - it probably starts with a letter, followed by numbers and letters. Here are some examples for fungal Mbp1-like proteins: <code>P39678 Q5B8H6 Q5ANP5 P41412</code> ''etc.''
# Click on the link, and explore how the UniProt sequence page is similar to or different from the RefSeq page.
}}
 
  
===SMART search===

''Continuing the protein data download: since we will extract tagged text a lot, we write a function for it, and use it to capture the GI number.''

<source lang="R">
# We will be doing this a lot, so we write a function
# for it...
node2string <- function(doc, tag) {
    # an extractor function for the contents of elements
    # between given tags in an XML response.
    # Contents of all matching elements is returned in
    # a vector of strings.
    path  <- paste("//", tag, "/text()", sep="")
    nodes <- getNodeSet(doc, path)
    return(unlist(lapply(nodes, toString.XMLNode)))
}

# using node2string() ...
GID <- node2string(response, "id")
GID

# The GI is the pivot for all our data requests at the
# NCBI.
</source>

{{task|1=
# Access the [http://smart.embl-heidelberg.de/ '''SMART database'''] at http://smart.embl-heidelberg.de/
# Click the link to access SMART in the '''normal''' mode.
# Paste the YFO Mbp1 UniProtKB Accession number into the '''Sequence ID or ACC''' field.
# Check the boxes for:
## '''PFAM domains''' (domains defined by sequence similarity in the PFAM database)
## '''signal peptides''' (using Gunnar von Heijne's SignalP 4.0 server at the Technical University of Denmark in Lyngby, Denmark)
## '''internal repeats''' (using the programs ''ariadne'' and ''prospero'' at the Wellcome Trust Centre for Human Genetics at Oxford University, England)
## '''intrinsic protein disorder''' (using Rune Linding's DisEMBL program at the EMBL in Heidelberg, Germany)
# Click on '''Sequence SMART''' to run the search and annotation. <small>(In case you get an error like: "Sorry, your entry seems to have no SMART domain ...", let me know and repeat the search with the actual FASTA sequence instead of the accession number.)</small>

Study the results. Specifically, have a look at the proteins with similar domain '''ORGANISATION''' and '''COMPOSITION'''. This is similar to the NCBI's CDART.
}}
''Continuing the protein data download: with the GI number we can fetch the record's summary data and the sequence itself.''

<source lang="R">
# Let's first get the associated data for this GI
URL <- paste(eUtilsBase,
             "esummary.fcgi?",
             "db=protein",
             "&id=",
             GID,
             "&version=2.0",
             sep="")
response <- htmlParse(URL)
URL
response

taxID    <- node2string(response, "taxid")
organism <- node2string(response, "organism")
taxID
organism

# Next, fetch the actual sequence
URL <- paste(eUtilsBase,
             "efetch.fcgi?",
             "db=protein",
             "&id=",
             GID,
             "&retmode=text&rettype=fasta",
             sep="")
response <- htmlParse(URL)
URL
response

fasta <- node2string(response, "p")
fasta

seq <- unlist(strsplit(fasta, "\\n"))[-1]  # Drop the first element,
                                           # it is the FASTA header.
seq
</source>

==Introduction==
  
Integrating evolutionary information with structural information allows us to establish which residues are invariant in a family&ndash;these are presumably structurally important sites&ndash;and which residues are functionally important, since they are invariant within, but changeable between subfamilies.
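The idea of invariant columns can be made concrete in a few lines of '''R'''; the three aligned fragments below are invented for illustration:

<source lang="R">
# a toy alignment: three aligned sequence fragments (hypothetical)
aln <- c("KRWLA-N", "KRFLA-N", "KRWIA-N")
chars <- t(sapply(aln, function(s) unlist(strsplit(s, ""))))

# a column is invariant if it contains only one distinct character
invariant <- apply(chars, 2, function(col) length(unique(col)) == 1)
invariant   # TRUE TRUE FALSE FALSE TRUE TRUE TRUE
</source>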
 
  
To visualize these relationships, we will load an MSA of APSES domains with VMD and color it by conservation.

''Continuing the protein data download: we also fetch the cross-reference to the NCBI Gene database, and the gene record itself.''

<source lang="R">
# Next, fetch the crossreference to the NCBI Gene
# database
URL <- paste(eUtilsBase,
             "elink.fcgi?",
             "dbfrom=protein",
             "&db=gene",
             "&id=",
             GID,
             sep="")
response <- htmlParse(URL)
URL
response

geneID <- node2string(response, "linksetdb/id")
geneID

# ... and the actual Gene record:
URL <- paste(eUtilsBase,
             "esummary.fcgi?",
             "&db=gene",
             "&id=",
             geneID,
             sep="")
response <- htmlParse(URL)
URL
response

name        <- node2string(response, "name")
genome_xref <- node2string(response, "chraccver")
genome_from <- node2string(response, "chrstart")[1]
genome_to   <- node2string(response, "chrstop")[1]
name
genome_xref
genome_from
genome_to
</source>
  
''Continuing the protein data download: so far so good. But since we need to do this a lot, we will roll all of it into a function. I have added the function to the dbUtilities code so you can update it easily; run the update, then try the test cases below.''

=== The DNA binding site ===

Now that you know how YFO Mbp1 aligns with yeast Mbp1, you can evaluate functional conservation in these homologous proteins. You probably already downloaded the two ''Biochemistry'' papers by Taylor ''et al.'' (2000) and by Deleeuw ''et al.'' (2008) that we encountered in Assignment 2. These discuss the residues involved in DNA binding<ref>[http://www.ncbi.nlm.nih.gov/pubmed/10747782 Taylor ''et al.'' (2000) ''Biochemistry'' '''39''': 3943-3954] and [http://www.ncbi.nlm.nih.gov/pubmed/18491920 Deleeuw ''et al.'' (2008) ''Biochemistry'' '''47''': 6378-6385]</ref>. In particular, residues 50-74 have been proposed to comprise the DNA recognition domain.
 
 
{{task|
 
# Using the APSES domain alignment you have just constructed, find the YFO Mbp1 residues that correspond to the range 50-74 in yeast.
 
# Note whether the sequences are especially highly conserved in this region.
 
# Using Chimera, look at the region. Use the sequence window '''to make sure''' that the sequence numbering in the paper and in the PDB file is the same (the numberings are often not identical!). Then select the residues - the proposed recognition domain - and color them differently for emphasis. Study this in stereo to get a sense of the spatial relationships. Check where the conserved residues are.
 
# A good representation is '''stick''' - but other representations that include sidechains will also serve well.
 
# Calculate a solvent accessible surface of the protein in a separate representation and make it transparent.
 
# You could  combine three representations: (1) the backbone (in '''ribbon view'''), (2) the sidechains of residues that presumably contact DNA, distinctly colored, and (3) a transparent surface of the entire protein. This image should show whether residues annotated as DNA binding form a contiguous binding interface.
 
}}
 
  
''Continuing the protein data download:''

<source lang="R">
updateDbUtilities("55ca561e2944af6e9ce5cf2a558d0a3c588ea9af")

# If that is successful, try these three testcases

myNewDB <- createDB()
tmp <- fetchProteinData("NP_010227") # Mbp1p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB

tmp <- fetchProteinData("NP_011036") # Swi4p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB

tmp <- fetchProteinData("NP_012881") # Phd1p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB
</source>

DNA binding interfaces are expected to comprise a number of positively charged amino acids that might form salt bridges with the phosphate backbone.

{{task|
*Study and consider whether this is the case here, and which residues might be included.
}}
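One quick way to explore this is to tally candidate salt-bridge formers in a sequence stretch with a few lines of '''R'''; the fragment below is invented for illustration:

<source lang="R">
# count positively charged residues (Lys, Arg, His) in a
# hypothetical sequence fragment
fragment <- "KRISTENSENHAKKARAINEN"
residues <- unlist(strsplit(fragment, ""))
sum(residues %in% c("K", "R", "H"))   # 6
</source>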
  
  
===APSES domains in Chimera (from A4)===
 
What precisely constitutes an APSES domain, however, is a matter of definition, as you can explore in the following (optional) task.
 
  
 
<div class="mw-collapsible mw-collapsed" data-expandtext="Expand" data-collapsetext="Collapse" style="border:#000000 solid 1px; padding: 10px; margin-left:25px; margin-right:25px;">Optional: Load the structure in Chimera, like you did in the last assignment and switch on stereo viewing ... (more) <div  class="mw-collapsible-content">
 
<ol start="7">
 
<li>Display the protein in ribbon style, e.g. with the '''Interactive 1''' preset.
 
<li>Access the '''Interpro''' information page for Mbp1 at the EBI: http://www.ebi.ac.uk/interpro/protein/P39678
 
<li>In the section '''Domains and repeats''', mouse over the red annotations and note down the residue numbers for the annotated domains. Also follow the links to the respective Interpro domain definition pages.
 
</ol>
 
 
 
At this point we have definitions for the following regions on the Mbp1 protein ...
 
*The KilA-N (pfam 04383) domain definition as applied to the Mbp1 protein sequence by CDD;
 
*The InterPro ''KilA, N-terminal/APSES-type HTH, DNA-binding (IPR018004)'' definition annotated on the Mbp1 sequence;
 
*The InterPro ''Transcription regulator HTH, APSES-type DNA-binding domain (IPR003163)'' definition annotated on the Mbp1 sequence;
 
*<small>(... in addition &ndash; without following the source here &ndash; the UniProt record for Mbp1 annotates a "HTH APSES-type" domain from residues 5-111)</small>
 
 
 
... each with its distinct and partially overlapping sequence range. Back to Chimera:
 
 
 
<!-- For reference:
 
1MB1: 3-100
 
2BM8: 4-102
 
CDD KilA-N: 19-93
 
InterPro KilA-N: 23-88
 
InterPro APSES: 3-133
 
Uniprot HTH/APSES: 5-111
 
-->
 
 
 
<ol start="10">
 
<li>In the sequence window, select the sequence corresponding to the '''Interpro KilA-N''' annotation and colour this fragment red. <small>Remember that you can get the sequence numbers of a residue in the sequence window when you hover the pointer over it - but do confirm that the sequence numbering that Chimera displays matches the numbering of the Interpro domain definition.</small></li>
 
 
 
<li>Then select the residue range(s) by which the '''CDD KilA-N''' definition is larger, and colour that fragment orange.</li>
 
 
 
<li>Then select the residue range(s) by which the '''InterPro APSES domain''' definition is larger, and colour that fragment yellow.</li>
 
 
 
<li>If the structure contains residues outside these ranges, colour these white.</li>
 
 
 
<li>Study this in a side-by-side stereo view and get a sense for how the ''extra'' sequence beyond the KilA-N domain(s) is part of the structure, and how the integrity of the folded structure would be affected if these fragments were missing.</li>
 
 
 
<li>Display Hydrogen bonds, to get a sense of interactions between residues from the differently colored parts. First show the protein as a stick model, with sticks that are thicker than the default to give a better sense of sidechain packing:<br />
 
::(i) '''Select''' &rarr; '''Select all''' <br />
 
::(ii) '''Actions''' &rarr; '''Ribbon''' &rarr; '''hide''' <br />
 
::(iii) '''Select''' &rarr; '''Structure''' &rarr; '''protein''' <br />
 
::(iv) '''Actions''' &rarr; '''Atoms/Bonds''' &rarr; '''show''' <br />
 
::(v)  '''Actions''' &rarr; '''Atoms/Bonds''' &rarr; '''stick''' <br />
 
::(vi) click on the looking glass icon at the bottom right of the graphics window to bring up the inspector window and choose '''Inspect ... Bond'''. Change the radius to 0.4.<br />
 
</li>
 
 
 
<li>Then calculate and display the hydrogen bonds:<br />
 
::(vii) '''Tools''' &rarr; '''Surface/Binding Analysis''' &rarr; '''FindHbond''' <br />
 
::(viii) Set the '''Line width''' to 3.0, leave all other parameters with their default values and click '''Apply'''<br />
 
:: Clear the selection.<br />
 
Study this view, especially regarding side chain H-bonds. Are there many? Do side chains interact more with other sidechains, or with the backbone?
 
</li>
 
 
 
<li>Let's now simplify the scene a bit and focus on backbone/backbone H-bonds:<br />
 
::(ix) '''Select''' &rarr; '''Structure''' &rarr; '''Backbone''' &rarr; '''full'''<br />
 
::(x)  '''Actions''' &rarr; '''Atoms/Bonds''' &rarr; '''show only'''<br /><br />
 
:: Clear the selection.<br />
 
In this way you can appreciate how H-bonds build secondary structure - &alpha;-helices and &beta;-sheets - and how these interact with each other ... in part '''across the KilA-N boundary'''.
 
</li>
 
  
  
<li>Save the resulting image as a jpeg no larger than 600px across and upload it to your Lab notebook on the Wiki.</li>
 
<li>When you are done, congratulate yourself on having earned a bonus of 10% on the next quiz.</li>
 
</ol>
 
 
</div>
 
</div>
 
 
 
There is a rather important lesson in this: domain definitions may be fluid. Their boundaries may be computationally derived from sequence comparisons across many families, and thus do not necessarily correspond to the boundaries seen in any individual structure. Make sure you understand this well.
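The point is easy to make concrete in '''R''', using the residue ranges collected above (CDD KilA-N 19-93, InterPro KilA-N 23-88, InterPro APSES 3-133, UniProt HTH/APSES 5-111). A sketch of comparing the annotations numerically:

```r
# Compare the (start, end) ranges of the different domain definitions
# for Mbp1, as noted above.
annotations <- list(
    CDD.KilAN      = c(19,  93),
    InterPro.KilAN = c(23,  88),
    InterPro.APSES = c( 3, 133),
    UniProt.HTH    = c( 5, 111)
)

starts <- sapply(annotations, function(x) x[1])
ends   <- sapply(annotations, function(x) x[2])

# The region that all definitions agree on is bounded by the
# largest start and the smallest end ...
core <- c(max(starts), min(ends))
core       # 23 88

# ... and the union of all definitions by the smallest start
# and the largest end.
envelope <- c(min(starts), max(ends))
envelope   # 3 133
```

Only 66 residues are common to all four definitions, while the definitions jointly span 131 residues - a vivid illustration of how much the boundaries disagree.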
 
 
}}
 
}}
  
  
Given this, it seems appropriate to search the sequence database with the sequence of an Mbp1 structure&ndash;this being a structured, stable subdomain of the whole that presumably contains the protein's most unique and specific function. Let us retrieve this sequence. All PDB structures have their sequences stored in the NCBI protein database. They can be accessed simply via the PDB-ID, which serves as an identifier both for the NCBI and the PDB databases. However there is a small catch (isn't there always?). PDB files can contain more than one protein, e.g. if the crystal structure contains a complex<ref>Think of the [http://www.pdb.org/pdb/101/motm.do?momID=121 ribosome] or [http://www.pdb.org/pdb/101/motm.do?momID=3 DNA-polymerase] as extreme examples.</ref>. Each of the individual proteins gets a so-called '''chain ID'''&ndash;a one-letter identifier&ndash;to identify it uniquely. To find their unique sequence in the database, you need to know the PDB ID as well as the chain ID. If the file contains only a single protein (as in our case), the chain ID is always '''<code>A</code>'''<ref>Otherwise, you need to study the PDB Web page for the structure, or the text in the PDB file itself, to identify which part of the complex is labeled with which chain ID. For example, immunoglobulin structures sometimes label the ''light-'' and ''heavy chain'' fragments as "L" and "H", and sometimes as "A" and "B"&ndash;there are no fixed rules. You can also load the structure in VMD, color "by chain" and use the mouse to click on residues in each chain to identify it.</ref>. Make sure you understand the concept of protein chains, and chain IDs.
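If you ever need to check programmatically which chains a PDB file contains, you can exploit the fixed-width PDB format: in <code>ATOM</code> records, the chain ID is the single character in column 22. A minimal '''R''' sketch (the two records here are made-up toy data, not from a real structure):

```r
# Toy example: two ATOM records from a hypothetical two-chain structure.
pdbLines <- c(
"ATOM      1  N   MET A   1      11.000  22.000  33.000  1.00 20.00           N",
"ATOM      2  N   MET B   1      12.000  23.000  34.000  1.00 20.00           N")

# In the fixed-width PDB format, the chain ID of an ATOM record
# is the character in column 22.
chains <- unique(substr(pdbLines[grepl("^ATOM", pdbLines)], 22, 22))
chains   # "A" "B"
```

For a real file, you would replace the toy vector with <code>readLines("yourFile.pdb")</code>.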
 
  
This new <code>fetchProteinData()</code> function seems quite convenient. I have compiled a [[Reference_APSES_proteins_(reference_species)|set of APSES domain proteins]] for ten fungal species and loaded the 48 proteins' data into an R database in a few minutes. This "reference database" will be automatically loaded for you with the '''next''' dbUtilities update. Note that it will be recreated every time you start up '''R'''. This means two things: (i) if you break something in the reference database, it's not a problem; (ii) if you store your own data in it, it will be lost. In order to add your own genes, you need to make a working copy for yourself.


====Computer literacy====

;Digression - some musings on computer literacy and code engineering.

It's really useful to get into a consistent habit of giving your files meaningful names. The name should include something that tells you what the file contains, and something that tells you the date or version. I give versions major and minor numbers, and - knowing how much things always change - I write major version numbers with a leading zero, e.g. <code>04</code>, so that they will be correctly sorted by name in a directory listing. The same goes for dates: always write <code>YYYY-MM-DD</code> to ensure proper sorting.

On the topic of versions: creating the database with its data structures and the functions that operate on them is an ongoing process, and changes in one part of the code may have important consequences for another part. Imagine I made a poor choice of a column name early on: changing that would need to be done in every single function of the code that reads or writes or analyzes data. Once the code reaches a certain level of complexity, organizing it well is just as important as writing it in the first place. In the new update of <code>dbUtilities.R</code>, a database has a <code>$version</code> element, and every function checks that the database version matches the version for which the function was written. Obviously, this also means the developer must provide tools to migrate contents from an older version to a newer version. And since migrating can run into trouble and leave all data in an inconsistent and unfixable state, it's a good time to remind you to back up important data frequently. Of course you will want to save your database once you've done any significant work with it. And you will especially want to save the databases you create for your Term Project. But you should also (and perhaps more importantly) save the script that you use to create the database in the first place. And on that note: when was the last time you made a full backup of your computer's hard-drive? Too long ago? I thought so.

;Backup your hard-drive now.
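The two habits above - sortable names and explicit database versions - are easy to sketch in '''R'''. Note that <code>checkDBVersion()</code> and <code>myDB</code> below are hypothetical illustrations of the pattern, not the actual <code>dbUtilities.R</code> code:

```r
# Zero-padded version numbers and ISO dates sort correctly by name:
sort(c("myDB.10.RData", "myDB.04.RData", "myDB.02.RData"))
# "myDB.02.RData" "myDB.04.RData" "myDB.10.RData"

# A version-checking pattern: the database carries a $version element ...
myDB <- list(version = "04.1", protein = data.frame())

# ... and every function that operates on it first confirms the version
# it was written for. (Hypothetical sketch of the idea.)
checkDBVersion <- function(db, expected = "04.1") {
    if (is.null(db$version) || db$version != expected) {
        stop("Database version mismatch: please migrate your data.")
    }
    invisible(TRUE)
}
checkDBVersion(myDB)
```

A function that fails early with a clear message is far better than one that silently computes nonsense on a stale data structure.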
  
If your last backup at the time of next week's quiz was less than two days ago, you will receive a 0.5 mark bonus.

&nbsp;

=== Chimera "sequence": implicit or explicit ? ===

We discussed the distinction between implicit and explicit sequence. But which one does the Chimera sequence window display? Let's find out.

{{task|1=
# Open Chimera and load the 1BM8 structure from the PDB.
# Save the coordinate file with '''File''' &rarr; '''Save PDB ...''', use a filename of <code>test.pdb</code>.
# Open this file in a '''plain text''' editor: notepad, TextEdit, nano or the like, but not MS Word! Make sure you view the file in a '''fixed-width font''', not a proportionally spaced one, i.e. Courier, not Arial. Otherwise the columns in the file won't line up.
# Find the records that begin with <code>SEQRES ...</code> and confirm that the first amino acid is <code>GLN</code>.
# Now scroll down to the <code>ATOM  </code> section of the file. Identify the records for the first residue in the structure. Delete all lines for side-chain atoms except for the <code>CB</code> atom. This changes the coordinates for glutamine to those of alanine.
# Replace the <code>GLN</code> residue name with <code>ALA</code> (uppercase). This relabels the residue as alanine in the coordinate section. Therefore you have changed the '''implicit''' sequence. Implicit and explicit sequence are now different. The second atom record should now look like this:<br />
:<code>ATOM      2  CA  ALA A  4      -0.575  5.127  16.398  1.00 51.22          C</code>
<ol start="7">
<li>Save the file and load it in Chimera.
<li>Open the sequence window: does it display <code>Q</code> or <code>A</code> as the first residue?
</ol>

Therefore, does Chimera use the '''implicit''' or '''explicit''' sequence in the sequence window?
}}


===New Database ===

Here is some sample code to work with the new database: enter new protein data for YFO, save it, and load it again when needed.
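You can make the implicit/explicit distinction tangible in '''R''' as well: the explicit sequence lives in the <code>SEQRES</code> records, the implicit sequence in the residue names of the <code>ATOM</code> records (columns 18-20). A sketch with two toy records mimicking the edit in the task above:

```r
# Two toy records, mimicking the edit in the task: the SEQRES record
# still says GLN, but the ATOM record has been relabeled ALA.
seqresLine <- "SEQRES   1 A   99  GLN ILE TYR SER ALA ARG"
atomLine   <- "ATOM      2  CA  ALA A   4      -0.575   5.127  16.398  1.00 51.22           C"

# First residue name in a SEQRES record: columns 20-22.
explicitFirst <- substr(seqresLine, 20, 22)
# Residue name in an ATOM record: columns 18-20.
implicitFirst <- substr(atomLine, 18, 20)

explicitFirst   # "GLN"
implicitFirst   # "ALA"
```

Whichever of the two a program reports in its sequence display tells you which definition of "sequence" it uses.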
 
  
<source lang="R">
# You don't need to load the reference database refDB. If
# everything is set up correctly, it gets loaded at startup.
# (Just so you know: you can turn off that behaviour if you
# ever should want to...)

# First you need to load the newest version of dbUtilities.R
updateDButilities("7bb32ab3d0861ad81bdcb9294f0f6a737b820bf9")

# If you get an error:
#    Error: could not find function "updateDButilities"
# ... then it seems you didn't do the previous update.
#
# Try getting the update with the new key but the previous function:
# updateDbUtilities()
#
# If that function is not found either, confirm that your ~/.Rprofile
# actually loads dbUtilities.R from your project directory.
#
# As a desperate last resort, you could uncomment
# the following piece of code and run the update
# without verification...
#
# URL <- "http://steipe.biochemistry.utoronto.ca/abc/images/f/f9/DbUtilities.R"
# download.file(URL, paste(PROJECTDIR, "dbUtilities.R", sep=""), method="auto")
# source(paste(PROJECTDIR, "dbUtilities.R", sep=""))
#
# But be cautious: there is no verification. You yourself need
# to satisfy yourself that this "file from the internet" is what
# it should be, before source()'ing it...


# After the file has been source()'d, refDB exists.
ls(refDB)

# check the contents of refDB:
refDB$protein$name
refDB$taxonomy

# list refSeqIDs for Saccharomyces cerevisiae genes.
refDB$protein[refDB$protein$taxID == 559292, "refSeqID"]

# To add some genes from YFO, I proceed as follows.
# Obviously, you need to adapt this to your YFO
# and the sequences in YFO that you have found
# with your PSI-BLAST search.

# Let's assume my YFO is the fly agaric (Amanita muscaria)
# and its APSES domain proteins have the following IDs
# (these are not RefSeq IDs btw. and thus unlikely
# to be found in UniProt) ...
# KIL68212
# KIL69256
# KIL65817

# First, I create a copy of the database with a name that
# I will recognize to be associated with my YFO.
amamuDB <- refDB

# Then I fetch my protein data ...
tmp1 <- fetchProteinData("KIL68212")
tmp2 <- fetchProteinData("KIL69256")
tmp3 <- fetchProteinData("KIL65817")

# ... and if I am satisfied that it contains what I
# want, I add it to the database.
amamuDB <- addToDB(amamuDB, tmp1)
amamuDB <- addToDB(amamuDB, tmp2)
amamuDB <- addToDB(amamuDB, tmp3)

# Then I make a local backup copy. Note the filename and
# version number  :-)
save(amamuDB, file="amamuDB.01.RData")

# Now I can explore my new database ...
amamuDB$protein[amamuDB$protein$taxID == 946122, "refSeqID"]

# ... but if anything goes wrong, for example
# if I make a mistake in checking which
# rows contain taxID 946122 ...
amamuDB$protein$taxID = 946122

# Ooops ... what did I just do wrong?
#       ... what happened instead?
amamuDB$protein$taxID

# ... I can simply recover from my backup copy:
load("amamuDB.01.RData")
amamuDB$protein$taxID

</source>

&nbsp;

==Coloring by conservation==

With VMD, you can import a sequence alignment into the MultiSeq extension and color residues by conservation. The protocol below assumes that an MSA exists - you could have produced it in many different ways; for convenience, I have precalculated one for you. It may not contain the sequences from YFO; if you are curious about these, you are welcome to add them and realign.

{{task|1=
;Load the Mbp1 APSES alignment into MultiSeq.

# Access [[Reference alignment for APSES domains (MUSCLE, reference species)|the set of MUSCLE aligned and edited fungal APSES domains]].
# Copy the alignment and save it into a convenient directory on your computer as a plain text file. Give it the extension <code>.aln</code>.
# Open VMD and load the <code>1BM8</code> structure.
# As usual, turn the axes off and display your structure in side-by-side stereo.
# Visualize the structure as '''New Cartoon''' with '''Index''' coloring to re-orient yourself. Identify the recognition helix and the "wing".
# Open '''Extensions &rarr; Analysis &rarr; Multiseq'''.
# You can answer '''No''' to download metadata databases, we won't need them here.
# In the MultiSeq Window, navigate to '''File &rarr; Import Data...'''; choose "From Files" and '''Browse''' to the location of the alignment you have saved. The file navigation window gives you options which files to enable: choose to '''Enable <code>ALN</code>''' files (these are CLUSTAL formatted multiple sequence alignments).
# Open the alignment file, click on '''Ok''' to import the data. If the data can't be loaded, the file may have the wrong extension: .aln is required.
# Find the <code>Mbp1_SACCE</code> sequence in the list, click on it and move it to the top of the Sequences list with your mouse (the list is not static, you can re-order the sequences in any way you like).
}}

You will see that the <code>1BM8</code> sequence and the <code>Mbp1_SACCE</code> APSES domain sequence do not match: at the N-terminus the sequence that corresponds to the PDB structure has extra residues, and in the middle the APSES sequences may have gaps inserted.

{{task|1=
;Bring the 1BM8 sequence in register with the APSES alignment.

# MultiSeq supports typical text-editor selection mechanisms. Clicking on a residue selects it, clicking on a row selects the whole sequence. Dragging with the mouse selects several residues, shift-clicking selects ranges, and option-clicking toggles the selection on or off for individual residues. Using the mouse and/or the shift key as required, select the '''entire first column''' of the '''Sequences''' you have imported. Note: don't include the 1BM8 sequence - this is just for the aligned sequences.
# Select '''Edit &rarr; Enable Editing... &rarr; Gaps only''' to allow changing indels.
# Pressing the spacebar once should insert a gap character before the '''selected column''' in all sequences. Insert as many gaps as you need to align the beginning of the sequences with the corresponding residues of 1BM8: <code>S I M ...</code>. Note: have patience - the program's response can be a bit sluggish.
# Now insert as many gaps as you need into the <code>1BM8</code> structure sequence to align it completely with the <code>Mbp1_SACCE</code> APSES domain sequence. (Simply select residues in the sequence and use the space bar to insert gaps.) (Note: I have noticed a bug that sometimes prevents slider or keyboard input to the MultiSeq window; it fails to regain focus after operations in a different window. I don't know whether this is a Mac related problem or a more general bug in MultiSeq. When this happens I quit VMD and restore a saved session. It is a bit annoying but not mission-critical. To be able to do that, you might want to save your session every now and then.)
# When you are done, it may be prudent to save the state of your alignment. Use '''File &rarr; Save Session...'''
}}

{{task|1=
;Color by similarity

# Use the '''View &rarr; Coloring &rarr; Sequence similarity &rarr; BLOSUM30''' option to color the residues in the alignment and structure. This clearly shows you where conserved and variable residues are located and allows you to analyze their structural context.
# Navigate to the '''Representations''' window and create a '''Tube''' representation of the structure's backbone. Use '''User''' coloring to color it according to the conservation score that the Multiseq extension has calculated.
# Create a new representation, choose '''Licorice''' as the drawing method, '''User''' as the coloring method and select <code>(sidechain or name CA) and not element H</code> (note: <code>CA</code>, the C-alpha atom, must be capitalized.)
# Double-click on the NewCartoon representation to hide it.
# You can adjust the color scale in the usual way by navigating to '''VMD main &rarr; Graphics &rarr; Colors...''', choosing the Color Scale tab and adjusting the scale midpoint.
}}

Study this structure in some detail. If you wish, you could load and superimpose the DNA complexes to determine which conserved residues are in the vicinity of the double helix strands and potentially able to interact with backbone or bases. Note that the most highly conserved residues in the family alignment are all structurally conserved elements of the core. Solvent exposed residues that comprise the surface of the recognition helix are quite variable, especially at the binding site. You may also find - if you load the DNA molecules - that residues that contact the phosphate backbone in general tend to be more highly conserved than residues that contact bases.

&nbsp;

=== Visual comparison of domain annotations in '''R''' ===

The versatile plotting functions of '''R''' allow us to compare domain annotations. The distribution of segments that are annotated as being "low-complexity" or "disordered" is particularly interesting: these are functional features of the amino acid sequence that are often not associated with sequence similarity.

In the following code tutorial, we create a plot similar to the CDD and SMART displays. It is based on the SMART domain annotations of the six fungal reference species for the course.

{{task|1=
Copy the code to an '''R''' script, study and execute it.

<source lang="R">
# plotDomains
# tutorial and functions to plot a colored rectangle from a list of domain annotations

# First task: create a list structure for the annotations: this is a list of lists.
# As you see below, we need to mix strings, numbers and vectors of numbers. In R
# such mixed data types must go into a list.

Mbp1Domains <- list()    # start with an empty list

# For each species annotation, compile the SMART domain annotations in a list.
Mbp1Domains <- rbind(Mbp1Domains, list(   # rbind() appends the list to the existing rows
    species = "Saccharomyces cerevisiae",
    code    = "SACCE",
    ACC     = "P39678",
    length  = 833,
    KilAN   = c(18,102),   # Note: Vector of (start, end) pairs
    AThook  = NULL,        # Note: NULL, because this annotation was not observed in this sequence
    Seg     = c(108,122,236,241,279,307,700,717),
    DisEMBL = NULL,
    Ankyrin = c(394,423,427,463,512,541),   # Note: Merge overlapping domains, if present
    Coils   = c(633, 655)
    ))

Mbp1Domains <- rbind(Mbp1Domains, list(
    species = "Emericella nidulans",
    code    = "ASPNI",
    ACC     = "Q5B8H6",
    length  = 695,
    KilAN   = c(23,94),
    AThook  = NULL,
    Seg     = c(529,543),
    DisEMBL = NULL,
    Ankyrin = c(260,289,381,413),
    Coils   = c(509,572)
    ))

Mbp1Domains <- rbind(Mbp1Domains, list(
    species = "Candida albicans",
    code    = "CANAL",
    ACC     = "Q5ANP5",
    length  = 852,
    KilAN   = c(19,102),
    AThook  = NULL,
    Seg     = c(351,365,678,692),
    DisEMBL = NULL,
    Ankyrin = c(376,408,412,448,516,545),
    Coils   = c(665,692)
    ))

Mbp1Domains <- rbind(Mbp1Domains, list(
    species = "Neurospora crassa",
    code    = "NEUCR",
    ACC     = "Q7RW59",
    length  = 833,
    KilAN   = c(31,110),
    AThook  = NULL,
    Seg     = c(130,141,253,266,514,525,554,564,601,618,620,629,636,652,658,672,725,735,752,771),
    DisEMBL = NULL,
    Ankyrin = c(268,297,390,419),
    Coils   = c(500,550)
    ))

Mbp1Domains <- rbind(Mbp1Domains, list(
    species = "Schizosaccharomyces pombe",
    code    = "SCHPO",
    ACC     = "P41412",
    length  = 657,
    KilAN   = c(21,97),
    AThook  = NULL,
    Seg     = c(111,125,136,145,176,191,422,447),
    DisEMBL = NULL,
    Ankyrin = c(247,276,368,397),
    Coils   = c(457,538)
    ))

Mbp1Domains <- rbind(Mbp1Domains, list(
    species = "Ustilago maydis",
    code    = "USTMA",
    ACC     = "Q4P117",
    length  = 956,
    KilAN   = c(21,98),
    AThook  = NULL,
    Seg     = c(106,116,161,183,657,672,776,796),
    DisEMBL = NULL,
    Ankyrin = c(245,274,355,384),
    Coils   = c(581,609)
    ))

# Working with data in lists and dataframes can be awkward, since the results
# of filters and slices are themselves lists, not vectors.
# Therefore we need to use the unlist() function a lot. When in doubt: unlist()

#### Boxes ####
# Define a function to draw colored boxes, given input of a vector with
# (start, end) pairs, a color, and the height where the box should go.
drawBoxes <- function(v, c, h) {   # vector of xleft, xright pairs; color; height
    if (is.null(v)) { return() }
    for (i in seq(1, length(v), by=2)) {
        rect(v[i], h-0.1, v[i+1], h+0.1, border="black", col=c)
    }
}

#### Annotations ####
# Define a function to write the species code, draw a grey
# horizontal line and call drawBoxes() for every annotation type
# in the list
drawGene <- function(rIndex) {
    # define colors:
    kil <- "#32344F"
    ank <- "#691A2C"
    seg <- "#598C9E"
    coi <- "#8B9998"
    xxx <- "#EDF7F7"

    text(-30, rIndex, adj=1, labels=unlist(Mbp1Domains[rIndex,"code"]), cex=0.75)
    lines(c(0, unlist(Mbp1Domains[rIndex,"length"])), c(rIndex, rIndex), lwd=3, col="#999999")

    drawBoxes(unlist(Mbp1Domains[rIndex,"KilAN"]),   kil, rIndex)
    drawBoxes(unlist(Mbp1Domains[rIndex,"AThook"]),  xxx, rIndex)
    drawBoxes(unlist(Mbp1Domains[rIndex,"Seg"]),     seg, rIndex)
    drawBoxes(unlist(Mbp1Domains[rIndex,"DisEMBL"]), xxx, rIndex)
    drawBoxes(unlist(Mbp1Domains[rIndex,"Ankyrin"]), ank, rIndex)
    drawBoxes(unlist(Mbp1Domains[rIndex,"Coils"]),   coi, rIndex)
}

#### Plot ####
# define the size of the plot-frame
yMax <- length(Mbp1Domains[,1])               # number of species in the list
xMax <- max(unlist(Mbp1Domains[,"length"]))   # largest sequence length

# plot an empty frame
plot(1,1, xlim=c(-100,xMax), ylim=c(0, yMax), type="n", yaxt="n", bty="n", xlab="sequence number", ylab="")

# Finally, iterate over all species and call drawGene()
for (i in 1:length(Mbp1Domains[,1])) {
    drawGene(i)
}

# end
</source>
}}
 
  
  
When you execute the code, your plot should look similar to this one:

[[Image:DomainAnnotations.jpg|frame|none|SMART domain annotations for Mbp1 proteins from six fungal species.]]


{{task|1=
On the Student Wiki, add the annotations for YFO to the plot:

# Copy one of the list definitions for Mbp1 domains and edit it with the appropriate values for your own annotations.
# Test that you can add the YFO annotation to the plot.
# Submit your validated code block to the [http://biochemistry.utoronto.ca/steipe/abc/students/index.php/BCH441_2014_Assignment_4_domain_annotations '''Student Wiki here''']. The goal is to compile an overview of all species we are studying in class.
# If your working annotation block is in the Wiki before noontime on Wednesday, you will be awarded a 10% bonus on the quiz.
}}

&nbsp;

{{task|1=
;Create your own version of the protein database by adding all the genes from YFO that you have discovered with the PSI-BLAST search for the APSES domain. Save it.
}}
 
 
==EC==
 
 
 
 
 
 
==Introduction==
 
 
{{#pmid: 18563371}}
 
{{#pmid: 19957156}}
 
 
==GO==
 
The Gene Ontology project is the most influential contributor to the definition of function in computational biology and the use of GO terms and GO annotations is ubiquitous.
 
 
{{WWW|WWW_GO}}
 
{{#pmid: 21330331}}
 
 
The GO actually comprises three separate ontologies:
 
 
;Molecular function
 
:...
 
 
 
;Biological Process
 
:...
 
 
 
;Cellular component
 
: ...
 
 
 
===GO terms===
 
GO terms comprise the core of the information in the ontology: a carefully crafted definition of a term in any of GO's separate ontologies.
 
 
 
 
===GO relationships===
 
The nature of the relationships is as much a part of the ontology as the terms themselves. GO uses three categories of relationships:
 
 
* is a
 
* part of
 
* regulates
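These relationships turn the terms into a directed acyclic graph, and most GO computations amount to walking its edges. A toy illustration in '''R''' (the term IDs and edges here are hypothetical; real GO terms can have multiple parents, which this simple linear walk does not handle):

```r
# Hypothetical mini-ontology: three terms connected by is_a relationships.
goEdges <- data.frame(
    child  = c("GO:A", "GO:B"),
    parent = c("GO:B", "GO:C"),
    rel    = c("is_a", "is_a"),
    stringsAsFactors = FALSE)

# Walking up the is_a edges collects all ancestors of a term - the basis
# of the "true path rule": an annotation to a term implies all its ancestors.
getAncestors <- function(term, edges) {
    anc <- character(0)
    while (term %in% edges$child) {
        term <- edges$parent[edges$child == term][1]
        anc <- c(anc, term)
    }
    anc
}
getAncestors("GO:A", goEdges)   # "GO:B" "GO:C"
```

A gene annotated to <code>GO:A</code> is therefore implicitly annotated to <code>GO:B</code> and <code>GO:C</code> as well.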
 
 
 
===GO annotations===
 
The GO terms are conceptual in nature, and while they represent our interpretation of biological phenomena, they do not intrinsically represent biological objects, such as specific genes or proteins. In order to link molecules with these concepts, the ontology is used to '''annotate''' genes. The annotation project is referred to as GOA.
 
 
{{#pmid:18287709}}
 
 
 
===GO evidence codes===
 
Annotations can be made according to literature data or computational inference, and it is important to note how an annotation has been justified by the curator, so we can evaluate the level of trust we should have in it. GO uses evidence codes to make this process transparent. When computing with the ontology, we may want to filter (exclude) particular terms in order to avoid tautologies: for example, if we were to infer functional relationships between homologous genes, we should exclude annotations that were themselves based on that same kind of inference, and compute only with the actual experimental data.
 
 
The following evidence codes are in current use; if you want to exclude inferred annotations you would restrict the codes you use to the ones shown in bold: EXP, IDA, IPI, IMP, IEP, and perhaps IGI, although the interpretation of genetic interactions can require assumptions.
 
 
;Automatically-assigned Evidence Codes
 
*IEA: Inferred from Electronic Annotation
 
;Curator-assigned Evidence Codes
 
*'''Experimental Evidence Codes'''

**'''EXP: Inferred from Experiment'''

**'''IDA: Inferred from Direct Assay'''

**'''IPI: Inferred from Physical Interaction'''

**'''IMP: Inferred from Mutant Phenotype'''

**'''IGI: Inferred from Genetic Interaction'''

**'''IEP: Inferred from Expression Pattern'''
 
*'''Computational Analysis Evidence Codes'''
 
**ISS: Inferred from Sequence or Structural Similarity
 
**ISO: Inferred from Sequence Orthology
 
**ISA: Inferred from Sequence Alignment
 
**ISM: Inferred from Sequence Model
 
**IGC: Inferred from Genomic Context
 
**IBA: Inferred from Biological aspect of Ancestor
 
**IBD: Inferred from Biological aspect of Descendant
 
**IKR: Inferred from Key Residues
 
**IRD: Inferred from Rapid Divergence
 
**RCA: Inferred from Reviewed Computational Analysis
 
*'''Author Statement Evidence Codes'''
 
**TAS: Traceable Author Statement
 
**NAS: Non-traceable Author Statement
 
*'''Curator Statement Evidence Codes'''
 
**IC: Inferred by Curator
 
**ND: No biological Data available
 
 
For further details, see the [http://www.geneontology.org/GO.evidence.shtml Guide to GO Evidence Codes] and the [http://www.geneontology.org/GO.evidence.tree.shtml GO Evidence Code Decision Tree].
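In practice, such filtering is a one-line subset operation. A sketch in '''R''' (the annotation table below is made-up illustration data, not an actual GOA extract):

```r
# Hypothetical annotation table; in practice such tables are
# parsed from GOA annotation files.
annotations <- data.frame(
    gene     = c("SOX2",       "SOX2",       "NANOG"),
    term     = c("GO:0035019", "GO:0035019", "GO:0035019"),
    evidence = c("IDA",        "IEA",        "TAS"),
    stringsAsFactors = FALSE)

# Restrict to experimental evidence codes before computing with the data:
expCodes <- c("EXP", "IDA", "IPI", "IMP", "IEP")
expAnnotations <- annotations[annotations$evidence %in% expCodes, ]
expAnnotations$evidence   # "IDA"
```

Only the directly assayed annotation survives the filter; the electronic (IEA) and author-statement (TAS) annotations are excluded from downstream inference.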
 
  
  
 
&nbsp;
 
&nbsp;
 
===GO tools===
 
 
For many projects, the simplest approach will be to download the GO ontology itself. It is a well constructed, easily parseable file that is well suited for computation. For details, see [[Computing with GO]] on this wiki.
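To give a sense of how easily the OBO flat file parses, here is a minimal sketch of extracting tag values from a <code>[Term]</code> stanza (the stanza content below is illustrative, not a real term definition):

```r
# A minimal OBO-format stanza, as a character vector of lines:
obo <- c(
"[Term]",
"id: GO:0000001",
"name: example term",
"namespace: biological_process",
"is_a: GO:0000002 ! parent term")

# OBO tag-value lines split cleanly on the first ": "
getField <- function(stanza, key) {
    line <- stanza[grepl(paste0("^", key, ": "), stanza)]
    sub(paste0("^", key, ": "), "", line)
}
getField(obo, "id")     # "GO:0000001"
getField(obo, "name")   # "example term"
```

A real parser would read the whole file with <code>readLines()</code> and split it into stanzas at each <code>[Term]</code> line, then apply the same tag extraction per stanza.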
 
 
 
 
 
 
===AmiGO===
 
We begin our practical work with GO in the AmiGO browser.
 
[http://amigo.geneontology.org/cgi-bin/amigo/go.cgi '''AmiGO'''] is a [http://www.geneontology.org/ '''GO'''] browser developed by the Gene Ontology consortium and hosted on their website.
 
 
====AmiGO - Gene products====
 
{{task|1=
 
# Navigate to the [http://www.geneontology.org/ '''GO'''] homepage.
 
# Enter <code>SOX2</code> into the search box to initiate a search for the human SOX2 transcription factor ({{WP|SOX2|WP}}, [http://www.genenames.org/cgi-bin/gene_symbol_report?hgnc_id=11195 HUGO]) (as ''gene or protein name'').
 
# There are a number of hits in various organisms: ''sulfhydryl oxidases'' and ''(sex determining region Y)-box'' genes. Check to see the various ways by which you could filter and restrict the results.
 
# Select ''Homo sapiens'' as the '''species''' filter and set the filter. Note that this still does not give you a unique hit, but ...
 
# ... you can identify the '''[http://amigo.geneontology.org/cgi-bin/amigo/gp-details.cgi?gp=UniProtKB:P48431 Transcription factor SOX-2]''' and follow its gene product information link. Study the information on that page.
 
# Later, we will need Entrez Gene IDs. The GOA information page provides these as '''GeneID''' in the '''External references''' section. Note it down.  With the same approach, find and record the Gene IDs (''a'') of the functionally related [http://www.genenames.org/cgi-bin/gene_symbol_report?hgnc_id=9221 Oct4 (POU5F1)] protein, (''b'') the human cell-cycle transcription factor [http://www.genenames.org/cgi-bin/gene_symbol_report?hgnc_id=3113 E2F1], (''c'') the human bone morphogenetic protein-4 transforming growth factor [http://www.genenames.org/cgi-bin/gene_symbol_report?hgnc_id=1071 BMP4], (''d'') the human UDP glucuronosyltransferase 1 family protein 1, an enzyme that is differentially expressed in some cancers, [http://www.genenames.org/cgi-bin/gene_symbol_report?hgnc_id=12530 UGT1A1], and (''e'') as a positive control, SOX2's interaction partner [http://www.genenames.org/cgi-bin/gene_symbol_report?hgnc_id=20857 NANOG], which we would expect to be annotated as functionally similar to both Oct4 and SOX2.
 
}}
 
  
  
<!--
;TBC
SOX2: 6657
 
POU5F1: 5460
 
E2F1: 1869
 
BMP4: 652
 
UGT1A1: 54658
 
NANOG: 79923
 
  
mgeneSim(c("6657", "5460", "1869", "652", "54658", "79923"), ont="BP", organism="human", measure="Wang")
 
-->
 
 
====AmiGO - Associations====
 
GO annotations for a protein are called ''associations''.
 
 
{{task|1=
 
# Open the ''associations'' information page for the human SOX2 protein via the [http://amigo.geneontology.org/cgi-bin/amigo/gp-assoc.cgi?gp=UniProtKB:P48431 link in the right column] in a separate tab. Study the information on that page.
 
# Note that you can filter the associations by ontology and evidence code. You have read about the three GO ontologies in your previous assignment, but you should also be familiar with the evidence codes. Click on any of the evidence links to access the Evidence code definition page and study the [http://www.geneontology.org/GO.evidence.shtml definitions of the codes]. '''Make sure you understand which codes point to experimental observation, and which codes denote computational inference, or say that the evidence is someone's opinion (TAS, IC ''etc''.).''' <small>Note: it is good practice - but regrettably not a universally implemented standard - to clearly document database semantics and keep definitions associated with database entries easily accessible, as GO is doing here. You won't find this everywhere, but as a user please feel encouraged to complain to the database providers if you come across a database where the semantics are not clear. Seriously: opaque semantics make database annotations useless.</small> 
 
# There are many associations (around 60) and a good way to select which ones to pursue is to follow the '''most specific''' ones. Set <code>IDA</code> as a filter and among the returned terms select <code>GO:0035019</code> &ndash; [http://amigo.geneontology.org/cgi-bin/amigo/term_details?term=GO:0035019 ''somatic stem cell maintenance''] in the '''Biological Process''' ontology. Follow that link.
 
# Study the information available on that page and through the tabs on the page, especially the graph view.
 
# In the '''Inferred Tree View''' tab, find the genes annotated with this GO term for ''Homo sapiens''. There should be about 55. Click on [http://amigo.geneontology.org/cgi-bin/amigo/term-assoc.cgi?term=GO:0035019&speciesdb=all&taxid=9606 the number behind the term]. The resulting page will give you all human proteins that have been annotated with this particular term. Note that the great majority of these are annotated via the <code>IEA</code> evidence code.
 
}}
 
 
 
====Semantic similarity====
 
 
A good, recent overview of ontology-based functional annotation can be found in the following article. This is not a formal reading assignment, but do familiarize yourself with section 3: ''Derivation of Semantic Similarity between Terms in an Ontology'' as an introduction to the code-based annotations below.
 
 
{{#pmid: 23533360}}
 
 
 
For practical work with GO we turn to the Bioconductor project, which hosts the GOSemSim package for semantic similarity.
 
 
{{task|1=
 
# Work through the following R-code. If you have problems, discuss them on the mailing list. Don't go through the code mechanically but make sure you are clear about what it does.
 
<source lang="R">
 
# GOsemanticSimilarity.R
 
# GO semantic similarity example
 
# B. Steipe for BCB420, January 2014
 
 
setwd("~/your-R-project-directory")
 
 
# GOSemSim is an R-package in the bioconductor project. It is not installed via
 
# the usual install.packages() command (via CRAN) but via an installation script
 
# that is run from the bioconductor Website.
 
 
source("http://bioconductor.org/biocLite.R")
 
biocLite("GOSemSim")
 
 
library(GOSemSim)
 
 
# This loads the library and starts the Bioconductor environment.
 
# You can get an overview of functions by executing ...
 
browseVignettes()
 
# ... which will open a listing in your Web browser. Open the
 
# introduction to GOSemSim PDF. As the introduction suggests,
 
# now is a good time to execute ...
 
help(GOSemSim)
 
 
# The simplest function is to measure the semantic similarity of two GO
 
# terms. For example, SOX2 was annotated with GO:0035019 (somatic stem cell
 
# maintenance), QSOX2 was annotated with GO:0045454 (cell redox homeostasis),
 
# and Oct4 (POU5F1) with GO:0009786 (regulation of asymmetric cell division),
 
# among other associations. Let's calculate these similarities.
 
goSim("GO:0035019", "GO:0009786", ont="BP", measure="Wang")
 
goSim("GO:0035019", "GO:0045454", ont="BP", measure="Wang")
 
 
# Fair enough. Two numbers. Clearly we would appreciate an idea of the values
 
# that high similarity and low similarity can take. But in any case -
 
# we are really less interested in the similarity of GO terms - these
 
# are a function of how the Ontology was constructed. We are more
 
# interested in the functional similarity of our genes, and these
 
# have a number of GO terms associated with them.
 
 
# GOSemSim provides the functions ...
 
?geneSim()
 
?mgeneSim()
 
# ... to compute these values. Refer to the vignette for details, in
 
# particular, consider how multiple GO terms are combined, and how to
 
# keep/drop evidence codes.
 
# Here is a pairwise similarity example: the gene IDs are the ones you
 
# have recorded previously. Note that this will download a package
 
# of GO annotations - you might not want to do this on a low-bandwidth
 
# connection.
 
geneSim("6657", "5460", ont = "BP", measure="Wang", combine = "BMA")
 
# Another number. And the list of GO terms that were considered.
 
 
# Your task: use the mgeneSim() function to calculate the similarities
 
# between all six proteins for which you have recorded the GeneIDs
 
# previously (SOX2, POU5F1, E2F1, BMP4, UGT1A1 and NANOG) in the
 
# biological process ontology.
 
 
# This will run for some time. On my machine, half an hour or so.
 
 
# Do the results correspond to your expectations?
 
 
</source>
 
 
}}
 
 
===GO reading and resources===
 
;General
 
<div class="reference-box">[http://www.obofoundry.org/ '''OBO Foundry''' (Open Biological and Biomedical Ontologies)]</div>
 
{{#pmid: 18793134}}
 
 
 
;Phenotype ''etc.'' Ontologies
 
<div class="reference-box">[http://www.human-phenotype-ontology.org/ '''Human Phenotype Ontology''']<br/>
 
See also: {{#pmid: 24217912}}</div>
 
{{#pmid: 22080554}}
 
{{#pmid: 21437033}}
 
{{#pmid: 20004759}}
 
{{#pmid: 16982638}}
 
 
 
;Semantic similarity
 
{{#pmid: 23741529}}
 
{{#pmid: 23533360}}
 
{{#pmid: 22084008}}
 
{{#pmid: 21078182}}
 
{{#pmid: 20179076}}
 
 
;GO
 
{{#pmid: 22102568}}
 
{{#pmid: 21779995}}
 
{{#pmid: 19920128}}
 
Carol Goble on the tension between purists and pragmatists in life-science ontology construction. Plenary talk at SOFG2...
 
{{#pmid: 18629186}}
 
 
 
 
;That is all.
 
  
  
 
&nbsp;
 
&nbsp;
 
== Links and resources ==
 
 
 
 
{{#pmid: 10679470}}
 
{{#pmid: 15808743}}
 
 
  
  
<!-- {{#pmid: 19957275}} -->
 
<!-- {{WWW|WWW_GMOD}} -->
 
<!-- <div class="reference-box">[http://www.ncbi.nlm.nih.gov]</div> -->
 
  
  

Latest revision as of 05:54, 17 November 2015

Assignment for Week 6
Function

< Assignment 5 Assignment 7 >

Note! This assignment is currently inactive. Major and minor unannounced changes may be made at any time.

 
 

Concepts and activities (and reading, if applicable) for this assignment will be topics on next week's quiz.




 

Introduction

 

In this assignment we will first download a number of APSES domain-containing sequences into our database - and we will automate the process. Then we will annotate them with domain data: first manually, and then, once more, automatically. Next we will extract the APSES domains from our database according to the annotations. And finally we will align them, and visualize domain conservation in the 3D model to study which parts of the protein are conserved.


 

Downloading Protein Data From the Web

In Assignment 3 we created a schema for a local protein sequence collection, and implemented it as an R list. We added sequences to this database by hand, but since the information should be cross-referenced and available based on a protein's RefSeq ID, we should really have a function that automates this process. It is far too easy to make mistakes and enter erroneous information otherwise.


Task:
Work through the following code examples.

# To begin, we load some libraries with functions
# we need...

# httr sends and receives information via the http
# protocol, just like a Web browser.
if (!require(httr, quietly=TRUE)) { 
	install.packages("httr")
	library(httr)
}

# NCBI's eUtils send information in XML format; we
# need to be able to parse XML.
if (!require(XML, quietly=TRUE)) {
	install.packages("XML")
	library(XML)
}

# stringr has a number of useful utility functions
# to work with strings. E.g. a function that
# strips leading and trailing whitespace from
# strings.
if (!require(stringr, quietly=TRUE)) {
	install.packages("stringr")
	library(stringr)
}


# We will walk through the process with the refSeqID
# of yeast Mbp1
refSeqID <- "NP_010227"


# UniProt.
# The UniProt ID mapping service supports a "RESTful
# API": responses can be obtained simply via a Web-
# browsers request. Such requests are commonly sent
# via the GET or POST verbs that a Webserver responds
# to, when a client asks for data. GET requests are 
# visible in the URL of the request; POST requests
# are not directly visible, they are commonly used
# to send the contents of forms, or when transmitting
# larger, complex data items. The UniProt ID mapping
# service can accept long lists of IDs, thus using the
# POST mechanism makes sense.

# R has a POST() function as part of the httr package.

# It's very straightforward to use: just define the URL
# of the server and send a list of items as the 
# body of the request.

# uniProt ID mapping service
URL <- "http://www.uniprot.org/mapping/"
response <- POST(URL, 
                 body = list(from = "P_REFSEQ_AC",
                             to = "ACC",
                             format = "tab",
                             query = refSeqID))

response

# If the query is successful, tabbed text is returned,
# and we capture the fourth element as the requested
# mapped ID.
unlist(strsplit(content(response), "\\s+"))

# If the query can't be fulfilled because of a problem
# with the server, a Web page is returned. But the server status
# is also returned and we can check the status code. I have
# lately gotten many "503" status codes: Server Not Available...

if (response$status_code == 200) { # 200: OK
	uniProtID <- unlist(strsplit(content(response), "\\s+"))[4]
	if (is.na(uniProtID)) {
		warning(paste("UniProt ID mapping service returned NA.",
		              "Check your RefSeqID."))
	}
} else {
	uniProtID <- NA
	warning(paste("No uniProt ID mapping available:",
	              "server returned status",
	              response$status_code))
}

uniProtID  # Let's see what we got...
           # This should be "P39678"
           # (or NA if the query failed)
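Since we will want to map IDs more than once, the request-and-check logic above can be rolled into one small reusable function. This is only a sketch under the assumptions of the code above; the name refseqToUniprot() is my own suggestion, not part of dbUtilities.R:

```r
# Hypothetical convenience wrapper around the UniProt ID mapping
# request shown above: returns the mapped UniProt accession, or NA
# if the request fails or no mapping exists.
refseqToUniprot <- function(refSeqID) {
    if (!requireNamespace("httr", quietly = TRUE)) {
        stop("Package 'httr' is required.")
    }
    response <- httr::POST("http://www.uniprot.org/mapping/",
                           body = list(from   = "P_REFSEQ_AC",
                                       to     = "ACC",
                                       format = "tab",
                                       query  = refSeqID))
    if (httr::status_code(response) != 200) {
        warning("UniProt mapping server returned status ",
                httr::status_code(response))
        return(NA)
    }
    # As above: the fourth whitespace-separated token of the
    # tabbed response is the mapped ID.
    unlist(strsplit(httr::content(response), "\\s+"))[4]
}
```

With a function like this, `refseqToUniprot("NP_010227")` should reproduce the result of the step-by-step code above.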


Next, we'll retrieve data from the various NCBI databases.

It has become unreasonably difficult to screen-scrape the NCBI site since the actual page contents are dynamically loaded via AJAX. This may be intentional, or just overengineering. While NCBI offers a subset of their data via the eUtils API, and that works well enough, some of the data that is visible to a Web browser is not served to a program.

The eutils API returns data in XML format. Have a look at the following URL in your browser to see what that looks like:

http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=protein&term=NP_010227


# In order to parse such data, we need tools from the 
# XML package. 

# First we build a query URL...
eUtilsBase <- "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/"


# Then we assemble a URL that will retrieve the
# unique NCBI-internal identifier, the GI number,
# for our refSeqID...
URL <- paste(eUtilsBase,
             "esearch.fcgi?",     # ...using the esearch program
                                  # that finds an entry in an
                                  # NCBI database
             "db=protein",
             "&term=", refSeqID,
             sep="")
# Copy the URL and paste it into your browser to see
# what the response should look like.
URL

# To fetch a response in R, we use the function htmlParse()
# with our URL as its argument.
response <- htmlParse(URL)
response

# This is XML. We can take the response apart into
# its individual components with the xmlToList() function.

xmlToList(response)

# Note how the XML "tree" is represented as a list of
# lists of lists ...
# If we know exactly which element we are looking for,
# we can extract it from this structure:
xmlToList(response)[["body"]][["esearchresult"]][["idlist"]][["id"]]

# But this is not very robust: it would break with the
# slightest change that the NCBI makes to their response
# and the NCBI changes things A LOT!

# Somewhat more robust is to specify the type of element
# we want - it's the text contained in an <id>...</id>
# element - and use XPath, the XML query language, to
# retrieve it.

# getNodeSet() lets us fetch tagged contents by 
# applying toString.XMLNode() to it...

node <- getNodeSet(response, "//id/text()")
unlist(lapply(node, toString.XMLNode))  # "6320147 "

# We will be doing this a lot, so we write a function
# for it...
node2string <- function(doc, tag) {
    # an extractor function for the contents of elements
    # between given tags in an XML response.
    # The contents of all matching elements are returned in
    # a vector of strings.
	path <- paste("//", tag, "/text()", sep="")
	nodes <- getNodeSet(doc, path)
	return(unlist(lapply(nodes, toString.XMLNode)))
}

# using node2string() ...
GID <- node2string(response, "id")
GID

# The GI is the pivot for all our data requests at the
# NCBI. 

# Let's first get the associated data for this GI
URL <- paste(eUtilsBase,
             "esummary.fcgi?",
             "db=protein",
             "&id=",
             GID,
             "&version=2.0",
             sep="")
response <- htmlParse(URL)
URL
response

taxID <- node2string(response, "taxid")
organism <- node2string(response, "organism")
taxID
organism


# Next, fetch the actual sequence
URL <- paste(eUtilsBase,
             "efetch.fcgi?",
             "db=protein",
             "&id=",
             GID,
             "&retmode=text&rettype=fasta",
             sep="")
response <- htmlParse(URL)
URL
response

fasta <- node2string(response, "p")
fasta

seq <- unlist(strsplit(fasta, "\\n"))[-1] # Drop the first element,
                                          # it is the FASTA header.
seq


# Next, fetch the crossreference to the NCBI Gene
# database
URL <- paste(eUtilsBase,
             "elink.fcgi?",
             "dbfrom=protein",
             "&db=gene",
             "&id=",
             GID,
             sep="")
response <- htmlParse(URL)
URL
response

geneID <- node2string(response, "linksetdb/id")
geneID

# ... and the actual Gene record:
URL <- paste(eUtilsBase,
             "esummary.fcgi?",
             "&db=gene",
             "&id=",
             geneID,
             sep="")
response <- htmlParse(URL)
URL
response

name <- node2string(response, "name")
genome_xref <- node2string(response, "chraccver")
genome_from <- node2string(response, "chrstart")[1]
genome_to <- node2string(response, "chrstop")[1]
name
genome_xref
genome_from
genome_to

# So far so good. But since we need to do this a lot
# we need to roll all of this into a function. 

# I have added the function to the dbUtilities code
# so you can update it easily.

# Run:

updateDbUtilities("55ca561e2944af6e9ce5cf2a558d0a3c588ea9af")

# If that is successful, try these three testcases

myNewDB <- createDB()
tmp <- fetchProteinData("NP_010227") # Mbp1p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB

tmp <- fetchProteinData("NP_011036") # Swi4p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB

tmp <- fetchProteinData("NP_012881") # Phd1p
tmp
myNewDB <- addToDB(myNewDB, tmp)
myNewDB


This new fetchProteinData() function seems to be quite convenient. I have compiled a set of APSES domain proteins for ten fungal species and loaded the 48 proteins' data into an R database in a few minutes. This "reference database" will be automatically loaded for you with the next dbUtilities update. Note that it will be recreated every time you start up R. This means two things: (i) if you break something in the reference database, it's not a problem. (ii) if you store your own data in it, it will be lost. In order to add your own genes, you need to make a working copy for yourself.


Computer literacy

Digression - some musings on computer literacy and code engineering.

It's really useful to get into a consistent habit of giving your files a meaningful name. The name should include something that tells you what the file contains, and something that tells you the date or version. I give versions major and minor numbers, and - knowing how much things always change - I write major version numbers with a leading zero, e.g. 04, so that they will be correctly sorted by name in a directory listing. The same goes for dates: always write YYYY-MM-DD to ensure proper sorting.
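As a toy illustration of such a naming scheme - versionedName() is a hypothetical helper of my own, not part of the course code - conforming file names could be generated like this:

```r
# Hypothetical helper: build a file name with zero-padded version
# numbers and an ISO-8601 date, so that directory listings sort
# correctly by name.
versionedName <- function(base, major, minor, ext = "RData") {
    sprintf("%s.%02d.%02d.%s.%s",
            base, major, minor, format(Sys.Date(), "%Y-%m-%d"), ext)
}

versionedName("amamuDB", 4, 1)   # e.g. "amamuDB.04.01.2015-11-17.RData"
```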

On the topic of versions: creating the database with its data structures and the functions that operate on them is an ongoing process, and changes in one part of the code may have important consequences for another part. Imagine I made a poor choice of a column name early on: changing that would need to be done in every single function that reads, writes or analyzes data. Once the code reaches a certain level of complexity, organizing it well is just as important as writing it in the first place.

In the new update of dbUtilities.R, a database has a $version element, and every function checks that the database version matches the version for which the function was written. Obviously, this also means the developer must provide tools to migrate contents from an older version to a newer version. And since migrating can run into trouble and leave all data in an inconsistent and unfixable state, it's a good time to remind you to back up important data frequently.

Of course you will want to save your database once you've done any significant work with it. And you will especially want to save the databases you create for your Term Project. But you should also (and perhaps more importantly) save the script that you use to create the database in the first place. And on that note: when was the last time you made a full backup of your computer's hard-drive? Too long ago? I thought so.
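A version check of the kind described above could look something like the following minimal sketch. checkDBVersion() is a hypothetical illustration, not the actual dbUtilities.R implementation; it only assumes a database list with a $version element:

```r
# Hypothetical sketch of a version guard: stop early if the database
# was created for a different version of the code.
checkDBVersion <- function(db, expected) {
    if (is.null(db$version)) {
        stop("Database has no $version element - migrate or recreate it.")
    }
    if (db$version != expected) {
        stop(sprintf("Database version is %s, but this function needs %s.",
                     db$version, expected))
    }
    invisible(TRUE)
}
```

A function that reads or writes the database would call checkDBVersion(db, "1.0") first and refuse to proceed on a mismatch, rather than risk corrupting data.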

Backup your hard-drive now.


If your last backup at the time of next week's quiz was less than two days ago, you will receive a 0.5 mark bonus.


New Database

Here is some sample code to work with the new database, enter new protein data for YFO, save it and load it again when needed.


# You don't need to load the reference database refDB. If
# everything is set up correctly, it gets loaded at startup.
# (Just so you know: you can turn off that behaviour if you
# ever should want to...)


# First you need to load the newest version of dbUtilities.R

updateDButilities("7bb32ab3d0861ad81bdcb9294f0f6a737b820bf9")

# If you get an error: 
#    Error: could not find function "updateDButilities"
# ... then it seems you didn't do the previous update.

# Try getting the update with the new key but the previous function:
# updateDbUtilities()
#
# If that function is not found either, confirm that your ~/.Rprofile
# actually loads dbUtilities.R from your project directory. 

# As a desperate last resort, you could uncomment
# the following piece of code and run the update
# without verification...
#
# URL <- "http://steipe.biochemistry.utoronto.ca/abc/images/f/f9/DbUtilities.R"
# download.file(URL, paste(PROJECTDIR, "dbUtilities.R", sep=""), method="auto")
# source(paste(PROJECTDIR, "dbUtilities.R", sep=""))
#
# But be cautious: there is no verification. You yourself need
# to satisfy yourself that this "file from the internet" is what 
# it should be, before source()'ing it...


# After the file has been source()'d,  refDB exists.
ls(refDB)


# check the contents of refDB:
refDB$protein$name
refDB$taxonomy


# list refSeqIDs for saccharomyces cerevisiae genes.
refDB$protein[refDB$protein$taxID == 559292, "refSeqID"]


# To add some genes from YFO, I proceed as follows.
# Obviously, you need to adapt this to your YFO
# and the sequences in YFO that you have found
# with your PSI-BLAST search.

# Let's assume my YFO is the fly agaric (Amanita muscaria)
# and its APSES domain proteins have the following IDs
# (by the way, these are not RefSeq IDs and thus unlikely
# to be found in UniProt) ...
# KIL68212
# KIL69256
# KIL65817
#


# First, I create a copy of the database with a name that
# I will recognize to be associated with my YFO.
amamuDB <- refDB


# Then I fetch my protein data ...
tmp1 <- fetchProteinData("KIL68212")
tmp2 <- fetchProteinData("KIL69256")
tmp3 <- fetchProteinData("KIL65817")


# ... and if I am satisfied that it contains what I
# want, I add it to the database.
amamuDB <- addToDB(amamuDB, tmp1)
amamuDB <- addToDB(amamuDB, tmp2)
amamuDB <- addToDB(amamuDB, tmp3)


# Then I make a local backup copy. Note the filename and
# version number  :-)
save(amamuDB, file="amamuDB.01.RData")
 

# Now I can explore my new database ...
amamuDB$protein[amamuDB$protein$taxID == 946122, "refSeqID"]


# ... but if anything goes wrong, for example 
# if I make a mistake in checking which
# rows contain taxID 946122 ... 
amamuDB$protein$taxID = 946122

# Ooops ... what did I just do wrong?
#       ... what happened instead? 

amamuDB$protein$taxID


# ... I can simply recover from my backup copy:
load("amamuDB.01.RData")    
amamuDB$protein$taxID


 

Task:

Create your own version of the protein database by adding all the genes from YFO that you have discovered with the PSI-BLAST search for the APSES domain. Save it.


 


TBC


 



 


Footnotes and references


 

Ask, if things don't work for you!

If anything about the assignment is not clear to you, please ask on the mailing list. You can be certain that others will have had similar problems. Success comes from joining the conversation.


