Difference between revisions of "Computational Systems Biology Main Page"
To illustrate the requirements with a model solution, I have provided an [http://steipe.biochemistry.utoronto.ca/abc/students/index.php/User:Boris/BCB420-2019-Data_STRING example project page '''here'''], which links to a GitHub repository with the corresponding package. Studying this with some care will probably clarify many questions.

<div class="note">
;Note
:*If your data refers to chromosomal coordinates in any way, you '''must''' ensure the coordinates are from GRCh38 (hg38).<ref>For different approaches to convert from one assembly to the other see [https://www.biostars.org/p/65558/ '''this thread''' on Biostars].</ref>
:*Your chosen database will not always be the best choice of data source: often you can achieve your objective faster through Ensembl/BioMart. See [http://useast.ensembl.org/Homo_sapiens/Transcript/PDB?_format=HTML;db=core;g=ENSG00000139618;genomic=off;output=fasta;param=cdna;r=13:32889611-32973347;strand=feature;t=ENST00000380152 this sample annotation of BRCA2] for examples of what data is available.
</div>
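One common approach to the coordinate conversion is a "liftover". The following is a sketch only, assuming the Bioconductor packages <tt>rtracklayer</tt> and <tt>GenomicRanges</tt> are installed and that the UCSC chain file has been downloaded and gunzipped beforehand (the file name below is an assumption; see the Biostars thread above for alternatives):

```r
# Sketch: lift GRCh37 (hg19) coordinates over to GRCh38 (hg38).
# Assumes Bioconductor packages rtracklayer and GenomicRanges are installed,
# and that "hg19ToHg38.over.chain" has been fetched from the UCSC goldenPath
# downloads - file name and local availability are assumptions.
library(GenomicRanges)
library(rtracklayer)

gr19  <- GRanges(seqnames = "chr13",
                 ranges = IRanges(start = 32889611, end = 32973347))  # BRCA2 region
chain <- import.chain("hg19ToHg38.over.chain")
gr38  <- unlist(liftOver(gr19, chain))   # GRangesList -> GRanges
gr38                                     # coordinates on GRCh38
```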

{{Vspace}}

====Database choices====

{{Smallvspace}}

Here are the chosen (or assigned) databases. Follow the link in the "Note" column for details:

{{Smallvspace}}

<table>
<tr>
<td>Name</td>
<td>DB</td>
<td>Note</td>
</tr>
<tr class="s2">
<td>Edouard Al-chami</td>
<td>[https://www.ncbi.nlm.nih.gov/geo/ GEO (stimulus)]</td>
<td> <ref>Cell response to external stimuli (e.g. heat, salt, insulin, chemokines ...): find ~20 high-coverage experimental data sets, define the pipeline to download and process the sets into a common data structure, and apply quantile normalization. Result: an expression vector for each gene.</ref></td>
</tr>
<tr class="s1">
<td>Emily Ayala</td>
<td>[https://www.gencodegenes.org/ Gene models]</td>
<td> <ref>Find gene models (exons and chromosomal coordinates) for each gene. Possible sources are GENCODE v29 GTF or GFF3 files, or exons from BioMart. Result: for each gene, a set of chromosomal start/end coordinates for the principal isoform as defined by APPRIS.</ref></td>
</tr>
<tr class="s2">
<td>Deus Bajaj</td>
<td>eggNOG</td>
<td> </td>
</tr>
<tr class="s1">
<td>Cathy Cha</td>
<td>[https://www.ncbi.nlm.nih.gov/geo/ GEO (tissues)]</td>
<td> <ref>Differential expression in tissues (e.g. brain, epithelium, muscles ...): find ~20 high-coverage experimental data sets, define the pipeline to download and process the sets into a common data structure, and apply quantile normalization. Result: an expression vector for each gene.</ref></td>
</tr>
<tr class="s2">
<td>Nada Elnour</td>
<td>Human Protein Atlas</td>
<td> <ref>Find subcellular localization for each gene. Result: for each gene, the subcellular localizations it is associated with.</ref></td>
</tr>
<tr class="s1">
<td>Chantal Ho</td>
<td>[https://www.ncbi.nlm.nih.gov/geo/ GEO (diseases)]</td>
<td> <ref>Differential expression in disease states (e.g. diabetes, hypertension, RA, ...): find ~20 high-coverage experimental data sets, define the pipeline to download and process the sets into a common data structure, and apply quantile normalization. Result: an expression vector for each gene.</ref></td>
</tr>
<tr class="s2">
<td>Edward Ho</td>
<td>COSMIC</td>
<td> </td>
</tr>
<tr class="s1">
<td>Sapir Labes</td>
<td>GWAS</td>
<td> </td>
</tr>
<tr class="s2">
<td>Judy Lee</td>
<td>PDB</td>
<td> <ref>Find PDB structures of human proteins. Possible data sources: BioMart? PDB? NCBI's MMDB? If structures overlap, report only the best representative. This is a set of feature annotations for each gene that includes start and stop coordinates. You must validate the coordinates, i.e. make sure that the annotated residue numbers map accurately to the actual sequence associated with the HGNC symbol.</ref></td>
</tr>
<tr class="s1">
<td>Tina Lee</td>
<td>Pfam</td>
<td> <ref>Obtain annotations via Ensembl/BioMart. This is a set of feature annotations for each gene that includes start and stop coordinates. You must validate the coordinates, i.e. make sure that the annotated residue numbers map accurately to the actual sequence associated with the HGNC symbol.</ref></td>
</tr>
<tr class="s2">
<td>Jian Bin Lin</td>
<td>GEO</td>
<td> </td>
</tr>
<tr class="s1">
<td>Matthew Mcneil</td>
<td>COSMIC and GEO</td>
<td> <ref>Tissue-specific correlations of expression levels. Result: for each gene ... ??? Question: how are differentially spliced genes handled?</ref></td>
</tr>
<tr class="s2">
<td>Gabriela Morgenshtern</td>
<td>Awesome (or PANTHER)</td>
<td> </td>
</tr>
<tr class="s1">
<td>Yoonsik Park</td>
<td>[https://reactome.org/ Reactome pathways]</td>
<td> </td>
</tr>
<tr class="s2">
<td>Alesandro Rigido</td>
<td>[http://software.broadinstitute.org/gsea/msigdb/index.jsp MSigDB]</td>
<td> <ref>For a selected set of MSigDB sets, compute the co-occurrence probability of genes: how often do they co-occur in the same MSigDB set? This is a network-type result. Output will be two HGNC symbols and one probability for each queried pair. Don't precompute all 1e9 possible pairs; instead, conceptualize this as a tool that queries a compact data structure with the probabilities, e.g. a boolean matrix with one set-annotation per column (for each gene, TRUE if present in the set, FALSE if not present) that compares two row-vectors for each query.</ref></td>
</tr>
<tr class="s1">
<td>Fan Shen</td>
<td>SMART</td>
<td> </td>
</tr>
<tr class="s2">
<td>Rachel Silverstein</td>
<td>Human Phenotype Ontology</td>
<td> </td>
</tr>
<tr class="s1">
<td>Yiqiu Tang</td>
<td>[https://www.omim.org/ OMIM]</td>
<td> <ref>Gene-phenotype associations. Result: for each gene, the set of phenotypes it is associated with.</ref></td>
</tr>
<tr class="s2">
<td>Denitsa Vasileva</td>
<td>[https://www.ebi.ac.uk/GOA GO annotations]</td>
<td> <ref>Result: for each gene, the set of GO terms it is annotated to.</ref></td>
</tr>
<tr class="s1">
<td>Rachel Woo</td>
<td>Human Protein Atlas</td>
<td> <ref>Tissue data: a tissue-level expression vector. Result: for each gene ... ??? Question: how are differentially spliced genes handled?</ref></td>
</tr>
<tr class="s2">
<td>Alison Wu</td>
<td>[https://thebiogrid.org/ BioGRID]</td>
<td> <ref>Process genetic interactions only. Result: edge list (Weighted? Directed?)</ref></td>
</tr>
<tr class="s1">
<td>Yufei Yang</td>
<td>[http://gtrd.biouml.org/ GTRD]</td>
<td> <ref>ChIP-Seq-verified TF binding sites in gene promoter regions. Result: for each gene, a list of transcription factors that target its promoter region.</ref></td>
</tr>
<tr class="s2">
<td>Yin Yin</td>
<td>[http://proteincomplexes.org/ huMAP]</td>
<td> <ref>Protein complexes. Result: for each gene, all complexes (if any) it has been annotated to.</ref></td>
</tr>
<tr class="s1">
<td>Han Zhang</td>
<td>[http://hintdb.hgc.jp/htp/index.html HitPredict]</td>
<td> <ref>Weighted interaction graph. Result: edge list with weights.</ref></td>
</tr>
<tr class="s2">
<td>Xindi Zhang</td>
<td>[http://mips.helmholtz-muenchen.de/corum/ CORUM]</td>
<td> <ref>Protein complexes. Result: for each gene, all complexes (if any) it has been annotated to.</ref></td>
</tr>
<tr class="s1">
<td>Yuhan Zhang</td>
<td>ENCODE</td>
<td> </td>
</tr>
<tr class="s2">
<td>Liwen Zhuang</td>
<td>Human Disease Ontology</td>
<td> </td>
</tr>
</table>
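Several of the GEO projects above call for quantile normalization across data sets. As a minimal base-R sketch of the idea (packages such as <tt>preprocessCore</tt> or <tt>limma</tt> provide production implementations; the toy matrix below is illustrative only):

```r
# Minimal quantile normalization: replace each value by the mean of the
# values that share its rank across all samples (columns).
quantileNormalize <- function(m) {
  ranks  <- apply(m, 2, rank, ties.method = "first")  # rank of each value per sample
  sorted <- apply(m, 2, sort)                         # sorted values per sample
  means  <- rowMeans(sorted)                          # reference distribution
  apply(ranks, 2, function(r) means[r])               # map ranks back to means
}

set.seed(420)
m <- matrix(rexp(20, rate = 0.1), nrow = 5)  # toy data: 5 "genes" x 4 "samples"
mNorm <- quantileNormalize(m)
# after normalization, every sample (column) has the same distribution of values
```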
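The MSigDB task in the table above — a boolean membership matrix queried per gene pair — can be sketched in a few lines of R. The genes, set memberships, and the definition of the probability as the fraction of sets containing both genes are illustrative assumptions:

```r
# Toy gene-by-set membership matrix: TRUE if the gene appears in the set.
genes  <- c("BRCA2", "TP53", "EGFR")
member <- matrix(c(TRUE,  TRUE,  FALSE, TRUE,  FALSE,
                   TRUE,  FALSE, FALSE, TRUE,  TRUE,
                   FALSE, TRUE,  TRUE,  FALSE, FALSE),
                 nrow = 3, byrow = TRUE,
                 dimnames = list(genes, paste0("set", 1:5)))

# Co-occurrence for one queried pair: compare the two row vectors.
# Defined here (an assumption) as the fraction of sets containing both genes.
coOccurrence <- function(a, b, M) {
  sum(M[a, ] & M[b, ]) / ncol(M)
}

coOccurrence("BRCA2", "TP53", member)  # 0.4 (both genes occur in 2 of 5 sets)
```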
Contact me with any questions you may have.
{{Vspace}}

====Part II: Biocuration====
"Systems" are concepts, and working with systems requires expert knowledge. To explore the practice of expert curation of molecular systems, each of you will select one system in our second open-ended session and report on its components, its function(s) and its architecture. To start off:

* Choose a system from the [http://steipe.biochemistry.utoronto.ca/abc/students/index.php/BCB420_2019_Biocuration_table '''GO term table''' on the Student Wiki], confirm your choice with me, and replace the "N.N." in the table with your name.
* Explore the term on AmiGO, and explore the linked "seed genes" on UniProt.
* In PubMed, find recent reviews or other manuscripts that discuss the system and its context. Make sure you have not overlooked important literature; this will be part of your evaluation. If there is no suitable literature available, your GO term is not a suitable choice.
* Get an overview of your system and how it relates to the GO term you start out from.
* Define the system well, and define a five-letter code as a shorthand notation for the system, as discussed in class.
;Note
:A GO term is not a system, nor is the set of GOA-annotated genes a complete description of the system's members. A system may overlap the component/function/process described in a GO term to a large degree, but the term is not informed or constrained by our "system" definition. We use GO terms as a first approximation to system functions, and we use GOA to define "seed" genes as a starting point that may help us build out the system description. However, a system's roles include the creation, maintenance, destruction, and potentially recycling of components, and these roles are not always included in either the literature or the GO terms themselves.

{{Smallvspace}}

Read the [[Systems_curation|notes on curating a biological system]].

{{Smallvspace}}

{{#lst:Systems_curation|deliverables}}
;Deliverables: Form
<section begin=curation_form />
* Create a '''project page''' on the Student Wiki named according to the pattern: <code><nowiki>User:<your_name>/BCB420-2019-System_<your_system_code></nowiki></code>;
* add the category tag: <code><nowiki>[[Category:BCH420-2019_Curation_project]]</nowiki></code>;
* add the <code><nowiki>{{CC-BY}}</nowiki></code> template;
* summarize your "seed" information (follow the model [http://steipe.biochemistry.utoronto.ca/abc/students/index.php/User:Boris/BCB420-2019-System_PHALY#Stage_1:_The_System_seed for the '''PHALY''' system]);
* as you annotate your system, ensure that every component has a SyRO role defined, and that the evidence source and evidence code have been entered;
* the system data must be included in the page as a [https://jsonlint.com/ '''valid'''(!) JSON file], in an expandable section of text.<ref>Note: you '''must''' include line breaks in your JSON data! Data that has everything on one line will '''not''' be accepted.</ref>
<section end=curation_form />
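For illustration only, here is a minimal fragment of what such a JSON section might contain. The keys and values shown are hypothetical, not a prescribed schema — follow the data model discussed in class:

```json
{
  "system": "PHALY",
  "components": [
    {
      "sym": "BRCA2",
      "role": "component",
      "evidenceSource": "PMID:00000000",
      "evidenceCode": "ECO:0000305"
    }
  ]
}
```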

Both your data import script and your curated system model will be assessed in the Oral Test.
{{Vspace}}

====Part III: Exploration====

* a Vignette in the package that describes the tool and includes sample code for which the data is also provided in the package.

Your deliverables will be evaluated together with your participation in constructing the package.

;Deliverables: Form
<section begin=exploration_form />
* On the Student Wiki -
** Create a '''project page''' on the Student Wiki named according to the pattern: <code><nowiki>User:<your_name>/BCB420-2019-ExploratorySystemsAnalysis</nowiki></code>;
** add the category tag: <code><nowiki>[[Category:BCH420-2019_Exploration_project]]</nowiki></code>;
** add the <code><nowiki>{{CC-BY}}</nowiki></code> template;
** summarize the objectives of your exploration tool in terms of input, output, and interpretation;
** write a specification for your exploration tool;
** summarize example results.

* On GitHub -
** Fork the project [https://github.com/hyginn/BCB420.2019.ESA <code>BCB420.2019.ESA</code>];
** develop your code as a package function;
** write a vignette;
** make sure your changes pass without errors, warnings or notes;
** submit a pull request by Monday, March 25;
** address comments from the pull-request review before Tuesday, April 2.

The code is considered "submitted" when it passes the continuous integration checks, all pull-request reviews have been addressed, and your branch has been merged into the <code>BCB420.2019.ESA</code> package.
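The GitHub steps above can be sketched as follows. This is a sketch, not a prescribed workflow; <code><your_user></code> and the branch name are placeholders:

```shell
# Sketch of the fork-branch-PR flow; <your_user> and "my-esa-function" are placeholders.
git clone https://github.com/<your_user>/BCB420.2019.ESA.git   # clone your fork
cd BCB420.2019.ESA
git checkout -b my-esa-function        # work on a feature branch
# ... add your function in R/, your vignette in vignettes/ ...
R -e 'devtools::check()'               # must finish with no errors, warnings or notes
git add .
git commit -m "Add ESA function and vignette"
git push origin my-esa-function        # then open the pull request on GitHub
```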
{{Vspace}}

::Scope for a '''"practicable"''' make-up opportunity for the Oral Test will be limited.
* '''Submissions due on the {{LastdateSpring}}.'''
::Since the course does not have a final exam, the Faculty requires grades to be marked, collated and submitted a few days after the {{LastdateSpring}}. Therefore I cannot normally grant extensions beyond this date. The Faculty allows so-called ''informal extensions'' to be granted "in extraordinary circumstances"; in those cases too, the requirement to be "fair, equitable and reasonable" applies, i.e. you would need to demonstrate that the need for the extension was due to unavoidable circumstances that went significantly beyond what was expected of the rest of the class, and submit "official" documentation to me. In that case, (i) we would determine an adjusted submission date, (ii) I would initially submit a mark of 0 for the missing submissions, and (iii) I would submit an amended mark after that date, if appropriate. Note that the Faculty requires that such extensions not extend beyond a few days after the end of the Final Examination Period. If you require an extension beyond that date you need to submit a ''formal petition'' through your College Registrar.
----
* To prepare before next meeting ...
:* create a project page on the Student Wiki
:* study your database and figure out how the information it provides is related to the system data model
:* define your requirements
:* create a package based on [https://github.com/hyginn/rpt '''rpt''']
:* begin writing your workflow as a "literate programming" document
----
{{Smallvspace}}
<span class="mw-customtoggle-Notes03" style="vertical-align:bottom;">Details ... ▽△</span>
<div class="mw-collapsible mw-collapsed" id="mw-customcollapsible-Notes03"><small>
* Understand the context:
** What data is available? Explore your database and be sure to understand the semantics of the data.
** How is your data going to support systems annotations? Study the systems data model in the [https://github.com/hyginn/BCB420-2019-resources resources project].
** How are you going to present your data?
*** The [https://github.com/hyginn/rpt <tt>'''rpt'''</tt> package]: read the <tt>README</tt> and understand how it supports you in constructing your own R package.
*** Markdown: work through the [[RPR-Literate_programming|'''Literate Programming''' unit]] to get an idea of the principle, but note the difference between <tt>.Rmd</tt> and <tt>.md</tt> documents (we are using <tt>.md</tt> here; it is simpler).
*** Study the [https://github.com/hyginn/BCB420.2019.STRING sample solution] carefully. Understand what parts of it are relevant for your project, which ones are not, and what parts you may need that are not in the sample solution.
* Get started:
** Define your requirements. Define how you are going to download the source data, what the results data should look like, and how you are going to construct the results. Identify ambiguities, cleanup needs, and possibilities for validation.
** Start a project page on the Student Wiki and write your requirements in point form.
** Start building your package. Follow the instructions in the [https://github.com/hyginn/rpt <tt>'''rpt'''</tt> package]. Push the result to GitHub.
** Link to your package from your project page.
** Draft an outline of your workflow in your <tt>README.md</tt> document. Commit and push to GitHub.
* Communicate: whenever questions come up, post on the list.
{{Smallvspace}}
* Don't forget your Journal!
----
* To prepare before next meeting ...
:* Begin your project page
:* Define observables
:* Begin exploring your system
:* Start drafting a systems architecture
----
{{Smallvspace}}
<span class="mw-customtoggle-Notes05" style="vertical-align:bottom;">Details ... ▽△</span>
<div class="mw-collapsible mw-collapsed" id="mw-customcollapsible-Notes05"><small>
{{#lst:Computational_Systems_Biology_Main_Page|curation_form}}
* draft a '''hand-drawn sketch''' of the system architecture (cf. {{PDFlink|[http://steipe.biochemistry.utoronto.ca/abc/assets/BIN-SYS-Concepts.pdf "Systems Concepts"]}} <small>(this is the file that was assigned as required reading in Week 2)</small>);
* write down a '''list of observables''' for your system, and the relationship of the '''data''' we explored in Phase I to the system:
** What features do you expect to find for a gene that occurs in the system? (Annotation-type data)
** What features do you expect to be shared by two genes that occur in your system? (Network-type data)
** What features do you expect to be enriched for all genes in your system, or a defined subset? (Set/enrichment-type data)
{{Smallvspace}}
* Don't forget to write your Journal as you explore your system!
<td class="sc">6</td>
<td class="sc">
* Class was canceled due to an ice storm
</td>
<td class="sc">

<td class="sc">
* To prepare during reading week ...
:* Start your project page on the Student Wiki;
:* draft a hand-drawn sketch of the system architecture;
:* draft a list of system observables.
For details see [[Computational_Systems_Biology_Main_Page#Part_II:_Biocuration|the "Biocuration" deliverables]] (above).


<!--
----

</small></div>
-->
</td>

<td class="sc">7</td>
<td class="sc">
* Milestone report (major progress: you should be nearly done)
</td>
<td class="sc">
Latest revision as of 11:23, 2 April 2019
Computational Systems Biology
Course Wiki for BCB420 (Computational Systems Biology) and JTB2020 (Applied Bioinformatics).
This is our main tool to coordinate information, activities and projects in University of Toronto's computational systems biology course BCB420. If you are not one of our students, this site is unlikely to be useful. If you are here because you are interested in general aspects of bioinformatics or computational biology, you may want to review the Wikipedia article on bioinformatics, or visit Wikiomics. Contact boris.steipe(at)utoronto.ca with any questions you may have.
If you are enrolled in this course but have not been subscribed to the mailing list, or do not have an account on the Student Wiki, please contact me immediately.
BCB420 / JTB2020
These are the course pages for BCB420H (Computational Systems Biology). Welcome, you're in the right place.
These are also the course pages for JTB2020H (Applied Bioinformatics). How come? Why is JTB2020 not the graduate equivalent of BCB410 (Applied Bioinformatics)? Let me explain. When this course was conceived as a required part of the (then so-called) Collaborative PhD Program in Proteomics and Bioinformatics in 2003, there was an urgent need to bring graduate students to a minimal level of computer skills and programming; prior experience was virtually nonexistent. Fortunately, the field has changed and our current graduate students are usually quite competent in at least some practical aspects of computational biology. In this course we profit from the rich and diverse knowledge of the problem domain our graduate students have, while bringing everyone up to a level of competence in the practical, computational aspects.
- The 2019 course...
In this course we explore systems biology of human genes with computational means, in a project-oriented format. This will proceed in three phases:
- Foundations first: we will review basic computational skills and bioinformatics knowledge to bring everyone to the same level. In all likelihood you will need to start with these tasks well in advance of the actual lectures. This phase will include a comprehensive quiz on prerequisite material in week 3. We will explore data-sources and you will choose one data-source for which you will develop import code and document it in an R markdown document within an R package;
- Next we'll focus on Biocuration: the expertise-informed collection, integration and annotation of biological data. We will each choose a molecular "system" to work on, and define an ontology and data-model in which to annotate our system's components, their roles, and their relationships. The outcome of your curation task (together with your data script) will define the scope of this course's Oral Test;
- Finally, we will develop tools for Exploratory Data Analysis in computational systems biology. We will jointly develop code for a team-authored R package where everyone contributes one mini workflow for data preparation, exploration and interpretation. Your code contributions to the package will be assessed;
- There are several meta-skills that you will pick up "on the side": these include time management; working according to best practice of reproducible research in a collaborative environment on GitHub; report writing; and keeping a scientific lab journal.
Organization
- Dates
- BCB420/JTB2020 is a Winter Term course.
- Lectures: Tuesdays, 16:00 to 18:00. (Classes start at 10 minutes past the hour.)
- Note: there will be three open-ended collaborative planning sessions that may go well into the night. Attendance and participation are mandatory.
- Final Exam: None for this course.
- Events
- Tuesday, January 8 2019: Course officially begins. No class meeting. Get started on preparatory material (well in advance actually).
- Tuesday, January 15: First class meeting. Mock-quiz for preparatory material.
- Tuesday, January 22: First live quiz on preparatory material. Later: open-ended session on data import
- Tuesday, February 5: Open-ended session on system curation
- Tuesday, March 12: Open-ended session on exploratory data analysis
- Location
- MS 3278 (Medical Sciences Building).
- Departmental information
- For BCB420 see the BCB420 Biochemistry Department Course Web page.
- For JTB2020 see the JTB2020 Course Web page for general information.
Prerequisites and Preparation
This course has formal prerequisites of BCH441H1 (Bioinformatics) or CSB472H1 (Computational Genomics and Bioinformatics). I have no way of knowing what is being taught in CSB472, and no way of confirming how much you remember from any of your previous courses, like BCH441 or BCB410. Moreover, there are many alternative ways to become familiar with important course contents. Thus I generally enforce course prerequisites only weakly, and you should not assume that having taken any particular combination of courses will have prepared you sufficiently. Instead, I make the contents of the course very explicit. If your preparation is lacking, you will have to expend a very significant amount of effort. This is certainly possible, but whether you will succeed will depend on your motivation and aptitude.
The course requires (i) a solid understanding of molecular biology, (ii) solid, introductory level knowledge of bioinformatics, (iii) a good working knowledge of the R programming language.
The prerequisite material for this course includes the contents of the 2018 BCH441 course:
- <command>-Click to open the Bioinformatics Learning Units Map in a new tab, scale for detail.
- Open the Bioinformatics Knowledge Network Map and get an overview of the material. You should confidently be able to execute the tasks in the four Integrator Units .
- If you have taken BCH441 before, please note that many of the units have undergone significant revisions and material has been added. You will need to review the material and familiarize yourself more with the R programming aspects.
- If you have not taken BCH441, you will need to work through the material rather carefully. Estimate at least three weeks of time and get started immediately.
A minimal subset of bioinformatics knowledge you need to begin with work in BCB420 is linked from the BCB420-specific map below. To ensure everyone is adequately prepared, we will hold a Quiz on the Live units on that map in the third week of class. We will hold a mock-quiz on the material in the second week (our first class meeting) so everyone knows what to expect.
- <command>-Click to open the BCB420 Preparation Learning Units Map in a new tab, scale for detail.
- Hover over a learning unit to see its keywords.
- Click on a learning unit to open the associated page.
- The nodes of the learning unit network are colour-coded:
- Live units are green
- Units under development are light green. These are still in progress.
- Stubs (placeholders) are pale. These still need basic contents.
- Milestone units are blue. These collect a number of prerequisites to simplify the network.
- Integrator units are red. These embody the main goals of the course. These units are not for evaluation in BCB420.
- Arrows point from a prerequisite unit to a unit that builds on its contents.
Grading, Activities, Deliverables
For details of the deliverables, see below.
Activity | Weight BCB420 (Undergraduates) | Weight JTB2020 (Graduates)
Self-evaluation and Feedback session on preparatory material ("Quiz"[1]) | 20 marks | 15 marks
Oral Test (March 7/8) | 30 marks | 30 marks
Collaborative software task and participation | 20 marks | 15 marks
Journal | 25 marks | 25 marks
Insights | 5 marks | 5 marks
Pull request reviews | – | 10 marks
Total | 100 marks | 100 marks
We are covering a lot of ground in this course, and all deliverables feed into a collaborative project. Everyone's continuous, active participation is essential for making this a success: for you personally and for the class as a team.
Getting started
Everything starts with the following four units:
- Introduction to editing Wiki pages (Optional if you have taken BCH441 or BCB410.)
This should be the first learning unit you work with, since your Course Journal, as well as all other deliverables, will be kept on a Wiki. This unit includes an introduction to authoring Wikitext and the structure of Wikis, in particular how different pages live in separate "Namespaces". The unit also covers the standard markup conventions - "Wikitext markup", the same conventions that are used on Wikipedia - as well as some extensions that are specific to our Course and Student Wiki. We also discuss page categories that help keep a Wiki organized, licensing under a Creative Commons Attribution license, and how to add licenses and other page components through template codes.
- Your Course Journal (Mandatory - your Journals will be assessed. Note that the "rules" have changed - study the unit carefully and read the evaluation rubrics.)
Keeping a journal is an essential task in a laboratory. To practice keeping a technical journal, you will document your activities as you are working through the material of the course. A significant part of your term grade will be given for this Course Journal. This unit introduces components and best practice for lab- and course journals and includes a wiki-source template to begin your own journal on the Student Wiki.
- The "Plagiarism Unit" (Mandatory - must be the first entry in your Journal.)
Academic Integrity is a promise that scholars and scientists world-wide give each other, that we will uphold, protect, and promote ethical and practical standards for our work. Its most basic values are proclaimed as honesty, trust, fairness, respect, responsibility, and courage. These are simple ideas, but in order to give them meaning we need to discuss how these values get translated to the details of our everyday work. Unfortunately, this important topic is often compressed to discussing cheating and plagiarism, to managing procedures to detect dishonesty, and to threatening sanctions. It is overlooked that those are just the manifestations of much deeper problems, and focussing on those symptoms alone perpetuates a stereotyped us-versus-them mentality of educators and students alike that is much more likely to make the problem worse than to solve it. The key to counter this lies in a proper understanding of academic integrity as a relational value, and respect as its foundation.
Discussing academic integrity in the abstract is of limited use; the challenge is to put the concepts into practice, in every aspect of this course. This is not a question of behaviour, but of attitude. The attitude needs to be reflected in the choice of teaching materials, in the care in their preparation, in the impartiality and reproducibility we bring to our experiments, in mutual trust in class, in fairness in assessments, and in honesty in assignments. One everyday issue is attribution, and we operate a Full Disclosure Policy for attribution in this course. This means everything that is not one's own, original idea must be identified and properly attributed. Neither I nor you are already perfect in this, but I trust we can come together as a learning community to educate each other and improve.
- The "insights!" page (Mandatory - your "insights!" pages will be assessed.)
In parallel with your other work, you will maintain an insights! page on which you collect valuable insights and learning experiences from the course. Through this you ask yourself: what does this material mean, both for the field and for myself?
Once you have completed these four units, get started immediately on the Introduction-to-R units. You need time and practice, practice, practice[2] to acquire the programming skills you need for the course. Whenever you want to take a break from studying R, continue with the other preparatory units.
Part I: Foundations and Data
Don't forget to document your work in your Journal!
Your level of preparedness will be assessed in a "mock quiz" in week two, after which you have one more week to fill in gaps before our Quiz in week three. With that out of the way, we will look at different data sources that are useful in systems biology, including gene-level annotations and collections of experimental data, relationship data like physical and epistatic interactions, and systems-level data like metabolic or regulatory pathways. Each of you will select one data-source in our first open-ended session and then work on the following deliverables:
- a brief summary page on the Student Wiki: the page needs to be named according to the pattern:
User:<your_name>/BCB420-2019-Data_<your_data_resource>
and contain the category tag: [[Category:BCH420-2019_Data_project]]
- an R package derived from rpt,
- hosted on GitHub,
- named according to the pattern BCB420.2019.<your_data_resource>[3],
- containing an R markdown page that describes and annotates code for
- importing the chosen data in platform-independent function calls (see the footnote for details and restrictions)[4],
- and cleaning it up where necessary,
- and normalizing its identifiers to HGNC (HUGO) gene symbols,
- and containing sample data for our defined reference dataset of genes,
- and containing a report on the data statistics,
- and containing code to validate the import process,
- and containing the (provided) function to display the markdown file.
Required: a user needs to be able to use the information you provide to understand the semantics of the data, import the data, clean it up where necessary, and associate it with HGNC gene symbols in an R data frame. They should be able to use the data as a feature in a machine-learning protocol without further preprocessing steps.
To illustrate the requirements with a model solution, I have provided an example project page here, which links to a Github repository with the corresponding package. Studying this with some care will probably clarify many questions.
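To make the normalization requirement concrete, here is a minimal R sketch of mapping source identifiers to HGNC gene symbols. Everything in it is a made-up placeholder, not part of the assignment: the two-entry map, the toy data frame, and the column names. Your package must build its map from your actual data source.

```r
# Toy illustration of normalizing source IDs to HGNC gene symbols.
# The map below is a hypothetical placeholder with just two entries.
hgncMap <- c(ENSG00000012048 = "BRCA1",
             ENSG00000139618 = "BRCA2")

myData <- data.frame(ID    = c("ENSG00000139618", "ENSG00000012048"),
                     score = c(0.91, 0.87),
                     stringsAsFactors = FALSE)

# Normalize: look up each source ID in the map ...
myData$sym <- unname(hgncMap[myData$ID])

# ... and validate: every row should now carry an HGNC symbol.
sum(is.na(myData$sym))   # number of unmapped IDs; should be 0
```

In a real import you would also have to handle IDs that map to zero or to several symbols; how you resolve those cases is part of what your R markdown page must document.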
- Note
- If your data refers to chromosomal coordinates in any way, you must ensure the coordinates are from GRCh38 (hg38)[5]
- Your chosen database will not always be the best choice of data source: often you can achieve your objective faster through Ensembl/BioMart. See this sample annotation of BRCA2 for examples of what data is available.
Database choices
Here are the chosen (or assigned) databases. Follow the link in the "Note" column for details:
Name | DB | Note |
Edouard Al-chami | GEO (stimulus) | [6] |
Emily Ayala | Gene models | [7] |
Deus Bajaj | EGGNOG | |
Cathy Cha | GEO (tissues) | [8] |
Nada Elnour | Human Protein Atlas | [9] |
Chantal Ho | GEO (diseases) | [10] |
Edward Ho | Cosmic | |
Sapir Labes | GWAS | |
Judy Lee | PDB | [11] |
Tina Lee | Pfam | [12] |
Jian Bin Lin | GEO | |
Matthew Mcneil | COSMIC and GEO | [13] |
Gabriela Morgenshtern | Awesome (or PANTHER) | |
Yoonsik Park | Reactome pathways | |
Alesandro Rigido | MsigDB | [14] |
Fan Shen | SMART | |
Rachel Silverstein | Human Phenotype Ontology | |
Yiqiu Tang | OMIM | [15] |
Denitsa Vasileva | GO annotations | [16] |
Rachel Woo | Human Protein Atlas | [17] |
Alison Wu | BioGRID | [18] |
Yufei Yang | GTRD | [19] |
Yin Yin | huMAP | [20] |
Han Zhang | HitPredict | [21] |
Xindi Zhang | CORUM | [22] |
Yuhan Zhang | Encode | |
Liwen Zhuang | Human Disease Ontology |
Contact me with any questions you may have.
Part II: Biocuration
"Systems" are concepts and working with systems requires expert knowledge. To explore the practice of expert curation of molecular systems, each of you will select one system in our second open-ended session and report on its components, its function(s) and its architecture. To start off:
- Choose a system from the GO term table on the Student Wiki, confirm your choice with me and replace the "N.N." in the table with your name.
- Explore the term on AmiGO, and explore the linked "seed-genes" on UniProt.
- In PubMed, find recent reviews or other manuscripts that discuss the system and its context. Make sure you have not overlooked important literature; this will be part of your evaluation. If there is no suitable literature available, your GO term is not a suitable choice.
- Get an overview of your system and how it relates to the GO term you start out from.
- Define the system well and choose a five-letter code as a shorthand notation for the system, as discussed in class.
- Note
- A GO term is not a system, nor is the set of GOA-annotated genes a complete description of the system's members. A system may overlap the component/function/process described in a GO term to a large degree, but the term is not informed or constrained by our "system" definition. We use GO terms as a first approximation to system functions, and we use GOA to define "seed" genes as a starting point that may help us build out the system description. However, a system's roles include the creation, maintenance, destruction, and potentially recycling of components, and these roles are not always included in either the literature or the GO terms themselves.
Read the notes on curating a biological system.
- General goal: System Architecture
A system architecture describes the system’s behaviour in terms of its subsystems and their relationships, given its context, within its boundaries.
- Deliverables: Contents
- A structured description of the system, including its name, definition, description, associated GO terms, an initial set of computationally defined genes it contains, and references to a seed set of literature articles that will be used for curation;
- A description of concepts of importance. This includes the biological context, and background knowledge about the components.
- An enumeration of components from:
- literature review;
- direct annotation, i.e. genes discovered because they have been annotated with a relationship to the system, in a database such as UniProt, NCBI-Protein or any of the three GO ontologies represented in GOA (GO annotations);
- network and pathway annotation, i.e. genes discovered in the network neighbourhood of system components, in a database like STRING or IntAct, or in pathways such as KEGG or Reactome;
- phenotype and behaviour, i.e. genes annotated to a related phenotype in OMIM or the GWAS catalog;
- ... each with a note on the type and quality of evidence that supports their inclusion.
- Completion of role annotation: each component has one role annotated to it (list components more than once if several distinct roles relate to the same, or overlapping entities); list roles that are expected, or required, but have no components associated with them.
- A system architecture sketch that integrates the system information;
- A formatted set of system data, ready to be imported into a system database.
- Deliverables: Form
- Create a project page on the Student Wiki named according to the pattern: User:<your_name>/BCB420-2019-System_<your_system_code>;
- add the category tag: [[Category:BCH420-2019_Curation_project]];
- add the {{CC-BY}} template;
- summarize your "seed" information (follow the model for the PHALY system);
- as you are annotating your system, ensure all components have a SyRO role defined, and that the evidence source and evidence code have been entered;
- the system data needs to be included in the page as a valid(!) JSON file, in an expandable section of text.[23]
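As an illustration of the required line-broken formatting only (this is not a real system record: the code, field names, and values below are invented, and your actual structure follows the PHALY model), a JSON fragment might look like this:

```json
{
  "system": "XXXXX",
  "components": [
    {
      "symbol": "BRCA2",
      "role": "member",
      "evidence": "ECO:0000305"
    }
  ]
}
```

The point is that every element sits on its own line, so the file can be read, diffed, and validated; single-line JSON will not be accepted.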
Both your data import script and your curated system model will be assessed in the Oral Test.
Part III: Exploration
At the end of Parts I and II we will have data available and annotated systems that induce relations on the data. Using this information, we can formulate tools for exploratory data analysis (EDA): isolating and evaluating features, looking at correlations, identifying patterns in networks, clustering data etc. Each of you will select one EDA workflow in our third open-ended session for which to build a tool in a jointly authored R package. Your deliverables are:
- a project page on the student Wiki that contains a specification of your tool;
- an implementation of your tool as part of a jointly authored R package under continuous integration;
- a Vignette in the package that describes the tool and includes sample code for which the data is also provided in the package.
Your deliverables will be evaluated together with your participation in constructing the package.
- Deliverables: Form
- On the Student Wiki -
- Create a project page on the Student Wiki named according to the pattern: User:<your_name>/BCB420-2019-ExploratorySystemsAnalysis;
- add the category tag: [[Category:BCH420-2019_Exploration_project]];
- add the {{CC-BY}} template;
- summarize the objectives of your exploration tool in terms of input, output, and interpretation;
- write a specification for your exploration tool;
- summarize example results.
- On GitHub -
- Fork the project BCB420.2019.ESA;
- Develop your code as a package function;
- Write a vignette;
- Make sure your changes pass without errors, warnings or notes;
- Submit a pull request by Monday, March 25.
- Address comments from the pull-request review before Tuesday, April 2.
The code is considered "submitted" when it passes the continuous integration checks, all pull-request reviews have been addressed, and your branch has been merged into the BCB420.2019.ESA package.
Extensions for term work
Extensions for term work in this course are subject to Faculty regulations and will only be considered within the framework determined by the Faculty policies.
- Regular Submissions
- It is Faculty policy to require assessments to be "fair, equitable and reasonable". In order to be equitable, granting extensions requires the student to demonstrate that the need for the extension is due to unavoidable circumstances that go significantly beyond what was expected of the rest of the class. In general "official" documentation will be required: UofT Verification of Illness or Injury Form, Student Health or Disability Related Certificate, a College Registrar’s Letter, and an Accessibility Services Letter.
- Signing up for the oral tests.
- The dates for the Oral Test have been announced at the beginning of the term on this syllabus. If you fail to sign up for a slot, or if you fail to show up at the scheduled time, we apply the Faculty policy for a missed Midterm Test: "if the reasons for missing your test are acceptable to the instructor, a make-up opportunity should be offered to the student where practicable." "Acceptable" reasons will be considered
- if they are justified,
- if the consideration is "fair, equitable and reasonable", and
- if the reason is documented through one of the four types of "official" documentation: UofT Verification of Illness or Injury Form, Student Health or Disability Related Certificate, a College Registrar’s Letter, and an Accessibility Services Letter.
- Scope for a "practicable" make-up opportunity for the Oral Test will be limited.
- Submissions due on the last day to submit course work in the Spring term (Tuesday, April 2 2019).
- Since the course does not have a final exam, the Faculty requires grades to be marked, collated and submitted a few days after the last day to submit course work in the Spring term (Tuesday, April 2 2019). Therefore I cannot normally grant extensions beyond this date. The Faculty allows so-called informal extensions to be granted "in extraordinary circumstances"; in those cases too, the requirement to be "fair, equitable and reasonable" will apply, i.e. you would need to demonstrate that the need for the extension was due to unavoidable circumstances that go significantly beyond what was expected of the rest of the class, and submit "official" documentation to me. In that case, (i) we would determine an adjusted submission date, (ii) I will initially submit a mark of 0 for the missing submissions, and (iii) I will submit an amended mark after that date, if appropriate. Note that the Faculty requires that such extensions not go beyond a few days after the end of the Final Examination Period. If you require an extension beyond that date, you need to submit a formal petition through your College Registrar.
Late penalties
Late penalties will be applied according to the following formula: (marks achieved) * 0.5^(fractional days late). However, material submitted more than 3.0 days late (72 hours or more) will be marked zero. Note: this does not apply to material due before the Oral Test (see there).
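A quick sketch of the formula in R; the function name and the sample mark of 80 are invented for illustration:

```r
# Late penalty: (marks achieved) * 0.5^(fractional days late);
# submissions that are 72 hours late or more are marked zero.
latePenalty <- function(marks, daysLate) {
  if (daysLate >= 3.0) return(0)
  marks * 0.5^daysLate
}

latePenalty(80, 0)    # on time: 80
latePenalty(80, 1)    # one day late: 40
latePenalty(80, 1.5)  # 36 hours late: about 28.3
latePenalty(80, 3)    # 72 hours or more: 0
```

Note that the exponent takes fractional days, so the penalty grows continuously with every hour of lateness, not in daily steps.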
Copyright and Licensing
We follow FOSS (free and open-source software) principles in this course. You automatically own copyright to all material you prepare. All material must be licensed for free re-use, under the condition of fair attribution. In practice:
All pages that you place on the Student Wiki must include a {{CC-BY}} tag. All documentation within GitHub pages that you prepare for this course must include a Creative Commons License - Attribution (CC-BY), v. 4.0 or later. All code submitted for this course must be licensed under the MIT software license. Unlicensed submissions will have marks deducted and may be removed from the Wiki.
Academic integrity
Our rules on Plagiarism and Academic Misconduct are clearly spelled out in this learning unit. This unit is part of our course prerequisites, and everyone documents in their course journal that they have worked through the unit and understood it. Consequences of having to report to the Office of Student Academic Integrity (OSAI) for plagiarism, misrepresentation or falsification include an indelible failing mark on the transcript, a delay in graduation, or not being able to complete your POSt. Please take extra time to clearly understand the requirements, and define for yourself what they mean for every aspect of your work.
Marks adjustments
I do not adjust marks towards a target mean and variance (i.e. there will be no "belling" of grades). I feel strongly that such "normalization" detracts from a collaborative and mutually supportive learning environment. If your classmate gets a great mark because you helped them with a difficult concept, this should never have the effect that it brings down your mark through class average adjustments. Collaborate as much as possible, it is a great way to learn. But do keep it honest and carefully consider our rules on Plagiarism and Academic Misconduct.
Timetable and contents details
Note: The general outline of the course as described above is current for the 2019 Winter Term. Filling in the activity details below is still in progress.
Note: Click on the "▽" symbol to see details for each week's activities.
Part I: Foundations
Week 1 | In class: Tuesday, January 8 2019 | Details ... ▽△
Week 2 | In class: Tuesday, January 15 2019 | Details ... ▽△
To be well prepared, you need to understand the various categories of data that are available and to have narrowed your choice to two or three datasets that you know fulfill the requirements. Read:
Week 3 | In class: Tuesday, January 22 2019 | Open ended session. Details ... ▽△
Week 4 | In class: Tuesday, January 29 2019 | Details ... ▽△
Part II: Curation
Week 5 | In class: Tuesday, February 5 2019 | Open ended session. Details ... ▽△
Week 6 | In class: Tuesday, February 12 2019 | Details ... ▽△
Week – | In class: Tuesday, February 19 2019 | For details see the "Biocuration" deliverables (above).
Week 7 | In class: Tuesday, February 26 2019 | Details ... ▽△
Week 8 | In class: Tuesday, March 5 2019 | Details ... ▽△
Part III: Exploration
Week 9 | In class: Tuesday, March 12 2019 | Open ended session. Details ... ▽△
Week 10 | In class: Tuesday, March 19 2019 | Details ... ▽△
Week 11 | In class: Tuesday, March 26 2019 | Details ... ▽△
Week 12 | In class: Tuesday, April 2 2019 | Details ... ▽△
Resources
- Course related
- Student Wiki
- The Course Google Group.
- Netiquette for the Group mailing list
Miller et al. (2011) Strategies for aggregating gene expression data: the collapseRows R function. BMC Bioinformatics 12:322. (pmid: 21816037)
[ PubMed ] [ DOI ] BACKGROUND: Genomic and other high dimensional analyses often require one to summarize multiple related variables by a single representative. This task is also variously referred to as collapsing, combining, reducing, or aggregating variables. Examples include summarizing several probe measurements corresponding to a single gene, representing the expression profiles of a co-expression module by a single expression profile, and aggregating cell-type marker information to de-convolute expression data. Several standard statistical summary techniques can be used, but network methods also provide useful alternative methods to find representatives. Currently few collapsing functions are developed and widely applied. RESULTS: We introduce the R function collapseRows that implements several collapsing methods and evaluate its performance in three applications. First, we study a crucial step of the meta-analysis of microarray data: the merging of independent gene expression data sets, which may have been measured on different platforms. Toward this end, we collapse multiple microarray probes for a single gene and then merge the data by gene identifier. We find that choosing the probe with the highest average expression leads to best between-study consistency. Second, we study methods for summarizing the gene expression profiles of a co-expression module. Several gene co-expression network analysis applications show that the optimal collapsing strategy depends on the analysis goal. Third, we study aggregating the information of cell type marker genes when the aim is to predict the abundance of cell types in a tissue sample based on gene expression data ("expression deconvolution"). We apply different collapsing methods to predict cell type abundances in peripheral human blood and in mixtures of blood cell lines. Interestingly, the most accurate prediction method involves choosing the most highly connected "hub" marker gene. Finally, to facilitate biological interpretation of collapsed gene lists, we introduce the function userListEnrichment, which assesses the enrichment of gene lists for known brain and blood cell type markers, and for other published biological pathways. CONCLUSIONS: The R function collapseRows implements several standard and network-based collapsing methods. In various genomic applications we provide evidence that both types of methods are robust and biologically relevant tools.
Chang et al. (2013) Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline. BMC Bioinformatics 14:368. (pmid: 24359104)
[ PubMed ] [ DOI ] BACKGROUND: As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. RESULTS: We performed 12 microarray meta-analysis methods for combining multiple simulated expression profiles, and such methods can be categorized for different hypothesis setting purposes: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE gene with non-zero effect in "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated hypothesis settings behind the methods and further apply multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. CONCLUSIONS: The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website.
Thompson et al. (2016) Cross-platform normalization of microarray and RNA-seq data for machine learning applications. PeerJ 4:e1621. (pmid: 26844019)
[ PubMed ] [ DOI ] Large, publicly available gene expression datasets are often analyzed with the aid of machine learning algorithms. Although RNA-seq is increasingly the technology of choice, a wealth of expression data already exist in the form of microarray data. If machine learning models built from legacy data can be applied to RNA-seq data, larger, more diverse training datasets can be created and validation can be performed on newly generated data. We developed Training Distribution Matching (TDM), which transforms RNA-seq data for use with models constructed from legacy platforms. We evaluated TDM, as well as quantile normalization, nonparanormal transformation, and a simple log 2 transformation, on both simulated and biological datasets of gene expression. Our evaluation included both supervised and unsupervised machine learning approaches. We found that TDM exhibited consistently strong performance across settings and that quantile normalization also performed well in many circumstances. We also provide a TDM package for the R programming language.
Notes
- ↑ I call these activities Quiz sessions for brevity, however they are not quizzes in the usual sense, since they rely on self-evaluation and immediate feedback.
- ↑ It's practice!
- ↑ According to "Writing R Extensions": "The mandatory ‘Package’ field gives the name of the package. This should contain only (ASCII) letters, numbers and dot, have at least two characters and start with a letter and not end in a dot." Deviating from this will result in a package check error.
- ↑ Note: the repository absolutely must not contain any datafile of more than 1 MB in size! Rather, it must contain clear instructions on how to download the data. Packages that violate the size limitations will not be evaluated. The code you write shall expect the data in a sister directory of your working directory, which is called data. For example, if I were to store a datafile by the name STRING_90.dat, my code would construct the path to it in a platform-independent way as file.path("..", "data", "STRING_90.dat").
- ↑ For different approaches to convert from one to the other see this thread on Biostars.
- ↑ Cell response to external stimuli (e.g. heat, salt, insulin, chemokines, ...): Find ~20 high-coverage experimental data sets, define the pipeline to download and process the sets into a common data structure, and apply quantile normalization. Result: an expression vector for each gene.
- ↑ Find gene models (exons and chromosomal coordinates) for each gene. Possible sources are Gencode v29 GTF or Gff3 files, or exons from biomart. Result: for each gene, a set of chromosomal start/end coordinates for the principal isoform as defined by APPRIS.
- ↑ Differential expression in tissues (e.g. brain, epithelium, muscle, ...): Find ~20 high-coverage experimental data sets, define the pipeline to download and process the sets into a common data structure, and apply quantile normalization. Result: an expression vector for each gene.
- ↑ Find subcellular localization for each gene. Result: for each gene, the subcellular localizations it is associated with.
- ↑ Differential expression in disease states (e.g. diabetes, hypertension, RA, ...): Find ~20 high-coverage experimental data sets, define the pipeline to download and process the sets into a common data structure, and apply quantile normalization. Result: an expression vector for each gene.
- ↑ Find PDB structures of human proteins. Possible data sources: Biomart? PDB? NCBI's MMDB? If structures overlap, report only the best representative. This is a set of feature annotations for each gene that includes start and stop coordinates. You must validate the coordinates, i.e. make sure that the annotated residue numbers map accurately to the actual sequence associated with the HGNC symbol.
- ↑ Obtain annotations via Ensembl/BioMart. This is a set of feature annotations for each gene that includes start and stop coordinates. You must validate the coordinates, i.e. make sure that the annotated residue numbers map accurately to the actual sequence associated with the HGNC symbol.
- ↑ Tissue specific correlations of expression levels. Result: for each gene ... ??? Question: how are differentially spliced genes handled?
- ↑ For a selected set of MSigDB sets, compute the co-occurrence probability of genes: how often do they co-occur in the same MSigDB set? This is a network-type result. Output will be two HGNC symbols and one probability for each queried pair. Don't precompute all 1e9 possible pairs, but conceptualize this as a tool that queries a compact data structure with the probabilities, e.g. a boolean matrix with one set-annotation per column (for each gene TRUE if present in the set, FALSE if not present) that compares two row-vectors for each query.
- ↑ Gene phenotype associations. For each gene, the set of phenotypes it is associated with.
- ↑ For each gene, the set of GO terms it is annotated to.
- ↑ Tissue Data: tissue level expression vector. Result: for each gene ... ??? Question: how are differentially spliced genes handled?
- ↑ Process genetic interactions only. Result: edge list (Weighted? Directed?)
- ↑ ChIP-Seq-verified TF binding sites in gene promoter regions. Result: for each gene, a list of transcription factors that target its promoter region.
- ↑ Protein complexes. Result: for each gene, all complexes (if any) it has been annotated to.
- ↑ Weighted interaction graph. Result: edge list with weights.
- ↑ Protein complexes. Result: for each gene, all complexes (if any) it has been annotated to.
- ↑ Note: you must include line breaks with your JSON data! Data that has everything on one line will not be accepted.
- ↑ Note: late-penalties apply.
- ↑ Note: you must include line breaks with your JSON data! Data that has everything on one line will not be accepted.