
Machine Learning


This page is a placeholder, or under current development; it is here principally to establish the logical framework of the site. The material on this page is correct, but incomplete.


Overview of "classical" and current approaches to machine learning.



 

Introductory reading: {{WP|Machine learning}}


 

==Introduction==


==Paradigms==

===Neural Networks===
{{WP|Artificial neural network|Neural network}}


===Hidden Markov Models===
{{WP|Hidden Markov model}}


===Support Vector Machines===
{{WP|Support vector machine}}


===Bayesian Networks===
{{WP|Bayesian network}}


==Training sets==

Gold standards provide the true positives; a central problem is generating true negatives from non-observed data...
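
The following minimal sketch in R illustrates the idea; the gene identifiers and "gold-standard" pairs are invented for the example, and treating non-observed pairs as negatives is exactly the fallible assumption noted above.

<source lang="R">
# Sketch: assemble a balanced two-class training set when only positives
# are curated. All identifiers and pairs are made up for illustration.
set.seed(112358)

items <- sprintf("gene%02d", 1:20)     # hypothetical identifiers

# Hypothetical gold-standard positives (treated as ordered pairs for simplicity):
positives <- unique(data.frame(a = sample(items, 40, replace = TRUE),
                               b = sample(items, 40, replace = TRUE),
                               stringsAsFactors = FALSE))
positives <- positives[positives$a != positives$b, ]

# Candidate negatives: every ordered pair that was never observed as a
# positive - i.e. we assume (fallibly) that non-observed means negative.
allPairs   <- expand.grid(a = items, b = items, stringsAsFactors = FALSE)
allPairs   <- allPairs[allPairs$a != allPairs$b, ]
pairKey    <- function(x) paste(x$a, x$b)
candidates <- allPairs[!(pairKey(allPairs) %in% pairKey(positives)), ]

# Sample as many negatives as there are positives, for a balanced set:
negatives <- candidates[sample(nrow(candidates), nrow(positives)), ]

trainingSet <- rbind(cbind(positives, class = "positive"),
                     cbind(negatives, class = "negative"))
table(trainingSet$class)
</source>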


==ROC and associated metrics==
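
As a worked illustration (base R only, with made-up scores), the sketch below computes an empirical ROC curve by sweeping a decision threshold across classifier scores, and derives the associated metrics: true-positive rate (sensitivity), false-positive rate (1 - specificity), and the AUC via the trapezoidal rule.

<source lang="R">
# Empirical ROC curve and AUC for simulated classifier scores.
set.seed(112358)
labels <- c(rep(1, 50), rep(0, 50))                    # 1 = positive class
scores <- c(rnorm(50, mean = 1), rnorm(50, mean = 0))  # higher = more positive

# Sweep the threshold through the sorted scores:
ord <- order(scores, decreasing = TRUE)
TPR <- cumsum(labels[ord] == 1) / sum(labels == 1)     # sensitivity
FPR <- cumsum(labels[ord] == 0) / sum(labels == 0)     # 1 - specificity

# AUC by the trapezoidal rule over the ROC points:
auc <- sum(diff(c(0, FPR)) * (TPR + c(0, head(TPR, -1)))) / 2

plot(c(0, FPR), c(0, TPR), type = "l",
     xlab = "False positive rate", ylab = "True positive rate",
     main = sprintf("ROC curve (AUC = %.3f)", auc))
abline(0, 1, lty = 2)   # the no-discrimination diagonal
</source>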

==Machine learning in '''R'''==
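
A minimal example of the workflow, assuming the e1071 package (an interface to libsvm) is installed; the built-in iris data and default parameters are arbitrary choices for illustration, not recommendations.

<source lang="R">
# Train and evaluate a support vector machine on R's built-in iris data.
# install.packages("e1071")   # if not yet installed
library(e1071)

set.seed(112358)
idx   <- sample(nrow(iris), 0.7 * nrow(iris))   # 70/30 train/test split
train <- iris[idx, ]
test  <- iris[-idx, ]

model <- svm(Species ~ ., data = train)         # RBF kernel by default
pred  <- predict(model, newdata = test)

table(predicted = pred, observed = test$Species)  # confusion matrix
mean(pred == test$Species)                        # test-set accuracy
</source>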

   

==Further reading and resources==

Hinton et al. (2006) A fast learning algorithm for deep belief nets. Neural Comput 18:1527-54. (pmid: 16764513)

[ PubMed ] [ DOI ] We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.

Lodhi, H. (2012) Computational biology perspective: kernel methods and deep learning. WIREs: Computational Statistics 4(5):455-465. (pmid: None)

[ Source URL ] [ DOI ] The field of machine learning provides useful means and tools for finding accurate solutions to complex and challenging biological problems. In recent years a class of learning algorithms namely kernel methods has been successfully applied to various tasks in computational biology. In this article we present an overview of kernel methods and support vector machines and focus on their applications to biological sequences. We also describe a new class of approaches that is termed as deep learning. These techniques have desirable characteristics and their use can be highly effective within the field of computational biology.