SUNY Korea BioData Mining Lab - Journal Review
Presenter:
Type: Observation
Title: Trans-species learning of cellular signaling systems with bimodal deep belief networks. Lujia Chen et al., Bioinformatics, 31(18), 2015, 3008–3015.
Problem: Model organisms play critical roles in biomedical research of human diseases and in drug development. An imperative task is to translate information and knowledge acquired from model organisms to humans.
Challenges: This study addresses a trans-species learning problem: predicting human cell responses to diverse stimuli based on the responses of rat cells treated with the same stimuli.
Key Words: systems biology, cellular signaling systems, bimodal deep belief networks
Solution Approach: We hypothesized that rat and human cells share a common signal-encoding mechanism but employ different proteins to transmit signals, and we developed a bimodal deep belief network (bDBN) and a semi-restricted bimodal deep belief network (sbDBN) to represent the common encoding mechanism and perform trans-species learning. These "deep learning" models include hierarchically organized latent variables capable of capturing the statistical structures in the observed proteomic data in a distributed fashion.
Results: The models significantly outperform two current state-of-the-art classification algorithms. Our study demonstrates the potential of using deep hierarchical models to simulate cellular signaling systems.
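For readers unfamiliar with the building block of these models, the following is a minimal numpy sketch of a single restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), the layer-wise procedure typically used to pre-train DBNs. The layer sizes, learning rate, and the toy binarized "proteomic" inputs are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary restricted Boltzmann machine (the building block of a DBN)."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        # Small random weights; biases start at zero.
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_update(self, v0):
        """One step of contrastive divergence on a batch of visible vectors."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)   # reconstruction
        h1 = self.hidden_probs(v1)
        batch = v0.shape[0]
        # Positive minus negative phase statistics.
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        return np.mean((v0 - v1) ** 2)        # reconstruction error

# Toy usage: binarized measurements for two species concatenated as one
# visible layer (hypothetical data, for illustration only).
rat = (rng.random((64, 20)) > 0.5).astype(float)
human = (rng.random((64, 20)) > 0.5).astype(float)
v = np.concatenate([rat, human], axis=1)
rbm = RBM(n_visible=40, n_hidden=16)
for epoch in range(50):
    err = rbm.cd1_update(v)
print("final reconstruction error:", round(float(err), 4))
```

In the bimodal models described in the paper, rat and human measurements act as two modalities joined by shared latent layers; the sketch above only illustrates the unsupervised training step that each layer of such a stack undergoes.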
Representative Figure(s): trans-species learning task specification; restricted Boltzmann machine; 4-layer DBN; semi-restricted Boltzmann machine; training DBN models; conventional bimodal DBN (bDBN); 4-layer sbDBN with an additional activated regulatory edge between signaling proteins.
Presenter:
Type: Data/Method/Observation/Other
Title: Classifying and segmenting microscopy images with deep multiple instance learning. Oren Z. Kraus et al., Bioinformatics, 32, 2016, i52–i59.
Problem: High-content screening (HCS) technologies have enabled large-scale imaging experiments for studying cell biology and for drug screening. These systems produce hundreds of thousands of microscopy images per day, and their utility depends on automated image analysis.
Challenges: Deep learning approaches that learn feature representations directly from pixel intensity values have recently dominated object recognition challenges. Those tasks typically have a single centered object per image, so existing models are not directly applicable to microscopy datasets.
Key Words: deep convolutional neural networks (CNNs), multiple instance learning (MIL), high-content screening (HCS)
Solution Approach: Here we develop an approach that combines deep convolutional neural networks (CNNs) with multiple instance learning (MIL) in order to classify and segment microscopy images using only whole-image-level annotations. We base our approach on the similarity between the aggregation function used in MIL and the pooling layers used in CNNs. To facilitate aggregating across large numbers of instances in CNN feature maps, we present the Noisy-AND pooling function, a new MIL operator that is robust to outliers. Combining CNNs with MIL enables training CNNs on whole microscopy images with image-level labels.
Results: We introduce a new neural network architecture that uses MIL to simultaneously classify and segment microscopy images with populations of cells. We show that training end-to-end MIL CNNs outperforms several previous methods on both mammalian and yeast datasets without requiring any segmentation steps.
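To illustrate how a MIL pooling operator turns per-location class probabilities from a CNN feature map into a single image-level probability, here is a minimal numpy sketch of a Noisy-AND-style pooling function. The slope parameter a, the threshold b, and the toy instance probabilities are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_and_pool(instance_probs, a=10.0, b=0.5):
    """Noisy-AND-style MIL pooling.

    instance_probs: per-instance (per feature-map location) class
        probabilities for one image, shape (n_instances,).
    a: slope controlling how sharply the output switches on.
    b: adaptable threshold on the mean instance probability.
    Returns an image-level probability in [0, 1].
    """
    p_mean = instance_probs.mean()
    num = sigmoid(a * (p_mean - b)) - sigmoid(-a * b)
    den = sigmoid(a * (1.0 - b)) - sigmoid(-a * b)
    return num / den

# Toy example: a feature map with mostly background locations and a few
# strongly activated cell locations.
probs = np.concatenate([np.full(95, 0.05), np.full(5, 0.95)])
print("image-level probability:",
      round(float(noisy_and_pool(probs, a=10.0, b=0.1)), 3))
```

Because the output depends on the mean instance probability passed through an adaptable threshold, a handful of spurious high-probability locations does not immediately drive the image-level prediction to one, which is the robustness to outliers that motivates the operator.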
Representative Figure(s)
Presenter:
Type: Data/Method/Observation/Other
Title: Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. Yonghui Wu et al., arXiv v1 [cs.CL], 26 Sep 2016 (submitted to Nature on 26 September).
Problem: Neural Machine Translation (NMT) is an end-to-end learning approach for automated translation, with the potential to overcome many of the weaknesses of conventional phrase-based translation systems.
Challenges: Unfortunately, NMT systems are known to be computationally expensive both in training and in translation inference, sometimes prohibitively so for very large data sets and large models. Several authors have also charged that NMT systems lack robustness, particularly when input sentences contain rare words. These issues have hindered NMT's use in practical deployments and services, where both accuracy and speed are essential.
Key Words: Neural Machine Translation (NMT), recurrent neural networks (RNNs)
Solution Approach: In this work, we present GNMT, Google's Neural Machine Translation system, which attempts to address many of these issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder layers, using residual connections as well as attention connections from the decoder network to the encoder. To improve parallelism and therefore decrease training time, our attention mechanism connects the bottom layer of the decoder to the top layer of the encoder. To accelerate the final translation speed, we employ low-precision arithmetic during inference computations. To improve handling of rare words, we divide words into a limited set of common sub-word units ("wordpieces") for both input and output.
Results: The wordpiece method provides a good balance between the flexibility of character-delimited models and the efficiency of word-delimited models, naturally handles translation of rare words, and ultimately improves the overall accuracy of the system. Our beam search technique employs a length-normalization procedure and a coverage penalty, which encourages generation of an output sentence that covers all the words in the source sentence. To directly optimize the translation BLEU scores, we considered refining the models with reinforcement learning, but we found that the improvement in BLEU scores was not reflected in the human evaluation. On the WMT'14 English-to-French and English-to-German benchmarks, GNMT achieves results competitive with the state of the art. In a human side-by-side evaluation on a set of isolated simple sentences, it reduces translation errors by an average of 60% compared with Google's phrase-based production system.
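To make the beam-search refinements concrete, below is a minimal Python sketch of a length-normalized, coverage-penalized hypothesis score in the spirit of the procedure described above. The parameter names alpha and beta, their default values, and the toy attention matrices are illustrative assumptions.

```python
import math
from typing import List

def beam_score(log_prob: float,
               attention: List[List[float]],
               target_len: int,
               alpha: float = 0.2,
               beta: float = 0.2) -> float:
    """Score one candidate translation for beam search.

    log_prob: sum of per-token log-probabilities of the candidate.
    attention: attention[i][j] = attention weight that target token j
        puts on source token i (rows: source tokens, cols: target tokens).
    target_len: number of tokens in the candidate translation.
    alpha: length-normalization strength; beta: coverage-penalty strength.
    """
    # Length normalization: divide by a length term so short hypotheses
    # do not win simply because they accumulate fewer log-prob terms.
    lp = ((5.0 + target_len) ** alpha) / ((5.0 + 1.0) ** alpha)

    # Coverage penalty: a source token receiving little total attention
    # contributes a large negative log term, penalizing the hypothesis.
    cp = 0.0
    for source_row in attention:
        cp += math.log(min(sum(source_row), 1.0))
    cp *= beta

    return log_prob / lp + cp

# Toy usage: two candidates with the same model log-probability, but the
# second one leaves the last source token almost unattended.
full_cover = [[0.9, 0.1], [0.1, 0.9]]
poor_cover = [[0.9, 0.9], [0.1, 0.05]]
print(beam_score(-4.0, full_cover, target_len=2))
print(beam_score(-4.0, poor_cover, target_len=2))
```

The division by the length term keeps short hypotheses from winning merely because they have fewer negative log-probability terms, while the coverage term prefers hypotheses whose attention accounts for every source token.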
Representative Figure(s): GNMT architecture with encoder network, attention module, and decoder network; normal stacked LSTM versus stacked LSTM with residual connections; stacked LSTM with bidirectional connections in the bottom encoder layer.
Representative Figure(s)
BioDM Lab Take-ins: We can utilize.
*Notes: