1 Multi-Task Semi-Supervised Underwater Mine Detection Lawrence Carin, Qiuhua Liu and Xuejun Liao Duke University Jason Stack Office of Naval Research.

Intra-Scene Context

What the Analyst Does vs. What Supervised Classifiers Do: individual signatures are processed by supervised classifiers. Message: the analyst places the classification of any given item within the context of all items in the scene, whereas a supervised classifier classifies each item in isolation.

Decision surface based on labeled data only (supervised) vs. decision surface based on both labeled and unlabeled data (semi-supervised)
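As a toy illustration of how unlabeled data can move the decision surface, the sketch below runs a simple graph-based label propagation with only one labeled example per class; the data points, kernel width, and iteration count are made-up values for the sketch, not anything from the talk:

```python
import numpy as np

def propagate_labels(X, y, sigma=1.0, iters=200):
    """Graph-based label propagation: labeled points stay clamped,
    unlabeled points (y < 0) absorb labels from their graph neighbours."""
    d2 = (X[:, None] - X[None, :]) ** 2
    W = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian affinity graph
    A = W / W.sum(1, keepdims=True)          # one-step transition matrix
    f = np.zeros((len(X), 2))
    labeled = y >= 0
    f[labeled, y[labeled]] = 1.0             # one-hot scores for labeled points
    for _ in range(iters):
        f = A @ f                            # diffuse label mass along the graph
        f[labeled] = 0.0
        f[labeled, y[labeled]] = 1.0         # re-clamp the labeled data
    return f.argmax(1)

# Two 1-D clusters; only the two endpoints carry labels (-1 = unlabeled).
X = np.array([0.0, 1.0, 2.0, 3.0, 7.0, 8.0, 9.0, 10.0])
y = np.array([0, -1, -1, -1, -1, -1, -1, 1])
pred = propagate_labels(X, y)
```

The unlabeled points inherit the label of the cluster they sit in, so the implied boundary falls in the low-density gap between the clusters rather than being fixed by the two labeled points alone.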

Inter-Scene Context

8 Message
 Humans are very good at exploiting context, both within a given scene and across multiple scenes
 Intra-scene context: semi-supervised learning
 Inter-scene context: multi-task and transfer learning
 Both are a major focus of machine learning these days

9 Data Manifold Representation Based on Markov Random Walks

Given X = {x_1, …, x_N}, first construct a graph G = (X, W) with affinity matrix W, where the (i, j)-th element of W is defined by a Gaussian kernel:

    W_ij = exp( -||x_i - x_j||^2 / (2 sigma^2) )

We then consider a Markov transition matrix A, which defines a Markov random walk; its (i, j)-th element

    A_ij = W_ij / sum_k W_ik

gives the probability of walking from x_i to x_j in a single step. The one-step Markov random walk provides a local similarity measure between data points.
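The affinity matrix W and the one-step transition matrix A above can be built in a few lines; this is a minimal sketch in which the kernel width sigma and the toy points are placeholders, not values from the talk:

```python
import numpy as np

def markov_random_walk_matrix(X, sigma=1.0):
    """Build the Gaussian affinity W and the row-stochastic
    one-step Markov transition matrix A over the points in X."""
    # Pairwise squared Euclidean distances between all points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))      # Gaussian kernel affinity
    A = W / W.sum(axis=1, keepdims=True)      # A_ij = W_ij / sum_k W_ik
    return W, A

# Two nearby points and one distant point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
W, A = markov_random_walk_matrix(X, sigma=1.0)
# Each row of A is a probability distribution over one-step destinations;
# walking to a nearby point is far more likely than to a distant one.
```

Row-normalizing W is what turns the symmetric affinity into a (generally non-symmetric) random-walk transition matrix.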

10 Semi-Supervised Multitask Learning (1/2)

Semi-supervised MTL: given M partially labeled data manifolds, each defining a classification task, we propose a unified sharing structure to learn the M classifiers simultaneously.

The sharing prior: we consider M PNBC classifiers, parameterized by theta_1, …, theta_M. The M classifiers are not independent but are coupled by a joint prior distribution, given on the next slide.

11 Semi-Supervised Multitask Learning (2/2)

With

    p(theta_m | theta_1, …, theta_{m-1}) = (1 - lambda) p_0(theta_m) + lambda * (1/(m-1)) sum_{j<m} N(theta_m; theta_j, sigma^2 I)

where p_0 is the baseline prior, the sum of normals is the prior transferred from previous tasks, and lambda is the balance parameter.

 The normal distributions encode the meta-knowledge of how the present task should be learned, based on the experience with previous tasks.
 When there are no previous tasks (m = 1), only the baseline prior is used => PNBC.
 Sharing encourages tasks to be similar, but not exactly the same (an advantage over the Dirac delta function used in previous MTL work).
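The structure of this sharing prior, a balance-weighted mixture of a broad baseline prior and Gaussians centred on previously learned task parameters, can be sketched as a log-density. This is a schematic of the structure described on the slide, not the paper's exact density; the balance weight lam, the kernel widths, and the Gaussian baseline are assumptions of the sketch:

```python
import numpy as np

def task_prior_logpdf(theta_m, prev_thetas, lam=0.5, sigma=1.0):
    """Schematic log-density of the sharing prior for task m:
    (1 - lam) * baseline + lam * mixture of Gaussians centred on
    the parameters of previously learned tasks."""
    def log_gauss(x, mu, s):
        return (-0.5 * np.sum((x - mu) ** 2) / s ** 2
                - 0.5 * len(x) * np.log(2 * np.pi * s ** 2))

    baseline = log_gauss(theta_m, np.zeros_like(theta_m), 10.0)  # broad baseline prior
    if not prev_thetas:
        # m = 1: no previous tasks, so only the baseline prior is used (=> PNBC).
        return baseline
    transfer = [log_gauss(theta_m, t, sigma) for t in prev_thetas]
    transfer = np.logaddexp.reduce(transfer) - np.log(len(prev_thetas))
    # lam balances the baseline prior against knowledge transferred from
    # previous tasks: soft sharing, not the Dirac delta of earlier MTL work.
    return np.logaddexp(np.log(1 - lam) + baseline, np.log(lam) + transfer)

theta = np.array([0.2, -0.1])
lp_first = task_prior_logpdf(theta, [])                       # task 1: baseline only
lp_later = task_prior_logpdf(theta, [np.array([0.2, -0.1])])  # near a previous task
```

Because the transferred component is a finite-width Gaussian rather than a Dirac delta, parameters close to (but not equal to) a previous task's parameters still receive high prior mass.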


Thanks