Hyperdimensional Biosignal Processing: A Case Study for EMG−based Hand Gesture Recognition
Abbas Rahimi†, Simone Benatti‡, Pentti Kanerva†, Luca Benini‡*, Jan M. Rabaey†
†UC Berkeley, ‡University of Bologna, and *ETH Zurich

Outline
- Background in HD Computing
- EMG−based Hand Gesture Recognition
- Embedded Platform for EMG Acquisition
- Mapping EMG Signals to HD Vectors
- Spatiotemporal HD Encoding
- Experimental Results

Brain-inspired Hyperdimensional Computing
Hyperdimensional (HD) computing: emulation of cognition by computing with high-dimensional vectors, as opposed to computing with numbers.
- Information is distributed in a high-dimensional space
- Supports a full algebra
Superb properties:
- General and scalable model of computing
- Well-defined set of arithmetic operations
- Fast, one-shot learning (no need for back-propagation)
- Memory-centric with embarrassingly parallel operations
- Extremely robust against most failure mechanisms and noise
[P. Kanerva, An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors, 2009]

What Are Hypervectors?
Patterns (mapped to hypervectors) are the basic data representation, in contrast to computing with numbers!
Hypervectors are:
- high-dimensional (e.g., 10,000 dimensions)
- (pseudo)random with i.i.d. components
- holographically distributed (i.e., not microcoded)
Hypervectors can:
- use various codings: dense or sparse, bipolar or binary
- be combined using arithmetic operations: multiplication, addition, and permutation
- be compared for similarity using distance metrics

Mapping to Hypervectors
Each symbol is represented by a 10,000−D hypervector chosen at random:
A = [−1 +1 −1 −1 −1 +1 −1 −1 ...]
B = [+1 −1 +1 +1 +1 −1 +1 −1 ...]
C = [−1 −1 −1 +1 +1 −1 +1 −1 ...]
D = [+1 −1 −1 +1 −1 +1 +1 −1 ...]
...
Z = [−1 −1 +1 −1 +1 +1 +1 −1 ...]
Every letter hypervector is dissimilar to the others, e.g., ⟨A, B⟩ = 0.
This assignment is fixed throughout the computation.
[Figure: the item memory (iM) maps an 8−bit symbol, e.g., “a”, to its 10,000−D hypervector A.]
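As a minimal sketch of such an item memory (not the authors’ implementation; the helper names are ours), the following draws random bipolar hypervectors and checks that unrelated symbols are quasi-orthogonal:

```python
import numpy as np

D = 10_000                      # hypervector dimensionality
rng = np.random.default_rng(0)  # fixed seed for reproducibility

def random_hv():
    """Random bipolar hypervector with i.i.d. +1/-1 components."""
    return rng.choice((-1, 1), size=D)

def sim(x, y):
    """Normalized dot product in [-1, 1]; ~0 for unrelated hypervectors."""
    return float(x @ y) / D

# Item memory (iM): one fixed random hypervector per symbol, assigned
# once and kept throughout the computation.
iM = {ch: random_hv() for ch in "abcdefghijklmnopqrstuvwxyz"}

print(round(sim(iM["a"], iM["b"]), 2))  # ~0.0: quasi-orthogonal
print(round(sim(iM["a"], iM["a"]), 2))  # 1.0: identical
```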

HD Arithmetic
- Addition (+) is good for representing sets, since the sum vector is similar to its constituent vectors: ⟨A+B, A⟩ = 0.5
- Multiplication (*) is good for binding, since the product vector is dissimilar to its constituent vectors: ⟨A*B, A⟩ = 0
- Permutation (ρ) makes a dissimilar vector by rotating the coordinates, which makes it good for representing sequences: ⟨A, ρA⟩ = 0
- * and ρ are invertible and preserve distance.
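A small sketch of these three operations and their similarity behavior (a plain NumPy illustration under the bipolar coding above; the sign thresholding after addition and the use of cyclic rotation as the permutation are our assumptions):

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
A, B = rng.choice((-1, 1), size=D), rng.choice((-1, 1), size=D)
sim = lambda x, y: float(x @ y) / D

# Addition (bundling): the thresholded sum stays similar to each input.
bundle = np.sign(A + B + rng.choice((-1, 1), size=D))  # random tie-breaker
print(round(sim(bundle, A), 2))                        # ~0.5

# Multiplication (binding): the product is dissimilar to its inputs...
print(round(sim(A * B, A), 2))                         # ~0.0
# ...but invertible, since each bipolar component squares to 1.
assert np.array_equal((A * B) * B, A)

# Permutation (rho, here a cyclic rotation): dissimilar yet invertible
# and distance-preserving, hence good for encoding sequence order.
print(round(sim(np.roll(A, 1), A), 2))                 # ~0.0
assert np.array_equal(np.roll(np.roll(A, 1), -1), A)
```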

Computing a Profile Using HD Arithmetic
A trigram (3−letter sequence) is represented by a 10,000−D hypervector computed from its letter vectors with permutation and multiplication.

Example: “eat” → ρρE * ρA * T
E     = −1 −1 −1 +1 +1 −1 +1 −1 ...
A     = −1 +1 −1 −1 −1 +1 −1 −1 ...
T     = −1 −1 +1 −1 +1 +1 +1 −1 ...
──────────────────────────────────
“eat” = +1 +1 −1 −1 +1 +1 ...

Example: “ate” → ρρA * ρT * E
A     = −1 +1 −1 −1 −1 +1 −1 −1 ...
T     = −1 −1 +1 −1 +1 +1 +1 −1 ...
E     = −1 −1 −1 +1 +1 −1 +1 −1 ...
──────────────────────────────────
“ate” = −1 +1 −1 +1 −1 +1 ...

Application                                 | N−grams   | HD    | Baseline
Language identification [QI’16, ISLPED’16] | N=3       | 96.7% | 97.9%
Text categorization [DATE’16]               | N=5       | 94.2% | 86.4%
EMG gesture recognition [ICRC’16]           | N={3,4,5} | 97.8% | 89.7%
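A sketch of this trigram encoding (our own illustrative code, reusing the item memory and rotation-based permutation from the previous sketches; `ngram_hv` and `profile` are assumed names):

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)
iM = {ch: rng.choice((-1, 1), size=D) for ch in "abcdefghijklmnopqrstuvwxyz"}

def ngram_hv(gram):
    """rho^(n-1)(first) * ... * rho(second-to-last) * (last);
    for "eat" this is rho(rho(E)) * rho(A) * T."""
    n = len(gram)
    out = np.ones(D, dtype=int)
    for i, ch in enumerate(gram):
        out = out * np.roll(iM[ch], n - 1 - i)
    return out

def profile(text, n=3):
    """Text profile: the sum of all of its n-gram hypervectors."""
    return sum(ngram_hv(text[i:i + n]) for i in range(len(text) - n + 1))

sim = lambda x, y: float(x @ y) / D
# Permutation encodes order, so anagrams get unrelated hypervectors:
print(round(sim(ngram_hv("eat"), ngram_hv("ate")), 2))  # ~0.0
```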

Outline
- Background in HD Computing
- EMG−based Hand Gesture Recognition
- Embedded Platform for EMG Acquisition
- Mapping EMG Signals to HD Vectors
- Spatiotemporal HD Encoding
- Experimental Results

Embedded Platform for Electromyography (EMG)
[Figure: block diagram of a versatile embedded platform [TBCAS’15]; placement of EMG electrodes on the subjects’ forearms.]
- Four EMG sensors
- Sampled at 1 kHz
- Max amplitude: 20 mV
- Five gestures: {closed hand, open hand, 2−finger pinch, point index, rest}

Experimental Setup
[Figure: two flows over the same EMG dataset; in each, the data is segmented and labelled, 25% of the dataset is used for training and the rest for testing, with gestures labelled 1–5.]
- SVM flow (baseline): training dataset → SVM training algorithm → SVM model; test dataset → SVM classification algorithm.
- Proposed HD flow: training dataset → HDC encoder → associative memory; test dataset → encoded into a query GV and compared against the associative memory.

Signal Partitioning for Encoding
[Figure: a labelled “closed hand” EMG segment split into records R1–R5 across the labelled channels; spatial encoding combines the channels at each time step, and temporal encoding combines consecutive records, e.g., as a pentagram (5−gram).]

Mapping to HD Space
- The item memory (iM) maps channels to orthogonal hypervectors:
⟨iM(CH1), iM(CH2)⟩ = 0, ⟨iM(CH2), iM(CH3)⟩ = 0, ⟨iM(CH3), iM(CH4)⟩ = 0
- The continuous item memory (CiM) maps quantities continuously to hypervectors (Q: 21 levels):
⟨CiM(0), CiM(1)⟩ = 0.95
⟨CiM(0), CiM(2)⟩ = 0.90
⟨CiM(0), CiM(3)⟩ = 0.85
⟨CiM(0), CiM(4)⟩ = 0.80
...
⟨CiM(0), CiM(20)⟩ = 0
[Figure: each channel CHi is looked up in the iM as iM(CHi), and its signal level SCHi[t] in the CiM as CiM(SCHi[t]).]
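One standard way to realize such a CiM (a sketch consistent with the similarity pattern above, though not necessarily the authors’ exact construction) is to start from a random hypervector for level 0 and cumulatively flip a fresh block of D/(2·(Q−1)) components per level, so similarity decays linearly and the two extreme levels end up orthogonal:

```python
import numpy as np

D, Q = 10_000, 21  # dimensionality, quantization levels
rng = np.random.default_rng(3)

def make_cim(d=D, q=Q):
    """Continuous item memory: levels i and j get similarity
    1 - |i - j|/(q - 1), so <CiM(0), CiM(1)> = 0.95 and
    <CiM(0), CiM(20)> = 0, as on the slide."""
    flips = d // (2 * (q - 1))     # 10,000 / 40 = 250 flips per level
    order = rng.permutation(d)     # disjoint flip positions across levels
    levels = [rng.choice((-1, 1), size=d)]
    for i in range(1, q):
        hv = levels[-1].copy()
        hv[order[(i - 1) * flips : i * flips]] *= -1
        levels.append(hv)
    return levels

cim = make_cim()
sim = lambda x, y: float(x @ y) / D
print(round(sim(cim[0], cim[1]), 2))   # 0.95
print(round(sim(cim[0], cim[4]), 2))   # 0.80
print(round(sim(cim[0], cim[20]), 2))  # 0.00
```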

Spatiotemporal Encoding
Spatial encoder:
- Bind each channel to its signal level: iM(CHi) * CiM(SCHi[t])
- Generate a holistic record across the 4 channels by addition: R[t] = Σi iM(CHi) * CiM(SCHi[t])
Temporal encoder:
- Rotate (ρ) records to capture sequences: N−gram[t] = R[t] * ρ(R[t−1]) * ρ2(R[t−2]) * ... * ρN−1(R[t−N+1])
- Accumulate N−grams into the gesture vector: GV(Label[t]) += N−gram[t]
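Putting the pieces together, a sketch of the spatiotemporal encoder (our illustrative code: `spatial_record`, `ngram`, and `train_gv` are assumed names, `cim` is the 21-level memory from the previous sketch, and the sign thresholding after addition is our assumption):

```python
import numpy as np

D, N_CH = 10_000, 4  # dimensionality, number of EMG channels
rng = np.random.default_rng(4)
ch_hv = [rng.choice((-1, 1), size=D) for _ in range(N_CH)]  # iM(CH1..CH4)

def spatial_record(levels, cim):
    """R[t]: bind each channel to its quantized signal level, add across
    the 4 channels, and threshold back to bipolar (random tie-break)."""
    r = sum(ch_hv[i] * cim[lv] for i, lv in enumerate(levels))
    return np.where(r == 0, rng.choice((-1, 1), size=D), np.sign(r))

def ngram(records):
    """N-gram[t] = R[t] * rho(R[t-1]) * ... * rho^(N-1)(R[t-N+1])."""
    out = np.ones(D, dtype=int)
    for k, r in enumerate(reversed(records)):  # newest record rotated least
        out = out * np.roll(r, k)
    return out

def train_gv(quantized_samples, cim, n=5):
    """GV(label) += N-gram[t], accumulated over a labeled gesture segment."""
    recs = [spatial_record(s, cim) for s in quantized_samples]
    gv = np.zeros(D, dtype=int)
    for t in range(n - 1, len(recs)):
        gv += ngram(recs[t - n + 1 : t + 1])
    return gv
```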

Person-to-person Differences
HDC achieves up to 100% accuracy (on average 8.1% higher than SVM) with equal training data!

Adaptive Encoder
How can we robustly reuse the encoder across different test subjects?
- Train with the best N−gram size.
- Adaptively tune the N−gram size in the encoder, based on the patterns stored in the associative memory (AM), using feedback.
[Figure: control loop. Plant: the encoder (spatial encoding → temporal encoding) turns the EMG channels into a query GV. Measurement: cosine similarity of the query GV against GV(Label=1) ... GV(Label=5) in the AM, with the label chosen by argmax of cosine similarity. Actuation: change N.]
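A sketch of the AM lookup and the feedback loop (illustrative only: `encode` stands for the spatiotemporal encoder above applied to a test window with a given N, and the simple try-all-N policy is our stand-in for the controller):

```python
import numpy as np

def cosine(x, y):
    return float(x @ y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)

def classify(query_gv, am):
    """AM lookup: return the label whose stored GV has the highest
    cosine similarity to the query GV, plus that similarity."""
    label = max(am, key=lambda l: cosine(query_gv, am[l]))
    return label, cosine(query_gv, am[label])

def adaptive_classify(encode, window, am, candidates=(3, 4, 5)):
    """Controller sketch: when similarity is low (tested N != trained N),
    re-encode the window with other N-gram sizes and keep the best."""
    best = (None, -1.0, None)
    for n in candidates:
        label, s = classify(encode(window, n), am)
        if s > best[1]:
            best = (label, s, n)
    return best  # (predicted label, similarity, chosen N)
```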

Similarity Is VERY Low for Tested ≠ Trained N−grams

Accuracy with Overlapping Gestures
What if the classification window contains multiple gestures?
Peak accuracy is maintained with up to 30% overlap between two gestures.

HDC Learns Fast
- 97.8% accuracy with only 1/3 the training data required by a state-of-the-art SVM.
- Increasing the SVM’s training data from 10% to 80% increases the number of support vectors from 30 to 155 (higher execution cost).

Summary
- Simple vector−space operations encode analog input signals for classification.
- Compared to a state-of-the-art SVM: a high level of accuracy (97.8%) with only 1/3 the training data.
- The HD encoder adjusts to variations in gesture timing across different subjects.
- Tolerates 30% overlap between two neighboring gestures.
- Next: online and continuous learning!

Acknowledgment This work was supported in part by Systems on Nanoscale Information fabriCs (SONIC), one of the six SRC STARnet Centers, sponsored by MARCO and DARPA.