Reconstruction with Adaptive Feature-Specific Imaging Jun Ke 1 and Mark A. Neifeld 1,2 1 Department of Electrical and Computer Engineering, 2 College of Optical Sciences University of Arizona Frontiers in Optics 2007

Outline
 Motivation for FSI and adaptation.
 Adaptive FSI using PCA/Hadamard features.
 Adaptive FSI in noise.
 Conclusion.

Motivation - FSI
Reconstruction with feature-specific imaging (FSI):
FSI benefits:
 Lower hardware complexity
 Smaller equipment size/weight
 Higher measurement SNR
 Higher data acquisition rate
 Lower operating bandwidth
 Lower power consumption
[Figure: sequential architecture (imaging optics and a DMD collect light onto a single detector, applying one feature projection vector at a time) and parallel architecture (an LCD implements the NxM reconstruction matrix G, mapping the object to its reconstruction).]
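The FSI measurement model above can be sketched in a few lines of numpy: the system measures linear projections (features) of the object rather than its pixels, and a linear operator maps the features back to an object estimate. This is an illustrative sketch only; random projection vectors stand in for the optically implemented ones, and a pseudoinverse stands in for the reconstruction matrix G.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 32 * 32   # object pixels (32x32, as in the simulations below)
M = 100       # number of features, M << N

x = rng.random(N)                  # vectorized object
F = rng.standard_normal((M, N))    # rows are projection vectors f_i
y = F @ x                          # feature measurements y_i = f_i^T x
G = np.linalg.pinv(F)              # N x M linear reconstruction matrix
x_hat = G @ y                      # object estimate from M << N features
```

The single-detector architecture acquires the M entries of y one at a time, which is what makes adapting later projection vectors to earlier measurements possible.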

Motivation - Adaptation
 The design of the projection vectors affects reconstruction quality.
 Feature measurements are acquired sequentially.
 The acquired feature measurements and the training data are used to adapt the next projection vector.
 Principal Component Analysis (PCA) projections are used as an example.
[Figure: static PCA applies fixed projection axes 1 and 2 to the testing sample; adaptive PCA uses the first projection value to select the training samples used to compute the 2nd projection vector, improving the reconstruction.]

Adaptive FSI (AFSI) - PCA:
Notation:
 i: adaptive step index
 A_i: i-th training set
 R_i: autocorrelation matrix of A_i
 f_i: dominant eigenvector of R_i
 y_i: feature value measured with f_i, y_i = f_i^T x
 K^(i): number of samples retained for A_{i+1}
Adaptation loop (computational optics): calculate R_1 from A_1 and its dominant eigenvector f_1; measure y_i = f_i^T x; update A_i to A_{i+1} according to y_i; calculate R_{i+1} and f_{i+1}; finally reconstruct the object estimate.
 High diversity of the training data helps adaptation.
[Figure: the K^(1) training samples nearest the testing sample along projection axis 1 are selected according to the 1st feature; the K^(2) nearest samples are then selected according to the 2nd feature.]
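One AFSI-PCA adaptation step can be sketched as below. This is a hedged illustration, not the authors' code: the function name and the rule of keeping the K samples whose feature values lie nearest the measurement are assumptions consistent with the slide's description.

```python
import numpy as np

def adaptive_pca_step(A, f, y, K):
    """One AFSI-PCA step: keep the K training samples whose feature
    values are nearest the measured y_i, then recompute the dominant
    eigenvector of the new autocorrelation matrix."""
    proj = A @ f                               # feature values of training samples
    nearest = np.argsort(np.abs(proj - y))[:K]
    A_next = A[nearest]                        # shrunken training set A_{i+1}
    R_next = A_next.T @ A_next / K             # autocorrelation matrix R_{i+1}
    eigvals, eigvecs = np.linalg.eigh(R_next)
    f_next = eigvecs[:, -1]                    # dominant eigenvector -> f_{i+1}
    return A_next, f_next
```

Starting from R_1 computed over the full training set A_1, repeated application of this step produces the sequence f_1, f_2, ... used to measure the object.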

PCA-Based AFSI (simulation setup)
 Object examples: 32x32.
 Number of training objects: 100,000.
 Number of testing objects: 60.
 Feature measurements: y_i = f_i^T x, i = 1, ..., M, where M is the total number of features.
 Reconstruction quality is measured by the RMSE between the object and the reconstructed object.
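The RMSE metric used throughout can be written as below; the slide's formula did not survive extraction, so this standard per-pixel definition is an assumption.

```python
import numpy as np

def rmse(x, x_hat):
    """Per-pixel root-mean-square reconstruction error."""
    return np.sqrt(np.mean((x - x_hat) ** 2))
```

In the experiments this is averaged over the 60 testing objects for each number of features M.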

AFSI - PCA results:
 RMSE decreases as more features are used.
 RMSE is lower for AFSI compared to static FSI.
 The improvement is larger for high-diversity data.
 The RMSE improvement is 33% and 16% for high- and low-diversity training data, respectively, at M = 250.
[Plot: RMSE versus iteration index i, improving as K^(i) decreases, with example reconstructions from static FSI and from AFSI at i = 100.]

AFSI - Hadamard:
 The implementation order of the projection vectors is adapted.
 The sample mean of training set A_i is <A_i>.
 y_j = f_j^T <A_i>, j = 1, ..., M.
 max{y_j} corresponds to the dominant Hadamard projection vector.
[Figure: the first 5 Hadamard basis vectors; static FSI applies them in a fixed order, while AFSI selects the K^(1) samples nearest the testing sample along projection axis 1 according to the 1st feature, then the K^(2) nearest samples according to the 2nd feature, recomputing the sample mean each time.]

AFSI - Hadamard (continued):
 L: number of features in each adaptive step.
 <A_i>: sample mean of A_i.
 f_i: i-th Hadamard vector for A_i.
Adaptation loop (computational optics): sort the Hadamard basis vectors and choose f_1 ... f_L; measure y_{iL+j} = f_{iL+j}^T x (j = 1, ..., L); update A_i to A_{i+1} according to y_{iL+j}; re-sort and choose f_{iL+1} ... f_{(i+1)L}; finally reconstruct the object estimate.
[Figure: the K^(1) samples nearest the testing sample are selected according to the first 2 features.]
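The adapted ordering of the Hadamard vectors can be sketched as below. The Sylvester construction of the Hadamard matrix and the function names are illustrative; sorting by |y_j| = |f_j^T <A_i>| follows the slide's max{y_j} rule.

```python
import numpy as np

def hadamard_matrix(n):
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def order_hadamard_vectors(sample_mean):
    """Sort the Hadamard projection vectors so that those with the
    largest |f_j^T <A_i>| come first; the next L features use the
    leading rows of the returned matrix."""
    H = hadamard_matrix(sample_mean.size)
    y = H @ sample_mean               # y_j = f_j^T <A_i>
    return H[np.argsort(-np.abs(y))]
```

After each adaptation step the training set shrinks, its sample mean changes, and the ordering is recomputed before choosing the next L projection vectors.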

AFSI - Hadamard results:
 RMSE is lower for AFSI compared with static FSI.
 The RMSE improvement is 32% and 18% for high- and low-diversity training data, respectively, at M = 500 and L = 10.
 AFSI achieves smaller RMSE with small L when M is also small.
 AFSI achieves smaller RMSE with large L when M is also large.
[Plot: RMSE versus number of features M = Li for decreasing K^(i) and for several values of L, with example reconstructions from adaptive and static FSI.]

Hadamard-Based AFSI - Noise
AFSI - Hadamard:
 Hadamard projection is used because of its good reconstruction performance.
 Feature measurements are de-noised before being used for adaptation.
 A Wiener operator is used for object reconstruction.
 The autocorrelation matrix is updated in each adaptation step.
 Noise model: y_{iL+j} = f_{iL+j}^T x + n_{iL+j} (j = 1, ..., L), with integration time T, σ_0² = 1, and detector noise variance σ² = σ_0²/T.
Adaptation loop (computational optics): sort the Hadamard bases and choose f_1 ... f_L; measure and de-noise y_{iL+j}; update A_i to A_{i+1} according to the de-noised features; calculate R_i for A_i; finally reconstruct the object estimate.
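The Wiener reconstruction from noisy features y = F x + n can be sketched as below. This uses the standard LMMSE form assuming zero-mean statistics; the slide's separate de-noising step is omitted, and the function name is illustrative.

```python
import numpy as np

def wiener_reconstruct(F, y, R_x, sigma2):
    """LMMSE (Wiener) object estimate for y = F x + n, n ~ N(0, sigma2 I):
    x_hat = R_x F^T (F R_x F^T + sigma2 I)^{-1} y, where the detector
    noise variance is sigma2 = sigma0^2 / T for integration time T."""
    M = F.shape[0]
    W = R_x @ F.T @ np.linalg.inv(F @ R_x @ F.T + sigma2 * np.eye(M))
    return W @ y
```

Updating R_x from the shrinking training set A_i in each adaptation step is what gives the "adapted R_x" curves their additional RMSE reduction in the results below.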

Frontiers in Optics 2007  RMSE in AFSI is smaller than in static FSI  RMSE is reduced further by modifying R x in each adaptation step  RMSE improvement is larger using small L when M is also small  RMSE is small using large L when M is also large Hadamard-Based AFSI – Noise High diversity training data; σ 0 2 = 1 K (i) decreases L decreases L increases High diversity training data; σ 0 2 = 1 AFSI – fixed R x AFSI – adapted R x Static FSI

Hadamard-Based AFSI - Noise (integration-time trade-off)
 T: integration time per feature; M_0: the number of features.
 Total feature collection time = T × M_0.
 Trade-off: increasing T reduces measurement error but sacrifices the adaptation advantage.
 This trade-off yields a minimum total feature collection time (high-diversity training data, σ_0² = 1).

Conclusion
Noise-free measurements:
 PCA-based and Hadamard-based AFSI systems were presented.
 The AFSI system achieves lower RMSE than the static FSI system.
Noisy measurements:
 A Hadamard-based AFSI system operating in noise was presented.
 The AFSI system achieves smaller RMSE than the static FSI system.
 There is a minimum total feature collection time to achieve a given reconstruction-quality requirement.