REALTIME OBJECT-OF-INTEREST TRACKING BY LEARNING COMPOSITE PATCH-BASED TEMPLATES Yuanlu Xu, Hongfei Zhou, Qing Wang*, Liang Lin Sun Yat-sen University,




Similar presentations
QR Code Recognition Based On Image Processing

Loris Bazzani*, Marco Cristani*†, Alessandro Perina*, Michela Farenzena*, Vittorio Murino*† *Computer Science Department, University of Verona, Italy †Istituto.
Road-Sign Detection and Recognition Based on Support Vector Machines Saturnino, Sergio et al. Yunjia Man ECG 782 Dr. Brendan.
DONG XU, MEMBER, IEEE, AND SHIH-FU CHANG, FELLOW, IEEE Video Event Recognition Using Kernel Methods with Multilevel Temporal Alignment.
Foreground Focus: Finding Meaningful Features in Unlabeled Images Yong Jae Lee and Kristen Grauman University of Texas at Austin.
Change Detection C. Stauffer and W.E.L. Grimson, “Learning patterns of activity using real time tracking,” IEEE Trans. On PAMI, 22(8): , Aug 2000.
Face Recognition. Introduction Why we are interested in face recognition? Why we are interested in face recognition? Passport control at terminals in.
Limin Wang, Yu Qiao, and Xiaoou Tang
Proposed concepts illustrated well on sets of face images extracted from video: Face texture and surface are smooth, constraining them to a manifold Recognition.
Face Description with Local Binary Patterns:
A NOVEL LOCAL FEATURE DESCRIPTOR FOR IMAGE MATCHING Heng Yang, Qing Wang ICME 2008.
Patch to the Future: Unsupervised Visual Prediction
Activity Recognition Aneeq Zia. Agenda What is activity recognition Typical methods used for action recognition “Evaluation of local spatio-temporal features.
Yuanlu Xu Human Re-identification: A Survey.
Real-Time Accurate Stereo Matching using Modified Two-Pass Aggregation and Winner- Take-All Guided Dynamic Programming Xuefeng Chang, Zhong Zhou, Yingjie.
Yuanlu Xu Advisor: Prof. Liang Lin Person Re-identification by Matching Compositional Template with Cluster Sampling.
Intelligent Systems Lab. Recognizing Human actions from Still Images with Latent Poses Authors: Weilong Yang, Yang Wang, and Greg Mori Simon Fraser University,
Forward-Backward Correlation for Template-Based Tracking Xiao Wang ECE Dept. Clemson University.
Robust Object Tracking via Sparsity-based Collaborative Model
Hierarchical Saliency Detection School of Electronic Information Engineering Tianjin University 1 Wang Bingren.
Watching Unlabeled Video Helps Learn New Human Actions from Very Few Labeled Snapshots Chao-Yeh Chen and Kristen Grauman University of Texas at Austin.
A KLT-Based Approach for Occlusion Handling in Human Tracking Chenyuan Zhang, Jiu Xu, Axel Beaugendre and Satoshi Goto 2012 Picture Coding Symposium.
São Paulo Advanced School of Computing (SP-ASC’10). São Paulo, Brazil, July 12-17, 2010 Looking at People Using Partial Least Squares William Robson Schwartz.
Recognition using Regions CVPR Outline Introduction Overview of the Approach Experimental Results Conclusion.
A Study of Approaches for Object Recognition
Dorin Comaniciu Visvanathan Ramesh (Imaging & Visualization Dept., Siemens Corp. Res. Inc.) Peter Meer (Rutgers University) Real-Time Tracking of Non-Rigid.
Jacinto C. Nascimento, Member, IEEE, and Jorge S. Marques
Face Recognition: An Introduction
DVMM Lab, Columbia UniversityVideo Event Recognition Video Event Recognition: Multilevel Pyramid Matching Dong Xu and Shih-Fu Chang Digital Video and Multimedia.
Computer vision.
Robust Hand Tracking with Refined CAMShift Based on Combination of Depth and Image Features Wenhuan Cui, Wenmin Wang, and Hong Liu International Conference.
Olga Zoidi, Anastasios Tefas, Member, IEEE Ioannis Pitas, Fellow, IEEE
EADS DS / SDC LTIS Page 1 7 th CNES/DLR Workshop on Information Extraction and Scene Understanding for Meter Resolution Image – 29/03/07 - Oberpfaffenhofen.
Mining Discriminative Components With Low-Rank and Sparsity Constraints for Face Recognition Qiang Zhang, Baoxin Li Computer Science and Engineering Arizona.
Shape-Based Human Detection and Segmentation via Hierarchical Part- Template Matching Zhe Lin, Member, IEEE Larry S. Davis, Fellow, IEEE IEEE TRANSACTIONS.
Visual Object Tracking Xu Yan Quantitative Imaging Laboratory 1 Xu Yan Advisor: Shishir K. Shah Quantitative Imaging Laboratory Computer Science Department.
Visual Tracking Decomposition Junseok Kwon* and Kyoung Mu lee Computer Vision Lab. Dept. of EECS Seoul National University, Korea Homepage:
Visual Tracking with Online Multiple Instance Learning
A General Framework for Tracking Multiple People from a Moving Camera
Video Tracking Using Learned Hierarchical Features
Video Based Palmprint Recognition Chhaya Methani and Anoop M. Namboodiri Center for Visual Information Technology International Institute of Information.
Marco Pedersoli, Jordi Gonzàlez, Xu Hu, and Xavier Roca
Computer Vision Lab Seoul National University Keyframe-Based Real-Time Camera Tracking Young Ki BAIK Vision seminar : Mar Computer Vision Lab.
1 Webcam Mouse Using Face and Eye Tracking in Various Illumination Environments Yuan-Pin Lin et al. Proceedings of the 2005 IEEE Y.S. Lee.
Tracking People by Learning Their Appearance Deva Ramanan David A. Forsuth Andrew Zisserman.
Face Recognition: An Introduction
Action and Gait Recognition From Recovered 3-D Human Joints IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS— PART B: CYBERNETICS, VOL. 40, NO. 4, AUGUST.
Shuai Zheng TNT group meeting 1/12/2011.  Paper Tracking  Robust view transformation model for gait recognition.
Human Re-identification by Matching Compositional Template with Cluster Sampling Yuanlu Xu 1, Liang Lin 1, Wei-Shi Zheng 1, Xiaobai Liu 2 Abstract This.
Jiu XU, Axel BEAUGENDRE and Satoshi GOTO Computer Sciences and Convergence Information Technology (ICCIT), th International Conference on 1 Real-time.
Human Detection Method Combining HOG and Cumulative Sum based Binary Pattern Jong Gook Ko', Jin Woo Choi', So Hee Park', Jang Hee You', ' Electronics and.
2D Texture Synthesis Instructor: Yizhou Yu. Texture synthesis Goal: increase texture resolution yet keep local texture variation.
Non-Ideal Iris Segmentation Using Graph Cuts
AAM based Face Tracking with Temporal Matching and Face Segmentation Mingcai Zhou 1 、 Lin Liang 2 、 Jian Sun 2 、 Yangsheng Wang 1 1 Institute of Automation.
Week 10 Emily Hand UNR.
Max-Confidence Boosting With Uncertainty for Visual tracking WEN GUO, LIANGLIANG CAO, TONY X. HAN, SHUICHENG YAN AND CHANGSHENG XU IEEE TRANSACTIONS ON.
A NEW ALGORITHM FOR THE VISUAL TRACKING OF SURGICAL INSTRUMENT IN ROBOT-ASSISTED LAPAROSCOPIC SURGERY 1 Interdisciplinary Program for Bioengineering, Graduate.
Portable Camera-Based Assistive Text and Product Label Reading From Hand-Held Objects for Blind Persons.
Shadow Detection in Remotely Sensed Images Based on Self-Adaptive Feature Selection Jiahang Liu, Tao Fang, and Deren Li IEEE TRANSACTIONS ON GEOSCIENCE.
Detecting Occlusion from Color Information to Improve Visual Tracking
Week 3 Emily Hand UNR. Online Multiple Instance Learning The goal of MIL is to classify unseen bags, instances, by using the labeled bags as training.
Guillaume-Alexandre Bilodeau
Yuanke Zhang1,2, Hongbing Lu1, Junyan Rong1, Yuxiang Xing3, Jing Meng2
PRESENTED BY Yang Jiao Timo Ahonen, Matti Pietikainen
Video-based human motion recognition using 3D mocap data
Image Segmentation Techniques
Brief Review of Recognition + Context
Online Graph-Based Tracking
Liyuan Li, Jerry Kah Eng Hoe, Xinguo Yu, Li Dong, and Xinqi Chu
Learning complex visual concepts
Presentation transcript:

REALTIME OBJECT-OF-INTEREST TRACKING BY LEARNING COMPOSITE PATCH-BASED TEMPLATES
Yuanlu Xu, Hongfei Zhou, Qing Wang*, Liang Lin
Sun Yat-sen University, Guangzhou, China
2010 International Conference on Image Processing

INTRODUCTION

Objective: To track people under partial occlusions or significant appearance variations, as Fig. 1 illustrates.

Contributions:
A. A novel model maintenance approach for patch-based tracking.
B. A more concise and effective texture descriptor.

Fig. 1. Difficulties of tracking: partial occlusions (left two), significant appearance variations (right two).

OUR METHODS

Composite Patch-based Templates (CPT) Model

We extract image patches of a fixed size from the tracking window and from the four sub-regions (up, down, left, and right) around the tracking window, forming the patch set of the tracking target and the patch set of the background, respectively (Fig. 2).

For each image patch, different types of features are applied to capture the local statistics:
A. Histogram of oriented gradients (HOG), to capture edges.
B. Center-symmetric local binary patterns (CS-LBP), as illustrated in Fig. 3, to capture texture.
C. Color histogram, to capture flatness.

The CPT model is initialized by selecting the image patches from the tracking window that maximize the difference with the background.

Fig. 2. Illustration of the CPT model.
Fig. 3. Illustration of the CS-LBP descriptor; the CS-LBP operator compares intensities in center-symmetric directions.

Tracking Algorithm

To infer the tracking target location, as shown in Fig. 4, we independently find the best match of each template (green rectangle) within its surrounding search area (blue rectangle), and compute the target location by combining the per-template matches, where the offset of each template is weighted by its discriminability weight. By thresholding the matching distance, we obtain the matching templates between two successive frames.

Fig. 4. Finding the best match of each template.

To maintain the CPT model online, we propose a new maintenance algorithm that picks new patches and fuses them into the CPT model, as shown in Fig. 5. Constructing the candidate template set from the new frame and the inferred target location, we obtain two feature sets, one from the inferred tracking window and one from the background. The matching template set and the candidate template set are fused into an excess model, and templates are re-selected to form the new CPT model.

Fig. 5. Finding the most discriminative patches among the matching templates and candidate templates to maintain the CPT model online.

EXPERIMENTS

We collect four test videos to verify our approach: two videos of human faces from the Multiple Instance Learning (MIL) benchmark, and two surveillance videos from the internet. A number of sampled tracking results are shown in Fig. 6. We compute the average tracking error against manually labeled ground truth of the tracking target location, and compare with two state-of-the-art algorithms, MIL and Ensemble Tracking, as shown in Fig. 7.

Fig. 6. Sampled results of our tracking method on targets with severe body variations and large-scale occlusions (sequences: Girl, Face Occlusion, Camera 1, Camera 2).
Fig. 7. Quantitative comparisons.

CONCLUSION

The novel maintenance approach selects effective composite templates from the fusion of the matching templates and the candidate set, and outperforms other state-of-the-art algorithms in tracking targets under various challenges. The CS-LBP descriptor is an effective, dimension-reduced texture descriptor.

REFERENCES

[1] X. Liu et al., "Representing and recognizing objects with massive local image patches," Pattern Recognition, vol. 45(1), pp. 231–240.
[2] Y. Xie, L. Lin, and Y. Jia, "Tracking objects with adaptive feature patches for PTZ camera visual surveillance," ICPR, pp. 1739–1742.
[3] X. Liu, L. Lin, S. Yan, H. Jin, and W. Jiang, "Adaptive object tracking by learning hybrid template on-line," TCSVT, vol. 21(11), pp. 1588–1599.

For further information, please contact the authors.
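The CS-LBP texture descriptor used by the CPT model can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it compares the four center-symmetric neighbor pairs around each pixel against a small threshold `t` (a common choice in the CS-LBP literature; the poster's exact threshold is not given), yielding a 4-bit code per pixel and hence a 16-bin histogram, versus 256 bins for classic LBP.

```python
import numpy as np

def cs_lbp(gray, t=0.01):
    """Center-symmetric LBP: compare the 4 center-symmetric neighbor
    pairs of each pixel, producing a 4-bit code (16-bin histogram)."""
    g = gray.astype(np.float64)
    # The 8-neighborhood of each interior pixel, counterclockwise from east.
    n = [g[1:-1, 2:], g[:-2, 2:], g[:-2, 1:-1], g[:-2, :-2],
         g[1:-1, :-2], g[2:, :-2], g[2:, 1:-1], g[2:, 2:]]
    code = np.zeros(n[0].shape, dtype=np.uint8)
    for i in range(4):
        # Bit i is set when neighbor i exceeds its opposite neighbor by t.
        code |= ((n[i] - n[i + 4]) > t).astype(np.uint8) << i
    hist = np.bincount(code.ravel(), minlength=16).astype(np.float64)
    return hist / hist.sum()
```

On a flat patch all center-symmetric differences are below the threshold, so the histogram mass collapses into bin 0, which is why the descriptor stays compact on low-texture regions.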
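The per-template vote that infers the target location can be sketched as below. The poster's formula image is not reproduced in this transcript, so the variable names (`matches`, `offsets`, `weights`) are assumptions: each matched template votes for the tracking window's corner by subtracting its stored offset, and the votes are averaged under the discriminability weights.

```python
import numpy as np

def infer_location(matches, offsets, weights):
    """Weighted vote for the tracking window's top-left corner.

    matches[i]: best-match position of template i in the new frame.
    offsets[i]: template i's stored offset inside the tracking window.
    weights[i]: template i's discriminability weight.
    (Illustrative names; the poster's exact symbols are not available.)"""
    matches = np.asarray(matches, dtype=np.float64)
    offsets = np.asarray(offsets, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    votes = matches - offsets  # each template's estimate of the corner
    return (w[:, None] * votes).sum(axis=0) / w.sum()
```

Weighting by discriminability lets well-separated foreground patches dominate the estimate when other patches are occluded or mismatched.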
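The online maintenance step, fusing matching templates with fresh candidates into an excess model and then re-selecting, might look like this sketch. The `weight` field is a hypothetical stand-in for the poster's discriminability score; the actual selection criterion (foreground-vs-background distinctiveness per feature type) is not spelled out in the transcript.

```python
def maintain_model(matched, candidates, k):
    """Fuse matched templates and new candidates into an excess pool,
    then keep the k most discriminative ones as the new CPT model.
    Each template is a dict; "weight" is a hypothetical score field."""
    pool = matched + candidates          # the "excess" model
    pool.sort(key=lambda tpl: tpl["weight"], reverse=True)
    return pool[:k]                      # re-selected CPT model
```

Capping the model at k templates keeps the per-frame matching cost bounded, which matters for the real-time claim in the title.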