MESA LAB Two papers in ICFDA'14 Guimei Zhang MESA (Mechatronics, Embedded Systems and Automation) LAB School of Engineering, University of California, Merced


MESA LAB Two papers in ICFDA'14 Guimei Zhang MESA (Mechatronics, Embedded Systems and Automation) LAB School of Engineering, University of California, Merced Lab: CAS Eng 820 June 30, Monday 4:00-6:00 PM Applied Fractional Calculus Workshop MESA UCMerced

MESA LAB The first paper Paper title:

MESA LAB Motivation 1. Detect and localize objects in single-view RGB images of environments with arbitrary illumination and heavy clutter, for the purpose of autonomous grasping. 2. Objects can be of arbitrary color and interior texture; we therefore assume knowledge of only their 3D model, without any appearance/texture information. 3. Using 3D models makes the object detector immune to intra-class texture variations.

MESA LAB Motivation In this paper, we address the problem of a robot grasping 3D objects of known 3D shape from their projections in single images of cluttered scenes. We further abstract the 3D model by using only its 2D contour; detection is thus driven by the shape of the 3D object's projected occluding boundary.

MESA LAB Main achievements

MESA LAB

Overview of the proposed approach: a) The input image. b) Edge image computed with the gPb method. c) The hypothesis bounding box (red) is segmented into superpixels. d) The set of superpixels with the closest distance to the model contour is selected. e) Three textured synthetic views of the final pose estimate are shown.
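Step d) — keeping only the superpixels that lie close to the projected model contour — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the superpixel map, the per-pixel distance-to-contour table, and the threshold `thresh` are all assumed for the example:

```python
def select_superpixels(superpixels, dist_to_contour, thresh=2.0):
    """Keep superpixels whose pixels lie close, on average, to the model contour.

    superpixels: {label: [(x, y), ...]} pixel lists per superpixel
    dist_to_contour: {(x, y): distance} precomputed distance map
    """
    chosen = []
    for label, pixels in superpixels.items():
        mean_d = sum(dist_to_contour[p] for p in pixels) / len(pixels)
        if mean_d <= thresh:
            chosen.append(label)
    return chosen
```

In the paper's pipeline the selected set plays the role of a foreground mask; here the mean-distance rule and the threshold are only illustrative.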

MESA LAB How it works 1. 3D model acquisition and rendering (using a low-cost RGB-D depth sensor and a dense surface reconstruction algorithm, KinectFusion) 2. Image feature extraction (edges) 3. Object detection 4. Shape descriptor 5. Shape verification for contour extraction 6. Pose estimation (image registration)
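The shape-matching idea behind steps 2–5 can be illustrated with a toy chamfer-style comparison: compute a distance transform of the edge image, then score a projected model contour by its mean distance to the nearest edge. This is a pure-Python sketch under simplified assumptions (4-connected grid distances, edges and contours as point lists), not the paper's actual descriptor:

```python
from collections import deque

def distance_transform(edge_points, width, height):
    """BFS distance (4-connected, in pixels) from every cell to the nearest edge pixel."""
    INF = float("inf")
    dist = [[INF] * width for _ in range(height)]
    queue = deque()
    for x, y in edge_points:
        dist[y][x] = 0
        queue.append((x, y))
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < width and 0 <= ny < height and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((nx, ny))
    return dist

def chamfer_score(model_contour, dist):
    """Mean distance from each projected contour point to its nearest image edge
    (lower means a better shape match)."""
    return sum(dist[y][x] for x, y in model_contour) / len(model_contour)
```

A detection hypothesis whose contour overlaps the observed edges exactly scores 0; the score grows as the hypothesis drifts away from the edges, which is what lets the detector rank candidate poses.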

MESA LAB Example (a) Bounding boxes ordered by detection score. (b) Corresponding pose output. (c) Segmentation of the top-scoring hypothesis. (d) Foreground mask selected by shape. (e) Three iterations of pose refinement. (f) Visualization of the PR2 model with the Kinect point cloud. (g) Another view of the same scene.

MESA LAB The second paper Paper title:

MESA LAB Motivation Problems: 1. Big, complex scenes contain many 3D point clouds, which require human labeling and therefore a great deal of time. 2. Model learning can be biased by the accumulation of bias in the sample collection.

MESA LAB Motivation Therefore, this paper proposes a semi-supervised method to learn category models from unlabeled "big point cloud data". The algorithm requires labeling only a small number of object seeds in each object category to start the model learning, as shown in Fig. 1. This design saves both manual labeling and computation cost, satisfying the model-mining efficiency requirement.
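The seed-driven idea can be pictured as a simple self-training loop: start from a handful of labeled seed points and repeatedly transfer labels to unlabeled points that fall close to an already-labeled one. This is a toy illustration under assumed representations (points as coordinate tuples, a fixed Euclidean radius), not the paper's algorithm:

```python
def grow_labels(points, seeds, radius=1.5, rounds=5):
    """Self-training sketch. seeds maps point index -> category label.

    Each round, any unlabeled point within `radius` of a labeled point
    inherits that point's category; stop when nothing new gets labeled.
    """
    labels = dict(seeds)
    for _ in range(rounds):
        newly = {}
        for i, p in enumerate(points):
            if i in labels:
                continue
            for j, cat in labels.items():
                q = points[j]
                if sum((a - b) ** 2 for a, b in zip(p, q)) <= radius ** 2:
                    newly[i] = cat
                    break
        if not newly:
            break
        labels.update(newly)
    return labels
```

Labels spread outward from each seed through its cluster, which is why only a few seeds per category are needed to label a large collection.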

MESA LAB

The main contributions: 1. To the best of our knowledge, this is the first proposal for efficient mining of category models from "big point cloud data". With limited computation and human labeling, the method is oriented toward efficient construction of a category model base. 2. A multiple-model strategy is proposed as a solution to the bias problem; it provides several discrete and selective category boundaries.
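One way to picture a multiple-model strategy is an ensemble: fit several simple models, each on a different resample of the data, and classify by majority vote so that no single biased sample collection dominates. The sketch below uses hypothetical nearest-centroid models with per-category bootstrap resampling; it illustrates the general idea, not the paper's method:

```python
import random

def train_models(samples, n_models=5, seed=0):
    """samples: list of (feature_tuple, category). Fit several nearest-centroid
    models, each on a per-category bootstrap resample of the features."""
    rng = random.Random(seed)
    by_cat = {}
    for feat, cat in samples:
        by_cat.setdefault(cat, []).append(feat)
    models = []
    for _ in range(n_models):
        centroids = {}
        for cat, feats in by_cat.items():
            boot = [rng.choice(feats) for _ in feats]  # resample within the category
            centroids[cat] = tuple(sum(v) / len(v) for v in zip(*boot))
        models.append(centroids)
    return models

def classify(models, feat):
    """Each model votes for its nearest centroid's category; the majority wins."""
    votes = {}
    for m in models:
        cat = min(m, key=lambda c: sum((a - b) ** 2 for a, b in zip(feat, m[c])))
        votes[cat] = votes.get(cat, 0) + 1
    return max(votes, key=votes.get)
```

Because each model sees a different resample, the ensemble yields several discrete category boundaries rather than one, which is the spirit of the bias-mitigation claim above.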

MESA LAB Experiment Model-based point labeling results. Different colors indicate different categories, i.e. wall (green), tree (red), and street (blue).

MESA LAB Thanks