Foundations & Core in Computer Vision: A System Perspective
Ce Liu, Microsoft Research New England

Vision vs. Learning
Computer vision: a visual application of machine learning?
Data → features → algorithms → data
ML: design algorithms given the input and output data
CV: find the best input and output data given the available algorithms
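As a concrete reading of the data → features → algorithms → data loop, here is a minimal sketch (my own illustration, assuming scikit-learn and scikit-image are installed; the dataset and features are stand-ins): the learning algorithm is held fixed while the input representation is varied, which is the "find the best input data" side of the split.

```python
# Minimal sketch of the data -> features -> algorithm pipeline.
# Assumes scikit-learn and scikit-image; dataset and features are illustrative.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from skimage.feature import hog

digits = load_digits()                      # data: 8x8 grayscale digit images
images, labels = digits.images, digits.target

def raw_pixels(img):
    return img.ravel()                      # trivial feature: flattened pixels

def hog_features(img):
    return hog(img, pixels_per_cell=(4, 4), cells_per_block=(1, 1))

# ML fixes the algorithm (a linear SVM); the vision question here is which
# input representation (features) serves that fixed algorithm best.
for name, extract in [("raw pixels", raw_pixels), ("HOG", hog_features)]:
    X = np.array([extract(im) for im in images])
    score = cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean()
    print(f"{name:10s}: mean cross-validated accuracy {score:.3f}")
```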

Theoretical vs. Experimental
Theoretical analysis of a visual system
– Best and worst cases
– Average performance
Theoretical analysis is challenging because many visual distributions are hard to model (signal processing handles second-order processes; machine learning handles exponential families)
Experimental approach: characterize the full spectrum of system performance as a function of the amount of data, annotation, number of categories, noise, and other conditions
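A hedged sketch of the experimental approach, using scikit-learn's digits set as a stand-in dataset: sweep the amount of training data and the fraction of corrupted labels, and record test accuracy at each setting. The dataset, model, and grid values are illustrative choices, not ones from the talk.

```python
# Sketch: measure accuracy as a function of training-set size and label noise.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)
rng = np.random.default_rng(0)

for n in (50, 200, 800):                          # amount of data
    for noise in (0.0, 0.1, 0.3):                 # fraction of corrupted labels
        idx = rng.choice(len(X_train), size=n, replace=False)
        Xs, ys = X_train[idx], y_train[idx].copy()
        flip = rng.random(n) < noise
        ys[flip] = rng.integers(0, 10, size=flip.sum())  # simulate bad annotation
        clf = LogisticRegression(max_iter=2000).fit(Xs, ys)
        print(f"n={n:4d} noise={noise:.1f} acc={clf.score(X_test, y_test):.3f}")
```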

Quality vs. Speed
HD videos and billions of images to index
Real time at 90% accuracy vs. one hour per frame at 95%?
We need a mechanism to balance quality and speed in modeling
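One possible mechanism, sketched below with made-up operating points (the 90%/95% and timing figures echo the slide, but the table itself is hypothetical): enumerate configurations of a model and pick the most accurate one that still fits the per-frame time budget.

```python
# Sketch: pick the best operating point under a time budget.
# The configurations and their accuracy/speed numbers are assumed for illustration.
from dataclasses import dataclass

@dataclass
class OperatingPoint:
    name: str
    seconds_per_frame: float
    accuracy: float          # assumed benchmark score, not a measured one

CANDIDATES = [
    OperatingPoint("coarse, 1 iteration", 0.03, 0.90),
    OperatingPoint("medium, 3 iterations", 0.20, 0.93),
    OperatingPoint("fine, 10 iterations", 3600.0, 0.95),
]

def pick(budget_seconds: float) -> OperatingPoint:
    """Return the most accurate configuration that meets the budget."""
    feasible = [p for p in CANDIDATES if p.seconds_per_frame <= budget_seconds]
    if not feasible:
        raise ValueError("no configuration meets the budget")
    return max(feasible, key=lambda p: p.accuracy)

print(pick(1 / 30))     # real-time video -> the 90% operating point
print(pick(7200))       # offline indexing -> the slower 95% operating point
```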

Automatic vs. Semi-automatic
Common review feedback: "parameters are hand-tuned; it is not clear how to set them"
Vision system user feedback: "I don't know how to tweak the parameters!"
Computer-oriented vs. human-oriented representations
Human-in-the-loop (collaborative) vision
– How to use humans optimally (what to ask, which items, and how accurately) beyond traditional active learning
– Model design by crowd-sourcing
– Learning by subtraction
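For contrast with what the slide wants to go beyond, here is a minimal uncertainty-sampling sketch of traditional active learning, using scikit-learn's digits data as a stand-in; the "human" is simulated by revealing the ground-truth labels of the queried images.

```python
# Sketch: human-in-the-loop labeling via uncertainty sampling (classic active learning).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)
labeled = list(rng.choice(len(X), size=20, replace=False))   # small seed set

for round_ in range(5):
    clf = LogisticRegression(max_iter=2000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X)
    uncertainty = 1.0 - probs.max(axis=1)        # low top-class confidence = uncertain
    uncertainty[labeled] = -1.0                  # never re-ask about labeled items
    ask = np.argsort(uncertainty)[-10:]          # the 10 most uncertain images
    labeled.extend(ask)                          # the "human" labels exactly these
    # Accuracy on the whole pool, just to watch the loop improve.
    print(f"round {round_}: {len(labeled)} labels, pool acc={clf.score(X, y):.3f}")
```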

Algorithms vs. Sensors
Two approaches to solving a vision problem
– Look at images, design algorithms, experiment, improve, …
– Look at cameras, design new/better sensors, …
Cameras for full spectrum, high resolution, low noise, depth, motion, occluding boundaries, objects, …
What is the optimal sensor/device for solving a vision problem?
What is the limit of sensors?
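A small synthetic sketch of the two routes (the scene, noise levels, and burst size are assumptions for illustration, not from the talk): a better sensor lowers noise at capture time, while an algorithm, here plain burst averaging, tries to remove it after the fact.

```python
# Sketch: better sensor vs. algorithmic denoising on a synthetic scene.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.random((64, 64))                    # stand-in for the true scene

def capture(sigma):
    """One noisy frame from a sensor with noise level sigma (assumed Gaussian)."""
    return scene + rng.normal(0, sigma, scene.shape)

def rmse(img):
    return float(np.sqrt(np.mean((img - scene) ** 2)))

cheap_sensor = capture(0.10)                    # noisy sensor, single frame
better_sensor = capture(0.02)                   # better sensor, single frame
algorithmic = np.mean([capture(0.10) for _ in range(16)], axis=0)  # burst averaging

print(f"cheap sensor       RMSE {rmse(cheap_sensor):.3f}")
print(f"better sensor      RMSE {rmse(better_sensor):.3f}")
print(f"cheap + algorithm  RMSE {rmse(algorithmic):.3f}")
```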

Thank you!
Ce Liu, Microsoft Research New England