Development of a system to reproduce the drainage from Tsujun Bridge for environment education
Hikari Takehara, Kumamoto National College of Technology



Background
Tsujun Bridge
– an aqueduct bridge made of stone in Kumamoto, Japan
– a representative structure in the water environment surrounding the Shiraito plateau
– drains water to flush sediment out of its water pipe
The drainage from Tsujun Bridge
– is utilized for education about the water environment in Kumamoto

Problem and Situation
Problem
– Tsujun Bridge is damaged whenever it drains water
– The cause: water leaking from Tsujun Bridge
Situation
– The number of drainage events from Tsujun Bridge has been reduced to protect the bridge
– We cannot always watch the drainage from Tsujun Bridge

Purpose
Creating content for learning about the water environment surrounding the Shiraito plateau
– at any time
– without causing any damage to Tsujun Bridge
We developed a system which reproduces the drainage scene from Tsujun Bridge using Mixed Reality (MR) technology

Mixed Reality (MR)
MR technology
– synthesizes an image of the real environment (the actually visible landscape) with an image of a virtual environment (computer graphics: CG)
– makes an image look as if, for example, there were a teapot on the table
[Figure: image of real environment + image of virtual environment → synthesized image]

Overview of the system
– Capturing the camera image
– Geometric registration
– Synthesizing the camera image and the 3D CG model

Program flow
– Camera Calibration (once)
– Capturing camera image, Geometric Registration, Synthesizing the CG and camera image (each frame)


Camera Calibration
Intrinsic parameters, which express how a camera projects a real-space object onto the projection plane, are obtained.
[Figure: object, projection plane, camera's focus]

Camera Calibration
The intrinsic parameter matrix A maps camera coordinates (X_C, Y_C, Z_C) to image coordinates (u, v):

  s (u, v, 1)^T = A (X_C, Y_C, Z_C)^T,   A = [ f_x 0 c_x ; 0 f_y c_y ; 0 0 1 ]
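As a concrete illustration of this projection (using a hypothetical focal length and principal point, not values from the paper), the mapping by A can be sketched in numpy:

```python
import numpy as np

# Hypothetical intrinsic parameters: focal lengths f_x = f_y = 800 px,
# principal point (c_x, c_y) = (320, 240) for a 640x480 image.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = A @ point_cam          # homogeneous image coordinates s*(u, v, 1)
    return uvw[:2] / uvw[2]      # divide by s to get (u, v)

u, v = project(np.array([0.1, 0.2, 1.0]))   # -> (400.0, 400.0)
```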

Program flow
– Camera Calibration (once) → intrinsic parameters
– Capturing camera image, Geometric Registration, Synthesizing the CG and camera image (each frame)

Capturing the camera image
The camera image is captured using OpenCV.
OpenCV (Open Source Computer Vision Library)
– a library for computer vision

Program flow
– Camera Calibration (once) → intrinsic parameters
– Capturing camera image → camera image; Geometric Registration, Synthesizing the CG and camera image (each frame)

Geometric registration
– relates the position and orientation of the camera to a coordinate system (the world coordinate system) set arbitrarily in the real environment
– this relation is expressed by the extrinsic parameters
[Figure: world coordinates (X_W, Y_W, Z_W) and camera coordinates (X_C, Y_C, Z_C), related by the extrinsic matrix M; projection plane]

Extrinsic parameters
– expressed as a 3×4 matrix M = [R | T] composed of a rotation matrix R and a translation vector T
– this study assumes the camera position is fixed while its orientation can change freely
– therefore T does not need to be obtained; only R needs to be obtained

Extrinsic parameters
– R is obtained using Zhang's method with a homography matrix
Homography matrix H
– transforms an image of a plane taken from one perspective into the image taken from another perspective
– expressed as a 3×3 matrix
[Figure: a plane observed from two camera poses; pixel (u, v) mapped to (u', v') by H]
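For a camera that only rotates about its center, as assumed here, the homography between two views is related to the rotation by H ∝ A R A⁻¹, so R can be recovered as A⁻¹ H A. A minimal numpy sketch of this relation (with a hypothetical intrinsic matrix A and a known test rotation, not the paper's actual values):

```python
import numpy as np

# Hypothetical intrinsic matrix (focal length 800 px, principal point (320, 240)).
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A known rotation about the vertical axis by 10 degrees, for a round-trip check.
t = np.deg2rad(10.0)
R_true = np.array([[ np.cos(t), 0.0, np.sin(t)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(t), 0.0, np.cos(t)]])

# Homography induced by a pure rotation of the camera: H = A R A^-1.
H = A @ R_true @ np.linalg.inv(A)

# Recover the rotation from the homography. In practice H is only known up to
# scale and is noisy, so the result would additionally be normalized and
# projected back onto the rotation group.
R_rec = np.linalg.inv(A) @ H @ A
```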

Homography matrix
– obtaining this matrix requires at least 4 corresponding points between the 2 images taken from the different perspectives
– corresponding points are obtained with a method based on natural features
– natural features are extracted with SURF
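Given at least four correspondences, H can be estimated with the standard direct linear transform (DLT). The sketch below uses plain numpy and synthetic points rather than real SURF matches, to show the idea:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H (up to scale) mapping src -> dst
    from >= 4 point correspondences, via the direct linear transform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    M = np.asarray(rows)
    # h is the null vector of M: the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(M)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]            # fix the arbitrary scale

# Synthetic check: map points through a known homography and recover it.
H_true = np.array([[1.0,   0.2,   5.0],
                   [0.1,   1.1,  -3.0],
                   [0.001, 0.002, 1.0]])
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                [100.0, 100.0], [50.0, 30.0]])
h = np.column_stack([src, np.ones(len(src))]) @ H_true.T
dst = h[:, :2] / h[:, 2:]
H_est = estimate_homography(src, dst)
```

In practice OpenCV's `cv2.findHomography` does this (with RANSAC to reject bad matches), but the DLT above is the core computation.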

Program flow
– Camera Calibration (once) → intrinsic parameters
– Capturing camera image → camera image; Geometric Registration → extrinsic parameters; Synthesizing the CG and camera image (each frame)

Synthesizing the CG and camera image
The camera image and the 3D CG model of the drainage are synthesized on the basis of the intrinsic and extrinsic parameters, using OpenGL.
OpenGL (Open Graphics Library)
– a library for 3D graphics
[Figure: intrinsic parameters + extrinsic parameters + 3D CG model of drainage + camera image → synthesized image]
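The core of this step is placing each CG vertex at the pixel given by the full camera model u ∼ A (R X_W + T). A small numpy sketch (with hypothetical A, R, and T, not the system's calibrated values) of where one model vertex lands in the image:

```python
import numpy as np

# Hypothetical intrinsic matrix and extrinsic parameters for illustration.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                    # camera aligned with the world axes
T = np.array([0.0, 0.0, 5.0])    # world origin 5 units in front of the camera

def project_world_point(Xw):
    """Project a world point to pixel coordinates via u ~ A (R Xw + T)."""
    Xc = R @ Xw + T              # world -> camera coordinates (extrinsics)
    uvw = A @ Xc                 # camera -> homogeneous image coords (intrinsics)
    return uvw[:2] / uvw[2]

# The world origin projects to the principal point here.
u, v = project_world_point(np.array([0.0, 0.0, 0.0]))   # -> (320.0, 240.0)
```

In the actual system, OpenGL performs this projection for the whole drainage model by loading A into the projection matrix and [R | T] into the modelview matrix, then composites the rendering over the camera image.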

Result of reproducing the drainage scene

Result
The drainage scene was reproduced at the correct position for the most part.
Average execution time: [s]
Average estimation error: [pixel]

Conclusion
In the future
– the execution speed of this system will be improved by using another method of obtaining corresponding feature points, such as optical flow
– this system will be made available even when the camera position can move freely
– this system will be operated at the actual Tsujun Bridge and utilized for learning about the water environment

End

SURF
SURF (Speeded Up Robust Features)
– a method that extracts feature points robust to scaling and rotation of the image and describes their feature quantities
– in the following image, the corresponding points are obtained using SURF