Paul Mann

Overview
• Background
• Outline of principles of methods
• Limitations
• Thoughts on design
• Potential outputs
• Variations
• Any suggestions?

Background
• Last year's symposium... I liked Kevin Dixon's laser scanner, a toy I could get used to playing with.
• But I was most impressed by the potential of Anna Mason's videogrammetry work.

Videogrammetry
• The low-contrast problem...
  - The method relies on tracking automatically identifiable features through a sequence of frames.
  - Cave walls are often low-contrast dull browns, lacking identifiable features.
  - Shadows are the most readily identified feature, but these change position as the light source moves.
• Suggested solution... add targets
  - But this takes time and is invasive.
• So why not use projected spots of light?
• Thinking time...

Trying out videogrammetry
• Freeware and demo versions of videogrammetry software from the internet
• Attempts to project light patterns
• Also to enhance edge detection
• Plenty of difficulties, not many results

Sidelined
• "To build your own laser scanner for under a fiver..."
  1. A computer
  2. A web cam
  3. A few sheets of paper
  4. A laser
  5. A glass fuse
  6. David...

Success at last!

How it works...
• Orthogonal background planes
  - Marked with a calibration grid
    ○ Locates the camera
    ○ Scales and orientates the planes
    ○ Provides corrections for lens distortions
• Structured light
  - Planar laser
• Calculate the plane of light
  - From its incidence on the orthogonal planes
• Any other point lit must be on this plane
  - And on the linear ray path calculated for that pixel
  - Hence the point's co-ordinates are given by the intersection of these two
Image from David-Laserscanner.com home page
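
As a minimal sketch of that final intersection step (names and numbers here are illustrative, not taken from the DAVID software): once calibration supplies the back-projected ray for a pixel, and the laser plane has been fitted from its trace on the two reference planes, the surface point drops out of a one-line ray-plane intersection.

```python
import numpy as np

def ray_plane_intersection(ray_dir, plane_point, plane_normal):
    """Intersect a camera ray (through the camera centre, taken as the
    origin) with the laser plane; returns the 3D surface point."""
    t = plane_normal.dot(plane_point) / plane_normal.dot(ray_dir)
    return t * ray_dir

# Illustrative values only: one pixel's ray and a laser plane recovered
# from its incidence on the two calibration planes.
ray = np.array([0.1, -0.05, 1.0])
point = ray_plane_intersection(ray,
                               plane_point=np.array([0.0, 0.0, 0.8]),
                               plane_normal=np.array([0.3, 0.0, 1.0]))
```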

Drawbacks
• The David system is constrained by its need for two reference planes.
• It is possible to scan through a hole to capture detail behind the planes, but range is very limited as the laser line must still intersect both planes.
• Like most structured light solutions it is "inward" looking.

Useful in caves?
• Excellent for capturing details on a cave wall
  - Rock art
  - Scallops
• But can we capture the whole cave?
  - Back to the drawing board

More internet research...
• Excel 2009 survey trade show in York
• Surveying is fun...
  - ...it has lots of nice toys...
  - ...but better to do it properly
• So off to Glasgow... a project required

One solution
• A structured light triangulation profiler

One solution
• A structured light triangulation scanner

Design
• The components:
  - A planar laser
  - A digital camera
  - A suitably wide lens
    ○ A parabolic reflector lens is a bit pricey, but ideal. Fortunately I have a couple spare.

How it works
[Photo: OUCC archives, F2, The Font Pitch, 1983]

How it works.
• The centre point of the image is equivalent to straight down
• A fixed radius out is horizontal, i.e. a circle
• The lens tangent point is at the edge of the circle
  - About 70 degrees upwards

How it works..
• Vertical angle (clino) is a function of distance from the central point
• Horizontal angle (compass) is directly equivalent to the angle from the central point

How it works...
• The laser sheet is perpendicular to the camera axis, and offset from the lens by a measured distance.
• School geometry:
  - tan Φ = opposite / adjacent
  - adjacent = offset, the fixed lens-to-laser distance
  - opposite = L, the distance from the axis to the point
  - Hence L = offset × tan Φ
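
A worked example (my numbers, purely for illustration): with the laser sheet offset 1 m from the lens, a point seen at Φ = 45° lies L = 1 m × tan 45° = 1 m from the axis, while one seen at Φ = 60° lies about 1.73 m out. Equal steps in Φ therefore correspond to ever larger steps in L, which matters for the precision discussion later.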

Processing
• First, find the illuminated pixels
• Each illuminated pixel has an (X, Y) value
  - Convert this to image polar co-ordinates (r, θ)
• Convert image polar co-ordinates to instrument-space polar co-ordinates (L, θ)
  - θ remains the same
  - Φ is a function of r, and L is a function of Φ, i.e. L is a function of r
• Convert instrument polar co-ordinates into real-space co-ordinates (x, y, z)
  - Need to know the orientation and position of the instrument.
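
A hedged sketch of that pipeline for one frame (function and variable names are mine, not from the project; `phi_of_r` stands in for whatever calibrated r-to-Φ mapping is used):

```python
import numpy as np

def profile_points(image, cx, cy, offset, phi_of_r, threshold=200):
    """Convert laser-lit pixels in one frame to instrument-space points.

    Assumptions: (cx, cy) is the image centre (camera axis), offset is
    the measured lens-to-laser-sheet distance in metres, and phi_of_r
    maps image radius in pixels to vertical angle in radians.
    """
    # 1. Find illuminated pixels, here by a simple brightness threshold
    ys, xs = np.nonzero(image > threshold)

    # 2. Pixel (X, Y) -> image polar (r, theta) about the image centre
    r = np.hypot(xs - cx, ys - cy)
    theta = np.arctan2(ys - cy, xs - cx)

    # 3. Image polar -> instrument polar: theta is unchanged,
    #    phi = f(r), and L = offset * tan(phi) (previous slide)
    phi = phi_of_r(r)
    L = offset * np.tan(phi)

    # 4. Instrument polar -> instrument-space Cartesian; the laser sheet
    #    lies at z = offset along the camera axis. A final rigid transform
    #    (instrument position and orientation) would take these into
    #    real-space co-ordinates.
    return np.column_stack([L * np.cos(theta),
                            L * np.sin(theta),
                            np.full_like(L, offset)])
```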

Complications to processing
• Barrelling / pincushioning of the camera lens
  - Easily modelled
• Modelling of the parabolic lens curvature
  - Again, relatively easily modelled
• Offset between the axes of the two lenses
  - A bit more challenging
• Either rigorous calibration to determine the lens constants
• Or skip that and calibrate straight to an (X, Y) look-up table
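
A sketch of the look-up-table route (all calibration numbers below are invented for illustration): photograph targets at known vertical angles, record the image radius at which each appears, and interpolate between them. This absorbs barrel distortion and mirror curvature in one step; a full per-pixel (X, Y) table would additionally absorb the lens-axis offset, whereas the radial version below assumes the axes are aligned.

```python
import numpy as np

# Hypothetical calibration data: image radii (pixels) at which targets
# of known vertical angle were observed.
calib_r = np.array([50.0, 150.0, 250.0, 350.0, 450.0])   # pixels
calib_phi = np.radians([10.0, 25.0, 40.0, 55.0, 70.0])   # radians

def phi_of_r(r):
    """Empirical r -> phi mapping by interpolation, usable directly in
    the processing step on the previous slide."""
    return np.interp(r, calib_r, calib_phi)
```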

Limitations
• Precision
  - Limited by pixel spacing
  - For a 6-Mpixel camera:
    ○ 1000 pixels cover 160 degrees vertically
    ○ 1 pixel equates to about 1/6 degree
    ○ For a laser offset of 1 m and a passage radius of 1 m, this equates to about 3 mm precision
  - But because of the tan Φ relationship, this drops off rapidly as passage radius increases.
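
A rough sketch of that drop-off, using the figures above (my arithmetic, so treat the exact millimetre values as indicative only): the range covered by one pixel grows as 1/cos²Φ, so millimetre-level precision near the instrument degrades towards centimetres a few metres out.

```python
import numpy as np

d_phi = np.radians(160 / 1000)   # about 1/6 degree per pixel
offset = 1.0                     # laser-sheet offset, metres

for L in [0.5, 1.0, 2.0, 4.0]:   # passage radius, metres
    phi = np.arctan(L / offset)
    # Range change across one pixel: dL = offset * d_phi / cos^2(phi)
    dL = offset * d_phi / np.cos(phi) ** 2
    print(f"radius {L:.1f} m -> per-pixel precision {dL * 1000:.1f} mm")
```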

Limitations
• Is the drop-off in precision a problem in cave-survey situations?
  - I suspect not
  - In big passages, increase the offset of the laser
• Need for low light
  - Hard to make the system work in daylight
  - Suits cave survey

Output
• Remember, not just one point being recorded
  - Typically a passage profile of 3000 points
  - Captured at the refresh rate of the camera
• Can easily distinguish red and green lasers
• Can easily determine above and below the reflector plane
• Hence could create upwards of 10,000 points per shot
  - Comparable to commercially available scanners

Challenges
• Getting from a profiler to a scanner
  - Movement need not be rotational; the design is well suited to linear capture
  - But how do we control such motion and record it accurately?
  - Stringing taut wires is a lot of work
  - Shafts are a lot easier to constrain
  - Possibly active railway tunnels (or even recording vegetation overhanging the lines)

Handling data
• Point clouds
  - A set of points defined by their x, y, z co-ordinates
  - May have other attributes linked (e.g. RGB)
• Huge data sets
• Specialist software
• Getting away from paper
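
As a small illustration of moving profiles into that specialist software (a sketch, not part of the project): ASCII PLY is a common point-cloud interchange format that viewers such as MeshLab and CloudCompare read directly.

```python
def write_ply(path, points):
    """Write an iterable of (x, y, z) points as a minimal ASCII PLY file."""
    points = list(points)
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")
```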

Variations
• Other structured light solutions exist
• How about combining this approach with photogrammetry / videogrammetry?
• Increasing computer power is making complex solutions viable
• Robotics is driving a lot of research in this area; worth keeping an eye on these developments

Over to you...
• Thoughts about how this could be used, developed, etc. are very welcome.