Stereoscopic PIV

Stereoscopic PIV
- Theory of stereoscopic PIV
- Dantec Dynamics' stereoscopic PIV software
- Application example: stereoscopic PIV in an automotive wind tunnel (used as the example throughout the slide show)

Fundamentals of Stereo Vision

[Figure: left and right cameras viewing the light sheet; the true displacement in the centre of the light sheet (the focal plane) appears as a different displacement when seen from the left and from the right camera.]

- True 3D displacement (ΔX, ΔY, ΔZ) is estimated from a pair of 2D displacements (Δx, Δy) as seen from the left and right camera respectively.

Stereoscopic PIV is based on the same fundamental principle as human eyesight: stereo vision. Our two eyes see slightly different images of the world surrounding us, and by comparing these images the brain is able to make a 3-dimensional interpretation. With only one eye you will be perfectly able to recognize motion up, down or sideways, but you may have difficulty judging distances and motion towards or away from yourself. As with 2D measurements, stereoscopic PIV measures displacements rather than actual velocities, and here the cameras play the role of "eyes". The most accurate determination of the out-of-plane displacement (i.e. velocity) is accomplished when there is 90° between the two cameras. In case of restricted optical access, smaller angles can be used at the cost of somewhat reduced accuracy. For each vector we extract 3 true displacements (ΔX, ΔY, ΔZ) from a pair of 2-dimensional displacements (Δx, Δy) as seen from the left and right camera respectively; basically it boils down to solving 4 equations with 3 unknowns in a least-squares manner (a sketch follows below). Depending on the numerical model used, these equations may or may not be linear.
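The least-squares reconstruction can be written out explicitly. Below is a minimal sketch in Python (not Dantec's implementation); the 4×3 sensitivity matrix is hypothetical, and in practice its rows come from the local gradients of each camera's imaging model at the interrogation point:

```python
import numpy as np

# Hypothetical sensitivity matrix: each camera contributes two rows
# relating the observed in-plane displacement (dx, dy) to the true
# displacement (dX, dY, dZ). Values here assume two cameras viewing
# the light sheet symmetrically from left and right.
A = np.array([
    [1.0, 0.0, -0.7],   # left camera,  x-equation
    [0.0, 1.0,  0.0],   # left camera,  y-equation
    [1.0, 0.0,  0.7],   # right camera, x-equation
    [0.0, 1.0,  0.0],   # right camera, y-equation
])

# Observed 2D displacements: (dx, dy) from the left, then the right camera.
b = np.array([0.35, 0.10, 1.05, 0.10])

# Solve the overdetermined system A @ d = b in a least-squares sense:
# 4 equations, 3 unknowns.
d, *_ = np.linalg.lstsq(A, b, rcond=None)
dX, dY, dZ = d
print(f"dX={dX:.3f}, dY={dY:.3f}, dZ={dZ:.3f}")  # dX=0.700, dY=0.100, dZ=0.500
```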

Stereo Recording Geometry
- Focusing an off-axis camera requires tilting of the camera sensor (the Scheimpflug condition).
- Stereoscopic evaluation requires a numerical model describing how objects in space are mapped onto the sensor of each camera.
- Parameters for the numerical model are determined through camera calibration.

When viewing the light sheet at an angle, the camera backplane (i.e. the camera sensor) must be tilted in order to properly focus the camera's entire field of view. It can be shown that the image plane, lens plane and object plane must cross each other along a common line in space for the camera images to be properly focused over the entire field of view. This is referred to as the Scheimpflug condition, and it is used in most stereoscopic PIV setups.

Performing the stereoscopic evaluation requires a numerical model describing how objects in 3-dimensional space are mapped onto the 2-dimensional image recorded by each of the cameras. The pinhole camera model is based on geometrical optics and leads to the so-called direct linear transformation (DLT):

  x = (A11·X + A12·Y + A13·Z + A14) / (A31·X + A32·Y + A33·Z + A34)
  y = (A21·X + A22·Y + A23·Z + A24) / (A31·X + A32·Y + A33·Z + A34)

where the uppercase symbols X, Y and Z are world coordinates, while the lowercase symbols x and y represent image coordinates. The DLT is an idealised model: it does not account for lens distortion, and it cannot properly handle refraction when, for example, measuring from air into water through a window. A multitude of alternative camera models exists, but for measuring airflows the DLT is in most cases sufficient.
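As an illustration, the DLT mapping above can be coded directly. A minimal sketch, assuming a 3×4 coefficient matrix A (the function name is ours; real coefficients come from camera calibration):

```python
import numpy as np

def dlt_project(A, world_point):
    """Map a 3D world point (X, Y, Z) to image coordinates (x, y) via DLT."""
    X, Y, Z = world_point
    denom = A[2, 0] * X + A[2, 1] * Y + A[2, 2] * Z + A[2, 3]
    x = (A[0, 0] * X + A[0, 1] * Y + A[0, 2] * Z + A[0, 3]) / denom
    y = (A[1, 0] * X + A[1, 1] * Y + A[1, 2] * Z + A[1, 3]) / denom
    return np.array([x, y])
```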

Camera Calibration
- Images of a calibration target are recorded. The target contains calibration markers in known positions.
- Comparing the known marker positions with the corresponding marker positions on each camera image, the model parameters are adjusted to give the best possible fit.

With the DLT model, the coefficients of the A-matrix can in principle be calculated from known angles, distances and so on for each camera. In practice, however, this approach is not very accurate: as any experimentalist will know, once you are in the laboratory you cannot set up the experiment exactly as planned, and it is very difficult, if not impossible, to measure the relevant angles and distances with sufficient accuracy. Instead, an experimental camera calibration is performed: images of a calibration target are recorded. The calibration target contains calibration markers (for example dots or crosses) whose true (X, Y, Z) positions are known. Comparing the known marker positions with the positions of their respective images on each camera image, the model parameters can be estimated (see the sketch below).
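With the DLT model, this parameter estimation reduces to a linear least-squares problem. A minimal sketch, assuming hypothetical input arrays of known marker positions and their detected image positions, and the common normalization A34 = 1:

```python
import numpy as np

def calibrate_dlt(world, image):
    """Fit the 11 free DLT coefficients from marker correspondences.

    world: (N, 3) array of known marker positions (X, Y, Z)
    image: (N, 2) array of detected marker positions (x, y) on one camera
    """
    rows, rhs = [], []
    for (X, Y, Z), (x, y) in zip(world, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        rhs.append(x)
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        rhs.append(y)
    # 11 unknowns: at least 6 markers are needed, and they must not be coplanar.
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.append(coeffs, 1.0).reshape(3, 4)  # full 3x4 A-matrix, A34 = 1
```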

Overlapping Fields of View
- Stereoscopic evaluation is possible only within the area covered by both cameras.
- Due to perspective distortion, each camera covers a trapezoidal region of the light sheet.
- Careful alignment is required to maximize the overlap area.
- The interrogation grid is chosen to match the spatial resolution.

Obviously, stereoscopic reconstruction is possible only where information is available from both cameras. Due to perspective distortion each camera covers a trapezoidal region of the light sheet, and even with careful alignment of the two cameras their respective fields of view will only partly overlap. Within the region of overlap, interrogation points are chosen in a rectangular grid. In principle, stereoscopic calculations could be performed on an arbitrarily dense grid, but the 2D results from each camera have limited spatial resolution, and using a very dense grid for the 3D evaluation will not improve the fundamental spatial resolution of the technique.

Left / Right 2D Vector Maps
- Left and right camera images are recorded simultaneously.
- Conventional PIV processing produces 2D vector maps representing the flow field as seen from the left and from the right.
- The vector maps are re-sampled at points corresponding to the interrogation grid.
- Combining the left and right results, all three velocity components are calculated.

The actual stereoscopic measurement starts with conventional 2D PIV processing of simultaneous recordings from the left and right cameras. This produces two 2-dimensional vector maps representing the instantaneous flow field as seen from the left and right camera respectively. Using the camera model with the parameters from the calibration, the points of the chosen interrogation grid are mapped from the light-sheet plane onto the left and right image planes (camera sensors). The 2D vector maps are re-sampled at these new interrogation points, using for example bilinear interpolation to estimate the 2D vectors at these points from their nearest neighbours (see the sketch below). With a 2D displacement seen from both left and right camera estimated at the same point in physical space, the true 3D particle displacement can then be estimated by solving 4 equations with 3 unknowns in a least-squares manner.
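The bilinear re-sampling step can be sketched as follows; the array layout (one vector component stored on a regular grid, indexed [row, column]) is an assumption for illustration:

```python
import numpy as np

def resample_bilinear(field, x, y):
    """Estimate a vector component 'field' at fractional grid position (x, y).

    Works for interior points; a real implementation also handles edges
    and invalid neighbours.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * field[y0, x0] +
            fx * (1 - fy) * field[y0, x0 + 1] +
            (1 - fx) * fy * field[y0 + 1, x0] +
            fx * fy * field[y0 + 1, x0 + 1])
```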

Stereoscopic Reconstruction

[Figure: left and right 2D vector maps combined over the overlap area with the interrogation grid, yielding the resulting 3D vector map.]

With Dantec Dynamics' DynamicStudio, a large number of matching 2D vector-map pairs can be recorded quickly, yielding a corresponding number of 3D vector maps after a bit of post-processing. A large number of vector maps is required to calculate reliable statistics such as 3D mean velocities, RMS values and cross-correlation coefficients.

Dantec Dynamics Stereoscopic PIV System Components
- Seeding
- PIV laser (double-cavity Nd:YAG)
- Light-guiding arm and light-sheet optics
- 2 cameras on Scheimpflug mounts
- Calibration target
- DynamicStudio PIV software
- DynamicStudio stereoscopic PIV add-on

Issues common to 2D and stereoscopic PIV:
- Seeding
- Energy budget (laser pulse energy vs. measuring area and light-sheet thickness)
- Light-sheet thickness (significant flow through the light sheet may require a thicker light sheet, which in turn may require a more powerful laser)

Issues specific to stereoscopic PIV:
- A calibration target for camera calibration
- 2 (off-axis) cameras on Scheimpflug mounts to allow proper focusing
- Software: Dantec Dynamics' DynamicStudio system has always included support for multiple cameras; the routines for camera calibration and stereoscopic evaluation are added to the DynamicStudio software package by the stereoscopic PIV add-on.

Recipe for a Stereoscopic PIV Experiment
1. Carefully align the light sheet with the calibration target.
2. Record calibration images in the desired measuring position using both cameras (the target defines the coordinate system!).
3. Perform the camera calibration based on the calibration images.
4. Record particle images with the laser turned on.
5. Perform a calibration refinement to correct for the residual misalignment between the calibration target and the laser light sheet.
6. Record particle images of your flow using both cameras.
7. Calculate the 2D PIV vector maps.
8. Calculate 3D vectors based on the two 2D PIV vector maps and the (refined) camera calibration.

For the camera calibration, images of a calibration target are required. The plane target must be parallel to the light sheet, and the target is traversed perpendicular to its own surface to acquire calibration images covering the full thickness of the light sheet (images are recorded in at least 3, but typically 5 or 7, positions). Calibration markers on the target identify the X- and Y-axes of the coordinate system, and the traverse moving the target identifies the Z-axis. Since the calibration target and the traverse define the coordinate system, care should be taken to align the target and the traverse with the experiment. The subsequent stereoscopic evaluation implicitly assumes that the coordinate system is Cartesian, so it is also important that the traverse is normal to the calibration-target surface. The stereoscopic evaluation also assumes that the central calibration image corresponds to the centre of the light sheet, so proper alignment of the laser and the calibration target is essential for reliable results. Imperfections in the alignment of the calibration target with the light sheet can be overcome by performing a calibration refinement using actual particle images.

Camera Calibration
Each image pair must be related to a specific Z-coordinate, and the software looks for this in the log-field of the set-up. If no Z-coordinate is found there, it looks for coordinates in the log-fields of each individual image, and if a Z-coordinate still cannot be found, the user is prompted to specify one. If the calibration images appear OK, the user can proceed to the actual calibration. This is a two-step procedure handled automatically by the software: First, image processing is used to determine the positions of the calibration markers on the camera images with sub-pixel accuracy. This produces a list of nominal marker positions and the corresponding image coordinates on the sensors of the left and right cameras. Second, this list is used to estimate the parameters of the chosen numerical imaging model, giving the best possible fit between the nominal calibration-marker positions and their corresponding pixel positions on the camera images. A single project may include several sets of calibration images, and for each set several calibrations can be performed, for example using different imaging models.
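The source does not say which sub-pixel detector DynamicStudio uses; one common approach is an intensity-weighted centroid. A minimal sketch, assuming a small image patch containing a single bright marker on a dark background:

```python
import numpy as np

def marker_centroid(patch):
    """Locate a bright calibration marker with sub-pixel accuracy."""
    ys, xs = np.indices(patch.shape)
    w = patch.astype(float)
    w -= w.min()                      # suppress the uniform background level
    total = w.sum()
    # Intensity-weighted centroid, returned as (x, y) in patch coordinates.
    return (xs * w).sum() / total, (ys * w).sum() / total
```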

Calibration Refinement
Calibration refinement improves the accuracy of an existing stereo calibration by using particle images acquired simultaneously from both cameras. Each of the original imaging model fits (IMFs) refers to a coordinate system defined by the calibration target used. When the imaging models are used in later analyses, it is generally assumed that the X/Y-plane at Z=0 corresponds to the centre of the light sheet, but in practice this assumption may not hold, since it can be very difficult to align the calibration target properly with the light sheet. Provided the calibration target was reasonably well aligned with the light sheet, it is however possible to adjust the imaging model fits by analyzing a series of particle images acquired simultaneously by the two cameras. This adjustment is referred to as calibration refinement; it changes the coordinate system used so that Z=0 does indeed correspond to the centre of the light sheet, as assumed by the subsequent analyses using the camera calibrations (IMFs). The particle images can be from actual measurements; they need not be acquired specifically for the purpose of calibration refinement. You may benefit from preprocessing the particle images, for example to remove the background, in which case the processed images can be chosen as input for the calibration refinement. Likewise, preprocessing of the calibration images may be applied to get the best possible initial camera calibrations. The red and blue polygons illustrate the part of the light sheet visible from each of the cameras according to the present calibrations. Analysis is only possible in the overlap area visible from both cameras, so in the example above the cameras' fields of view could have been aligned better.

Calibration Refinement
The resulting vector map represents the misalignment between the light sheet and the calibration target and is referred to as a 'disparity map'. Each vector must be distinct, in the sense that its correlation peak must rise significantly above the noise floor. Vectors that do not fulfil this SNR criterion are shown in red and excluded from further calculations.

Interpreting correlation maps
If your disparity vectors do not look as clean as in the example above, inspecting the correlation map can be very helpful in understanding what the problem is. It is quite normal for the correlation peak to be elongated; this is a consequence of the light sheet having a finite thickness and the cameras viewing it from different angles. If the peak appears fragmented (multiple peaks along a common ridge), the likely cause is that too few particle images went into the calculation. This can simply be because there were not enough images, or because you are at or near the edge of the light sheet, where the light intensity is low and only a few of the images contain particles big and bright enough to be detected by both cameras. If the correlation map contains no peaks at all, it is possible that the misalignment between light sheet and calibration target was simply too big for the calibration refinement to recover the true position of the light sheet. In that case the peak we are looking for lies outside the interrogation area, and you may try using a bigger one. Alternatively, check whether you are by mistake working with particle images that were not acquired simultaneously, in which case they will of course not correlate at all. Finally, consider the possibility that the cameras and/or the light sheet moved during acquisition: if, for example, mirrors are involved in bringing the light sheet to the intended measuring area, small mechanical vibrations can cause the light sheet to move by several millimetres. Similarly, mechanical vibrations may cause the cameras to move, which will of course also have a serious impact on the calibration and the subsequent analysis of images.

Interpreting disparity vectors
Each disparity vector can be interpreted as a measure of the distance from the nominal Z=0 plane to where the light sheet really is. The vectors will in general point in one direction if the light sheet is closer to the cameras than assumed, and in the opposite direction if it is further away. Assuming the light sheet is plane, we can make a least-squares fit of a plane through all of the disparity vectors and use the fitted plane as an estimate of where the light sheet really is (a sketch of such a fit follows below). We can then define a transformation (rotation and translation) that moves points back and forth between the original (target) coordinate system and a modified (light-sheet) coordinate system in which Z=0 corresponds to the centre of the light sheet. Applying this transformation to the calibration markers from the original calibration images, we obtain a new list of corresponding image and object coordinates, which can then be fed into the normal camera calibration routines to generate the modified calibrations.
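The plane fit mentioned above is a small linear least-squares problem. A minimal sketch, assuming the disparity vectors have already been converted to local out-of-plane offsets dz at grid positions (x, y) (hypothetical 1D input arrays):

```python
import numpy as np

def fit_lightsheet_plane(x, y, dz):
    """Least-squares fit of dz = a*x + b*y + c through the disparity data.

    Returns (a, b, c): the two tilts of the light sheet relative to the
    calibration target, and its offset at the origin.
    """
    G = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(G, dz, rcond=None)
    return a, b, c
```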

Calculating 2D Vector Maps
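Conventional 2D PIV evaluates each interrogation window by cross-correlating the two exposures and locating the correlation peak. A minimal sketch of one window evaluation, assuming FFT-based circular cross-correlation and hypothetical window arrays taken from frame 1 and frame 2 (integer-pixel peak only; real software adds sub-pixel peak fitting and validation):

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of window b relative to window a."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # Cross-correlation via FFT: the peak location gives the displacement.
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular indices so negative displacements are recovered.
    dy, dx = [p if p <= n // 2 else p - n for p, n in zip(peak, a.shape)]
    return dx, dy
```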

Stereoscopic Evaluation & Statistics
Once the calibration has been performed and the 2D vector maps have been calculated, the stereoscopic evaluation can be performed on pairs of 2D vector maps. Using different calibrations, several 3D vector maps can be derived from the same set of 2D vectors. Please note that the user is free to combine any calibration with any set of 2D vectors; there is no way to check whether, for example, the calibration images were recorded in the same position as the 2D vector maps. For a set of 3D vector maps, the user can finally calculate statistics corresponding to the statistics calculation in DynamicStudio, but expanded from 2D to 3D. The statistics calculation produces:
- Mean velocities
- Standard deviations
- Cross-correlation coefficients
- The number of valid vectors included at each position (invalid vectors in the 2D vector maps are inherited by the 3D vectors affected by them, and these invalid 3D vectors are excluded from the statistics calculation)

Both individual 3D vector maps and 3D statistics can be exported as Tecplot or plain ASCII files, and all graphics can be exported or copied via the clipboard.
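As an illustration of the statistics listed above, here is a minimal sketch assuming the instantaneous 3D vector maps are stacked in arrays of shape (N, rows, cols) per component, with invalid vectors stored as NaN (a hypothetical storage convention, not DynamicStudio's actual format):

```python
import numpy as np

def ensemble_stats(u, w):
    """Ensemble statistics for two velocity components over N vector maps."""
    n_valid = np.sum(~np.isnan(u), axis=0)   # valid-vector count per position
    u_mean = np.nanmean(u, axis=0)
    w_mean = np.nanmean(w, axis=0)
    u_rms = np.nanstd(u, axis=0)
    w_rms = np.nanstd(w, axis=0)
    # Cross-correlation coefficient between the u and w fluctuations.
    cov_uw = np.nanmean((u - u_mean) * (w - w_mean), axis=0)
    r_uw = cov_uw / (u_rms * w_rms)
    return u_mean, u_rms, r_uw, n_valid
```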