Use of 3D Imaging for Information Product Development


1 Use of 3D Imaging for Information Product Development
David W. Messinger, Ph.D. Digital Imaging and Remote Sensing Laboratory Chester F. Carlson Center for Imaging Science Rochester Institute of Technology Feb. 7, 2008

2 RIT LADAR Research Areas
(Block diagram of RIT LADAR research areas.) Data sources: LADAR 3D data sets, MSI data sets, HSI data sets. Research threads: LADAR+HSI target detection, LADAR data exploitation, LADAR feature extraction, LADAR & MSI/HSI fusion, assisted scene construction (NURI: semi-automated DIRSIG scene construction), and laser radar system simulation. Products and uses: DIRSIG scene model, algorithm/exploitation testing, system performance trade studies, and system tasking trade studies.

3 IMINT versus MASINT
Traditional data viewers: “fusion” of 2D imagery and 3D point data; 3D “fly around” and basic geometric measurements.
Feature-based visualization: visualize rich data descriptions extracted from 2D imagery and 3D data sets; potential to render under different modalities, at different times of day; ability to perform signature analysis techniques because of the availability of spectral information.
Example target: 2005 Ford Explorer, red paint (spectral reflectance available). Image courtesy Merrick & Company, Copyright 2004.

4 Semi-Automated Process for Scene Generation
Processing flow (block diagram): inputs are 3D data sets plus MSI and HSI imagery. Stages include terrain extraction, initial and refined tree/building segmentation, coarse and refined registration, background feature maps, spectra retrieval, coarse building analysis, spectral assignment, tree reconstruction, and refined building reconstruction. The output is a DIRSIG scene description.

5 Color Visualization of a Small Scene
Visualization of a 3D scene model that was automatically generated from 3D and 2D data sources. The scene model can be visualized in other wavelengths, from other angles, at different times of day, with different atmospheres, etc., supporting situational awareness, operational planning, and similar uses. (Figure: real imagery alongside a quick-look color simulation.)

6 Information Products Available
Terrain extraction and object characterization techniques.
Techniques for automated plane extraction and cultural 3D object reconstruction: building recognition, segmentation, extraction, and reconstruction.
Approaches to 3D object matching and filtering: spin images, generalized ellipsoids, etc.; techniques for automated tree finding and size estimation.
Approaches to 3D-data-to-2D-image registration.
Approaches to 3D-model-to-2D-image registration.

7 3D Model and 2D Image Registration
Passive imagery: project the 3D model (derived from LADAR data) onto the 2D image. (Figure: 3D model overlaid on 2D imagery.)
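A minimal sketch of the projection step, assuming a standard pinhole camera model with a known 3x4 projection matrix for the passive image (the matrix and function name are illustrative, not taken from the talk):

```python
import numpy as np

def project_model_to_image(model_points, projection_matrix):
    """Project 3D model vertices into 2D pixel coordinates so a LADAR-derived
    model can be overlaid on passive imagery.

    model_points      : (N, 3) model vertices in scene coordinates
    projection_matrix : (3, 4) pinhole camera matrix P = K [R | t]
    """
    homogeneous = np.hstack([model_points, np.ones((len(model_points), 1))])
    uvw = homogeneous @ projection_matrix.T   # (N, 3) homogeneous image coords
    return uvw[:, :2] / uvw[:, 2:3]           # (u, v) pixel coordinates
```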

8 Potential for 3D Object Change Detection
There are objects in the image that are not in the model. Were they missed by the sensor that created the model, or were they “added” later? The potential exists for object change detection based on shape detection methods such as spin images (described later).

9 Potential Applications (Not Yet Developed)
Trafficability and lines of communication (LOC): potential to semi-automatically detect roads (with occlusions), paths, pipes, etc.; estimate the density of wooded areas and trafficability by vehicles.
3D-based change detection and component dissection.
Line-of-sight analyses.
Path forward to tie 3D models to process models? Both natural and man-made processes.
Improved MSI and HSI atmospheric compensation: 3D feature extraction can improve relative solar angle estimation.

10 Fusion of LADAR and HSI for Improved Target Detection
Michael Foster, Ph.D. (USAF) John Schott, Ph.D. David Messinger, Ph.D.

11 Physics-Based Target Detection Algorithms for HSI
The approach leverages knowledge of the physics of the observable quantities to improve target detection under difficult observation and target-state conditions: targets under varying illumination, targets with variable “contrast”, and targets with modified surface properties.
General methodology: develop a physics-based model to predict the manifestations of the target observable signature, include known sources of variability, and detect against the resulting family of signatures, called a “target space” (see the sketch after this slide).
Applied to detection in both the reflective and emissive spectral regimes.
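A minimal sketch of detecting against a family of signatures, assuming the target space has already been built as a matrix of predicted at-sensor radiance spectra; the spectral-angle scoring here is a simple stand-in for the structured detectors described on the following slides:

```python
import numpy as np

def target_space_score(image_radiance, target_space):
    """Score each pixel against a family of predicted target signatures.

    image_radiance : (num_pixels, num_bands) at-sensor radiance spectra
    target_space   : (num_signatures, num_bands) physics-based predictions of
                     how the target could appear under the modeled variability

    Returns the smallest spectral angle between each pixel and any member of
    the target space; small angles are target-like.
    """
    X = image_radiance / np.linalg.norm(image_radiance, axis=1, keepdims=True)
    T = target_space / np.linalg.norm(target_space, axis=1, keepdims=True)
    cos_angles = X @ T.T                     # (num_pixels, num_signatures)
    return np.arccos(np.clip(cos_angles.max(axis=1), -1.0, 1.0))
```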

12 “Traditional” Target Detection
(Flow diagram.) In the image domain, the scene image in radiance space passes through atmospheric compensation / TES into the target domain. In the target domain, a target property defines the target space, and target detection produces a probability map.

13 Physics-Based Signatures Detection
(Flow diagram.) In the target domain, target properties feed a physics-based model that predicts the target manifestations in radiance space. In the image domain, the scene image stays in radiance space. Target detection is performed in radiance space and produces a probability map.

14 Physics-Based Detection of Surface Targets in Reflective HSI
Physics-Based Structured InFeasibility Target-detector (PB-SIFT): work of Emmett Ientilucci under an IC Postdoctoral Fellowship.
Components: Physics-Based Orthogonal Subspace Projection (PB-OSP) and Structured Infeasibility Projector (SIP).
Overview: variability in the target signature is due to atmospheric contributions and target illumination; that variability is captured in the target space using endmembers; pixels that have a significant projection but are not target can be isolated.

15 Addition of LADAR Information
Physics-based forward-modeling techniques for target detection typically use radiometric variability to describe the possible target manifestations, and generally ignore or over-model the geometric terms in the forward model.
IF WE HAD co-temporal, co-registered LADAR and HSI, with the LADAR spatially oversampled relative to the HSI,
CAN WE use these data to constrain the geometric terms in the forward model and improve target detection?

16 Sub-pixel Target Radiometric Model
Predicts spectral radiance at the sensor based on a mixture of target and background spectra for a specific atmosphere and geometry. Inherent geometric terms (a hedged sketch of the mixture model follows this list):
Shadowing term – K
Incident illumination angle – θ
Downwelled shape factor – F
Pixel purity – M
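One plausible form of such a sub-pixel, at-sensor radiance model is sketched below; this is an assumption for illustration (a Lambertian target of reflectance ρ_t linearly mixed with a ground-leaving background radiance L_b), not necessarily the exact model used in this work:

\[
L(\lambda) \approx \left\{ M\,\frac{\rho_t(\lambda)}{\pi}\Big[K\,E_s(\lambda)\cos\theta\,\tau_1(\lambda) + F\,E_d(\lambda)\Big] + (1-M)\,L_b(\lambda)\right\}\tau_2(\lambda) + L_u(\lambda)
\]

Here E_s is the exoatmospheric solar irradiance, τ_1 and τ_2 the sun-to-target and target-to-sensor transmissions, E_d the downwelled irradiance, L_u the upwelled path radiance, and K, θ, F, and M the geometric terms listed above.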

17 Physics-Based Signatures Detection
(Flow diagram, as on slide 13.) Target properties feed the physics-based model, which now includes geometric information to constrain the model parameter space; the predicted target manifestations in radiance space are compared against the scene image in radiance space, and target detection produces a probability map.

18 LADAR 3D Point Cloud Processing
Shadow estimate – K: shadow feeler.
Incident illumination angle – θ: extract points associated with the LADAR ground plane, estimate point normals using eigenvector analysis, and calculate the angle between the point normals and the solar direction (see the sketch after this list).
Downwelled shape factor – F: assume a clear sky and use a LADAR skydome feeler technique.
Pixel purity – M: spin-image techniques to identify probable LADAR target points; subject to high false alarms.
Project the point data into the HSI focal plane array to create pixel maps.
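A minimal sketch of the illumination-angle step, assuming numpy/scipy and a fixed neighborhood size (k = 20 is an illustrative choice, not a value from the talk):

```python
import numpy as np
from scipy.spatial import cKDTree

def incident_illumination_angles(points, sun_direction, k=20):
    """Estimate per-point normals by eigenvector analysis of local
    neighborhoods, then compute the angle between each normal and the
    solar direction.

    points        : (N, 3) LADAR ground-plane points
    sun_direction : (3,) unit vector pointing toward the sun
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    angles = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        # The normal is the singular vector with the smallest singular value,
        # i.e., the eigenvector of the local covariance with least variance.
        _, _, vt = np.linalg.svd(nbhd, full_matrices=False)
        normal = vt[-1]
        if normal[2] < 0:                      # orient normals upward
            normal = -normal
        cos_theta = np.clip(normal @ sun_direction, -1.0, 1.0)
        angles[i] = np.arccos(cos_theta)       # incident angle theta, radians
    return angles
```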

19 Microscene Spectral Data - DIRSIG Simulation
Scene contents: gray Humvee, calibration panel, gray SUV, gray shed, red sedan, red SUV, gray SUV under a tree, inclined gray SUV, inclined gray Humvee. (Rendered at high spatial resolution for visualization only.)

20 Microscene Spectral Data - DIRSIG Simulation
The spectral cube has a GSD of m and is spatially oversampled, producing mixed pixels. Spectral coverage is 0.4 – 1.2 µm. An RGB rendering of the cube is used in processing.

21 Microscene Spatial Data - DIRSIG Simulation
Post spacing of approximately 40 cm, with and without quantization and pointing error. (Figures: 3D LADAR point cloud, nadir view and oblique view.)

22 Feature Maps - Estimate of K
Note that the shadows “line up” with the trees in the RGB image. (Figure: shadow map.) A sketch of a simple shadow-feeler test follows.
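A crude sketch of what a “shadow feeler” could look like: march a ray from each ground point toward the sun and call the point shadowed if any LADAR return lies near the ray. The step size, range, and hit radius are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def shadow_map(ground_points, all_points, sun_direction,
               step=0.5, max_range=60.0, hit_radius=0.3):
    """Per-point shadow factor K: 1 if the path to the sun is clear,
    0 if any point-cloud return blocks it."""
    tree = cKDTree(all_points)
    sun = sun_direction / np.linalg.norm(sun_direction)
    samples = np.arange(step, max_range, step)
    K = np.ones(len(ground_points))
    for i, p in enumerate(ground_points):
        ray = p + samples[:, None] * sun        # sample points along the sun ray
        if any(len(hits) > 0 for hits in tree.query_ball_point(ray, hit_radius)):
            K[i] = 0.0                          # something blocks the sun
    return K
```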

23

24 Feature Maps - Estimate of θ
Illumination angle map for the terrain (after tree removal). (Figure: illumination angle map.)

25 Feature Maps - Estimate of F
Note the full sky view on the tops of the trees and the near-zero sky visibility for ground surrounded by trees. (Figure: shape factor map.) A sketch of a clear-sky skydome feeler follows.
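In the same spirit, a sketch of a clear-sky “skydome feeler”: estimate F as the fraction of rays toward the upper hemisphere that escape the point cloud unblocked. The ray count, spacing, and hit radius are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def shape_factor_map(ground_points, all_points, n_rays=64,
                     step=0.5, max_range=40.0, hit_radius=0.3):
    """Per-point downwelled shape factor F in [0, 1]: fraction of the sky
    hemisphere visible from each ground point."""
    tree = cKDTree(all_points)
    rng = np.random.default_rng(0)
    azimuth = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    zenith = np.arccos(rng.uniform(0.0, 1.0, n_rays))  # uniform in solid angle
    dirs = np.stack([np.sin(zenith) * np.cos(azimuth),
                     np.sin(zenith) * np.sin(azimuth),
                     np.cos(zenith)], axis=1)
    samples = np.arange(step, max_range, step)
    F = np.empty(len(ground_points))
    for i, p in enumerate(ground_points):
        blocked = 0
        for d in dirs:
            ray = p + samples[:, None] * d      # sample points along the sky ray
            if any(len(hits) > 0 for hits in tree.query_ball_point(ray, hit_radius)):
                blocked += 1
        F[i] = 1.0 - blocked / n_rays
    return F
```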

26 Target Detection in 3D Point Cloud: Spin-Images
Spin images capture 3D shape information about a single point in a 3D point cloud as a 2D parametric-space image.
Pose invariant: based on local geometry relative to a single point normal, i.e., invariant to tip, tilt, and pan.
Scale variant: scale is estimated from the ground plane and sensor position.
Detection degrades gracefully in the presence of occlusion.

27 Spin-Image Formation: Surface Point Coordinate Transformation
2D parameter-space coordinates: the radial distance α to the local point normal (the perpendicular distance to the line through the basis normal) and the signed vertical distance β along the basis normal. A sketch of the transformation follows.
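A minimal sketch of the coordinate transformation and histogram accumulation, following the standard spin-image formulation the slides describe; the bin size and image width are illustrative assumptions:

```python
import numpy as np

def spin_image(points, basis_point, basis_normal, bin_size=0.1, image_width=32):
    """Build a spin image for one oriented basis point (p, n).

    For every surface point x:
        beta  = n . (x - p)                signed distance along the basis normal
        alpha = sqrt(|x - p|^2 - beta^2)   radial distance from the normal line
    The (alpha, beta) pairs are accumulated into a 2D histogram.
    """
    n = basis_normal / np.linalg.norm(basis_normal)
    d = points - basis_point
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))

    half_height = image_width * bin_size / 2.0
    rows = np.floor((half_height - beta) / bin_size).astype(int)  # +beta toward the top
    cols = np.floor(alpha / bin_size).astype(int)                 # alpha >= 0
    keep = (rows >= 0) & (rows < image_width) & (cols >= 0) & (cols < image_width)

    image = np.zeros((image_width, image_width))
    np.add.at(image, (rows[keep], cols[keep]), 1.0)               # bin contributions
    return image
```

Bilinear spreading of each contribution across the four neighboring bins, as mentioned on the examples slide, is omitted here for brevity.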

28 Spin-Image Examples
Three spin-image pairs corresponding to three different points on the model. In each pair, the left image is a high-resolution spin image (small bin size) and the right image is a low-resolution spin image (larger bin size), after bilinear interpolation.

29 Spin Image Geometric Target Detection
(Flow diagram.) For all surface points on the 3D target model, construct spin images; for all data points in the 3D image data, construct spin images; then identify points in the image data whose spin images have high correspondence to a library model.

30 Library Matching Issues
Spin-image library generated from the 3D model: points on all sides of the model, high sampling density, no occlusion.
Spin images generated from the scene: target and background present; points only from the LADAR illumination direction; self-occlusion and background occlusion; not necessarily at the same spatial sampling as the model library.

31 Spin Image Library Matching
Intelligent model library generation: the model typically has many more points than the scene data, so the scene and model spin images are normalized.
The scene has points from only one direction, so a spin (support) angle limits which model points can contribute to a spin image when building the model library:
compute the normal for every point in the model;
pick a spin-image basis point p;
allow only points whose normals are within a 90° angle of the spin-image basis normal to contribute to that model spin image.
This builds self-occlusion effects into the model library (see the sketch after this list).
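A minimal sketch of the support-angle filter, assuming the model point normals have already been computed; only the 90° limit comes from the slide:

```python
import numpy as np

def support_angle_filter(model_points, model_normals, basis_normal,
                         max_angle_deg=90.0):
    """Keep only model points whose normals lie within the support angle of
    the spin-image basis normal, so library spin images mimic the
    self-occlusion of a one-sided LADAR view."""
    n = basis_normal / np.linalg.norm(basis_normal)
    normals = model_normals / np.linalg.norm(model_normals, axis=1, keepdims=True)
    keep = normals @ n >= np.cos(np.radians(max_angle_deg))   # angle <= limit
    return model_points[keep]
```

The filtered points would then be passed to a spin-image builder such as the sketch after slide 27.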

32 Feature Maps - Estimate of M
Results from spin-image detection of the geometric target model. (Figure: pixel purity map.)

33

34 Creating the Target Space

35 Multi-Modal Target Detection Methodology & Advantages
Only those pixels on the HSI focal plane that are likely to contain a target, as determined by the 3D geometric target detection algorithm, are interrogated: a potentially dramatic reduction in false alarms based on the geometry information.
Spectral “background” information is derived from the pixels least likely to contain the target (again, based on the LADAR data).
Per pixel, the physics-based target space is “customized” for the specific geometric conditions in that pixel.
“Fusion” occurs in the following sense: the geometric information, derived from the LADAR data, influences how the HSI is exploited (see the sketch after this list).
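A minimal sketch of the gating-and-customizing step, assuming per-pixel feature maps K, θ, F, M registered to the HSI focal plane (as built earlier) and a hypothetical build_target_space callable standing in for the physics-based forward model for one pixel's geometry; the M > 0.3 threshold is the value quoted on the results slide:

```python
import numpy as np

def fused_detection(hsi_radiance, K, theta, F, M,
                    build_target_space, purity_threshold=0.3):
    """Interrogate only LADAR-flagged pixels and customize the physics-based
    target space to each pixel's geometry before spectral scoring.

    hsi_radiance   : (num_pixels, num_bands) at-sensor HSI radiance
    K, theta, F, M : (num_pixels,) LADAR-derived feature maps
    """
    scores = np.full(len(hsi_radiance), np.inf)      # inf = never interrogated
    for i in np.flatnonzero(M > purity_threshold):
        T = build_target_space(K[i], theta[i], F[i], M[i])   # (n_sig, n_bands)
        T = T / np.linalg.norm(T, axis=1, keepdims=True)
        pixel = hsi_radiance[i] / np.linalg.norm(hsi_radiance[i])
        scores[i] = np.arccos(np.clip((T @ pixel).max(), -1.0, 1.0))
    return scores    # small spectral angle = target-like; threshold to form a map
```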

36 Target Detection Results
Note the missing calibration panel with the actual target reflectance. All features with gray paint have high scores, even the hidden SUV. The detection statistic is only calculated for pixels with M > 0.3; a threshold at α = 0.2 eliminates all false alarms.

37

38 (Partial) Application to Real LADAR Data
Leica LADAR collection of the RIT campus, covering the simulated area, with a truck parked on the “berm” in the scene. No co-temporal hyperspectral imagery was available, but the point cloud processing schemes were applied to the real data.

39 Real Point Cloud Processing Results
(Figures: illumination angle map, shadow map, shape factor map, pixel purity map.)

40 Summary
Demonstrated the feasibility of improving HSI target detection through the use of LADAR information products.
LADAR was used to derive or estimate, on a per-pixel basis in the HSI focal plane: shadowing effects, the downwelling illumination factor, target likelihood based on a geometric target model, the sub-pixel mixing fraction, and the direct illumination angle.
Estimation of other information products is possible with existing tools designed to enhance scene building capabilities.

41 Questions? David W. Messinger, Ph.D. messinger@cis.rit.edu
(585)

42 Back Up Charts

