Ideal Observer Approach for Assessment of Image Quality on Stereo Display Devices for Medical Imaging. Fahad Zafar, Dr. Yaacov Yesha, Dr. Aldo Badano.

Presentation transcript:

Ideal Observer Approach for Assessment of Image Quality on Stereo Display Devices for Medical Imaging. Fahad Zafar, Dr. Yaacov Yesha, Dr. Aldo Badano. IAB Meeting Research Report (CHMPR IAB 2013).

Introduction: Evaluation of stereo displays for medical imaging diagnosis. Image quality assessment of stereo display devices for medical imaging applications. Investigate fundamental limitations and benefits, and parameters such as crosstalk, display noise, luminance, and stereo technology.

Introduction: The goal of this research is to assess image quality on stereo display devices for medical imaging applications, which have great potential since they provide depth perception to the observer when viewing medical datasets.

Objectives: A mathematical model for image quality assessment on stereo displays. Propose a stereo model observer. Computationally investigate parameters such as crosstalk, stereo angle, and display noise, and their effect on performance for a given 3D display device and 3D display technology.

Objectives: Human observer studies are slow and costly, so we propose to use a computational model that can conduct signal detection tasks. Using this model we can explore multiple display parameters across many different settings, which is very hard to do with human observers. Crosstalk: luminance intended for one eye leaking into the other. Stereo angle: the angle between the two stereo cameras and the center of the focus point. Display noise: unintentional output luminance disturbance present in every display.

Success Criteria: Present an ideal linear stereo observer model that can be used to assess image quality for 3D displays using a computational approach. Show applications of the stereo model to compare multiple 3D display devices with different technologies (active vs. passive). Present another use case for the stereo observer that does not directly relate to medical imaging.

Success Criteria: Show that the stereo observer we have formulated clearly assesses image quality for different 3D display devices using a computational approach. Compare different types of 3D display technologies using our model. We would also like to use the stereo observer for other assessment tasks not related to medical imaging; one example is compression of a stereo stream and understanding the tradeoffs between quality and compression ratio.

Existing Solutions: Human Observer Studies
– Content: use of entertainment 3D content, with no connection to medical imaging tasks.
– Scope: a different scope (they target the observer, not the displays), and limited in scope.
– Results: lengthy and expensive; results are often subjective and inconsistent; too many variations in 3D display technologies and multiple parameters to explore (e.g., MIP vs. absorption model).

Existing Solutions: Current work does not relate directly to medical imaging applications. Human observer studies have mostly used visual datasets from video games and the movie industry. The task performed by human observers in existing published research is focused on the user, not the 3D display. We believe there are simply too many parameters to explore, so conducting a human observer study is infeasible.

Development Approach: Ideal Linear Observer

Development Approach: Ideal Linear Stereo Observer. g_l, g_r: pixel data in the left and right images, respectively, after visualization. S_v: physical display emulation operator. L_l, L_r: luminance reaching the left and right eye, respectively. g: discrete dataset (e.g., simulated white noise generated with a random number generator).
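The slide's equation image is not reproduced in this transcript; a plausible reconstruction of the imaging chain from the definitions above (our notation, with S_g the visualization operator defined on the next slide) is:

```latex
g_\ell = \mathcal{S}_g^{\ell}\, g, \qquad g_r = \mathcal{S}_g^{r}\, g, \qquad
L_\ell = \mathcal{S}_v\, g_\ell, \qquad L_r = \mathcal{S}_v\, g_r
```

That is, the dataset g is rendered into a left/right image pair by the visualization operator and then passed through the display emulation to obtain the luminance reaching each eye.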

Development Approach: Mathematical formulation of the observer. We use the signal-to-noise ratio (SNR) to assess the performance of the observer. Essentially, the data, which is a simulated medical imaging dataset, is visualized using a rendering algorithm (S_g) and then post-processing is applied to emulate the display (S_v). SNR: signal-to-noise ratio. K_{L_g}^{-1}: inverse covariance matrix of the background (the background can be white noise or lumpy, i.e., a set of Gaussians superimposed on each other). s: signal image (a 3D Gaussian blob).
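The SNR expression referred to here, described verbally later in the transcript as "signal image transpose * inverse covariance of background * signal image", can be written as (a reconstruction, since the slide's equation image is not in the transcript):

```latex
\mathrm{SNR}^2 = s^{T}\, K_{L_g}^{-1}\, s
```

where s is the signal image (for the stereo observer, presumably the left and right images stacked into a single vector) and K_{L_g} is the covariance matrix of the background luminance data.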

Development Approach: S_g is the visualization operator; it can be Maximum Intensity Projection (MIP) or the absorption model. Alpha: transparency.

Development Approach (comparison of observer models, from the slide's table):
– Previous results: 2D/3D observer, no visualization (no S_g).
– Our results, S_g settings per observer:
– 2D observer: orthographic, voxel/pixel = 1, opacity = 1, vision = absorption or MIP.
– 3D observer: orthographic, voxel/pixel = 1, opacity = 1, vision = MIP.
– Stereo observer: orthographic, voxel/pixel = 1, opacity = X (varied), vision = absorption.
– ?: prospective multi-view stereo observer.

Development Approach: With our approach you can emulate all sorts of displays, including 2D, stereoscopic 3D, multi-view stereoscopic 3D, etc. Our computational model is a generalized model that can be adapted to any form of observer by simply changing a few parameters. You can use MIP or absorption for a single trial.

Development Approach: Vision is the vision model used; it can be either absorption or MIP. We have only used absorption so far in our trials. The voxel/pixel ratio is set to one, so one voxel in the 3D dataset covers exactly one pixel on the screen when viewed head-on.

Development Approach: The 2D ideal linear observer is from previous work, but we have generalized it using our approach and conducted experiments with it to compare against previous work; we confirm that the two are equal. With our model, however, we can swap parameters to obtain further observers, such as the ideal linear 3D and stereo observers, so our approach is more robust. Opacity = X means opacity is varied while everything else is kept constant. Opacity is the opposite of transparency.

Development Approach: We have results for the 2D, 3D, and stereo observers. The 2D and 3D results are computed only to compare against previous work and establish our approach as equivalent, but no one has done stereo. Our model can handle all three and more (such as a multi-view stereo observer, represented by the "?" sign).

Development Approach: MIP (Maximum Intensity Projection) is a rendering model in which we shoot rays from each pixel into the voxelized dataset. The ray traverses all the voxels within its line of sight, and the voxel with the highest intensity (voxel value) within that line of sight is selected as the final color of that pixel.
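A minimal sketch of MIP as described above, for an orthographic, axis-aligned view with voxel/pixel = 1 (illustrative Python, not the authors' implementation):

```python
import numpy as np

def mip_render(volume, axis=0):
    """Maximum Intensity Projection: each pixel's value is the maximum
    voxel value along its line of sight through the volume."""
    return volume.max(axis=axis)

# Example: project a 32x32x32 white-noise volume head-on.
rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))
image = mip_render(volume, axis=0)   # shape (32, 32)
```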

Development Approach: The absorption model is an alternative rendering model: we shoot rays from the pixels and interpolate between the voxels as the line of sight intersects them. While interpolating between voxel values, a weighting parameter is applied to each voxel; that parameter is called the "alpha", or transparency. The blend for the first two voxels looks like: color_voxel_1 * alpha_1 + color_voxel_2 * (1 - alpha_1).

Development Approach: You do this for the first two voxels that the ray hits; then you traverse the ray and, when it hits the third voxel, you take the color previously accumulated from voxels 1 and 2 and blend it with voxel 3, and so on. Sometimes the alpha is fixed for every slice of the volume dataset, so every voxel has the same alpha; other times alpha maps are generated with the dataset to accentuate some feature (for instance, one alpha map of a volume might highlight the lungs while another highlights the heart). We do not do this; we use a constant alpha for all voxels.
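A minimal sketch of the absorption compositing just described, with a single constant alpha for all voxels (illustrative Python, not the authors' renderer):

```python
import numpy as np

def absorption_render(volume, alpha=0.7, axis=0):
    """Absorption model: blend the first two voxels along each ray, then
    repeatedly blend the accumulated color with the next voxel using the
    same constant alpha, as described above."""
    vol = np.moveaxis(volume, axis, 0)              # rays travel along axis 0
    acc = vol[0] * alpha + vol[1] * (1.0 - alpha)   # first two voxels on the ray
    for k in range(2, vol.shape[0]):                # remaining voxels on the ray
        acc = acc * alpha + vol[k] * (1.0 - alpha)
    return acc

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))
image = absorption_render(volume)                   # shape (32, 32)
```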

Development Approach: Transparency is alpha; opacity is the opposite of alpha, i.e., opacity = 1 - transparency.

Comparing 2D Ideal methodologies: Our method vs. Theoretical

Comparing 2D Ideal methodologies, our method vs. theoretical: In this result we run the ideal linear 2D observer on white-noise data. Our method outputs exactly the results that theory predicts. For white noise the SNR can be calculated theoretically, which is how we know our SNR outputs are correct.
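For reference, the theoretical value for a white-noise background follows directly from the SNR expression above (our derivation, not text from the slide): with K_{L_g} = \sigma^2 I,

```latex
\mathrm{SNR} = \frac{\sqrt{s^{T} s}}{\sigma} = \frac{\lVert s \rVert}{\sigma}
```

which grows linearly with the signal amplitude; this is the theoretical curve against which the computed SNR is compared.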

Comparing 2D Ideal methodologies, our method vs. theoretical: The x-axis is the signal amplitude, i.e., the amplitude of the Gaussian blob the observer was viewing within the image. The y-axis is the resulting SNR. These results do not emulate any display device and no crosstalk was simulated. Adding crosstalk is our next step.

Our method quantifies stereo perception at different stereo angles

Now we use our model to calculate SNR at different stereo angles. No other method can do that, since they do not visualize their dataset and hence cannot generate projections at different angles. We quantify the gain in observer performance as the stereo angle is changed. Beta denotes the stereo angle here.
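A minimal sketch of how stereo projections at a stereo angle beta could be generated: rotate the volume by ±beta/2 about the vertical axis before applying the rendering operator. The rotation-based camera setup here is an assumption for illustration; the authors' renderer may place the stereo cameras differently.

```python
import numpy as np
from scipy.ndimage import rotate

def stereo_pair(volume, beta_deg, render):
    """Left/right projections separated by the stereo angle beta:
    rotate the volume by -beta/2 and +beta/2 about the vertical axis,
    then apply the rendering operator Sg (here MIP, for brevity)."""
    left  = render(rotate(volume, -beta_deg / 2.0, axes=(0, 2), reshape=False))
    right = render(rotate(volume, +beta_deg / 2.0, axes=(0, 2), reshape=False))
    return left, right

rng = np.random.default_rng(0)
volume = rng.normal(size=(32, 32, 32))
g_left, g_right = stereo_pair(volume, beta_deg=10.0, render=lambda v: v.max(axis=0))
```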

Our method quantifies stereo perception at different stereo angles: The x-axis is the signal amplitude, i.e., the amplitude of the Gaussian blob the observer was viewing within the image. The y-axis is the resulting SNR. These results do not emulate any display device and no crosstalk was simulated.

Evaluation: Resolution: 32x32. Number of images: 240,000. Dataset size: 1 GB. Covariance size: 2 MB. Total simulation time: ~19 minutes. Successfully verified our model. Successfully implemented the simulation pipeline.

Evaluation: We have completed the implementation of the pipeline, and a complete trial takes about 19 minutes, including dataset generation, visualization (creating stereo pairs), calculating data statistics, and then calculating the SNR. We compared the 2D observer SNR for white noise with the 2D observer SNR from previous work, and they were equal to within <1% error.

Evaluation: The equations applied are on the previous slide. 1. Generate a dataset for some background (white noise or lumpy). 2. Visualize it using absorption-model volume rendering. 3. Create stereo pairs (about 240,000 image pairs). 4. Calculate the covariance of the images.

Evaluation: 5. Calculate the inverse of the covariance matrix and multiply three terms to get the SNR (signal image transpose * inverse covariance of background * signal image). 6. A proof of this methodology is presented in the book Foundations of Image Science by Harrison Barrett and Kyle Myers.
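A condensed sketch of steps 4-5 for the stereo observer, assuming the background image pairs and the signal image pair have already been rendered (names and shapes are illustrative, not the authors' code):

```python
import numpy as np

def stereo_snr(background_pairs, signal_pair):
    """SNR for the ideal linear observer: SNR^2 = s^T K^{-1} s.

    background_pairs: (N, 2, H, W) array of N rendered left/right background
                      image pairs (white noise or lumpy background).
    signal_pair:      (2, H, W) array, the rendered signal image pair
                      (e.g., a 3D Gaussian blob), left and right.
    """
    n = background_pairs.shape[0]
    g = background_pairs.reshape(n, -1)         # stack left+right into one vector per sample
    g = g - g.mean(axis=0, keepdims=True)       # remove the background mean
    cov = (g.T @ g) / (n - 1)                   # sample covariance of the background
    s = signal_pair.reshape(-1)
    snr2 = s @ np.linalg.solve(cov, s)          # s^T K^{-1} s without forming K^{-1} explicitly
    return float(np.sqrt(snr2))
```

Solving the linear system rather than inverting the covariance explicitly is a standard numerical choice; either way the result is the triple product described on this slide.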

Evaluation (workflow and experimental setup): Extend the model. Measure physical display crosstalk. Add an additional stage to the pipeline and emulate the display. Conduct comparative analysis.

Evaluation: Next, we are exploring the crosstalk parameter to see how it affects observer performance. We measure the crosstalk from an actual 3D display device (the experimental setup is shown on the slide), then use that data as a post-processing function within the model to emulate the display. Crosstalk: luminance intended for one eye leaking into the other.

Evaluation: We measure the luminance response for the left and right channels independently through the 3D glasses. We then measure the crosstalk through the glasses independently and combine all this data to create a luminance response for the display. This luminance response is used as a 2D lookup table within the code and is taken as the display's emulation. The SNR calculated for the stereo observer with this 3D display device is compared against the SNR calculated without any display (the ideal case, since there is no display emulation). This comparison will show the quantitative decrease in perception caused by the device's crosstalk, since crosstalk negatively affects signal perception.
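A minimal sketch of how a measured luminance response and crosstalk could be applied as the display-emulation stage S_v. The gamma-curve lookup table and the single crosstalk fraction below are assumptions for illustration; in the actual pipeline they would be replaced by the measured 2D lookup table.

```python
import numpy as np

def emulate_display(g_left, g_right, lum_lut, crosstalk=0.05):
    """Display emulation Sv (illustrative): map gray levels to luminance
    through a lookup table, then mix a fraction of each eye's luminance
    into the other eye to model crosstalk."""
    L_left, L_right = lum_lut[g_left], lum_lut[g_right]
    out_left  = (1.0 - crosstalk) * L_left  + crosstalk * L_right
    out_right = (1.0 - crosstalk) * L_right + crosstalk * L_left
    return out_left, out_right

# Example: hypothetical 8-bit display with a gamma-2.2 response and 5% crosstalk.
lut = 200.0 * (np.arange(256) / 255.0) ** 2.2      # luminance in cd/m^2 (assumed)
rng = np.random.default_rng(0)
g_l = rng.integers(0, 256, size=(32, 32))
g_r = rng.integers(0, 256, size=(32, 32))
L_l, L_r = emulate_display(g_l, g_r, lut)
```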

Status: We have a working pipeline that simulates display devices and have collected luminance data from one passive 3D display device. Next steps: emulate the device and analyze performance; get luminance measurements of active 3D displays and compare.

Status: We have a working pipeline and are currently trying to emulate an active and a passive stereo device and compare their performance. We would like to measure luminance from many displays and then do comparative analysis using our model. As described above, the luminance response for the left and right channels is measured independently through the 3D glasses, combined with the crosstalk measurements into a luminance response for the display, and used as a 2D lookup table within the code as its emulation.

Thank You!