Where does volume and point data come from?

Presentation transcript:

Where does volume and point data come from? Marc Levoy, Computer Science Department, Stanford University. [34:15 total + 30% = ~45 minutes]

Three theses (with homage to Fred Brooks: “Interactive Graphics Can Double America’s Supercomputers: Theses for Debate Offered by Fred Brooks”, I3D ’86, Chapel Hill)
Thesis #1: Many sciences lack good visualization tools. Corollary: These are a good source for volume and point data.
Thesis #2: Computer scientists need to learn these sciences. Corollary: Learning the science may lead to new visualizations.
Thesis #3: We also need to learn their data capture technologies. Corollary: Visualizing the data capture process helps debug it.

Success story #1: volume rendering of medical data. Hoehne’s Visible Man dataset appeared in Scientific American, 2001: careful segmentation, effective juxtaposition of different segmentations and cut planes; the shadow line appears faked, probably by a Sci Am artist. Resolution Sciences (now Microscience Group): a mouse tail, also sliced and photographed, like the Visible Man; effective control over surface appearance, multiple lights, shadows; not interactive, but could be (on today’s GPUs). (Images: Karl-Heinz Hoehne; Resolution Sciences.)

Success story #1: volume rendering of medical data. Kaufman: taken from research to practice; obviously worked with practitioners; it took 12 years. (Arie Kaufman et al.)

Success story #2: point rendering of dense polygon meshes. Levoy and Whitted (1985): if you’re interested in my analysis of why this paper was a failure, read my chapter in Markus Gross’s upcoming book, Point-Based Computer Graphics. Szymon Rusinkiewicz’s QSplat (2000) enabled us to interactively manipulate an unstructured mesh containing half a million polygons, something that hadn’t been done to that point.

Failure: volume rendering in the biological sciences.
A leading software package: after 15 years of research in volume rendering, a single threshold is still the most common control; limited control over the opacity transfer function; no control over surface appearance or lighting; no quantitative 3D probes; priced like medical imaging packages, which makes them too expensive for researchers in the basic sciences.
Photoshop (discussion at the Advanced Quantitative Light Microscopy (AQLM) course): converting 16-bit to 8-bit dithers the low-order bit; no support for image stacks, volumes, n-D images; PhotoMerge (image mosaicing, e.g. for combining images from motorized X-Y stages, programmable array microscopes, etc.) performs poorly; almost any other package performs better: PanaVue, Microsoft’s Digital Image Studio, Matt Brown’s Autostitch. But there aren’t good replacements tailored for the biological sciences.

What’s going on in the basic sciences? A great source for volume and point data, and more importantly, they need our help: the lack of good visualization tools is holding them back, and all the arguments made for ViSC (Visualization in Scientific Computing) in the 1980s now apply to computational imaging. New instruments → scientific discoveries; the most important new instrument in the last 50 years is the digital computer. Computers + digital sensors = computational imaging. Def: imaging methods in which computation is inherent in image formation. (B.K. Horn) The revolution in medical imaging (CT, MR, PET, etc.) is now happening all across the basic sciences. (It’s also a great source for volume and point data!)

Examples of computational imaging in the sciences:
medical imaging: rebinning; transmission tomography; reflection tomography (for ultrasound)
geophysics: borehole tomography; seismic reflection surveying
applied physics: diffuse optical tomography; diffraction tomography; scattering and inverse scattering (inspiration for light field rendering; covered in this talk)

biology: confocal microscopy; deconvolution microscopy
astronomy: coded-aperture imaging; interferometric imaging
airborne sensing: multi-perspective panoramas; synthetic aperture radar (applicable at macro scale too)

optics: holography; wavefront coding

Computational imaging technologies used in neuroscience: Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), Magnetoencephalography (MEG), Electroencephalography (EEG), Intrinsic Optical Signal (IOS), In Vivo Two-Photon (IVTP) Microscopy, Microendoscopy, Luminescence Tomography, New Neuroanatomical Methods (3DEM, 3DLM)

The Fourier projection-slice theorem (a.k.a. the central section theorem) [Bracewell 1956]. Applies to integrals of attenuation (e.g. of X-rays or visible light), of emission (e.g. fluorescence), etc. (from Kak): P_θ(t) is the integral of g(x,y) in the direction θ; G(u,v) is the 2D Fourier transform of g(x,y); G_θ(ω) is a 1D slice of this transform taken at angle θ; and ℱ⁻¹{ G_θ(ω) } = P_θ(t)! Refs: Bracewell, R.N., Strip Integration in Radio Astronomy, Australian Journal of Physics, Vol. 9, No. 2, 1956, pp. 198-217; Kak, A.C., Slaney, M., Principles of Computerized Tomographic Imaging, IEEE Press, 1988.
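
The theorem is easy to verify numerically. A minimal sketch, assuming a synthetic disk phantom and numpy's FFT conventions (all names below are illustrative):

```python
# Minimal numerical check of the projection-slice theorem on a disk phantom.
import numpy as np

n = 256
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
g = ((x**2 + y**2) < 0.5).astype(float)          # simple disk phantom g(x,y)

# Projection at theta = 0: line integrals along y collapse that axis.
P = g.sum(axis=0)

# 1D transform of the projection...
G_slice = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(P)))

# ...equals the central (v = 0) row of the 2D transform of the image.
G2 = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))
print(np.allclose(G_slice, G2[n // 2, :]))       # True
```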

Reconstruction of g(x,y) from its projections (from Kak): add slices G_θ(ω) into (u,v) at all angles θ and inverse transform to yield g(x,y), or add 2D backprojections P_θ(t, s) into (x,y) at all angles θ.
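
A minimal sketch of the second route (summing 2D backprojections), using image rotation to take the projections; the phantom and angle count are illustrative choices:

```python
# Unfiltered backprojection: smear each projection back and sum.
import numpy as np
from scipy.ndimage import rotate

def project(img, theta_deg):
    """Parallel-ray projection of img at angle theta (line integrals)."""
    return rotate(img, theta_deg, reshape=False, order=1).sum(axis=0)

def backproject(sinogram, thetas_deg, n):
    """Smear each projection across the image and rotate it into place."""
    recon = np.zeros((n, n))
    for p, th in zip(sinogram, thetas_deg):
        smear = np.tile(p, (n, 1))               # constant along each ray
        recon += rotate(smear, -th, reshape=False, order=1)
    return recon / len(thetas_deg)

n = 128
yy, xx = np.mgrid[-1:1:n*1j, -1:1:n*1j]
phantom = ((xx**2 + yy**2) < 0.4).astype(float)
thetas = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = np.array([project(phantom, th) for th in thetas])
recon = backproject(sino, thetas, n)             # blurry: 1/omega hot spot
```

The result is recognizable but blurry, which motivates the filtering step on the next slide.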

The need for filtering before (or after) backprojection. Here ω is distance from the origin of frequency space; convolving by ℱ⁻¹{ |ω| } is similar to taking a 2nd derivative. Hot spot correction: the sum of slices would create a 1/ω hot spot at the origin; correct it by multiplying each slice by |ω|, or convolve P_θ(t) by ℱ⁻¹{ |ω| } before backprojecting. This is called filtered backprojection.
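
A minimal sketch of the ramp filter, reusing project/backproject, sino, and thetas from the previous sketch:

```python
# Filtered backprojection: multiply each projection's spectrum by |omega|.
import numpy as np

def ramp_filter(sinogram):
    m = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(m))             # |omega| along each row
    P = np.fft.fft(sinogram, axis=1)
    return np.real(np.fft.ifft(P * ramp, axis=1))

recon_fbp = backproject(ramp_filter(sino), thetas, n)   # hot spot removed
```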

Summing filtered backprojections (from Kak)

Example of reconstruction by filtered backprojection: X-ray sinogram; filtered sinogram; reconstruction (from Herman). Ref: Herman, G.T., Image Reconstruction from Projections, Academic Press, 1980.

More examples: CT scan of a head; volume renderings; the effect of occlusions. Occlusions violate the assumption that pixels are line integrals, so reconstruction fails. Thus, you cannot reconstruct volumetric models of opaque scenes using digital photography followed by tomography (although many have tried).

Limited-angle projections: here’s another violation of the assumptions of tomography, but one that is more easily addressed. Artifacts: elongation in the direction of the available projections; object-dependent sinusoidal variations [Olson 1990]. Ref: Olson, T., Jaffe, J.S., An explanation of the effects of squashing in limited angle tomography, IEEE Trans. Medical Imaging, Vol. 9, No. 3, September 1990.

Reconstruction using the Algebraic Reconstruction Technique (ART) (from Kak): M projection rays, N image cells; along a ray, p_i = projection along ray i, f_j = value of image cell j (n² cells), w_ij = contribution by cell j to ray i (a.k.a. resampling filter). Applicable when projection angles are limited or non-uniformly distributed around the object; can be under- or over-constrained, depending on N and M.

Procedure (the update formula is derived in Kak, chapter 7, p. 278): make an initial guess, e.g. assign zeros to all cells; project onto p_1 by increasing the cells along ray 1 until Σ = p_1; project onto p_2 by modifying the cells along ray 2 until Σ = p_2; etc. To reduce noise, scale each correction Δf by α, for α < 1; a sketch of this loop follows below.
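
A minimal Kaczmarz-style ART sketch of the procedure on a generic system W f = p, with the relaxation factor alpha < 1 damping noisy rays; the tiny system is an illustrative stand-in:

```python
# ART: project the estimate onto one ray's hyperplane at a time.
import numpy as np

def art(W, p, n_iters=50, alpha=0.5):
    f = np.zeros(W.shape[1])                     # initial guess: all zeros
    for _ in range(n_iters):
        for i in range(W.shape[0]):              # one projection ray at a time
            wi = W[i]
            # Move f toward the hyperplane wi . f = p[i];
            # alpha < 1 reduces the influence of noisy rays.
            f += alpha * (p[i] - wi @ f) / (wi @ wi) * wi
    return f

W = np.array([[1., 1., 0., 0.],                  # 3 rays crossing 4 cells
              [0., 1., 1., 0.],
              [0., 0., 1., 1.]])
p = np.array([2., 3., 4.])
print(art(W, p))
```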

Linear system, but big, sparse, and noisy. ART is solution by the method of projections [Kaczmarz 1937]; to increase the angle between successive hyperplanes, jump by 90°. SIRT = Simultaneous Iterative Reconstruction Technique. SART = Simultaneous ART: modifies all cells using f^(k-1), then increments k (see the sketch below). Overdetermined if M > N, underdetermined if rays are missing. Optional additional constraints: f > 0 everywhere (positivity); f = 0 outside a certain area.
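
A minimal SART-style sketch for contrast: every cell is updated from the same previous estimate f^(k-1), and the positivity constraint is clamped in after each sweep (the normalizations assume no all-zero rows or columns):

```python
# SART: update all cells at once from f^(k-1), then enforce f > 0.
import numpy as np

def sart(W, p, n_iters=50, alpha=0.5):
    f = np.zeros(W.shape[1])
    row_sums = W.sum(axis=1)                     # per-ray normalization
    col_sums = W.sum(axis=0)                     # per-cell normalization
    for _ in range(n_iters):
        resid = (p - W @ f) / row_sums           # all rays see the same f
        f = f + alpha * (W.T @ resid) / col_sums
        f = np.maximum(f, 0.0)                   # optional constraint: f > 0
    return f
```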

Same system as above, with 35 degrees of projections missing [Olson]. Ref: Olson, T., A stabilized inversion for limited angle tomography, manuscript.

Nonlinear constraints: f = 0 outside of circle (oval?) [Olson]

Borehole tomography (from Reynolds): receivers measure end-to-end travel time; reconstruct to find velocities in the intervening cells; must use limited-angle reconstruction methods (like ART). Ref: Reynolds, J.M., An Introduction to Applied and Environmental Geophysics, Wiley, 1997.

Applications: the obvious application is looking for oil. Mapping ancient Rome using explosions in the subways and microphones along the streets? Mapping a seismosaurus in sandstone using microphones in 4 boreholes and explosions along radial lines. (Left picture from Reynolds; right picture from Stanford’s Forma Urbis Romae project.)

Optical diffraction tomography (ODT): for weakly refractive media and coherent plane illumination. Like any tomographic technique, it is applicable only to semi-transparent objects; it is useful when the object refracts or diffracts, making the Fourier projection-slice theorem (and backprojection) invalid. If you record the forward scattered field (the shape taken by the initially planar waves after refraction or diffraction), which is equivalent to measuring its amplitude and phase, then the Fourier Diffraction Theorem says ℱ{scattered field} = an arc in ℱ{object}, where the radius of the arc depends on the wavelength λ (from Kak). Repeat for multiple wavelengths (in effect, a broadband hologram of the semi-transparent object), then take ℱ⁻¹ to create a volume dataset: the index of refraction as a function of x and y. Alternatively, you could hold the wavelength constant and vary the incident direction, thus filling frequency space with arcs of the same radius. The limit as λ → 0 (relative to object size) is the Fourier projection-slice theorem; equivalently, a broadband hologram records 3D structure. Ref: Wolf, E., Three-dimensional structure determination of semi-transparent objects from holographic data, Opt. Commun., Vol. 1, 1969, pp. 153-156.
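
One standard statement of the Fourier Diffraction Theorem (after Wolf 1969 and Kak & Slaney, ch. 6; the notation below is ours, not the slide's):

```latex
\hat{U}_s(\kappa) \;\propto\;
  \hat{O}\!\left(\kappa\,\hat{\mathbf{t}}
    + \left(\sqrt{k^{2}-\kappa^{2}} - k\right)\hat{\mathbf{s}}_0\right),
\qquad k = \frac{2\pi}{\lambda}, \quad |\kappa| < k
```

where \hat{U}_s is the 1D transform of the field measured on a line perpendicular to the illumination direction \hat{s}_0, and \hat{O} is the transform of the object. As κ sweeps over (-k, k), the argument on the right traces an arc of radius k (the Ewald circle); as λ → 0 the arc flattens into the straight slice of the projection-slice theorem.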

Example [Devaney 2005] (from Devaney): the PSF of a single point scatterer (real part, as a function of x and y). Measuring phase typically requires a reference beam and interference between it and the main beam, i.e. a holographic procedure. The arc is sometimes called the Ewald circle. Ref: Devaney, A., Inverse scattering and optical diffraction tomography, PowerPoint presentation.

Summary [Devaney 2005]: for weakly refractive media and coherent plane illumination, if you record the amplitude and phase of the forward scattered field, then the Fourier Diffraction Theorem says ℱ{scattered field} = an arc in ℱ{object}, where the radius of the arc depends on wavelength λ. Repeat for multiple wavelengths, then take ℱ⁻¹ to create a volume dataset; equivalent to saying that a broadband hologram records 3D structure. The limit as λ → 0 (relative to object size) is the Fourier projection-slice theorem.

Inversion by filtered backpropagation [Jebali 2002]: backprojection vs. backpropagation. It uses a depth-variant filter, so it is more expensive than tomographic backprojection, and also more expensive than the Fourier method. Applications in medical imaging, geophysics, optics. Ref: Jebali, A., Numerical Reconstruction of semi-transparent objects in Optical Diffraction Tomography, Diploma Project, Ecole Polytechnique, Lausanne, 2002.

Diffuse optical tomography (DOT): so what happens if the object is strongly scattering? Then it becomes a problem in inverse scattering [Arridge 2003]. Assumes light propagation by multiple scattering; model it as a diffusion process. Refs: Arridge, S.R., Methods for the Inverse Problem in Optical Tomography, Proc. Waves and Imaging Through Complex Media, Kluwer, pp. 307-329, 2001; Schweiger, M., Gibson, A., Arridge, S.R., Computational Aspects of Diffuse Optical Tomography, IEEE Computing, Vol. 5, No. 6, Nov./Dec. 2003; (for image) Jensen, H.W., Marschner, S., Levoy, M., Hanrahan, P., A Practical Model for Subsurface Light Transport, Proc. SIGGRAPH 2001.

Diffuse optical tomography acquisition [Arridge 2003]: 81 source positions, 81 detector positions; for each source position, measure light at all detector positions; use time-of-flight measurement to estimate an initial guess for absorption, to reduce cross-talk between absorption and scattering. The inversion is non-linear and ill-posed; solve using optimization with regularization (smoothing). (Images: female breast with sources (red) and detectors (blue); absorption (yellow is high); scattering (yellow is high).)
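
A minimal sketch of the kind of regularized update this implies: one Tikhonov-damped Gauss-Newton step on a linearized forward model. J, y, and the smoother L below are illustrative stand-ins, not Arridge's actual operators:

```python
# Regularized inversion step of the DOT flavor: smoothness-penalized solve.
import numpy as np

def gauss_newton_step(J, residual, L, lam):
    """Solve (J^T J + lam * L^T L) dx = J^T residual."""
    A = J.T @ J + lam * (L.T @ L)
    return np.linalg.solve(A, J.T @ residual)

rng = np.random.default_rng(1)
J = rng.standard_normal((20, 10))                # stand-in Jacobian
x_true = np.linspace(0.0, 1.0, 10)
y = J @ x_true + 0.01 * rng.standard_normal(20)  # noisy measurements
L = np.eye(10) - np.eye(10, k=1)                 # first-difference smoother
x = np.zeros(10)
for _ in range(5):                               # in real DOT the model is
    x += gauss_newton_step(J, y - J @ x, L, 0.1) # relinearized each step
```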

Computing vector light fields [1:30]: we computer graphics researchers know a lot about scattering simulations to produce images; here’s a variant you might not be familiar with. This isn’t computational imaging in the sense I’ve defined it, i.e. there is no digital sensor, but it is a computational method for producing volume data. Light as a vector field: proposed by Michael Faraday, formalized by James Clerk Maxwell, and studied in the context of illumination engineering by Arun Gershun. (Figures: field theory (Maxwell 1873); adding two light vectors (Gershun 1936); the vector light field produced by a luminous strip.)

Computing vector light fields: a flatland scene of lines and arcs of varying opacity, with illumination (yellow haze) impinging uniformly from all directions (red circle). The magnitude takes into account (partial) occlusion; the direction is visualized using Brian Cabral’s line integral convolution (LIC) (Proc. SIGGRAPH 1993). The direction gives the orientation of a flat surface for maximum brightness; in the saddle between axis-aligned squares, the surface could be oriented either way. Under the particular illumination condition defined here, the vector light field is equivalent to the ambient occlusion map: magnitude is the fraction of the circle seen from each point, and direction is the average direction to these unoccluded points (see the sketch below). A robot placed in the scene should follow field lines to escape with the least chance of collision. (Images: flatland scene with partially opaque blockers under uniform illumination; light field magnitude (a.k.a. irradiance); light field vector direction.)
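
A minimal flatland sketch of that magnitude/direction computation, using disk-shaped blockers as a simplification of the slide's lines and arcs:

```python
# Flatland vector light field under uniform illumination from all directions:
# magnitude = fraction of the circle visible, direction = mean free direction.
import numpy as np

blockers = [((0.3, 0.0), 0.2), ((-0.4, 0.3), 0.15)]   # (center, radius)

def ray_hits_disk(p, d, center, radius):
    """Does the ray p + t*d (t > 0) intersect the disk blocker?"""
    c = np.asarray(center) - p
    t = c @ d                                    # closest approach along ray
    return t > 0 and (c - t * d) @ (c - t * d) < radius**2

def vector_light_field(p, n_dirs=360):
    p = np.asarray(p, float)
    vec, free = np.zeros(2), 0
    for ang in np.linspace(0, 2 * np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(ang), np.sin(ang)])
        if not any(ray_hits_disk(p, d, c, r) for c, r in blockers):
            vec += d
            free += 1
    mag = free / n_dirs                          # fraction of circle visible
    return mag, vec / (np.linalg.norm(vec) + 1e-12)

print(vector_light_field((0.0, 0.0)))
```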

From microscope light fields to volumes: 4D light field → digital refocusing → 3D focal stack → deconvolution microscopy → 3D volume data (DeltaVision)
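
A minimal shift-and-add sketch of the digital refocusing step in this pipeline (the function and parameter names are illustrative, not DeltaVision's API): sub-aperture images are translated in proportion to their (u, v) position and averaged, and sweeping the slope alpha sweeps the synthetic focal plane, producing the focal stack:

```python
# Digital refocusing by shift-and-add over sub-aperture images.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subaperture_images, us, vs, alpha):
    """subaperture_images[i] is the view from lenslet offset (us[i], vs[i])."""
    acc = np.zeros_like(subaperture_images[0], dtype=float)
    for img, u, v in zip(subaperture_images, us, vs):
        acc += nd_shift(img, (alpha * v, alpha * u), order=1)
    return acc / len(subaperture_images)

# A focal stack is refocus() evaluated over a range of alphas; deconvolving
# that stack against the system PSF then yields the volume.
```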

3D deconvolution [McNally 1999]: the focus stack of a point in 3-space is the 3D PSF of that imaging system. Assumes the object is semi-transparent, hence scattering is minimal; each pixel is a line integral of attenuation (if backlit) or emission (if frontlit and the object is fluorescent). object * PSF → focus stack; ℱ{object} × ℱ{PSF} → ℱ{focus stack}; so ℱ{focus stack} ÷ ℱ{PSF} → ℱ{object}. The spectrum contains zeros, due to missing rays, and imaging noise is amplified by division by near-zeros; reduce this by regularization (smoothing) or completion of the spectrum, and improve convergence using constraints, e.g. object > 0. The missing rays are not as bad as with a photographic lens, because microscope lenses have a very wide aperture; for example, 0.95 NA is about 70 degrees each side of the optical axis, as shown in the ℱ{PSF} above. Ref: McNally, J.G., Karpova, T., Cooper, J., Conchello, J.A., Three-Dimensional Imaging by Deconvolution Microscopy, Methods, Vol. 19, 1999.
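
A minimal Wiener-style sketch of the division step, with the PSF assumed centered and the same shape as the stack, and the regularizer eps standing in for the smoothing mentioned above:

```python
# Regularized Fourier-domain deconvolution of a focus stack.
import numpy as np

def deconvolve(stack, psf, eps=1e-3):
    H = np.fft.fftn(np.fft.ifftshift(psf))       # PSF centered, same shape
    S = np.fft.fftn(stack)
    # Wiener-like filter: conj(H) / (|H|^2 + eps) tames the zeros of H,
    # which would otherwise amplify imaging noise without bound.
    est = S * np.conj(H) / (np.abs(H)**2 + eps)
    return np.real(np.fft.ifftn(est))
```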

Silkworm mouth (40x / 1.3 NA oil immersion): slice of focal stack; slice of volume; volume rendering.

From microscope light fields to volumes, via tomographic reconstruction: 4D light field → tomographic reconstruction → 3D volume data (from Kak). Backprojection doesn’t work well, because we don’t have all rays, but we could use ART. It turns out that tomographic reconstruction using SART with a positivity constraint is the same as digital refocusing followed by deconvolution microscopy with the same positivity constraint; we prove this in our SIGGRAPH 2006 paper, Light Field Microscopy. (Compare: 4D light field → digital refocusing → 3D focal stack → deconvolution microscopy → 3D volume data (DeltaVision).)

Optical Projection Tomography (OPT): reconstruction of semi-transparent objects from digital photographs; the object must be immersed in refractive-index-matched fluid. [Sharpe 2002] (James Sharpe, Science, 2002): the object was fluorescent, so each pixel is a line integral of emission, and microscopes are orthographic, hence tomography applies. [Trifonov 2006] (EGSR 2006): the object was fully transparent, immersed in colored liquid and backlit, so each pixel is a line integral of attenuation; digital cameras are perspective, hence he had to use ART. If this works, then what good is the extra dimension in the light field? See my SIGGRAPH 2006 paper on light field microscopy.

Confocal scanning microscopy: an alternative to 3D deconvolution for producing cross-sectional images, hence volume data. Shown here for microscopy, but it can also be applied at the large scale, as I showed in my SIGGRAPH 2004 paper on synthetic aperture confocal imaging. If you introduce a pinhole, only one point on the focal plane will be illuminated (figure: light source, pinhole). Ref: Corle, T.R., Kino, G.S., Confocal Scanning Optical Microscopy and Related Imaging Systems, Academic Press, 1996.

Confocal scanning microscopy: ...and a matching optical system, hence the word confocal. This green dot will be both strongly illuminated and sharply imaged, while this red dot will have less light falling on it, by the square of the distance r, because the light is spread over a disk; and it will also be more weakly imaged, by the square of the distance r, because its image is blurred out over a disk on the pinhole mask, and only a little bit is permitted through. So the extent to which the red dot contributes to the final image falls off as the fourth power of r, the distance from the focal plane (written out below).
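
Written out, the falloff argument from this slide is:

```latex
I_{\text{detected}}(r) \;\propto\;
  \underbrace{\frac{1}{r^{2}}}_{\text{illumination}} \times
  \underbrace{\frac{1}{r^{2}}}_{\text{detection}}
  \;=\; \frac{1}{r^{4}}
```

one factor of 1/r² because the illumination spreads over a disk whose area grows as r², and another because the defocused image on the pinhole mask is diluted by the same factor before the pinhole clips it.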

Confocal scanning microscopy: of course, you’ve only imaged one point, so you need to move the pinholes and scan across the focal plane.

Confocal scanning microscopy (figure: light source, pinholes, photocell).

The object in the lower-right image is actually spherical, but portions of it that are off the focal plane are both blurry and dark, effectively disappearing. [UMIC SUNY/Stonybrook]

Synthetic aperture confocal imaging [Levoy et al., SIGGRAPH 2004]: light source → 5 beams → 0 or 1 beams

Seeing through turbid water

Seeing through turbid water: floodlit vs. scanned tile. If there were objects at multiple depths, I could repeat this procedure for each depth, producing a volume dataset.

Coded aperture imaging (from Zand): optics cannot bend X-rays, so they cannot be focused; pinhole imaging needs no optics, but collects too little light; so use multiple pinholes and a single sensor, which produces superimposed shifted copies of the source. Ref: Zand, J., Coded aperture imaging in high energy astronomy, http://lheawww.gsfc.nasa.gov/docs/cai/coded_intr.html

Reconstruction by backprojection (from Zand): backproject each detected pixel through each hole in the mask; the superimposition of projections reconstructs the source plus a bias; essentially a cross-correlation of the detected image with the mask. Also works for non-infinite sources (use a voxel grid). Assumes a non-occluding source; otherwise it’s the voxel coloring problem.

Example using 2D images (Paul Carlisle): cross-correlation is just convolution (of the detected image by the mask) without first reversing the detected image in x and y. Converting the blacks to -1’s in the “decoding matrix” just serves to avoid normalization of the resulting reconstruction (to remove the bias); see the sketch below. Performing this on an image of gumballs, rather than a 3D gumball scene, is equivalent to assuming the gumballs cover the sky at infinity, i.e. they are an angular function. Can this be reproduced in the lab? Yes: using a mask with holes, a screen behind it, and a camera to photograph the image falling on the screen. Ref: Paul Carlisle, Coded Aperture Imaging, http://www.paulcarlisle.net/old/codedaperture.html
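
A minimal sketch of this decode in code, with a random mask and a two-point scene as stand-ins (a URA mask would cancel the sidelobes exactly; a random mask only approximately):

```python
# Coded-aperture imaging: convolve to form the detected image, then decode
# by cross-correlating with the mask's +1/-1 "decoding matrix".
import numpy as np
from scipy.signal import convolve2d, correlate2d

rng = np.random.default_rng(0)
mask = (rng.random((16, 16)) < 0.5).astype(float)   # 1 = hole, 0 = opaque
scene = np.zeros((16, 16))
scene[5, 7], scene[10, 3] = 1.0, 0.5                # two point sources

detected = convolve2d(scene, mask, mode='same')     # superimposed shifted copies
decoder = 2 * mask - 1                              # blacks -> -1 removes bias
recon = correlate2d(detected, decoder, mode='same') # peaks at the sources
```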

New sources for point data: molecular probes, 50-nm fluorescent microspheres (Molecular Probes), smaller than the wavelength of light; in fact their size controls their color. Too small to image? No: Gustafsson uses structured illumination and non-linear (saturation) imaging to beat the diffraction limit by 10x (in X and Y)! [Gustafsson 2005] Ref: Gustafsson, M.G.L., Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution, Proc. National Academy of Sciences (PNAS), Vol. 102, No. 37, Sept. 13, 2005.

Three theses, revisited: computer scientists need to learn these sciences. Books I was required to read for my PhD: Gabor Herman, Image Reconstruction from Projections; Glusker and Trueblood, Crystal Structure Analysis; Thorpe, Elementary Topics in Differential Geometry.
Thesis #1: Many sciences lack good visualization tools. Corollary: These are a good source for volume and point data.
Thesis #2: Computer scientists need to learn these sciences. Corollary: Learning the science may lead to new visualizations.
Thesis #3: We also need to learn their data capture technologies. Corollary: Visualizing the data capture process helps debug it.

The best visualizations are often created by domain scientists. Andreas Vesalius, On the Fabric of the Human Body (1543): being an anatomist, Vesalius knew that muscle striations were important, but they looked a lot like hatching for shading, so he and his illustrator (Stephen van Calcar) developed a style of drawing in which the two could be distinguished, and he pointed out this distinction in his written narrative. It revolutionized anatomy and scientific illustration. Refs: H. Robin, The Scientific Image, p. 41; B. Baigrie, Picturing Knowledge, p. 57.

Three theses
Thesis #1: Many sciences lack good visualization tools. Corollary: These are a good source for volume and point data.
Thesis #2: Computer scientists need to learn these sciences. Corollary: Learning the science may lead to new visualizations.
Thesis #3: We also need to learn their data capture technologies. Corollary: Visualizing the data capture process helps debug it.

Visualizing raw data helps debug the capture process: deconvolution to remove out-of-focus blur wasn’t working well; making a vertical cross-section of my raw data explained the problem: these are 1-micron shifts, each triggered by a person walking across the raised floor! (Images: hollow fluorescent 15-micron sphere, manually captured Z-stack, 1-micron increments, 40×/1.3 NA oil objective; X-Z cross-sectional slice of the same stack.)

...or may force improvements in the capture technology. Shinya Inoué, one of the microscopists I worked with at MBL: prior to Inoué, nobody understood how chromosomes moved apart in an orderly fashion during mitosis; people had seen spindle fibers in stained (dead) cells, but didn’t understand their purpose, and nobody could see these spindle fibers in the cell while it was alive. His stress-free (“rectified”) microscope objectives improved the polarization microscope, permitting spindle fibers to be seen in an (unstained) living cell, and video microscopy permitted capture of live cells during mitosis (and meiosis of gametes, which leaves half the genetic material in each offspring cell). (Images: crane fly spermatocyte undergoing meiosis, image and video by Rudolf Oldenbourg; Shinya Inoué at his polarization microscope.) Ref: Dell, K.R., Vale, R.D., A tribute to Shinya Inoué and innovation in light microscopy, Journal of Cell Biology, Vol. 165, No. 1, April 12, 2004, pp. 21-25.

Final thought: the importance of building useful tools. Fred Brooks, from his SIGGRAPH 1994 keynote address: we need to build tools real scientists use; we need to understand their science; we need to understand their capture technology. “A toolmaker succeeds as, and only as, the users of his tool succeed with his aid. However shining the blade, however jeweled the hilt, however perfect the heft, a sword is tested only by cutting. That swordsmith is successful whose clients die of old age.” (Fred Brooks, The Computer Scientist as Toolsmith II, CACM, 1996)

Acknowledgements: Fred Brooks (“Computer Scientist as Toolsmith”); Pat Hanrahan (“Self-Illustrating Phenomena”); Bill Lorensen (“The Death of Visualization”); Shinya Inoué (“History of Polarization Microscopy”)