A Photon Accurate Model of the Human Eye (Michael F. Deering)
Use Graphics Theory To Simulate Vision
Motivation Understand how rendering algorithms, cameras, and display devices interact with the human eye and visual perception. Use this to improve (or not improve) rendering algorithms, cameras, and displays.
Graphics/Vision System (pipeline diagram): Image Generation → Post Production → Display → Display Photons → Eye → Neural Processing
Concrete Software Deliverable A computer program to simulate, photon by photon, several frames of video display into an anatomically accurate human eye retinal sampling array.
Overview: photon counts; display device pixel model; eye optical model; rotation of the eye due to “drifts”; retinal synthesizer; diffraction computation; results (rendering video photons into the eye).
Photons In This Room: 4K lumens ≈ 10^19 photons/sec (room roughly 14' × 17' × 75'); ~600 photons per 1/60th of a second per pixel per cone.
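To make the 10^19 figure concrete, here is a minimal back-of-envelope sketch in Python. It assumes a monochromatic 555 nm approximation for the lumen-to-photon conversion, and the pixel count, pupil size, and viewing distance are illustrative placeholders, not the talk's actual room or projector parameters.

```python
# Back-of-envelope photon count, assuming a monochromatic 555 nm approximation;
# pixel count, pupil size, and viewing distance below are illustrative only.
import math

PLANCK = 6.626e-34        # J*s
C = 2.998e8               # m/s
LM_PER_W_555 = 683.0      # luminous efficacy at 555 nm, lm/W

def photons_per_second(lumens, wavelength_nm=555.0):
    """Convert luminous flux to an approximate photon emission rate."""
    watts = lumens / LM_PER_W_555
    photon_energy_j = PLANCK * C / (wavelength_nm * 1e-9)
    return watts / photon_energy_j

rate = photons_per_second(4000.0)              # ~1.6e19 photons/sec for 4K lumens
print(f"{rate:.2e} photons/sec")

# Very rough geometry: fraction of the screen's light that crosses a 3 mm pupil
# at a 10 m viewing distance (Lambertian-ish screen radiating into ~2*pi sr).
pupil_radius_m = 1.5e-3
viewing_distance_m = 10.0
pupil_solid_angle_sr = math.pi * pupil_radius_m**2 / viewing_distance_m**2
fraction_into_pupil = pupil_solid_angle_sr / (2.0 * math.pi)

pixels = 1280 * 1024
per_pixel_per_frame = rate * (1.0 / 60.0) * fraction_into_pupil / pixels
print(f"~{per_pixel_per_frame:.0f} photons per pixel per 1/60 s frame at the eye")
```

Since a bright pixel's retinal image spreads over several foveal cones, counts on the order of the slide's ~600 photons per cone per frame fall out of numbers like these.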
Display Pixel Model Each pixel color sub-component has: a spatial envelope (shape, including fill factor), a spectral envelope (color), and a temporal envelope (time).
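As an illustration of the three envelopes, the sketch below draws a photon's sub-pixel position, wavelength, and emission time from independent spatial, spectral, and temporal distributions. The Gaussian spot, four-bin spectrum, and exponential phosphor-style decay are stand-in shapes, not the calibrated pixel models from the talk.

```python
# Illustrative separable sub-pixel model: spatial, spectral, and temporal
# envelopes sampled independently (envelope shapes are placeholders).
import random

class SubPixelModel:
    def __init__(self, center_xy, spot_sigma, wavelengths, spectrum, decay_ms):
        self.center_xy = center_xy      # sub-pixel center within the pixel cell
        self.spot_sigma = spot_sigma    # Gaussian spot size (fraction of pixel pitch)
        self.wavelengths = wavelengths  # nm, tabulated spectrum support
        self.spectrum = spectrum        # relative power at each wavelength
        self.decay_ms = decay_ms        # phosphor-style exponential decay constant

    def sample_photon(self):
        # Spatial envelope: Gaussian spot around the sub-pixel center.
        x = random.gauss(self.center_xy[0], self.spot_sigma)
        y = random.gauss(self.center_xy[1], self.spot_sigma)
        # Spectral envelope: pick a wavelength proportional to tabulated power.
        wl = random.choices(self.wavelengths, weights=self.spectrum, k=1)[0]
        # Temporal envelope: exponential decay after the addressing instant.
        t_ms = random.expovariate(1.0 / self.decay_ms)
        return (x, y), wl, t_ms

green = SubPixelModel(center_xy=(0.5, 0.5), spot_sigma=0.15,
                      wavelengths=[520, 530, 540, 550], spectrum=[1, 3, 3, 1],
                      decay_ms=2.0)
print(green.sample_photon())
```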
Trinitron™ CRT Pixel
Direct View LCD Pixel
3 Chip DLP™ Pixel
1 Chip DLP™ Pixel
1 Chip DLP™ In This Room
Optical Model Of The Eye: Schematic Eyes Schematic eyes have historically been composed of 6 quadric surfaces; real human eyes are quite a bit more complex. My model is based on “Off-axis aberrations of a wide-angle schematic eye model”, Escudero-Sanz & Navarro, 1999.
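The core operation in any schematic-eye ray tracer, whether built from six quadrics or the more complex Navarro-style surfaces, is intersecting a ray with a curved interface and bending it with Snell's law. Below is a minimal sketch of that step for one spherical surface; the radius of curvature and refractive indices are placeholder, cornea-like values, not the Escudero-Sanz & Navarro parameters.

```python
# Minimal ray/spherical-surface refraction step, as used in schematic-eye
# ray tracing.  Surface radius and indices are illustrative placeholders.
import numpy as np

def refract(direction, normal, n1, n2):
    """Snell's law in vector form; returns the refracted unit direction,
    or None for total internal reflection."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

def intersect_sphere(origin, direction, center, radius):
    """Nearest positive intersection of a ray with a spherical surface."""
    oc = origin - center
    b = np.dot(direction, oc)
    c = np.dot(oc, oc) - radius**2
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    if t < 0.0:
        t = -b + np.sqrt(disc)
    return origin + t * direction if t > 0.0 else None

# Placeholder cornea-like surface: 7.8 mm radius, air (1.000) to 1.376.
origin = np.array([0.0, 1.0e-3, -10.0e-3])     # 10 mm in front of the vertex
direction = np.array([0.0, 0.0, 1.0])
center = np.array([0.0, 0.0, 7.8e-3])
hit = intersect_sphere(origin, direction, center, 7.8e-3)
normal = (hit - center) / np.linalg.norm(hit - center)   # points back toward the ray
bent = refract(direction, normal, n1=1.000, n2=1.376)
print(hit, bent)
```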
Eye Model
Rotation Of The Eye Due To “Drift” When seeing, the eye is almost always drifting slowly, at 6 to 30 minutes of arc per second, relative to the point of fixation. The induced motion blur is important for perception, but rarely modeled. (The eye also has tremor, micro-saccades, saccades, pursuit motions, etc.)
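For a sense of scale, the sketch below converts the quoted drift rates into retinal image motion per 1/60 s frame, assuming roughly 0.29 mm of retina per degree of visual angle and ~2.5 µm foveal cone spacing. Those are typical textbook values used only for illustration, not figures from the talk.

```python
# Rough estimate of retinal image motion due to fixational drift.
# The retinal scale (~0.29 mm/deg) and foveal cone spacing (~2.5 um)
# are typical textbook values, used here only for illustration.
MM_PER_DEGREE = 0.29
CONE_SPACING_UM = 2.5

def drift_per_frame_um(arcmin_per_sec, frame_s=1.0 / 60.0):
    degrees = arcmin_per_sec / 60.0 * frame_s
    return degrees * MM_PER_DEGREE * 1000.0

for rate in (6.0, 30.0):                      # the slide's drift range, arcmin/sec
    um = drift_per_frame_um(rate)
    print(f"{rate:4.0f} arcmin/s -> {um:.2f} um per frame "
          f"(~{um / CONE_SPACING_UM:.2f} cone spacings)")
```

At the fast end of the range the image moves about one cone spacing per frame, which is why the blur accumulates visibly over a video sequence.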
Why The Eye Sampling Pattern Matters
Roorda And Williams Image
Synthetic Retina Generation Some existing efforts take real retinal images as representative patches, then flip and repeat them. Others just perturb a triangular lattice. I want all 5 million cones, so I built a new computer model that generates retinas to order (not synthesizing rods yet).
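One common way to get an irregular but locally hexagonal mosaic, shown below as a toy stand-in, is to relax randomly scattered points with short-range repulsion until they settle into a jittered packing. This is only an illustration of the kind of output wanted; it is not the talk's retinal synthesizer, which among other things models eccentricity-dependent cone density.

```python
# Toy cone-mosaic generator: random points relaxed with short-range repulsion
# into a jittered, roughly hexagonal packing.  An illustrative stand-in only,
# not the talk's retinal synthesizer.
import numpy as np

def relax_mosaic(n=500, size=1.0, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    pts = rng.random((n, 2)) * size
    radius = size * np.sqrt(2.0 / n)            # interaction radius ~ mean spacing
    step = 0.1 * radius
    for _ in range(iters):
        diff = pts[:, None, :] - pts[None, :, :]               # pairwise offsets
        dist = np.linalg.norm(diff, axis=-1) + 1e-9
        strength = np.clip((radius - dist) / radius, 0.0, 1.0)  # 0..1 repulsion
        push = (diff / dist[..., None]) * strength[..., None]
        pts = np.clip(pts + step * push.sum(axis=1), 0.0, size)
    return pts

cones = relax_mosaic()
print(cones.shape, cones.min(), cones.max())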
Retina Generation Algorithm For more details, attend the implementation sketch “A Human Eye Cone Retinal Synthesizer”, Wednesday 8/3, 10:30 am session, Room 515B (~11:25 am).
Growth Sequence Movie
Growth Movie Zoom
Retinal Zoom Out Movie
3D Fly By Movie
Roorda Blood Vessel
Roorda Versus Synthetic
The Human Eye Versus Simple Optics Theory All of the eye's optical axes are unaligned: the fovea is 5 degrees off axis; the pupil is offset ~0.5 mm; the lens is tilted (no agreement on the amount); the rotational center is 13 mm back and 0.5 mm nasal; and the eye's image surface is spherical.
Blur And Diffraction (comparison: just blur vs. blur and diffraction)
Generating a Diffracted Point Spread Function (DPSF) Trace the wavefront of a point source as 16 million rays. Repeat for 45 spectral wavelengths. Repeat for every retinal patch swept out by one degree of arc in both directions.
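The talk traces the wavefront as 16 million rays per wavelength per retinal patch. As a much lighter illustration of what a diffracted point spread function is, the sketch below computes the aberration-free PSF of a circular pupil with the standard Fraunhofer (pupil-function FFT) approximation; it is not the talk's wavefront-tracing method and ignores the eye's aberrations. The pupil size, focal length, and sampling below are assumed values.

```python
# Aberration-free diffracted PSF of a circular pupil via the Fraunhofer
# approximation: PSF = |FFT(pupil function)|^2.  Single wavelength, no eye
# aberrations -- an illustration, not the talk's 16M-ray wavefront trace.
import numpy as np

def diffraction_psf(pupil_diameter_mm=3.0, wavelength_nm=550.0,
                    eye_focal_length_mm=22.0, grid=512, window_mm=12.0):
    coords_mm = (np.arange(grid) - grid / 2) * (window_mm / grid)
    X, Y = np.meshgrid(coords_mm, coords_mm)
    pupil = ((X**2 + Y**2) <= (pupil_diameter_mm / 2.0) ** 2).astype(complex)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    psf /= psf.sum()
    # Sample spacing in the PSF plane (small-angle approximation).
    d_theta_rad = (wavelength_nm * 1e-6) / window_mm           # radians per sample
    d_retina_um = d_theta_rad * eye_focal_length_mm * 1000.0   # microns on the retina
    return psf, d_retina_um

psf, d_retina_um = diffraction_psf()
print(psf.shape, f"{d_retina_um:.2f} um per PSF sample")
```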
Diffracted Point Spread Functions Movie
Putting It All Together Generate synthetic retina. Compute diffracted point spread functions by tracing wavefronts through optical model. Simulate, photon by photon, a video sequence into eye cones. Display cone photon counts as colored images.
Simulating Display And Eye For each frame of the video sequence, for each pixel in each frame, for each color primary in each pixel: from the color primary intensity, compute the number of photons that enter the eye; for each simulated photon, generate a sub-pixel position, sub-frame time, and wavelength.
Simulating Display And Eye From the sub-frame time of the photon, interpolate the eye rotation due to “drift”. From the position and wavelength of the photon, interpolate the diffracted point spread function. Interpolate and compute the effect of pre-receptoral filters, which cull ~80% of photons.
Simulating Display And Eye Materialize the photon at a point within the DPSF, parameterized by a random value. Compute the cone hit; cull photons that miss. Apply the Stiles-Crawford effect (I); cull photons. Compute cone photopigment absorptance; cull photons not absorbed. Increment the cone's photon count by 1. (A condensed code sketch of this per-photon loop follows below.)
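The three slides above outline a per-photon pipeline; the sketch below condenses it into one loop. The `display`, `eye`, and `drift` objects are trivial fakes so the code runs end to end, and every probability they return is an illustrative placeholder rather than one of the paper's calibrated filters.

```python
# Condensed sketch of the per-photon pipeline from the slides above.
# All model objects and probabilities are illustrative placeholders.
import random
from types import SimpleNamespace

def simulate_frame(frame, display, eye, drift, rng=None):
    rng = rng or random.Random(0)
    cone_counts = {}                                     # cone id -> absorbed photons
    for (px, py), primaries in frame.items():            # each pixel in the frame
        for primary, intensity in primaries.items():     # each color sub-component
            n_photons = display.photons_entering_eye(px, py, primary, intensity)
            for _ in range(n_photons):
                # Sample from the pixel's spatial/spectral/temporal envelopes.
                pos, wavelength, t = display.sample_photon(px, py, primary)
                # Eye orientation at this instant, interpolated from the drift path.
                gaze = drift.orientation_at(t)
                # Pre-receptoral filters (lens, macular pigment): cull ~80% of photons.
                if rng.random() > eye.prereceptoral_transmittance(wavelength):
                    continue
                # Materialize the photon within the interpolated DPSF; the same
                # draw also fixes where it crossed the pupil.
                retina_xy, pupil_entry = eye.sample_dpsf(pos, wavelength, gaze, rng.random())
                cone = eye.cone_at(retina_xy)
                if cone is None:                         # missed every cone aperture
                    continue
                # Stiles-Crawford effect (I): acceptance depends on pupil entry point.
                if rng.random() > eye.stiles_crawford(pupil_entry):
                    continue
                # Cone photopigment absorptance for this cone class and wavelength.
                if rng.random() > eye.absorptance(cone, wavelength):
                    continue
                cone_counts[cone] = cone_counts.get(cone, 0) + 1
    return cone_counts

# Trivial fake models so the sketch runs end to end (all behavior is made up).
display = SimpleNamespace(
    photons_entering_eye=lambda px, py, primary, intensity: int(intensity),
    sample_photon=lambda px, py, primary: ((px + random.random(), py + random.random()),
                                           550.0, random.random() / 60.0),
)
eye = SimpleNamespace(
    prereceptoral_transmittance=lambda wl: 0.2,
    sample_dpsf=lambda pos, wl, gaze, u: ((pos[0] + 0.1 * u, pos[1]), (0.0, 0.0)),
    cone_at=lambda xy: None if random.random() < 0.3
                       else int(xy[0] * 10) + 100 * int(xy[1] * 10),
    stiles_crawford=lambda entry: 0.9,
    absorptance=lambda cone, wl: 0.5,
)
drift = SimpleNamespace(orientation_at=lambda t: (0.0, 0.0))

frame = {(0, 0): {"green": 200}, (1, 0): {"green": 80}}
print(simulate_frame(frame, display, eye, drift))
```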
30x30 Pixel Face Input
Retinal Image Results
Lumen Ramp Movie
30x30 Pixel Movie
Result Movie
How To Test Model?
Test it the same way we test real eyes.
Acuity test targets: 20/27, 20/20, 20/15, 20/12, 20/9 (eye-chart comparison figure)
Sine Frequency Ramp (20, 40, and 80 cycles)
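A sine frequency ramp is a grating whose spatial frequency increases across the image, the same kind of stimulus used to probe the contrast sensitivity of real eyes. The sketch below generates such a chirp pattern; the image size and the 20-to-80-cycle range are illustrative assumptions.

```python
# Generate a horizontal sinusoidal grating whose frequency ramps linearly
# across the image (a chirp), e.g. from 20 to 80 cycles per image width.
import numpy as np

def sine_frequency_ramp(width=1024, height=256, cycles_start=20.0, cycles_end=80.0):
    x = np.linspace(0.0, 1.0, width)
    # Instantaneous frequency f(x) = start + (end - start) * x; integrate for phase.
    phase = 2.0 * np.pi * (cycles_start * x + 0.5 * (cycles_end - cycles_start) * x**2)
    row = 0.5 + 0.5 * np.sin(phase)                    # intensities in [0, 1]
    return np.tile(row, (height, 1))

pattern = sine_frequency_ramp()
print(pattern.shape, pattern.min(), pattern.max())
```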
Maximum Drift Movie
Maximum Track Movie
Next Steps Continue validating the model and adding features. Simulate deeper into the visual system: retinal receptor fields, the lateral geniculate nucleus, and simple and complex cells of the visual cortex.
Acknowledgements Michael Wahrman for the RenderMan™ rendering of the cone data. Julian Gómez and the anonymous SIGGRAPH reviewers for their comments on the paper.