Computer Graphics Research at Virginia
David Luebke
Department of Computer Science
Outline
My current research
–Perceptually Driven Interactive Rendering
  Perceptual level of detail control
  Wacky new algorithms
–Scanning Monticello
Graphics resources
–Building an immersive display
–Building a rendering cluster?
Perceptual Rendering
Next few slides from a recent talk
Apologies to the UVA vision group
Perceptually Guided Interactive Rendering
David Luebke
University of Virginia
Motivation: Stating The Obvious
Interactive rendering of large-scale geometric datasets is important:
–Scientific and medical visualization
–Architectural and industrial CAD
–Training (military and otherwise)
–Entertainment
Motivation: Model Size
Incredibly, 3-D models are getting bigger as fast as hardware is getting faster…
Big Models: Submarine Torpedo Room
1994: 700,000 polygons
Courtesy General Dynamics, Electric Boat Div.
Big Models: Coal-fired Power Plant
1997: 13 million polygons
(Anonymous)
Big Models: Plant Ecosystem Simulation
1998: 16.7 million polygons
Deussen et al., Realistic Modeling and Rendering of Plant Ecosystems
Big Models: Double Eagle Container Ship
2000: 82 million polygons
Courtesy Newport News Shipbuilding
Big Models: The Digital Michelangelo Project
2000 (David): 56 million polygons
2001 (St. Matthew): 372 million polygons
Courtesy Digital Michelangelo Project
(Part Of) The Solution: Level of Detail
Clearly, much of this geometry is redundant for a given view
The idea: simplify complex models by reducing the level of detail used for small, distant, or unimportant regions
Traditional Level of Detail In A Nutshell…
Create levels of detail (LODs) of objects:
249,924 polys → 62,480 polys → 7,809 polys → 975 polys
Courtesy Jon Cohen
Traditional Level of Detail In A Nutshell…
Distant objects use coarser LODs:
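To make the selection policy concrete, here is a minimal sketch of traditional distance-based LOD switching. The `LODSet` structure, the `selectLOD` name, and the switch-distance thresholds are illustrative assumptions, not any particular library's API.

```cpp
#include <cstddef>
#include <vector>

struct Mesh;                           // stands in for whatever the renderer draws

// Precomputed discrete LODs for one object, finest first.
struct LODSet {
    std::vector<Mesh*> levels;         // levels[0] = full detail
    std::vector<float> switchDist;     // switchDist[i] = max viewing distance for levels[i]
};

// Pick the finest LOD whose switch distance still covers the object's
// current distance from the eye; past every threshold, use the coarsest.
Mesh* selectLOD(const LODSet& lod, float distanceToEye) {
    for (std::size_t i = 0; i < lod.levels.size(); ++i)
        if (distanceToEye <= lod.switchDist[i])
            return lod.levels[i];
    return lod.levels.back();          // assumes at least one level exists
}
```

Switch distances are typically chosen so each transition happens when the coarser level's geometric error projects to roughly a pixel or less on screen.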
The Big Question
How should we evaluate and regulate the visual fidelity of our simplifications?
Measuring Fidelity
Fidelity of a simplification to the original model is often measured geometrically:
METRO by Visual Computing Group, CNR-Pisa
Measuring Visual Fidelity
However…
–The most important measure of fidelity is usually not geometric but perceptual: does the simplification look like the original?
Therefore:
–We are developing a principled framework for LOD in interactive rendering, based on perceptual measures of visual fidelity
Perceptually Guided LOD: Questions And Issues
Several interesting offshoots:
–Imperceptible simplification: when can we claim simplification is undetectable?
–Best-effort simplification: how best to spend a limited time/polygon budget?
–Silhouette preservation: silhouettes are important. How important?
–Gaze-directed rendering: when can we exploit reduced visual acuity?
Related Work: Perceptually Guided Rendering
Lots of excellent research on perceptually guided rendering
But most work has focused on offline rendering algorithms (e.g., path tracing)
–Different time frame! Seconds or minutes vs. milliseconds
–Sophisticated metrics: visual masking, background adaptation, etc.
Perceptually Guided LOD: Our Approach
Approach: test folds (local simplification operations) against a perceptual model to determine if they would be perceptible
[Figure: a fold collapses a cluster of numbered vertices into a single representative vertex A; an unfold reverses the operation]
Perception 101: The Contrast Sensitivity Function
Perceptual scientists have long used contrast gratings to measure limits of vision:
–Bars of sinusoidally varying intensity
–Can vary: contrast, spatial frequency, eccentricity, velocity, etc.
Perception 101: The Contrast Sensitivity Function
Contrast grating tests produce a contrast sensitivity function
–Threshold contrast vs. spatial frequency
–CSF predicts the minimum detectable static stimuli
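For a concrete reference point, here is one widely used analytic fit to the CSF, the Mannos-Sakrison (1974) model. Treating this particular fit as the one used in the work above is an assumption, and the fit is normalized, so absolute thresholds would need calibration to display luminance and viewing conditions.

```cpp
#include <cmath>

// Mannos-Sakrison analytic fit to the contrast sensitivity function:
// relative sensitivity as a function of spatial frequency f (cycles/degree).
double contrastSensitivity(double f) {
    return 2.6 * (0.0192 + 0.114 * f) * std::exp(-std::pow(0.114 * f, 1.1));
}

// Sensitivity is defined as the reciprocal of threshold contrast, so the
// minimum detectable (Michelson) contrast at frequency f is:
double thresholdContrast(double f) {
    return 1.0 / contrastSensitivity(f);
}
```

The curve peaks at a few cycles per degree and falls off steeply at higher frequencies; that falloff is the lever a perceptual simplifier pulls, since detail whose contrast sits below the curve at its frequency is, by definition, undetectable.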
Your Personal CSF
Campbell-Robson Chart by Izumi Ohzawa
Framework: View-Dependent Simplification
Next: need a framework for simplification
–We use view-dependent simplification for LOD management
Traditional LOD: create several discrete LODs in a preprocess, pick one at run time
View-dependent LOD: create a data structure in a preprocess, extract an LOD for the given view
View-Dependent LOD: Examples
Show nearby portions of object at higher resolution than distant portions
View from eyepoint / Bird's-eye view
View-Dependent LOD: Examples
Show silhouette regions of object at higher resolution than interior regions
View-Dependent LOD: Examples
Show more detail where the user is looking than in their peripheral vision:
34,321 triangles
View-Dependent LOD: Examples
Show more detail where the user is looking than in their peripheral vision:
11,726 triangles
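Both slides show the same model; only the gaze point differs. What makes this work is the sharp falloff of visual acuity with eccentricity. A toy falloff model as a sketch; the inverse-linear form and both constants are illustrative assumptions loosely inspired by cortical-magnification fits, not values from this work:

```cpp
// Toy gaze-directed acuity model: the highest resolvable spatial frequency
// drops with angular distance from the gaze point. The constants are
// illustrative assumptions, not measured values.
double maxResolvableFrequency(double eccentricityDeg) {
    const double fovealPeak = 60.0;  // roughly 60 cycles/degree at the fovea
    const double e2 = 2.3;           // eccentricity (degrees) where acuity halves
    return fovealPeak / (1.0 + eccentricityDeg / e2);
}
```

Detail whose projected spatial frequency exceeds this cutoff at the viewer's current eccentricity can be simplified away without a detectable change, which is what makes head- and eye-tracked rendering attractive.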
View-Dependent LOD: Implementation
We use VDSlib, our public-domain library for view-dependent simplification
Briefly, VDSlib uses a big data structure called the vertex tree
–Hierarchical clustering of model vertices
–Updated each frame for the current simplification
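For flavor, a minimal sketch of a vertex-tree node and the per-frame update just described. The field and function names are illustrative stand-ins, not VDSlib's actual API.

```cpp
#include <vector>

struct Camera;                         // viewing parameters (position, gaze, etc.)
struct VertexTreeNode;
bool errorIsPerceptible(const VertexTreeNode* n, const Camera& cam);  // hypothetical test

// One node of the vertex tree: a hierarchical cluster of model vertices
// that can be drawn as a single representative ("proxy") vertex.
struct VertexTreeNode {
    float proxy[3];                    // representative vertex position
    float radius;                      // bounds of the clustered vertices
    std::vector<VertexTreeNode*> children;
    bool folded;                       // true: subtree drawn as the proxy
};

// Per-frame update: unfold nodes whose error would be visible from the
// current viewpoint, fold everything else.
void adapt(VertexTreeNode* n, const Camera& cam) {
    if (errorIsPerceptible(n, cam)) {
        n->folded = false;
        for (VertexTreeNode* c : n->children)
            adapt(c, cam);
    } else {
        n->folded = true;              // whole subtree collapses to its proxy
    }
}
```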
The Vertex Tree: Region Of Effect
Folding a node affects a limited region:
–Some triangles change shape upon folding
–Some triangles disappear completely
[Figure: folding node A collapses its cluster of numbered vertices to the single vertex A; unfolding node A reverses the operation]
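Putting the pieces together: the perceptual test can treat a fold as a worst-case grating whose spatial frequency comes from the angular extent of its region of effect, compared against the CSF threshold from the sketch above. The one-cycle-per-twice-the-extent mapping and the function names below are my assumptions about the general technique, not the exact published formulation.

```cpp
// Decide whether a fold would be visible. A change confined to a region
// subtending x degrees has its lowest (most detectable) spatial frequency
// near one cycle per 2x degrees. inducedContrast would come from the
// luminance change the fold causes in its region of effect; uses
// thresholdContrast() from the CSF sketch above.
bool foldIsImperceptible(double regionExtentDeg, double inducedContrast) {
    double f = 1.0 / (2.0 * regionExtentDeg);   // cycles per degree
    return inducedContrast < thresholdContrast(f);
}
```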
Wacky New Algorithms
I am interested in exploring new perceptually-driven rendering algorithms
–Don't necessarily fit the constraints of today's hardware
  Ex: frameless rendering (see the sketch below)
  Ex: I/O differencing (time permitting)
–Give the demo, show the movie…
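Frameless rendering (Bishop et al., 1994) abandons the frame as the unit of update: pixels are re-rendered continually, in pseudorandom order, each against the latest input, so motion shows up immediately but noisily. A minimal sketch, with `currentCamera()` and `renderPixel()` as hypothetical stand-ins for a ray-casting backend:

```cpp
#include <random>

struct Camera { /* position, orientation, field of view, ... */ };
Camera currentCamera();                          // hypothetical: latest user input
void renderPixel(int x, int y, const Camera&);   // hypothetical ray caster

// No frame boundaries: pick pixels in pseudorandom order, forever, and
// update each one in place against the camera as it is right now.
void framelessLoop(int width, int height) {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> px(0, width - 1), py(0, height - 1);
    for (;;)
        renderPixel(px(rng), py(rng), currentCamera());
}
```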
Non-Photorealistic Rendering (time permitting)
Fancy name, simple idea: make computer graphics that don't look like computer graphics
NPRlib
NPRlib: a flexible, callback-driven non-photorealistic rendering library
Bunny: Traditional CG Rendering
Non-Photorealistic Rendering
NPRlib: a flexible, callback-driven non-photorealistic rendering library
Bunny: Pencil-Sketch Rendering
Non-Photorealistic Rendering
NPRlib: a flexible, callback-driven non-photorealistic rendering library
Bunny: Charcoal Smudge Rendering
Non-Photorealistic Rendering
NPRlib: a flexible, callback-driven non-photorealistic rendering library
Bunny: Two-Tone Rendering
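Two-tone shading is the easiest of these styles to sketch: quantize diffuse lighting to two flat colors around a threshold instead of a smooth ramp. This is a generic illustration of the technique, not NPRlib's callback API, and the 0.3 threshold is arbitrary.

```cpp
struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Two-tone ("toon") shading: pick one of two flat colors depending on
// which side of a lighting threshold the surface falls. Assumes normal
// and toLight are unit vectors.
Vec3 twoToneShade(const Vec3& normal, const Vec3& toLight,
                  const Vec3& lightTone, const Vec3& darkTone) {
    return (dot(normal, toLight) > 0.3f) ? lightTone : darkTone;
}
```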
Scanning Monticello
Fairly new technology: scanning the world
Scanning Monticello
Want a flagship project to showcase this
Idea: scan Thomas Jefferson's Monticello
–Historic preservation
–Virtual tours
–Archeological and architectural research, documentation, and dissemination
–Great driving problem for scanning & rendering research
Results from first pilot project:
–Show some data…
Scanning Monticello
Graphics Resources
2 SGI Octanes
–Midrange graphics hardware
SGI InfiniteReality 2
–2 x 225 MHz R10K, 1 GB RAM, 4 MB cache
–High-end graphics hardware: 13 million triangles/sec, 64 MB texture memory
Hot new PC platforms (P3s and P4s)
–High-end cards built on nVidia's best chipsets
–Stereo glasses, digital video card, miniDV stuff
–Quad Xeon on loan
Software!
–Maya, RenderMan, Lightscape, MultiGen, etc.
Graphics Resources
Building an immersive display
–NSF grant to build a state-of-the-art immersive display:
  6 projectors, 3 screens, passive stereo
  High-end wide-area head tracker
  8-channel spatial audio
  PCs to drive it all
–Need some help building it…
Graphics Resources
Building a rendering cluster?
–Trying to get money to build a high-end rendering cluster for wacky algorithms:
  12 dual-Xeon PCs: 1 GB RAM, 72 GB striped RAID, nVidia GeForce3
  Gigabit interconnect
–Don't have the money yet, but do have 6 hot Athlon machines
More Information
I only take students who've worked with me or impressed me somehow
–Summer work: best
–Semester work: fine, but harder
Interested in graphics?
–Graphics Lunch: Fridays @ noon, OLS 228E
–An informal seminar/look at cool graphics papers
–Everyone welcome, bring your own lunch