LOD Metrics
Jonathan Blow, Bolt Action Software (jon@bolt-action.com)

Motivation
- Spent a lot of time on terrain research
- Goal: spend energy wisely
- Goal: maximize simplicity and benefit
- Communicate what I see in the academic papers, from being real far into them
- Disclose the things you won't read in anyone's LOD paper

Lecture Structure
- The goal of LOD
- Choosing a metric to meet that goal
- See how implementation details get in the way
- Look deeper at what our algorithms are doing
- Examine the role of analysis in formulating these algorithms

I. The Goal of LOD

LOD is about reducing resource usage
- Usually triangles, but also textures, etc.
- We substitute cheaper models where people wouldn't notice the difference
- This is not a very formal definition

No Objective Quality Goal
- Scientific method: test a hypothesis against the world to determine truth or falsehood
- LOD researchers are pulling this stuff out of their ass (e.g. the metric)
- The more complex the algorithm, the more ass-pulling is involved → the more likely it is wrong

Objective Goal Analogy: PSNR
- Used widely in image processing
- Definition: derived from the L2 norm of the difference between two images, treated as N-vectors (i.e. from the mean squared error, rescaled against the peak signal); sketched below
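
For concreteness, a minimal C++ sketch of this computation, assuming 8-bit grayscale images stored as flat arrays of equal length; the function name and data layout are illustrative, not from the talk:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// PSNR sketch: MSE is the squared L2 norm of the difference vector,
// divided by N; PSNR rescales that against the peak value (255 for
// 8-bit data) onto a logarithmic dB scale.
double psnr(const std::vector<uint8_t> &a, const std::vector<uint8_t> &b) {
    double mse = 0.0;
    for (size_t i = 0; i < a.size(); i++) {
        double d = double(a[i]) - double(b[i]);
        mse += d * d;
    }
    mse /= double(a.size());
    if (mse == 0.0) return INFINITY;  // identical images
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```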

Drawbacks of PSNR
- Doesn't match the human visual system (e.g. it treats each pixel as independent)
- People are working on replacements; nobody agrees on one

The Order in Which Bugs Get Fixed
1. Things that don't compile
2. Things that crash
3. Obvious functional errors
4. Subtle functional errors that require careful analysis to diagnose, and that still leave the software "pretty much working"

II. Choosing a metric and using it

Metrics in the small vs. in the large
- A common "in the small" metric is projected pixel error (sketched below)
- We don't have anything "in the large" (the analogue of PSNR)
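
A minimal sketch of that "in the small" computation, under a simple perspective-projection assumption (the error delta measured perpendicular to the view direction); all names and parameters are illustrative:

```cpp
#include <cmath>

// Project an object-space error (delta, in world units) into pixels.
// viewport_height is in pixels; fov_y is the vertical field of view
// in radians; z is the distance from the eye.
double projected_pixel_error(double delta, double z,
                             double viewport_height, double fov_y) {
    // Perspective scaling: world lengths at depth z map to
    // (viewport_height / 2) / (z * tan(fov_y / 2)) pixels per unit.
    double pixels_per_unit = (viewport_height * 0.5) / (z * std::tan(fov_y * 0.5));
    return delta * pixels_per_unit;
}
```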

To be fast, algorithms degrade tessellation efficiency
- EQS or progressive mesh: mega conservative
- Regular samples for a BTT (binary triangle tree) vs. a TIN (triangulated irregular network)
- The structure of a BTT forces extra splits (crack fixing)
- Top-down view-dependent algorithms give up tessellation efficiency (often without realizing it!)

Measurement of projected pixel error is arbitrary
- GH: along the normal; LK: along the z direction (both sketched below)
- Why not measure terrain along the normal? Or a general mesh along local verticality?
- What about texture popping? Etc.
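
To make the arbitrariness concrete, a small sketch of the two choices named above, for a terrain vertex displaced by simplification; the types and labels are hypothetical stand-ins:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// "LK-style": only the vertical component of the displacement counts.
double error_along_z(Vec3 original, Vec3 simplified) {
    return std::fabs(original.z - simplified.z);
}

// "GH-style": the component of the displacement along the surface normal.
double error_along_normal(Vec3 original, Vec3 simplified, Vec3 unit_normal) {
    Vec3 d = { original.x - simplified.x,
               original.y - simplified.y,
               original.z - simplified.z };
    return std::fabs(dot(d, unit_normal));
}
```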

Metric as simplifier
- Breaks lots of complex spatial relations down into, e.g., a scalar
- Algorithms try to use vertex correlation to speed things up
- They usually use the *scalar* and forget that all this information is available
- Instead, use isosurfaces of the metric: big speed improvement (see the sketch below)
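
A minimal sketch of the isosurface idea, assuming the metric's threshold isosurface around a node can be bounded by a sphere; `LodNode` and its fields are hypothetical, not from the talk:

```cpp
struct Vec3 { double x, y, z; };

static double distance_squared(Vec3 a, Vec3 b) {
    double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx*dx + dy*dy + dz*dz;
}

// Precompute, per node, the surface where the metric equals the split
// threshold.  If that isosurface can be bounded by a sphere, the
// per-frame work collapses from "re-evaluate the scalar metric" to
// one squared-distance test against a stored radius.
struct LodNode {
    Vec3   center;
    double split_radius;  // metric == threshold exactly on this sphere
};

bool should_split(const LodNode &node, Vec3 camera) {
    return distance_squared(camera, node.center)
         < node.split_radius * node.split_radius;
}
```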

III. Implementation details get in the way

Icky non-cooperation of different data types (triangles, textures, lighting)
- Changes in vertex positions → geometry
- Changes in normals → texture irradiance → "how the scene looks"

How might this be simplified?
- Example: an LOD'd voxel space
- A galactic armada of signal processing

Bump mapping (and other anisotropy, e.g. BRDFs) screws us
- Our hardware and API implementations give us less flexibility with this kind of lighting than with old-school Lambertian stuff
- Tangent frames can only be pinned to vertices
- This sucks ass when combined with LOD

IV. Looking deeper at our algorithms’ processes.

Edge collapse is a linear interpolation between samples
- When we look at this as a filter, what does it tell us?
- Translate the bilinear filter into a sequence of fundamental signal processing operations
- What this does to frequency content: fold, then mirror, then scale by the filter's transfer function, roughly ½ + ½·cos(ω) (tabulated below)
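
A tiny sketch, assuming the filter in question is the 1D tent (linear interpolation) filter with taps {1/4, 1/2, 1/4}; it just tabulates H(ω) to show how weak a low-pass linear interpolation is:

```cpp
#include <cmath>
#include <cstdio>

// The 1D tent filter with taps {1/4, 1/2, 1/4} has the real transfer
// function H(w) = 1/2 + (1/2)cos(w), reaching zero only at w = pi.
// It is a weak low-pass, so decimating through it folds plenty of
// aliased energy back into the surviving frequencies.
int main() {
    const double pi = 3.14159265358979323846;
    for (int i = 0; i <= 8; i++) {
        double w = pi * i / 8.0;
        std::printf("w = %5.3f  H(w) = %5.3f\n", w, 0.5 + 0.5 * std::cos(w));
    }
    return 0;
}
```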

V. Better planning of future algorithms

Metric defines system behavior
- We can tell a lot about what a system will do by thinking about the metric and the data it operates on
- This can help us understand where to best focus our effort

Example: what it means to limit projected pixel error
- Metric: projected pixel error
- Algorithm: "keep projected error near some constant" (see the sketch below)
- Effect: screen-space triangles are roughly the same size
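
A minimal sketch of that algorithm's per-node decision, with hypothetical names; the hysteresis band is an assumed detail, one common way to keep the error "near some constant" without popping back and forth:

```cpp
// Refine while projected error exceeds a pixel budget tau; coarsen
// when it falls well below.  object_space_error is delta, the maximum
// deviation of this approximation; pixels_per_unit_at_unit_depth is
// the perspective constant (viewport_height / 2) / tan(fov_y / 2).
struct Node {
    double object_space_error;
    double distance_to_camera;  // recomputed each frame
    bool   refined = false;
};

void update_lod(Node &n, double pixels_per_unit_at_unit_depth, double tau) {
    double projected = n.object_space_error
                     * pixels_per_unit_at_unit_depth / n.distance_to_camera;
    if (projected > tau)             n.refined = true;   // split: error too visible
    else if (projected < 0.5 * tau)  n.refined = false;  // merge: cheaper model suffices
    // The band [0.5*tau, tau] is hysteresis: it keeps projected error
    // near a constant without flickering between levels.
}
```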

In an LOD’d scene, polygons tend to be roughly the same size in screen pixels.

A large percentage of polygons are small and close (50%? 60%?)

Why polygons tend to be the same size in screen pixels
- The projected size of any delta value is roughly constant (the stabilized point of the algorithm's action): δ/z = k
- Big delta values tend to be attached to big edges, small deltas to small edges: δ = m·d, where d = edge length and m = some constant

Why polygons tend to be the same size in screen pixels
- δ/z = k, so δ₁/z₁ = k = δ₂/z₂
- δ = m·d, so m·d₁/z₁ = k = m·d₂/z₂
- Therefore d₁/z₁ = d₂/z₂ = k/m
- Screen area ∝ ½·(d/z)² = ½·(k/m)²
- This is a constant.

Conclusions
- Push the "metric in the small" into the realm of the framebuffer sample?
- (Video cards already screw the pooch at this scale, so maybe we would just be hiding all our error in there)
- Thank you, and good night.