1
Visualization Process: sensors / simulation / database -> raw data -> transformation -> images
2
Visualization Pipeline: sensors / simulation / database -> raw data -> (filter) -> vis data -> (mapping) -> renderable primitives -> (rendering) -> images
3
Visualization Pipeline, filter step: denoising, decimation, multiresolution, mesh generation, etc.
4
Visualization Pipeline, mapping step. Geometry: line, surface, voxel. Attributes: color, opacity, texture.
5
Visualization Pipeline, rendering step: surface rendering, volume rendering, point-based rendering, image-based rendering, NPR.
6
Volume Rendering. Goal: visualize three-dimensional functions: measurements (medical imaging), numerical simulation output, analytic functions.
7
Volume Rendering
8
Important Steps Reconstruction Classification Optical model Shading
9
Reconstruction: recover the original continuous function from discrete samples.
10
Reconstruction: recover the original continuous function from discrete samples. Filters: box filter, hat filter, sinc filter.
11
Classification: map from numerical values to visual attributes (color, transparency). Transfer functions: color function c(s), opacity function a(s).
12
Classification Order. Pre-classification: classify first, then filter (interpolate). Post-classification: filter first, then classify. (Figure: a sample grid with values 21.05, 27.05, 24.03, 20.05; the two orders give different interpolated results.)
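A minimal C sketch of the difference, using an illustrative step transfer function and the slide's sample values; the function and program structure are assumptions for demonstration, not from the slides:

```c
#include <stdio.h>

/* toy transfer function standing in for a real lookup table (illustrative) */
static float transfer(float s) { return s > 24.0f ? 1.0f : 0.0f; }

/* pre-classification: classify the two voxel values, then interpolate */
static float pre_classified(float a, float b, float t)
{ return (1.0f - t) * transfer(a) + t * transfer(b); }

/* post-classification: interpolate the scalar field, then classify */
static float post_classified(float a, float b, float t)
{ return transfer((1.0f - t) * a + t * b); }

int main(void)
{
    /* midway between the slide's samples 21.05 and 27.05 */
    printf("pre:  %.2f\n", pre_classified(21.05f, 27.05f, 0.5f));  /* 0.50 */
    printf("post: %.2f\n", post_classified(21.05f, 27.05f, 0.5f)); /* 1.00 */
    return 0;
}
```

With a nonlinear transfer function the two orders disagree, which is exactly the point of the slide; they coincide only when the transfer function is linear.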
13
Optical Model. Assume a density-emitter model (an approximation): each data point in the volume can emit and absorb light. Light emission: color classification. Light absorption: blocks light from behind due to non-zero opacity.
14
Optical Model. Ray tracing is one method used to construct the final image. x(t): ray, parameterized by t. s(x(t)): scalar value. c(s(x(t))): color, the emitted light. a(s(x(t))): absorption coefficient.
15
Ray Integration: calculate how much light can enter the eye for each ray: C = ∫[0,D] c(s(x(t))) · exp(−∫[0,t] a(s(x(t′))) dt′) dt
16
Discrete Ray Integration: C = Σ[i=0..n] C_i · Π[j=0..i−1] (1 − A_j). Back-to-front blending (step from i = n−1 down to 0): C′_i = C_i + (1 − A_i) · C′_{i+1}
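A small C sketch of this back-to-front recurrence, assuming per-sample (opacity-weighted) colors C[i] and opacities A[i] are already computed; a single channel is used for brevity:

```c
/* back-to-front blending: C'_i = C_i + (1 - A_i) * C'_{i+1},
   stepping i from n-1 down to 0 */
float composite_back_to_front(const float *C, const float *A, int n)
{
    float acc = 0.0f;                    /* C'_n: nothing behind the volume */
    for (int i = n - 1; i >= 0; --i)
        acc = C[i] + (1.0f - A[i]) * acc;
    return acc;                          /* C'_0: color entering the eye */
}
```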
17
Shading. Common shading model: the Phong model. For each sample, evaluate C = ambient + diffuse + specular = constant + Ip·Kd·(N·L) + Ip·Ks·(N·H)^n. Ip: emission color at the sample; N: normal at the sample.
18
Early attempts (pre-Marching Cubes): The Cuberille Approach (1979). Each voxel has a value and 6 faces. Apply a threshold to perform binary classification, then draw the visible faces of the boundary voxels as polygons. (Fairly jagged images.)
19
Contour Tracking Extract contours at each section and connect them together (1976 and after)
20
New Methods Were Needed. Better image quality is necessary. Finding an object's boundary can sometimes be difficult. The process of connecting boundary contours is also very complicated. The intermediate geometry can be huge.
21
Direct 2-D Display of 3-D Objects. Tuy and Tuy 1984, IEEE CG&A (one of the earliest volume rendering techniques). Direct: no conversion from data to geometry.
22
Basic Idea Based on the idea of ray tracing Treat each pixel as a light source Emit light from the image to the object space The ray stops at the object boundary Calculate shading at the boundary point Assign the value to the pixel
23
Algorithm Details. Data representation (establish 3D volume and 2D screen space); viewing; sampling; shading.
24
Data Representation. 3D volume data are represented by a finite number of cross-sectional slices (a stack of images): N x 2D arrays = 3D array.
25
Data Representation (2). What is a voxel? Two definitions: (1) a voxel is a cubic cell with a single value covering the entire cubic region; (2) a voxel is a data point at a corner of the cubic cell, and the value of a point inside the cell is determined by interpolation.
26
Viewing: Ray Casting. Where to position the volume and image plane; what is a 'ray'; how to march a ray.
27
Viewing (1). 1. Position the volume. Assuming the volume dimensions are w x w x w, we position the center of the volume at the world origin (0,0,0). Volume center = [w/2, w/2, w/2] (local space), so translate by T(-w/2, -w/2, -w/2) (the data-to-world matrix, or world-to-data matrix?).
28
Viewing (2). 2. Position the image plane. Assuming the distance between the image plane and the volume center is D, the center of the image plane is initially at (0, 0, -D).
29
Viewing (3). 3. Rotate the image plane. A new position of the image plane can be defined in terms of three rotation angles about the x, y, and z axes. Assuming the original view vector is [0,0,1], the new view vector g becomes g = [0,0,1] · Rx(α) · Ry(β) · Rz(γ), where Rx, Ry, Rz are the standard 3x3 rotation matrices about each axis, e.g. Ry(β) = [cos β, 0, −sin β; 0, 1, 0; sin β, 0, cos β] in the row-vector convention used here.
30
Viewing (4). Let B = [0,0,0] (the volume center), S0 = [0,0,-D] (the initial image-plane center), u0 = [1,0,0], and v0 = [0,1,0]. Now, with R the rotation matrix: S = B − D·g, u = [1,0,0]·R, v = [0,1,0]·R.
31
Viewing (5). With R the rotation matrix: S = B − D·g, u = [1,0,0]·R, v = [0,1,0]·R. For an image plane of L x L pixels, the corner is E = S − (L/2)·u − (L/2)·v, so each pixel (i,j) has coordinates P = E + i·u + j·v. We enumerate the pixels by varying i and j over 0..L−1.
32
Viewing (6). 4. Cast rays. Remember that each pixel on the image plane is P = E + i·u + j·v, and the view vector is g = [0,0,1]·R. So the ray has the equation Q = P + k·(d·g), k = 0, 1, 2, …, where d is the sampling distance at each step.
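A compact C sketch combining steps 1-4; the vec3 helpers are minimal stand-ins, and g, u, v are assumed to be already rotated by R:

```c
/* minimal vector helpers for the sketch */
typedef struct { float x, y, z; } vec3;
static vec3 add(vec3 a, vec3 b) { vec3 r = {a.x+b.x, a.y+b.y, a.z+b.z}; return r; }
static vec3 scale(vec3 a, float s) { vec3 r = {a.x*s, a.y*s, a.z*s}; return r; }

/* k-th sample point of the ray through pixel (i, j) of an L x L image
   plane; B is the volume center, g/u/v the rotated basis, D the
   plane-to-center distance, d the sampling step */
vec3 ray_point(vec3 B, vec3 g, vec3 u, vec3 v,
               float D, int L, int i, int j, float d, int k)
{
    vec3 S = add(B, scale(g, -D));                       /* S = B - D*g   */
    vec3 E = add(S, add(scale(u, -L / 2.0f), scale(v, -L / 2.0f)));
    vec3 P = add(E, add(scale(u, (float)i), scale(v, (float)j)));
    return add(P, scale(g, d * (float)k));               /* Q = P + k*d*g */
}
```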
33
Sampling. At each step along the ray, we sample the volume data. What does that mean?
34
Sampling (1). In Tuy's paper, Q = P + k·v (v = d·g). At each step k, Q is rounded off to the nearest voxel (like the DDA algorithm). Check whether the voxel is on the boundary (compare against a threshold); if yes, perform shading.
35
Shading. Take into account the voxel position, the distance to the image plane, the object normal, and the light position. The paper does not describe this in detail, but you can imagine we can easily perform local illumination (diffuse or even specular). The distance alone can be used to provide a 3D depth cue (e.g. distant voxels are dimmer).
36
Pros and Cons. + Requires no boundary estimation/hidden surface removal. + No display holes. - Binary object representation. - Flat lighting (head-on illumination). - Jagged surfaces. - No semi-transparency. A more sophisticated classification and lighting model appears in [Levoy 88].
37
Remember: the paper we discussed last time used discrete sampling (jagged edges), binary classification (no fuzzy objects), and shading based on that binary classification (poor quality).
38
Levoy's 1988 paper tried to improve on the above problems: node-centered voxels, floating-point sampling, no explicit surface detection, and shading and classification done separately.
39
Basic Idea. Data are defined at the corners of each cell (voxel); the data value inside the voxel is determined using tri-linear interpolation. No ray-position round-off is needed when sampling. Accumulate colors and opacities along the ray path.
40
Volume Rendering Pipeline. Acquired values f0(xi) -> data preparation -> prepared values f1(xi). Shading -> voxel colors C(xi); classification -> voxel opacities a(xi). Ray sampling -> sample colors Cs(x) and sample opacities as(x). Compositing -> image pixels.
41
Shading and Classification. Shading: compute a color for each data point in the volume. Classification: compute an opacity for each data point in the volume. Both are done by table lookup (transfer functions). Levoy preshaded the entire volume: f(xi) -> C(xi), a(xi).
42
Shading. Use a Phong illumination model: light (color) = ambient + diffuse + specular. C(x) = Cp·Ka + Cp / (K1 + K2·d(x)) · (Kd·(N(x)·L) + Ks·(N(x)·H)^n). Cp: color of the light; Ka, Kd, Ks: ambient, diffuse, specular coefficients; K1, K2: constants (used for depth attenuation); N(x): normal at x.
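A hedged C version of this formula; the dot3 helper and the clamping of the dot products to zero are conventional additions, not spelled out on the slide:

```c
#include <math.h>

static float dot3(const float a[3], const float b[3])
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

/* Phong shading with depth attenuation, per the slide's formula;
   N, L, H are unit vectors, d is the distance used for attenuation */
float shade(float Cp, float Ka, float Kd, float Ks,
            float K1, float K2, float d, float n,
            const float N[3], const float L[3], const float H[3])
{
    float NdotL = fmaxf(0.0f, dot3(N, L));   /* clamp back-facing terms */
    float NdotH = fmaxf(0.0f, dot3(N, H));
    return Cp * Ka + Cp / (K1 + K2 * d) * (Kd * NdotL + Ks * powf(NdotH, n));
}
```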
43
Normal Estimation. How to compute N(x)? 1. Compute the gradient at each corner using central differences: N(x,y,z) = [(f(x+1,y,z) − f(x−1,y,z))/2, (f(x,y+1,z) − f(x,y−1,z))/2, (f(x,y,z+1) − f(x,y,z−1))/2]. 2. Interpolate the normal at the sample point.
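A sketch of step 1 in C, assuming the volume is a dense float array indexed x-fastest and ignoring border voxels:

```c
/* central-difference gradient at interior voxel (x, y, z) of a
   w*h*d volume stored as f[z*h*w + y*w + x] */
#define F(x, y, z) f[(z)*h*w + (y)*w + (x)]

void gradient(const float *f, int w, int h, int d,
              int x, int y, int z, float N[3])
{
    (void)d;  /* depth bound not needed for interior voxels */
    N[0] = (F(x + 1, y, z) - F(x - 1, y, z)) * 0.5f;
    N[1] = (F(x, y + 1, z) - F(x, y - 1, z)) * 0.5f;
    N[2] = (F(x, y, z + 1) - F(x, y, z - 1)) * 0.5f;
}
#undef F
```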
44
Classification: mapping from data to opacities. Region of interest: high opacity (more opaque); the rest: translucent or transparent. The opacity function is typically specified by the user. Levoy came up with two formulas to compute opacity: 1. isosurface; 2. region boundary (e.g. between bone and flesh).
45
Opacity Function (1). Goal: visualize voxels that have a selected threshold value fv. No intermediate geometry is extracted. The idea is to assign voxels with value fv the maximum opacity (say αv), and then create a smooth transition from αv down to 0 for the surrounding area. Levoy wants to maintain a constant thickness for the transition area.
46
Opacity Function (2). Maintain a constant isosurface thickness: opacity = αv on the isosurface, opacity = 0 outside the transition region. Can we assign opacity based on function value instead of distance? (This is a local operation: we don't know where the isosurface is.) Yes, we can base it on the value distance f − fv, but we need to take the local gradient into account.
47
Opacity Function (3). Assign opacity based on the value difference (f − fv) and the local gradient (the value fall-off rate), grad = |∇f|. Assuming a region has a constant gradient and the isosurface transition has thickness R: opacity = αv where f = fv, and opacity = 0 where f = fv − grad·R (thickness = R). In between, interpolate the opacity: opacity = αv − αv·(fv − f(x)) / (grad·R).
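This ramp is small enough to write out. A C sketch, with alpha_v standing for αv; making the ramp symmetric in f − fv and guarding against flat regions are assumptions that extend the slide's one-sided formula:

```c
#include <math.h>

/* opacity ramp around the isovalue fv: full opacity alpha_v at f == fv,
   falling linearly to 0 over a transition of value width grad * R */
float iso_opacity(float f, float fv, float grad, float R, float alpha_v)
{
    if (grad <= 0.0f)                        /* flat region: hit or miss */
        return (f == fv) ? alpha_v : 0.0f;
    float t = fabsf(fv - f) / (grad * R);    /* 0 at surface, 1 at edge */
    return (t <= 1.0f) ? alpha_v * (1.0f - t) : 0.0f;
}
```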
48
Continuous Sampling. We sample the volume at discrete points along the ray (Levoy sampled color and opacity, but you can sample the value and then assign color and opacity). No integer round-off; use trilinear interpolation; composite (front-to-back or back-to-front).
49
Tri-linear Interpolation. Use 7 linear interpolations. Interpolate both value and gradient (Levoy interpolates color and opacity).
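The 7-lerp structure in C: 4 interpolations along x, 2 along y, 1 along z. The corner ordering is an assumed convention:

```c
static float lerp(float a, float b, float t) { return a + t * (b - a); }

/* c[] holds the 8 cell-corner values, ordered c[4*z + 2*y + x];
   (tx, ty, tz) is the sample's fractional position inside the cell */
float trilinear(const float c[8], float tx, float ty, float tz)
{
    float x00 = lerp(c[0], c[1], tx);   /* y=0, z=0 edge */
    float x10 = lerp(c[2], c[3], tx);   /* y=1, z=0 edge */
    float x01 = lerp(c[4], c[5], tx);   /* y=0, z=1 edge */
    float x11 = lerp(c[6], c[7], tx);   /* y=1, z=1 edge */
    float y0  = lerp(x00, x10, ty);     /* z=0 face */
    float y1  = lerp(x01, x11, ty);     /* z=1 face */
    return lerp(y0, y1, tz);            /* 7th and final lerp */
}
```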
50
Compositing. The initial pixel color is black. Back-to-front compositing: use the 'under' operator. C = C1 'under' background; C = C2 'under' C; C = C3 'under' C; … Cout = Cin·(1 − α(x)) + C(x)·α(x)
51
Compositing (2). Or you can use the front-to-back compositing formula. Front-to-back compositing: use the 'over' operator, starting from the sample nearest the eye: C = C1; C = C 'over' C2; C = C 'over' C3; …; finally composite over the background. Cout = Cin + C(x)·(1 − αin); αout = αin + α(x)·(1 − αin)
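Both operators as one-step C helpers, mirroring the slide formulas; the front-to-back version assumes opacity-weighted (associated) colors, as in Levoy's paper:

```c
/* back-to-front 'under' step: Cout = Cin*(1 - a) + C*a */
float under(float C_in, float C, float a)
{
    return C_in * (1.0f - a) + C * a;
}

/* front-to-back 'over' step, C opacity-weighted:
   Cout = Cin + C*(1 - a_in);  a_out = a_in + a*(1 - a_in) */
void over(float *C_acc, float *a_acc, float C, float a)
{
    *C_acc = *C_acc + C * (1.0f - *a_acc);
    *a_acc = *a_acc + a * (1.0f - *a_acc);
}
```

Front-to-back also allows early ray termination: once a_acc is close to 1, further samples cannot change the pixel.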
52
Texture Mapping: 2D image + 2D polygon -> texture-mapped polygon.
53
Texture Mapping for Volume Rendering: consider ray casting (top view).
54
Texture-Based Volume Rendering. Render every xz slice in the volume as a texture-mapped polygon. The proxy polygon samples the volume data; per-fragment RGBA (color and opacity) gives the classification results. The polygons are blended from back to front. Use proxy geometry for sampling.
55
Texture based volume rendering
56
Changing Viewing Direction. What if we change the viewing position? That is okay: we just change the eye position (or rotate the polygons and re-render), until …
57
Changing View Direction (2). Until … you are not going to see anything this way. This is because the view direction is now parallel to the slice planes. What do we do?
58
Switch Slicing Planes. What do we do? Change the orientation of the slicing planes. Now the slice polygons are parallel to the yz plane in object space.
59
Some Considerations… (5). When do we need to change the slicing orientation? When the major component of the view vector changes from y to -x.
60
Some Considerations… (6). Major component of the view vector? Given the view vector (x,y,z), get the maximal component. If x: the slicing planes are parallel to the yz plane. If y: the slicing planes are parallel to the xz plane. If z: the slicing planes are parallel to the xy plane. This is called the (object-space) axis-aligned method.
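A direct C rendering of this selection rule; the tie-breaking order is an arbitrary choice:

```c
#include <math.h>

/* returns 0, 1, or 2 for a dominant x, y, or z view-vector component */
int major_axis(float vx, float vy, float vz)
{
    float ax = fabsf(vx), ay = fabsf(vy), az = fabsf(vz);
    if (ax >= ay && ax >= az) return 0;   /* slices parallel to yz plane */
    if (ay >= az)             return 1;   /* slices parallel to xz plane */
    return 2;                             /* slices parallel to xy plane */
}
```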
61
Three copies of the data are needed: xz slices, yz slices, and xy slices. We need to reorganize the input textures for different view directions; reorganizing the textures on the fly is too time-consuming, so we prepare the texture sets beforehand.
62
Texture-Based Volume Rendering. Algorithm (using 2D texture mapping hardware):
Turn off the depth test; enable blending.
For (each slice from back to front) {
- Load the 2D slice of data into texture memory
- Create a polygon corresponding to the slice
- Assign texture coordinates to the four corners of the polygon
- Render and blend the polygon into the frame buffer (use OpenGL alpha blending)
}
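One possible fleshing-out of this loop in legacy OpenGL; tex[], nSlices, z0, and dz are assumed set up elsewhere, and the index order assumes slice 0 is the farthest from the eye:

```c
#include <GL/gl.h>

/* draw nSlices back-to-front, one 2D texture per slice, blended */
void draw_slices(const unsigned int *tex, int nSlices, float z0, float dz)
{
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_TEXTURE_2D);

    for (int i = 0; i < nSlices; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);   /* load the slice */
        float z = z0 + i * dz;                  /* proxy polygon depth */
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex3f(-1, -1, z);
        glTexCoord2f(1, 0); glVertex3f( 1, -1, z);
        glTexCoord2f(1, 1); glVertex3f( 1,  1, z);
        glTexCoord2f(0, 1); glVertex3f(-1,  1, z);
        glEnd();
    }
}
```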
63
Problem (1): Non-even sampling rate. The slice spacing along a ray gives distances d, d′, d″ with d″ > d′ > d, so sampling artifacts will become visible.
64
Problem (2): The object-space axis-aligned method can create an artifact, the popping effect. There is a sudden change of slicing direction when the view vector transits from one major direction to another, and the change in image intensity can be quite visible.
65
Solution (1): Insert intermediate slices to maintain the sampling rate. Chaoli and Liya will present the paper.
66
Solution (2): Use image-space axis-aligned slicing planes: the slicing planes are always parallel to the view plane.
67
3D Texture Based Volume Rendering
68
Image-Space Axis-Aligned. Arbitrary slicing through the volume and texture-mapping capabilities are needed. Arbitrary slicing can be computed in software in real time; this is basically polygon-volume clipping.
69
Image-Space Axis-Aligned (2). Texture mapping onto the arbitrary slices requires 3D solid texture mapping. Input texture: a stack of slices (a volume). Depending on the position of the polygon, the appropriate textures are mapped to it.
70
3D Texture Mapping. Now the input texture space is 3D: texture coordinates go from (r, s) to (r, s, t). The texture cube spans corners (0,0,0) through (1,1,1), and each vertex i of a slice polygon is assigned a coordinate (ri, si, ti).
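A minimal OpenGL 1.2 sketch of uploading the volume as one 3D texture; volumeTex, the dimensions, and voxels (tightly packed RGBA bytes) are assumed prepared elsewhere:

```c
#include <GL/gl.h>

/* upload the whole volume once; glTexImage3D is core in OpenGL 1.2 */
void upload_volume(unsigned int volumeTex, int width, int height, int depth,
                   const unsigned char *voxels)
{
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volumeTex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, width, height, depth, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, voxels);
    /* each vertex of a view-aligned slice polygon then gets its
       (r, s, t) coordinate via glTexCoord3f before glVertex3f */
}
```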
71
Pros and Cons: 2D textured object-aligned slices. + Very high performance. + High availability. - High memory requirement. - Only bilinear interpolation. - Inconsistent sampling rates. - Popping artifacts.
72
Pros and Cons: 3D textured view-aligned slices. + Higher quality. + No popping effect. - Need to compute the slicing planes for every view angle. - Limited availability.
73
Classification Implementation. A transfer function maps each scalar value v to red, green, blue, and alpha: v -> (R, G, B, A).
74
Classification Implementation (2). Pre-classification, using a color palette: glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT, GL_RGBA8, 256*4, GL_RGBA, GL_UNSIGNED_BYTE, color_palette); Post-classification, using a 1D (2D, 3D) texture: glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256*4, 0, GL_RGBA, GL_UNSIGNED_BYTE, color_palette);
75
Classification Implementation (3). Post-classification with a dependent texture: texture unit 0 samples the volume intensity v at (s, t, r); texture unit 1 uses v as the coordinate into the transfer-function texture, returning (R, G, B, A).
76
Shading. Using OpenGL 1.2: pre-compute a normalized gradient for every voxel node and store it in the color components of an RGB texture; store the light direction as the primary color; use GL_DOT3_RGB_EXT to combine the primary color and texture color with a dot product. Only a single light source is allowed.
77
Shading (2). Use a per-fragment shader. Store the pre-computed gradients in an RGBA texture; light 1 direction as constant color 0, light 1 color as primary color; light 2 direction as constant color 1, light 2 color as secondary color.
78
Shading (3). NVIDIA register combiners.
79
Non-Polygonal Isosurface. Store the voxel gradient as an RGB texture and the volume density as alpha. Use the OpenGL alpha test to discard fragments whose density does not equal the threshold, and use the gradient texture to perform shading.
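A two-call sketch of that alpha-test setup; GL_GEQUAL is a common practical choice, while GL_EQUAL would match the slide's "not equal to the threshold" wording literally:

```c
#include <GL/gl.h>

/* keep only fragments whose alpha (the stored density) reaches isoValue */
void enable_iso_test(float isoValue)
{
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, isoValue);
}
```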
80
Non-Polygonal Isosurface (2). Isosurface rendering results: no shading, diffuse, diffuse + specular.