Direct Volume Rendering Acknowledgement: based on Han-Wei Shen's lecture notes
Direct Volume Rendering Direct: no conversion to surface geometry. Three methods: ray-casting, splatting, and the 3D texture-based method.
Data Representation 3D volume data are represented by a finite number of cross-sectional slices, hence a 3D raster: N x 2D arrays = a 3D array. Each volume element (voxel) stores a data value. (If it uses only a single bit, it is a binary data set; normally we see a gray value of 8 to 16 bits per voxel.)
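A minimal C sketch of this layout, storing the stacked slices as one flat array (the struct and function names are illustrative, not from the notes):

    #include <stdlib.h>

    typedef struct {
        int w, h, d;             /* volume dimensions */
        unsigned char *data;     /* w*h*d gray values, slices stacked along z */
    } Volume;

    /* Value of voxel (x, y, z): slice z, row y, column x. */
    unsigned char voxel(const Volume *v, int x, int y, int z)
    {
        return v->data[((size_t)z * v->h + y) * v->w + x];
    }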
Data Representation (2) What is a voxel? Two definitions: (1) a voxel is a cubic cell whose single value covers the entire cubic region; (2) a voxel is a data point at a corner of the cubic cell, and the value of a point inside the cell is determined by interpolation.
Basic Idea Based on the idea of ray tracing: trace from each pixel a ray into object space, compute the color value along the ray, and assign the value to the pixel.
Viewing Ray Casting Where to position the volume and image plane What is a ‘ray’ How to march a ray
Viewing (1) 1. Position the volume. Assuming the volume dimensions are w x w x w, its center is [w/2, w/2, w/2] in local space. We position the center of the volume at the world origin (0,0,0): translate by T(-w/2, -w/2, -w/2). (Is this the data-to-world matrix, or the world-to-data matrix?)
Viewing (2) 2. Position the image plane. Assuming the distance between the image plane and the volume center is D, the center of the image plane is initially at (0, 0, -D).
Viewing (3) 3. Rotate the image plane. A new position of the image plane can be defined in terms of three rotation angles a, b, g about the x, y, z axes. Assuming the original view vector is [0,0,1], the new view vector g becomes:

    g = [0,0,1] | cos b  0  -sin b |   | 1    0       0    |   |  cos g  sin g  0 |
                |  0     1    0    | x | 0  cos a   sin a  | x | -sin g  cos g  0 |
                | sin b  0   cos b |   | 0  -sin a  cos a  |   |   0      0     1 |
Viewing (4) Let B be the volume center, S0 the initial image-plane center, and u0, v0 the initial plane axes: B = [0,0,0], S0 = [0,0,-D], u0 = [1,0,0], v0 = [0,1,0]. Now, with R the rotation matrix: S = B - D x g, u = [1,0,0] x R, v = [0,1,0] x R.
Viewing (5) Image plane: L x L pixels. Then the plane corner is E = S - L/2 x u - L/2 x v, so each pixel (i,j) has coordinates P = E + i x u + j x v. We enumerate the pixels by changing i and j (0..L-1).
Viewing (6) 4. Cast rays. Remember each pixel on the image plane has P = E + i x u + j x v, and the view vector is g = [0,0,1] x R. So the ray has the equation Q = P + k (d x g), k = 0, 1, 2, ..., where d is the sampling distance at each step. A sketch of the whole setup follows.
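A C sketch tying steps 1-4 together, assuming R, D, L, and d as defined above (the vector helpers and nsteps bound are illustrative):

    typedef struct { float x, y, z; } Vec3;

    static Vec3 vadd(Vec3 a, Vec3 b)    { return (Vec3){a.x+b.x, a.y+b.y, a.z+b.z}; }
    static Vec3 vscale(Vec3 v, float s) { return (Vec3){v.x*s, v.y*s, v.z*s}; }

    /* Row vector times 3x3 rotation matrix: v' = v x R. */
    static Vec3 vrot(Vec3 v, const float R[3][3])
    {
        return (Vec3){ v.x*R[0][0] + v.y*R[1][0] + v.z*R[2][0],
                       v.x*R[0][1] + v.y*R[1][1] + v.z*R[2][1],
                       v.x*R[0][2] + v.y*R[1][2] + v.z*R[2][2] };
    }

    /* Enumerate all L x L pixels and march each ray in steps of d. */
    void cast_rays(const float R[3][3], float D, int L, float d, int nsteps)
    {
        Vec3 g = vrot((Vec3){0, 0, 1}, R);      /* view vector          */
        Vec3 u = vrot((Vec3){1, 0, 0}, R);      /* image-plane u axis   */
        Vec3 v = vrot((Vec3){0, 1, 0}, R);      /* image-plane v axis   */
        Vec3 B = {0, 0, 0};                     /* volume center        */
        Vec3 S = vadd(B, vscale(g, -D));        /* plane center S = B - D*g */
        Vec3 E = vadd(S, vadd(vscale(u, -L/2.0f), vscale(v, -L/2.0f)));

        for (int j = 0; j < L; j++)
            for (int i = 0; i < L; i++) {
                Vec3 P = vadd(E, vadd(vscale(u, (float)i), vscale(v, (float)j)));
                for (int k = 0; k < nsteps; k++) {
                    Vec3 Q = vadd(P, vscale(g, k * d));  /* Q = P + k(d x g) */
                    (void)Q;  /* ... sample and composite at Q ... */
                }
            }
    }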
Old Methods (before 1988) Did not consider transparency; did not consider sophisticated light transport theory; were concerned with quick solutions; hence applied more or less to binary data only. Non-binary data require sophisticated classification/compositing methods!
Ray Tracing -> Ray Casting "Another" typical method from traditional graphics. Typically we only deal with primary rays, hence "ray casting": a natural image-order technique. As opposed to surface graphics, how do we calculate the ray/surface intersection? Since we have no surfaces, we need to carefully step through the volume.
Ray Casting Stepping through the volume: a ray is cast into the volume, sampling the volume at certain intervals The sampling intervals are usually equi-distant, but don’t have to be (e.g. importance sampling) At each sampling location, a sample is interpolated / reconstructed from the grid voxels popular filters are: nearest neighbor (box), trilinear (tent), Gaussian, cubic spline Along the ray - what are we looking for?
Example: Using the nearest neighbor kernel. Q = P + k x V (V = d x g). At each step k, Q is rounded off to the nearest voxel (as in the DDA algorithm). Check whether the voxel is on the boundary or not (compare against a threshold); if yes, perform shading. (As in Tuy & Tuy's paper.)
Basic Idea of Ray-casting Pipeline Data are defined at the corners of each cell (voxel). The data value inside the voxel is determined using interpolation (e.g. tri-linear; see the sketch below). Composite the colors and opacities of the samples c1, c2, c3, ... along the ray path. Can use other ray-traversal schemes as well.
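A C sketch of trilinear reconstruction at a continuous position inside a cell, reusing the Volume/voxel sketch from earlier (assumes the position stays one voxel inside the volume bounds):

    /* Trilinearly interpolate the eight cell corners around (x, y, z). */
    float sample_trilinear(const Volume *v, float x, float y, float z)
    {
        int x0 = (int)x, y0 = (int)y, z0 = (int)z;
        float fx = x - x0, fy = y - y0, fz = z - z0;

        /* interpolate along x on the four cell edges ... */
        float c00 = voxel(v,x0,y0,  z0  )*(1-fx) + voxel(v,x0+1,y0,  z0  )*fx;
        float c10 = voxel(v,x0,y0+1,z0  )*(1-fx) + voxel(v,x0+1,y0+1,z0  )*fx;
        float c01 = voxel(v,x0,y0,  z0+1)*(1-fx) + voxel(v,x0+1,y0,  z0+1)*fx;
        float c11 = voxel(v,x0,y0+1,z0+1)*(1-fx) + voxel(v,x0+1,y0+1,z0+1)*fx;

        /* ... then along y, then along z */
        float c0 = c00*(1-fy) + c10*fy;
        float c1 = c01*(1-fy) + c11*fy;
        return c0*(1-fz) + c1*fz;
    }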
Ray Traversal Schemes (intensity profiles along depth): First, Average, Max, Accumulate.
Ray Traversal - First First: extracts iso-surfaces (again!). Done by Tuy & Tuy '84.
Ray Traversal - Average Average: produces basically an X-ray picture.
Ray Traversal - MIP Max: Maximum Intensity Projection; used for Magnetic Resonance Angiograms.
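MIP is the simplest traversal to code: keep the maximum sample along the ray. A sketch reusing the Vec3 helpers and sample_trilinear from the earlier sketches (nsteps and d are assumptions):

    /* Maximum intensity projection along one ray starting at P. */
    float mip_ray(const Volume *vol, Vec3 P, Vec3 g, float d, int nsteps)
    {
        float maxval = 0.0f;
        for (int k = 0; k < nsteps; k++) {
            Vec3 Q = vadd(P, vscale(g, k * d));
            float s = sample_trilinear(vol, Q.x, Q.y, Q.z);
            if (s > maxval) maxval = s;
        }
        return maxval;
    }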
Ray Traversal - Accumulate Accumulate opacity while compositing colors: makes transparent layers visible! (Levoy '88)
Raycasting Volumetric compositing through the object (color, opacity): at each step along the ray, a sample is reconstructed with the interpolation kernel and composited front to back:

    c = cs as (1 - a) + c      (color)
    a = as (1 - a) + a         (opacity, accumulating toward 1.0)
Volume Rendering Pipeline Acquired values -> data preparation -> prepared values -> shading / classification -> voxel colors / voxel opacities -> ray-tracing / resampling -> sample colors / sample opacities -> compositing -> image pixels.
DCH (Drebin, Carpenter, Hanrahan '88) DVR Pipeline
Common Components of General Pipeline Interpolation/reconstruction Classification or transfer function Gradient/normal estimation for shading Question: are normals also interpolated?
Shading and Classification Shading: compute a color (lighting) for each data point in the volume. Classification: compute color and opacity for each data point in the volume; done by table lookup (transfer function). Levoy preshaded the entire volume: f(xi) -> C(xi), a(xi).
Shading Common shading model: the Phong model. For each sample, evaluate C = ambient + diffuse + specular = constant + Ip Kd (N.L) + Ip Ks (N.H)^n, where Ip is the emission color at the sample and N is the normal at the sample.
Gradient/Normals (Levoy 1988) Central difference per voxel:

    Gx = (f(x+1, y, z) - f(x-1, y, z)) / 2
    Gy = (f(x, y+1, z) - f(x, y-1, z)) / 2
    Gz = (f(x, y, z+1) - f(x, y, z-1)) / 2
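The same central differences in C, reusing the Volume/voxel and Vec3 sketches from earlier (assumes interior voxels, so the +/-1 neighbors exist):

    /* Central-difference gradient at voxel (x, y, z). */
    Vec3 gradient(const Volume *v, int x, int y, int z)
    {
        Vec3 g;
        g.x = (voxel(v, x+1, y, z) - voxel(v, x-1, y, z)) * 0.5f;
        g.y = (voxel(v, x, y+1, z) - voxel(v, x, y-1, z)) * 0.5f;
        g.z = (voxel(v, x, y, z+1) - voxel(v, x, y, z-1)) * 0.5f;
        return g;   /* normalize to obtain the surface normal N(xi) */
    }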
Shading (Levoy 1988) Phong shading + depth cueing:

    C(xi) = Cp ka + (Cp / (k1 + k2 d(xi))) [ kd (N(xi).L) + ks (N(xi).H)^n ]

Cp = color of parallel light source; ka / kd / ks = ambient / diffuse / specular light coefficients; k1, k2 = fall-off constants; d(xi) = distance to picture plane; L = normalized vector to light; H = normalized vector for maximum highlight; N(xi) = surface normal at voxel xi.
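A sketch of that formula for one color channel, reusing Vec3 from earlier; the clamping of the dot products to zero is an added assumption, not from the notes:

    #include <math.h>

    static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    /* One channel of C(xi); N, L, H assumed normalized, n = highlight exponent. */
    float shade_levoy(float Cp, float ka, float kd, float ks,
                      float k1, float k2, float dist,
                      Vec3 N, Vec3 L, Vec3 H, float n)
    {
        float diff = kd * fmaxf(dot3(N, L), 0.0f);
        float spec = ks * powf(fmaxf(dot3(N, H), 0.0f), n);
        return Cp * ka + Cp / (k1 + k2 * dist) * (diff + spec);
    }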
Compositing You can composite with the 'over' operator, back to front: C = background 'over' C1; C = C 'over' C2; C = C 'over' C3; ... Or use the front-to-back compositing formulas: Cout = Cin + C(x)(1 - ain); aout = ain + a(x)(1 - ain). A sketch follows.
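A C sketch of the front-to-back formulas, where C[] holds opacity-weighted (associated) sample colors and A[] sample opacities along one ray; the early termination line anticipates the Levoy 1990 improvement mentioned later:

    void composite_ray(int nsteps, const float C[], const float A[],
                       float *Cout, float *Aout)
    {
        float c = 0.0f, a = 0.0f;
        for (int k = 0; k < nsteps; k++) {
            c = c + C[k] * (1.0f - a);   /* Cout = Cin + C(x)(1 - ain) */
            a = a + A[k] * (1.0f - a);   /* aout = ain + a(x)(1 - ain) */
            if (a >= 0.95f) break;       /* early ray termination (Levoy '90) */
        }
        *Cout = c; *Aout = a;
    }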
Classification Map from numerical values to visual attributes: color and transparency. Transfer functions: a color function c(s) and an opacity function a(s).
Classification/Transfer Function Maps raw voxel values into presentable entities: color, intensity, opacity, etc. Raw data -> material (R, G, B, a, Ka, Kd, Ks, ...). May require probabilistic methods (Drebin): derive material volumes from the input, estimating the percentage of each material in every voxel; pre-computed; a.k.a. segmentation. Often look-up tables (LUTs) store the transfer functions once they are designed.
Levoy - Classification Usually we are not only interested in a particular iso-surface but also in regions of "change". Feature extraction: high opacity is assigned in regions of change. Transfer function (Levoy): saliency, surface "strength".
Opacity function (1) Goal: visualize voxels that have a selected threshold value fv. No intermediate geometry is extracted. The idea is to assign voxels that have value fv the maximum opacity (say a), and then create a smooth transition for the surrounding area from a down to 0. Levoy wants to maintain a constant thickness for the transition area.
Opacity function (2) Maintain a constant isosurface thickness. Can we assign opacity based on function value instead of distance? (It must be a local operation: we don't know where the isosurface is.) Yes, we can base it on the value distance f - fv, but we need to take the local gradient into account.
Opacity function (3) Assign opacity based on the value difference (f - fv) and the local gradient (the value fall-off rate): grad = Df/Ds. Assuming a region has a constant gradient and the isosurface transition has thickness R, we interpolate the opacity between the isosurface (F = fv, opacity = a) and the edge of the transition region (F = fv - grad * R, opacity = 0):

    opacity = a - a * (fv - f(x)) / (grad * R)
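A sketch of this ramp in C; taking the absolute value of fv - f(x) so both sides of the isosurface are treated symmetrically is an assumption beyond the one-sided formula above:

    #include <math.h>

    /* Opacity a at f = fv, falling linearly to 0 where |fv - f| = grad * R. */
    float iso_opacity(float f, float fv, float grad, float R, float a)
    {
        float dist = fabsf(fv - f);           /* value distance to isovalue */
        if (dist >= grad * R) return 0.0f;    /* outside the transition     */
        return a - a * dist / (grad * R);
    }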
DCH - Material Percentage Volumes Probabilistic classifier. The probability that a voxel has intensity I:

    P(I) = sum_i pi Pi(I)

where pi = percentage of material i in the voxel and Pi(I) = probability that material i has value I, given through statistics/physics. pi is then given by the Bayesian estimate:

    pi = Pi(I) / sum_j Pj(I)
DCH - Classification Like Levoy, assumes only two materials per voxel; that leads to material percentage volumes. From them we conclude color/opacity:

    C = sum_i pi Ci, where Ci = (ai Ri, ai Gi, ai Bi, ai)
Levoy - Improvements (Levoy 1990) Front-to-back rendering with early ray termination (stop once a ray's accumulated opacity reaches a = 0.95). Hierarchical octree data structure to skip empty cells efficiently.
Texture Based Volume Rendering
3D Texture Based Volume Rendering Best-known practical volume rendering method for rectilinear grid datasets. Real-time rendering is possible.
Interpolation of Samples Volume stored as 3D texture Viewport-aligned slices Blended back-to-front Trilinear interpolation by hardware
Classification Density values from texture map Classification via lookup table Takes place in texture mapping stage
Shading is possible Principle Precompute Gradient plus density in texture Shade first intensity (keep density!) Classification via 2D pixel texture
Texture Mapping A 2D image + a 2D polygon = a texture-mapped polygon.
Tex. Mapping for Volume Rendering Consider ray casting ... (top view)
Texture based volume rendering Use proxy geometry for sampling: render every xz slice in the volume as a texture-mapped polygon. The proxy polygons sample the volume data, with per-fragment RGBA (color and opacity) as the classification results. The polygons are blended from back to front.
Changing Viewing Direction What if we change the viewing position? That is okay: we just change the eye position (or rotate the polygons and re-render). Until ...
Changing View Direction (2) Until ... you are not going to see anything this way. This is because the view direction is now parallel to the slice planes. What do we do?
Switch Slicing Planes What do we do? Change the orientation of the slicing planes: now the slice polygons are parallel to the yz plane in object space.
Some Considerations… (5) When do we need to change the slicing orientation? When the major component of the view vector changes, e.g. from y to -x.
Some Considerations… (6) Major component of the view vector? Given the view vector (x,y,z), take the maximal component: if x, the slicing planes are parallel to the yz plane; if y, parallel to the xz plane; if z, parallel to the xy plane. This is called the (object-space) axis-aligned method; a sketch of the selection follows.
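A C sketch of picking the slice stack from the view vector's major component (the enum names are illustrative):

    #include <math.h>

    typedef enum { STACK_YZ, STACK_XZ, STACK_XY } SliceStack;

    /* Pick the stack whose planes are most perpendicular to the view vector. */
    SliceStack choose_stack(float vx, float vy, float vz)
    {
        float ax = fabsf(vx), ay = fabsf(vy), az = fabsf(vz);
        if (ax >= ay && ax >= az) return STACK_YZ;  /* x major -> yz slices */
        if (ay >= az)             return STACK_XZ;  /* y major -> xz slices */
        return STACK_XY;                            /* z major -> xy slices */
    }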
Three copies of data needed We need to reorganize the input textures for different view directions, and reorganizing the textures on the fly is too time-consuming. So we prepare the three texture sets beforehand: xz slices, yz slices, and xy slices.
Texture based volume rendering Algorithm (using 2D texture mapping hardware): Turn off the depth test; enable blending. For (each slice from back to front) { load the 2D slice of data into texture memory; create a polygon corresponding to the slice; assign texture coordinates to the four corners of the polygon; render and blend the polygon into the frame buffer (use OpenGL alpha blending) }. A sketch follows.
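A minimal fixed-function OpenGL 1.x sketch of that loop; slice_tex[] (one pre-loaded 2D texture per slice), nslices, and the unit-cube slice positions are assumptions:

    #include <GL/gl.h>

    void draw_slices(const GLuint slice_tex[], int nslices)
    {
        glDisable(GL_DEPTH_TEST);                       /* turn off depth test */
        glEnable(GL_BLEND);                             /* enable blending     */
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_TEXTURE_2D);

        for (int i = nslices - 1; i >= 0; i--) {        /* back to front */
            float z = i / (float)(nslices - 1);
            glBindTexture(GL_TEXTURE_2D, slice_tex[i]); /* the slice's texture */
            glBegin(GL_QUADS);                          /* the slice polygon   */
            glTexCoord2f(0, 0); glVertex3f(0, 0, z);
            glTexCoord2f(1, 0); glVertex3f(1, 0, z);
            glTexCoord2f(1, 1); glVertex3f(1, 1, z);
            glTexCoord2f(0, 1); glVertex3f(0, 1, z);
            glEnd();
        }
    }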
Problem (1) Non-even sampling rate: the spacing between successive slices along a viewing ray depends on the ray direction (d'' > d' > d), so sampling artifacts will become visible.
Problem (2) The object-space axis-aligned method can create an artifact: the popping effect. There is a sudden change of slicing direction when the view vector transits from one major direction to another, and the change in image intensity can be quite visible.
Solution (1) Insert intermediate slices to maintain the sampling rate.
Solution (2) Use image-space axis-aligned slicing planes: the slicing planes are always parallel to the view plane.
3D Texture Based Volume Rendering
Image-Space Axis-Aligned Arbitrary slicing through the volume and 3D texture mapping capabilities are needed. Arbitrary slicing can be computed in software in real time: it is basically polygon-volume clipping.
Image-Space Axis-Aligned (2) Texture mapping onto the arbitrary slices requires 3D solid texture mapping. The input texture is a stack of slices (the volume); depending on the position of the polygon, the appropriate texture values are mapped onto it.
3D Texture Mapping Now the input texture space is 3D: texture coordinates go from (r,s) to (r,s,t). The volume fills the unit texture cube from (0,0,0) to (1,1,1), and each slice-polygon vertex i gets a coordinate (ri, si, ti) inside it.
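A sketch using the standard OpenGL 1.2+ 3D-texture calls; the function names, the luminance format, and the per-vertex arrays are assumptions:

    #include <GL/gl.h>

    /* Upload the volume once as a 3D texture with trilinear filtering. */
    void upload_volume(GLuint tex, int W, int H, int D, const unsigned char *voxels)
    {
        glBindTexture(GL_TEXTURE_3D, tex);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8, W, H, D, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, voxels);
    }

    /* Each vertex of a view-aligned slice polygon gets an (r,s,t) coordinate. */
    void draw_slice(int nverts, const float rst[][3], const float xyz[][3])
    {
        glBegin(GL_POLYGON);
        for (int i = 0; i < nverts; i++) {
            glTexCoord3f(rst[i][0], rst[i][1], rst[i][2]);
            glVertex3fv(xyz[i]);
        }
        glEnd();
    }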
Pros and Cons 2D textured object-aligned slices + Very high performance + High availability - High memory requirement - Bi-linear interpolation - inconsistent sampling rates - popping artifacts
Pros and Cons 3D textured view-aligned slices + Higher quality + No popping effect - Need to compute the slicing planes for every view angle - Limited availability (not anymore)
Classification Implementation Per-channel lookup curves map a scalar value v to (R, G, B, A): red(v), green(v), blue(v), alpha(v).
Classification Implementation (2) Pre-classification, using a color palette:

    glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT, GL_RGBA8, 256*4,
                    GL_RGBA, GL_UNSIGNED_BYTE, color_palette);

Post-classification, using a 1D (2D, 3D) texture:

    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256*4, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, color_palette);
Classification implementation (3) Post-classification via a dependent texture: texture unit 0 yields the volume intensity v; v is then used as the coordinate (s, t, r) into texture unit 1 (the transfer function), which returns (R, G, B, A).
Shading Use per-fragment shader Store the pre-computed gradient into a RGBA texture Light 1 direction as constant color 0 Light 1 color as primary color Light 2 direction as constant color 1 Light 2 color as secondary color
Non-polygonal isosurface Store the voxel gradient as an RGB texture and the volume density as alpha. Use the OpenGL alpha test to discard fragments whose density does not equal the threshold, and use the gradient texture to perform shading. A sketch follows.
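A minimal OpenGL sketch of the alpha test; the slides describe discarding densities not equal to the threshold, and the >= comparison here is a common, more robust variant of that idea:

    #include <GL/gl.h>

    /* Keep only fragments whose density (alpha) reaches the iso-threshold. */
    void enable_iso_alpha_test(float threshold)
    {
        glEnable(GL_ALPHA_TEST);
        glAlphaFunc(GL_GEQUAL, threshold);  /* discard density < threshold */
        /* ... then render the textured slices; surviving fragments are
           shaded using the RGB-encoded gradient ... */
    }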
Non-polygonal isosurface (2) Isosurface rendering results (figure): no shading / diffuse / diffuse + specular.