CSc4730/6730 Scientific Visualization
Lecture 25: Volume Rendering. Ying Zhu, Georgia State University
Outline
Background
Volume rendering algorithms: ray casting, plane composition
Main references:
R. A. Drebin, et al., "Volume Rendering", Proceedings of ACM SIGGRAPH, 1988
M. Levoy, "Volume Rendering: Display of Surfaces from Volume Data", IEEE Computer Graphics & Applications, May 1988
Background
What is volume rendering?
A volume is a three-dimensional array of voxels, in the same way that an image is a 2D array of pixels. 3D images produced by CT, MRI, or even mathematical scalar fields are easily represented as volumes. The voxel is the basic element of the volume. A typical volume size is 128³ voxels, but any other size is acceptable. Volume rendering means rendering the voxel-based data into a viewable 2D image.
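As a concrete illustration (not part of the original slides), a volume can be held in memory as an ordinary 3D array; the size and the synthetic sphere below are arbitrary choices for the example.

```python
import numpy as np

# A 128x128x128 volume of scalar values (e.g., densities).
# Filled with a synthetic sphere just to have something to render later.
size = 128
z, y, x = np.mgrid[0:size, 0:size, 0:size]
center = (size - 1) / 2.0
radius = size * 0.3
volume = (np.sqrt((x - center)**2 + (y - center)**2 + (z - center)**2) < radius).astype(np.float32)

print(volume.shape)                  # (128, 128, 128): one scalar per voxel
print(volume.nbytes / 2**20, "MiB")  # memory footprint at 4 bytes per voxel
```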
What is volume rendering?
Volume rendering is a technique for visualizing sampled functions of three spatial dimensions by computing 2-D projections of a colored semitransparent volume.
Background The major application area of volume rendering is medical imaging. Volume data is available from X-ray Computed Tomography (CT) scanners and Positron Emission Tomography (PET) scanners. CT scanners produce three-dimensional stacks of parallel plane images, each of which consists of an array of X-ray absorption coefficients.
Background Typically X-ray CT images have a resolution of 512 × 512 pixels at 12 bits per pixel, and there will be up to 50 slices in a stack. The slices are 1-5 mm thick, and are spaced 1-5 mm apart.
Background In the two-dimensional domain, these slices can be viewed one at a time. The advantage of CT images over conventional X-ray images is that each slice contains information from only that one plane. A conventional X-ray image, on the other hand, contains information from all the planes, and the result is an accumulation of shadows that are a function of the density of the tissue, bone, organs, etc.; anything that absorbs the X-rays.
Background The availability of the stacks of parallel data produced by CT scanners prompted the development of techniques for viewing the data as a three-dimensional field rather than as individual planes. This gave the immediate advantage that the information could be viewed from any viewpoint.
Background Disadvantages of the Marching Cubes algorithm
It requires that a binary decision be made on the position of the intermediate surface that is extracted and rendered. Also, extracting an intermediate structure can cause false positives (artifacts that do not exist) and false negatives (discarding small or poorly defined features).
Background The basic goal of volume rendering is to allow the best use of the three-dimensional data and not attempt to impose any geometric structure on it. It solves one of the most important limitations of surface extraction techniques, namely the way in which they display a projection of a thin shell in the acquisition space.
Background Surface extraction techniques fail to take into account that, particularly in medical imaging, data may originate from fluid and other materials which may be partially transparent and should be modeled as such. Volume rendering doesn't suffer from this limitation.
Why Volume Rendering?
Pros: natural representation of CT/MRI images; transparency effects (fire, smoke, ...); high quality.
Cons: huge data sets; computationally expensive; cannot be embedded easily into a polygonal scene.
Rendering Methods There are two categories of volume rendering algorithms: 1. Ray casting algorithms (image order): basic ray casting; using octrees. 2. Plane composing (object order): basic slicing; shear-warp factorization; transparent textures.
Ray Casting Ray casting is a method in which, for every pixel in the image, a ray is cast through the volume. The ray intersects a line of voxels. As the ray passes through, the pixel's color is accumulated according to the voxels' color and transparency. Basic complexity = Depth × ImageSize
Ray casting "Additive reprojection" projects voxels along a certain viewing direction. Intensities of voxels along parallel viewing rays are projected to provide an intensity in the viewing plane. Voxels at a specified depth can be assigned a maximum opacity, so that the depth to which the volume is visualized can be controlled. This provides several useful features: the volume can be visualized from any direction; hidden surface removal can be implemented so that, for example, front ribs can obscure back ribs; color can be used to enhance interpretation.
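A minimal sketch of additive reprojection under the simplest assumptions: parallel projection along one volume axis, with each pixel taking the sum or mean of the voxel intensities along its ray. The volume used here is an arbitrary test input.

```python
import numpy as np

def additive_reprojection(volume, axis=0, use_mean=True):
    """Project voxel intensities along one volume axis (parallel projection).

    Every output pixel accumulates the intensities of all voxels that the
    corresponding axis-aligned ray passes through.
    """
    return volume.mean(axis=axis) if use_mean else volume.sum(axis=axis)

# Example: project a random test volume along its first axis.
vol = np.random.rand(64, 64, 64).astype(np.float32)
image = additive_reprojection(vol, axis=0)
print(image.shape)   # (64, 64): one intensity per pixel of the view plane
```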
Ray casting Additive reprojection uses a lighting model which is a combination of reflected and transmitted light from the voxel.
Ray casting The outgoing light at a voxel is made up of:
light reflected in the view direction from the light source; incoming light filtered by the voxel; any light emitted by the voxel
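A schematic sketch (not from the slides) of how those three contributions might be combined for a single voxel; the function name and the opacity weighting are illustrative assumptions.

```python
def voxel_outgoing_light(reflected, incoming, emitted, opacity):
    """Combine the three contributions listed above for one voxel.

    reflected : light from the light source reflected toward the viewer
    incoming  : light arriving from behind the voxel, attenuated (filtered)
                by the voxel's opacity
    emitted   : light emitted by the voxel itself
    """
    transmitted = incoming * (1.0 - opacity)   # filtering of the incoming light
    return reflected * opacity + transmitted + emitted

# Example: an almost-opaque voxel passes little of the incoming light through.
print(voxel_outgoing_light(reflected=0.8, incoming=0.5, emitted=0.0, opacity=0.9))
```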
Ray casting Several different algorithms for ray casting exist.
Here we describe the implementation presented by Marc Levoy. For every pixel in the output image, a ray is shot into the data volume. At a predetermined number of evenly spaced locations along the ray, the color and opacity values are obtained by interpolation. The interpolated colors and opacities are then merged with each other and with the background by compositing in either front-to-back or back-to-front order to yield the color of the pixel.
Visualization pipeline
Levoy describes his technique as consisting of two pipelines: a visualization pipeline and a classification pipeline. The output produced by these two pipelines is then combined using volumetric compositing to produce the final image. In the visualization pipeline the volume data is shaded. Every voxel in the data is given a shade using a local gradient approximation to obtain a "voxel normal." This normal is then used as input to a standard Phong shading model to obtain an intensity. The output from this pipeline is a three-component color intensity for every voxel in the data set.
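A minimal sketch of that pipeline, assuming a scalar NumPy volume: central differences approximate the gradient, the normalized gradient serves as the voxel normal, and a simple Phong-style model (with made-up material constants) turns it into a grey-scale intensity.

```python
import numpy as np

def voxel_normals(volume):
    """Approximate per-voxel normals from the local gradient (central differences)."""
    gz, gy, gx = np.gradient(volume.astype(np.float32))
    g = np.stack([gx, gy, gz], axis=-1)
    length = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.maximum(length, 1e-8)

def phong_shade(normals, light_dir=(0.0, 0.0, 1.0), view_dir=(0.0, 0.0, 1.0),
                ka=0.1, kd=0.7, ks=0.2, shininess=16.0):
    """Grey-scale Phong-style shading of every voxel given its normal
    (specular term uses the Blinn halfway-vector variant)."""
    L = np.asarray(light_dir, dtype=np.float32)
    L = L / np.linalg.norm(L)
    V = np.asarray(view_dir, dtype=np.float32)
    V = V / np.linalg.norm(V)
    H = (L + V) / np.linalg.norm(L + V)
    n_dot_l = np.clip(normals @ L, 0.0, None)
    n_dot_h = np.clip(normals @ H, 0.0, None)
    return ka + kd * n_dot_l + ks * n_dot_h ** shininess

vol = np.random.rand(32, 32, 32)
shade = phong_shade(voxel_normals(vol))   # one intensity per voxel
print(shade.shape)
```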
Classification pipeline
The classification pipeline associates an opacity with each voxel. The color parameters used in the shading model are also used in this pipeline. The classification of opacity is context dependent; the application considered in Levoy's paper is X-ray CT images, where each voxel represents an X-ray absorption coefficient.
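As an illustration of the classification step (a simplification, not Levoy's exact piecewise assignment), the sketch below maps CT values to opacities by interpolating between a few control points; the specific values and tissue labels are invented for the example.

```python
import numpy as np

def classify_opacity(ct_values,
                     tissue_values=(0.0, 400.0, 1000.0, 2000.0),
                     tissue_opacities=(0.0, 0.05, 0.4, 0.9)):
    """Piecewise-linear transfer function: CT value -> opacity alpha(X).

    tissue_values / tissue_opacities are illustrative control points
    (e.g., air, soft tissue, muscle, bone); real assignments are
    context dependent, as noted above.
    """
    return np.interp(ct_values, tissue_values, tissue_opacities)

ct_volume = np.random.uniform(0.0, 2000.0, size=(64, 64, 64))
alpha = classify_opacity(ct_volume)    # one opacity per voxel, in [0, 1]
print(alpha.min(), alpha.max())
```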
The classification of opacity (figure: classification of opacity in Levoy's paper)
The classification of opacity
We now have two values associated with each voxel: C(X), a shade calculated from a reflection model using the local gradient; and α(X), an opacity derived from the tissue type based on known CT values. The next stage, volumetric compositing, produces a two-dimensional projection of these values in the view plane.
Ray casting Rays are cast from the eye through the volume, and the values of C(X) and α(X) along each ray are combined into a single value to provide the final pixel intensity.
Ray casting For a single voxel X along a ray, the standard transparency formula is:
Cout = Cin * (1 - α(X)) + C(X) * α(X)
where: Cout is the outgoing intensity/color for voxel X along the ray, and Cin is the incoming intensity for the voxel. It is possible to interpolate from the vertex values of the voxel which the ray passes through, but it is better to consider the neighboring voxels (8 or 26) and trilinearly interpolate; this yields values that lie exactly along the ray.
Ray casting pseudocode
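The pseudocode image is not reproduced here; the following is a minimal sketch consistent with the algorithm described on the preceding slides, assuming axis-aligned parallel projection along z, precomputed per-voxel colors C and opacities alpha, nearest-neighbour sampling in place of trilinear interpolation, and front-to-back compositing with early ray termination.

```python
import numpy as np

def raycast_parallel_z(C, alpha, step=1.0, opacity_cutoff=0.98):
    """Cast one axis-aligned ray per pixel through the volume (front to back).

    C     : (nz, ny, nx) per-voxel grey-scale color/intensity
    alpha : (nz, ny, nx) per-voxel opacity in [0, 1]
    """
    nz, ny, nx = C.shape
    image = np.zeros((ny, nx), dtype=np.float32)

    for py in range(ny):
        for px in range(nx):
            color = 0.0                      # accumulated color for this pixel
            transparency = 1.0               # product of (1 - alpha) so far
            z = 0.0
            while z < nz:
                k = int(z)                   # nearest sample along the ray
                a = alpha[k, py, px]
                color += transparency * a * C[k, py, px]
                transparency *= (1.0 - a)
                if transparency < (1.0 - opacity_cutoff):
                    break                    # ray is effectively opaque: stop early
                z += step
            image[py, px] = color
    return image

# Example with a tiny random volume (real volumes would be far larger).
C = np.random.rand(16, 16, 16).astype(np.float32)
alpha = np.random.uniform(0.0, 0.2, size=(16, 16, 16)).astype(np.float32)
print(raycast_parallel_z(C, alpha).shape)    # (16, 16)
```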
Ray Casting - Performance Improvements
Use of octrees: minimizes the number of transparent voxels visited during accumulation, since a group of transparent voxels may be represented as a single node in the octree; also gives more efficient memory usage. Interleaving methods: sample every n-th (e.g., every second) voxel as long as the voxels are fully transparent (see the sketch below); sample only ¼ of the points in the image and interpolate, which is faster for interactive mode but gives lower quality. Pyramids, k-d trees, and other data structures.
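A sketch of the "sample every n voxels while transparent" idea for a single ray. This is a simplification of the octree approach: a plain loop over precomputed samples along one ray, striding faster while the samples are fully transparent.

```python
import numpy as np

def composite_ray_with_skipping(colors, alphas, skip=4):
    """Front-to-back compositing of one ray, striding faster through empty space.

    colors, alphas : 1D arrays of the samples along the ray (front first).
    skip           : stride used while the current sample is fully transparent.
                     Larger strides may jump over small features; that is the
                     speed/quality trade-off of this method.
    """
    color, transparency = 0.0, 1.0
    i, n = 0, len(alphas)
    while i < n:
        a = alphas[i]
        if a == 0.0:
            i += skip          # nothing to accumulate: jump ahead
            continue
        color += transparency * a * colors[i]
        transparency *= (1.0 - a)
        i += 1
    return color

ray_colors = np.random.rand(256)
ray_alphas = np.where(np.random.rand(256) < 0.8, 0.0, 0.3)   # mostly empty ray
print(composite_ray_with_skipping(ray_colors, ray_alphas))
```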
Ray Casting - Improving Image Quality
Multi-cast or supersampling: instead of casting one ray per pixel, cast 4 rays per pixel. Better image quality, but roughly four times longer to render (see the sketch below). Ray subdividing: used with perspective projection. As the rays spread apart from each other, the sampling of the volume becomes incomplete; the solution is to split a ray when the ray density falls too low.
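A sketch of 4-ray supersampling built on top of a per-pixel ray caster; cast_ray is a hypothetical stand-in (returning a dummy value so the snippet runs) for whatever single-ray routine is in use.

```python
import numpy as np

def cast_ray(px, py):
    """Placeholder for a real per-pixel ray caster; returns a dummy intensity."""
    return (np.sin(px) * np.cos(py) + 1.0) / 2.0

def render_supersampled(width, height):
    """Cast 4 jittered rays per pixel and average them (roughly 4x the work)."""
    offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
    image = np.zeros((height, width), dtype=np.float32)
    for y in range(height):
        for x in range(width):
            samples = [cast_ray(x + dx, y + dy) for dx, dy in offsets]
            image[y, x] = sum(samples) / len(samples)
    return image

print(render_supersampled(8, 8).shape)   # (8, 8)
```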
Plane Composing The plane composing (or "slicing") method divides the volume into slices. During the rendering process, the slices are composited one over the other, producing the image. Basic complexity = VolumeSize
Plane Composing The slicing method works best with parallel projection when the volume's faces are parallel to the view plane. In this case, the voxels can be accessed easily as planes in the volume. However, when the view plane is not parallel to the volume, a more complicated sampling algorithm is needed: since a voxel no longer projects onto exactly one pixel, a filter should be used for the composition.
Plane Composing The slices may be composited from back to front or from front to back.
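A minimal sketch of back-to-front plane compositing in the easy case described above (parallel projection, slices parallel to the view plane), with per-voxel colors C and opacities alpha as before.

```python
import numpy as np

def composite_slices_back_to_front(C, alpha):
    """Blend the volume's z-slices over one another, rearmost slice first.

    Uses the standard 'over' blend per slice:
        image = slice_color * slice_alpha + image * (1 - slice_alpha)
    """
    nz = C.shape[0]
    image = np.zeros(C.shape[1:], dtype=np.float32)
    for k in range(nz - 1, -1, -1):          # back to front
        a = alpha[k]
        image = C[k] * a + image * (1.0 - a)
    return image

C = np.random.rand(32, 64, 64).astype(np.float32)
alpha = np.random.uniform(0.0, 0.3, size=(32, 64, 64)).astype(np.float32)
print(composite_slices_back_to_front(C, alpha).shape)   # (64, 64)
```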
Plane Composing - Performance Improvements
Shear-warp factorization of the viewing matrix. Min-max information about non-transparent voxels. Again, interleaving methods: sample only half of the planes; sample only ¼ of the points in the image. Use the graphics hardware: send the volume planes one by one as transparent textures to the graphics card, back to front; or use 3D texturing to render the image.
Examples Characteristics of datasets
Jaw with skin
Jaw without skin
Ribonuclease
Head
Rendering times
Brute force
Hierarchical enumeration
Hierarchical enumeration and adaptive termination
Isosurface vs. Volume Rendering
Surface graphics Surface graphics generates computer images based on surface primitives (polygon meshes, smooth patches, ...) used as object (boundary) representations, e.g. via the Marching Cubes algorithm
Surface graphics
Advantages: abstraction of object geometry; continuous representation; works well with light-material interaction; hardware acceleration.
Disadvantages: sensitive to scene complexity; sensitive to object complexity; texture mapping can be tricky; no interior/volume information.
Volume graphics Volume graphics refers to rendering techniques that generate images directly from volume datasets (e.g. MRI, CT data). A volume-sampled plane within a volumetric terrain (image courtesy of A. Kaufman)
Volume graphics Advantages: insensitive to scene complexity; insensitive to object complexity; works well with 3D texture mapping; can model volume densities, inhomogeneous materials, and amorphous phenomena; supports constructive solid geometry (CSG).
Volume graphics Disadvantages: discrete representation generates aliasing; involves more complicated light-material interaction; requires large memory; requires intensive computation and thus high computing power; lacks geometry (boundary) information.
References
R. A. Drebin, et al., "Volume Rendering", Proceedings of ACM SIGGRAPH, 1988.
M. Levoy, "Volume Rendering: Display of Surfaces from Volume Data", IEEE Computer Graphics & Applications, May 1988.
K. Frenkel, "Volume Rendering", Communications of the ACM, 32(4), 1989.