Strategies for Direct Volume Rendering of Diffusion Tensor Fields Gordon Kindlmann, David Weinstein, and David Hart Presented by Chris Kuck

Diffusion Tensor Fields In living tissue, water molecules exhibit Brownian motion. This water motion, or diffusion, can be isotropic or anisotropic. The diffusion at each point can be described by a 3x3 symmetric real-valued matrix, which serves as a good approximation to the diffusion process. These matrices are calculated from a sequence of diffusion-weighted MRIs.

Diffusion Tensor Fields The direction and magnitude of diffusion are stored as the system's orthogonal eigenvectors and eigenvalues. Each tensor, and its corresponding eigensystem, can be represented in a concise and elegant way as an ellipsoid.
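A minimal sketch (assuming NumPy; not code from the paper) of extracting the eigensystem of one diffusion tensor and ordering it for the ellipsoid glyph:

```python
import numpy as np

# Example symmetric diffusion tensor (values are illustrative only).
D = np.array([[1.7, 0.2, 0.1],
              [0.2, 0.9, 0.0],
              [0.1, 0.0, 0.4]])

vals, vecs = np.linalg.eigh(D)        # real eigenvalues (ascending), orthonormal eigenvectors
order = np.argsort(vals)[::-1]        # re-order so that lambda1 >= lambda2 >= lambda3
vals, vecs = vals[order], vecs[:, order]

# The ellipsoid glyph has semi-axes along vecs[:, i] with lengths
# proportional to vals[i].
print(vals)
print(vecs)
```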

Why use diffusion tensor fields? Currently we have no direct way of visualizing the structure of the white matter contained in the brain. However, these intricate structures can be differentiated from the surrounding material by observing that white matter generally exhibits the property of anisotropy. In cases where highly concentrated white matter is involved, though, this property may not hold.

White Matter Visualization Gaining a detailed understanding of the white matter tracts in the brain, by visualizing their structure, could lead to advances in neuroanatomy, surgical planning, and the cognitive sciences.

Strategies
1) Barycentric Mapping
2) Lit-Tensors
3) Hue-Balls and Deflection Mapping
4) Reaction-Diffusion Textures
5) Diffusion Tensor Interpolation

Barycentric Mapping Motivation: Displaying six dimensions of data at once does not make for a useful visualization. We need some way to reduce the diffusion-tensor field. Ideally, we would like the resulting visualization to be opaque in regions of interest and transparent elsewhere.

Barycentric Mapping Before we can adequately remove isotropic areas of diffusion from the brain visualization, we must first define what anisotropy means. They used Westin et al.'s formulas to derive the amount of anisotropy in each tensor.

Isotropic/anisotropic coefficients Here c_l is the amount of linear anisotropy, c_p is the amount of planar anisotropy, c_s is the amount of isotropy, and c_l + c_p + c_s = 1.
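The trace-normalized forms commonly attributed to Westin et al. (with λ1 ≥ λ2 ≥ λ3), which satisfy c_l + c_p + c_s = 1, are:

```latex
c_l = \frac{\lambda_1 - \lambda_2}{\lambda_1 + \lambda_2 + \lambda_3},\qquad
c_p = \frac{2(\lambda_2 - \lambda_3)}{\lambda_1 + \lambda_2 + \lambda_3},\qquad
c_s = \frac{3\lambda_3}{\lambda_1 + \lambda_2 + \lambda_3}
```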

Anisotropy Index These give rise to the anisotropy index, c_a = c_l + c_p = 1 - c_s.

Barycentric space Now we define a space called barycentric space. This space is the combination of all the different types of anisotropy and isotropy. Within this space we can mark each tensor with its values of c_a, c_l, c_p, and c_s.

Barycentric Mapping After marking each element, we can create a look-up table in barycentric space that gives the opacity value to use, thus reducing the entire dataset down to one dimension.
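A minimal sketch (assuming NumPy; the table contents and function names are illustrative, not the paper's implementation) of such a barycentric opacity look-up, indexed by (c_l, c_p) with c_s implicit:

```python
import numpy as np

def barycentric_opacity(c_l, c_p, lut):
    """Look up the opacity for one tensor from a 2D table indexed by (c_l, c_p)."""
    n = lut.shape[0] - 1
    i = int(round(c_l * n))
    j = int(round(c_p * n))
    return lut[i, j]          # c_s = 1 - c_l - c_p is implicit

# Example table: transparent for isotropic tensors, opaque for anisotropic ones.
res = 64
cl, cp = np.meshgrid(np.linspace(0, 1, res), np.linspace(0, 1, res), indexing="ij")
lut = np.clip(cl + cp, 0.0, 1.0) ** 2     # opacity rises with anisotropy c_a = c_l + c_p

print(barycentric_opacity(0.8, 0.1, lut))
```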

Barycentric Mapping

In addition to a look-up table for opacity, it is possible to create a similar look-up table in barycentric space for color, so that the user can see the different types of anisotropy in the brain.

Barycentric Mapping

Lit-Tensors Now that we have reduced the data set to a reasonable amount of data, we need to depict accurate lighting as well. Their solution is a shading technique termed lit-tensors, which can indicate the type and orientation of anisotropy.

Lit-Tensors They follow these constraints to do so:
1) With linear anisotropy, lighting should be identical to that of illuminated streamlines.
2) With planar anisotropy, lighting should be identical to standard surface rendering.
3) Everywhere else, the surface normals must be smoothly interpolated between these two extremes.

Lit-Tensors This problem can be viewed as a codimension problem*. The ellipsoid that represents linear anisotropy has a codimension of 2, and the one for planar anisotropy has a codimension of 1. *D.C. Banks, "Illumination in Diverse Codimensions" Unlike Banks' work, however, this paper makes no claims of physical accuracy or plausibility.

Lit-Tensor calculation First, start with the Blinn-Phong shading model, where k_a, k_d, and k_s are the respective intensity coefficients, A is the amount of ambient light, and O is the object color. λ is replaced with r, g, or b for each color channel. L is the vector pointing to the directional light source, N is the surface normal, and H is the "half-way" vector.
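A standard Blinn-Phong form consistent with these symbol definitions (the shininess exponent n is assumed here, since it is not defined above) is:

```latex
I_{\lambda} = k_a\, A_{\lambda}\, O_{\lambda}
            + k_d\, O_{\lambda}\, (L \cdot N)
            + k_s\, (N \cdot H)^{n}
```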

Lit-Tensor calculation You can view linearly anisotropic tensors as streamlines, and because of this there is an infinite set of normals. By using the Pythagorean theorem, the dot product can be expressed in terms of a tangent T to the surface.
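The standard illuminated-streamlines identity referred to here is:

```latex
U \cdot N = \sqrt{1 - (U \cdot T)^{2}}
```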

Lit-Tensor calculation Here U is either L or H, depending on whether you are computing the diffuse or the specular term, respectively. T could be represented with either one vector (in the linear case) or two vectors spanning a plane (in the planar case), thus a new parameter is needed.

Lit-Tensor calculation Here λ1 ≥ λ2 ≥ λ3 are the sorted eigenvalues. c_θ ranges from completely linear (0) to completely planar (π/2). The dot product can then be rearranged as follows.
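One reconstruction consistent with the constraints above (it reduces to the streamline formula when c_θ = 0 and to ordinary surface shading when c_θ = π/2; the paper's exact expressions may differ) is:

```latex
c_\theta = \frac{\pi}{2}\,\frac{c_p}{c_l + c_p}
         = \frac{\pi}{2}\,\frac{2(\lambda_2 - \lambda_3)}{\lambda_1 + \lambda_2 - 2\lambda_3},
\qquad
U \cdot N = \sqrt{1 - (U \cdot e_1)^{2} - (U \cdot e_2)^{2}\,\sin^{2}(c_\theta)}
```

where e_1 and e_2 are the eigenvectors belonging to λ1 and λ2.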

Lit-Tensors

Lit-Tensors accomplish their goal of computing lighting from the direction and magnitude of each tensor. However, this alone does not provide a very intuitive way to view the lighting. Their solution was to mix in, or switch entirely to, opacity-gradient shading.

Lit-Tensors

Hue-Balls and Deflection Mapping The idea: take a tensor and reduce it to a vector, then map from this vector to a point on a color unit sphere. They accomplish this by picking some input vector and multiplying it by the tensor; the output vector is then mapped onto the color unit sphere.

Hue-Balls and Deflection Mapping

In addition to finding the color, they compute "deflection" by finding the difference between the input vector and the output vector. When there are high levels of anisotropy, the vector will be deflected by a large amount. Since they are trying to make all anisotropic areas opaque, they use this value to assign an opacity.
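A minimal sketch (assuming NumPy and colorsys; the longitude-to-hue colormap and the angle-based deflection measure are illustrative choices, not necessarily the paper's exact mappings):

```python
import numpy as np
import colorsys

def hue_ball_color(D, v):
    """Map the tensor-transformed vector D @ v to an RGB color on the unit sphere."""
    out = D @ v
    out = out / np.linalg.norm(out)
    hue = (np.arctan2(out[1], out[0]) / (2 * np.pi)) % 1.0   # longitude -> hue
    sat = 1.0 - abs(out[2])                                  # desaturate toward the poles
    return colorsys.hsv_to_rgb(hue, sat, 1.0)

def deflection_opacity(D, v, gain=4.0):
    """Opacity grows with how far D deflects the input vector v."""
    out = D @ v
    cos_angle = np.dot(v, out) / (np.linalg.norm(v) * np.linalg.norm(out))
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))         # 0 in isotropic regions
    return float(np.clip(gain * angle / np.pi, 0.0, 1.0))
```

In an isotropic region D is close to a multiple of the identity, so D @ v stays parallel to v and the opacity is near zero; in anisotropic regions the output is pulled toward the principal eigenvector and the opacity rises.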

Hue-Balls and Deflection Mapping

Reaction-Diffusion Textures Reaction-diffusion textures are based on an idea introduced by Turing, who was looking for a mathematical model to describe pigmentation patterns in the animal kingdom. The equations Turing proposed are simple in nature.

Reaction-Diffusion Textures At t = 0, a = b = 4. k is the controlling factor in the growth of the patterns. d_a and d_b control how fast the two chemicals can spread throughout the medium. The overall convergence speed is controlled by s. Finally, β is a pattern of uniformly distributed random values in a small interval centered around 0.
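One reconstruction of Turing's system consistent with these parameter descriptions (following the form popularized by Turk; the constant 12 keeps a = b = 4 an equilibrium when β = 0, and the exact placement of k and s in the paper may differ) is:

```latex
\frac{\partial a}{\partial t} = s\left(k\,(16 - ab) + d_a \nabla^{2} a\right),\qquad
\frac{\partial b}{\partial t} = s\left(k\,(ab - b - 12 - \beta) + d_b \nabla^{2} b\right)
```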

Reaction-Diffusion calculation In practice, a regular two-dimensional grid is used to approximate the Laplacian. This can be extended to three dimensions fairly easily.

Reaction-Diffusion calculation In three dimensions, the discrete stencil convolved with the grid to approximate the Laplacian is similar.
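A minimal sketch (assuming NumPy and SciPy; these are the standard 5-point and 7-point finite-difference stencils, shown as an illustration, not necessarily the exact kernels from the paper):

```python
import numpy as np
from scipy.ndimage import convolve

# 2D: 5-point Laplacian stencil.
laplacian_2d = np.array([[0,  1, 0],
                         [1, -4, 1],
                         [0,  1, 0]], dtype=float)

# 3D: 7-point Laplacian stencil (center -6, six face neighbors +1).
laplacian_3d = np.zeros((3, 3, 3))
laplacian_3d[1, 1, 1] = -6
for axis in range(3):
    for off in (0, 2):
        idx = [1, 1, 1]
        idx[axis] = off
        laplacian_3d[tuple(idx)] = 1

a = np.random.rand(64, 64)                      # one chemical's concentration field
lap_a = convolve(a, laplacian_2d, mode="wrap")  # discrete Laplacian of a
```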

Reaction-Diffusion calculation To make the texture representative of the tensor data, they use Fick's second law to determine the diffusion. The resulting equations change the discrete stencil once again.
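For reference, this is the general statement of Fick's second law for a concentration C diffusing under a tensor D (not the paper's discretized form):

```latex
\frac{\partial C}{\partial t} = \nabla \cdot \left(D\, \nabla C\right)
```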

Reaction-Diffusion visualization To visualize the data in a more convenient fashion, they remove all areas that are not anisotropic. They do this by calculating the average anisotropy; any spots below 0.5 × the average anisotropy are removed.
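A minimal sketch of this thresholding (assuming NumPy; "ca" and "texture" are illustrative names for the anisotropy-index volume and the reaction-diffusion result):

```python
import numpy as np

def mask_isotropic(texture, ca, factor=0.5):
    """Zero out texture values where anisotropy falls below factor * mean anisotropy."""
    threshold = factor * ca.mean()
    return np.where(ca >= threshold, texture, 0.0)
```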

Diffusion Tensor Interpolation There are three different ways to interpolate the tensors:
1) MRI channels
2) Tensor matrices
3) Eigensystems

MRI channel interpolation It would be more accurate to interpolate straight from the MRI channels. To do this you must first calculate a set of log image values, where A_i is the i-th channel and b is the direction-independent diffusion weight.
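As background (not necessarily the paper's exact expression), diffusion-weighted intensities obey the Stejskal-Tanner relation, so the log image values are linear in the tensor components:

```latex
A_i = A_0\, e^{-b\, \hat{g}_i^{T} D\, \hat{g}_i}
\quad\Longrightarrow\quad
\ln A_i = \ln A_0 - b\, \hat{g}_i^{T} D\, \hat{g}_i
```

where ĝ_i denotes the i-th diffusion-encoding gradient direction (notation assumed here).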

MRI channel interpolation The diffusion tensor is then computed from these log values. While this is more accurate, it is not computationally feasible to do at every interpolated sample.

Matrix Interpolation Another approach is component-wise interpolation of the tensor matrices. If the process of obtaining the log images were linear, interpolating the MRI channels and interpolating the matrices would be identical; however, it is not linear.
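A minimal sketch (assuming NumPy; names are illustrative) of component-wise linear interpolation between two tensors; trilinear interpolation in the volume applies the same idea per component at each sample:

```python
import numpy as np

def lerp_tensor(D0, D1, t):
    """Component-wise linear interpolation between two 3x3 diffusion tensors."""
    return (1.0 - t) * np.asarray(D0) + t * np.asarray(D1)
```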

Eigensystem interpolation Another approach is to solve the eigensystem at each sample point, store this volume instead of the tensor volume, and then interpolate as needed. This holds a significant advantage over MRI channel interpolation and matrix interpolation, since each eigensystem needs to be solved only once.

Eigensystem interpolation However, the orientation of the principal eigenvector is sometimes undetermined. With planar anisotropy, the directions of the two largest eigenvectors degenerate: any vector perpendicular to the minor axis is valid. Also, between any two points, how are we guaranteed that the interpolated direction remains consistent with where we started?

Evaluation The results they report were computed using barycentric maps. They came to the conclusion that even though eigensystem interpolation is dramatically cheaper, if the density of sample points is not high enough, it loses too much information and ends up being too inaccurate. They chose matrix interpolation for every calculation used in this paper.