Database-Assisted Low-Dose CT Image Restoration
Wei Xu, Sungsoo Ha and Klaus Mueller
Computer Science Lab for Visual Analytics and Imaging (VAI), Stony Brook University

Motivation
Low-dose CT. (* Images from Google.com)

Motivation
Minimize the radiation while maximizing the clarity.
Solutions:
- Enforce better quality directly in the reconstruction process: TV-CBCT [J. Xun & S. Jiang], ASD-POCS [E. Sidky & X. Pan], R-OS-SIRT [W. Xu & K. Mueller]
- Improve quality in a post-processing de-noising step: [Z. Kelm et al.], [H. Yu & G. Wang], [J. Ma & Z. Liang], [W. Xu & K. Mueller]

Post-processing De-noising Filter: NLM
Neighborhood filters: Non-Local Means (NLM).
To update a pixel x, take a weighted mean of the pixels y in its search window W, where each weight is given by the similarity between the patch P_x around x and the patch P_y around y.
Assumption: there exists a high degree of redundancy in the image, so noise can be overcome by consulting similar patches and averaging their contributions for a more stable outcome.

Post-processing De-noising Filter: NLM
Notation: x, y, z are spatial variables; W is the search window; P is the patch area around each pixel; h is the parameter controlling the smoothness; G_a is a Gaussian kernel. In its standard form the NLM estimate is
NLM(u)(x) = (1/C(x)) · Σ_{y ∈ W(x)} exp(−‖G_a ∗ (u(P_x) − u(P_y))‖² / h²) · u(y),
where C(x) is the sum of the weights over W(x).
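A minimal NumPy sketch of this per-pixel NLM update, assuming a grayscale image with values in [0, 1]; the Gaussian patch weighting G_a is omitted and the window, patch, and h values are illustrative rather than the authors' settings.

```python
import numpy as np

def nlm_pixel(img, x, y, half_win=10, half_patch=3, h=0.1):
    """Non-local means update of one interior pixel (x, y): a weighted mean
    of the pixels in its search window, weighted by patch similarity."""
    H, W = img.shape
    px = img[x-half_patch:x+half_patch+1, y-half_patch:y+half_patch+1]  # patch P_x
    num, den = 0.0, 0.0
    for i in range(max(half_patch, x - half_win), min(H - half_patch, x + half_win + 1)):
        for j in range(max(half_patch, y - half_win), min(W - half_patch, y + half_win + 1)):
            py = img[i-half_patch:i+half_patch+1, j-half_patch:j+half_patch+1]  # patch P_y
            d2 = np.mean((px - py) ** 2)      # squared patch distance
            w = np.exp(-d2 / (h * h))         # similarity weight
            num += w * img[i, j]
            den += w
    return num / den                          # den plays the role of C(x)
```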

NLM’s Results
Reduces moderate artifacts. [Image panels: Input, NLM]

NLM’s Results
But limited for extreme low-dose situations. [Image panels: Input, NLM, TVM]

What to do now…
The information in the input image is not sufficient, so extend the search space beyond the current image:
- Utilize prior scans of the same patient (Z. Kelm, H. Yu & G. Wang, Q. Xu & G. Wang, J. Ma & Z. Liang, W. Xu & K. Mueller): simple, but limited.
- Utilize a database of different patients: find reference images and incorporate them into the de-noising.

Reference-based NLM (R-NLM)
Compare the central patch of the input with patches of the reference image: the weight comes from this patch similarity, and the pixel value is taken from the reference. [Diagram: Input and Ref images]

R-NLM’s Result
Magic? But… [Image panels: Input, NLM, R-NLM, Gold Standard]

Matched Reference-based NLM (MR-NLM)
The weight is computed between the input and the matched reference (Matched-R), while the pixel value is taken from the clean reference (Clean-R). [Diagram: Input, Matched-R, Clean-R]
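To make the weight/value split concrete, a hedged sketch of the MR-NLM update along the lines of the plain NLM sketch above: the weight is computed against the matched reference patch, while the averaged value is taken from the clean reference. Names and parameter values are illustrative, not the authors' implementation.

```python
import numpy as np

def mr_nlm_pixel(inp, matched_ref, clean_ref, x, y, half_win=10, half_patch=3, h=0.1):
    """MR-NLM update of one interior pixel (x, y): weights from input vs.
    matched-reference patches, pixel values from the clean reference."""
    H, W = inp.shape
    px = inp[x-half_patch:x+half_patch+1, y-half_patch:y+half_patch+1]
    num, den = 0.0, 0.0
    for i in range(max(half_patch, x - half_win), min(H - half_patch, x + half_win + 1)):
        for j in range(max(half_patch, y - half_win), min(W - half_patch, y + half_win + 1)):
            py = matched_ref[i-half_patch:i+half_patch+1,
                             j-half_patch:j+half_patch+1]    # patch in matched reference
            w = np.exp(-np.mean((px - py) ** 2) / (h * h))   # weight from matched reference
            num += w * clean_ref[i, j]                       # value from clean reference
            den += w
    return num / den
```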

MR-NLM’s Result
Magic? Yes! [Image panels: Input, NLM, MR-NLM, Gold Standard]

Refinement to MR-NLM
The refinements developed for NLM are also applicable to MR-NLM. Two redundancy control methods are implemented (sketched below):
- Reduce search window redundancy [T. Tasdizen]: discard unrelated pixels whose patch mean and variance differ too much from those of the central patch.
- Reduce patch redundancy [P. Coupe et al.]: apply PCA to the high-dimensional patch space and project the patches to a lower-dimensional sub-space.
These controls improve not only efficiency but also accuracy.
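A rough sketch of the two redundancy controls, assuming precomputed per-pixel patch means and variances; the thresholds and the PCA dimensionality are illustrative, and scikit-learn's PCA stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA

def select_candidates(mean_x, var_x, means, vars_, mu_tol=0.9, var_tol=0.5):
    """Search-window redundancy control: keep only pixels whose patch mean
    and variance are close enough to those of the central patch."""
    mean_ok = np.abs(means - mean_x) < mu_tol * np.sqrt(var_x)
    var_ok = np.abs(vars_ - var_x) < var_tol * var_x
    return mean_ok & var_ok

def project_patches(patches, n_components=6):
    """Patch redundancy control: PCA-project the high-D patch vectors
    (one patch per row) to a lower-dimensional sub-space before the
    distance computation."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(patches)
```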

Database-Assisted CT Image Restoration (DA-CTIR) Framework

Online Database Construction (mapping the 2D image space to a high-dimensional image feature space)
- Extract salient local image structure and contextual information from each image scan.
- Learn the cluster centers of the local features of all images and label the features with them.
- Concatenate the local labels to form a global descriptor capturing the distinct salient properties of the image (the global image feature).

Local Image Feature Descriptor
In MR-NLM:
- The input image is low-dose, while the database contains only high (normal)-dose images.
- Matching is therefore between artifact-free and artifact-laden images, so the local feature descriptor should be tolerant to artifacts (streaks, noise, etc.) and small deformations.
Scale-Invariant Feature Transform (SIFT) feature:
- Captures a histogram of edge orientations in a local neighborhood.
- Scale-invariant, transform-invariant, and less sensitive to noise.

Local Image Feature Descriptor
SIFT feature descriptor:
- Computed over a 16 × 16 neighborhood divided into 4 × 4 blocks.
- In each block, an 8-bin orientation histogram of edges is computed: 4 × 4 × 8 = 128-D feature vector.
Dense SIFTs over a regularly spaced grid: better and more robust.
- With a grid spacing of 8 pixels, N = 32 × 32 (64 × 64) SIFTs for a 256² (512²) image.
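One possible way to compute such dense SIFTs, using OpenCV as an assumed implementation (the slides do not name a library): keypoints of a fixed size are placed on a regular grid and described with the standard 128-D SIFT descriptor.

```python
import cv2

def dense_sift(img, step=8, size=16):
    """Dense SIFT: one 128-D descriptor per grid point, `step` pixels apart.
    `img` is a uint8 grayscale image."""
    sift = cv2.SIFT_create()
    h, w = img.shape
    keypoints = [cv2.KeyPoint(float(x), float(y), float(size))
                 for y in range(step // 2, h, step)
                 for x in range(step // 2, w, step)]
    _, descriptors = sift.compute(img, keypoints)   # N x 128 array
    return descriptors

# For a 256 x 256 image with step=8 this yields 32 * 32 = 1024 descriptors.
```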

Learn visual words
- To describe one image, the dimension is reduced from 128N to N (N = 1024 or 4096).
- A set of local features {S_0, S_1, …, S_{N-1}} is clustered with k-means; the K cluster centers are the visual words {V_0, V_1, …, V_{K-1}}, forming the visual vocabulary V.
- Labeling: each local feature vector is assigned the index of its closest visual word.
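A minimal sketch of this step with scikit-learn's k-means; pooling the dense SIFT descriptors of all database images into one array, and the value of K, are assumptions made for illustration.

```python
from sklearn.cluster import KMeans

def learn_vocabulary(all_descriptors, K=50):
    """Cluster the pooled 128-D local features of all database images; the
    K cluster centers are the visual words V_0 .. V_{K-1}."""
    return KMeans(n_clusters=K, n_init=10, random_state=0).fit(all_descriptors)

def label_image(vocabulary, descriptors):
    """Labeling: replace each 128-D local feature of one image by the index
    of its closest visual word, reducing the image from 128N to N dims."""
    return vocabulary.predict(descriptors)   # N labels in {0, ..., K-1}
```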

Global Image Feature Descriptor
Spatial pyramid based vector quantization: starting from the set of labels at fixed grid positions,
- partition the image at multiple resolutions to increase the precision, and
- concatenate the histograms of labels from each sub-region into the global image feature: in total 26·K dimensions (K = 50 in this paper).
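A sketch of the concatenation over a pyramid of sub-regions; the exact pyramid layout that yields 26 sub-regions is not spelled out on the slide, so the 1 × 1 / 2 × 2 / 4 × 4 layout below (21 sub-regions) is only an assumption.

```python
import numpy as np

def region_histogram(labels_region, K):
    """K-bin histogram of the visual-word labels inside one sub-region."""
    return np.bincount(labels_region.ravel(), minlength=K)

def spatial_pyramid_descriptor(labels_grid, K=50, levels=(1, 2, 4)):
    """Concatenate label histograms over a pyramid of sub-regions; the
    grid layout here is an illustrative assumption."""
    h, w = labels_grid.shape
    hists = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                region = labels_grid[i*h//n:(i+1)*h//n, j*w//n:(j+1)*w//n]
                hists.append(region_histogram(region, K))
    return np.concatenate(hists)   # (1 + 4 + 16) * K dimensions with these levels
```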

Dimension (from the 2D image space to the high-D image feature space)
Scan image: 64k-D (pixels) → dense SIFT features: 128k-D per image → labels: 1k-D per image → global image feature: 1.3k-D per image.

Online Prior Search
- The target image is mapped from the 2D image space into the high-D image feature space and its M nearest references are retrieved.
- A kd-tree structure (PKD-tree) supports a fast labeling process; check our paper for details.
- The global descriptor is essentially a concatenation of histograms rather than just a high-D vector, so histogram intersection is used instead of Euclidean distance.
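A simple sketch of the retrieval step with histogram intersection as the similarity; a brute-force scan over the database descriptors is shown for clarity (the PKD-tree mentioned above accelerates the labeling, not this comparison).

```python
import numpy as np

def histogram_intersection(a, b):
    """Similarity between two concatenated-histogram descriptors."""
    return np.minimum(a, b).sum()

def find_priors(target_desc, db_descs, M=3):
    """Return the indices of the M database images most similar to the target."""
    sims = np.array([histogram_intersection(target_desc, d) for d in db_descs])
    return np.argsort(sims)[::-1][:M]
```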

Online De-noising
- Inputs: the target image, its M nearest references, and the low-dose condition.
- Registration via SIFT-flow: optical flow computed on SIFT descriptors instead of raw pixel values yields the displacement field; tolerant to noise and small deformation.
- FBP reconstruction under the low-dose condition yields the matched references.
- Refined MR-NLM (with the two redundancy controls) produces the de-noised image, falling back to regular NLM for pixels with a close-to-zero normalization factor.
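A small sketch of the per-pixel fallback rule, assuming the MR-NLM estimate, its normalization factor (the sum of reference weights), and a plain-NLM estimate have already been computed, e.g. with the per-pixel sketches above; the threshold is illustrative.

```python
import numpy as np

def blend_with_fallback(mr_nlm_value, mr_nlm_norm, nlm_value, eps=1e-8):
    """Keep the refined MR-NLM estimate unless its normalization factor is
    close to zero (no sufficiently similar reference patches), in which case
    fall back to the regular NLM estimate for that pixel."""
    use_fallback = np.asarray(mr_nlm_norm) < eps
    return np.where(use_fallback, nlm_value, mr_nlm_value)
```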

Experiments
Two image databases (not pre-aligned):
- Head scans: 15 NIH Visible Human head images and 33 CT cadaver head images.
- Human lung scans from two patients ("Give a Scan" online database).
The original reconstructions are utilized as:
- Basis for the low-dose simulation (a limited number of projections with noise, fan-beam geometry).
- Basis for generating the target scan (deformed or rotated and then reconstructed under the low-dose condition).
- Gold standard for evaluation.

Results
Head database, low-dose condition: 45 projections, SNR 15. [Image panels: Ideal, Input, Priors, DA-CTIR, Refined DA-CTIR]

Results
Lung database, low-dose condition: 60 projections, SNR 20. [Image panels: Ideal, Input, DA-CTIR, Refined DA-CTIR]

Future Work
- PCA reduction of the global image feature.
- A larger database and more experiments to verify effectiveness.
- GPU acceleration.

Questions?