Coarse-to-Fine Image Reconstruction Rebecca Willett In collaboration with Robert Nowak and Rui Castro

[Figure: Poisson data at ~14 photons/pixel; Haar tree pruning, computed in O(n) time, versus wedgelet tree pruning, computed in O(n^{11/6}) time. The MSE values for each estimate appeared only in the slide images.]

Iterative reconstruction. E-step: compute the conditional expectation of a new noisy image estimate given the data and the current image estimate. Traditional Shepp-Vardi M-step: maximum likelihood estimation. Improved M-step: complexity-regularized multiscale Poisson denoising (Willett & Nowak, IEEE-TMI '03).
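For reference, here is a minimal sketch of the classical Shepp-Vardi MLEM iteration for Poisson data y ~ Poisson(A x); the system matrix `A`, the iteration count, and the small `eps` guard are illustrative assumptions. The improved M-step described above would replace the plain multiplicative ML update with a complexity-regularized multiscale denoising step; that substitution is not shown here.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Classical Shepp-Vardi MLEM update for Poisson data y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])              # flat initial estimate
    sens = A.T @ np.ones(A.shape[0])     # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, eps)        # measured / predicted counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative ML update
    return x
```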

[Figure: wedgelet-based tomography on the Shepp-Logan phantom, comparing the MLE, Jeff Fessler's PWLS, and the wedgelet-based reconstruction.]

Tomography

A simple image model: a piecewise constant 2-D function with "smooth" edges.

Measurement model: we have access only to n noisy "pixels" of the underlying image f*. Goal: find an estimate f̂ of the original image such that the risk E||f̂ − f*||² is small.
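To make the setup concrete, here is a small illustrative simulation; the image size, edge shape, and intensity levels are arbitrary choices of mine. It builds a piecewise constant image with a smooth boundary, observes Poisson-distributed pixel counts, and reports the per-pixel MSE of the trivial estimate f̂ = y.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                    # image is n x n noisy "pixels"
yy, xx = np.mgrid[0:n, 0:n] / n

# Piecewise constant image with a smooth (sinusoidal) boundary.
f = np.where(yy > 0.5 + 0.15 * np.sin(2 * np.pi * xx), 10.0, 2.0)

y = rng.poisson(f).astype(float)          # Poisson photon counts at each pixel
mse = np.mean((y - f) ** 2)               # risk of the trivial estimate f_hat = y
print(f"per-pixel MSE of raw data: {mse:.2f}")  # ~ mean intensity, since Poisson
```

Any nontrivial estimator discussed in the talk should beat this baseline MSE, which for Poisson noise roughly equals the mean photon intensity.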

Image space

Kolmogorov metric entropy

Dudley ‘74
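The displayed formulas on these two slides are not recoverable from the transcript, but the standard definition they rest on can be recorded; this is the textbook notion, and anything beyond it is my assumption.

```latex
% Kolmogorov metric entropy of a function class F under a metric d:
% let N(\epsilon) be the smallest number of d-balls of radius \epsilon
% needed to cover F. The metric entropy is
\[
  H(\epsilon) \;=\; \log N(\epsilon),
\]
% and Dudley ('74) connects such covering numbers to bounds on the
% achievable estimation performance over F.
```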

Minimax lower bound (Korostelev & Tsybakov, '93): the risk decomposes into an approximation error term plus an estimation error term.
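For context, a hedged statement of the bound this slide most likely displayed: for 2-D "boundary fragment" images (piecewise constant with boundaries of smoothness γ, the class studied by Korostelev & Tsybakov), the minimax mean-squared error is known to scale as follows; the exact constants and any log factors from the talk are not recoverable here.

```latex
\[
  \inf_{\hat f_n}\; \sup_{f^* \in \mathcal{F}}\;
  \mathbb{E}\bigl[\|\hat f_n - f^*\|^2\bigr]
  \;\asymp\; n^{-\gamma/(\gamma+1)},
\]
% e.g., Lipschitz boundaries (\gamma = 1) give n^{-1/2}, and
% C^2 boundaries (\gamma = 2) give n^{-2/3}.
```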

Adaptively pruned partitions

Tree pruning estimation

Partitions and estimators. Sum-of-squared-errors empirical risk: R(f) = Σ_i (y_i − f(x_i))², where f is constant on each cell of the candidate partition.

Complexity regularization and the bias-variance trade-off. Complexity penalized estimator: minimize, over the set of all possible tree prunings, the empirical risk (fidelity to the data) plus a penalty proportional to |P|, the number of cells in the pruned partition (complexity).
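A minimal sketch of this complexity-penalized pruning, assuming dyadic (quadtree) partitions, constant fits on each cell, and a penalty of λ per leaf; the recursion is the usual bottom-up dynamic program, and all names here are illustrative rather than the authors' implementation.

```python
import numpy as np

def prune(y, lam):
    """Complexity-penalized quadtree pruning via bottom-up dynamic programming.

    y   : 2-D array whose side length is a power of two
    lam : penalty charged per leaf of the pruned tree
    Returns (penalized cost, piecewise constant estimate).
    """
    n = y.shape[0]
    mean = y.mean()
    leaf_cost = ((y - mean) ** 2).sum() + lam   # SSE of a constant fit + penalty
    if n == 1:
        return leaf_cost, np.full(y.shape, mean)
    h = n // 2
    quads = [(r, c) for r in (slice(0, h), slice(h, n))
                    for c in (slice(0, h), slice(h, n))]
    children = [prune(y[r, c], lam) for r, c in quads]
    split_cost = sum(cost for cost, _ in children)
    if leaf_cost <= split_cost:                 # keeping one leaf is cheaper: prune
        return leaf_cost, np.full(y.shape, mean)
    est = np.empty(y.shape)
    for (r, c), (_, child_est) in zip(quads, children):
        est[r, c] = child_est
    return split_cost, est
```

For instance, `prune(y, lam=4.0)` on the simulated counts above returns a denoised piecewise constant fit; larger λ yields coarser partitions.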

The Li-Barron bound: the expected risk is bounded by the sum of an approximation error (bias) term and an estimation error (variance) term (Li & Barron, '00; Nowak & Kolaczyk, '01).

The Kraft inequality
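The slide's display is not recoverable, but the inequality itself is standard and worth recording; exactly how the talk instantiated the code lengths L(P) is my assumption.

```latex
% Kraft inequality: if each model (pruned tree) P is assigned a prefix
% codeword of length L(P) bits, then
\[
  \sum_{P} 2^{-L(P)} \;\le\; 1 .
\]
% In complexity regularization, a penalty proportional to L(P) therefore
% acts like a prior over models, which is what makes Li-Barron-style
% risk bounds go through.
```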

Estimating smooth contours - Haar. Decorate each partition cell with a constant and measure the squared approximation error: this class of models is not well matched to the class of images.

Approximating smooth contours - wedgelets (Donoho '99).

[Figure (Donoho '99): approximating smoother contours. Original image; Haar wavelet partition (> 850 terms); wedgelet partition (< 370 terms).]

Estimating smoother contours - wedgelets. Use wedges and decorate each partition cell with a constant: the resulting squared approximation error attains the best achievable rate, matching the minimax lower bound!

The problem with estimating smooth contours:
– Haar-based estimation: simple computation, poor approximation.
– Wedgelet estimation: complex computation, good approximation.

Computational implications

A solution: coarse-to-fine model selection. The space of all signal models, from which one is selected, is very large; the two-step process first searches over a much smaller coarse model space.

Coarse-to-fine model selection, continued: the second step searches over a small subset of models identified by the first step.

C2F wedgelets: two-stage optimization. Start with a uniform partition. Stage 1: adapt the partition to the data by pruning. Stage 2: apply wedges only in the small boxes that remain (see the sketch below).
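As an illustration of stage 2 only, here is a toy wedge fit for a single surviving block; the angle and offset grids and the constant fits on each side of the line are my own simplifications of a wedgelet dictionary, not the authors' implementation.

```python
import numpy as np

def best_wedge(block, n_angles=8):
    """Fit the best of a small dictionary of wedges (linear-edge splits)
    to one block: each wedge splits the block by a line and fits a
    constant on each side."""
    n = block.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    best_err, best_fit = np.inf, None
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        proj = xx * np.cos(theta) + yy * np.sin(theta)   # signed distance to edge
        for offset in np.linspace(proj.min(), proj.max(), 5)[1:-1]:
            mask = proj <= offset
            if mask.all() or (~mask).all():
                continue                                  # degenerate split
            fit = np.where(mask, block[mask].mean(), block[~mask].mean())
            err = ((block - fit) ** 2).sum()
            if err < best_err:
                best_err, best_fit = err, fit
    if best_fit is None:                                  # fall back to a constant
        best_fit = np.full(block.shape, block.mean())
    return best_fit

```

Combined with the pruning sketch shown earlier, stage 1 would run `prune` on a coarse grid and stage 2 would call `best_wedge` only on the small unpruned blocks.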

C2F wedgelets: two-stage optimization

Error analysis of the two-stage approach (Castro, Willett & Nowak, ICASSP '04).

Controlling variance in the preview stage. Start with a coarse partition in the first stage:
– this lowers the variance of the coarse-resolution estimate;
– with high probability, the pruned coarse partition is close to the optimal coarse partition;
– boxes left unpruned at this stage indicate edges or boundaries.

Controlling bias in the preview stage. Bias becomes large if a square containing a boundary fragment is pruned in the first stage (this may happen when a boundary lies close to the side of a square). Solution: compute TWO coarse partitions, one normal and one shifted, and refine any region left unpruned in either shift. [Figure: a potential problem area in the unshifted partition is no longer a problem after the shift.]

Computational implications

Main result in action. [Figure: noisy data, stage 1 result, stage 2 result; the two-stage method runs in O(n^{7/6}) operations. MSE values appeared only in the slide images.] Compare with standard wedgelet denoising, which costs O(n^{11/6}): significant computational savings and a better result!

C2F limitations: the "ribbon" [Figure: shown at low and high resolution].

C2F and other greedy methods:
– matching pursuit
– 20 Questions (Geman & Blanchard, '03)
– boosting

More general image models: platelets, i.e., planar fits on partition cells (Willett & Nowak, IEEE-TMI '03; Willett & Nowak, Wavelets X; Nowak, Mitra & Willett, JSAC '03).

Platelet approximation theory. For twice continuously differentiable image models, the m-term approximation error decay rates are:
– Fourier: O(m^{-1/2})
– Wavelets: O(m^{-1})
– Wedgelets: O(m^{-1})
– Platelets: O(m^{-2})
– Curvelets: O(m^{-2})
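A standard back-of-envelope step the slide leaves implicit, under the usual assumption that estimation error grows linearly in the number of terms m: balancing the platelet approximation rate against the estimation error gives the achievable MSE rate (constants and any log factors are omitted).

```latex
% bias^2 + variance with m platelet terms and n observations:
\[
  \mathrm{MSE}(m) \;\approx\; C_1\, m^{-2} \;+\; C_2\, \frac{m}{n},
\]
% minimizing over m gives m \asymp n^{1/3}, hence
\[
  \mathrm{MSE} \;\asymp\; n^{-2/3}.
\]
```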

[Figure: confocal microscopy simulation, comparing the noisy image, the Haar estimate, and the platelet estimate.]

C2F limitations: complex images. "Images are edges": many images consist almost entirely of edges. Still, the C2F model remains appropriate for many applications:
– nuclear medicine
– feature classification
– temperature field estimation

C2F in multiple dimensions

Final remarks and ongoing work:
– Careful greedy methods can perform as well as exhaustive searches, both in theory and in practice.
– Coarse-to-fine estimation dramatically reduces computational complexity.
– Similar ideas can be used in other scenarios: reducing the amount of data required (e.g., active learning and adaptive sampling), and reducing the number of bits required to encode model locations in compression schemes.