Power Spectrum Estimation in Theory and in Practice
Adrian Liu, MIT

What we would like to do

Estimate bandpowers from the data with a quadratic estimator of the form

$\hat{p}_\alpha = \frac{1}{2} \sum_\beta M_{\alpha\beta}\, \mathbf{x}^T \mathbf{C}^{-1} \mathbf{C}_{,\beta}\, \mathbf{C}^{-1} \mathbf{x} - b_\alpha$

where $\mathbf{x}$ is the vector containing the measurement, $\mathbf{C}^{-1}$ is the inverse noise and foreground covariance matrix, $\hat{p}_\alpha$ is the bandpower at $k_\alpha$, the matrices $\mathbf{C}_{,\beta}$ and $M_{\alpha\beta}$ encode the "geometry" (Fourier transform and binning), and $b_\alpha$ removes the noise/residual foreground bias.
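To make the algebra concrete, here is a minimal NumPy sketch of a quadratic bandpower estimator of this form. It is a brute-force dense-matrix illustration, not the talk's implementation: the function name is ours, and the normalization matrix M and the bias subtraction are omitted for brevity.

```python
import numpy as np

def quadratic_bandpowers(x, C, C_alphas):
    """Unnormalized quadratic bandpower estimates.

    x        : data vector, shape (n,)
    C        : total (noise + foreground) covariance, shape (n, n)
    C_alphas : list of band matrices dC/dp_alpha, each shape (n, n)

    Returns q_alpha = 0.5 * x^T C^-1 C_,alpha C^-1 x for each band;
    applying M and subtracting the bias b_alpha are left out here.
    """
    Cinv_x = np.linalg.solve(C, x)  # C^-1 x without forming C^-1 explicitly
    return np.array([0.5 * Cinv_x @ (C_a @ Cinv_x) for C_a in C_alphas])
```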

The Essence of the Method

For similar methods, see also N. Petrovic & S. P. Oh, MNRAS 413, 2103 (2011) and G. Paciga et al., MNRAS 413, 1174 (2011).

The data are passed through a filter that retains rapidly fluctuating modes and suppresses smooth modes, since the foregrounds are spectrally smooth while the cosmological signal fluctuates rapidly.

[Figure: the filter applied in a high foreground scenario and in a foregroundless scenario.]
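The talk's actual filter is the inverse-variance weighting above; as a simple stand-in that shows the same qualitative behavior, the sketch below suppresses smooth modes by projecting low-order polynomials out of each line of sight. The polynomial basis and function name are our assumptions.

```python
import numpy as np

def suppress_smooth_modes(cube, order=2):
    """Remove smooth (foreground-like) spectral modes from each line of sight,
    keeping the rapidly fluctuating (signal-like) modes.

    cube : data cube of shape (nx, ny, nfreq); last axis is frequency.
    """
    nfreq = cube.shape[-1]
    nu = np.linspace(-1.0, 1.0, nfreq)
    # Polynomial basis up to `order`, orthonormalized with a QR decomposition.
    Q, _ = np.linalg.qr(np.vander(nu, order + 1, increasing=True))
    flat = cube.reshape(-1, nfreq)
    smooth = (flat @ Q) @ Q.T        # projection onto the smooth modes
    return (flat - smooth).reshape(cube.shape)
```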

Why we like this method

- Lossless: no cosmological information is lost in going from the raw data to the cleaned data.
- Smaller "vertical" error bars.
- Smaller "horizontal" error bars.
- No additive noise/foreground bias.
- A systematic framework for evaluating error statistics.

[Figure: errors using the line of sight method, Log10 T (in mK), on a scale from 1 mK to 1 K (100 mK marked). AL & Tegmark, Phys. Rev. D 83 (2011).]

[Figure: errors using the inverse variance method, Log10 T (in mK); values from <10 mK to 130 mK, with 30 mK marked. AL & Tegmark, Phys. Rev. D 83 (2011).]

BUT the method is computationally expensive: the matrix inverse scales as O(n^3) [recall the C^-1 x in the estimator], and evaluating the error statistics for a 16 × 16 × 30 dataset takes CPU-months.
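For reference, a sketch of the error-statistics bookkeeping in the quadratic-estimator formalism: the Fisher matrix F_ab = (1/2) tr[C^-1 C_,a C^-1 C_,b], and the bandpower covariance for the choice M = F^-1. The O(n^3) dense solves in this sketch are exactly the expense the slide warns about; the function name is ours.

```python
import numpy as np

def bandpower_error_statistics(C, C_alphas):
    """Fisher matrix and bandpower covariance of a quadratic estimator.

    With M = F^-1 the window functions W = M F are delta functions and
    the covariance of the estimated bandpowers is F^-1.
    """
    Cinv_Ca = [np.linalg.solve(C, C_a) for C_a in C_alphas]  # C^-1 C_,a
    nb = len(C_alphas)
    F = np.empty((nb, nb))
    for a in range(nb):
        for b in range(nb):
            F[a, b] = 0.5 * np.trace(Cinv_Ca[a] @ Cinv_Ca[b])
    return F, np.linalg.inv(F)  # Fisher matrix, bandpower covariance
```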

Quicker alternatives

- Full inverse variance: AL & Tegmark (2011)
- O(n log n) version: Dillon, AL & Tegmark (in prep.)
- FFT + FKP: Williams, AL, Hewitt & Tegmark

O(n log n) version

- Finding the matrix inverse C^-1 is the slowest step.
- Instead of forming C^-1 explicitly, use the conjugate gradient method to compute C^-1 x; this only requires the ability to multiply a vector by C (a sketch follows below).
- Multiplication is quick in a basis where the matrix is diagonal.
- We need to multiply by C = C_noise + C_sync + C_ps + …
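A minimal sketch of that idea with SciPy's conjugate gradient solver: C is never formed or inverted explicitly, only applied to vectors through a user-supplied function. The apply_C callable is assumed to implement the fast multiplication described on the next slide.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def solve_Cinv_x(apply_C, x):
    """Solve C y = x (i.e., compute y = C^-1 x) by conjugate gradients.

    apply_C : callable v -> C v; only matrix-vector products are needed,
              so no O(n^3) inverse or O(n^2) storage is required.
    """
    n = x.size
    C_op = LinearOperator((n, n), matvec=apply_C, dtype=x.dtype)
    y, info = cg(C_op, x)  # info == 0 signals convergence
    if info != 0:
        raise RuntimeError(f"conjugate gradient did not converge (info={info})")
    return y
```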

Different components are diagonal in different combinations of Fourier space:

C = C_ps + C_sync + C_noise + …

Component   Spatially   Spectrally
C_ps        Real        Fourier
C_sync      Fourier     Fourier
C_noise     Real        Real
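A hedged sketch of how the multiplication by C might be organized under this decomposition: each term is applied in the combination of bases where it is diagonal, with FFTs moving in and out of each basis. The diagonal arrays (noise_var, sync_power, ps_power) are hypothetical inputs standing in for the actual model covariances.

```python
import numpy as np

def apply_C(v, shape, noise_var, sync_power, ps_power):
    """Apply C = C_noise + C_sync + C_ps to a flattened data cube.

    shape      : (nx, ny, nfreq) of the cube
    noise_var  : diagonal of C_noise in (real spatial, real spectral)
    sync_power : diagonal of C_sync in (Fourier spatial, Fourier spectral)
    ps_power   : diagonal of C_ps in (real spatial, Fourier spectral)
    All three diagonals must be real, non-negative arrays of shape `shape`.
    """
    cube = v.reshape(shape)
    out = noise_var * cube                                   # real / real
    f_all = np.fft.fftn(cube)                                # Fourier / Fourier
    out = out + np.fft.ifftn(sync_power * f_all).real
    f_spec = np.fft.fft(cube, axis=-1)                       # real / Fourier
    out = out + np.fft.ifft(ps_power * f_spec, axis=-1).real
    return out.ravel()
```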

[Figure: comparison of foreground models, showing eigenvalue spectra of the GSM vs. our model. AL, Pritchard, Loeb & Tegmark, in prep.]

FKP + FFT version

The estimator has the same ingredients as before: a bandpower at k_α, a "geometry" step (Fourier transform and binning), and removal of the noise/residual foreground bias.

- Foreground avoidance instead of foreground subtraction. [Figure: Log10 T (in mK), scale from 1 mK to 1 K.]
- Use FFTs to get O(n log n) scaling, adjusting for non-cubic geometry using weightings.
- Use the Feldman-Kaiser-Peacock (FKP) approximation (sketched below):
  – Power estimates from neighboring k-cells are perfectly correlated and therefore redundant.
  – Power estimates from far-away k-cells are uncorrelated.
  – The approximation is encapsulated by the FKP weighting.
  – Optimal (identical to the full inverse variance method) on scales much smaller than the survey volume.
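A minimal sketch of an FKP-weighted, FFT-based bandpower estimate under those approximations. The weighting and normalization conventions here are simplified stand-ins rather than the exact pipeline, and cubic voxels are assumed.

```python
import numpy as np

def fkp_power_spectrum(cube, weights, cell_size, kbins):
    """Spherically binned power spectrum of an FKP-weighted cube.

    cube      : (nx, ny, nz) temperature cube (cubic voxels assumed)
    weights   : per-voxel FKP-style weights, e.g. w ~ 1/(sigma_n^2 + P_fid)
    cell_size : comoving side length of one voxel
    kbins     : bin edges in |k|
    """
    vol_cell = cell_size**3
    w = weights / np.sqrt(np.mean(weights**2))       # normalize the weighting
    ft = np.fft.fftn(w * cube) * vol_cell            # weighted Fourier transform
    power = (np.abs(ft)**2 / (cube.size * vol_cell)).ravel()
    # |k| for every cell of the FFT grid
    freqs = [2 * np.pi * np.fft.fftfreq(n, d=cell_size) for n in cube.shape]
    kx, ky, kz = np.meshgrid(*freqs, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    # average the power in spherical shells of |k|
    idx = np.digitize(kmag, kbins)
    return np.array([power[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(kbins))])
```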

Summary

- Full inverse variance: AL & Tegmark (2011)
- O(n log n) version: Dillon, AL & Tegmark (in prep.)
- FFT + FKP: Williams, AL, Hewitt & Tegmark