
1 Recovery of low-l temperature and polarization power spectra from Planck sky maps - the case of HFI 100, 143, and 217 GHz channels
Krzysztof M. Górski, JPL/Caltech (for the Algorithm Development Group of the US Planck Team)

2 TOD Generation and Map Making for the Planck HFI 100, 143, and 217 GHz TQU Data
Motivation (in a nutshell): find out whether residuals of the 1/f² detector noise remaining in sky maps made from TODs with correlated noise compromise Planck's ability to measure the low-l reionization features in the TQU power spectra
Input Data: generated by the Level S simulation pipeline
  Involved: G. Rocha, G. Prezeau, I. O'Dwyer, K. Huffenberger, C. Cantalupo
  TOD volume: ~1.5 TB over all channels
Map Making:
  G. Rocha ran the DPC-compliant version of Springtide to generate 3 TQU maps
  M. Ashdown helped with running Springtide to generate the internally coadded single TQU map
Power Spectrum Evaluation:
  Simple galactic cut developed by C. Cantalupo (based on a thresholded, smoothed SFD Galaxy dust emission map); ~60% of the sky retained
  KMG: HEALPix (MPI) anafast runs on full- and cut-sky maps on a Sun/Linux 64-bit, 4-CPU, 32 GB RAM machine at JPL; a single anafast run on an Nside=2048, lmax=3000 TQU map (~600 MB) takes ~4 min of CPU time
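The ~600 MB size quoted for a single Nside=2048 TQU map, and the 1.7' pixels on the next slides, follow from standard HEALPix arithmetic. A minimal sketch, assuming 4-byte floats per pixel (the sample width is an assumption, not stated on the slide):

```python
import math

nside = 2048
npix = 12 * nside**2                       # HEALPix pixel count: 12 * Nside^2

# Mean pixel side length in arcmin, from the pixel solid angle 4*pi / npix
pix_arcmin = math.degrees(math.sqrt(4 * math.pi / npix)) * 60

# One TQU map: 3 Stokes parameters, npix pixels each, 4 bytes per value, in MB
map_mb = 3 * npix * 4 / 1e6
```

With these assumptions the numbers come out at ~50.3 million pixels of ~1.7', and ~604 MB per TQU map, consistent with the slide.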

3 TOD Generation and Map Making for the Planck HFI 100, 143, and 217 GHz TQU Data
Computing: Seaborg and Bassi machines (NERSC, LBL, Berkeley)
  Seaborg: 375 MHz; Bassi: 1.9 GHz
TOD Generation:
  All runs on Seaborg; 122 CPUs used; wall-clock time per frequency channel: ~0.5 hr
  Memory (RAM): insignificant, as the Total Convolver was not used
  Disk space: ~1.5 TB for all three channels
  Work force: G. Rocha, G. Prezeau, I. O'Dwyer, K. Huffenberger, C. Cantalupo
Map Making:
  All runs on Bassi; 24×8 CPUs used; wall-clock time per frequency channel: ~ hr per pure-signal map, ~ hr per S+N map
  Memory (RAM): for Springtide, dominated by the map size, ~600 MB
  Disk space: ~1.5 TB
  Work force: G. Rocha
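The ~1.5 TB TOD volume quoted here is consistent with the sample counts on the next slide, assuming 8 bytes per sample (the sample width is an assumption, not stated in the slides):

```python
samples_per_det = 6_324_480_000       # samples per detector, from the slides
n_detectors = 32                      # 8 @ 100 GHz + 12 @ 143 GHz + 12 @ 217 GHz
total_samples = samples_per_det * n_detectors   # ~2.024e11, as quoted

# Aggregate TOD size in TiB, assuming 8-byte (double-precision) samples
tod_tb = total_samples * 8 / 1024**4
```

This gives ~1.47 TiB, matching the "~1.5 TB for all three channels" figure.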

4 Simulation Parameters and Computational Resources
Detectors: 8 @ 100 GHz (8 polarized); 12 @ 143 GHz (8 polarized, 4 unpolarized); 12 @ 217 GHz (8 polarized, 4 unpolarized)
Observations (ONLY CMB TQU signals are measured):
  6,324,480,000 samples per detector (× 32 detectors)
  Total number of samples: ~2.024 × 10^11
Noise Properties: white + 1/f², f_knee = 30 mHz, 6-day piecewise stationary
Scanning Strategy/Pointing: cycloidal, with slow (6-month) precession; satellite pointing with jitter
Resolution of the Sky Maps: HEALPix Nside = 2048: 50,331,648 pixels of 1.7' per Stokes parameter
Computing:
  Machines: Seaborg and Bassi at NERSC
  Processors: 6000 × 375 MHz; 976 × 1.9 GHz
  Run time: <3 hr wall-clock given the CPUs deployed
  Memory: dominated by the map size
  Disk: ~1.5 TB (aggregate TODs)
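The noise model on this slide (white + 1/f² with a 30 mHz knee) can be sketched by shaping white noise in Fourier space with the square root of the target PSD, P(f) = σ²(1 + (f_knee/f)²). The sampling rate and unit white-noise level below are illustrative assumptions, not values from the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
f_samp = 180.0     # Hz, assumed HFI-like sampling rate (illustrative)
f_knee = 0.030     # Hz, the 30 mHz knee frequency from the slide
n = 2**16          # toy timestream length

# Shape unit white noise by sqrt(PSD); the f = 0 bin is left unmodified
white = rng.standard_normal(n)
freqs = np.fft.rfftfreq(n, d=1.0 / f_samp)
shaping = np.ones_like(freqs)
shaping[1:] = np.sqrt(1.0 + (f_knee / freqs[1:]) ** 2)
tod = np.fft.irfft(np.fft.rfft(white) * shaping, n=n)
```

The resulting timestream has the white floor at high frequency and the 1/f² rise below the knee, which is exactly the correlated-noise residual the exercise is probing at low l.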

5 Input CMB Sky The input CMB signals for these simulations were generated by matching the spherical-harmonic coefficients up to l = 3000 to the WMAP data.
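The slide describes a_lm matched to WMAP (a constrained realization); a generic sketch of the underlying machinery is a Gaussian a_lm draw for a given C_l together with the standard power-spectrum estimator, hat(C)_l = (1/(2l+1)) Σ_m |a_lm|². The flat C_l = 1 below is a toy input, not the WMAP spectrum:

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_alm(ell, cl):
    """Draw Gaussian a_lm for one multipole l: a_l0 real, m > 0 complex."""
    a0 = rng.normal(0.0, np.sqrt(cl))
    re = rng.normal(0.0, np.sqrt(cl / 2), ell)
    im = rng.normal(0.0, np.sqrt(cl / 2), ell)
    return a0, re + 1j * im

def cl_hat(ell, a0, am):
    # hat(C)_l = (|a_l0|^2 + 2 * sum_{m>0} |a_lm|^2) / (2l + 1)
    return (a0**2 + 2 * np.sum(np.abs(am) ** 2)) / (2 * ell + 1)
```

At l = 3000 the estimator's cosmic-variance scatter, 2C_l²/(2l+1), is small, so a single draw recovers the input C_l to a few percent.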

6 Planck Scanning Strategy - Slow, 6-Month Precession

7 Galaxy Cut
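The galactic cut used here (slide 2) thresholds a smoothed SFD dust-emission map, keeping ~60% of the sky. A toy 1-D illustration of that smooth-then-threshold construction; the Gaussian "dust" profile, kernel, and threshold are illustrative assumptions, not the actual SFD-based cut, and the toy ignores the cos(b) weighting a real sky fraction needs:

```python
import numpy as np

# Toy "sky": galactic latitude from -90 to +90 deg with a dust ridge at b = 0
lat = np.linspace(-90.0, 90.0, 10001)
dust = np.exp(-0.5 * (lat / 15.0) ** 2)    # illustrative dust profile

# Smooth with a boxcar kernel, then threshold: pixels below the cut are kept
kernel = np.ones(101) / 101
smoothed = np.convolve(dust, kernel, mode="same")
mask = smoothed < 0.3                       # keep the low-emission, high-|b| sky
f_kept = mask.mean()                        # retained fraction of the toy sky
```

In the real pipeline the same idea is applied on the HEALPix sphere, with the threshold tuned so that ~60% of the sky survives the cut.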

8 HFI 100 GHz Channel

9 HFI 143 GHz Channel

10 HFI 217 GHz Channel

11 Coadded 100/143/217 GHz Channels

12 Summary
A fantastic exercise - excellent work force! Congratulations to those involved.
At present, this sort of exercise can only be run "easily" by the combined US Planck ADG and Planck CTP membership, on computing resources available in the USA.
A very encouraging outcome for doing low-l science with Planck, despite the correlated noise in the TODs, the "funny" scanning of the sky, etc.

