Slide 1
Status and requirements of PLANCK
NA4/SA1 meeting
Slide 2
Brief intro on the application
Planck: ESA satellite mission (launch 2007). LevelS: mission simulation software: foreach instrument { foreach frequency { foreach cosmology { … } } }. Some Monte Carlo jobs; links with the Virtual Observatory (VObs).
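The nested foreach on this slide amounts to a job-generation loop: one pipeline instance per (instrument, frequency, cosmology) combination. A minimal sketch, where all the parameter values are illustrative placeholders and not the mission's actual parameter set:

```python
# Sketch of the LevelS job-generation loop from the slide's nested foreach.
# Instruments, frequencies, and cosmologies below are placeholder values.
instruments = ["LFI", "HFI"]
frequencies = {"LFI": [30, 44, 70], "HFI": [100, 143, 217]}  # GHz (subset)
cosmologies = ["model_A", "model_B"]

def generate_jobs():
    """One pipeline instance per (instrument, frequency, cosmology) tuple."""
    jobs = []
    for inst in instruments:
        for freq in frequencies[inst]:
            for cosmo in cosmologies:
                jobs.append(f"levels_{inst}_{freq}GHz_{cosmo}")
    return jobs

print(len(generate_jobs()))  # 2 instruments x 3 freqs x 2 cosmologies = 12
```

Each generated name would correspond to one independent grid job, which is why the workload runs concurrently without inter-job communication.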
Slide 3
The LevelS Pipeline: chained but not parallel; stages are C/C++/Fortran (scanning in F90!), glued together by shell/Perl scripts.
Stages: CMB maps, foregrounds, scanning, noise, CMB map analysis.
A Planck simulation is a set of 70 instances of the Pipeline.
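The chained-but-not-parallel structure can be sketched as a sequential driver in which each stage consumes the previous stage's product. Stage names follow the slide; the function bodies are stand-ins for the real C/C++/F90 executables:

```python
# Sketch of one LevelS pipeline instance: stages run strictly in sequence
# (chained, not parallel), each reading the previous stage's output.
# The stage implementations are stand-ins for the real C/C++/F90 codes.

def cmb_maps(params):   return {"cmb": f"cmb({params})"}
def foregrounds(data):  return {**data, "fg": "foreground maps"}
def scanning(data):     return {**data, "tod": "time-ordered data"}  # F90 stage
def noise(data):        return {**data, "noisy_tod": "TOD + noise"}
def analysis(data):     return {**data, "maps": "reduced CMB maps"}

STAGES = [cmb_maps, foregrounds, scanning, noise, analysis]

def run_pipeline(params):
    data = params
    for stage in STAGES:  # no stage starts before the previous one finishes
        data = stage(data)
    return data

result = run_pipeline("cosmology_X")
print(sorted(result))
```

Parallelism in the full simulation comes only from running 70 such instances side by side, not from inside any single chain.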
Slide 4
Some benchmarks (per channel):
  LFI 30 GHz  (4 channels):  389 min, 34 GB
  LFI 44 GHz  (6 channels):  620 min, 45 GB
  LFI 70 GHz (12 channels):  830 min, 75 GB
  TOTAL (for LFI): 255 h, 1.3 TB
  + HFI (50 channels)
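As a sanity check, the quoted LFI totals can be recomputed from the per-channel benchmark figures on this slide (the 70 GHz label for the 12-channel row is inferred from the LFI design):

```python
# Sanity-check the per-channel LevelS benchmarks against the quoted LFI totals.
channels = {  # frequency: (channel count, minutes per channel, GB per channel)
    "30 GHz": (4, 389, 34),
    "44 GHz": (6, 620, 45),
    "70 GHz": (12, 830, 75),
}
total_min = sum(n * t for n, t, _ in channels.values())
total_gb = sum(n * g for n, _, g in channels.values())
print(f"LFI total: {total_min / 60:.0f} h, {total_gb / 1000:.1f} TB")
```

The result (~254 h, ~1.3 TB) matches the slide's "255 h, 1.3 TB" to within rounding, confirming the totals are simple channel-weighted sums.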
Slide 5
Questions:
Is it parallel? No, it runs concurrently.
Do you need MPI/parallel? Yes, in a later phase: 16/32 CPUs at the site.
What is the bandwidth? > Gigabit!
How long does it run? From 6 h up to 24 h.
Slide 6
Status of the Application
VO setup: management; technical management; VO manager; site managers; RLS; Planck user certificates; Planck sites setup; EGEE site support.
Application setup: basic gridification; first tests; IT; people (MPA!!); refined gridification; data & metadata; tests: runs; data.
Slide 7
Technical organization
VO manager (R.A.?): GT
R.A. Italy: OATs, OAPd, IASF, Uni MI, UniRM2, SISSA (GT, C. Vuerli, S. Pastore, C. Burigana, D. Maino, G. De Gasperis, C. Baccigalupi)
R.A. Spain: IFC (E. Martinez Gonzalez)
R.A. France: IAP, IN2P3/LAL, PCC/CdF (S. Du, J. Delabrouille)
R.A. UK: Inst. Astro Edinburgh (T. Mann)
R.A. Germany: MPA (M. Reinecke, T. Ensslin, T. Banday)
R.A. The Netherlands: ESA/ESTEC (K. Bennet)
Slide 8
VO status & needs
Slow startup… Technical setup: two sites (OATs + IFC); two members; problems for European users joining the VO…
Knowledge: heterogeneous! Contacts with EGEE sites; MPA is looking for an EGEE site in Munich.
Training: user tutorial; site-manager tutorial; data and replica!!! DBMS and metadata!
Slide 9
VO evolution
Users join the VO: 15-30 members; a UI in each site; a quantum-grid in each site.
Regional Area: current status → future status (~ end of summer)
  R.A. Italy: 15 CPUs, … GB → … CPUs, 1 TB (total)
  R.A. Spain: 30 CPUs, … GB → more
  R.A. France: none → 6 CPUs, … GB (total)
  R.A. UK: 2 CPUs, … GB (total)
  R.A. Germany: …
  R.A. The Netherlands: …
Slide 10
Application status
Basic gridification: customized scripts; WN environment; data handling.
Basic tests (IT): LFI (22 channels) ran > 12 times faster, but with ~5% failures.
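A rough consistency check on the "> 12 times faster" figure, assuming the 22 LFI channels run as concurrent grid jobs and the wall time is bounded by the slowest single-channel job (830 min at 70 GHz, from the earlier benchmark slide):

```python
# Rough speedup bound: all 22 LFI channels run concurrently, so wall time
# is set by the slowest single-channel job (70 GHz, 830 min per channel).
serial_hours = 255            # total serial LFI runtime from the benchmarks
slowest_job_hours = 830 / 60  # longest single-channel job
ideal_speedup = serial_hours / slowest_job_hours
print(f"ideal speedup ~ {ideal_speedup:.1f}x")
```

An ideal bound near 18x makes the observed > 12x plausible once scheduling overhead and the ~5% resubmitted failures are accounted for.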
Slide 11
Lessons learned
Massive data production on the WN (> 40 GB) requires: big disks; complex site topology (parallel/distributed FS); a program to compress / copy-and-register (RM-CR) / remove files; FITSIO with gfal/gsiftp support.
Data handling: complex data structure; 1 GB RAM.
10-15 terabytes ≈ a CD-ROM stack of 1 Eiffel Tower unit.
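The compress / copy-and-register / remove program mentioned above can be sketched as follows. This is a minimal illustration, not the project's actual tool: `lcg-cr` is the EGEE replica-management command for copying a file to a storage element and registering it in the catalog, but here it is only printed (dry run), and the storage element `se.example.org` and LFN path are placeholders:

```python
# Sketch of the compress / copy-and-register / remove cycle used to keep
# worker-node scratch disks from filling up. The lcg-cr command is only
# printed (dry run); SE name and LFN path are placeholders.
import gzip
import os
import shutil

def archive_product(path, dry_run=True):
    gz = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz, "wb") as dst:
        shutil.copyfileobj(src, dst)  # compress the FITS product
    cmd = (f"lcg-cr -d se.example.org "
           f"-l lfn:/grid/planck/{os.path.basename(gz)} "
           f"file:{os.path.abspath(gz)}")
    if dry_run:
        print(cmd)                    # a real run would execute this command
    os.remove(path)                   # free WN scratch space
    return gz
```

The key ordering is compress first, register the compressed copy, and only then delete the original, so a failed transfer never loses data.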
Slide 12
Application needs
Massive storage of > 5 TB;
data storing/replica (automatic!): Tier or not Tier?
A common user data front-end: web portal or data browser;
DSE support (metadata) for Grid and non-Grid data: G-DSE; external DB;
more than 200 CPUs.
Slide 13
Application deployment: status & strategy
Software deployment: dynamic; licences: software, compilers.
MPI support intra-site: 16/32 CPUs; specific I/O libraries.
Grid-UI submission tools: test (summer 2005); data browsing; network & storage tests (end 2005).
Slide 14
Grid added values (…not just CPUs)
Data sharing! Distributed data for distributed users; replica and security; a common interface to software and data; collaborative work for simulations and reduction: less time, less space, less frustration…
Slide 15
What we have… what we need
We have: the VO, RLS, and RB.
We need: basic Grid-FS browsing tools (grid-ls, grid-cp, etc.); a Beowulf/parallel system usable as a single WN; DB connection + Web Services; an easier WN environment setup (we are astrophysicists…); documentation!!!!
We are young and we need time to grow… Discuss our needs for EGEE-2 later?