Status and requirements of PLANCK

Presentation transcript:

Status and requirements of PLANCK NA4/SA1 meeting

Brief intro on the application. Planck: ESA satellite mission (2007); LevelS: mission simulation s/w: foreach instrument { foreach frequency { foreach cosmology { … } } }; some Monte Carlo jobs; links with VObs. (See the sketch of this loop structure below.)
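Below is a minimal shell sketch of that nested loop, just to make the combinatorics concrete. The frequency lists match the LFI/HFI bands, but the driver script name, its options, and the cosmology labels are illustrative placeholders, not the actual LevelS tools.

```bash
#!/bin/bash
# Minimal sketch of the LevelS "foreach instrument/frequency/cosmology" loop.
# run_pipeline.sh, its options and the cosmology labels are placeholders.

COSMOLOGIES="lcdm_fiducial lcdm_alt"        # illustrative cosmology labels

for inst in LFI HFI; do
  case $inst in
    LFI) freqs="30 44 70" ;;                # LFI bands, GHz
    HFI) freqs="100 143 217 353 545 857" ;; # HFI bands, GHz
  esac
  for freq in $freqs; do
    for cosmo in $COSMOLOGIES; do
      # each combination is an independent pipeline run (Monte Carlo jobs
      # repeat it with different noise/CMB realisations)
      ./run_pipeline.sh --instrument "$inst" --frequency "$freq" --cosmology "$cosmo"
    done
  done
done
```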

The Planck simulation is a set of 70 instances of the pipeline. The LevelS pipeline: chained but not parallel; stages are C/C++/Fortran (scanning is F90!), chained by shell/perl scripts; stages: CMB maps, foregrounds, scanning, noise, CMB map analysis (see the sketch below).
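A sketch of one such chained instance as a shell script, assuming hypothetical stage executables and a common parameter file; the real LevelS stage names differ.

```bash
#!/bin/bash
# One pipeline instance: stages run strictly one after the other (chained,
# not parallel). Stage executables and the parameter file are placeholders.
set -e                     # abort the chain if any stage fails

PAR=pipeline.par           # common parameter file for all stages

./cmb_maps     "$PAR"      # simulated CMB maps
./foregrounds  "$PAR"      # foreground components
./scanning     "$PAR"      # scanning simulation (the F90 stage)
./noise        "$PAR"      # instrumental noise
./map_analysis "$PAR"      # analysis of the simulated maps
```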

Some benchmarks. LFI 30 GHz (4 channels): 389 min, 34 GB; LFI 44 GHz (6 channels): 620 min, 45 GB; LFI 70 GHz (12 channels): 830 min, 75 GB; TOTAL (for LFI): 255 h, 1.3 TB; plus HFI (50 channels).

Questions: Is it parallel? No, it runs concurrently (see the submission sketch below). Do you need MPI/parallel? Yes, in a later phase: 16/32 CPUs within a site. What is the BW? > Gigabit! How long does it run? From 6 h up to 24 h.
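Concurrency at the grid level simply means submitting the 70 independent instances as separate jobs. A sketch using the LCG-2/EGEE UI commands of the time follows; the VO name, JDL file names and job-id file are illustrative.

```bash
#!/bin/bash
# Submit the 70 independent pipeline instances as separate grid jobs
# (concurrent, not parallel). Assumes an LCG-2/EGEE UI; the VO name,
# JDL files and job-id list are illustrative placeholders.

for jdl in jdl/pipeline_*.jdl; do
  edg-job-submit --vo planck -o planck_jobids.txt "$jdl"
done

# check progress later with:
#   edg-job-status -i planck_jobids.txt
```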

Status of the application. VO setup: management; technical management; VO manager; site managers; RLS; Planck user certificates; Planck sites setup; EGEE site support. Application setup: basic gridification; first tests; IT; people (MPA!!); refined gridification; data & metadata; tests: runs; data.

Technical organization. VO manager/(RA?): GT.
R.A. Italy: OATs, OAPd, IASF, Uni Mi, UniRM2, SISSA (GT, C. Vuerli, S. Pastore, C. Burigana, D. Maino, G. DeGasperis, C. Baccigalupi).
R.A. Spain: IFC (E. Martinez Gonzalez).
R.A. France: IAP, IN2PL/LAL, PCC/CdE (S. Du, J. Delabruille).
R.A. UK: Inst. Astro Edinburgh (T. Mann).
R.A. Germany: MPA (M. Reineke, T. Esslin, T. Banday).
R.A. The Netherlands: ESA/ESTEC (K. Bennet).

VO status & needs. Slow startup…: technical setup; two sites: OATS + IFC; two members; European users have problems joining the VO… Knowledge: heterogeneous! Contacts with EGEE sites; MPA looks for EGEE in Munich. Training: user tutorial; site manager tutorial; data and replica!!! DBMS and metadata!

VO evolution. Users join the VO: 15-30 members; UI in each site; quantum-grid in each site. Resources per Regional Area, current status / future status (~ end of summer):
R.A. Italy: 15 CPUs, 300 GB / 15 + 45 CPUs, 1 TB (total).
R.A. Spain: 30 CPUs, 240 GB / more.
R.A. France: none / 6 CPUs, 360 GB (total).
R.A. UK: 2 CPUs, 240 GB (total).
R.A. Germany: no figures given.
R.A. The Netherlands: no figures given.

Application status. Basic gridification: customized scripts; WN environment; data handling. Basic tests: IT, LFI (22 channels): > 12 times faster!!! but ~5% failures.

Lessons learned. Massive data production on the WN (> 40 GB): big disks; complex site topology (parallel/distributed FS); a compress / RM-CR / remove-file program (see the sketch below); FITSIO with fgal/gsiftp support. Data handling: complex data structure; 1 GB RAM. 10-15 terabytes ~ 20,000 CD-ROMs ~ 1 Eiffel Tower unit.
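A minimal sketch of such a compress / copy-and-register / remove step on the worker node, assuming the LCG data-management client (lcg-cr) is available; the SE host, LFN path and VO name are illustrative.

```bash
#!/bin/bash
# Keep WN scratch space bounded: compress each stage output, copy it to a
# storage element and register it in the catalogue, then delete it locally.
# SE host, LFN path and VO name are illustrative placeholders.
set -e

OUTPUT=$1                                   # e.g. a FITS file produced by a stage
SE=se.example.oats.inaf.it                  # placeholder storage element
LFN="lfn:/grid/planck/simulations/$(basename "$OUTPUT").gz"

gzip "$OUTPUT"                              # compress
lcg-cr --vo planck -d "$SE" -l "$LFN" \
       "file://$PWD/${OUTPUT}.gz"           # copy to the SE and register
rm -f "${OUTPUT}.gz"                        # free local disk
```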

Application needs. Massive storage of more than ~5 TB; data storing/replication (automatic! see the sketch below); Tier or not Tier? User common data front-end: web portal or data browser; DSE support (metadata) for Grid/non-Grid data: G-DSE; external DB; more than 200 CPUs.
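For the "automatic replication" item, what is currently scripted by hand looks roughly like the sketch below (lcg-rep over a list of LFNs, with illustrative SE and VO names); the requirement is that the middleware does this automatically.

```bash
#!/bin/bash
# Hand-rolled replication that the application would like automated:
# give every registered file a second replica close to the other sites.
# Destination SE, VO name and the LFN list file are illustrative.

DEST_SE=se.example.ifca.es                  # placeholder destination SE

while read -r lfn; do
  lcg-rep --vo planck -d "$DEST_SE" "$lfn"  # create and register a new replica
done < planck_lfns.txt
```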

Application deployment: status & strategy. Software deployment: dynamic; licences: SW, compilers. MPI support intra-site: 16/32 CPUs; specific I/O libs. Grid-UI submission tools: test (summer 2005); data browsing, network & storage tests (end 2005). (An MPI submission sketch follows.)
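For the intra-site MPI requirement, a submission could look roughly like the sketch below on the middleware of the time (JobType "MPICH" plus a node count in the JDL); the exact attribute names varied between releases, and the executable and sandbox files are illustrative.

```bash
#!/bin/bash
# Sketch of an intra-site MPI submission (16 CPUs). The JDL attributes
# (JobType "MPICH", NodeNumber) follow LCG-2 era conventions and may differ
# between middleware releases; executable and file names are placeholders.

cat > levels_mpi.jdl <<'EOF'
JobType       = "MPICH";
NodeNumber    = 16;
Executable    = "scanning_mpi";
Arguments     = "pipeline.par";
InputSandbox  = {"scanning_mpi", "pipeline.par"};
OutputSandbox = {"std.out", "std.err"};
StdOutput     = "std.out";
StdError      = "std.err";
EOF

edg-job-submit --vo planck -o planck_mpi_jobids.txt levels_mpi.jdl
```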

Grid added values (…not just CPUs). Data sharing! Distributed data for distributed users; replica and security; common interface to SW and data; collaborative work for simulations and reduction: less time, less space, less frustration…

What we have… what we need. VO and RLS; RB; basic Grid-FS browsing tools: grid-ls, grid-cp, etc. (see the sketch below); Beowulf/parallel system as one WN; DB connection + WS; easier WN environment setup (we are astrophysicists…); documentation!!!! We are young and we need time to grow… Discuss our needs for EGEE-2 later?
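A sketch of what the requested grid-ls / grid-cp convenience wrappers might look like on top of the existing clients (edg-gridftp-ls, lcg-cp); the SE host, VO name and paths are illustrative.

```bash
#!/bin/bash
# Possible shape of the requested convenience wrappers, built on existing
# LCG-2 era clients. SE host, VO name and paths are illustrative.

# grid-ls: list a directory on a storage element over GridFTP
grid_ls() {
  edg-gridftp-ls "gsiftp://se.example.oats.inaf.it/$1"
}

# grid-cp: fetch a catalogued file to the local disk by its LFN
grid_cp() {
  lcg-cp --vo planck "lfn:$1" "file://$PWD/$(basename "$1")"
}

# example usage
grid_ls /storage/planck/simulations
grid_cp /grid/planck/simulations/map_lfi30_run017.fits.gz
```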