Competences & Plans at GSI

Competences & Plans at GSI
P.Malzacher@gsi.de, K.Schwarz@gsi.de
LSDMA Kickoff Workshop, 22 March 2012, KIT

GSI: a German national lab for heavy-ion research. FAIR: Facility for Antiproton and Ion Research, ~2018.

GSI computing today (ALICE T2/T3, HADES, CBM/HADES, PANDA, NuSTAR, APPA):
- ~14,000 cores
- ~5.5 PB Lustre
- ~9 PB archive capacity

FAIR computing 2018 (CBM, PANDA, NuSTAR, APPA, LQCD):
- 300,000 cores
- 40 PB disk
- 40 PB archive

Boundary conditions: open source and community software, budget commodity hardware, support for different communities, scarce manpower, MAN & FAIR Grid/Cloud.

(Diagram: FAIR data flows, with links labelled at the 1 Tb/s to TB/s scale.)

Distribution of Competences
- Energy-efficient high-throughput cluster
- Cluster file system (Lustre)
- Tape archive (gStore)
- MAN, Grid, Cloud
- Experiment frameworks
- Parallel programming

Two HEP-like experiments (CBM & PANDA) and two research fields with many smaller experiments; ~3000 scientists from ~50 countries.

"General Purpose" HPC system, optimized for high I/O bandwidth

Compute farms:
- 4,000 cores, Ethernet-based; 10,000 cores, InfiniBand-based
- Debian, SGE; shared usage based on priorities (GSI, FAIR, ALICE Tier-2/3)

Lustre:
- 3 MDS in an HA cluster, 48 cores, 300 GB RAM
- 180 file servers with 3.5 TB RAM as OSS
- 7,000 disks, > 5.5 PB raw capacity
- I/O capacity about 1 Tb/s, more to come (InfiniBand-based)

The ALICE Tier-2 at GSI

We provide a mixture of:
- a Tier 2 integrated in the AliEn Grid, mainly simulation
- a Tier 3: analysis trains, fast interactive response via PROOF based on PoD, calibration & alignment of the TPC
- integrated in the standard GSI batch farm (GSI, FAIR)

We are able to readjust the relative size of the different parts on request.

Main contributions from Germany: Uni Heidelberg, Uni Frankfurt, Uni Münster, Uni Darmstadt, GSI (TPC, TRD, HLT). Resources: GridKa Tier-1, GSI Tier-2/3, LOEWE-CSC.
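As an illustration of the PROOF-on-Demand (PoD) based interactive analysis mentioned above, here is a minimal ROOT macro sketch; the connection string, tree name and file paths are placeholders, and it assumes a PoD server with workers has already been started.

```cpp
// attach_proof.C -- sketch only: attach to a PoD-backed PROOF cluster and
// process a chain on the workers. Paths, tree name and selector are
// placeholders; it assumes pod-server and workers are already running.
void attach_proof()
{
   // Open a PROOF session via the locally running PoD server.
   TProof *proof = TProof::Open("pod://");
   if (!proof) return;

   // Input data: a chain of files on the Lustre cluster file system.
   TChain chain("esdTree");
   chain.Add("/lustre/alice/data/run0001.root");
   chain.Add("/lustre/alice/data/run0002.root");

   // Run a user selector in parallel on the PROOF workers.
   chain.SetProof();
   chain.Process("MySelector.C+");
}
```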

New (dark) datacenter, ready 2014:
- Minimal floor footprint; space for 800 19" racks
- Planned cooling power 6 MW (building supports more); internal temperature 30 °C
- Minimal construction cost (about 11 M€); fast to build
- Autonomous rack power management; back-door rack cooling; smoke detection in every rack
- Waste heat used for building heating
- Shortest cable lengths; water pressure below 1 bar, avoiding risk of spills
- Accepts any commercial 19" architecture; unmatched packing and power density

Proof of concept: test container with PUE < 1.1, in operation for 1.5 years. Mini Cube (~100 racks, 1.2 MW cooling power): 1Q2012.
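For context (not stated on the slide): PUE is the ratio of total facility power to IT equipment power, so PUE < 1.1 means that cooling and power distribution add less than 10% on top of the IT load:

\[ \mathrm{PUE} = \frac{P_{\text{facility}}}{P_{\text{IT}}} < 1.1 \quad\Rightarrow\quad P_{\text{facility}} - P_{\text{IT}} < 0.1\,P_{\text{IT}} . \]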

First step towards the FAIR HPC link:
- 12 x 10 Gb/s GSI – LOEWE-CSC
- 2 x 10 Gb/s GSI – Uni Frankfurt
- 4 x 10 Gb/s GSI – Uni Mainz

gStore: a lightweight tape access system
Resources: 2 x 3584-L23 ATL, 8 x TS1140 tape drives, 17 data movers

Max storage capacities:
- experiment data:                8.8 PB
- copy of raw data:               1.3 PB
- overall data mover disk cache:  0.26 PB

Overall gStore I/O bandwidth:
- DM disk <-> tape (SAN):         2.0 GB/s
- DM disk <-> clients (LAN):      5.0 GB/s
- DAQ online clients -> DM:       1.0 GB/s
  (incl. copy to Lustre:          0.5 GB/s)

Exploration of Virtualization & Cloud Computing

GSI as cloud provider:
- SClab, the cloud prototype at GSI: ~200 cores, demo production for ALICE

GSI as cloud user:
- FLUKA jobs for radiation-safety studies on Amazon and the Frankfurt cloud

(Diagram: IaaS cloud running VMs; CMS server, DRMS master, client, worker, application.)

The FairRoot framework is used by CBM, PANDA and parts of NUSTAR.

Framework core (common developments, always in close contact with the experiments):
- Virtual MC (Geant3, Geant4, FLUKA via G3VMC, G4VMC, FlukaVMC), Geometry
- Run Manager, Event Generator, Magnetic Field, Detector base, IO Manager, Tasks
- Runtime Database (RTDataBase): SQL; Conf, Par, Geo
- ROOT files: Hits, Digits, Tracks
- Application: cuts, processes, display, track propagation

CBM code: detectors STS, TRD, TOF, RICH, ECAL, MVD, ZDC, MUCH; event input ASCII, UrQMD, Pluto; track finding, digitizers, hit producers; field maps (dipole map, active map, constant field).

PANDA code: detectors STT, MUO, TOF, DCH, EMC, MVD, TPC, DIRC; event input ASCII, EVT, DPM; track finding, digitizers, hit producers; field maps (dipole map, solenoid map, constant field).
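To make the run-manager/event-generator structure concrete, a minimal FairRoot-style simulation macro sketch follows. FairRunSim, FairPrimaryGenerator and FairBoxGenerator are FairRoot classes; the media file name and generator settings are illustrative placeholders, and a real macro would also register the experiment-specific detector modules.

```cpp
// sim.C -- minimal FairRoot-style simulation macro (illustrative sketch).
void sim(Int_t nEvents = 10)
{
   FairRunSim *run = new FairRunSim();
   run->SetName("TGeant3");           // choose the Virtual MC engine
   run->SetOutputFile("sim.root");    // hits etc. go to a ROOT file
   run->SetMaterials("media.geo");    // placeholder media definition file
   // Experiment-specific geometry would be added here via run->AddModule(...).

   // Primary generator: a simple box generator, 5 pi+ per event.
   FairPrimaryGenerator *primGen = new FairPrimaryGenerator();
   FairBoxGenerator *boxGen = new FairBoxGenerator(211, 5);
   boxGen->SetPRange(0.1, 2.0);       // momentum range in GeV/c
   boxGen->SetThetaRange(0., 90.);    // polar angle range in degrees
   primGen->AddGenerator(boxGen);
   run->SetGenerator(primGen);

   run->Init();
   run->Run(nEvents);
}
```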

Challenges for reconstruction (UrQMD, central Au+Au @ 25 AGeV; particles per collision of two gold nuclei; results up to now)

Open charm, e.g. D0 (cτ = 127 μm) -> K- π+:
- typical signal multiplicities: O(10^-6)
- no "easy" trigger signatures

Fast online event reconstruction, no hardware trigger:
- time to reconstruct an event: 1-10 ms
- at 10^7 collisions/s -> 10^4 - 10^5 cores
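The core-count estimate follows directly from the numbers on the slide: with an interaction rate of 10^7 collisions/s and 1-10 ms of reconstruction time per collision, the number of concurrently busy cores is

\[ N_{\text{cores}} \approx R_{\text{int}} \times t_{\text{reco}} = 10^{7}\,\mathrm{s}^{-1} \times (10^{-3}\text{--}10^{-2})\,\mathrm{s} = 10^{4}\text{--}10^{5} . \]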

Exploring Different Architectures and Parallelization Strategies: e.g. PANDA tracking on CPU and GPU, PROOF, PoD

(Diagram: CPUs 1-4 each build track candidates for events 1-4; a shared GPU performs the track fitting and returns the tracks.)

Throughput (tracks/s):
No. of processes       50 tracks/event   2000 tracks/event
1 CPU                  1.7e4             9.1e2
1 CPU + GPU (Tesla)    5.0e4             6.3e5
4 CPU + GPU (Tesla)    1.2e5             2.2e6
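To make the pattern in the diagram concrete, here is a minimal C++ sketch (not PANDA code): four producer threads build track candidates event by event and push them into a shared queue, while a single consumer fits them in batches, with a plain CPU function standing in for the GPU fitting kernel.

```cpp
// Sketch of the CPU->GPU pipeline from the diagram: N producer threads build
// track candidates per event, one consumer fits them in batches (the batch
// fit is a CPU placeholder for the GPU kernel). Not PANDA code.
#include <condition_variable>
#include <cstddef>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct TrackCandidate { int eventId; /* hits, seed parameters, ... */ };

std::queue<TrackCandidate> gQueue;
std::mutex gMutex;
std::condition_variable gCv;
bool gDone = false;

// Placeholder for the GPU track fit: fit one batch of candidates.
void fitBatch(const std::vector<TrackCandidate> &batch) {
    std::printf("fitted batch of %zu candidates\n", batch.size());
}

void producer(int cpuId, int nEvents) {
    for (int ev = 0; ev < nEvents; ++ev) {
        // "Track finding": produce some candidates for this event.
        std::lock_guard<std::mutex> lock(gMutex);
        for (int i = 0; i < 50; ++i) gQueue.push({cpuId * 1000 + ev});
        gCv.notify_one();
    }
}

void consumer(std::size_t batchSize) {
    std::vector<TrackCandidate> batch;
    for (;;) {
        std::unique_lock<std::mutex> lock(gMutex);
        gCv.wait(lock, [] { return !gQueue.empty() || gDone; });
        while (!gQueue.empty() && batch.size() < batchSize) {
            batch.push_back(gQueue.front());
            gQueue.pop();
        }
        bool finished = gDone && gQueue.empty();
        lock.unlock();
        if (batch.size() == batchSize || (finished && !batch.empty())) {
            fitBatch(batch);          // would be a GPU kernel launch
            batch.clear();
        }
        if (finished) break;
    }
}

int main() {
    std::thread fitter(consumer, 200);
    std::vector<std::thread> cpus;
    for (int c = 0; c < 4; ++c) cpus.emplace_back(producer, c, 10);
    for (auto &t : cpus) t.join();
    { std::lock_guard<std::mutex> lock(gMutex); gDone = true; }
    gCv.notify_one();
    fitter.join();
}
```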

Remote access to data and computing via the Grid: example ALICE and PANDA Grid with AliEn. (Map: PANDA Grid sites.)

www.eofs.org

http://www.crisp-fp7.eu/ WP16: Common User Identity System WP19: Distributed Data Infrastructure

Long-term goal: distributed data management for FAIR

Short/medium-term projects:
- Storage federation: remote Lustre, SE on top of Lustre, global xroot redirector for ALICE and FAIR (see the access sketch below)
- Cloud storage: interface to existing environments (ROOT, AliEn, ...)
- Optimized job/data placement: hot vs. unused data
- Check for corrupted data sets
- Reading from several SEs, 3rd-party copy
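As a sketch of what the global xroot redirector gives the user (the hostname and file path below are placeholders, not real GSI endpoints): the client opens a single logical URL and the redirector forwards the request to whichever storage element actually holds the file. TFile::Open with a root:// URL is standard ROOT/XRootD functionality.

```cpp
// read_via_redirector.C -- ROOT macro sketch; hostname and path are placeholders.
void read_via_redirector()
{
   // One logical entry point; the redirector resolves the actual SE.
   TFile *f = TFile::Open(
      "root://xrootd-redirector.example.gsi.de//alice/data/run0001.root");
   if (!f || f->IsZombie()) {
      printf("could not open file via the redirector\n");
      return;
   }
   f->ls();      // list the file contents
   f->Close();
}
```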

Summary: FAIR Computing

FAIR will serve about 20 scientific collaborations from four major fields of research and application. The basis of the software systems of the larger FAIR experiments is FairRoot, a simulation, reconstruction and analysis framework. It is designed to be accessible for new as well as experienced users, to cope with future developments, and to enhance the synergy between the different experiments. Parallelization of the software on all levels is essential to harvest the performance gains of future architectures.

Data recording for CBM and PANDA requires a detector read-out without hardware triggers, relying exclusively on online event filtering. The first layer of the system consists of specialized processing elements such as GPUs, Cell processors or FPGAs combined with COTS computers, connected by an efficient high-speed network. Major building blocks of the infrastructure are a new datacenter for online computing, storage systems and an offline cluster, linked to a metropolitan area network connecting the compute facilities of the surrounding universities and research centers, and embedded in an international Grid/Cloud infrastructure.