SuperB – Naples Site – Dr. Silvio Pardi

The Napoli Group is currently engaged in three main tasks related to computing in SuperB:
- Fast Simulation
- Electron Cloud Effects in SuperB
- SuperB Computing Model development

Fast Simulation
The Napoli Group provides a large set of computational resources to the superbvo.org VO for FastSim production. The shared resources come from the two main computing infrastructures in Naples:
- SCoPE, the supercomputing centre of the University Federico II
- The TIER2 of ATLAS – INFN
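
As an illustration of how FastSim jobs might be steered to these resources through the grid, the following is a minimal, hypothetical gLite JDL sketch; the wrapper script name, its argument and the CE-matching expression are illustrative assumptions, not the collaboration's actual production configuration.

    [
      // Hypothetical FastSim job description; all file names are illustrative
      Executable          = "run_fastsim.sh";   // assumed wrapper around the FastSim application
      Arguments           = "1000";             // e.g. number of events, purely illustrative
      StdOutput           = "fastsim.out";
      StdError            = "fastsim.err";
      InputSandbox        = {"run_fastsim.sh"};
      OutputSandbox       = {"fastsim.out", "fastsim.err"};
      VirtualOrganisation = "superbvo.org";
      // Restrict matching to the Naples computing elements listed in the summary table below
      Requirements        = RegExp("scope.unina.it", other.GlueCEUniqueID);
    ]

Such a description would typically be submitted through the gLite WMS (e.g. with glite-wms-job-submit), and the computing elements visible to the VO can be listed with lcg-infosites --vo superbvo.org ce.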

SCoPE Resources and Configuration
The SCoPE supercomputing centre shares with the superbvo.org VO a set of 768 cores out of the 2400 available.
Hardware: DELL blade solution, with 2 quad-core Xeons per node and Infiniband interconnection between the nodes.
Software: Scientific Linux 4.6 – scheduled update to 5.X in the next month.
Grid configuration: two sites
- GRISU-UNINA: 512 cores and 16 TB of disk
- UNINA-EGEE: 256 cores and 4 TB of disk

TIER2 of ATLAS Resources and Configuration
The TIER2 of ATLAS shares with the superbvo.org VO 506 cores and two storage elements for a total of 400 TB.
Hardware: DELL blade solution, E4 and ASUS twin servers.
Software: Scientific Linux 5.X for ATLAS production, gLite 5.2.
Grid configuration: one site
- INFN-NAPOLI-ATLAS: 506 cores and 400 TB of disk space, mostly used by the ATLAS Collaboration and shared with other VOs; about 1 TB is available for SuperB.

SUMMARY TABLE

Infrastructure  Site               Computing Element        CPU (cores)  Disk    OS
SCoPE           GRISU-UNINA        grisuce.scope.unina.it   512          16 TB   SL 4.X (SL 5.X next month)
SCoPE           UNINA-EGEE         ce.scope.unina.it        256          4 TB    SL 4.X (SL 5.X next month)
ATLAS TIER2     INFN-NAPOLI-ATLAS  atlasce01.na.infn.it     506          400 TB  SL 5.X

Electron Cloud Effects in SuperB
In collaboration with the Frascati Group and with Dr. Theo Demma, the Napoli Group shares the whole SCoPE farm, composed of 2400 cores, to support the simulation activity studying the Electron Cloud Effects in SuperB. The simulations are based on a parallel MPI code and the FFTW library. Each run uses about 100 cores and takes advantage of the low-latency Infiniband network between the nodes. Access to the infrastructure is provided entirely through the gLite interface, via the local Virtual Organization unina.it.
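
The production e-cloud code (CMAD) is not reproduced here; purely as an illustration of the MPI + FFTW pattern it relies on, the C sketch below distributes a 2D FFT of a charge-density grid across MPI ranks using the standard FFTW 3.3 MPI interface. The grid size and the dummy initialisation are illustrative assumptions.

    #include <stdio.h>
    #include <mpi.h>
    #include <fftw3-mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        fftw_mpi_init();

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Illustrative 2D grid; a field solver would transform the beam/cloud charge density. */
        const ptrdiff_t N0 = 256, N1 = 256;
        ptrdiff_t local_n0, local_0_start;
        ptrdiff_t alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                                       &local_n0, &local_0_start);

        fftw_complex *rho = fftw_alloc_complex(alloc_local);
        fftw_plan plan = fftw_mpi_plan_dft_2d(N0, N1, rho, rho, MPI_COMM_WORLD,
                                              FFTW_FORWARD, FFTW_ESTIMATE);

        /* Fill the locally owned rows with a dummy charge distribution. */
        for (ptrdiff_t i = 0; i < local_n0; ++i)
            for (ptrdiff_t j = 0; j < N1; ++j) {
                rho[i * N1 + j][0] = (double)(local_0_start + i) / N0;  /* real part */
                rho[i * N1 + j][1] = 0.0;                               /* imaginary part */
            }

        fftw_execute(plan);   /* distributed forward transform, communication over MPI/Infiniband */

        if (rank == 0)
            printf("Transform done on a %ld x %ld grid\n", (long)N0, (long)N1);

        fftw_destroy_plan(plan);
        fftw_free(rho);
        fftw_mpi_cleanup();
        MPI_Finalize();
        return 0;
    }

Built with something like mpicc -std=c99 ecloud_fft.c -lfftw3_mpi -lfftw3 -lm (file name illustrative) and launched with mpirun over on the order of 100 cores, this exercises the same MPI/Infiniband layer used by the real simulations.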

The interaction between the beam and the cloud is evaluated at 40 Interaction Points around the SuperB HER (LNF option) for different values of the electron cloud density. More realistic simulations, taking into account the full SuperB lattice (~1300 IPs), are currently running on several hundred cores of the Grid-SCoPE cluster. MORE RESULTS IN THE PRESENTATION OF THEO DEMMA

Input parameters (LNF conf.) for CMAD:
- Beam energy E [GeV]: 6.7
- Circumference L [m]: 1200
- Bunch population Nb: 4.06x10^10
- Bunch length σz [mm]: 5
- Horizontal emittance εx [nm rad]: 1.6
- Vertical emittance εy [pm rad]: 4
- Hor./vert. betatron tune Qx/Qy: 40.57/17.59
- Synchrotron tune Qz: 0.01
- Hor./vert. av. beta function: 25/25
- Momentum compaction α: 4.04e-4

[Plot: PRELIMINARY results for SuperB – vertical emittance growth induced by e-cloud, for cloud densities ρ = 3x10^11, 4x10^11 and 5x10^11.]

The SuperB Computing Model
We assume a distributed system, largely in the south of Italy, with Data Centers in Napoli, Catania and Bari (at least).
- The system in Napoli will be hosted in the SCoPE supercomputing centre (33 racks, 1 MW power)
- The centre already hosts EGEE III and ATLAS hardware
- The SCoPE hardware is currently used for SuperB testing
- 10 racks are ready for SuperB hardware (TIER-1)

The SuperB Computing Model
Testing planned for the SuperB hardware in Napoli (a simple throughput check for the MPI path is sketched below):
- Full 10 Gb/s network for data transfer and MPI (single switch, 24 SFP+ fiber ports)
- Dual 1 Gb/s network for general interconnection
- Dedicated 10 Gb/s fiber link to the GARR GigaPoP (later X-PoP of GARR-X)
- Data over IP at 10 Gb/s (FCoE, iSCSI)
- Start of testing in July 2010
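
As one possible way to exercise the 10 Gb/s data-transfer/MPI path once the hardware is in place, the following C sketch measures point-to-point throughput between two MPI ranks with a simple ping-pong exchange. The message size (16 MB) and repetition count are illustrative assumptions; this is not part of the official test plan.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "Run with at least 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        const int nbytes = 1 << 24;   /* 16 MB message, illustrative */
        const int reps   = 50;
        char *buf = malloc(nbytes);
        memset(buf, 0, nbytes);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < reps; ++i) {
            if (rank == 0) {
                MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0) {
            double gbit = 2.0 * reps * (double)nbytes * 8.0 / 1e9;  /* traffic in both directions */
            printf("Effective point-to-point throughput: %.2f Gbit/s\n", gbit / (t1 - t0));
        }
        free(buf);
        MPI_Finalize();
        return 0;
    }

Compiled with mpicc -std=c99 pingpong.c -o pingpong (file name illustrative) and run with the two ranks placed on different nodes attached to the new switch, the reported figure gives a rough measure of the usable bandwidth of the 10 Gb/s link.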

The SCoPE Center

The Hardware
- 300 worker nodes (dual-processor, quad-core, 64-bit) in a blade solution, with a low-latency Infiniband network – more than 2400 cores
- Storage: 150 TB raw
- 40 servers for collective services and test nodes
- 33 racks
- Internal refrigeration system at the state of the art

Racks 3, 4, 5, 9, 10, 11: 3 blade enclosures with 16 blades each (dual-processor, quad-core), 48 blades per rack, plus 4 1U servers for collective services.
Rack 7: 6 2U servers (storage elements) + 2 servers for collective services, 7 disk shelves.
Rack 8: 4 2U servers (storage elements), 9 disk shelves, and the controller that manages the storage.
Rack 12: 1 blade enclosure with 16 blades.
[Rack map legend: AVAILABLE, ATLAS, SCOPE]