1
SuperB – Naples Site
Dr. Silvio Pardi
2
The Napoli Group is currently engaged in three main tasks related to computing in SuperB:
- Fast Simulation
- Electron Cloud Effects in SuperB
- SuperB Computing Model development
3
Fast Simulation
The Napoli Group provides a large set of computational resources to the superbvo.org VO for the FastSim production. The shared resources come from the two main computing infrastructures in Naples:
- SCoPE, the supercomputing centre of the University Federico II
- The ATLAS TIER2 of INFN
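As a rough illustration of how a FastSim production job reaches these shared resources through the grid middleware, the sketch below writes a gLite JDL description bound to the superbvo.org VO and hands it to the standard glite-wms-job-submit command. This is only a minimal sketch: the wrapper script name, its arguments and the CE requirement are hypothetical placeholders, and in practice such a submission would normally be scripted directly in the shell rather than in C.

    /* Minimal sketch (not the production submission machinery): write a JDL
     * for a hypothetical FastSim job and submit it through the gLite WMS.
     * "run_fastsim.sh" and its arguments are placeholders.                 */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        FILE *jdl = fopen("fastsim.jdl", "w");
        if (!jdl) { perror("fastsim.jdl"); return 1; }
        fprintf(jdl,
            "[\n"
            "  Executable          = \"run_fastsim.sh\";\n"
            "  Arguments           = \"--events 10000\";\n"
            "  StdOutput           = \"fastsim.out\";\n"
            "  StdError            = \"fastsim.err\";\n"
            "  InputSandbox        = {\"run_fastsim.sh\"};\n"
            "  OutputSandbox       = {\"fastsim.out\", \"fastsim.err\"};\n"
            "  VirtualOrganisation = \"superbvo.org\";\n"
            "  Requirements = RegExp(\"scope.unina.it\", other.GlueCEUniqueID);\n"
            "]\n");
        fclose(jdl);

        /* Requires a valid VOMS proxy, e.g. voms-proxy-init -voms superbvo.org */
        return system("glite-wms-job-submit -a fastsim.jdl");
    }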
4
SCoPE: Resources and configuration
The SCoPE supercomputing centre shares with the superbvo.org VO 768 of its 2400 available cores.
Hardware: DELL blade solution, with 2 quad-core Xeon processors per node and Infiniband interconnection between the nodes
Software: Scientific Linux 4.6 – update to 5.X scheduled for next month
Grid configuration: two sites
- GRISU-UNINA: 512 cores and 16 TB of disk
- UNINA-EGEE: 256 cores and 4 TB of disk
5
TIER2 of ATLAS: Resources and configuration
The ATLAS TIER2 shares with the superbvo.org VO 506 cores and two storage elements for a total of 400 TB.
Hardware: DELL blade solution, E4 and ASUS twin servers
Software: Scientific Linux 5.X for ATLAS production, gLite 5.2
Grid configuration: one site
- INFN-NAPOLI-ATLAS: 506 cores and 400 TB of disk space, mostly used by the ATLAS Collaboration and shared with other VOs. About 1 TB available for SuperB.
6
SUMMARY TABLE
Infrastructure   Site                CPU (Computing Element)          Disk     OS
SCOPE            GRISU-UNINA         512  (grisuce.scope.unina.it)    16 TB    SL 4.X – SL 5.X next month
SCOPE            UNINA-EGEE          256  (ce.scope.unina.it)         4 TB     SL 4.X – SL 5.X next month
ATLAS TIER2      INFN-NAPOLI-ATLAS   506  (atlasce01.na.infn.it)      400 TB   SL 5.X
7
Electron Cloud Effects in SuperB
In collaboration with the Frascati Group and with Dr. Theo Demma, the Napoli Group shares the entire SCoPE farm, composed of 2400 cores, to support the simulation activity studying Electron Cloud Effects in SuperB. The simulations are based on a parallel MPI code and the FFTW library. Each run uses about 100 cores and takes advantage of the Infiniband low-latency network between the nodes. Access to the infrastructure is provided entirely through the gLite interface, via the local Virtual Organization unina.it.
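The production code here is Theo Demma's CMAD, which is not reproduced in these slides. Purely as an illustration of the MPI + FFTW pattern the text refers to, the sketch below distributes a 2-D grid across MPI ranks and performs the kind of parallel FFT a grid-based e-cloud field solver relies on; the grid size and the placeholder charge density are arbitrary assumptions.

    /* Illustrative sketch only (not CMAD): a 2-D complex FFT distributed
     * over MPI ranks with FFTW's MPI interface, the building block of
     * grid-based Poisson solvers in e-cloud simulations.
     * Build: mpicc ecloud_fft.c -lfftw3_mpi -lfftw3 -lm                    */
    #include <fftw3-mpi.h>

    int main(int argc, char **argv) {
        const ptrdiff_t N0 = 512, N1 = 512;       /* global grid, arbitrary */
        ptrdiff_t alloc_local, local_n0, local_0_start, i, j;

        MPI_Init(&argc, &argv);
        fftw_mpi_init();

        /* Each rank owns a slab of local_n0 rows starting at local_0_start */
        alloc_local = fftw_mpi_local_size_2d(N0, N1, MPI_COMM_WORLD,
                                             &local_n0, &local_0_start);
        fftw_complex *rho = fftw_alloc_complex(alloc_local);

        fftw_plan fwd = fftw_mpi_plan_dft_2d(N0, N1, rho, rho, MPI_COMM_WORLD,
                                             FFTW_FORWARD, FFTW_ESTIMATE);

        /* Placeholder charge density: in a real solver this would be the
           electron-cloud charge deposited on the grid                      */
        for (i = 0; i < local_n0; ++i)
            for (j = 0; j < N1; ++j) {
                rho[i * N1 + j][0] = (double)((local_0_start + i) + j);
                rho[i * N1 + j][1] = 0.0;
            }

        fftw_execute(fwd);   /* rho now holds its Fourier transform         */
        /* ...multiply by the Green's function and transform back here...   */

        fftw_destroy_plan(fwd);
        fftw_free(rho);
        MPI_Finalize();
        return 0;
    }

With a slab decomposition like this, each of the ~100 cores of a run simply owns fewer grid rows as ranks are added, and the all-to-all communication inside the transform is what benefits from the Infiniband low-latency network mentioned above.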
8
The interaction between the beam and the cloud is evaluated at 40 Interaction Points around the SuperB HER (LNF option) for different values of the electron cloud density. More realistic simulations, taking into account the full SuperB lattice (~1300 IPs), are currently running on several hundred cores of the Grid-SCoPE cluster. MORE RESULTS IN THE PRESENTATION OF THEO DEMMA

Input parameters (LNF configuration) for CMAD
Beam energy E [GeV]                  6.7
Circumference L [m]                  1200
Bunch population N_b                 4.06x10^10
Bunch length σ_z [mm]                5
Horizontal emittance ε_x [nm rad]    1.6
Vertical emittance ε_y [pm rad]      4
Hor./vert. betatron tune Q_x/Q_y     40.57/17.59
Synchrotron tune Q_z                 0.01
Hor./vert. average beta function     25/25
Momentum compaction                  4.04e-4

[Figure: PRELIMINARY results for SuperB, vertical emittance growth induced by the e-cloud, shown for electron cloud densities of 3x10^11, 4x10^11 and 5x10^11.]
9
The SuperB Computing Model
We assume a distributed system, largely in the south of Italy, with Data Centers in Napoli, Catania and Bari (at least).
- The system in Napoli will be hosted in the SCoPE supercomputing centre (33 racks, 1 MW power)
- The centre already hosts EGEE III and ATLAS hardware
- The SCoPE hardware is currently used for SuperB testing
- 10 racks ready for SuperB hardware (TIER-1)
10
The SuperB Computing Model
Testing planned for the SuperB hardware in Napoli:
- Full 10 Gb/s network for data transfer and MPI (single switch, 24 SFP+ fiber ports) – see the ping-pong sketch below
- Dual 1 Gb/s network for general interconnection
- Dedicated 10 Gb/s fiber link to the GARR GigaPoP (later X-PoP of GARR-X)
- Data over IP at 10 Gb/s (FCoE, iSCSI)
- Start of testing in July 2010
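As one concrete way the 10 Gb/s data-transfer and MPI paths could be exercised during this testing, the sketch below is a plain MPI ping-pong micro-benchmark between two nodes. The message size and iteration count are arbitrary assumptions, and dedicated benchmark suites would normally be used alongside something this simple.

    /* Minimal MPI ping-pong sketch to estimate round-trip latency and
     * effective throughput between two nodes; run with 2 ranks mapped onto
     * two different nodes, e.g. mpirun -np 2 ./pingpong
     * Message size and iteration count are arbitrary choices.            */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(int argc, char **argv) {
        const int bytes = 1 << 20;          /* 1 MiB message */
        const int iters = 200;
        int rank, size;
        double t0, t1;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < 2) {
            if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
            MPI_Abort(MPI_COMM_WORLD, 1);
        }

        char *buf = malloc(bytes);
        memset(buf, 0, bytes);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (int i = 0; i < iters; ++i) {
            if (rank == 0) {
                MPI_Send(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0) {
            double rtt_s = (t1 - t0) / iters;                 /* round trip */
            double gbps  = 2.0 * bytes * 8.0 / rtt_s / 1e9;   /* both legs  */
            printf("avg round trip %.3f ms, effective throughput %.2f Gb/s\n",
                   rtt_s * 1e3, gbps);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }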
11
The SCoPE Center
12
The Hardware
- 300 dual-processor quad-core 64-bit worker nodes in a blade solution, with low-latency Infiniband network – more than 2400 cores
- Storage: 150 TB raw
- 40 servers for collective services and test nodes
- 33 racks, with a state-of-the-art internal refrigeration system
13
Racks 3, 4, 5, 9, 10, 11: 3 blade enclosures of 16 blades each (dual-processor, quad-core), 48 blades per rack; 4 1U servers for collective services
Rack 7: 6 2U storage-element servers + 2 servers for collective services; 7 disk shelves
Rack 8: 4 2U storage-element servers; 9 disk shelves; the controllers that manage the storage
Rack 12: 1 blade enclosure with 16 blades
(Rack allocation in the diagram: AVAILABLE / ATLAS / SCOPE)