Slide 1: Brazilian HEP Grid initiatives: "São Paulo Regional Analysis Center"
Rogério L. Iope, SPRACE Systems Engineer
2nd EELA Workshop, 24-25 June 2006, Island of Itacuruçá, Brazil
Slide 2: SPRACE Project
Computing center for data analysis and processing:
- High-performance computing cluster: 90 dual-Xeon servers (> 240 CPUs)
- 1.2 TFlops of integrated computing power and over 200 GB of RAM
- 12.8 TB on RAID + 7.1 TB on local disks: ~20 TB of total storage capacity
- Direct gigabit fiber connection to the international WHREN-LILA link
- Gigabit fiber connection to the HEP clusters at Rio de Janeiro
Remote computing resource for the Fermilab DZero Collaboration:
- Data replication and access center of the SAM storage system
- Operational execution site of SAMGrid, the DZero processing grid
- Analysis-enabled computer cluster
Distributed Brazilian Tier-2 Center for the CERN CMS Collaboration:
- Joint effort of SPRACE at São Paulo and the UERJ HEP group at Rio de Janeiro
- Associated with the US-CMS Tier-1 Center at Fermilab
- Aggregates computing resources of institutes at São Paulo and Rio de Janeiro
CHEPREO:
- Provides leading-edge international connectivity through the WHREN-LILA link
- Helps bridge the digital divide between North and South America
- Enables international collaboration on science education programs
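As a quick sanity check, the aggregate figures quoted above are mutually consistent; a minimal illustrative calculation (not part of the original slides):

```python
# Sanity check of the aggregate SPRACE figures quoted above.
raid_tb, local_tb = 12.8, 7.1
print(f"Total storage: {raid_tb + local_tb:.1f} TB")       # 19.9 TB, i.e. "~20 TB"

cpus, tflops = 240, 1.2
print(f"Peak per CPU: {tflops * 1000 / cpus:.1f} GFlops")  # 5.0 GFlops per CPU
```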
Slide 3: SPRACE - Related Projects
Network infrastructure:
- TIDIA KyaTera Project: São Paulo State funded project for the development and deployment of a state-wide optical testbed
- UltraLight: NSF-funded project for building an ultrascale information system
Grid computing initiatives:
- Open Science Grid member site
- DOSAR (Distributed Organization for Scientific Analysis and Research): an OSG Virtual Organization that aggregates research institutions of the Southern US, Mexico, India, and Brazil
- GridUNESP: São Paulo State University grid project, with seven participating campuses, 13 research groups, and 12 computationally demanding scientific projects
e-Science education and outreach:
- TIDIA e-Learning Project: São Paulo State funded project for the development and deployment of e-learning tools
- The Particle Adventure: Brazilian Portuguese mirror site
Slide 4: SPRACE Cluster & Researchers

Cluster evolution:
                 Phase 1 (2004)   Phase 2 (2005)   Phase 3 (2006)
  N. of CPUs     50               115              244
  Power (kSI2k)  40.0             132.4            311.6
  MSS (TB)       4                12

(photos of the Phase 1 and Phase 2 clusters)

Researchers:
- Sérgio Novaes - Physicist, Researcher (PI)
- Eduardo Gregores - Physicist, Assistant Professor
- Sérgio Lietti - Physicist, Associate Scientist
- Pedro Mercadante - Physicist, Associate Scientist
- Rogério Iope - Physicist, Computer Engineering graduate student
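The per-CPU performance implied by the table can be read off directly; a minimal sketch using the phase figures above (the script itself is illustrative):

```python
# Per-CPU performance across the SPRACE expansion phases,
# using the (CPUs, kSI2k) figures from the table above.
phases = {
    "Phase 1 (2004)": (50, 40.0),
    "Phase 2 (2005)": (115, 132.4),
    "Phase 3 (2006)": (244, 311.6),
}

for name, (cpus, ksi2k) in phases.items():
    print(f"{name}: {ksi2k / cpus:.2f} kSI2k per CPU")
```

Per-CPU power grows from 0.80 to about 1.28 kSI2k across the phases, consistent with newer Xeon generations being added at each expansion.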
Slide 5: SPRACE site - detailed configuration (diagram)
Slide 6: SPRACE site - network facilities
8-pair high-quality Lucent single-mode fiber cable between the USP Computer Center and SPRACE:
- 1 pair: GIGA project (to HEPGrid-Brazil)
- 1 pair: international link (to the ANSP network)
- 6 pairs: KyaTera project
Cisco 3750G-24TS-E switch/router:
- Donated by Caltech (plus ZX / LX SFPs)
- Default gateway of the main servers
- Routes network traffic between four different networks:
  200.136.80.0/24     netblock for the international link
  143.108.254.240/30  network between ANSP and the SPRACE lab
  143.107.128.0/26    USP network (Physics Dept.)
  10.24.46.0/24       GIGA project
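To make the routing layout above concrete, here is a minimal sketch (hypothetical helper code, not the actual router configuration) that uses Python's ipaddress module to match an address against the four netblocks:

```python
# Match an address against the four networks routed by the Cisco 3750G
# (netblocks taken from the list above).
import ipaddress

NETWORKS = {
    "international link":        ipaddress.ip_network("200.136.80.0/24"),
    "ANSP <-> SPRACE lab":       ipaddress.ip_network("143.108.254.240/30"),
    "USP network, Physics Dept": ipaddress.ip_network("143.107.128.0/26"),
    "GIGA project":              ipaddress.ip_network("10.24.46.0/24"),
}

def classify(addr: str) -> str:
    """Return the netblock containing addr, or a fallback label."""
    ip = ipaddress.ip_address(addr)
    for name, net in NETWORKS.items():
        if ip in net:
            return name
    return "outside the SPRACE-routed networks"

print(classify("200.136.80.10"))  # international link
print(classify("10.24.46.7"))     # GIGA project
```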
Slide 7: Detailed internal network configuration (diagram)
Slide 8: Detailed external network connectivity (diagram)
Slide 9: RNP testbed - the GIGA Project
(diagram showing the SPRACE, UERJ, UFRJ, and CBPF sites)
Slide 10: GIGA Project - detailed diagram
(diagram showing the USP, UERJ, UFRJ, and CBPF sites)
Slide 11: HEPGrid Brazil International Connectivity
(diagram with the following elements: USP, Cotia, and Barueri locations; Cisco ONS 15454 SDH/SONET equipment, with a counterpart in Miami; ANSP and RNP switches; RNP and RedCLARA routers; redundant dark fibers and N x GbE links; the WHREN-LILA link; the SPRACE, UERJ, UFRJ, and CBPF sites; the RNP and RedCLARA networks; link speeds of (2 x) 1 Gbps, 2.5 Gbps, and 10 Gbps)
Slide 12: HEPGrid Brazil International Connectivity (diagram)
Slide 13: SPRACE Production - DZero Data Processing
SAMGrid-enabled, operating since July 2005:
- Data reprocessed at SPRACE: 4,253 raw data files, 9,206,931 events, 3.12 TB of data
- Monte Carlo production: more than 3 million events produced since July 2005
UNESP SPRACE standalone cluster, from March 2004 to July 2005:
- Monte Carlo production: ~4.5 million events produced
- More than 1.4 TB stored on tape at Fermilab
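From the reprocessing figures above one can back out the average event size and file granularity; an illustrative calculation (assuming decimal terabytes):

```python
# Average event size and file granularity from the figures above.
files = 4_253
events = 9_206_931
data_tb = 3.12                    # assuming decimal TB (1 TB = 1e12 bytes)

print(f"{data_tb * 1e12 / events / 1e6:.2f} MB per event")  # ~0.34 MB
print(f"{events / files:,.0f} events per raw data file")    # ~2,165
```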
Slide 14: Monte Carlo Production for DZero
- The SPRACE cluster was placed into production on 23 March 2004
- Since then it has produced more than 7.5 million Monte Carlo events for the DØ Collaboration at the Tevatron
- More than 1.6 TB of data has been transferred to the Fermilab repository
Slide 15: Reprocessing of DZero Data
- SPRACE was the only site in the Southern Hemisphere to participate in the reprocessing of DØ data
- During 2005, SPRACE reprocessed 9.2 million events, together with WestGrid in Canada, CCIN2P3 in France, UTA in the USA, Prague in the Czech Republic, GridKa in Germany, and GridPP and PPARC in the UK
Slide 16: SPRACE Ganglia during reprocessing (monitoring screenshot)
Slide 17: From DZero to CMS
Set up a Tier-2 of the Worldwide LHC Computing Grid (WLCG), connected to the Fermilab Tier-1, for:
- Monte Carlo event generation
- Data processing for physics analysis, requiring very fast data access
- Data processing for calibration, alignment, and detector studies
Tier-2 requirements:
- CPU: 900 kSI2k
- Disk: 200 TB (for 40 researchers doing physics analysis)
- WAN: at least 1 Gbps; most sites with 10 Gbps
- Data import: 5 TB/day from the Tier-1; data export: 1 TB/day (see the bandwidth estimate below)
Participate in the Computing, Software and Analysis 2006 integrated test (CSA06), September-November 2006:
- Tier-1: aiming to include all 7 centers, at least 5 (600 CPUs at Fermilab, 350 CPUs at CNAF, 150 CPUs at most other centers)
- Tier-2: aiming for 20, with at least 15 sites participating; many sites are now online and validated by Computing Integration / Service Challenge 4
- > 20 CPU boxes / 5 TB per site; in reality much more is available
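As referenced in the requirements above, the daily transfer volumes translate into sustained rates as follows (a rough estimate, assuming decimal terabytes and transfers spread evenly over 24 hours):

```python
# Sustained bandwidth implied by the Tier-2 data rates above,
# assuming decimal TB and transfers spread evenly over 24 hours.
def tb_per_day_to_gbps(tb_per_day: float) -> float:
    return tb_per_day * 1e12 * 8 / 86_400 / 1e9

print(f"Import, 5 TB/day: {tb_per_day_to_gbps(5):.2f} Gbps")  # ~0.46 Gbps
print(f"Export, 1 TB/day: {tb_per_day_to_gbps(1):.2f} Gbps")  # ~0.09 Gbps
```

Imports alone average close to half a gigabit per second, which is why the requirement calls for at least 1 Gbps of WAN capacity, with 10 Gbps preferred to absorb bursts.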
Slide 18: Demo events - SC2004 Bandwidth Challenge
"High Speed TeraByte Transfers for Physics"
- BWC goal: transfer as much data as possible using real applications over a 2-hour window
Slide 19: SC2004 Bandwidth Challenge results
- Traffic exchange between São Paulo and Pittsburgh
- 2.93 (1.95 + 0.98) Gbps sustained for nearly one hour
- A record for data transmission between the Southern and Northern Hemispheres
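For scale, the volume implied by the sustained rate above (illustrative arithmetic only; the slide says "nearly one hour", so a full hour is assumed):

```python
# Approximate volume moved at the SC2004 sustained rate quoted above.
rate_gbps = 2.93
duration_s = 3600                  # assuming a full hour
volume_tb = rate_gbps * 1e9 * duration_s / 8 / 1e12
print(f"~{volume_tb:.2f} TB transferred in one hour")  # ~1.32 TB
```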
Slide 20: iGrid2005 event results
- SPRACE link tested during the iGrid2005 Workshop (São Paulo - San Diego)
- Stable connection, almost saturated, for ~2 h
- WHREN-LILA STM-4 link stressed to its limit (622 Mbps)
Slide 21: SC2005 Bandwidth Challenge
"Distributed TeraByte Particle Physics Data Sample Analysis"
Slide 22: SC2005 Bandwidth Challenge results
- Sustained data transfer of ~900 Mbps for over 1 h (São Paulo - Seattle)
- WHREN-LILA STM-4 link stressed to its new limit (1.2 Gbps), with aggregated traffic coming from UERJ
- SC|04 record: 101.13 Gbps
Slide 23: SPRACE and the UltraLight Project
http://ultralight.caltech.edu/
Slide 24: SPRACE and the KyaTera Project
The KyaTera project:
- FAPESP project for the study of advanced Internet technologies
- A large distributed network infrastructure (testbed) of dark fiber mesh spread over several cities of the State of São Paulo
- Dark fibers reach the research labs directly for experimental tests
- Platform for developing and deploying new high-performance e-Science applications
SPRACE proposal to the KyaTera project:
- Research in partnership with the UltraLight project for provisioning end-to-end survivable optical connections (lightpaths) in the KyaTera testbed
- Research partners: OPTINET / UNICAMP (optical networking experts) and LACAD / USP (HPC and distributed computing experts)
Slide 25: SPRACE and the KyaTera Project (photo: Intel donation)
Slide 26: GridUNESP - Computing Power Integration
GridUNESP project goals:
- Deploy high-performance processing centers in seven different cities of the State of São Paulo
- Integrate those centers using grid computing middleware architectures
- Unify UNESP computing resources, allowing effective integration with international initiatives (OSG / EGEE)
Finep has just approved US$ 2 M to implement the project.
Research projects:
- Structural Engineering
- Genomics
- High-Temperature Superconductivity
- Molecular Biology
- High Energy Physics
- Geological Modeling
- Protein Folding