Slide 1
I Brazilian LHC Computing Workshop
Welcome
30/Mar/2006, S. F. Novaes
Slide 2
Purpose
- Discuss the computing needs of each experiment and establish the amount of computing resources that should be available in Brazil for processing the LHC data.
- Make a strategic plan to guarantee that the different middleware and software can run smoothly on all the resources.
- Pursue the real integration of the resources into a distributed LHC Tier 2 (Tier 1) facility.
- Discuss a unified administrative/technical structure, so that we present ourselves as a single, coherent, and organized group to CERN and to our funding agencies.
- Establish guidelines for new resource and infrastructure projects.
Slide 3
OSG vs. LCG vs. AliEn vs. DIRAC vs. …
LHCb
- Very small participation from the US (Syracuse University: 11 out of 654 members); focused on European solutions: LCG, EGEE, EELA (E-Infrastructure shared between Europe and Latin America).
- Uses the DIRAC grid framework (Distributed Infrastructure with Remote Agent Control; see the sketch below):
  - Python-based grid job management system
  - Follows the Condor model of job pools
  - Service-Oriented Architecture

ALICE: AliEn (Alice Environment)
- Starting point for developing gLite, the next generation of grid middleware developed by EGEE.
- Projects using AliEn:
  - ALICE uses it for simulation, reconstruction, and distributed analysis of physics data.
  - PANDA at GSI, Darmstadt, uses it for simulation.
  - MammoGrid, the European Federated Mammogram Database.
- AliEn will be progressively interfaced to emerging products from both Europe and the US.
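The slide describes DIRAC as a Python-based job management system following the Condor model of job pools, i.e. a "pull" architecture in which remote agents fetch work from a central queue. The following is a minimal illustrative sketch of that idea in plain Python, not actual DIRAC code; JobQueue, agent, and all other names here are hypothetical:

    import queue
    import threading
    import time

    class JobQueue:
        """Central queue playing the role of the matcher/job-pool service."""
        def __init__(self):
            self._q = queue.Queue()

        def submit(self, job):
            self._q.put(job)

        def pull(self, timeout=1.0):
            # Agents ask for work; None means "nothing to do right now".
            try:
                return self._q.get(timeout=timeout)
            except queue.Empty:
                return None

    def agent(name, jq):
        # Remote agent loop: pull a job, run it, repeat until the pool drains.
        while True:
            job = jq.pull()
            if job is None:
                break
            print(f"{name} running {job['id']}")
            time.sleep(job["seconds"])  # stand-in for the real payload
            print(f"{name} finished {job['id']}")

    if __name__ == "__main__":
        jq = JobQueue()
        for i in range(6):
            jq.submit({"id": f"job-{i}", "seconds": 0.1})
        # Two "worker nodes" pulling from the same pool.
        agents = [threading.Thread(target=agent, args=(f"agent-{n}", jq))
                  for n in range(2)]
        for a in agents:
            a.start()
        for a in agents:
            a.join()

The appeal of the pull model for a distributed facility is that adding worker nodes simply means starting more agents against the same queue; no central scheduler needs to know about each site in advance.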
Slide 4
A Distributed Infrastructure
"The Evaluation Panel agrees with the idea of establishing a common Nordic grid. The Nordic countries, individually, appear to have difficulties establishing sufficient critical mass to become attractive collaboration partners in the larger grid projects in Europe and the rest of the world. But the establishment of the Nordic Data Grid Facility will ensure, for scientists in the Nordic countries, a grid infrastructure of sufficient critical mass to join various scientific projects needing grid and high-performance computing in Europe and the rest of the world."
Evaluation of NorduGrid (Spring 2005)
Slide 5
Present Status and New Initiatives
Present Computing Power (aggregated in the sketch below):
- SPRACE: 240 processors = 1.45 TeraFlops = 310 kSI2k (after the last upgrade); 15 TB of disk space
- HEPGrid UERJ: 200 processors = 1.06 TeraFlops = 210 kSI2k; 7 TB of disk space
- UFRJ and CBPF: ???

New Initiatives:
- GridUNESP: a set of clusters focused on different branches of science (UNESP)
- New proposals:
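As a quick sanity check, the quoted capacities can be aggregated programmatically. This is an illustrative snippet; the per-processor figures are derived from the slide's totals, not quoted in it, and assume uniform nodes at each site:

    # Quoted capacities of the two existing sites (from the slide).
    sites = {
        "SPRACE": {"cpus": 240, "tflops": 1.45, "ksi2k": 310, "disk_tb": 15},
        "HEPGrid UERJ": {"cpus": 200, "tflops": 1.06, "ksi2k": 210, "disk_tb": 7},
    }

    for name, s in sites.items():
        # Derived per-processor performance (assumption: uniform nodes).
        print(f"{name}: {1000 * s['tflops'] / s['cpus']:.1f} GFlops/cpu, "
              f"{s['ksi2k'] / s['cpus']:.2f} kSI2k/cpu")

    # Combined Brazilian capacity across both sites.
    total = {k: sum(s[k] for s in sites.values())
             for k in ("cpus", "tflops", "ksi2k", "disk_tb")}
    print(f"Combined: {total['cpus']} processors, {total['tflops']:.2f} TFlops, "
          f"{total['ksi2k']} kSI2k, {total['disk_tb']} TB disk")

This yields a combined capacity of 440 processors, about 2.5 TFlops, 520 kSI2k, and 22 TB of disk across the two existing sites.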
Slide 6
Several research areas:
- High Energy Physics
- Lattice QCD
- High-Tc Superconductivity
- Bioinformatics
- Genomics and Cancer Studies
- Protein Folding
- Molecular Biology
- Geological and Hydrographic Modeling
- Fluid Dynamics and Turbulent Flow
- Numerical Methods in Mechanical Engineering

Hardware:
- Tier 0: 512 processors + 12 TB RAID
- Tier 1: 7 × 64 processors + 4 TB RAID (960 processors in total, counting both tiers)
Slide 7
Venue: University of São Paulo (USP)
8 May (Monday)
09:45 – 10:00 Welcome
10:00 – 10:30 Alice Computing (Alexandre Suaide)
10:30 – 11:00 Atlas Computing (Fernando Marroquim)
11:00 – 11:30 Coffee Break
11:30 – 12:00 CMS Computing (Andre Sznajder)
12:00 – 12:30 LHCb Computing (Miriam Gandelman)
12:30 – 14:00 Lunch
14:00 – 14:30 The EELA Project (Diego Carvalho)
14:30 – 15:00 LCG-OSG Interoperability (Ruth Pordes)
15:00 – 15:30
15:30 – 16:00 Distributed Tier 2: A Case Study (Horst Severini et al.)
16:00 – 18:00 Discussion: Perspectives & Outlook (Sérgio Novaes, moderator)
20:00 Dinner: