1
The German HEP-Grid initiative
P.Malzacher@gsi.de, K.Schwarz@gsi.de, for the German HEP Community Grid
13-Feb-2006, CHEP06, Mumbai
Agenda: D-Grid in context, HEP Community Grid, HEP-CG work packages, summary
2
D-Grid in context: e-Science in Germany
[Timeline 2000-2010: EDG, followed by EGEE and EGEE 2; LCG R&D, followed by WLCG; "today" marker in 2006; the German 100+100+100 M€ e-Science initiative]
3
D-Grid in context: e-Science Call and Results
[Timeline 2000-2010: EDG, EGEE, EGEE 2; LCG R&D, WLCG; "today" marker in 2006]
1st call: 15 M€ for Community Grids and the Integration Project (today: 5 CGs and the IP)
2nd call: 15 M€ for new communities, extensions to D-Grid, and service providers
Goals: a production-quality national grid infrastructure and commercial uptake of services
4
e-Science = Grid computing & knowledge management & e-Learning
Community Grids: AstroGrid, MediGrid, C3 Grid, HEP Grid, InGrid, TextGrid, ONTOVERSE, Wikinger
D-Grid Integration Project: generic platform and generic Grid services
5
D-Grid WPs: Middleware & Tools, Infrastructure, Network & Security, Management & Sustainability
Partner sites: PC², RRZN, TUD, RZG, LRZ, RWTH, FZJ, FZK, FHG/ITWM, Uni-KA
Middleware: Globus 4.x, gLite (LCG), UNICORE, GAT and GridSphere
Data management: SRM/dCache, OGSA-DAI, metadata schemas
VO management: VOMS and Shibboleth
6
HEP Community Grid (HEP CG)
Coordination: M. Kasemann, DESY
A 3-year project, started September 1, 2005
7
Focus on tools to improve data analysis for HEP and astroparticle physics. Focus on gaps, do not reinvent the wheel.
Data management: advanced scalable data management; job and data co-scheduling; extendable metadata catalogues for lattice QCD and astroparticle physics
Job monitoring and automated user job support: information services; improved job failure treatment; incremental results of distributed analysis
End-user data analysis tools: physics- and user-oriented job scheduling, workflows, automatic job scheduling
All development is based on LCG / EGEE software and will be kept compatible!
8
HEP CG WP1: Data Management
Coordination: M. Ernst, DESY
Developing and supporting a scalable Storage Element based on Grid standards (DESY, Uni Dortmund, Uni Freiburg; unfunded: FZK)
Combined job and data scheduling, accounting, and monitoring of the data used (Uni Dortmund)
Development of grid-based, extendable metadata catalogues with semantic, world-wide access (DESY, ZIB; unfunded: Humboldt Uni Berlin, NIC)
9
dCache components
[Architecture diagram: admin and I/O door nodes (SRM, GridFTP, dCap, (K)Ftp with Krb5/SSL, HTTP) in front of the pool manager and pool nodes; a file name space provider (pnfs) with NFS server and file name space database; HSM back-ends (OSM, Enstore, TSM). Doors perform metadata operations only, no data transfer.]
P. Fuhrmann, "dCache, the Upgrade", 13-Feb-2006 15:00, session: Computing Facilities and Networking
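Since the WP1 Storage Element builds on dCache, analysis code typically reaches it through one of the doors listed above. A minimal C++/ROOT sketch follows; the door host, port, pnfs path, and tree name are hypothetical placeholders, not taken from the slides:

#include "TFile.h"
#include "TTree.h"

// Open a file through a dCache dCap door. ROOT maps the dcap:// protocol to its
// dCache plugin, so the analysis code looks the same as for a local file.
void read_from_dcache()
{
   // Hypothetical door host, port, and pnfs path.
   TFile *f = TFile::Open("dcap://door.site.example:22125//pnfs/site.example/data/run123.root");
   if (!f || f->IsZombie()) return;   // door not reachable or file missing

   TTree *t = 0;
   f->GetObject("events", t);         // "events" is an assumed tree name
   if (t) t->Print();                 // metadata only; event data stays on the pool node until read
   f->Close();
}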
10
Improved scheduling
[Architecture diagram by L. Schley: the job description goes to the WMS, which uses the MDS as information source and the LFC for LFN-to-PFN mapping; the CE (GRAM, PBS local scheduler) and the SE (dCache, HSM) are coupled through a local data scheduler with a prediction engine and a replication scheduler.]
"A Computational and Data Scheduling Architecture for HEP Applications", poster session 1 (L. Schley)
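The co-scheduling idea is to tie the choice of CE to where the input data already resides. The following is only an illustrative sketch of that idea; the types, site names, and the simple hit-count score are hypothetical and not part of the architecture above:

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

// A site is identified by its CE; for each site we keep the set of LFNs already
// resident on the associated SE.
typedef std::map<std::string, std::set<std::string> > ReplicaMap;

// Pick the site that already holds most of the job's input files; the remaining
// files would be handed to the replication scheduler for staging.
std::string chooseSite(const std::vector<std::string> &inputs, const ReplicaMap &replicas)
{
   std::string best;
   size_t bestHits = 0;
   for (ReplicaMap::const_iterator s = replicas.begin(); s != replicas.end(); ++s) {
      size_t hits = 0;
      for (size_t i = 0; i < inputs.size(); ++i)
         if (s->second.count(inputs[i])) ++hits;
      if (hits >= bestHits) { bestHits = hits; best = s->first; }
   }
   return best;
}

int main()
{
   ReplicaMap replicas;
   replicas["ce.site-a.example"].insert("lfn:/grid/hep/run1.root");
   replicas["ce.site-a.example"].insert("lfn:/grid/hep/run2.root");
   replicas["ce.site-b.example"].insert("lfn:/grid/hep/run2.root");

   std::vector<std::string> inputs;
   inputs.push_back("lfn:/grid/hep/run1.root");
   inputs.push_back("lfn:/grid/hep/run2.root");

   std::cout << "schedule on: " << chooseSite(inputs, replicas) << std::endl;
   return 0;
}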
11
Source: Dirk Pleiter, DESY/Zeuthen
D. Pleiter, "Using Grid Technologies for Lattice QCD", 16-Feb-2006 14:40, session: Grid Middleware and e-Infrastructure Operation
12
HEP CG WP2: Job Monitoring and User Support Tools
Coordination: P. Mättig, Uni Wuppertal
Development of a job information system (TU Dresden)
Development of an expert system to classify job failures and automatically treat the most common errors (Uni Wuppertal; unfunded: FZK); a rule-based sketch follows below
R&D on interactive job steering and access to temporary, incomplete analysis job results (Uni Siegen)
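The expert-system item lends itself to a rule-based sketch. The rules, error messages, and categories below are invented for illustration and do not reflect the actual Wuppertal knowledge base:

#include <iostream>
#include <string>
#include <vector>

// One rule of the (hypothetical) knowledge base: if the error output contains
// the given text, report the corresponding classification.
struct Rule {
   std::string needle;
   std::string category;
};

std::string classify(const std::string &errorLog, const std::vector<Rule> &rules)
{
   for (size_t i = 0; i < rules.size(); ++i)
      if (errorLog.find(rules[i].needle) != std::string::npos)
         return rules[i].category;
   return "unknown: forward to an expert";
}

int main()
{
   std::vector<Rule> rules;
   Rule r1 = {"No space left on device", "site problem: scratch disk full"};
   Rule r2 = {"proxy expired", "user problem: renew the grid proxy"};
   Rule r3 = {"connection timed out", "network or SE problem: retry elsewhere"};
   rules.push_back(r1); rules.push_back(r2); rules.push_back(r3);

   std::cout << classify("srmcp: connection timed out", rules) << std::endl;
   return 0;
}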
13
Job monitoring (Ralph Müller-Pfefferkorn)
Provide users with sufficient information about their jobs
Focus on the "many jobs" scenario -> graphical interface, visualizations
Ease of use: the user should not need to know more than necessary, which should be almost nothing
From general to detailed views on jobs: information such as status, resource usage, output, timelines, etc.
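To make the "from general to detailed views" idea concrete, here is a minimal sketch of collapsing many jobs into a per-state overview; the record layout is an assumption, not the WP2 design:

#include <iostream>
#include <map>
#include <string>
#include <vector>

// A per-job record of the kind such a monitoring view could aggregate;
// the field names are illustrative only.
struct JobInfo {
   std::string id;
   std::string status;       // e.g. "Scheduled", "Running", "Done", "Failed"
   double      cpuSeconds;   // resource usage reported back by the site
   double      wallSeconds;
};

// The coarsest "general" view in a many-jobs scenario: how many jobs per state.
std::map<std::string, int> summarize(const std::vector<JobInfo> &jobs)
{
   std::map<std::string, int> counts;
   for (size_t i = 0; i < jobs.size(); ++i) ++counts[jobs[i].status];
   return counts;
}

int main()
{
   std::vector<JobInfo> jobs;
   JobInfo a = {"job-001", "Running", 120.0, 140.0};
   JobInfo b = {"job-002", "Done",    300.0, 320.0};
   JobInfo c = {"job-003", "Failed",    5.0,   6.0};
   jobs.push_back(a); jobs.push_back(b); jobs.push_back(c);

   std::map<std::string, int> counts = summarize(jobs);
   for (std::map<std::string, int>::const_iterator it = counts.begin(); it != counts.end(); ++it)
      std::cout << it->first << ": " << it->second << std::endl;
   return 0;
}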
14
HEP CG WP3: Distributed Interactive Data Analysis
Coordination: P. Malzacher, GSI (LMU, GSI; unfunded: LRZ, MPI M, RZ Garching, Uni Karlsruhe, MPI Heidelberg)
Optimise application-specific job scheduling
Analysis and testing of the required software environment
Job management and bookkeeping of distributed analysis
Distribution of the analysis and summing up of the results
Interactive analysis: creation of a dedicated analysis cluster, dynamic partitioning of Grid analysis clusters
15
Start with a gap analysis.
LMU: Investigating job-scheduler requirements for distributed and interactive analysis. The GANGA (ATLAS/LHCb) project shows good features for this task and has been used for test MC production on LCG. "Distributed Analysis Experiences", poster session 1 (J. Elmsheuser).
GSI: Analysis based on PROOF. Investigating different versions of PROOF clusters. Connecting ROOT and gLite: TGlite. "Developing a ROOT Interface for gLite", poster session 2 (K. Schwarz).
The abstract ROOT interface:

class TGrid : public TObject {
public:
   …
   virtual TGridResult *Query( …
   static TGrid *Connect(const char *grid, const char *uid = 0,
                         const char *pw = 0, …
   ClassDef(TGrid, 0)
};
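A usage sketch of this abstract interface; the "glite" connect string and the catalogue path are assumptions about the WP3 plugin, not documented API, and the same pattern is used in stock ROOT, e.g. for AliEn:

#include "TGrid.h"
#include "TGridResult.h"

// Connect to a grid back-end through the abstract TGrid interface and query the
// file catalogue.
void query_catalogue()
{
   TGrid *grid = TGrid::Connect("glite");   // hypothetical connect string
   if (!grid) return;

   // Path and pattern are purely illustrative.
   TGridResult *res = grid->Query("/grid/hep/user/data", "*.root");
   if (res) res->Print();
}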
16
Summary
Rather late compared to other national Grid initiatives, a German e-science programme is well under way.
The HEP-CG focuses on three work packages: data management, automated user support, and interactive analysis.
All HEP-CG development is based on LCG / EGEE software and will be kept compatible.
Chances for HEP: additional resources to improve Grid software for HEP; a larger footprint of middleware knowledge and involvement.
Challenges for HEP: very heterogeneous disciplines and stakeholders; LCG/EGEE is not the basis for many other partners; several are undecided and have few constraints…; others need templates, portals…