
Grid Tools & Services
Rob Gardner, University of Chicago
ATLAS Software Meeting, August 27-29, 2003, BNL

Slide 2: Outline
- Brief overview of Grid projects
  – US and LCG
- Ongoing US ATLAS development efforts
  – Grid data management, packaging, virtual data tools
- ATLAS PreDC2 Exercise and Grid3
  – Joint project with CMS
  – Fall demonstrator exercise

Slide 3: US ATLAS and Grid Projects
- Particle Physics Data Grid (PPDG)
  – End-to-end applications and services
  – ATLAS activity: distributed data storage on the grid (Magda), EDG interoperability, Chimera end-to-end applications
- GriPhyN
  – CS and physics collaboration to develop the virtual data concept
  – Physics: ATLAS, CMS, LIGO, Sloan Digital Sky Survey
  – CS: build on existing technologies such as Globus, Condor, fault tolerance, and storage management, plus new computer science research
- iVDGL
  – International Virtual Data Grid Laboratory
  – Platform on which to design, implement, integrate, and test grid software
  – Infrastructure for ATLAS and CMS prototype Tier 2 centers
  – Forum for grid interoperability: collaborate with EU DataGrid, DataTAG

Slide 4: US LHC Testbeds
- US ATLAS and US CMS each have active testbed groups building production services on VDT-based grids.
- (Diagram: the ATLAS and CMS testbeds, including the DTG (Development Test Grid), the US ATLAS Production Testbed, and the VDT Testgrid.)

Slide 5: Example Site Configuration
- For component-testing sites:
  – Pre-release VDT testing
  – Pre-Grid3 environment
- US ATLAS grids:
  – DTG (development)
  – APG (production)
- Grid servers, left to right:
  – stability increases
  – more resources
  – cycle less often
- Other machines:
  – Client (submit) hosts
  – RLS, VDC, grid disk storage, etc.

Slide 6: VDT I
- VDT (Virtual Data Toolkit) is the baseline middleware package for the US LHC and LCG projects.
- Delivered by the GriPhyN and iVDGL CS-physics grid projects, with contributions from several technology providers.
- Core grid services based on Globus and Condor:
  – Globus Security Infrastructure (GSI)
  – Globus Information Services (MDS)
  – Globus Resource Management (GRAM)
  – Globus Replica Location Service (RLS)
  – Condor execution:
    > Condor-G (Condor to Globus)
    > Directed Acyclic Graph Manager (DAGMan)
(A toy illustration of DAG-ordered job execution follows below.)
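The sketch below is not DAGMan itself; it is a minimal Python illustration of the idea behind a DAG manager: dependent jobs are run only after their prerequisites complete. The job names and commands are made up for illustration; a real workflow would submit each node through Condor-G rather than run it locally.

```python
# Toy illustration of DAG-ordered execution (not DAGMan): run a small
# workflow of dependent jobs in dependency order. Job names and commands
# are hypothetical.
from graphlib import TopologicalSorter   # Python 3.9+
import subprocess

# Each job maps to the set of jobs it depends on.
dag = {
    "simulate":    set(),            # no prerequisites
    "reconstruct": {"simulate"},     # needs simulation output
    "register":    {"reconstruct"},  # catalog the results
}

commands = {
    "simulate":    ["echo", "running simulation"],
    "reconstruct": ["echo", "running reconstruction"],
    "register":    ["echo", "registering output in catalog"],
}

for job in TopologicalSorter(dag).static_order():
    print(f"submitting {job}")
    subprocess.run(commands[job], check=True)  # a real DAG manager would submit via Condor-G
```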

Slide 7: VDT II
- Virtual data software:
  – Virtual Data Language, abstract planning, and catalog (Chimera)
  – Concrete planning (Pegasus)
  – Scheduling framework (Sphynx)
- Virtual organization management software:
  > VOMS and extensions (VOX)
  > Grid User Management System (GUMS)
- Packaging software (Pacman)
- Monitoring software: MonALISA, NetLogger
- The VDT Testers Group provides pre-release grid testing.

Slide 8: LCG Goals (I. Bird)
- The goal of the LCG project is to prototype and deploy the computing environment for the LHC experiments.
- Two phases:
  – Phase 1 (2002-2005): build a service prototype based on existing grid middleware; gain experience in running a production grid service; produce the TDR for the final system.
  – Phase 2 (2006-2008): build and commission the initial LHC computing environment.
- LCG is not a development project: it relies on other grid projects for grid middleware development and support.

Slide 9: LCG Regional Centres
Tier 0:
  – CERN
Tier 1 centres:
  – Brookhaven National Lab
  – CNAF Bologna
  – Fermilab
  – FZK Karlsruhe
  – IN2P3 Lyon
  – Rutherford Appleton Lab (UK)
  – University of Tokyo
  – CERN
Other centres:
  – Academia Sinica (Taipei)
  – Barcelona
  – Caltech
  – GSI Darmstadt
  – Italian Tier 2s (Torino, Milano, Legnaro)
  – Manno (Switzerland)
  – Moscow State University
  – NIKHEF Amsterdam
  – Ohio Supercomputing Centre
  – Sweden (NorduGrid)
  – Tata Institute (India)
  – TRIUMF (Canada)
  – UCSD
  – UK Tier 2s
  – University of Florida (Gainesville)
  – University of Prague
  – ...
Confirmed resources: centres taking part in the LCG prototype service, 2003-2005.

Slide 10: LCG Resource Commitments, 1Q04
(Table: committed resources per country, with columns CPU (kSI2K), Disk (TB), Support (FTE), and Tape (TB), for CERN, Czech Republic, France, Germany, Holland, Italy, Japan, Poland, Russia, Taiwan, Spain, Sweden, Switzerland, UK, USA, and the total; the numeric values are not preserved in this transcript.)

Slide 11: Deployment Goals for LCG-1
- Production service for the Data Challenges in 2H03 and 2004:
  – Initially focused on batch production work
  – But the '04 data challenges include (as yet undefined) interactive analysis
- Experience in close collaboration between the Regional Centres:
  – Must have wide enough participation to understand the issues
- Learn how to maintain and operate a global grid.
- Focus on a production-quality service:
  – Robustness, fault tolerance, predictability, and supportability take precedence; additional functionality gets prioritized.
- LCG should be integrated into the sites' physics computing services, not something apart. This requires coordination between participating sites in:
  > Policies and collaborative agreements
  > Resource planning and scheduling
  > Operations and support

Slide 12: Elements of a Production LCG Service
- Middleware:
  – Testing and certification
  – Packaging, configuration, distribution, and site validation
  – Support: problem determination and resolution; feedback to middleware developers
- Operations:
  – Grid infrastructure services
  – Site fabrics run as production services
  – Operations centres: trouble and performance monitoring, problem resolution, 24x7 globally
- Support:
  – Experiment integration: ensure optimal use of the system
  – User support: call centres/helpdesk with global coverage; documentation; training

Slide 13: LCG-1 Components, at a Glance
- Scope, from the LCG Working Group 1 report: "The LCG-1 prototype Grid has as its primary target support of the production processing needs of the experiments for their data challenges. It does not aim to provide a convenient or seamless environment for end-user analysis."
- Components (VDT and EDG based):
  – User tools and portals: not in scope
  – Information services: GLUE compliant
  – AAA and VO: EDG VOMS
  – Data management: EDG RLS
  – Virtual data management: not covered
  – Job management and scheduling: EDG WP1 resource broker
  – Application services and higher-level tools: none
  – Packaging and configuration: GDB/WP4 and GLUE groups looking at this (LCFGng, Pacman, ...)
  – Monitoring: Nagios and Ganglia under consideration
- Testbed deployments underway at BNL and FNAL.

Slide 14: Tools and Services
MAGDA, Pacman, Chimera, DIAL, GRAPPA, GANGA, GRAT, Ganglia, GUMS, MonALISA, VOMS, VOX

Slide 15: MAGDA: Manager for Grid-based Data
- MAnager for Grid-based DAta prototype.
- Designed for rapid development of components to support users quickly, with components later replaced by Grid Toolkit elements.
  – Deployed as an evolving production tool.
- Designed for both 'managed production' and 'chaotic end-user' usage.
- Adopted by ATLAS for Data Challenges 0 and 1.

Slide 16: Architecture and Schema
- MySQL database at the core of the system.
  – DB interaction via Perl, C++, Java, and CGI (Perl) scripts
  – C++ and Java APIs autogenerated from the MySQL DB schema
- User interaction via web interface and command line.
- Principal components:
  – File catalog covering any file types
  – Data repositories organized into sites, each with its locations
  – Computers with repository access: a host can access a set of sites
  – Logical files can optionally be organized into collections
  – Replication operations organized into tasks
(A simplified sketch of such a catalog schema follows below.)
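The following is a minimal sketch of the catalog structure this slide describes (logical files, sites with locations, hosts that can reach sites, optional collections, replication tasks), written against SQLite for self-containment. The table and column names, and the sample dataset name, are illustrative assumptions; Magda's real MySQL schema differs.

```python
# Minimal sketch (not Magda's actual schema) of a grid file catalog:
# sites, locations, hosts, logical files, replicas, and replication tasks.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE site      (name TEXT PRIMARY KEY, kind TEXT);         -- e.g. 'disk', 'mass store'
CREATE TABLE location  (site TEXT REFERENCES site(name), path TEXT);
CREATE TABLE host      (name TEXT PRIMARY KEY);
CREATE TABLE host_site (host TEXT REFERENCES host(name),
                        site TEXT REFERENCES site(name));           -- which sites a host can access
CREATE TABLE lfile     (lfn TEXT PRIMARY KEY, collection TEXT);     -- logical files, optional collection
CREATE TABLE replica   (lfn TEXT REFERENCES lfile(lfn),
                        site TEXT REFERENCES site(name), pfn TEXT);
CREATE TABLE task      (id INTEGER PRIMARY KEY, collection TEXT,
                        src_site TEXT, dst_site TEXT, tool TEXT);   -- replication via gridftp, bbftp, scp
""")

# Register one logical file with a replica at BNL, then define a replication task to CERN.
db.execute("INSERT INTO site VALUES ('BNL', 'mass store'), ('CERN', 'disk')")
db.execute("INSERT INTO lfile VALUES ('dc1.002001.simul.0001.zebra', 'dc1.002001')")
db.execute("INSERT INTO replica VALUES ('dc1.002001.simul.0001.zebra', 'BNL', '/mss/atlas/dc1/0001.zebra')")
db.execute("INSERT INTO task (collection, src_site, dst_site, tool) "
           "VALUES ('dc1.002001', 'BNL', 'CERN', 'gridftp')")
print(db.execute("SELECT lfn, site, pfn FROM replica").fetchall())
```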

Slide 17: Magda Architecture
(Diagram: sites with locations, including cache-disk and mass-store sites, spread across hosts and the WAN. A replication task takes a collection of logical files, stages them from source to cache, and transfers them from source to destination via gridftp, bbftp, or scp. A spider process registers replicas, and hosts synchronize through catalog updates in the central MySQL database.)

Slide 18: Pacman
- Package and installation manager for the Grid.
- Adopted by VDT, the US LHC Grid projects, NPACI, ...
- New features expected soon:
  – Dependent and remote installations; can handle grid elements or farm components behind a gatekeeper
  – Can resolve against snapshots rather than the caches (very useful for virtual data)
  – Robustness: integrity checking, repair, uninstall and reinstall features, better handling of circular dependencies
  – Advance warning of installation problems
  – Consistency checking between packages
  – Installation from cache subdirectories
  – Setup scripts generated for installation subpackages
  – Better version handling
(A toy dependency-resolution sketch follows below.)
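To make the dependency handling concrete, here is a toy Python sketch of what a package manager like Pacman has to do: install a package's dependencies first and refuse circular dependencies rather than loop forever. The package names and dependency graph are made up; this is not Pacman's implementation or cache format.

```python
# Toy dependency resolution with cycle detection (illustrative only).
def install(pkg, deps, installed=None, in_progress=None):
    installed = installed if installed is not None else set()
    in_progress = in_progress if in_progress is not None else set()
    if pkg in installed:
        return
    if pkg in in_progress:
        raise RuntimeError(f"circular dependency involving {pkg}")
    in_progress.add(pkg)
    for dep in deps.get(pkg, []):
        install(dep, deps, installed, in_progress)   # dependencies go in first
    in_progress.discard(pkg)
    installed.add(pkg)
    print(f"installing {pkg}")

# Hypothetical cache contents: an ATLAS release depending on VDT pieces.
deps = {
    "ATLAS-7.0.2": ["VDT-Client", "Chimera"],
    "Chimera":     ["VDT-Client"],
    "VDT-Client":  ["Globus", "Condor-G"],
}
install("ATLAS-7.0.2", deps)
```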

Slide 19: Grid Component Environment
- Components used from several sources:
  – ATLAS software
  – VDT middleware
  – Services: RLS, Magda, production database, production cookbook
  – RLS: primary at BNL, secondary at UC
- Client host package:
  – GCL: GCE-Client
    > VDT Client, Chimera/Pegasus, other client tools (Magda, Cookbook, ...)
(A toy replica-lookup sketch follows below.)
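Since RLS appears here as a core service, the sketch below illustrates the lookup pattern behind a Replica Location Service: per-site local replica catalogs map logical names to physical replicas, and an index records which catalog knows about which logical name. The site names, file names, and data structures are illustrative assumptions; the real Globus RLS API is not used.

```python
# Toy two-level replica lookup in the spirit of RLS (LRC + RLI); illustrative only.
local_catalogs = {
    "BNL": {"dc1.0001.zebra": "gsiftp://bnl.example/atlas/dc1/0001.zebra"},
    "UC":  {"dc1.0001.zebra": "gsiftp://uc.example/data/dc1/0001.zebra",
            "dc1.0002.zebra": "gsiftp://uc.example/data/dc1/0002.zebra"},
}

# The index maps each logical name to the catalogs that hold a mapping for it.
rli = {}
for site, lrc in local_catalogs.items():
    for lfn in lrc:
        rli.setdefault(lfn, []).append(site)

def find_replicas(lfn):
    """Return all known physical replicas of a logical file name."""
    return [local_catalogs[site][lfn] for site in rli.get(lfn, [])]

print(find_replicas("dc1.0001.zebra"))
```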

Slide 20: As Used in DC1 Reconstruction
- Athena-based reconstruction.
- ATLAS releases installed (from a Pacman cache) on selected US ATLAS testbed sites.
- 2x520 partitions of dataset 2001 (lumi10) reconstructed at the JAZZ cluster (Argonne), LBNL, IU, BU, and BNL (test).
- 2x520 Chimera derivations, roughly 200,000 events reconstructed.
- Submit hosts at LBNL; others at Argonne, UC, and IU.
- RLS servers at the University of Chicago and BNL.
- Storage host and Magda cache at BNL.
- Group-level Magda registration of output.
- Output transferred to BNL and CERN/Castor.

Slide 21: DC2 … Setting the Requirements
- DC2: Q4/2003 - Q2/2004.
- Goals:
  – Full deployment of the Event Data Model and Detector Description
  – Geant4 becomes the main simulation engine
  – Pile-up in Athena
  – Test the calibration and alignment procedures
  – Use LCG common software
  – Use grid middleware widely
  – Perform large-scale physics analysis
  – Further tests of the computing model
- Scale:
  – As for DC1: ~10^7 fully simulated events (with pile-up)
- An ATLAS-wide production framework is needed.

Slide 22: Broader Considerations
- Need a level of task management above the services that LCG-1 will provide.
- Need to connect to ATLAS development efforts on interfaces and catalogs.
- Reuse components for analysis.
- Would like to use several grid middleware services besides LCG-1: EDG, VDT, NorduGrid, ...
- This suggests flexible applications and user interfaces on the front end and multiple grid execution environments on the back end, with a point of integration to be found in the middle.

Slide 23: Production System Sketch
(Diagram: supervisor agents dispatch tasks and datasets, broken down into jobs and partitions, to per-grid executors (a Chimera executor, a NorduGrid executor, and an LCG executor, each behind its own resource broker) and to an LCG data management system. Jobs are described by executable name, release version, and physics signature, together with partition and dataset transformation definitions and location hints; job run information flows back to the supervisors.)

Slide 24: Example Chimera Executor
(Diagram: a declarative description of datasets in terms of transformations and derivations (VDLx) is turned by the abstract planner into an abstract work description (an XML DAG). An adapter layer of concrete planners then targets each execution environment: the VDT planner produces a Condor DAG, the EDG planner produces EDG JDL, the NorduGrid planner produces NG JDL, the LCG planner produces LCG JDL, and a shell planner produces a shell DAG. An application framework plugin connects this chain to the Supervisor.)
(A sketch of such an adapter layer follows below.)
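The Python sketch below illustrates the adapter-layer idea from this slide: one abstract work description and one concrete planner per execution environment that turns it into that grid's job description. All class names, the abstract DAG fields, and the emitted text formats are illustrative assumptions, not the real Chimera/Pegasus interfaces or exact JDL syntax.

```python
# Minimal adapter-layer sketch: abstract DAG in, backend-specific plan out.
from abc import ABC, abstractmethod

# Abstract work description: ordered jobs with transformation names and arguments.
abstract_dag = [
    {"name": "simulate",    "transformation": "atlsim", "args": ["run=2001", "part=0001"]},
    {"name": "reconstruct", "transformation": "athena", "args": ["recon.py"]},
]

class ConcretePlanner(ABC):
    @abstractmethod
    def plan(self, dag):
        """Translate the abstract DAG into a backend-specific description."""

class CondorPlanner(ConcretePlanner):          # VDT / Condor-G style backend
    def plan(self, dag):
        return "\n".join(f"JOB {j['name']} {j['name']}.sub" for j in dag)

class JDLPlanner(ConcretePlanner):             # EDG / LCG / NorduGrid style backends
    def __init__(self, flavor):
        self.flavor = flavor
    def plan(self, dag):
        jobs = [f'[ Executable = "{j["transformation"]}"; Arguments = "{" ".join(j["args"])}" ]'
                for j in dag]
        return f"# {self.flavor} JDL (illustrative)\n" + "\n".join(jobs)

planners = {"vdt": CondorPlanner(), "edg": JDLPlanner("EDG"), "lcg": JDLPlanner("LCG")}
for backend, planner in planners.items():
    print(f"--- {backend} ---")
    print(planner.plan(abstract_dag))
```

The design point is that the front end (applications, user interfaces) only ever produces the abstract description; adding a new grid backend means adding one more planner, not touching the applications.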

Slide 25: ATLAS PreDC2 and Grid3

Slide 26: Grid3 Background
- A grid environment project among:
  – US ATLAS and US CMS Software and Computing projects
  – iVDGL, PPDG, and GriPhyN grid projects
  – SDSS and LIGO computing projects
    > These are the stakeholders of the project.
- Builds on successful testbed efforts, data challenges, and interoperability work (the WorldGrid demonstration).
  – Will incorporate new ideas in VO project management being developed now, and others
  – The project explicitly includes production and analysis
- Plan endorsed by the stakeholders at the iVDGL Steering meeting.

Slide 27: What is Grid3?
- Grid3 is a project to build a grid environment that will:
  – Provide the next phase of the iVDGL Laboratory
  – Provide the infrastructure and services needed for LHC production and analysis applications running at scale in a common grid environment
  – Provide a platform for computer science technology demonstrators
  – Provide a common grid environment for LIGO and SDSS applications

Slide 28: Grid3 Strategies
- Develop, integrate, and deploy a functional grid across LHC institutions, extending to non-LHC institutions and to international sites, working closely with the existing efforts.
- Demonstrate the functionality and capabilities of this grid according to well-defined metrics.
- The Grid3 project's major demonstration milestone coincides with the SC2003 conference.
- Work within the existing VDT-based US LHC Grid efforts toward a functional and capable grid.
- The Grid3 project itself would end at the end of ...
  – Assuming the metrics would be met
  – With the expectation that the grid infrastructure would continue to be available for applications and scheduled iVDGL lab work
  – A follow-up work plan will be defined well in advance of the project end

Slide 29: ATLAS PreDC2 and Grid3
- Monte Carlo production
  – If possible extending to non-US sites, using Chimera-based tools
- Collect and archive MC data at BNL
  – Magda and RLS/RLI/RLRCs involved
- Push MC data files to CERN
  – Magda
- Reconstruction at CERN
  – Using LCG/EDG components
- A Magda "spider" finds new reconstruction output in the RLI and copies it to BNL
  – May require interfacing to the EDG RLS
- Data reduction at BNL, creating "collections" or "datasets" and skimming out n-tuples
  – DIAL, maybe Chimera
- Analysis: distributed analysis of collections and datasets
  – DIAL, GANGA
(A toy sketch of the spider's polling loop follows below.)
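The sketch below illustrates the "spider" step of this chain: periodically look for logical files that the remote index reports but that are not yet replicated at BNL, copy them, and register the new replicas. The catalogs, destination URL, and copy step are illustrative assumptions; the real Magda spider works against its MySQL catalog and moves data with gridftp, bbftp, or scp.

```python
# Toy "spider" polling loop (illustrative, not Magda code).
import time

def new_files(remote_index, local_catalog):
    """Logical files known remotely but not yet present locally."""
    return [lfn for lfn in remote_index if lfn not in local_catalog]

def copy_to_bnl(lfn, remote_index, local_catalog):
    src = remote_index[lfn]
    dst = f"gsiftp://bnl.example/atlas/predc2/{lfn}"
    print(f"copying {src} -> {dst}")          # stand-in for an actual gridftp transfer
    local_catalog[lfn] = dst                  # register the new replica

remote_index = {"reco.0001.root": "gsiftp://cern.example/atlas/reco.0001.root"}
local_catalog = {}
for _ in range(1):                            # a real spider would loop indefinitely
    for lfn in new_files(remote_index, local_catalog):
        copy_to_bnl(lfn, remote_index, local_catalog)
    time.sleep(0)                             # the poll interval would be minutes in practice
```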

Slide 30: Grid3 Core Components
- A grid architecture consisting of facilities (e.g. execution and storage sites), services (e.g. a prototype operations center, an information system), and applications.
- The middleware will be based on VDT or greater, and will use components from other providers as appropriate (e.g. Ganglia, LCG).
- Applications.
- Other services (e.g. RLS).

Slide 31: Core Components, 2
- An information service based on MDS will be deployed.
- A simple monitoring service based on Ganglia, supporting hierarchy and grid-level collections, with collectors at one or more of the operations centers.
  – Additionally, Ganglia information providers to MDS may be deployed if required, and native Ganglia installations may report to other collectors corresponding to other (logical) grids. An example is a collector at Fermilab providing a Ganglia grid view of the US CMS managed resources.
  – Nagios for alarms and status monitoring should be investigated.
- Other monitoring systems, such as MonALISA or a workflow monitoring system, will be deployed.
(A sketch of polling a Ganglia daemon follows below.)
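As a concrete illustration of how a collector can pull data from Ganglia: the gmond daemon serves an XML dump of cluster, host, and metric data over a plain TCP socket (port 8649 by default). The sketch below parses that dump into a per-host metric dictionary; the host name is hypothetical and error handling is omitted.

```python
# Minimal sketch of polling a Ganglia gmond daemon for metrics.
import socket
import xml.etree.ElementTree as ET

def poll_gmond(host, port=8649):
    """Return {hostname: {metric_name: value}} from one gmond daemon."""
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(65536)
            if not data:
                break
            chunks.append(data)
    root = ET.fromstring(b"".join(chunks))
    report = {}
    for h in root.iter("HOST"):
        report[h.get("NAME")] = {m.get("NAME"): m.get("VAL") for m in h.iter("METRIC")}
    return report

# Example (hypothetical site head node):
# print(poll_gmond("gatekeeper.tier2.example.edu"))
```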

Slide 32: Core Components, 3
- A consistent method or set of recommendations for packaging, installing, configuring, and creating run-time environments for applications, subject to review.
  – Use of the WorldGrid project mechanism.
- One or more VO management mechanisms for authentication and authorization will be used:
  – The VOMS server method as developed by the VOX project.
  – The WorldGrid project method as developed by Pacman.
  – A fallback solution is to use LDAP VO servers, one for each VO, containing the DNs of the expected application users.
(A toy sketch of DN-based authorization follows below.)
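Whichever mechanism is chosen, the authorization step reduces to checking a certificate DN against a VO membership list and mapping it to a local account, in the spirit of a grid-mapfile. The DNs, VO names, and account names below are made up for illustration; this is not the VOMS, GUMS, or WorldGrid implementation.

```python
# Toy DN-based authorization: VO membership lookup and local account mapping.
vo_members = {
    "usatlas": {
        "/DC=org/DC=doegrids/OU=People/CN=Alice Physicist": "usatlas1",
        "/DC=org/DC=doegrids/OU=People/CN=Bob Builder":     "usatlas1",
    },
    "uscms": {
        "/DC=org/DC=doegrids/OU=People/CN=Carol Analyst":   "uscms01",
    },
}

def authorize(dn):
    """Return (vo, local_account) for a known DN, or None if not authorized."""
    for vo, members in vo_members.items():
        if dn in members:
            return vo, members[dn]
    return None

print(authorize("/DC=org/DC=doegrids/OU=People/CN=Alice Physicist"))  # ('usatlas', 'usatlas1')
print(authorize("/DC=org/DC=doegrids/OU=People/CN=Mallory Unknown"))  # None
```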

Slide 33: Metrics
- To characterize the computing activity and gauge the resulting performance, we define additional goals of the project to include metrics and feature demonstrations. These will need to be discussed at the beginning of the project. We expect the project to include deployment of mechanisms to collect, aggregate, and report measurements (a toy aggregation sketch follows below).
- Examples:
  – Data transferred per day: > 1 TB
  – Number of concurrent jobs: > 100
  – Number of users: > 10
  – Number of different applications: > 4
  – Number of sites running multiple applications: > 10
  – Rate of faults/crashes: < 1/hour
  – Operational support load of the full demonstrator: < 2 FTEs
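The sketch below shows one way such measurements could be aggregated from job records into a few of the listed metrics (data transferred per day, peak concurrent jobs, fault rate). The record format and sample numbers are invented; a real implementation would read from the monitoring database described on the following slides.

```python
# Toy metric aggregation from job records (illustrative format and data).
from datetime import datetime, timedelta

jobs = [  # (start, end, gigabytes transferred, failed?)
    (datetime(2003, 11, 1, 0, 0), datetime(2003, 11, 1, 6, 0), 120.0, False),
    (datetime(2003, 11, 1, 2, 0), datetime(2003, 11, 1, 9, 0),  80.0, True),
    (datetime(2003, 11, 1, 4, 0), datetime(2003, 11, 1, 5, 0), 900.0, False),
]

day = timedelta(days=1)
window_start = datetime(2003, 11, 1)
in_window = [j for j in jobs if window_start <= j[0] < window_start + day]

data_per_day_tb = sum(j[2] for j in in_window) / 1000.0
faults_per_hour = sum(1 for j in in_window if j[3]) / 24.0
# Peak concurrency: sample the number of running jobs once per hour.
peak = max(sum(1 for s, e, _, _ in jobs if s <= t <= e)
           for t in (window_start + timedelta(hours=h) for h in range(24)))

print(f"data transferred: {data_per_day_tb:.2f} TB/day (target > 1)")
print(f"peak concurrent jobs: {peak} (target > 100)")
print(f"fault rate: {faults_per_hour:.2f}/hour (target < 1)")
```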

Slides 34-40: Monitoring Framework
(These slides step through the same diagram, highlighting one piece at a time: information providers running on the compute elements and worker nodes (Ganglia, MDS, job DB agents, SNMP) feed a collector (MonALISA); the collector writes to persistent monitoring storage (an SQL database at IU); and information consumers read reports and web outputs (the MonALISA applet, Excel, etc.). The final slide, "MF - Placement", shows where each component is deployed.)
(A sketch of a collector writing provider reports to SQL storage follows below.)
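To make the collector/persistent-storage portion of the diagram concrete, here is a minimal sketch that appends provider reports to an SQL table which report and web consumers can later query. The table layout, site name, and sample report are illustrative assumptions, not the actual Grid3 monitoring schema; SQLite stands in for the SQL database at IU.

```python
# Minimal collector sketch: persist provider reports, then query them back.
import sqlite3
import time

db = sqlite3.connect("monitoring.db")
db.execute("""CREATE TABLE IF NOT EXISTS metrics
              (ts REAL, site TEXT, source TEXT, name TEXT, value TEXT)""")

def store_report(site, source, report):
    """Persist one provider report: a dict of metric name -> value."""
    now = time.time()
    db.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?, ?)",
                   [(now, site, source, k, str(v)) for k, v in report.items()])
    db.commit()

# e.g. a Ganglia-derived report from a (hypothetical) site
store_report("UC_Tier2", "ganglia", {"cpu_num": 64, "load_one": 12.3, "disk_free_gb": 950})

# A consumer (web page, applet, spreadsheet export) would query the same table:
for row in db.execute("SELECT site, name, value FROM metrics ORDER BY ts DESC LIMIT 5"):
    print(row)
```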

Slide 41: Grid3 and LCG
- Grid3 should federate with the LCG.
  – A working group within Grid3 will work out what is needed for basic interoperability, specifically submission of jobs and movement of files across the grids.
- Other issues may affect interoperability:
  – Consistent replica management, virtual organization support, and optimizing resource usage across federated grids.
  – We do not have the effort to address all of these during the Grid3 project itself; we will identify, discuss, and document such issues for collaborative work with the LCG.
- Many of the working areas in Grid3 are already joint projects between the LCG and Trillium or the S&C projects.
  – Additional collaboration in the areas of monitoring and operations has been discussed over the past few months.
- As we come to better understand the technical plans, the expectation is that we will propose further areas of joint work.
- Use lessons learnt for DC2.