1 Grid Job, Information and Data Management for the Run II Experiments at FNAL
Igor Terekhov et al. (see next slide)
FNAL/CD/CCF, D0, CDF, Condor team, UTA, ICL

2 Authors
A. Baranovskii, G. Garzoglio, H. Kouteniemi, A. Kreymer, L. Lueking, V. Murthi, P. Mhashikar, S. Patil, A. Rana, F. Ratnikov, A. Roy, T. Rockwell, S. Stonjek, T. Tannenbaum, R. Walker, F. Wuerthwein

3 Plan of Attack
- Brief history: D0 and CDF computing and data handling
- Grid jobs and information management: architecture, job management, information management
- JIM project status and plans
- Globally distributed data handling in SAM and beyond
- Summary

4 History
- Run II: CDF and D0, the two largest currently running collider experiments
- Each experiment is to accumulate ~1 PB of raw, reconstructed, and analyzed data by 2007; the goal is to get the Higgs jointly
- Real data acquisition: 5/wk, 25 MB/s, 1 TB/day, plus Monte Carlo

5 [figure slide: no text content]

6 Globally Distributed Computing and Grid
- D0: 78 institutions in 18 countries. CDF: 60 institutions in 12 countries.
- Many institutions have computing (including storage) resources: dozens for each of D0 and CDF
- Some of these are actually shared, regionally or experiment-wide
- Sharing is good: a possible contribution by an institution to the collaboration while keeping the resource local
- The recent Grid trend (and its funding) encourages it

7 Goals of Globally Distributed Computing in Run II
- To distribute data to processing centers: SAM is a way (see a later slide)
- To benefit from the pool of distributed resources: maximize job turnaround while keeping a single interface
- To facilitate and automate decision making on job/data placement: submit to the cyberspace and let it choose the best resource
- To reliably execute jobs spread across multiple resources
- To provide an aggregate view of the system and its activities, and keep track of what's happening
- To maintain security
- Finally, to learn and prepare for LHC computing

8 Data Distribution: SAM
- SAM is Sequential data Access via Meta-data: http://{d0,cdf}db.fnal.gov/sam
- Presented numerous times at previous CHEPs
- Core features: metadata cataloguing, global data replication and routing, co-allocation of compute and data resources (a toy illustration of metadata-driven access follows below)
- Global data distribution: MC import from remote sites, off-site analysis centers, off-site reconstruction (D0)
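To make the metadata-cataloguing idea concrete, here is a minimal sketch in plain Python. The field names and the `define_dataset` helper are invented for illustration; they are not the SAM schema or API, where such selections run against a relational metadata catalog.

```python
# A toy illustration of metadata-driven data access: files are selected
# by predicates over their metadata, not by name. All field names and
# the helper below are invented for this sketch.
catalog = [
    {"file": "d0_raw_001.dat", "run": 1501, "trigger": "2MU", "tier": "raw"},
    {"file": "d0_raw_002.dat", "run": 1502, "trigger": "2EM", "tier": "raw"},
    {"file": "d0_tmb_001.dat", "run": 1501, "trigger": "2MU", "tier": "thumbnail"},
]

def define_dataset(**constraints):
    """Return the files whose metadata match all given constraints."""
    return [f["file"] for f in catalog
            if all(f.get(k) == v for k, v in constraints.items())]

print(define_dataset(trigger="2MU", tier="raw"))  # ['d0_raw_001.dat']
```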

9 [Diagram: WAN data flow between data sites, illustrating that routing + caching = replication]

10 Now that the Data's Distributed: JIM
- JIM: Grid Jobs and Information Management
- Owes to the D0 Grid funding: PPDG (the FNAL team) and UK GridPP (Rod Walker, ICL)
- A very young project, started in 2001
- Actively explores, adopts, enhances, and develops new Grid technologies
- Collaborates with the Condor team from the University of Wisconsin on job management
- JIM together with SAM is also called the SAMGrid

11 [figure slide: no text content]

12 Job Management Strategies
- We distinguish grid-level (global) job scheduling (selection of a cluster to run on) from local scheduling (distribution of the job within the cluster)
- We distinguish structured jobs from unstructured ones: structured jobs have their details known to the Grid middleware, while unstructured jobs are mapped as a whole onto a cluster (see the sketch below)
- In the first phase, we want reasonably intelligent scheduling and reliable execution of unstructured, data-intensive jobs
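A minimal sketch of this distinction, with invented types that are not JIM's actual job representation: an unstructured job is an opaque payload the middleware can only place as a whole, while a structured job exposes its internal steps.

```python
# Invented, illustrative types: the contrast is what matters, not the names.
from dataclasses import dataclass

@dataclass
class UnstructuredJob:
    payload: str       # opaque to the Grid middleware
    input_files: list  # only the data needs are visible for brokering

@dataclass
class StructuredJob:
    steps: list        # each step is visible to, and placeable by, the middleware

job = UnstructuredJob(payload="d0_analysis.tar", input_files=["f1", "f2"])
# Grid-level scheduling picks one cluster for the whole unstructured job;
# local scheduling then distributes it within that cluster.
```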

13 Job Management Highlights
- We seek to provide automated resource selection (brokering) at the global level, with final scheduling done locally (in environments like the CDF CAF; see Frank's talk)
- Focus on data-intensive jobs, whose execution time is composed of: time to retrieve any missing input data, time to process the data, and time to store the output data
- To leading order, we rank sites by the amount of data cached at the site, minimizing the missing input data (as in the sketch below)
- The scheduler is interfaced with the data handling system
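A minimal sketch of the leading-order ranking, assuming invented names (`Site`, `rank_sites`) rather than the real JIM broker code: sites holding more of the job's input data in cache rank higher, because less data must be moved before processing can start.

```python
# Rank candidate sites by the fraction of the job's input data already
# cached there (leading-order approximation of execution time).
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    cached_files: set = field(default_factory=set)  # files in the site's cache

def rank_sites(input_files: set, sites: list) -> list:
    def cached_fraction(site: Site) -> float:
        return len(input_files & site.cached_files) / max(len(input_files), 1)
    return sorted(sites, key=cached_fraction, reverse=True)

# A job needing three files prefers the site that already caches two of them.
sites = [Site("fnal", {"f1", "f2"}), Site("icl", {"f3"})]
print([s.name for s in rank_sites({"f1", "f2", "f3"}, sites)])  # ['fnal', 'icl']
```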

14 Job Management: Distinct JIM Features
- Decision making is based on both information that exists irrespective of jobs (resource description) and functions of (job, resource) pairs
- Decision making is interfaced with the data handling middleware rather than with individual SEs or the replica catalog alone: this allows data handling considerations to be incorporated
- Decision making is entirely within the Condor framework (no resource broker of our own): a strong promotion of standards and interoperability

15 Condor Framework and Enhancements We Drove
- Initial Condor-G: a personal Grid agent that helps a user run a job on a cluster of his/her choice
- JIM: a true Grid service for accepting and placing jobs from all users; added the MMS for Grid job brokering
- JIM: from a 2-tier to a 3-tier architecture: decouple the queuing/spooling/scheduling machine from the user machine; security delegation, proper std* spooling, etc.
- These enhancements will move into standard Condor

16 Condor Framework and Enhancements We Drove (continued)
- Classic Matchmaking Service (MMS): clusters advertise their availability and jobs are matched with clusters; the cluster (resource) description exists irrespective of jobs
- JIM: ranking expressions contain functions that are evaluated at run time, which helps rank a job by a function of (job, resource). Now: query the participating sites for the data they have cached. Future: estimate when the data for the job can arrive, etc. (see the toy matchmaker below)
- This feature is now in standard Condor-G
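A toy Python illustration of the matchmaking idea; it mimics, but is not, the Condor ClassAd machinery. Resources advertise descriptions that exist irrespective of jobs, and the job's rank is a function evaluated at match time against each (job, resource) pair. Here `data_cached` is an invented stand-in for JIM's run-time callout to the data handling system.

```python
# Resources advertise themselves independently of jobs; the job carries a
# rank function(job, resource) evaluated at match time.
def data_cached(job, resource):
    # Stand-in for the run-time query: how many of the job's input files
    # does this site already cache?
    return len(set(job["input_files"]) & set(resource["cached_files"]))

resources = [
    {"name": "site-A", "cached_files": ["f1", "f2"]},
    {"name": "site-B", "cached_files": ["f3"]},
]
job = {"input_files": ["f1", "f2", "f3"],
       "rank": data_cached}  # rank is a function of (job, resource)

best = max(resources, key=lambda r: job["rank"](job, r))
print(best["name"])  # site-A: it caches more of the job's input
```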

17 [Architecture diagram: a user interface and submission client feed the job management layer (queuing system, broker with matchmaking service, information collector); execution sites #1..#n each run a queuing system, grid sensors, computing elements, and storage elements, all interfaced to the data handling system]

18 Monitoring Highlights
- Sites (resources) and jobs
- Distributed knowledge about jobs, etc.
- Incremental knowledge building
- GMA for current-state inquiries, logging for recent-history studies (sketched below)
- All Web based
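A hedged sketch of the two inquiry modes, with all names invented (this is not the JIM monitoring code): a GMA-style producer answers "what is the state now?", while an append-only log supports recent-history studies and incremental knowledge building.

```python
# A sensor keeps current state for on-demand inquiries and appends
# timestamped events to a log for later history studies.
import time

class SiteSensor:
    def __init__(self, site):
        self.site = site
        self.state = {}  # current state, served on inquiry (GMA-style)
        self.log = []    # timestamped events, kept for recent history

    def update(self, **observations):
        self.state.update(observations)
        self.log.append((time.time(), dict(observations)))  # incremental knowledge

sensor = SiteSensor("fnal-cab")
sensor.update(jobs_running=42, jobs_queued=7)
print(sensor.state)     # current-state inquiry
print(len(sensor.log))  # history available for later study
```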

19 [Diagram: JIM monitoring: Web browsers query Web servers 1..N, each fronting the information system of its site (Site 1..Site N)]

20 Information Management: Implementation and Technology Choices
- XML for the representation of site configuration and (almost) all other information
- XQuery and XSLT for information processing (see the example below)
- Xindice and other native XML databases for database semantics
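A minimal sketch of the "XML everywhere" choice, using Python's standard library and an invented site-configuration layout. JIM itself processed such documents with XQuery/XSLT and stored them in native XML databases like Xindice; this only shows the flavor of schema-free queries over site configuration.

```python
# Site configuration kept as XML and interrogated with path-style
# queries; the element and attribute names are invented for this sketch.
import xml.etree.ElementTree as ET

site_config = ET.fromstring("""
<site name="fnal">
  <cluster name="cab">
    <batch system="pbs"/>
    <storage element="enstore" cache-gb="2000"/>
  </cluster>
</site>
""")

# No fixed schema required: pull out every storage element's cache size.
for se in site_config.iter("storage"):
    print(se.get("element"), se.get("cache-gb"))  # enstore 2000
```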

21 [Diagram: a meta-schema tying together the main site/cluster configuration schema, resource advertisement, the monitoring schema, data handling, and the hosting environment]

22 JIM Project Status
- Delivered a prototype for D0 on Oct 10, 2002: remote job submission, brokering based on cached data, Web-based monitoring
- SC2002 demo with 11 sites (D0, CDF): a big success
- April 2003: production deployment of V1 (Grid analysis in production became a reality as of April 1)
- Post V1: OGSA, Web services, logging service

23 Grid Data Handling
We define GDH as a middleware service which:
- Brokers storage requests
- Maintains economical knowledge about the costs of access to different SEs
- Replicates data as needed (not only as driven by administrators)
- Generalizes or replaces some of the services of the data management part of SAM
A cost-based sketch follows below.
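The sketch below is a speculative illustration of cost-based storage brokering in the spirit of this definition; the cost model, numbers, and names are all assumptions, not a GDH specification.

```python
# Broker a storage request by picking the storage element with the
# cheapest estimated access cost: a latency term plus a transfer term.
def access_cost(se, size_gb):
    """Estimated cost (seconds) of reading size_gb from a storage element."""
    return se["latency_s"] + size_gb / se["bandwidth_gb_s"]

storage_elements = [
    {"name": "local-cache", "latency_s": 1,   "bandwidth_gb_s": 1.0},
    {"name": "tape-robot",  "latency_s": 120, "bandwidth_gb_s": 0.2},
    {"name": "remote-site", "latency_s": 5,   "bandwidth_gb_s": 0.05},
]

def broker_read(size_gb):
    """Pick the cheapest SE for this read ('economical knowledge')."""
    return min(storage_elements, key=lambda se: access_cost(se, size_gb))

print(broker_read(10)["name"])  # local-cache wins for a 10 GB read
```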

24 Grid Data Handling, Initial Thoughts [figure slide]

25 The Necessary (Almost) Final Slide
- Run II experiments' computing is highly distributed; the Grid trend is very relevant
- The JIM (Jobs and Information Management) part of the SAMGrid addresses the needs of global and Grid computing in Run II
- We use Condor and Globus middleware to schedule jobs globally (based on data) and provide Web-based monitoring
- A demo is available: see me or Gabriele

26 Acknowledgements
- The PPDG project and its management, for making this possible
- The GridPP project in the UK, for its funding
- Jae Yu and others at UTA, and FNAL CD management, for continuing support of student internship programs
- Other members of the Condor team, for fruitful discussions

27 P.S.: Related Talks
- F. Wuerthwein: CAF (Cluster Analysis Facility): job management on a cluster and the interface to JIM/Grid
- F. Ratnikov: monitoring on the CAF and the interface to JIM/Grid
- S. Stonjek: SAMGrid deployment experiences
- L. Lueking, G. Garzoglio: SAM-related talks

28 Backup Slides

29 Information Management
In JIM's view, this includes both:
- Resource description for job brokering
- Infrastructure for monitoring (a core project area)
GT MDS is not sufficient:
- We need a (persistent) information representation that is independent of LDIF or other such formats
- We need maximum flexibility in information structure: no fixed schema
- We need configuration tools, push operation, etc.

