ATLAS Data Challenge on NorduGrid CHEP2003 – UCSD Anders Wäänänen

Slide 1: ATLAS Data Challenge on NorduGrid
CHEP2003 – UCSD
Anders Wäänänen (waananen@nbi.dk)

Slide 2: NorduGrid project
- Launched in spring of 2001, with the aim of creating a Grid infrastructure in the Nordic countries.
- The idea was a MONARC-style architecture with a common Tier-1 center.
- Partners from Denmark, Norway, Sweden, and Finland.
- Initially meant to be the Nordic branch of the EU DataGrid (EDG) project.
- Three full-time researchers, with a few externally funded.

Slide 3: Motivations
- NorduGrid was initially meant to be a pure deployment project.
- One goal was to have the ATLAS Data Challenge running by May 2002.
- It should be based on the Globus Toolkit™.
- Available Grid middleware:
  - The Globus Toolkit™: a toolbox, not a complete solution.
  - European DataGrid software: not mature for production at the beginning of 2002; architecture problems.

Slide 4: A Job Submission Example
[Diagram: the EU DataGrid job submission chain. A User Interface submits a JDL job with an input "sandbox" to the Resource Broker, which consults the Information Service, Replica Catalogue, Brokerinfo, and authorization/authentication; the Job Submission Service runs the job on a Compute Element with access to a Storage Element, job status is recorded in Logging & Book-keeping, and the output "sandbox" is returned to the user.]

Slide 5: Architecture requirements
- No single point of failure.
- Should be scalable.
- Resource owners should have full control over their resources.
- As few site requirements as possible:
  - Local cluster installation details should not be dictated (method, OS version, configuration, etc.).
  - Compute nodes should not be required to be on the public network.
  - Clusters need not be dedicated to the Grid.

Slide 6: User interface
- The NorduGrid user interface provides a set of commands for interacting with the Grid (a typical session is sketched below):
  - ngsub: submit jobs
  - ngstat: show the state of jobs and clusters
  - ngcat: see stdout/stderr of running jobs
  - ngget: retrieve the results of finished jobs
  - ngkill: kill running jobs
  - ngclean: delete finished jobs from the system
  - ngcopy: copy files to, from, and between file servers and replica catalogs
  - ngremove: delete files from file servers and replica catalogs
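
A minimal sketch of a typical session with these commands, assuming a valid personal certificate. The job identifier below is a placeholder for the identifier that ngsub prints at submission time, and options are kept to the bare minimum (they may differ between client versions):

  grid-proxy-init                       # create a Grid proxy from the personal certificate
  ngsub '&(executable=/bin/echo)(arguments="Hello Grid")(stdout="hello.out")'
  # ngsub prints the job identifier; the placeholder <jobid> is used below
  ngstat  <jobid>                       # query the job state
  ngcat   <jobid>                       # look at stdout/stderr while the job runs
  ngget   <jobid>                       # retrieve the results once the job has finished
  ngclean <jobid>                       # remove the finished job from the system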

Slide 7: ATLAS Data Challenges
- A series of computing challenges within ATLAS, of increasing size and complexity.
- Preparing for data-taking and analysis at the LHC.
- Thorough validation of the complete ATLAS software suite.
- Introduction and use of Grid middleware as fast and as much as possible.

Slide 8: Data Challenge 1
- Main goals:
  - Produce data for the High Level Trigger (HLT) and physics groups:
    - Study the performance of the Athena framework and algorithms for use in the HLT.
    - High statistics needed.
  - A few samples of up to 10^7 events in 10-20 days, O(1000) CPUs.
  - Simulation and pile-up.
  - Reconstruction and analysis on a large scale:
    - Learn about the data model and I/O performance; identify bottlenecks, etc.
  - Data management:
    - Use/evaluate persistency technology (AthenaRoot I/O).
    - Learn about distributed analysis.
  - Involvement of sites outside CERN.
  - Use of the Grid as and when possible and appropriate.

Slide 9: DC1, phase 1: Task Flow
Example: one sample of di-jet events.
- Event generation: Pythia6 di-jet production, 1.5 × 10^7 events, split into partitions (read: ROOT files, Athena-Root I/O, HepMC) of 10^5 events each.
- Detector simulation: 20 Atlsim/Geant3 + filter jobs per partition (5000 input events each, ~450 events after the filter), writing hits/digits/MCTruth to ZEBRA output.
(The arithmetic behind these numbers is spelled out below.)
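
A back-of-the-envelope check of how the numbers on this slide fit together (my arithmetic, not figures quoted in the talk):

  1.5 × 10^7 generated events ÷ 10^5 events per partition = 150 partitions
  150 partitions × 20 simulation jobs per partition       = 3000 simulation jobs
  10^5 events per partition ÷ 20 jobs                     = 5000 input events per job, of which ~450 (~9%) pass the filter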

Slide 10: DC1, phase 1: Summary
- July-August 2002.
- 39 institutes in 18 countries.
- 3200 CPUs, approx. 110 kSI95; 71,000 CPU-days.
- 5 × 10^7 events generated.
- 1 × 10^7 events simulated.
- 30 TB produced.
- 35,000 output files.

Slide 11: DC1, phase 1 for NorduGrid
- Simulation.
- Datasets 2000 and 2003 (different event generation) were assigned to NorduGrid.
- Total number of fully simulated events: 287,296 (1.15 × 10^7 input events).
- Total output size: 762 GB.
- All files were uploaded to a Storage Element (University of Oslo) and registered in the Replica Catalog.

Slide 12: Job xRSL script

&
(executable="ds2000.sh")
(arguments="1244")
(stdout="dc1.002000.simul.01244.hlt.pythia_jet_17.log")
(join="yes")
(inputfiles=
  ("ds2000.sh" "http://www.nordugrid.org/applications/dc1/2000/dc1.002000.simul.NG.sh"))
(outputfiles=
  ("atlas.01244.zebra" "rc://dc1.uio.no/2000/log/dc1.002000.simul.01244.hlt.pythia_jet_17.zebra")
  ("atlas.01244.his" "rc://dc1.uio.no/2000/log/dc1.002000.simul.01244.hlt.pythia_jet_17.his")
  ("dc1.002000.simul.01244.hlt.pythia_jet_17.log" "rc://dc1.uio.no/2000/log/dc1.002000.simul.01244.hlt.pythia_jet_17.log")
  ("dc1.002000.simul.01244.hlt.pythia_jet_17.AMI" "rc://dc1.uio.no/2000/log/dc1.002000.simul.01244.hlt.pythia_jet_17.AMI")
  ("dc1.002000.simul.01244.hlt.pythia_jet_17.MAG" "rc://dc1.uio.no/2000/log/dc1.002000.simul.01244.hlt.pythia_jet_17.MAG"))
(jobname="dc1.002000.simul.01244.hlt.pythia_jet_17")
(runtimeEnvironment="DC1-ATLAS")
(replicacollection="ldap://grid.uio.no:389/lc=ATLAS,rc=NorduGrid,dc=nordugrid,dc=org")
(maxCPUTime=2000)
(maxDisk=1200)
(notify="e waananen@nbi.dk")
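
For illustration only (not from the talk): if the script above is saved to a file, it can be submitted with ngsub and then followed with the other ng* commands as sketched after slide 6. The file name is made up, and the option for reading a job description from a file is an assumption that may differ between client versions.

  ngsub -f dc1.002000.simul.01244.xrsl   # hypothetical file name; -f assumed to read the xRSL from a file
  # ngsub prints the job identifier, which can then be used with ngstat/ngcat/ngget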

Slide 13: NorduGrid job submission
1. The user submits an xRSL file specifying the job options.
2. The xRSL file is processed by the User Interface.
3. The User Interface queries the NorduGrid Information System for resources and the NorduGrid Replica Catalog for the location of input files, and submits the job to the selected resource.
4. There the job is processed by the Grid Manager, which downloads or links the files to the local session directory.
5. The Grid Manager submits the job to the local resource management system.
6. After the simulation finishes, the Grid Manager moves the requested output to Storage Elements and registers it in the NorduGrid Replica Catalog.

Slide 14: NorduGrid job submission
[Diagram: the same flow as above. The xRSL job description is passed from the client to the Gatekeeper/Grid Manager on the chosen cluster, resource information comes from the MDS, files are staged over GridFTP, and outputs are registered in the Replica Catalog (RC).]

Slide 15: NorduGrid Production sites

Slide 16: [no transcript text]

Slide 17: NorduGrid Pileup
- DC1 pile-up: low-luminosity pile-up for the phase 1 events.
- Number of jobs: 1300
  - dataset 2000: 300
  - dataset 2003: 1000
- Total output size: 1083 GB
  - dataset 2000: 463 GB
  - dataset 2003: 620 GB

Slide 18: Pileup procedure
- Each job downloaded one ZEBRA file from dc1.uio.no of approximately:
  - 900 MB for dataset 2000
  - 400 MB for dataset 2003
- Minimum-bias ZEBRA files present locally were used to pile up events on top of the original simulated ones in the downloaded file. The output of each job was about 50% larger than the downloaded file, i.e. roughly:
  - 1.5 GB for dataset 2000
  - 600 MB for dataset 2003
- Output files were uploaded to the dc1.uio.no and dc2.uio.no Storage Elements.
- Registered in the Replica Catalog (the data movement is sketched below).
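
A minimal sketch of the grid-side data movement for one pile-up job, using the ngcopy command from slide 6. It assumes ngcopy takes a source URL and a destination URL and accepts local file:// paths; the logical file names and Replica Catalog paths are illustrative, not the actual DC1 ones.

  # fetch the input partition registered in the Replica Catalog (illustrative path)
  ngcopy rc://dc1.uio.no/2000/dc1.002000.simul.01244.hlt.pythia_jet_17.zebra file:///scratch/input.zebra

  # ... run the local pile-up step with the minimum-bias ZEBRA files ...

  # upload the piled-up output to the Storage Element and register it in the Replica Catalog
  ngcopy file:///scratch/output.zebra rc://dc1.uio.no/2000/dc1.002000.pileup.01244.hlt.pythia_jet_17.zebra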

Slide 19: Other details
- At peak production, up to 200 jobs were managed by NorduGrid at the same time.
- It has most of the Scandinavian production clusters under its belt (2 of them are in the Top 500).
- However, not all of them allow installation of the ATLAS software.
- The ATLAS job manager Atlas Commander supports the NorduGrid toolkit.
- Issues:
  - Replica Catalog scalability problems.
  - MDS / OpenLDAP hangs: solved.
  - Software threading problems: partly solved; the problems are partly in the Globus libraries.

Slide 20: NorduGrid DC1 timeline
- April 5, 2002: first ATLAS job submitted (Athena "Hello World").
- May 10, 2002: first pre-DC1 validation job submitted (ATLSIM test using ATLAS release 3.0.1).
- End of May 2002: it was now clear that NorduGrid was mature enough to handle real production.
- Spring 2003 (now): keep running the Data Challenges and improve the toolkit.

Slide 21: Quick client installation / job run
As a normal user (no system privileges required):
- Retrieve nordugrid-standalone-0.3.17.rh72.i386.tgz, then:

    tar xfz nordugrid-standalone-0.3.17.rh72.i386.tgz
    cd nordugrid-standalone-0.3.17
    source ./setup.sh

- Get a personal certificate:

    grid-cert-request

- Install the certificate per the instructions.
- Get authorized on a cluster.
- Run a job:

    grid-proxy-init
    ngsub '&(executable=/bin/echo)(arguments="Hello World")'

Slide 22: Resources
- Documentation and source code are available for download.
- Main Web site: http://www.nordugrid.org/
- ATLAS DC1 with NorduGrid: http://www.nordugrid.org/applications/dc1/
- Software repository: ftp://ftp.nordugrid.org/pub/nordugrid/

Slide 23: The NorduGrid core group
- Александр Константинов
- Balázs Kónya
- Mattias Ellert
- Оксана Смирнова
- Jakob Langgaard Nielsen
- Trond Myklebust
- Anders Wäänänen

