Application examples
Oxana Smirnova (Lund, EPF)
3rd NorduGrid Workshop, May 23, 2002

Slide 2: NorduGrid & applications
NorduGrid is now stable enough to execute real tasks
Most of the applications so far are in the high-energy physics domain
Apart from simple curiosity and testing, there is a pressing need to run the ATLAS Data Challenge exercises
– Requires a lot of computing power
– The intention is to run it in a distributed (Grid) manner as soon as possible

Slide 3: Pre-requisites
Valid NorduGrid credentials
A machine with the User Interface installed
– Includes a minimal Globus client installation
A user may want to make some personal adjustments:
– .ngrc:

    # Sample .ngrc file
    # Comments start with #
    NGDEBUG=1
    NGDOWNLOAD=/tmp

– .ngiislist:

    ldap://grid.nbi.dk:2135/O=Grid/Mds-Vo-name=NorduGrid
    ldap://grid.quark.lu.se:2135/O=Grid/Mds-Vo-name=NorduGrid
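Since the User Interface bundles the Globus client, valid credentials are typically activated as a Grid proxy before any submission. A minimal session, assuming the standard Globus proxy commands, would look like:

    grid-proxy-init     # enter the certificate pass phrase to create a short-lived proxy
    grid-proxy-info     # verify the proxy subject and remaining lifetime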

Slide 4: Double heavy hadron production
xRSL specification (the notification e-mail address was lost in the transcript):

    &(* user-specified job name *)
     (jobName=Bsubc)
     (* standard output file *)
     (stdout="myoutput.dat")
     (* flag whether to merge stdout and stderr *)
     (join="yes")
     (* request notification on status change *)
     (notify="e …")
     (* an executable *)
     (executable=bc_run)
     (* files to be staged in before the execution *)
     (inputFiles=(run.dat ""))
     (* files to be staged out after the execution *)
     (outputFiles=(bc.hbook ""))

Slide 5: Double heavy hadron production
The job is submitted with a single command:

    ngsub -f bsubc.xrsl
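The job can then be followed and collected with the companion NorduGrid UI commands. A sketch, assuming the job is addressed by the job ID that ngsub prints (shown here as the placeholder JOBID):

    ngstat JOBID    # query the job status on the Grid
    ngcat JOBID     # inspect the standard output of the running job
    ngget JOBID     # download the output files once the job has finished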

Slide 6: Re-used JETSET/PYTHIA generation
Problem:
– A legacy FORTRAN code uses specific input cards
– Each job needs a new random seed
"Gridification" is still trivial:
– Generate the xRSL files on the fly
– Submit a shell script wrapped around the actual executable (see ffun.sh below)

Slide 7: Re-used JETSET/PYTHIA generation
Job submission script ffun.sh (the here-document redirection to ffun.xrsl, lost in the transcript, is restored):

    #!/bin/sh
    # Submits several run.sh jobs with different random seeds
    if [ $# -eq 0 ]; then
      echo ""
      echo Usage: $0 njobs nevents
      echo ""
      exit 127
    fi
    MAXCOUNT=$1
    MAXEVT=$2
    echo "Submitting $MAXCOUNT jobs, $MAXEVT events each..."
    FLOOR=10000
    count=1
    while [ "$count" -le $MAXCOUNT ]
    do
      # draw a random number above FLOOR and derive the seed from it
      number=0
      while [ "$number" -le $FLOOR ]
      do
        number=$RANDOM
      done
      let "inseed = $number*100"
      jname="flong"$count
      outname=$jname".out"
      echo "Job" $jname "started with random seed" $inseed
      let "count += 1"
      cat <<EOXRSL > ffun.xrsl
    &(executable=run.sh)
     (arguments=$inseed $MAXEVT)
     (executables=ffungen)
     (inputFiles=(ffungen ""))
     (outputFiles=(ffun.hbook ""))
     (jobName=$jname)
     (stdout=$outname)
     (join=yes)
     (maxCpuTime=100)
     (ftpThreads=6)
     (middleware="NorduGrid-0.1.6")
    EOXRSL
      ngsub -f ffun.xrsl
      sleep 10
    done
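A call such as the following (the figures are illustrative) would submit ten jobs of 1000 events each, pausing ten seconds between submissions:

    ./ffun.sh 10 1000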

Slide 8: Re-used JETSET/PYTHIA generation
Executable run.sh, submitted by ffun.sh above (the here-document was garbled in the transcript; only the seed argument $1 is clearly recoverable as card content):

    #!/bin/sh
    # write the generator input card; $1 is the random seed passed by ffun.sh
    cat <<EOF > ffun.inp
    $1
    EOF
    time ./ffungen

Slide 9: Re-used JETSET/PYTHIA generation [figure only]

Slide 10: ATLAS DC1
Problem:
– The executed binary, libraries etc. do not belong to the user: they are part of the ATLAS runtime environment
– Input files reside on [remote] storage elements
However, once the runtime environment is set up, the task becomes even easier on the Grid than with a traditional method (see the Demo later)
– Again, the most convenient method may be to produce the xRSL and steering files on the fly (a sketch follows the xRSL example below)

Slide 11: ATLAS DC1 xRSL example
(remote file URLs and parts of the dataset names were lost in the transcript; the gaps are marked with "…"):

    &(executable="$ATLAS_ROOT/bin/atlsim")
     (arguments="-w 0 -b dc….simu.0000.nordugrid.kumac partition=0001 nskip=0 ntrig=2")
     (stdout=out.txt)
     (stderr=err.txt)
     (outputFiles=("out.txt" "")
                  ("err.txt" "")
                  ("dc….simu.0001.nordugrid.zebra" "")
                  ("dc….simu.0001.nordugrid.his" ""))
     (inputFiles=("atlsim.makefile" "…")
                 ("atlas.kumac" "…")
                 ("atlsim.logon.kumac" "…")
                 ("dc….simu.0000.nordugrid.kumac" "…")
                 ("gen0016_1.root" "…"))
     (runTimeEnvironment="ATLAS-3.0.1")
     (jobName="dc….simu.0001.nordugrid")
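Such xRSL files can be generated on the fly in the same spirit as ffun.sh. The sketch below is hypothetical: STEM, the partition list and the kumac naming are illustrative placeholders, not the real DC1 values, and only a few attributes are shown:

    #!/bin/sh
    # Hypothetical sketch: emit one DC1-style xRSL file per partition and submit it.
    STEM="dcXXXX.simu"      # placeholder for the real DC1 dataset stem
    for part in 0001 0002 0003
    do
      # \$ATLAS_ROOT is escaped so the literal string ends up in the xRSL,
      # to be resolved by the runtime environment on the execution host
      cat <<EOXRSL > dc1_$part.xrsl
    &(executable="\$ATLAS_ROOT/bin/atlsim")
     (arguments="-w 0 -b $STEM.0000.nordugrid.kumac partition=$part nskip=0 ntrig=2")
     (runTimeEnvironment="ATLAS-3.0.1")
     (jobName="$STEM.$part.nordugrid")
    EOXRSL
      ngsub -f dc1_$part.xrsl
    done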

Slide 12: ATLAS DC1 [figure only]

Slide 13: Summary
NorduGrid is ready to run "traditional" applications in a Grid environment
There is still plenty of room for development: e.g. data management, production job management etc.