Participating in Globus Test-bed Activity for DØGrid
HEP Globus Testing Request - Jae Yu (x2814), 4/2/2002


4/2/2002  HEP Globus Testing Request - Jae Yu x2814

Participating in Globus Test-bed Activity for DØGrid

The UTA HEP group is playing a leading role in establishing DØGrid and remote computing systems, in addition to its ATLAS work.
Since the DØ experiment is currently taking data, it is the best platform for testing the Grid computing architecture for LHC experiments such as ATLAS.
Playing a leading role in the DØ Grid and its test-bed activity, and linking it to ATLAS, will increase UTA's chance of hosting an ATLAS Tier II computing site.
An MRI proposal to acquire sufficient hardware for a DØ Regional Analysis Center (DØRAC) has been submitted; UTA is currently the only US candidate for a DØRAC.
The UTA HEP group has been participating in various Grid test-bed activities through the ATLAS and DØ experiments, as well as independently.
The more activity we contribute, the higher the chance of hosting a Tier II site.

4/2/2002  HEP Globus Testing Request - Jae Yu x2814

Globus Remote Job Submission Test

Globus is one of the available Grid job-submission and security toolkits, but it is not a batch system.
Globus is being explored for adoption by DØ.
Globus can and must work with many different batch control systems: PBS, Condor, Condor-G, and LSF.
The HEP group has been working with ACS toward utilizing its ample Intel Linux CPU power for DØ detector simulation, but because ACS uses the LSF batch control system the process has been slow.
This works to our advantage for testing Globus remote job submission against an LSF system, as sketched below.
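As an illustration of the kind of test submission described above, the Globus Toolkit 2.0 command-line clients can hand a job to a gatekeeper's LSF job manager. This is only a sketch: the gatekeeper host name (acs-gatekeeper.uta.edu) is a hypothetical placeholder, not an actual ACS machine name.

    # Create a short-lived proxy credential from the user's Grid certificate
    grid-proxy-init

    # Submit a trivial job through the (hypothetical) gatekeeper's LSF job manager
    globusrun -r acs-gatekeeper.uta.edu/jobmanager-lsf \
        '&(executable=/bin/echo)(arguments="Hello world!")'

    # The same submission using the convenience wrapper
    globus-job-run acs-gatekeeper.uta.edu/jobmanager-lsf /bin/echo "Hello world!"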

4/2/2002  HEP Globus Testing Request - Jae Yu x2814

How does Globus Remote Job Submission work?

[Diagram: job flow among a DoE-authorized remote user (e.g. at FNAL), the central Grid certification authority, the institutional gatekeeper at UTA-ACS, the LSF batch controller, and the ACS Intel batch machines]

Globus Job Submission Sequence (a command-level sketch follows below)
1. An authorized user with a proper certificate submits a job to the gatekeeper.
2. The gatekeeper checks the certificate with the central authority.
3. The central authority returns the key granting clearance.
4. A Globus account on the gatekeeper submits the job to the batch controller.
5. The batch controller assigns a batch queue for the processing.
6. The batch machines process the job and return the output to the batch controller.
7. The batch controller reports completion back to the gatekeeper.
8. The gatekeeper sends the results to the user.
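A hedged sketch of how this sequence looks from the user's side with the GT 2.0 command-line clients; the gatekeeper contact string is again a hypothetical placeholder.

    # Steps 1-3: authenticate and submit; a job contact URL is returned
    grid-proxy-init
    JOB=$(globus-job-submit acs-gatekeeper.uta.edu/jobmanager-lsf /bin/echo "Hello world!")

    # Steps 4-7 happen on the ACS side (gatekeeper -> LSF -> batch machines);
    # the user can poll the job state in the meantime
    globus-job-status "$JOB"

    # Step 8: retrieve the job's output once it reports DONE
    globus-job-get-output "$JOB"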

4/2/2002  HEP Globus Testing Request - Jae Yu x2814

What do we need from ACS?

An ACS Intel machine to act as the Globus gatekeeper
A Globus account (normal user privileges)
The Globus toolkit must be installed on the gatekeeper as root (we need help from ACS personnel); a sketch of the gatekeeper-side setup follows this slide
Work with the ACS contact to get Globus set up, including the certificate application and other configuration
Batch jobs will be simple, non-CPU-critical "Hello world!" style jobs; testing will be sporadic and brief
Total testing duration: not clearly known yet, but presume at least a couple of months
Contact persons:
–HEP: Tomasz Wlodek & Jae Yu (x2814)
–FNAL: Gabriele Garzoglio
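A rough sketch of the gatekeeper-side pieces referred to above, assuming Globus Toolkit 2.0 defaults. The host name, the distinguished name, and the local account name are hypothetical placeholders, and the actual install should follow the GT 2.0 documentation with ACS personnel.

    # Request a host certificate for the gatekeeper machine (sent to the CA for signing)
    grid-cert-request -host acs-gatekeeper.uta.edu

    # Map the DoE-certified users' distinguished names onto the local Globus account
    # in /etc/grid-security/grid-mapfile, one line per authorized user, e.g.:
    "/O=Grid/O=Globus/OU=uta.edu/CN=Jae Yu"  globus

    # The gatekeeper service itself listens on port 2119, typically started from inetd/xinetd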

4/2/2002  HEP Globus Testing Request - Jae Yu x2814

Some detailed technical information (a quick way to verify these figures is sketched below)

The ACS Linux farm runs RH6.2, with a planned upgrade to RH7.2 sometime in the summer. Does this matter for the testing?
–No.
–The Globus toolkit 2.0 is distributed for Linux 2.x/Intel x86; the same software should work for both versions with minor configuration changes.
ACS's LSF system is version 4.1. Does this matter?
–Probably not, due to the minimal use of LSF commands by Globus
–Verifying this is part of the testing
How much disk space is the testing going to require?
–Globus toolkit: 150 MB
–Log files: ~15 KB/month
–No plan to run large executables in the first phase
Globus system stability
–No system crashes in three months of Globus testing so far
–However, no experience with LSF yet
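A minimal sketch of how the environment assumptions above could be checked on the gatekeeper node; the /opt/globus install path is a hypothetical placeholder.

    cat /etc/redhat-release   # expect Red Hat Linux 6.2 (or 7.2 after the summer upgrade)
    uname -m                  # expect an Intel x86 architecture (i686)
    lsid                      # reports the LSF version, expected to be 4.1
    df -h /opt/globus         # confirm ~150 MB free for the toolkit installation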

4/2/2002  HEP Globus Testing Request - Jae Yu x2814

How much of a security risk is this testing going to be?
–Very minimal
–The Globus Security Infrastructure (GSI) uses asymmetric public-key (PKI) mechanisms to enforce security, the same kind used to secure e-business web transactions (see the proxy-credential sketch below)
Number of people participating in the testing
–Currently 5 non-UTA participants
–Expected to grow, at a minimal level, to fewer than 20
–The UTA HEP group is a strong participant
–Not all of them will run on the UTA ACS system
What will this lead to? Is this purely a one-time test, or will it lead to significant use of UTA's supercomputer system by others (non-UTA people)?
–Two phases of testing are planned
Phase 1: Minimal "Hello World!" style testing that uses extremely insignificant resources
Phase 2: More complicated DØ experiment executables, starting several months after Phase 1
–Resource usage even for Phase 2 is not expected to be large or extensive
–This testing will likely continue as long as it is allowed to go on
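To illustrate the GSI mechanism mentioned above: every submission is authenticated with a short-lived proxy credential derived from the user's CA-signed certificate, so no passwords cross the network and access expires automatically. A minimal sketch with the GT 2.0 clients:

    # Create a proxy credential (12-hour default lifetime) from the user's certificate
    grid-proxy-init

    # Inspect the proxy: issuer, subject (distinguished name), and remaining lifetime
    grid-proxy-info

    # Destroy the proxy when finished testing
    grid-proxy-destroy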

4/2/2002  HEP Globus Testing Request - Jae Yu x2814

Proposed Operating Principles for Globus Testing

The ACS administrators have complete control over the testing executables (see the sketch below)
–If at any time the ACS administrators notice issues, the testing will shut down until a resolution is found
–If the testing seriously interferes with ACS LSF operation, the testing should stop and a resolution needs to be found
–The contact persons at UTA and Fermilab will be responsible for regulating and coordinating the flow of testing
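As a concrete illustration of that control: because every Globus-submitted job enters LSF under the local Globus account, ACS administrators can monitor or halt the testing with standard LSF commands. The account name "globus" is an assumed placeholder.

    bjobs -u globus      # list all jobs running under the Globus test account
    bkill -u globus 0    # kill every job belonging to that account, halting the test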