CPT Demo, May 10 2004


CPT Demo, May 10 2004
Build on the SC03 demo and extend it:
Phase 1: Do ROOT analysis and add BOSS, Rendezvous, and the POOL RLS catalog to the analysis workflow.
Phase 2: Add an analysis web client to the analysis front end.
Phase 3: Add an MC run-job service and the ability to submit ORCA files.
Phase 1 is certainly feasible, Phase 3 uncertain (therefore Phases 1 and 2 as backup).

People involved:
Rick Cavanaugh (UFL) - applications
Dimitri Bourilkov (UFL) - CAVES
Mandar Kulkarni (UFL) - CAVES/Sphinx
Craig Prescott (UFL) - consultant on CMS production tools
Jang Uk (UFL) - Sphinx
Laukik Chitnis (UFL) - monitoring
Conrad Steenberg (Caltech)
Michael Thomas (Caltech)
Frank v. Lingen (Caltech)
…….??

Components used and work to be done for the demo (in brackets, an estimate of how "finished" each component is):
ROOT Clarens client (100%)
Web-based client (70%)
Chimera Clarens service (100%)
Sphinx job submission client (90%)
BOSS Clarens service (90%)
Clarens Rendezvous service (85%)
Clarens file service (100%)
Clarens POOL service (90%)
MCRunJob Clarens service (??%) (contact Anzar from FNAL)
MonALISA (100%)
Web interface for JClarens (80%)
BOSS WSDL (0%)
ACL management GUI (70%)
Catalog browser interface (90%)

Open questions:
Which hosts will we use (at least one located at CERN)?
What analysis and data will we use?
Will we show a multi-user analysis?

[Diagram] Service flow for the SC03 demo (Rick's mods). Components: analysis client (ROOT + web); workflow management (McRunJob/MOP, Sphinx client) as a Clarens service; Chimera virtual data catalog as a Clarens service; Sphinx scheduler as a Clarens service, backed by the Sphinx database; RLS (RLI, with an LRC at each grid site); execution via Globus + local scheduler on a grid site's computing element and storage element; file service as a Clarens service; MonALISA monitoring feeding the MonALISA central repository.

[Diagram] Service flow for the CPT demo (Rick's mods). As in the SC03 flow: analysis client (ROOT + web); workflow management (McRunJob/MOP, Sphinx client) as a Clarens service; Chimera virtual data catalog as a Clarens service; Sphinx scheduler as a Clarens service, backed by the Sphinx database; RLS (RLI, with an LRC at each grid site); execution via Globus + local scheduler on computing and storage elements; file service as a Clarens service; MonALISA monitoring feeding the MonALISA central repository. New for CPT: VO management, look-up, the POOL RLS meta-data catalog, and BOSS job monitoring, each as a Clarens service. Note: Job = BOSS + Clarens client.
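The "Job = BOSS + Clarens-client" note means the grid job runs inside a wrapper that journals its status (BOSS) and can report back through a Clarens service. A minimal sketch of that idea, with a generic `report` callback standing in for the real BOSS journaling and Clarens job-monitor update (both are assumptions, not the actual interfaces):

```python
import subprocess
import sys
import time


def run_wrapped(cmd, report):
    """Run a job payload under a monitoring wrapper.

    `report(state, timestamp)` is a stand-in for BOSS status journaling /
    a Clarens job-monitor update; the real plumbing is not shown here.
    """
    report("RUNNING", time.time())
    proc = subprocess.run(cmd, capture_output=True, text=True)
    report("DONE" if proc.returncode == 0 else "FAILED", time.time())
    return proc.returncode


# Example: wrap a trivial payload and collect the status transitions.
events = []
rc = run_wrapped([sys.executable, "-c", "print('analysis step')"],
                 lambda state, ts: events.append(state))
```

On a real grid site the wrapper would also register the produced files with RLS/POOL before exiting, so the data becomes visible through the Clarens file service.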

Service flow (Rick's mods):
1. User authenticates.
2. User looks up which services are available.
3. User queries either:
   - the VDC for input data, and defines the application + output data, or
   - POOL for input data.
4. User sends a "job" request to the workflow manager (WM):
   4.1 WM extracts/puts the abstract "job" into the VDC.
   4.2 WM extracts input POOL information for the abstract "job".
   4.3 WM sends the abstract "job" to the scheduler; the scheduler queries RLS and MonALISA, and sends a concrete "job" back to the WM.
   4.4 WM submits the concrete job to a grid site. The job executes under a BOSS-Clarens-client wrapper; when it finishes, RLS/POOL is updated and the data is available via the Clarens file service.
5. User checks the status of the "job" by querying the BOSS job monitor.
6. User uses the Clarens file service to access ROOT files.
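The numbered flow above is, from the user's side, a sequence of XML-RPC/SOAP calls against Clarens services. A minimal Python sketch of that sequence; the endpoint URL and all method names except the standard XML-RPC introspection call `system.listMethods` (i.e. `system.auth`, `pool.query`, `wm.submit`, `boss.status`, `file.read`) are illustrative assumptions, not the real Clarens API:

```python
import xmlrpc.client


def connect(url="https://clarens.example.org:8443/clarens"):
    # Hypothetical endpoint; a real client authenticates with an X.509
    # grid certificate over HTTPS.
    return xmlrpc.client.ServerProxy(url)


def run_analysis_flow(clarens):
    """Drive one pass of the demo flow against a Clarens-style endpoint."""
    steps = []
    clarens.system.auth()                        # 1. authenticate (hypothetical)
    steps.append("auth")
    clarens.system.listMethods()                 # 2. service look-up (introspection)
    steps.append("lookup")
    inputs = clarens.pool.query("collection=demo")   # 3. POOL input query (hypothetical)
    steps.append("query")
    job_id = clarens.wm.submit({"inputs": inputs})   # 4. job request to the WM (hypothetical)
    steps.append("submit")
    clarens.boss.status(job_id)                  # 5. BOSS job-monitor query (hypothetical)
    steps.append("status")
    clarens.file.read("/out/demo.root")          # 6. fetch output via file service (hypothetical)
    steps.append("fetch")
    return steps
```

Steps 4.1-4.4 (VDC registration, scheduling against RLS/MonALISA, grid submission) happen server-side and are invisible to this client.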

[Diagram] Service flow for the CPT demo, annotated with (CMS) implementations. Abstract services: authentication, look-up, VO management, register, virtual data catalog, meta-data catalog, replica location & selection, workflow management, scheduling, execution (computing element, storage element), data collection, monitoring, analysis client. Implementations: Clarens, BOSS, Sphinx job submission, MC RunJob, POOL RLS, Chimera, Sphinx scheduling, Clarens file service, Clarens ROOT client, Java web interface client, ROOT, FAMOS, ORCA, MonALISA (GUI).

[Diagram] Our original architecture, as comparison with the demo setup. Abstract services: authentication, discovery, look-up, VO management, register, policy & accounting, virtual data catalog, meta-data catalog, replica catalog, replica location & selection, replica management (replication based on trend analysis), workflow management, scheduling, steering, execution (computing element, storage element, specification), data collection, monitoring, performance analysis, supervisor (autonomous decisions on behalf of the user), analysis versioning, resource allocation, reservation/expiration, release of resources, feedback paths, and an analysis client; most interactions are authorized. (CMS) implementations: Clarens, Chimera, Sphinx, VDT client/server, MonALISA, ROOT-Clarens/Cojac/IGUANA, POOL, RefDB, CAVES, EDG RB, ROOT, FAMOS, ORCA, Shakar, Condor/Condor-G, BOSS. Multiple applications/users will execute multiple service flows in a grid environment.