The Grid Effort at UF
Presented by Craig Prescott
Overview of UF Grid Activities
– Leadership
– Funded Grid Projects / Activities
– Middleware R&D
– Application Integration
Grid Members
Professors
– Avery
– Ranka
Researchers
– Bourilkov
– Cavanaugh
– Fu
– Kim
– Prescott
– Rodriguez
Ph.D. Students
– Chitnis (Sphinx)
– In (Sphinx)
– Kulkarni (Sphinx)
Master's Student
– Khandelwal (CODESH)
Former Master's Students
– Katageri (now at Linux Labs)
– Arbree (now at Cornell)
– Padala (now at Michigan)
Grid Leadership
Avery
– PI of GriPhyN ($11M ITR project)
– PI of iVDGL ($13M ITR project)
– Co-PI of CHEPREO
– Co-PI of UltraLight
– President of SESAPS
Ranka
– PI of Data Mining & Exploration Middleware for Grid and Distributed Computing ($1.5M ITR project)
– Project Co-Lead for Sphinx
– Senior Personnel on CHEPREO
– PI of MRI
Bourilkov
– Project Lead for CAVES and CODESH
Cavanaugh
– Project Coordinator for UltraLight
– Deputy Coordinator for GriPhyN
– Project Co-Lead for Sphinx
Prescott
– Co-organised the Boston OSG Technical Workshop
– US-CMS Production Manager
Rodriguez
– Deputy Coordinator for iVDGL
– Deployment Board Co-chair for OSG
Kim
– Project Lead for GridCAT
GriPhyN / iVDGL
– Develop the technologies & tools needed to exploit a distributed cyberinfrastructure
– Apply and evaluate those technologies & tools in challenging scientific problems
– Develop the technologies & procedures to support a persistent cyberinfrastructure
– Create and operate a persistent cyberinfrastructure in support of diverse discipline goals
Sphinx
Scheduling on a grid has unique requirements
– Information
– System
Decisions based on global views providing a Quality of Service are important
– Particularly in a resource-limited environment
Sphinx is an extensible, flexible grid middleware which
– Already implements many required features for effective global scheduling
– Provides an excellent “workbench” for future activities! (see the sketch below)
[Architecture diagram: VDT Client, Recommendation Engine, VDT Server]
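To make the global-scheduling idea concrete, here is a minimal sketch of how a recommendation engine might pick a site from a global view of resource states. This is not Sphinx's actual code or API; SiteState, the ranking policy, and all names below are invented for illustration.

# Hypothetical sketch of a grid "recommendation engine" in the spirit of
# Sphinx-style global scheduling; the ranking policy is an assumption,
# not taken from the Sphinx codebase.
from dataclasses import dataclass

@dataclass
class SiteState:
    name: str         # grid site identifier
    free_cpus: int    # CPUs currently idle at the site
    queued_jobs: int  # jobs already waiting in the site's queue

def rank(site: SiteState, job_cpus: int) -> float:
    """Score a site for a job needing job_cpus CPUs; higher is better."""
    if site.free_cpus < job_cpus:
        return float("-inf")  # site cannot satisfy the request at all
    # Prefer spare capacity and short queues (a simple QoS-style policy).
    return site.free_cpus - job_cpus - 0.5 * site.queued_jobs

def recommend(global_view: list[SiteState], job_cpus: int) -> SiteState | None:
    """Pick the best site from a global view of resource states."""
    best = max(global_view, key=lambda s: rank(s, job_cpus), default=None)
    if best is None or rank(best, job_cpus) == float("-inf"):
        return None  # no site can run the job right now
    return best

if __name__ == "__main__":
    view = [SiteState("UF", 12, 3), SiteState("FNAL", 40, 25), SiteState("Caltech", 2, 0)]
    choice = recommend(view, job_cpus=8)
    print(choice.name if choice else "no site available")

A global view like this is what distinguishes the approach from purely local queue managers: the decision is made across all sites at once, which is where QoS policies can be enforced.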
CAVES & CODESH
– Concentrate on the interactions between scientists collaborating over extended periods of time
– Seamlessly log, exchange and reproduce results and the corresponding methods, algorithms and programs
– Automatic and complete logging and reuse of work or analysis sessions (between checkpoints)
– Extend the power of users working or performing analyses in their habitual way, giving them virtual data capabilities
– Build functioning collaboration suites (stay close to users!)
– First prototypes use popular tools: Python, ROOT and CVS; e.g. all ROOT commands and CAVES commands available (a minimal sketch follows)
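The following toy illustrates the core idea of logging a session and committing it at a checkpoint so a collaborator can check it out and replay it. The SessionLog class and its methods are hypothetical; the real prototypes integrate with ROOT and CVS far more deeply than this sketch shows.

# Illustrative sketch of "log and reproduce an analysis session between
# checkpoints"; names are invented, not taken from the CAVES/CODESH code.
import subprocess
from pathlib import Path

class SessionLog:
    def __init__(self, logfile: str = "session.log"):
        self.path = Path(logfile)
        self.commands: list[str] = []

    def record(self, command: str) -> None:
        """Log every command the user issues, without changing their workflow."""
        self.commands.append(command)

    def checkpoint(self, tag: str) -> None:
        """Write the session so far and commit it to the shared repository.
        Assumes the log file is already under CVS control."""
        self.path.write_text("\n".join(self.commands) + "\n")
        subprocess.run(["cvs", "commit", "-m", f"checkpoint {tag}", str(self.path)],
                       check=True)

    def replay(self) -> None:
        """Re-execute a logged session to reproduce the result."""
        namespace: dict = {}
        for cmd in self.path.read_text().splitlines():
            exec(cmd, namespace)  # toy replay of logged Python commands

if __name__ == "__main__":
    log = SessionLog()
    log.record("x = 2 + 2")
    log.record("print(x)")
    log.checkpoint("demo")  # requires a configured CVS working directory

Using an ordinary version-control system as the session store is what lets users keep working in their habitual way while still gaining exchange and reproducibility for free.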
Grid-enabled Analysis Environment
Grid3
Task Force
– Rodriguez, member
Operations Group
– Online expert consultants: Prescott, Kim, Rodriguez
Site Verification
– Validates the grid middleware installation on a grid site (see the sketch below)
– Prescott led development
Monitoring
– Kim, Prescott: members of the Grid3 monitoring group, and also of the OSG monitoring technical group
Grid3-Dev
– Fu, Rodriguez, Prescott
– Prescott maintains the Grid3-Dev deployment at UF
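As a rough illustration of what site verification involves, here is a hedged sketch of a few basic checks. The actual Grid3 verification tool ran a much larger battery of tests; the specific checks and the hostname below are assumptions for illustration only.

# Hypothetical sketch of Grid3-style site verification checks.
import shutil
import socket

def check_command(name: str) -> bool:
    """Verify a middleware client command is installed and on PATH."""
    return shutil.which(name) is not None

def check_port(host: str, port: int, timeout: float = 5.0) -> bool:
    """Verify a grid service port answers (e.g., the Globus gatekeeper)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def verify_site(gatekeeper_host: str) -> None:
    checks = {
        "globus-job-run on PATH": check_command("globus-job-run"),
        "globus-url-copy on PATH": check_command("globus-url-copy"),
        "gatekeeper port 2119 open": check_port(gatekeeper_host, 2119),
        "GridFTP port 2811 open": check_port(gatekeeper_host, 2811),
    }
    for name, ok in checks.items():
        print(f"{'PASS' if ok else 'FAIL'}: {name}")

if __name__ == "__main__":
    verify_site("gatekeeper.example.edu")  # hypothetical host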
GridCAT
Open Science Grid
Avery: senior member
– Governance Board
– Steering Committee?
Rodriguez: Deployment Board Co-chair
Prescott, Kim: Monitoring Technical Group members
OSG Integration Group
– Kim, Prescott, Rodriguez: members
– Prescott, Rodriguez: SRM server integration activity
CHEPREO & UltraLight
GEMS
In-VIGO
FLR
HPC and UF Campus Grid
US-CMS Grid Testbed
CMS Computing
UF coordinates the US-CMS production effort
Over the past year, Prescott oversaw and was responsible for
– 40% of the global CMS detector simulation
– 25% of the global CMS event digitisation
Prescott also assists in publishing the MC data to the FNAL Tier-1 and UF Tier-2 sites
Effort started in 2001 (pre-grid) with Bourilkov and Rodriguez
– Produced all phases, including pile-up (PU)
Workshops
– Digital Divide Workshop, Brazil: Avery
– OSG Technical Workshop, Boston: Prescott
– PNPA GGF, Berlin: Cavanaugh
– UTB Grid Summer School: Rodriguez, Padala
CMS Application Integration
– Production and virtual data
– Analysis and virtual data
– ORCA on Grid3 in analysis mode
Outreach
– FIU Grid3 cluster: Rodriguez
– University of Chicago US-ATLAS Tier-2 facility: Rodriguez
– Brazil: Rodriguez
– Korea: Kim
Publications
CHEP
– Virtual data in CMS analysis
– Virtual data in CMS production
– UF prototype Tier-2 facility
– GridCAT
– Sphinx
The Grid II
– Federated analysis for HEP
IPDPS
– Policy-based scheduling
Plans
Distribution of user MC production data on Grid3
– Kim is developing a web portal…
Conclusion