Open Science Grid and Gluex
Richard Jones
Gluex Collaboration Meeting, Newport News, Jan. 28-30, 2010


Review
OSG is a community of scientific collaborations that deploy a common software infrastructure for managing global access to shared storage and computing resources. Member experiments include CMS, ATLAS, LIGO, STAR, ALICE, CIGI, DOSAR, D0, HyperCP, I2U2, IceCube, KTeV, MiniBooNE, MINERvA, MINOS, MIPP, NuMI, and NOvA, together with a number of scientific collectives (CompBioGrid, NEBioGrid, astro, GLOW, GROW, GPN, and about a dozen more). Gluex joined as a new VO in Sept. 2009, and has gained its first experience with Gluex production work on OSG over the last couple of months.

Present status
The Gluex VO has been in active operation since 12/1/2009. So far UConn is the only contributor of Gluex VO resources:
- 384 available job slots
- 5 TB of available grid storage for Gluex work
- 200,000 cpu-hr/mo., with ~50% used by local users
- opportunistic scheduling: local users get first priority
UConn-OSG is running jobs from SBGrid, LIGO, and others. In exchange, Gluex users get access to the full OSG grid (one user is getting ~1000 simultaneous job slots this week).

Present status
This is not a free lunch: there is no strict accounting of the jobs run by a VO's members vs. the resources contributed by that VO, but each site selects the VOs it will accept jobs from, so fair play is expected. The advantage is that you spread your load out in time while gaining high instantaneous turn-around.
Example: Blake Leverington's eta, pi0 simulation in 12/2009
- needed: one week of data in this channel
- generate + filter + simulate + strip + analyze -> root files
- altogether about 10 khr of cpu time + 500 GB of storage
- all jobs were taken by UConn-OSG
- broken into ~1000 jobs of 10 hr each
- completed in ~48 hr
(A sketch of how such a set of jobs might be submitted follows below.)
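For concreteness, here is a minimal sketch of one way a job set like this could be submitted from an OSG submit host using a Condor-G submit description. The gatekeeper name (grid.example.edu) and the run_sim.sh wrapper are hypothetical placeholders, not the actual setup used for this example; voms-proxy-init and condor_submit are standard grid tools.

#!/bin/bash
# Sketch only: split a long simulation into ~1000 grid jobs using Condor-G.
# The gatekeeper name and the run_sim.sh wrapper are hypothetical placeholders.

# obtain a grid proxy carrying Gluex VO membership
voms-proxy-init -voms Gluex

# write a minimal Condor-G submit description
cat > simulation.sub <<'EOF'
universe      = grid
# hypothetical gatekeeper; substitute a real OSG compute element
grid_resource = gt2 grid.example.edu/jobmanager-condor
# run_sim.sh wraps generate + filter + simulate + strip + analyze for one slice of events
executable    = run_sim.sh
arguments     = $(Process)
output        = sim_$(Process).out
error         = sim_$(Process).err
log           = simulation.log
queue 1000
EOF

# submit the whole set as ~1000 jobs of ~10 hr each
condor_submit simulation.sub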

Blake's blog
Blake wrote a wiki page detailing everything he learned while getting his jobs set up to run on the grid: "HOWTO get your jobs to run on the Grid".

Where are we now?
Installing the OSG client package in a JLab CUE area:
- recently needed some data from the JLab MSS
- the cc.jlab.org documentation says to use srm
- looked for the srm commands on the CUE – not found (more on this later)
- installed OSG-Client in ~jonesrt (about 850 MB of quota!)
- SP volunteered some space in /app for it; RJ will install and maintain it, accessible to anyone with a CUE account
This provides a streamlined way for someone to get started using the grid without having to install a lot of software, and it is especially convenient for Gluex members at JLab. (A sketch of a first session with the client follows below.)
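To make the "streamlined way to get started" concrete, a first session with the client might look roughly like the sketch below. The install path under /app and the storage-element URL are hypothetical placeholders (the actual paths will be announced once the /app installation is in place); sourcing setup.sh, voms-proxy-init, and srmcp are standard parts of the OSG client tool set.

# set up the OSG client environment (hypothetical install path under /app)
source /app/osg-client/setup.sh

# create a grid proxy carrying Gluex VO membership
# (requires a personal grid certificate registered with the Gluex VO)
voms-proxy-init -voms Gluex

# example: copy one file from a grid storage element to the current directory
# using the SRM client; the storage element host and path are placeholders
srmcp -2 "srm://grid.example.edu:8443/srm/v2/server?SFN=/Gluex/mc/sample.root" \
      "file:///$PWD/sample.root"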

What's next?
This resource is in place now, available to Gluex students, postdocs, and anyone else looking for available cycles and fast turn-around for jobs with data-intensive or compute-hungry requirements:
- run physics simulation and analysis
- run engineering models with fine-mesh grids
- publish large data sets with high-bandwidth access

What's coming soon?
Competition for compute/data resources is modest right now, but the day is not far away when that will change. We want to be in a position then to marshal shared resources as a collaboration, to carry out a variety of parallel simulation + reconstruction + analysis programs. JLab will provide a set of top-tier services (data warehouse, major mover of raw data for calibration and "cooking"), but we do not want our students' analysis jobs to sit in the queue behind "mass production" jobs for Gluex, CLAS12, and the other halls. We need a plan for moving and storing working data sets off-site, and for sharing globally accessible compute and storage resources.

Moving data from JLab off-site?
This seems not to be well thought through at present. In general, moving data between on-site and off-site is a complicated dance whose steps change from time to time. Users have scripts; my scripts that worked last time don't work this time. My latest script (runs on ifarml1):
- check for space on scratch to hold the data I want to stage
- Gluex space is near the quota limit, so send an automatic email to Mark Ito
- sleep for 6 hours, and repeat
- once there is space, stage the data from the MSS quickly before the space disappears
- send an email to myself to run the script on jlabs1 that pushes the data from scratch to UConn
- run the script on jlabs1
- forget to delete the files from scratch (you may need them again soon)
(A sketch of such a staging script is shown after this list.)
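Here is a schematic bash version of the staging loop described above; it is a sketch, not the actual script. The scratch path, quota threshold, file list, and mail addresses are placeholders, and the exact jcache usage should be checked against the current cc.jlab.org documentation.

#!/bin/bash
# Schematic staging loop (runs on ifarml1); paths, threshold, and addresses are placeholders.
SCRATCH=/scratch/halld/staging      # hypothetical scratch area
NEEDED_GB=500                       # space required for the data set
FILELIST=files_to_stage.txt         # list of /mss/... paths to pull from tape
ADMIN=mark@example.org              # placeholder for Mark Ito's address
ME=me@example.org                   # placeholder for my own address

# wait until enough scratch space is free, nagging by email while we wait
while true; do
    free_gb=$(df -P -BG "$SCRATCH" | awk 'NR==2 {gsub("G","",$4); print $4}')
    [ "$free_gb" -ge "$NEEDED_GB" ] && break
    echo "Gluex scratch is near quota, please make room" | mail -s "scratch space" "$ADMIN"
    sleep 21600   # 6 hours
done

# stage the files from the MSS into scratch before the space disappears
# (check cc.jlab.org documentation for the exact jcache invocation)
while read -r mssfile; do
    jcache "$mssfile"
done < "$FILELIST"

# remind myself to run the script on jlabs1 that pushes the staged data to UConn
echo "scratch is staged, run the push-to-UConn script on jlabs1" | mail -s "stage complete" "$ME"

# (deliberately leave the files on scratch; they may be needed again soon)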

Outlook and summary
- OSG is a very active and growing community, with an extensive network of support (weekly VO conference calls, online "chat" help desk, ticket system, meetings).
- Fermilab is hosting an "All Hands" meeting of OSG users in March; RJ has been invited to chair a session.
- I am ready to help students and others get started using OSG resources for Gluex.
- The offline computing requirements that we communicate to the JLab Computing Group need to reflect our need to exchange data efficiently with outside institutions.