Packaging & Testing: NMI & VDT
Alain Roy
Computer Sciences Department, University of Wisconsin-Madison

In Brief: What is NMI?
› NSF Middleware Initiative
› Funding for middleware infrastructure
› Build, package, and test grid software

In Brief: What is VDT?
› Virtual Data Toolkit, from the GriPhyN project
› Grid middleware + tools for virtual data
› Supports physicists in GriPhyN, but usable by anyone
  - Currently in active use by CMS, ATLAS, EDG, and more
› Very easy to install

Why do you care?
› Condor participates in grid activities
› Condor team members are part of NMI & VDT
› More testing means better Condor and Condor-G
› NMI & VDT make it easy for you to start with grid software

NMI at Condor
› Three full-time Condor team members work on NMI
› Build & package grid software
  - Only quality software is included
  - Ensure that the versions work together (not always easy to do!)
› Test infrastructure

NMI Testing: Verification
› NMI does basic verification testing (a sample test job is sketched below)
  - Can we submit a single Globus job?
  - Can we submit a Condor-G job?
  - Can we transfer a single file?
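For illustration only, a "submit a Condor-G job" check might use an old-style Condor-G submit file along these lines; the gatekeeper host name is a placeholder, not a real NMI machine:

    # condorg-test.sub -- submit one test job through a Globus gatekeeper
    # (gatekeeper.example.edu is a placeholder host)
    universe        = globus
    globusscheduler = gatekeeper.example.edu/jobmanager-fork
    executable      = /bin/hostname
    output          = test.out
    error           = test.err
    log             = test.log
    queue

Submit it with "condor_submit condorg-test.sub" and check that test.out appears. A single-file transfer could be verified in the same spirit with globus-url-copy, e.g. "globus-url-copy file:///tmp/hello.txt gsiftp://gatekeeper.example.edu/tmp/hello.txt".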

NMI Testing: Local Grids
› NMI has seven computers that can be organized in a local-area grid
› Will expand with more computers, more architectures

NMI Testing: Local Grids
› Stress testing with DAGMan (a minimal DAG is sketched below)
  - Coordinated tests of thousands of jobs
  - Jobs distributed across grid sites
  - Tests Condor-G, Condor, Globus
› Real-life testing with a CMS application
  - Tests everything: Globus, Condor-G, Condor, file transfers…
  - Real-life workload, high stress
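To show what such a test looks like at the file level, here is a minimal, hypothetical DAGMan input file; the node and submit-file names are made up, and a real stress test would declare thousands of nodes instead of four:

    # stress.dag -- toy illustration of the DAGMan input format
    JOB  setup    setup.sub
    JOB  work1    work.sub
    JOB  work2    work.sub
    JOB  collect  collect.sub
    PARENT setup CHILD work1 work2
    PARENT work1 work2 CHILD collect

DAGMan runs it with "condor_submit_dag stress.dag"; each node's submit file can point at a different site (for example, a Condor-G submit file like the one sketched earlier), which is how jobs get distributed across grid sites.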

NMI Testing: Larger Grids
› We can attach more resources to local grids for stress testing (one possible mechanism is sketched below)
  - UW Condor pool with hundreds of nodes
  - Remote resources
› Distributed testing across NMI partner sites
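The slides do not say how the extra machines are attached; one Condor mechanism that fits the idea of borrowing another pool, such as the UW pool, is flocking. A hedged configuration sketch, with a placeholder host name standing in for the remote pool's central manager:

    # condor_config fragment on the test pool's submit machines (illustrative only)
    # When local machines are busy, let jobs flock to the larger pool
    FLOCK_TO = condor-cm.example.edu
    # The remote pool must agree: its own FLOCK_FROM / security settings
    # have to admit the test pool's submit hosts.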

VDT at Condor
› Two full-time Condor team members work on VDT
› Packaging and installation
  - NMI packages the software
  - VDT installs it beautifully
› Packaging and installation are not glamorous, but they are essential
› NMI & VDT work together and make each other better

VDT installation
› VDT installation goal:
  - You hit a button, the software is correctly installed and configured
  - (You can customize it afterwards)
› What does this depend on?
  - Excellent packaging
  - Excellent installation
  - Excellent testing

VDT example installation
› Download Pacman
› pacman -get VDT-Server
  - Answer a few questions
› pacman -get VDT-Client
› ls (note the setup scripts; a usage sketch follows below)
    condor/  ftsh/     Pacman.db      replica/    vdt/
    doc/     globus/   perl/          setup.csh   vdt-install.log
    edg/     gpt/      post-install/  setup.sh
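As a usage sketch (the exact commands are an assumption, not taken from the slides): the listing includes setup.sh and setup.csh, which put the freshly installed software into your shell environment. From the directory where pacman -get was run, something like:

    source ./setup.sh      # sh/bash users; csh users source setup.csh instead
    condor_version         # quick sanity check that the installed tools are on PATH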

VDT Testing
› Wouldn't it be nice if VDT could share testing with NMI?
  - VDT and NMI people are both on the Condor staff
  - Collaboration to share:
    - Test harness
    - Tests
    - Test infrastructure

Where can you learn more?
› NMI: contact Bill Taylor
› VDT: contact Alain Roy

Take-home message
› NMI & VDT are pushing us towards:
  - Heavily tested grid middleware
  - Supported software infrastructure
› Better support, better software