Miron Livny
Computer Sciences Department, University of Wisconsin-Madison
Welcome and Condor Project Overview

A very warm welcome to the 3rd Annual ParaDyn-Condor meeting!!! (the cold weather is not part of the plan)

The Condor Project (Established ‘85)
Distributed Computing research performed by a team of 30 faculty, full-time staff, and students who
› face software engineering challenges in a UNIX/Linux/NT environment,
› are involved in national and international collaborations,
› actively interact with users,
› maintain and support a distributed production environment,
› and educate and train students.
Funding: DoD, DoE, NASA, NIH, NSF, AT&T, Intel, Microsoft, and the UW Graduate School.

A Multifaceted Project
› Harnessing the power of clusters, opportunistic and/or dedicated (Condor)
› Job management services for Grid applications (Condor-G, DaPSched)
› Fabric management services for Grid resources (Condor, GlideIns, NeST)
› Distributed I/O technology (PFS, Kangaroo, NeST)
› Job-flow management (DAGMan, Condor)
› Distributed monitoring and management (HawkEye)
› Technology for distributed systems (ClassAds, MW)
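As a minimal illustration of the job management these services build on, the sketch below shows a basic Condor submit description file; the executable, arguments, and file names are illustrative and not taken from the talk.

    # Basic Condor submit description file (sketch; names are illustrative)
    universe   = vanilla
    executable = analyze
    arguments  = input.dat
    output     = analyze.out
    error      = analyze.err
    log        = analyze.log
    queue

Running condor_submit on such a file hands the job to the submit-side daemons, which then rely on matchmaking to find a machine willing to run it.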

The Layers of Condor
[Diagram: on the Submit (user) side, the Application works through an Application Agent and a Customer Agent; on the Execute (owner) side, the Resource is managed by a Local Resource Manager and represented by an Owner Agent; a Matchmaker pairs Customer Agents with Owner Agents, and a Remote Execution Agent runs the matched job on the resource.]
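The matchmaking step in the diagram above is driven by ClassAds. The simplified sketch below shows a machine ad and a job ad; the specific attribute values are illustrative.

    # Machine ClassAd (simplified; values are illustrative)
    MyType       = "Machine"
    Arch         = "INTEL"
    OpSys        = "LINUX"
    Memory       = 256
    KFlops       = 21893
    LoadAvg      = 0.05
    KeyboardIdle = 1243
    Requirements = (LoadAvg < 0.3) && (KeyboardIdle > 15 * 60)
    Rank         = 0

    # Job ClassAd (simplified; values are illustrative)
    MyType       = "Job"
    Owner        = "someuser"
    Cmd          = "analyze"
    ImageSize    = 28000
    Requirements = (Arch == "INTEL") && (OpSys == "LINUX") && (Memory >= 128)
    Rank         = KFlops

A match occurs when each ad's Requirements expression evaluates to true in the context of the other ad; Rank then expresses preference among the acceptable matches.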

Harnessing …
› More than 300 pools with more than 8,500 CPUs worldwide
› More than 1,800 CPUs in 10 pools on our campus
› Established a “complete” production environment for the UW CMS group
› Many new NT/W2K pools (TMC)
› Adopted by the “real world” (Galileo, Maxtor, Micron, Oracle, Tigr, …)

the Grid …
› Close collaboration and coordination with the Globus Project – joint development, adoption of common protocols, technology exchange, …
› Partner in major national Grid R&D² (Research, Development and Deployment) efforts (GriPhyN, iVDGL, IPG, TeraGrid)
› Close collaboration with Grid projects in Europe (EDG, GridLab, e-Science)

[Diagram: a three-layer stack with the User/Application on top, a Grid layer in the middle, and the Fabric (processing, storage, communication) at the bottom.]

[Diagram: the same three layers, with Condor placed between the User/Application and the Grid, the Globus Toolkit providing the Grid layer, and Condor again managing the Fabric.]
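One place this Condor-over-Globus layering becomes visible to users is Condor-G. The sketch below uses the Condor-G submit syntax of that era (universe = globus with a globusscheduler attribute; later releases express the same thing with the grid universe); the gatekeeper host name is hypothetical.

    # Condor-G submit description file (sketch; gatekeeper host is hypothetical)
    universe        = globus
    globusscheduler = gatekeeper.example.edu/jobmanager-fork
    executable      = analyze
    output          = analyze.out
    log             = analyze.log
    queue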

distributed I/O …
› Close collaboration with the Scientific Data Management Group at LBL
› Provide management services for distributed data storage resources
› Provide management and scheduling services for Data Placement jobs (DaPs; a hypothetical sketch follows below)
› Effective, secure, and flexible remote I/O capabilities
› Exception handling
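As a purely hypothetical illustration of what a Data Placement (DaP) job might look like, the ClassAd-style transfer request below is a sketch; the attribute names and URLs are assumptions for illustration, not the actual DaP scheduler interface.

    # Hypothetical DaP job, written in a ClassAd-like style (all names assumed)
    dap_type = "transfer"
    src_url  = "file:/scratch/run42/output.dat"
    dest_url = "gsiftp://storage.example.edu/data/run42/output.dat"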

job flow management …
› Adoption of Directed Acyclic Graphs (DAGs) as a common job-flow abstraction
› Adoption of DAGMan as an effective solution to job-flow management (see the sketch below)
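A small DAGMan input file gives the flavor of the DAG abstraction; the node names and submit-file names are illustrative.

    # DAGMan input file: B and C run after A completes; D runs after both
    JOB A a.sub
    JOB B b.sub
    JOB C c.sub
    JOB D d.sub
    PARENT A CHILD B C
    PARENT B C CHILD D

The DAG is handed to Condor with condor_submit_dag; DAGMan itself runs as a Condor job and submits each node once its parents have completed.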

Challenges Ahead
› Ride the “Grid Wave” without losing our balance
› Leverage the talent and expertise of our faculty (Distributed I/O, Performance Monitoring, Distributed Scheduling, Networking, Security, Applications)
› Integrate “Grid technology” into effective end-to-end solutions
› Develop a framework and tools for “troubleshooting” applications, middleware, and communication services in a distributed environment
› Private networks
› Establish a “build and test” facility in support of the NSF Middleware Initiative (NMI)
› Scale our Master-Worker framework to 10,000 workers
› Re-evaluate our binary and source code distribution policies

First, get the mechanisms in place.