Welcome!!! Condor Week 2006

Celebrating 20 years since we first installed Condor in our department

The Condor Project (Established ’85)
Distributed Computing research performed by a team of ~40 faculty, full-time staff and students who:
• face software/middleware engineering challenges in a UNIX/Linux/Windows/OS X environment,
• are involved in national and international collaborations,
• interact with users in academia and industry,
• maintain and support a distributed production environment (more than 3800 CPUs at UW),
• and educate and train students.
Funding – DOE, NIH, NSF, INTEL, Micron, Microsoft and the UW Graduate School

Software · Functionality · Research · Support

“… Since the early days of mankind the primary motivation for the establishment of communities has been the idea that by being part of an organized group the capabilities of an individual are improved. The great progress in the area of inter-computer communication led to the development of means by which stand-alone processing sub-systems can be integrated into multi-computer ‘communities’. …”
Miron Livny, “Study of Load Balancing Algorithms for Decentralized Distributed Processing Systems,” Ph.D. thesis, July 1983.

Main Threads of Activities
› Distributed Computing Research – develop and evaluate new concepts, frameworks and technologies
› The Open Science Grid (OSG) – build and operate a national distributed computing and storage infrastructure
› Keep Condor “flight worthy” and support our users
› The NSF Middleware Initiative (NMI) – develop, build and operate a national Build and Test facility
› The Grid Laboratory Of Wisconsin (GLOW) – build, maintain and operate a distributed computing and storage infrastructure on the UW campus

Downloads per month (X86/Linux and X86/Windows)

Condor-Users – messages per month, and Condor Team contributions

The past year:
› Two Ph.D. students graduated:
  • Tevfik Kosar went to LSU
  • Sonny (Sechang) Son went to NetApp
› Three staff members left to start graduate studies
› Released Condor
› Released Condor
› Contributed to the formation of the Open Science Grid (OSG) consortium and the OSG Facility
› Interfaced Condor with BOINC
› Started the NSF-funded CondorDB project
› Released Virtual Data Toolkit (VDT)
› Distributed five instances of the NSF Middleware Initiative (NMI) Build and Test facility

The search for SUSY
› Sanjay Padhi is a UW Chancellor Fellow who is working in the group of Prof. Sau Lan Wu at CERN
› Using Condor technologies he established a “grid access point” in his office at CERN
› Through this access point he managed to harness, in 3 months (12/05–2/06), more than 500 CPU years from the LHC Computing Grid (LCG), the Open Science Grid (OSG) and UW Condor resources

GAMS/Grid + CPLEX
› Work of Prof. Michael Ferris from the optimization group in our department
› Commercial modeling system – abundance of real-life models to solve
› Any model types allowed:
  • Scheduling problems
  • Radiotherapy treatment planning
  • World trade (economic) models
› New decomposition features facilitate use of a grid/Condor solution
› Mixed Integer Programs can be extremely hard to solve to optimality
› MIPLIB has 13 unsolved examples

Tool and expertise combined
› Various decomposition schemes coupled with:
  • Fastest commercial solver – CPLEX
  • Shared file system / condor_chirp for inter-process communication
  • Sophisticated problem-domain branching and cuts
› Takes over 1 year of computation and goes nowhere – but knowledge gained!
› Adaptive refinement strategy
› Dedicated resources
› “Timtab2” and “a1c1s1” problems solved to optimality (using over 650 machines running tasks, each of which takes between 1 hour and 5 days)
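To make the decomposition idea above concrete, here is a minimal, hypothetical sketch (in Python, not GAMS/CPLEX) of how a hard MIP search can be split into independent subproblems by fixing a few binary branching variables, so that each piece could run as its own Condor job. The variable names and structure are illustrative assumptions, not the actual GAMS/Grid implementation.

```python
# Illustrative sketch: split a MIP's search space into independent
# subproblems by fixing a handful of binary variables.
from itertools import product

def decompose(branch_vars):
    """Enumerate all 0/1 fixings of the chosen branching variables.

    Each fixing defines one independent subproblem; solving every
    subproblem and keeping the best incumbent solves the original MIP.
    """
    for values in product((0, 1), repeat=len(branch_vars)):
        yield dict(zip(branch_vars, values))

def best_incumbent(results):
    """Combine worker results (objective values); minimization assumed."""
    return min((r for r in results if r is not None), default=None)

if __name__ == "__main__":
    # Fixing 3 binary variables yields 2**3 = 8 subproblems to farm out,
    # e.g. one grid job per fixing.
    jobs = list(decompose(["x1", "x7", "x13"]))
    print(len(jobs), "subproblems; first fixing:", jobs[0])
```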

Function Shipping, Data Shipping, or maybe simply Object Shipping?

Customer requests: Place S at L! System delivers.

Basic step for placing S at L:
1. Allocate size(y) at L
2. Allocate resources (disk bandwidth, memory, CPU, outgoing network bandwidth) on S
3. Allocate resources (disk bandwidth, memory, CPU, incoming network bandwidth) on L
4. Match S and L
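As a rough illustration of these four steps, the sketch below models S and L as plain Python dictionaries and reserves space and transfer slots all-or-nothing before declaring a match. The field names and the place() function are assumptions made for illustration, not Stork's real interface.

```python
# Sketch of the "basic step": check every requirement, then commit.
def place(y_size, S, L):
    """Reserve everything needed to move object y from S to L (all-or-nothing)."""
    if L["free_space"] < y_size:      # 1. space for y at L
        return False
    if S["outgoing_slots"] <= 0:      # 2. transfer resources on S
        return False
    if L["incoming_slots"] <= 0:      # 3. transfer resources on L
        return False
    # 4. S and L are matched: commit the reservations.
    L["free_space"] -= y_size
    S["outgoing_slots"] -= 1
    L["incoming_slots"] -= 1
    return True

S = {"outgoing_slots": 4}
L = {"free_space": 10_000, "incoming_slots": 2}
print(place(2_500, S, L))   # True: both ends had capacity
```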

Or in other words, it takes two (or more) to Tango (or to place data)!

When the “source” plays “nice”, it “asks” for permission to place data at the “destination” in advance

“I am L and I have what it takes to place a file.”
“I am S and am looking for an L to place a file.”
Match! Match!
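The exchange above is ClassAd-style matchmaking. Below is a toy sketch of that idea, assuming plain dictionaries with a Requirements callable stand in for real ClassAds; the attribute names are made up for illustration and are not Condor's actual ClassAd language.

```python
# Toy matchmaker: two ads match only when each side accepts the other.
destination_ad = {
    "Type": "Destination",          # "I am L and I have what it takes..."
    "FreeSpaceMB": 50_000,
    "Requirements": lambda other: other["Type"] == "Source",
}
source_ad = {
    "Type": "Source",               # "I am S and am looking for an L..."
    "FileSizeMB": 1_200,
    "Requirements": lambda other: (other["Type"] == "Destination"
                                   and other["FreeSpaceMB"] >= 1_200),
}

def match(a, b):
    """Symmetric match: each ad's Requirements must accept the other ad."""
    return a["Requirements"](b) and b["Requirements"](a)

print("Match!" if match(source_ad, destination_ad) else "No match")
```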

The SC’05 effort
Joint with the Globus GridFTP team

Stork controls the number of outgoing connections; the destination advertises its incoming connections.
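A minimal sketch of this flow-control idea, assuming the destination's advertised capacity arrives as a simple integer: the sender never opens more parallel transfers than the smaller of its own limit and what the destination advertised. The function and parameter names are hypothetical, not Stork's or GridFTP's real API.

```python
# Sketch: batch transfers so parallelism never exceeds either side's limit.
def schedule_transfers(files, advertised_incoming, local_outgoing_limit):
    """Yield waves of files sized to the smaller of the two limits."""
    width = min(advertised_incoming, local_outgoing_limit)
    for i in range(0, len(files), width):
        yield files[i:i + width]    # one batch = one wave of connections

for batch in schedule_transfers([f"file{i}.dat" for i in range(10)],
                                advertised_incoming=3,
                                local_outgoing_limit=5):
    print("transferring in parallel:", batch)
```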

A Master-Worker view of the same effort

Master, Workers, and Files for Workers (diagram)
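A minimal master/worker sketch of the same picture, with a thread-safe queue standing in for the master's list of files and threads standing in for workers. This only illustrates the pattern; it is not how the SC'05 setup was actually implemented.

```python
# Master/worker sketch: the master holds a queue of files, each worker
# repeatedly takes one and "places" it.
import queue
import threading

work = queue.Queue()
for name in [f"file{i}.dat" for i in range(8)]:
    work.put(name)                     # master enqueues files for workers

def worker(wid):
    while True:
        try:
            item = work.get_nowait()   # ask the master for the next file
        except queue.Empty:
            return                     # nothing left to do
        print(f"worker {wid} placed {item}")
        work.task_done()

threads = [threading.Thread(target=worker, args=(w,)) for w in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```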

When the “source” does not play “nice”, the destination must protect itself

NeST manages storage space and connections for a GridFTP server with commands like:
• ADD_NEST_USER
• ADD_USER_TO_LOT
• ATTACH_LOT_TO_FILE
• TERMINATE_LOT
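To illustrate what those commands manage, here is a toy lot manager that mirrors them in Python; it only sketches the idea of users, storage “lots”, and files attached to lots, and is not NeST's actual implementation or protocol.

```python
# Toy storage-lot manager mirroring the command names listed above.
class LotManager:
    def __init__(self):
        self.users, self.lots = set(), {}

    def add_nest_user(self, user):
        self.users.add(user)                       # ADD_NEST_USER

    def add_user_to_lot(self, user, lot, size_mb):
        if user not in self.users:
            raise ValueError("unknown user")
        self.lots[lot] = {"owner": user, "size_mb": size_mb, "files": []}

    def attach_lot_to_file(self, lot, filename):
        self.lots[lot]["files"].append(filename)   # ATTACH_LOT_TO_FILE

    def terminate_lot(self, lot):
        return self.lots.pop(lot)                  # space is reclaimed

mgr = LotManager()
mgr.add_nest_user("alice")
mgr.add_user_to_lot("alice", "lot1", size_mb=500)
mgr.attach_lot_to_file("lot1", "events.dat")
print(mgr.terminate_lot("lot1"))
```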

Chirp and GridFTP

Thank you for building such a wonderful community