Welcome to CW 2008!!!

The Condor Project (Established ’85)
Distributed Computing research performed by a team of ~35 faculty, full-time staff and students who:
› face software/middleware engineering challenges in a UNIX/Linux/Windows/OS X environment,
› are involved in national and international collaborations,
› interact with users in academia and industry,
› maintain and support a distributed production environment (more than 4000 CPUs at UW),
› and educate and train students.

“… Since the early days of mankind the primary motivation for the establishment of communities has been the idea that by being part of an organized group the capabilities of an individual are improved. The great progress in the area of inter-computer communication led to the development of means by which stand-alone processing sub-systems can be integrated into multi-computer ‘communities’. …”
Miron Livny, “Study of Load Balancing Algorithms for Decentralized Distributed Processing Systems,” Ph.D. thesis, July 1983.

Main Threads of Activities
› Distributed Computing Research – develop and evaluate new concepts, frameworks and technologies
› Keep Condor “flight worthy” and support our users
› The Open Science Grid (OSG) – build and operate a national High Throughput Computing infrastructure
› The Grid Laboratory Of Wisconsin (GLOW) – build, maintain and operate a distributed computing and storage infrastructure on the UW campus
› The NSF Middleware Initiative – develop, build and operate a national Build and Test facility powered by Metronome

Future of Grid Computing
Miron Livny
Computer Sciences Department
University of Wisconsin-Madison

The Talmud says in the name of Rabbi Yochanan, “Since the destruction of the Temple, prophecy has been taken from prophets and given to fools and children.” (Baba Batra 12b)

The Grid Computing Movement
I believe that, as a movement, grid computing has run its course.
› No longer an easy source of funding
› No longer an easy way to get the “troops” mobilized
› No longer an easy sell for software tools
› No longer an easy way to get your papers published or your press releases posted

Introduction
“The term “the Grid” was coined in the mid 1990s to denote a proposed distributed computing infrastructure for advanced science and engineering [27]. Considerable progress has since been made on the construction of such an infrastructure (e.g., [10, 14, 36, 47]) but the term “Grid” has also been conflated, at least in popular perception, to embrace everything from advanced networking to artificial intelligence. One might wonder if the term has any real substance and meaning. Is there really a distinct “Grid problem” and hence a need for new “Grid technologies”? If so, what is the nature of these technologies and what is their domain of applicability? While numerous groups have interest in Grid concepts and share, to a significant extent, a common vision of Grid architecture, we do not see consensus on the answers to these questions.”
“The Anatomy of the Grid – Enabling Scalable Virtual Organizations,” Ian Foster, Carl Kesselman and Steven Tuecke, 2001.

Distributed Computing
Distributed computing is here to stay, and will continue to evolve as processing, storage and communication resources get more powerful and cheaper.
› Big science is inherently distributed
› Most scientific disciplines (and many commercial sectors) depend on High Throughput Computing (HTC) capabilities (a minimal submit-file sketch follows this list)
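To make the HTC idea concrete, here is a minimal sketch of a Condor submit description that queues many independent jobs. The executable and input/output file names are hypothetical placeholders, not taken from the slides.

  # sweep.sub -- minimal HTC sketch; "analyze" and the data file
  # names are hypothetical placeholders
  universe    = vanilla
  executable  = analyze
  arguments   = input_$(Process).dat
  output      = out_$(Process).txt
  error       = err_$(Process).txt
  log         = sweep.log
  queue 1000

A single condor_submit sweep.sub places 1000 independent jobs in the queue; Condor then keeps as many of them running as matching resources allow, which is the throughput-over-time view that defines HTC.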

Keynote 3: When All Computing Becomes Grid Computing
Speaker: Prof. Daniel A. Reed, Chancellor’s Eminent Professor; Director, Renaissance Computing Institute, University of North Carolina at Chapel Hill
Abstract: Scientific computing is moving rapidly from a world of “reliable, secure parallel systems” to a world of distributed software, virtual organizations and high-performance, though unreliable, parallel and distributed systems with few guarantees of availability and quality of service. In addition, a tsunami of new experimental and computational data poses equally vexing problems in analysis, transport, visualization and collaboration. This transformation poses daunting scaling and reliability challenges and necessitates new approaches to collaboration, software development, performance measurement, system reliability and coordination. This talk describes Renaissance approaches to solving some of today’s most challenging scientific and societal problems using Grids and parallel systems, supported by rich tools for performance analysis, reliability assessment and workflow management.

As we return to the fundamentals and stay away from hype and the technologies of the moment, we will advance the state of the art in distributed computing.

Our HTC Community is Stronger than Ever

Downloads per month

Fractions per month

Language Weaver: Executive Summary
› Incorporated in 2002 – a USC/ISI startup that commercializes statistics-based machine translation software
› Continuously improved offering in terms of language-pair coverage and translation quality
  – More than 50 language pairs
  – Center of excellence in Statistical Machine Translation and Natural Language Processing

IT Needs
The Language Weaver machine translation systems are trained automatically on large amounts of parallel data.
› Training/learning processes implement workflows with hundreds of steps, which consume thousands of CPU hours and generate hundreds of gigabytes of data
› Robust, fast workflows are essential for rapid experimentation cycles

Solution: Condor
Condor-based workflows manage thousands of atomic computational steps per day (sketched below).
Advantages:
– Robustness: good recovery from failures
– Well-balanced utilization of the existing IT infrastructure
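As an illustration only (the slides do not show Language Weaver’s actual pipelines), a multi-step training workflow of this shape could be expressed as a small Condor DAGMan input file; the job names and submit files here are hypothetical placeholders.

  # translate.dag -- hedged sketch of a three-step training workflow;
  # PREP prepares parallel data, TRAIN builds the model, EVAL scores it
  JOB PREP  prep.sub
  JOB TRAIN train.sub
  JOB EVAL  eval.sub
  PARENT PREP  CHILD TRAIN
  PARENT TRAIN CHILD EVAL
  # resubmit the training step up to three times on failure
  RETRY TRAIN 3

Submitted with condor_submit_dag translate.dag, DAGMan enforces the step ordering and, together with its retry and rescue-DAG mechanisms, provides the recovery-from-failure behavior noted above.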

The Road Ahead
› Green Computing
› Computing in the Clouds
› “Launch and Leave” Computing
› Turn-on of the LHC
› Broader and larger community of contributors
› More and bigger campus grids
› Fetching work from “other” sources (see the configuration sketch after this list)
› Multi-Core nodes
› Low latency and short jobs
› Staging data through Storage Elements
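For the “fetching work” item, a minimal configuration sketch follows, assuming the startd job-hook mechanism Condor introduced around this era; the hook keyword FOO and the script path are site-specific placeholders.

  # condor_config.local -- hedged sketch of work-fetch hooks; the
  # keyword FOO and the script path are placeholders
  STARTD_JOB_HOOK_KEYWORD = FOO
  # invoked when a slot is available; the script prints a job ClassAd
  # on stdout when it has work to hand to the slot
  FOO_HOOK_FETCH_WORK = /usr/local/condor/hooks/fetch_work

With such hooks a startd can pull work from a local queue, a web service or another batch system instead of waiting for a condor_schedd to push jobs to it.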

Thank you for building such a wonderful community