AppLeS / Network Weather Service IPG Pilot Project FY'98
Francine Berman, U.C. San Diego and NPACI
Rich Wolski, U.C. San Diego, NPACI, and U. of Tennessee


User's Goal = Program Performance
– IPG resources + Globus services = "abstract machine"
– Applications must be able to achieve performance on the IPG "abstract machine"
– Application scheduling promotes program performance

Scheduling makes a difference.
[Figure: compile-time blocked partitioning vs. run-time AppLeS partitioning]

Scheduling and the IPG

Usability, Integration:
– development of basic IPG/Globus infrastructure
– development of persistent IPG/Globus testbed

Performance:
– "IPG-aware" programming

Short-term:
– Application scheduling
– Experience with Pilot IPG

Medium-term:
– Resource scheduling, throughput scheduling
– Integration of schedulers and other tools, performance interfaces
– Development of prototype large-scale applications

Long-term:
– Multi-scheduling, resource economy
– Support for multiple user communities
– Development of necessary research

AppLeS = Application-Level Scheduler

AppLeS incorporates:
– application-specific information
– dynamic information
– prediction

Schedule developed to optimize user's performance measure:
– minimal execution time
– turnaround time = staging/waiting time + execution time
– other measures: precision, resolution, speedup, etc.

[Diagram: NWS (Wolski), user preferences, and an application performance model feed the planner and resource selector, which act on the application through the IPG/Globus infrastructure]
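As a concrete illustration of the application-level scheduling idea, the sketch below ranks candidate resources with a toy application-specific performance model driven by dynamic forecasts of the kind NWS supplies. The performance model, host names, and forecast values are all hypothetical, not part of the actual AppLeS code.

```python
# Toy application-level scheduler: rank candidate resources by an
# application-specific performance model fed with dynamic forecasts.

def predicted_time(work, cpu_forecast, data_bytes, bw_forecast):
    """Predicted execution time: compute time at the forecast CPU
    availability plus time to move the application's data at the
    forecast bandwidth."""
    return work / cpu_forecast + data_bytes / bw_forecast

def select_resource(candidates, work, data_bytes):
    """Pick the candidate minimizing the user's performance measure
    (here, predicted execution time)."""
    return min(candidates,
               key=lambda c: predicted_time(work, c["cpu"], data_bytes, c["bw"]))

if __name__ == "__main__":
    # Hypothetical forecasts: CPU capacity (MFLOP/s) and bandwidth (MB/s).
    candidates = [
        {"host": "hostA", "cpu": 120.0, "bw": 4.0},
        {"host": "hostB", "cpu": 200.0, "bw": 1.5},
    ]
    best = select_resource(candidates, work=1.0e4, data_bytes=120.0)
    print("schedule on:", best["host"])
```

With these numbers hostA wins despite its slower CPU, because the model charges hostB heavily for its weaker forecast bandwidth; that trade-off is the point of combining application-specific and dynamic information.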

NWS = Network Weather Service (Wolski)

NWS is a standalone project:
– used by the PACIs, labs, and others for dynamic resource information
– provides dynamic forecasts for AppLeS
– can be used as part of an IPG performance toolkit

NWS:
– monitors current system state
– provides the best forecast of resource load from multiple models

[Diagram: a sensor interface feeds measurements to a forecaster built from multiple models, which serves predictions through a reporting interface]
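The "best forecast from multiple models" point can be illustrated with a minimal sketch: replay a measurement history through several simple predictors, score each by its one-step-ahead absolute error, and forecast with the winner. The model set and scoring below are illustrative stand-ins, not the actual NWS forecaster code.

```python
# Minimal "pick the best of several models" forecaster sketch.

def last_value(history):
    return history[-1]

def running_mean(history):
    return sum(history) / len(history)

def median(history):
    s = sorted(history)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

MODELS = {"last": last_value, "mean": running_mean, "median": median}

def best_forecast(history, warmup=2):
    """Score each model by its one-step-ahead error over the history,
    then forecast with the best-scoring model."""
    errors = {name: 0.0 for name in MODELS}
    for t in range(warmup, len(history)):
        past, actual = history[:t], history[t]
        for name, model in MODELS.items():
            errors[name] += abs(model(past) - actual)
    winner = min(errors, key=errors.get)
    return winner, MODELS[winner](history)

if __name__ == "__main__":
    bandwidth = [8.1, 7.9, 8.4, 6.2, 8.0, 7.8, 8.2]  # e.g. probes in Mbit/s
    model, value = best_forecast(bandwidth)
    print(f"best model: {model}, forecast: {value:.2f}")
```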

FY98 Pilot Project: INS2D AppLeS

Building a prototype AppLeS scheduler for INS2D that will become a model for other parameter studies.

AppLeS INS2D scheduler:
– first phase targets the interactive UCSD cluster
– goal is to minimize turnaround time for the user: turnaround time = wait time + execution time (see the sketch below)
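A minimal sketch of that objective, with made-up wait and execution estimates: turnaround is predicted as wait time plus execution time for each way of running the job, and the scheduler picks the minimum.

```python
# Turnaround-time objective: wait + execution, minimized over options.
# All estimates here are hypothetical.

def turnaround(wait_minutes, exec_minutes):
    return wait_minutes + exec_minutes

options = {
    "interactive cluster": turnaround(wait_minutes=0.5, exec_minutes=40.0),
    "batch queue": turnaround(wait_minutes=25.0, exec_minutes=12.0),
}
best = min(options, key=options.get)
print(f"run on the {best}: predicted turnaround {options[best]:.1f} min")
```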

INS2D AppLeS Architecture

– AppLeS schedules work on the interactive UCSD cluster
– AppLeS tuned to leverage the underlying resource management system:
  – sockets now; Globus and batch/interactive platforms next
  – API allows expansion to other parameter study applications

[Diagram: an application-specific case generator feeds experiments through the AppLeS API to the scheduler, which dispatches them onto the resources]
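One plausible shape for the parameter-study pattern described above is a self-scheduling work queue: a case generator feeds a queue, and workers pull cases as they finish, so faster nodes naturally take more work. This is an assumption about the design, sketched with threads standing in for cluster nodes; a real deployment would dispatch over sockets (or Globus), as the slide notes.

```python
# Hypothetical self-scheduling work queue for a parameter study.
import queue
import threading

def case_generator(n_cases):
    """Stand-in for the application-specific case generator."""
    for i in range(n_cases):
        yield {"case": i, "params": {"alpha": i * 0.5}}

def worker(name, work, results):
    while True:
        try:
            case = work.get_nowait()
        except queue.Empty:
            return
        # Stand-in for running one INS2D case with these parameters.
        results.put((name, case["case"]))

work, results = queue.Queue(), queue.Queue()
for case in case_generator(8):
    work.put(case)

threads = [threading.Thread(target=worker, args=(f"node{i}", work, results))
           for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not results.empty():
    node, case_id = results.get()
    print(f"{node} ran case {case_id}")
```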

AppLeS/NWS Pilot Project FY98

AppLeS (Berman, Wolski):
– design and deployment of an INS2D AppLeS for the interactive cluster at UCSD
  – focus on socket design
  – developing a general API for a wider application class
– conduct experiments to assess performance of the INS2D AppLeS on the interactive UCSD cluster

NWS (Wolski):
– pilot project integrating NWS and Globus
  – NWS/MDS integration to be demonstrated at SC98
  – performance prediction of Globus services, focusing on GRAM startup time
– deployment of NWS on the interactive UCSD cluster to support the INS2D AppLeS

Current Status

AppLeS (Berman, Wolski):
– INS2D AppLeS designed, software being developed
– comparison experiments being designed
– plan to have some results for the 11/15 NASA Performance Workshop paper deadline

NWS (Wolski):
– NWS/MDS currently being integrated for the SC98 demo
– currently running NWS experiments to determine GRAM startup times as part of end-to-end application performance
– deploying NWS on the PACIs and other sites