VGES Demonstrations Andrew A. Chien, Henri Casanova, Yang-suk Kee, Richard Huang, Dionysis Logothetis, and Jerry Chou CSE, SDSC, and CNS University of California, San Diego VGrADS Site Visit April 28, 2005

vgES: Prototype Research Infrastructure
We have developed a functional prototype of vgES (vgES 0.7, March 2005).
Two demonstrations:
- vgFAB: Finding and Selecting Resources
- vgES: Full application run on the VGrADS testbed

The vgES Research Prototype
[Architecture diagram, built up over four slides. Components: Application, vgDL, vgAgent, vgFAB, vgLAUNCH, Resource Info; vgLAUNCH binds the selected resources and the resulting VG is returned to the application.]

Demonstration #1: vgFAB
[The same architecture diagram, with vgFAB highlighted; a simple vgDL query program plays the role of the application.]

VGrADS and TeraGrid Resources
[Resource map: the VGrADS testbed sites UCSD, UH, UTK, and Rice (Xeon, Pentium III, Itanium), plus the TeraGrid sites ANL, CalTech, NCSA, SDSC, PSC, ORNL, TACC, and Purdue (Itanium, Alpha, Altix, Power4, Power3, Xeon, Pentium 4, UltraSparc).]

A simple vgDL query (over the resource map above):
VG1 = ClusterOf(node) [4:64] {
  node = [ (Processor == Xeon) && (Clock >= 1000) && (Memory >= 1000) ]
}
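
(Illustration only, not part of vgES: a minimal Python sketch of how a node clause like the one in VG1 could be checked against a host catalog. The attribute names and the candidate_hosts entries are invented for this example.)

    # Hypothetical matching of a vgDL-style node clause against a small host
    # catalog; attribute names and hosts are invented for illustration.
    def matches_vg1(host):
        """VG1 node constraint: Xeon processor, >= 1000 MHz, >= 1000 MB."""
        return (host["processor"] == "Xeon"
                and host["clock_mhz"] >= 1000
                and host["memory_mb"] >= 1000)

    candidate_hosts = [
        {"site": "UCSD", "processor": "Xeon",    "clock_mhz": 2400, "memory_mb": 2048},
        {"site": "UTK",  "processor": "Itanium", "clock_mhz": 1300, "memory_mb": 4096},
        {"site": "UH",   "processor": "Xeon",    "clock_mhz": 800,  "memory_mb": 1024},
    ]

    selected = [h for h in candidate_hosts if matches_vg1(h)]
    # VG1 additionally asks for ClusterOf(node)[4:64]; a real selector would
    # also require 4 to 64 matching nodes drawn from a single cluster.
    print(selected)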

Switch to live demo

A simple vgDL query
[The same resource map and VG1 query, repeated after the live demo.]

A more complex vgDL query (over the same resource map):
VG2 = rsc1 = ClusterOf(node)[4:64] {
        node = [ (Processor == Xeon) && (Clock >= 1000) && (Memory >= 1000) ]
      }
      FAR
      rsc2 = LooseBagOf(cluster1)[1:20] {
        cluster1 = ClusterOf(node)[4:128] {
          node = [ (Processor == Itanium) && (Memory >= 2048) ]
        }
      }
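
(Again for illustration only, not vgES internals: the composite VG2 request can be pictured as a small tree, two resource groups related by FAR, each with a size range and node constraints. The keys and structure below are invented.)

    # Hypothetical in-memory form of the VG2 request above; the real vgDL
    # parser and data structures in vgES are not shown here.
    vg2_request = {
        "relation": "FAR",  # network relationship between the two groups
        "groups": [
            {"name": "rsc1", "kind": "ClusterOf", "size": (4, 64),
             "node": {"processor": "Xeon", "min_clock_mhz": 1000, "min_memory_mb": 1000}},
            {"name": "rsc2", "kind": "LooseBagOf", "size": (1, 20),
             "element": {"name": "cluster1", "kind": "ClusterOf", "size": (4, 128),
                         "node": {"processor": "Itanium", "min_memory_mb": 2048}}},
        ],
    }

    print([g["name"] for g in vg2_request["groups"]])  # ['rsc1', 'rsc2']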

Switch to live demo

A more complex vgDL query
[The same resource map and VG2 query, repeated after the live demo.]

Synthetic Resource Environments
Resource selection is a difficult problem:
- It is NP-hard.
- We have a heuristic and a prototype implementation.
- Research questions: result quality, scalability, response time, contention, ...
- What is needed: simulation studies in large and realistic environments.
We have developed a "resource environment generator":
- Based on a survey of existing systems and an analysis of technology trends.
- Kee, Casanova, and Chien, "Realistic Modeling and Synthesis of Resources for Computational Grids", SC2004.
A sample synthetic environment:
- 1 million hosts, 10,000 clusters; Pentium 2-4, Itanium, Opteron, Athlon.
- VG3 = rsc1 = ClusterOf(node)[4:64] {
          node = [ (Processor == Pentium4) && (Clock >= 2000) && (Memory >= 4096) ]
        }
        FAR
        rsc2 = LooseBagOf(nest_cluster)[1:20] {
          nest_cluster = ClusterOf(node)[4:128] {
            node = [ (Processor == Itanium) && (Memory >= 8192) ]
          }
        }
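
(A toy sketch of what a resource environment generator does; this is not the published generator, and the cluster-size and attribute distributions below are invented for illustration. The real generator derives its distributions from surveys of deployed systems and technology trends.)

    import random

    # Toy synthetic-resource generator, illustrative only.
    PROCESSORS = ["Pentium2", "Pentium3", "Pentium4", "Itanium", "Opteron", "Athlon"]

    def synth_cluster(cluster_id):
        proc = random.choice(PROCESSORS)                 # one processor type per cluster
        size = random.randint(4, 256)                    # invented cluster-size range
        clock = random.choice([1000, 1500, 2000, 2800])  # MHz, invented choices
        mem = random.choice([1024, 2048, 4096, 8192])    # MB, invented choices
        return {"id": cluster_id, "size": size,
                "node": {"processor": proc, "clock_mhz": clock, "memory_mb": mem}}

    clusters = [synth_cluster(i) for i in range(10_000)]
    print(sum(c["size"] for c in clusters), "hosts in", len(clusters), "clusters")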

Switch to live demo

Take-away from Demonstration #1
We have a working research prototype of vgFAB within vgES that:
- interfaces with the application via vgDL
- interfaces with resource information systems
- returns sets of matching resources
It makes it possible to use a high-level description of resource requirements, and to find resources over different resource environments.
Research: evaluating scalability, result quality, etc. [CCGrid'05] [SC'05 submission]

Demonstration #2: vgES
[The same architecture diagram, with the full vgES path highlighted: the application's vgDL request through vgFAB and vgLAUNCH to a bound VG returned to the application.]

Application: EMAN
EMAN: Electron Micrograph ANalysis
- performs 3-D reconstructions
- workflow application (see Chuck Koelbel's talk this afternoon, and the EMAN poster)
Demonstration: run EMAN on the VGrADS testbed with vgES
[Workflow figure with sequential (seq) and parallel (par) stages; testbed map: UCSD, UH, UTK, Rice (Xeon, Pentium III, Itanium).]
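
(To make "workflow application" concrete, here is a toy Python sketch of a workflow with sequential and parallel stages. The stage names and widths are invented and deliberately simplified; they are not EMAN's actual refinement pipeline.)

    # Toy workflow with sequential ("seq") and parallel ("par") stages,
    # in the spirit of EMAN's refinement; stage names are invented.
    workflow = [
        ("preprocess",  "seq"),   # one task
        ("classify",    "par"),   # many independent tasks
        ("build_model", "seq"),
        ("refine",      "par"),
    ]

    def run_stage(name, mode, width=8):
        tasks = range(width) if mode == "par" else range(1)
        for t in tasks:
            # In the demo, each task would be dispatched to a VG node through
            # vgES (copy inputs, run command, copy outputs); here we just print.
            print(f"{name}[{t}] ({mode})")

    for name, mode in workflow:
        run_stage(name, mode)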

EMAN and the Virtual Grid
VGES myVGES = new VGES();              // new vgES instance
myVG = myVGES.createVG( vgDL_string ); // found & bound VG
vgRoot = myVG.getRoot();
...                                    // Traverse VG tree to find resource information
myVGES.copyToNode(someNode, "input1");   // Send input files
myVGES.runCmd(someNode, "command");      // Start command
myVGES.copyFromNode(someNode, "output1"); // Get output files
myVGES.terminateVG(myVG);              // Destroy VG

VG = LooseBagOf(cluster) [1:4] {
  cluster = ClusterOf(node) [4:10] {
    node = [ Clock >= 900 ]
  }
}
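
(The slide elides the VG tree traversal with "...". As a hypothetical illustration of the idea only, not the vgES data structures or API, a returned VG can be walked as a small tree of clusters and leaf nodes; all names and attributes below are invented.)

    # Hypothetical VG tree walk; the real vgES node types and attributes differ.
    vg_tree = {"kind": "LooseBagOf", "children": [
        {"kind": "ClusterOf", "children": [
            {"kind": "node", "host": "compute-0-1.example", "clock_mhz": 1000},
            {"kind": "node", "host": "compute-0-2.example", "clock_mhz": 1000},
        ]},
    ]}

    def leaf_nodes(vg):
        if vg["kind"] == "node":
            yield vg
        else:
            for child in vg["children"]:
                yield from leaf_nodes(child)

    print([n["host"] for n in leaf_nodes(vg_tree)])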

Switch to live demo

Take-away from Demonstration #2
[EMAN figure: 2-D images to 3-D model]
The vgES prototype is functional for a real application.
The VG provides a simple abstraction that integrates:
- access to resources (Globus)
- access to resource information (MDS, Ganglia, NWS, etc.)

VGrADS and the Virtual Grid
The VG abstraction and its runtime implementation are the focal point of the VGrADS multi-team collaboration (see the afternoon presentations and posters):
- Interface to applications (vgDL + VG)
- Program preparation and optimization
- Application scheduling
- Monitoring, fault-tolerance, and adaptation
- Resource management
- Single access/interface to various resource information sources
- Research platform for all the above