
What FutureGrid Can Do for You? TeraGrid'11 BOF Session 1, Salt Lake City, Utah, July 20th, 2011 (https://portal.futuregrid.org)

Agenda
– FutureGrid from a User's Perspective, Geoffrey Fox
– How to Access FutureGrid, Gregor von Laszewski
– HPC on FutureGrid, Warren Smith
– Cloud Computing on FutureGrid, Kate Keahey
– Training, Education and Outreach, Renato Figueiredo
– Experimental Framework Support, Warren Smith
– Open discussion

FutureGrid BOF Overview, TG'11, Salt Lake City, July 2011. Geoffrey Fox, Director, Digital Science Center, Pervasive Technology Institute; Associate Dean for Research and Graduate Studies, School of Informatics and Computing, Indiana University Bloomington

FutureGrid Key Concepts I
FutureGrid supports Computer Science and Computational Science research in cloud, grid and parallel computing (HPC).
The FutureGrid testbed provides to its users:
– An interactive development and testing platform for middleware and application users looking at interoperability, functionality, performance or evaluation, with or without virtualization
– A rich education and teaching platform for advanced cyberinfrastructure (computer science) classes
FutureGrid has a complementary focus to both the Open Science Grid and the other parts of XSEDE.
Note the significant current use in education, computer science systems, and biology/bioinformatics.

FutureGrid Key Concepts II
Rather than loading images onto VMs, FutureGrid supports cloud, grid and parallel computing environments by dynamically provisioning software as needed onto bare metal using Moab/xCAT.
– Image library for MPI, OpenMP, MapReduce (Hadoop, Dryad, Twister), gLite, Unicore, Xen, Genesis II, ScaleMP (distributed shared memory), Nimbus, Eucalyptus, OpenNebula, OpenStack, KVM, Windows, ...
– Growth comes from users depositing novel images in the library.
FutureGrid has ~4300 (will grow to ~5000) distributed cores with a dedicated network and a Spirent XGEM network fault and delay generator.
(Diagram: Image1, Image2, ..., ImageN – Load, Choose, Run)

FutureGrid Partners
– Indiana University (architecture, core software, support)
– Purdue University (HTC hardware)
– San Diego Supercomputer Center at University of California San Diego (Inca, monitoring)
– University of Chicago / Argonne National Labs (Nimbus)
– University of Florida (ViNe, education and outreach)
– University of Southern California Information Sciences Institute (Pegasus to manage experiments)
– University of Tennessee Knoxville (benchmarking)
– University of Texas at Austin / Texas Advanced Computing Center (portal)
– University of Virginia (OGF, advisory board and allocation)
– Center for Information Services and GWT-TUD from Technische Universität Dresden (VAMPIR)
Red institutions have FutureGrid hardware.

FutureGrid: a Grid/Cloud/HPC Testbed (network diagram: private and public FG networks; NID = Network Impairment Device)

Compute Hardware (name, system type, site, status)
– india: IBM iDataPlex, IU, operational
– alamo: Dell PowerEdge, TACC, operational
– hotel: IBM iDataPlex, UC, operational
– sierra: IBM iDataPlex, SDSC, operational
– xray: Cray XT5m, IU, operational
– foxtrot: IBM iDataPlex, UF, operational
– Bravo*: large disk & memory (192 GB per node, 12 TB per server), IU, early user Aug. 1, then general
– Delta*: large disk & memory with Tesla GPUs (192 GB per node, 12 TB per server), IU, ~Sept 15
* Teasers for the next machines

Storage Hardware (system type, capacity (TB), file system, site, status)
– DDN 9550 (Data Capacitor): 339 TB shared with IU + 16 TB dedicated, Lustre, IU, existing system
– DDN: GPFS, UC, new system
– SunFire x4170: 96 TB, ZFS, SDSC, new system
– Dell MD3000: 30 TB, NFS, TACC, new system
– IBM: 24 TB, NFS, UF, new system

Network Impairment Device
– Spirent XGEM Network Impairments Simulator for jitter, errors, delay, etc.
– Full bidirectional 10G with 64-byte packets
– Up to 15 seconds of introduced delay (in 16 ns increments)
– 0–100% introduced packet loss in 0.0001% increments
– Packet manipulation in the first 2000 bytes
– Up to 16k frame size
– TCL for scripting, HTML interface for manual configuration

FutureGrid: Inca Monitoring

5 Use Types for FutureGrid (122 approved projects as of July 2011)
– Training, Education and Outreach (13): semester and short events; promising for small universities
– Interoperability test-beds (4): grids and clouds; standards; from the Open Grid Forum (OGF)
– Domain science applications (42): life science highlighted (21)
– Computer science (50): largest current category
– Computer systems evaluation (35): TeraGrid (TIS, TAS, XSEDE), OSG, EGI
Clouds are meant to need less support than other models; in practice FutureGrid needs more user support.

Create a Portal Account and apply for a Project


Selected Current Education Projects
– System Programming and Cloud Computing (Fresno State): teaches system programming and cloud computing in different computing environments
– REU: Cloud Computing (Arkansas): offers hands-on experience with FutureGrid tools and technologies
– Workshop: A Cloud View on Computing (Indiana University School of Informatics and Computing, SOIC): boot camp on MapReduce for faculty and graduate students from underserved ADMI institutions
– Topics on Systems: Distributed Systems (Indiana SOIC): covers core computer science distributed-systems curricula (for 60 students)

Selected Current Interoperability Projects
– SAGA (Louisiana State): explores use of FutureGrid components for extensive portability and interoperability testing of the Simple API for Grid Applications, and for scale-up and scale-out experiments
– Unicore, Genesis II, gLite (Virginia): OGF standard endpoints

Selected Current Bio Application Projects
– Metagenomics Clustering (North Texas): analyzes metagenomic data from samples collected from patients
– Genome Assembly (Indiana SOIC): de novo assembly of genomes and metagenomes from next-generation sequencing data

Selected Current Non-Bio Application Projects
– Physics: Higgs boson (Virginia): matrix element calculations representing production and decay mechanisms for Higgs and background processes
– Business Intelligence on MapReduce (Cal State L.A.): market basket and customer analysis designed to execute MapReduce on a Hadoop platform

Selected Current Computer Science Projects
– Data Transfer Throughput (Buffalo): end-to-end optimization of data transfer throughput over wide-area, high-speed networks
– Elastic Computing (Colorado): tools and technologies to create elastic computing environments using IaaS clouds that adjust to changes in demand automatically and transparently
– The VIEW Project (Wayne State): investigates Nimbus and Eucalyptus as cloud platforms for elastic workflow scheduling and resource provisioning

Selected Current Technology Projects
– ScaleMP for Gene Assembly (Indiana Pervasive Technology Institute (PTI) and Biology): investigates distributed shared memory over 16 nodes for SOAPdenovo assembly of Daphnia genomes
– XSEDE (Virginia): uses FutureGrid resources as a testbed for XSEDE software development
– Globus Online (Indiana PTI, Chicago): investigates the feasibility of providing DemoGrid and its Globus services on FutureGrid IaaS clouds

Typical FutureGrid Performance Study: Linux, Linux on VM, Windows, Azure, Amazon – bioinformatics application (performance chart)

ADMI Cloudy View on Computing Workshop, June 2011
– ADMI: Association of Computer and Information Science/Engineering Departments at Minority Institutions
– Offered on FutureGrid to 10 faculty and graduate students from ADMI universities
– Concept and delivery by Jerome Mitchell (undergraduate at ECSU, Masters at Kansas, PhD at Indiana); Jerome took two IU courses in this area, Fall 2010 and Spring 2011, on FutureGrid
– The workshop covered topics from cloud programming models to case studies of scientific applications on FutureGrid
– At the conclusion of the workshop, participants indicated that they would incorporate cloud computing into their courses and/or research

ADMI Cloudy View on Computing Workshop Participants

FutureGrid Viral Growth Model
1. Users apply for a project
2. Users improve/develop some software in the project
3. The project leads to new images, which are placed in the FutureGrid repository
4. The project report and other web pages document use of the new images
5. The images are used by other users
6. And so on, ad infinitum...
Please bring your nifty software up on FutureGrid!

Questions?

Elementary FG Access Services Gregor von Laszewski Indiana University

Getting on FG is simple

FG Portal
Coordination of projects and users
– Project management: membership, results
– User management: contact information, keys, OpenID
Coordination of information
– Manuals, tutorials, FAQ, help
– Status: resources, outages, usage, ...
Coordination of the community
– Information exchange: forum, comments, community pages
– Feedback: rating, polls
The technology has been established.
– Transition technical development to TACC as much as possible so we can focus on other areas at IU
– Focus on support of additional FG processes through the portal

Apply for a Portal Account

Apply for a Portal Account

Check your Account Status
– Go to: Accounts → My Portal Account
– Check whether the account status bar is green; errors will indicate an issue or a task that requires waiting
– Since you are already there: upload a portrait, check whether anything else needs updating, and add ssh keys if needed

Get Access
Project lead:
1. Create a portal account
2. Create a project
3. Add project members
Project member:
1. Create a portal account
2. Ask your project lead to add you to the project
Once the project you participate in is approved:
1. Apply for an HPC & Nimbus account – you will need an ssh key (see the sketch below)
2. Apply for a Eucalyptus account
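
Applying for the HPC & Nimbus account requires an ssh public key. As a minimal sketch, assuming OpenSSH's ssh-keygen is installed on your workstation (the key file name and comment below are arbitrary placeholders), the key pair can be generated and the public half printed for pasting into the portal:

    import subprocess
    from pathlib import Path

    # Placeholder key file name; any name under ~/.ssh works.
    key_path = Path.home() / ".ssh" / "id_rsa_futuregrid"

    # Generate a 2048-bit RSA key pair. The empty -N "" passphrase keeps the
    # sketch non-interactive; in practice a real passphrase is advisable.
    subprocess.run(
        ["ssh-keygen", "-t", "rsa", "-b", "2048",
         "-f", str(key_path), "-N", "", "-C", "futuregrid-portal-key"],
        check=True,
    )

    # Print the public key so it can be copied into the portal's ssh-key field.
    print(key_path.with_suffix(".pub").read_text())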

Which Services can you find?

Selected List of Services Offered
– PaaS: Hadoop, (Twister), (Sphere/Sector)
– IaaS: Nimbus, Eucalyptus, ViNe, (OpenStack), (OpenNebula)
– Grid: Genesis II, Unicore, SAGA, (Globus)
– HPCC: MPI, OpenMP, ScaleMP, (XD Stack)
– Others: Portal, Inca, Ganglia, (Experiment Management / Pegasus), (Rain)
(Items in parentheses will be added in the future)

Services Offered (notes)
1. ViNe can be installed on the other resources via Nimbus
2. Access to the resource is requested through the portal
3. Pegasus is available via Nimbus and Eucalyptus images

Questions?

HPC on FutureGrid Warren Smith Texas Advanced Computing Center (TACC)

HPC on FutureGrid
– HPC-style usage is supported
– Many of the clusters have an HPC partition
– The clusters are well suited to HPC: InfiniBand networks and attached parallel file systems (a small MPI example follows below)
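
As a small illustration of the HPC-style usage these clusters support, here is an MPI "hello world" using mpi4py. Whether mpi4py is pre-installed on a given FutureGrid cluster is an assumption; it may need to be added to your environment first.

    # hello_mpi.py -- each MPI rank reports itself and the node it runs on.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()
    node = MPI.Get_processor_name()
    print("Hello from rank %d of %d on %s" % (rank, size, node))

Run interactively with, for example, "mpirun -n 4 python hello_mpi.py"; batch submission through Torque/Moab is sketched after the HPC Access slide below.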

Compute Hardware NameSystem type# CPUs # Cores TFLOPS Total RAM (GB) Secondary Storage (TB) Site Status india IBM iDataPlex IU Operational alamo Dell PowerEdge TACC Operational hotel IBM iDataPlex UC Operational sierra IBM iDataPlex SDSC Operational xray Cray XT5m IU Operational foxtrot IBM iDataPlex UF Operational Bravo* Large Disk & memory (192GB per node) 144 (12 TB per Server) IU Early user Aug. 1 general Delta* Large Disk & memory With Tesla GPU’s GPU’s 96? (192GB per node) 96 (12 TB per Server)IU ~Sept 15 Total TB * Teasers for next machine

HPC Access
– ssh to the login nodes: alamo.futuregrid.org, hotel.futuregrid.org, ...; uses the public key you have uploaded to the portal
– Modules to manage your environment
– Intel and GNU compilers (others wanted?)
– MPI, OpenMP
– Torque and Moab to schedule access to the compute nodes
– Reservations? Scientific libraries?
A minimal job-submission sketch follows.
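
To make the Torque/Moab step concrete, here is a minimal sketch that writes a PBS job script for the hello_mpi.py example above and submits it with qsub. The module name, node and processor counts, and walltime are placeholders, not values taken from the slides; each cluster's documentation lists the real ones.

    import subprocess
    import textwrap

    # A minimal Torque/PBS job script; every directive value here is a placeholder.
    job_script = textwrap.dedent("""\
        #!/bin/bash
        #PBS -N hello_mpi
        #PBS -l nodes=2:ppn=8
        #PBS -l walltime=00:10:00

        # Load an MPI environment via modules (module name is a placeholder).
        module load openmpi

        cd $PBS_O_WORKDIR
        mpirun python hello_mpi.py
        """)

    with open("hello_mpi.pbs", "w") as f:
        f.write(job_script)

    # qsub prints the new job's id on success.
    result = subprocess.run(["qsub", "hello_mpi.pbs"],
                            capture_output=True, text=True, check=True)
    print("Submitted:", result.stdout.strip())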

Performance Tools
– FutureGrid provides a number of tools to analyze performance
– Full support of partner tools
– Best-effort support of external tools

Questions?

Cloud Computing on FutureGrid with Nimbus Kate Keahey Argonne National Laboratory, University of Chicago

What is Nimbus?
– Enables providers to build IaaS clouds and users to use IaaS clouds
– Nimbus Infrastructure: Workspace Service, Cumulus
– Nimbus Platform: Context Broker, Cloudinit.d, Gateway, Elastic Scaling Tools
– A high-quality, extensible, customizable, open-source implementation that enables developers to extend, experiment and customize

Using Nimbus Infrastructure (diagram: the Nimbus service managing a pool of nodes)

Using Nimbus Infrastructure
– Nimbus publishes information about each VM
– Users can find out information about their VM (e.g. which IP the VM was bound to); a scripted lookup sketch follows below
– Users can interact directly with their VM in the same way they would with a physical machine
(Diagram: the Nimbus service and a pool of nodes)
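
Nimbus can also expose an EC2-compatible query interface, so one way to script the "find my VM's IP" step is with boto. This is a sketch only: the endpoint host, port, and credential values below are placeholders, and FutureGrid's Nimbus documentation describes the interfaces actually offered.

    # Sketch: list your VMs and the addresses they were bound to through an
    # EC2-compatible endpoint such as the one Nimbus can expose.
    from boto.ec2.connection import EC2Connection
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="nimbus", endpoint="hotel.futuregrid.org")  # placeholder
    conn = EC2Connection(
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
        is_secure=True,
        region=region,
        port=8444,   # placeholder port; check the cloud's documentation
        path="/",
    )

    # Each reservation groups one or more instances; print their state and IPs.
    for reservation in conn.get_all_instances():
        for inst in reservation.instances:
            print(inst.id, inst.state, inst.public_dns_name, inst.private_ip_address)

The Nimbus cloud client remains the usual command-line route; this EC2-style query is just one way to automate the lookup.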

Nimbus on FutureGrid
– Hotel (University of Chicago), Xen: 41 nodes, 328 cores
– Foxtrot (University of Florida), Xen: 26 nodes, 208 cores
– Sierra (SDSC), Xen: 18 nodes, 144 cores
– Alamo (TACC), KVM: 15 nodes, 120 cores

Sky Computing
Sky Computing = a federation of clouds. Approach:
– Combine resources obtained in multiple Nimbus clouds in FutureGrid and Grid'5000
– Combine the Context Broker, ViNe, and fast image deployment
– Deployed a virtual cluster of over 1000 cores on Grid'5000 and FutureGrid – the largest ever of this type
Grid'5000 Large Scale Deployment Challenge award; demonstrated at OGF 29 (06/10); TeraGrid '10 poster. More at:
Work by Pierre Riteau et al., University of Rennes 1; "Sky Computing", IEEE Internet Computing, September 2009

Backfill: Lower the Cost of Your Cloud
– Challenge: utilization; the catch-22 of on-demand computing
– Solution: new backfill instances
– Bottom line: up to 100% utilization
– Who decides what backfill VMs run? Spot pricing
– Research by Paul Marshall, University of Colorado
– Open-source community contributions via Google Summer of Code (GSoC), Paolo Gomez
– In Nimbus release 2.7; CCGrid 2011
(Utilization chart, 1 March 2010 through 28 February 2011)

BaBar Experiment at SLAC in Stanford, CA
– Using clouds to simulate electron-positron collisions in their detector
– Exploring virtualization as a vehicle for data preservation
– Approach: appliance preparation and management, distributed Nimbus clouds, Cloud Scheduler
– Running production BaBar workloads
Work by the UVic team

Cloud Computing on FutureGrid
Several Infrastructure-as-a-Service clouds:
– Nimbus, Eucalyptus, OpenStack (experimental)
Supported patterns:
– Experimenting with middleware on top of infrastructure clouds
– Modifying and experimenting with infrastructure clouds
– Paradigm testing
What would you like to work on?

Questions?

FutureGrid Training, Education and Outreach Presented by Renato Figueiredo Associate Professor, University of Florida

Overview
– Traditional ways of delivering hands-on training and education in parallel/distributed computing have non-trivial dependences on the environment
– It is difficult to replicate the same environment on different resources (e.g. HPC clusters, desktops)
– It is difficult to cope with changes in the environment (e.g. software upgrades)
– Virtualization technologies remove key software dependences through a layer of indirection

TEO Infrastructure – guiding principles
– Fidelity: TEO activities should use full-fledged, executable software for education/training modules – learn using the proper tools
– Reproducibility: creators of content should be able to install, configure, and test their modules once, and be assured of the same functional behavior regardless of where the module is deployed – an incentive to invest effort in developing, testing and documenting new modules

TEO Infrastructure – guiding principles (continued)
– Deployability: students and users should be able to deploy modules in a simple manner, and on a variety of resources – reduce barriers to entry; avoid dependences on a particular infrastructure
– Community-oriented: modules should be simple to share, discover, reuse, and expand – create conditions for growth

Towards this vision in FutureGrid
– Executable modules – virtual appliances: deployable on FutureGrid resources, on other cloud platforms, and on virtualized desktops
– Community sharing: a Web 2.0 portal and appliance image repositories; an aggregation hub for executable modules and documentation

Virtual appliances
– Leverage existing virtual networking software and virtual appliance images used in other projects
– Focus: integration with FutureGrid resources
– Leverage network virtualization software – FutureGrid includes ViNe and GroupVPN
– Image deployment, testing, documentation, tutorials – KVM/Xen, Nimbus/Eucalyptus
– FutureGrid portal, with the ability for users to contribute content

Virtual appliance clusters
(Diagram: the same appliance image, instantiated repeatedly and joined to different GroupVPN virtual networks, forms a Hadoop cluster – copy the image, instantiate a Hadoop worker virtual machine, obtain GroupVPN credentials and a virtual IP via DHCP, then repeat to add more workers. A sample Hadoop job sketch follows.)
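
Once such an appliance cluster is up it behaves like an ordinary Hadoop cluster. As a sketch of what a class exercise might run on it, here is a word-count job for Hadoop Streaming with the mapper and reducer in one Python file; the HDFS paths and the location of the streaming jar are assumptions that depend on the appliance image.

    #!/usr/bin/env python
    """Word-count mapper and reducer for Hadoop Streaming, in one file.

    Submit on the appliance cluster with something like (jar and HDFS paths
    are placeholders):
        hadoop jar /path/to/hadoop-streaming.jar \\
            -input /user/me/books -output /user/me/wordcount \\
            -mapper "wordcount.py map" -reducer "wordcount.py reduce" \\
            -file wordcount.py
    """
    import sys

    def mapper():
        # Emit (word, 1) for every whitespace-separated token on stdin.
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

    def reducer():
        # Streaming delivers keys sorted, so identical words arrive in runs.
        current, count = None, 0
        for line in sys.stdin:
            word, value = line.rstrip("\n").split("\t", 1)
            if word == current:
                count += int(value)
            else:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = word, int(value)
        if current is not None:
            print("%s\t%d" % (current, count))

    if __name__ == "__main__":
        reducer() if len(sys.argv) > 1 and sys.argv[1] == "reduce" else mapper()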

Activities: Big Data for Science
– NCSA Summer School Workshop, July 26–30, 2010
– Students: 200 on site from 10 institutes, 100 online
– Participating sites included: University of Arkansas, Indiana University, University of California at Los Angeles, Penn State, Iowa State, Univ. Illinois at Chicago, University of Minnesota, Michigan State, Notre Dame, University of Texas at El Paso, IBM Almaden Research Center, Washington University, San Diego Supercomputer Center, University of Florida, Johns Hopkins
– IU MapReduce and UF Virtual Appliance technologies are supported by FutureGrid
(Slide courtesy of Judy Qiu)

Activities: Courses
– Graduate-level "Cloud Computing for Data-Intensive Sciences" (Judy Qiu, Fall 2010): virtualization technologies and tools, infrastructure as a service, parallel programming (MPI, Hadoop)
– FutureGrid supported activities in a new semester-long class offered Fall 2010 at LSU (Gabrielle Allen, Shantenu Jha): a practical and comprehensive graduate course preparing students for research involving scientific computing

Activities: Cloud computing class

Activities: ADMI Workshop
– Cloudy View on Computing workshop
– 10 faculty members and graduate students from HBCUs interested in cloud computing
– Cloud programming models; case studies of scientific applications on FutureGrid

Questions?

Experiment Management on FutureGrid Warren Smith Texas Advanced Computing Center (TACC)

Experiment Management Goals
Support rigorous experimentation:
– Define experiments in detail
– Record experimental results, with user-specified measurements (placement and granularity)
– Share experiment information, so experiments can be repeated and verified and variations on experiments can be performed
Convenient execution of experiments:
– FutureGrid has distributed resources and services
– There isn't one true way to run an experiment

Experiment Management Approach
Provide tools to execute distributed experiments:
– Access (potentially many) resources
– Interact with a number of services
– Support execution of experiment plans
Support several usage models:
– Workflow (often large, automatic, batched, unattended)
– Interactive (attended)
– Hybrid
Store experiment information for later use:
– Plans (workflows or recordings) and results
– Searchable and shareable
– Re-run experiments or run modified versions

Experiment Management – Available Components
– Pegasus: workflow-based experiment management; builds on existing Pegasus software; Kickstart records job execution and its environment; details of Pegasus are presented elsewhere
– TakTuk: basic interactive experiment management; reuses a tool deployed on Grid'5000
– Host List Manager: organizes provisioned systems into groups and generates host lists for TakTuk; a set of simple command-line programs (a rough sketch of the broadcast-execution pattern follows below)
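
TakTuk and the Host List Manager are existing command-line tools, and nothing here replaces them. Purely to illustrate the pattern they support – running one command on every host in a provisioned-host list – here is a plain-ssh sketch; the host names and host-file format are hypothetical, and this is not the TakTuk tool itself.

    # Illustration of broadcast execution over a host list via plain ssh.
    # TakTuk does this far more efficiently (parallel, tree-based deployment);
    # this loop is sequential and only meant to show the idea.
    import subprocess

    def read_hosts(path):
        # One host name per line; blank lines and '#' comments are skipped.
        with open(path) as f:
            return [ln.strip() for ln in f if ln.strip() and not ln.startswith("#")]

    def broadcast(hosts, command):
        results = {}
        for host in hosts:
            proc = subprocess.run(["ssh", "-o", "BatchMode=yes", host, command],
                                  capture_output=True, text=True)
            results[host] = (proc.returncode, proc.stdout.strip())
        return results

    if __name__ == "__main__":
        for host, (rc, out) in broadcast(read_hosts("hosts.txt"), "uname -a").items():
            print(host, rc, out)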

Experiment Management – Planned Components
– Messaging-based Execution and Monitoring System (MEMS): more sophisticated interactive experiment management; integrated message streams for commands, results, and monitoring
– Pegasus provisioning workflows: include resource provisioning in the workflow
– Experiment Repository: store and retrieve information about experiments; uses the FG Image Repository as a component
– User Portal integration
– Conversion of experiment plans, to help users migrate from one tool to another: TakTuk commands, MEMS messages, Pegasus workflows

Questions?

Open Discussion

Research on FutureGrid
– Were there ever experiments you could not run, and if so, what were the obstacles?
– What do you need to obtain results for your next paper? Resources, repositories, middleware?
– What kind of experiment management tools do you use today, and how could they be improved?
– How do you collaborate with colleagues on developing complex experiments?
– What would make you come to FutureGrid rather than use resources at your institution?

Education on FutureGrid
– What types of resources would help you teach a class? Access to hardware? An integrated set of ready-made course materials? Ease of use?
– What would help you teach your next tutorial?
– How would you like to share teaching materials with others?

Usage Modalities and Outreach
– What is your ideal scenario of usage?
– What would prevent you from using infrastructure such as FG?
– Where and how do you typically find information about services that enhance your mode of work?
– What concerns do you have about using FG?