
Bio: Gregor von Laszewski is conducting state-of-the-art work in Cloud computing and GreenIT at Indiana University as part of the FutureGrid project. During a two-year leave of absence from Argonne National Laboratory he was an associate professor at Rochester Institute of Technology (RIT). He worked for Argonne National Laboratory between 1996 and 2007, also serving as a fellow at the University of Chicago, and has been involved in Grid computing since the term was coined. His current research interests are in the areas of GreenIT, Grid and Cloud computing, and GPGPUs. He is best known for his efforts in making Grids usable and for initiating the Java Commodity Grid (CoG) Kit, which provides a basis for many Grid-related projects, including the Globus Toolkit. He received a Masters degree in 1990 from the University of Bonn, Germany, and a Ph.D. in computer science in 1996 from Syracuse University.

Cyberaide Creative: On-Demand Cyberinfrastructure Provision in Clouds
Casey Rathbone, Lizhe Wang, Gregor von Laszewski, Fugang Wang

Outline
- Background and related work
- Problem definition
- System design
- Prototype performance results
- Current progress
- FutureGrid
- Conclusion

Why are we doing it? [Figure: "Past" vs. "Now" comparison]

Grid/Cloud Computing
Effective computing paradigm for distributed high-performance computing applications.
A number of production Grid infrastructures, projects, and applications: TeraGrid, EGEE, WLCG, FutureGrid, D-Grid, and others.
Disadvantages of current production Grids:
- Overloaded Grid middleware
- Complicated access interfaces and policies
- Limited QoS support
- No personalized computing environment provision

Grid/Cloud Computing Features:
- On-demand service provision
- Utility computing model: pay-as-you-go
- Customized computing environment provision
- Automatic and autonomous service management
- User-centric interfaces with broad network access
- Scalable services with resource pooling

Cyberaide
An open source project:
- Originally created at Argonne National Laboratory, now at Indiana University, with some students from RIT
- PI: Dr. von Laszewski
A middleware for cyberinfrastructure, including Grids and Clouds:
- Cyberaide virtual appliance
- Cyberaide shell
- Cyberaide mediator and Cyberaide server
- Cyberaide Creative

Cyberaide shell, mediator, and server [architecture figure]

Motivation: Cyberaide Creative
Today's heterogeneous network architectures require teams of IT specialists to deploy services effectively, decreasing the accessibility of computing resources. Cyberaide Creative addresses this issue by providing a platform that lets individuals utilize resources without intimate knowledge of the underlying hardware platform.

Research Topic
Increasing accessibility to computing resources through on-demand deployment on virtualized hardware resources, effectively abstracting the end user from configuring specifications for each system.

System Design [architecture figure]

Use Case
1. The end user configures a virtual appliance image with the web interface.
2. Cyberaide Creative builds and stores the virtual appliance.
3. The end user can then deploy instances of the virtual appliance onto Cloud resources.

Virtual Cluster Deployment [figure]

Cyberaide Gridshell Deployment [figure]

Single Workstation Deployment [figure]

Virtual Machine Linpack Performance Results [chart]
The results demonstrate that there is a performance sacrifice for virtual deployments.
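The slides do not show the benchmark invocation; as a rough sketch, a native-versus-virtual comparison of this kind is typically collected with the HPL Linpack binary along these lines (the process count, paths, and appliance host name are assumptions, not from the slides):

```sh
#!/bin/sh
# Run the same HPL Linpack benchmark natively and inside the appliance,
# then compare the reported GFLOPS. HPL reads the problem size N, block
# size NB, and the P x Q process grid from HPL.dat in the working directory.

mpirun -np 4 ./xhpl                                    # native run
ssh user@cyberaide-va "cd hpl && mpirun -np 4 ./xhpl"  # run inside the VM
```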

On-demand access to cyberinfrastructures
Users can now build desired cyberinfrastructures on demand, for example production Grid environments. How, then, do they access them? The interfaces of production Grids are strictly defined, covering:
- Resource information
- Security
- Job submission and management
The goal: access the resources of a production Grid from ad-hoc clients, without special client software or Grid expertise, with on-demand access at runtime.

Cyberaide Virtual Appliance: overview
Cyberaide Virtual Appliance:
- Puts the Cyberaide shell, mediator, and server into a virtual machine
- Is deployed on demand to access a production Grid
- Lets users access the production Grid via the Cyberaide virtual appliance
Advantages:
- A Cyberaide virtual appliance can be dynamically deployed with policy customization, such as user accounts and access URIs
- Multiple users can share a Cyberaide virtual appliance and thereby form a VO
- A Cyberaide virtual appliance can be managed easily, for example started, shut down, migrated, or duplicated (see the sketch below)
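As an illustration of the last point, these management operations map onto standard hypervisor tooling. A minimal sketch using libvirt's virsh and virt-clone, assuming the appliance is registered with libvirt under the hypothetical name cyberaide-va (the slides do not prescribe a particular toolchain):

```sh
#!/bin/sh
# Lifecycle of a deployed Cyberaide virtual appliance; "cyberaide-va"
# and the migration target URI are placeholders, not from the slides.

virsh start cyberaide-va          # boot the appliance
virsh shutdown cyberaide-va       # graceful shutdown

# Migrate the running appliance to another host:
virsh migrate --live cyberaide-va qemu+ssh://other-host/system

# Duplicate the appliance (clones the domain definition and its disks):
virt-clone --original cyberaide-va --name cyberaide-va-2 --auto-clone
```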

Cyberaide virtual appliance: solutions
VMware Studio vs. JeOS VMBuilder; JeOS VMBuilder was selected.

Criteria                           | VMware Studio                | JeOS VMBuilder
User interface                     | Very good                    | Less comfortable
Supported OS                       | Ubuntu, SUSE, RedHat, CentOS | Ubuntu JeOS only
Supported hypervisors              | VMware                       | VMware, Xen, and KVM
Automatic deployment on hypervisor | Yes                          | No
Ease of use                        | Some technical problems      | Good

Cyberaide virtual appliance: implementation
Four configuration files govern the build, boot, and login:
- A basic configuration file that defines basic parameters such as the platform type (i386), the amount of memory of the virtual appliance, the packages that should be installed directly, etc.
- A hard-disk configuration file that defines the size of each available (virtual) hard disk and the number and size of all the partitions to be created on these disks.
- Boot.sh: a shell script executed during the first boot of the new appliance.
- Login.sh: a shell script executed after the first login to the new appliance.
Two scripts drive the process: one adapts the VMBuilder configuration files; the other transfers the appliance to the target host and starts it on the specified hypervisor. A sketch follows below.
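A minimal sketch of how such a build and transfer might look with JeOS VMBuilder, the tool selected earlier; the suite, sizes, file names, and target host are illustrative assumptions, not taken from the slides:

```sh
#!/bin/sh
# Build the appliance image with JeOS VMBuilder for KVM. disk.cfg plays
# the role of the hard-disk configuration file, with lines such as:
#   root 4000
#   swap 512
sudo vmbuilder kvm ubuntu \
    --suite jaunty --flavour virtual --arch i386 --mem 256 \
    --part "$PWD/disk.cfg" \
    --addpkg openssh-server \
    --firstboot "$PWD/boot.sh" \
    --firstlogin "$PWD/login.sh"

# Transfer the generated image to the target host and boot it there
# (host name, paths, and image file name are placeholders):
scp ubuntu-kvm/*.qcow2 deploy@target-host:/srv/appliances/cyberaide-va.qcow2
ssh deploy@target-host \
    "kvm -m 256 -hda /srv/appliances/cyberaide-va.qcow2 -daemonize"
```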

Cyberaide Virtual Appliance: build process [figure]

Test result: Web portal on TeraGrid [screenshot]

Test result: performance evaluation on TeraGrid

Metric                                         | Value
Building time (basic OS packages)              | 10 minutes
Building time (full system image)              | 20 minutes
Deployment time                                | 15 minutes
Total time                                     | 40 to 60 minutes
Virtual machine image size (basic OS packages) | 400 MB
Virtual machine image size (full system image) | 2.8 GB

Our work on Cloud computing
- Cyberaide virtual appliance (CloudComp’09)
- Cyberaide Creative (GridCAT’09)
- Cyberaide onServe (submitted)
- On-demand ESD (accepted as a book chapter)
- e-Molst (accepted by CCPE)

FutureGrid
The goal of FutureGrid is to support the research that will invent the future of distributed, grid, and cloud computing. FutureGrid will build a robustly managed simulation environment, or testbed, to support the development and early scientific use of new technologies at all levels of the software stack, from networking to middleware to scientific applications. The environment will mimic TeraGrid and/or general parallel and distributed systems. This testbed will enable dramatic advances in science and engineering through the collaborative evolution of science applications and related software.

FutureGrid Partners
- Indiana University
- Purdue University
- University of Florida
- University of Virginia
- University of Chicago / Argonne National Laboratory
- University of Texas at Austin / Texas Advanced Computing Center
- San Diego Supercomputer Center at the University of California, San Diego
- University of Southern California Information Sciences Institute
- University of Tennessee Knoxville
- Center for Information Services and GWT-TUD from Technische Universität Dresden

FutureGrid Hardware [figure]

FutureGrid Architecture [figure]

FutureGrid Architecture
- An open architecture allows resources to be configured based on images
- Shared images allow similar experiment environments to be created
- Experiment management allows reproducible activities to be managed
- Through our “stratosphere” design, different clouds and images can be “rained” onto the hardware

FutureGrid Usage Scenarios
- Developers of end-user applications who want to develop new applications in cloud or grid environments, including analogs of commercial cloud environments such as Amazon or Google. (Is a Science Cloud for me?)
- Developers of end-user applications who want to experiment with multiple hardware environments.
- Grid middleware developers who want to evaluate new versions of middleware or new systems.
- Networking researchers who want to test and compare different networking solutions in support of grid and cloud applications and middleware. (Some types of networking research will likely best be done through the GENI program.)
- The interest in performance makes bare-metal provisioning important.

Selected FutureGrid Timeline
- October 2009: project starts
- November 2009: SC09 demo; face-to-face committee meetings
- March 2010: FutureGrid network complete
- March 2010: FutureGrid annual meeting
- September 2010: all hardware (except the Track IIC lookalike) accepted
- October 2011: FutureGrid allocatable via the TeraGrid process; for the first two years, allocations are led by a user/science board chaired by Andrew Grimshaw

Conclusion
- Cyberaide: a lightweight middleware for clusters, Grids, and Clouds
- Cyberaide Creative: builds cyberinfrastructures on demand
- Cyberaide virtual appliance: deploys middleware on demand to access cyberinfrastructures
- FutureGrid: a testbed to support research that will invent the future of distributed, grid, and cloud computing

Future work
- Cyberaide: a lightweight middleware for clusters, Grids, and Clouds
- Cyberaide Creative: on-demand building of cyberinfrastructures
- Cyberaide virtual appliance: on-demand deployment of middleware to access cyberinfrastructures

Acknowledgement
Work conducted by Gregor von Laszewski is supported (in part) by NSF CMMI and NSF SDCI NMI. FutureGrid is supported by an NSF grant, "FutureGrid: An Experimental, High-Performance Grid Test-bed."