Presentation transcript:

1 Bio Gregor von Laszewski is conducting state-of-the-art work in Cloud computing and GreenIT at Indiana University as part of the FutureGrid project. During a two-year leave of absence from Argonne National Laboratory he was an Associate Professor at Rochester Institute of Technology (RIT). He worked between 1996 and 2007 for Argonne National Laboratory and as a fellow at the University of Chicago. He has been involved in Grid computing since the term was coined. His current research interests are in the areas of GreenIT, Grid and Cloud computing, and GPGPUs. He is best known for his efforts in making Grids usable and for initiating the Java Commodity Grid Kit, which provides a basis for many Grid-related projects, including the Globus Toolkit (http://www.cogkits.org). His Web page is located at http://cyberaide.org. He recently worked on FutureGrid, http://futuregrid.org. He received a Master's degree in 1990 from the University of Bonn, Germany, and a Ph.D. in computer science in 1996 from Syracuse University.

2 Cyberaide Creative: On-Demand Cyberinfrastructure Provision in Clouds Casey Rathbone, Lizhe Wang, Gregor von Laszewski, Fugang Wang

3 Outline
– Background and related work
– Problem definition
– System design
– Prototype performance results
– Current progress
– FutureGrid
– Conclusion

4 Why are we doing it? (Figure: "Past" vs. "Now" comparison)

5 Grid/Cloud Computing
An effective computing paradigm for distributed high-performance computing applications.
A number of production Grid infrastructures, projects, and applications:
– TeraGrid, EGEE, WLCG, FutureGrid, D-Grid, …
Disadvantages of current production Grids:
– Overloaded Grid middleware
– Complicated access interfaces and policies
– Limited QoS support
– No personalized computing environment provision

6 Grid/Cloud Computing
Features:
– On-demand service provision
– Utility computing model: pay-as-you-go
– Customized computing environment provision
– Automatic and autonomous service management
– User-centric interfaces with broad network access
– Scalable services with resource pooling
– …

7 Cyberaide
An open source project:
– Originally created at Argonne National Laboratory, now at Indiana University
– Some students from RIT
– PI: Dr. von Laszewski
A middleware for Cyberinfrastructure, including Grids and Clouds:
– Cyberaide virtual appliance
– Cyberaide shell
– Cyberaide mediator and Cyberaide server
– Cyberaide Creative

8 Cyberaide shell, mediator, and server

9 Motivation: Cyberaide Creative
Today's heterogeneous network architectures require teams of IT specialists to deploy services effectively, which decreases accessibility to computing resources. Cyberaide Creative addresses this issue by providing a platform that lets individuals utilize resources without needing intimate knowledge of the hardware platform.

10 Research Topic
Increasing accessibility to computing resources through on-demand deployment on virtualized hardware, effectively abstracting the end-user from configuring specifications for each system.

11 System Design

12 Use Case
– The end-user configures a virtual appliance image with the web interface.
– Cyberaide Creative builds and stores the virtual appliance.
– The end-user can then deploy instances of the virtual appliance onto Cloud resources (a sketch of this workflow follows below).
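
A minimal sketch of this use case, written against a hypothetical Cyberaide Creative REST endpoint; the base URL, paths, and payload fields are illustrative assumptions, not the project's documented API:

```python
# Hedged sketch of the use case above, against a hypothetical
# Cyberaide Creative REST endpoint. The base URL, paths, and payload
# fields are illustrative assumptions, not the project's actual API.
import time
import requests

CREATIVE_URL = "http://creative.example.org/api"  # assumed endpoint

# Step 1: the end-user submits an appliance configuration
# (normally entered through the web interface).
spec = {
    "name": "gridshell-node",
    "platform": "i386",
    "memory_mb": 256,
    "packages": ["openssh-server"],
}
appliance_id = requests.post(f"{CREATIVE_URL}/appliances", json=spec).json()["id"]

# Step 2: Cyberaide Creative builds and stores the appliance server-side;
# the client polls until the build completes.
while requests.get(f"{CREATIVE_URL}/appliances/{appliance_id}").json()["state"] != "built":
    time.sleep(30)

# Step 3: deploy an instance of the stored appliance onto Cloud resources.
requests.post(f"{CREATIVE_URL}/deployments",
              json={"appliance": appliance_id, "target": "kvm-cluster-1"})
```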

13 Virtual Cluster Deployment

14 Cyberaide Gridshell Deployment

15 Single Workstation Deployment

16 Virtual Machine Linpack Performance Results
The results demonstrate that there is a performance sacrifice for virtual deployments.

17 On-demand access to Cyberinfrastructures
Users can now build desired cyberinfrastructures on demand, for example production Grid environments. How, then, to access them?
The interfaces of production Grids are strictly defined:
– Resource information
– Security
– Job submission and management
Goal: access the resources of a production Grid from ad-hoc clients, without special client software or Grid expertise, with on-demand access at runtime.

18 Cyberaide Virtual Appliance: overview
Cyberaide Virtual Appliance:
– Packages the cyberaide shell, mediator, and server into a virtual machine.
– The cyberaide virtual appliance is deployed on demand to access a production Grid.
– Users can then access the production Grid via the cyberaide virtual appliance.
Advantages (a lifecycle sketch follows this list):
– The appliance can be dynamically deployed with policy customization, such as user accounts and access URIs.
– Multiple users can share a cyberaide virtual appliance and thereby form a VO.
– The appliance can be managed easily, for example: start, shutdown, migration, duplication, etc.
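
A minimal sketch of that last advantage, assuming the appliance runs as a libvirt-managed KVM guest; the domain name cyberaide-appliance is an assumption, since the slides do not say how the appliance lifecycle is implemented:

```python
# Hedged sketch: managing a cyberaide virtual appliance's lifecycle
# via libvirt-python. Assumes the appliance was defined as a KVM
# domain named "cyberaide-appliance" (an illustrative name).
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("cyberaide-appliance")

# Start the appliance if it is not already running.
if not dom.isActive():
    dom.create()

# ... users access the production Grid through the running appliance ...

# Graceful shutdown once the appliance is no longer needed.
dom.shutdown()
conn.close()
```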

19 Cyberaide virtual appliance: Solutions
VMware Studio vs. JeOS VMBuilder: JeOS VMBuilder was selected.

Criteria | VMware Studio | JeOS VMBuilder
User interface | Very good | Less comfortable
Supported OS | Ubuntu, SUSE, RedHat, CentOS | Ubuntu JeOS only
Supported hypervisors | VMware | VMware, Xen, and KVM
Automatic support on hypervisor | Yes | No
Ease of use | Some technical problems | Good

20 Cyberaide virtual appliance: Implementation
Four configuration files for boot and login:
– A basic configuration file that defines basic parameters such as platform type (i386), the amount of memory of the virtual appliance, packages that should be directly installed, etc.
– A hard-disk configuration file that defines the size of each available (virtual) hard disk and the number and size of all the partitions that will be created on these hard disks.
– boot.sh: a shell script executed during the first boot of the new appliance.
– login.sh: a shell script executed after the first login in the new appliance.
Two helper scripts (see the sketch below):
– One script adapts the VMBuilder configuration files.
– One script transfers the appliance to the target host and starts it on the specified hypervisor.
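
A minimal sketch of what such helper scripts might look like, assuming the standard vmbuilder command-line tool from Ubuntu's JeOS tooling; the flag values, file names, and target host below are illustrative assumptions, not the project's actual configuration:

```python
# Hedged sketch: driving JeOS VMBuilder and deploying the result.
# Flag values, paths, and the target host are illustrative assumptions.
import subprocess

# Build a KVM appliance. --firstboot/--firstlogin correspond to the
# boot.sh/login.sh scripts described above.
subprocess.run([
    "vmbuilder", "kvm", "ubuntu",
    "--arch", "i386",            # platform type from the basic config file
    "--mem", "256",              # memory of the virtual appliance
    "--rootsize", "4096",        # disk size from the hard-disk config file
    "--addpkg", "openssh-server",
    "--firstboot", "boot.sh",
    "--firstlogin", "login.sh",
], check=True)

# Transfer the built appliance to the target host and start it on the
# specified hypervisor (here KVM, via libvirt's virsh; the domain XML
# path is an assumption).
subprocess.run(["scp", "-r", "ubuntu-kvm/",
                "user@target-host:/var/lib/appliances/"], check=True)
subprocess.run(["ssh", "user@target-host",
                "virsh create /var/lib/appliances/ubuntu-kvm/domain.xml"],
               check=True)
```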

21 Cyberaide Virtual Appliance: Build process

22 Test result: Web portal on TeraGrid

23 Test result: performance evaluation on TeraGrid

Metric | Value
Building time (basic OS packages) | 10 minutes
Building time (full system image) | 20 minutes
Deployment time | 15 minutes
Total time | 40 to 60 minutes
Virtual machine image size (basic OS packages) | 400 MB
Virtual machine image size (full system image) | 2.8 GB

24 Our work on Cloud computing
– Cyberaide virtual appliance (CloudComp'09)
– Cyberaide Creative (GridCAT'09)
– Cyberaide onServe (submitted)
– On-demand ESD (accepted as a book chapter)
– e-Molst (accepted by CCPE)

25 FutureGrid
The goal of FutureGrid is to support the research that will invent the future of distributed, grid, and cloud computing. FutureGrid will build a robustly managed simulation environment, or testbed, to support the development and early use in science of new technologies at all levels of the software stack: from networking to middleware to scientific applications. The environment will mimic TeraGrid and/or general parallel and distributed systems. This testbed will enable dramatic advances in science and engineering through the collaborative evolution of science applications and related software.

26 FutureGrid Partners
– Indiana University
– Purdue University
– University of Florida
– University of Virginia
– University of Chicago/Argonne National Labs
– University of Texas at Austin/Texas Advanced Computing Center
– San Diego Supercomputer Center at University of California San Diego
– University of Southern California Information Sciences Institute
– University of Tennessee Knoxville
– Center for Information Services and GWT-TUD from Technische Universität Dresden

27 FutureGrid Hardware

28 FutureGrid Architecture

29 FutureGrid Architecture
– An open architecture allows resources to be configured based on images.
– Shared images allow the creation of similar experiment environments.
– Experiment management allows the management of reproducible activities.
– Through our "stratosphere" design, we allow different clouds and images to be "rained" upon the hardware.

30 FutureGrid Usage Scenarios
– Developers of end-user applications who want to develop new applications in cloud or grid environments, including analogs of commercial cloud environments such as Amazon or Google. (Is a Science Cloud for me?)
– Developers of end-user applications who want to experiment with multiple hardware environments.
– Grid middleware developers who want to evaluate new versions of middleware or new systems.
– Networking researchers who want to test and compare different networking solutions in support of grid and cloud applications and middleware. (Some types of networking research will likely best be done through the GENI program.)
– Interest in performance means bare-metal access is important.

31 Selected FutureGrid Timeline
– October 1, 2009: project starts
– November 16-19, 2009: SC09 demo and face-to-face committee meetings
– March 2010: FutureGrid network complete
– March 2010: FutureGrid annual meeting
– September 2010: all hardware (except the Track IIC lookalike) accepted
– October 1, 2011: FutureGrid allocatable via the TeraGrid process; for the first two years, allocation by a user/science board led by Andrew Grimshaw

32 Cyberaide: a lightweight middleware for Clusters, Grids, and Clouds (http://cyberaide.org)
– Cyberaide Creative: build cyberinfrastructures on demand
– Cyberaide virtual appliance: deploy middleware on demand to access cyberinfrastructures
FutureGrid: http://futuregrid.org

33 Future work
Cyberaide: a lightweight middleware for Clusters, Grids, and Clouds (http://cyberaide.org)
– Cyberaide Creative: build cyberinfrastructures on demand
– Cyberaide virtual appliance: deploy middleware on demand to access cyberinfrastructures

34 Acknowledgement
Work conducted by Gregor von Laszewski is supported (in part) by NSF CMMI 0540076 and NSF SDCI NMI 0721656. FutureGrid is supported by NSF grant #0910812, "FutureGrid: An Experimental, High-Performance Grid Test-bed."

