Indiana University/Anabas, Inc. Measured Characteristics of FutureGrid Clouds For Scalable Collaborative Sensor-Centric Grid Applications Geoffrey C. Fox (Indiana University), Alex Ho and Eddy Chan (Anabas, Inc.), May 25, 2011

Indiana University/Anabas, Inc. Agenda of the Talk

Indiana University/Anabas, Inc. Background: The emergence of cloud technologies has renewed emphasis on scalable, on-demand computing. Motivation: Cloud back-end support for large numbers of small devices, such as sensors and mobile phones used for collaboration, is one important application. Our Effort: We focus on understanding the characteristics of distributed cloud computing infrastructure for collaborative sensor-centric applications on FutureGrid. Our Results and Future Plan: We report preliminary findings on measured performance, scalability and reliability, and discuss our follow-on plan.

Indiana University/Anabas, Inc. Background Cloud computing promises infrastructure resources to support application scalability. There are few studies of collaboration applications in clouds, and even fewer on leveraging heterogeneous, distributed clouds for real-time, distributed collaborative sensor-centric applications – for instance, applications for situational awareness.

Indiana University/Anabas, Inc. Motivation Technology has enabled a noticeable shift from using a few expensive, feature-rich sensors to a large number of small, inexpensive commodity sensors. This trend will drive increased use of collaborative sensing for better information about the environment or operational picture of interest. An example is an urban-scale deployment of parking space sensors that shares real-time parking availability with smartphone users; another example is crowd-sourcing apps. We expect a growing demand for scalable support of collaborative applications that can utilize a massive number of geographically dispersed sensors of different types for timely, actionable decision support.

Indiana University/Anabas, Inc. Our Effort We focus on understanding the characteristics of distributed cloud computing infrastructure for collaborative sensor-centric applications on FutureGrid. Terminology: Collaboration – broadly, the sharing of digital objects. Sensor – a source of a time-dependent stream of information. Real time – application-dependent. Grids – systems formed by distributed collections of digital capabilities that are managed and coordinated to support some enterprise. Clouds – commercially supported data-center models competing with general-purpose computing centers.

Indiana University/Anabas, Inc. Methodology to measure performance, scalability and reliability characteristics of FutureGrid: use standard network performance tools at the network level; use the IU NaradaBrokering system, which supports many practical communication protocols, to measure data at the message level; and use the Anabas sensor-centric grid framework, a message-based sensor service management and application development framework, to measure data at the collaborative application level.

Indiana University/Anabas, Inc. An Overview of The Anabas Collaborative Sensor-Centric Grid Framework To generate and measure collaborative sensor-centric application traffic on distributed clouds, we need a tool to build a sensor-centric grid, deploy sensors, manage sensors, and support development of collaborative sensor-centric applications. The Anabas collaborative sensor-centric grid framework was designed and built in an earlier project in partnership with Indiana University and Ball Aerospace. IU is leading the technology development of a Ball Aerospace project that takes the sensor-centric grid framework to a sensor cloud.

Indiana University/Anabas, Inc. An Overview of The Anabas Collaborative Sensor-Centric Grid Framework (cont’d) GB, the grid builder tool, supports assembling subgrids into a mission-specific grid application. It provides services for defining sensor properties, deploying sensors according to the defined properties, monitoring deployment status of sensors, and remote, distributed management of deployed sensors. A deployed sensor-centric grid communicates with deployed sensors irrespective of sensor location and with deployed sensor-centric applications irrespective of application location; GB mediates the collaboration among these modules.

Indiana University/Anabas, Inc. An Illustration of a Collaborative Sensor-Centric Application

Indiana University/Anabas, Inc. Supported Services Sensor services: RFID, GPS, Wii Remote, webcam video, Lego Mindstorms NXT (ultrasonic, sound, light, touch, gyroscope, compass, accelerometer, thermistor), Nokia N800 Internet Tablet. Computational services: VED (Video Edge Detection), RFID positioning.

Indiana University/Anabas, Inc. Overview of FutureGrid

Indiana University/Anabas, Inc. An Overview of FutureGrid FutureGrid is an experimental testbed that supports large-scale research on distributed and parallel systems, algorithms, middleware and applications, running on virtual machines (VMs) or bare metal. It supports several cloud environments, including Eucalyptus and Nimbus. Eucalyptus and Nimbus are open-source software platforms that implement IaaS-style cloud computing; both offer an AWS-compliant, EC2-based web service interface. Eucalyptus also supports an AWS-compliant storage service, and Nimbus supports saving customized VMs to its image repository. A minimal client sketch follows.
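
Because both platforms speak the EC2 protocol, a single client library can drive either cloud. Below is a minimal sketch assuming the boto 2 library; the endpoint, credentials and image ID are hypothetical placeholders, not values from the experiments.

# Sketch: launch an instance on an EC2-compatible cloud (Eucalyptus
# or Nimbus) via boto 2. Endpoint, keys and image ID are placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="euca.example.edu")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
    region=region,
    is_secure=False,
    port=8773,
    path="/services/Eucalyptus",  # Eucalyptus EC2 service path
)

# Launch one m1.xlarge instance, the size used in these experiments.
reservation = conn.run_instances("emi-12345678", instance_type="m1.xlarge")
print(reservation.instances[0].id)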

Indiana University/Anabas, Inc. General Experimental Setup We use four distributed, heterogeneous clouds on FutureGrid: Hotel (Nimbus at University of Chicago), Foxtrot (Nimbus at University of Florida), India (Eucalyptus at Indiana University) and Sierra (Eucalyptus at UCSD). Distributed cloud scenarios use either pairs of clouds or the group of all four clouds. In the Nimbus clouds we use 2-core VMs with 12 GB RAM running CentOS; in the Eucalyptus clouds we use m1.xlarge instances, each roughly equivalent to a 2-core Intel Xeon X5570 with 12 GB RAM. We use NTP to synchronize the cloud instances before experiments, as sketched below.
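
Accurate one-way and round-trip timings require the instance clocks to agree. A minimal sketch of the kind of offset check run before a measurement, assuming the third-party ntplib package; the server name and the 5 ms threshold are illustrative choices, not values from the deck.

# Sketch: check clock offset on an instance before a measurement run.
# Requires the third-party "ntplib" package (pip install ntplib).
import ntplib

MAX_OFFSET_S = 0.005  # illustrative bound: tolerate at most 5 ms of skew

client = ntplib.NTPClient()
response = client.request("pool.ntp.org", version=3)
print("local clock offset: %.2f ms" % (response.offset * 1000))
if abs(response.offset) > MAX_OFFSET_S:
    raise SystemExit("clock skew too large; resynchronize before measuring")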

Indiana University/Anabas, Inc. Network Level Measurement We run two types of experiments: using iperf to measure bi-directional throughput between pairs of cloud instances, one instance in each cloud of the pair; and using ping in conjunction with iperf to measure packet loss and round-trip latency on loaded and unloaded networks between the same pairs. A sketch of the loaded-run procedure follows.
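
A minimal driver sketch of the loaded-network runs, pairing 32 parallel iperf streams with concurrent pings; the peer host name, durations and packet counts are illustrative stand-ins, and the peer is assumed to be running iperf -s.

# Sketch: measure RTT and packet loss with ping while iperf loads the
# path with 32 parallel streams, as in the loaded experiments.
import re
import subprocess

PEER = "sierra.example.org"  # hypothetical remote cloud instance

# Background load: 32 parallel iperf client streams for 120 seconds.
load = subprocess.Popen(
    ["iperf", "-c", PEER, "-P", "32", "-t", "120"],
    stdout=subprocess.DEVNULL,
)

# Ping the same peer while the load is running.
ping = subprocess.run(
    ["ping", "-c", "100", PEER], capture_output=True, text=True
)
load.wait()

# Parse loss and average RTT from the Linux ping summary lines.
loss = re.search(r"([\d.]+)% packet loss", ping.stdout)
rtt = re.search(r"rtt min/avg/max/mdev = [\d.]+/([\d.]+)/", ping.stdout)
print("loss=%s%%  avg rtt=%s ms" % (loss.group(1), rtt.group(1)))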

Indiana University/Anabas, Inc. Network Level – Throughput

Indiana University/Anabas, Inc. Network Level – Packet Loss Rate

Instance Pair    Unloaded Packet Loss Rate    Loaded (32 iperf connections) Packet Loss Rate
India-Sierra     0%                           0.33%
India-Hotel      0%                           0.67%
India-Foxtrot    0%                           0%
Sierra-Hotel     0%                           0.33%
Sierra-Foxtrot   0%                           0%
Hotel-Foxtrot    0%                           0.33%

Indiana University/Anabas, Inc. Network Level – Ping RTT with 32 iperf connections

Indiana University/Anabas, Inc. Network Level – Ping RTT with 32 iperf connections

Indiana University/Anabas, Inc. Network Level – Round-trip Latency Due to VM

Indiana University/Anabas, Inc. Network Level – Round-trip Latency Due to Distance

Indiana University/Anabas, Inc. Message Level Measurement We run a 2-cloud distributed experiment using the Nimbus clouds on Foxtrot and Hotel. A NaradaBrokering (NB) broker runs on Foxtrot; simulated participants for single and multiple video conference sessions run on Hotel. We use NB clients to generate video traffic patterns, rather than the Anabas Impromptu multipoint conferencing platform, to allow large-scale, practical experimentation. A single video conference session has up to 2,400 participants; we also run up to 150 video conference sessions with 20 participants each. A generic sketch of the round-trip measurement follows.
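
NaradaBrokering is a Java publish/subscribe system, and its API is not reproduced here; the sketch below is a generic in-process stand-in that shows the shape of the measurement: every message carries a send timestamp, the broker fans it out to all session members, and the sender records the round-trip time when its own message returns.

# Generic stand-in (not the NaradaBrokering API): a toy broker fans
# each message out to every session member; senders time how long
# their own messages take to come back around.
import queue
import time

class ToyBroker:
    """Relays each published message to every subscriber inbox."""
    def __init__(self):
        self.inboxes = []

    def subscribe(self):
        inbox = queue.Queue()
        self.inboxes.append(inbox)
        return inbox

    def publish(self, message):
        for inbox in self.inboxes:  # fan out to the whole session
            inbox.put(message)

broker = ToyBroker()
N_PARTICIPANTS = 20  # one simulated session, as in the multi-session runs
inboxes = [broker.subscribe() for _ in range(N_PARTICIPANTS)]

rtts = []
for sender in range(N_PARTICIPANTS):
    broker.publish((sender, time.perf_counter()))
    while True:  # drain the inbox until our own message returns
        origin, t_sent = inboxes[sender].get()
        if origin == sender:
            rtts.append((time.perf_counter() - t_sent) * 1000.0)
            break

print("mean RTT: %.3f ms over %d messages" % (sum(rtts) / len(rtts), len(rtts)))

In the real setup the broker sat on Foxtrot and the participants on Hotel, so each round trip crossed the inter-cloud link; the toy version only illustrates the bookkeeping.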

Indiana University/Anabas, Inc. Message Level Measurement – Round-trip Latency

Indiana University/Anabas, Inc. Message Level Measurement The average inter-cloud round-trip latency between Hotel and Foxtrot in a single video conference session with up to 2,400 participants is about 50 ms. Average round-trip latency jumps when there are more than 2,400 participants in a single session, and message backlog is observed at the broker beyond that point. Average round-trip latency can be maintained at about 50 ms with 150 simultaneous sessions of 20 participants each, an aggregate of 3,000 participants; multiple smaller sessions allow the NB broker to balance its work better. The limits shown are due to the use of a single broker, not of the system.

Indiana University/Anabas, Inc. Collaborative Sensor-Centric Application Level Measurement We report initial observations of an application using the Anabas collaborative sensor-centric grid framework. Virtual GPS sensors stream information to a sensor-centric grid at a rate of one message per second; a sensor-centric application consumes all the GPS sensor streams and computes latency and jitter (see the sketch below). We run two types of experiments: a single VM in one cloud (India) to establish a baseline, and four clouds – India, Hotel, Foxtrot and Sierra – each with a single VM.
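
A minimal sketch of the latency and jitter bookkeeping, with a local stand-in for a virtual GPS sensor emitting one timestamped reading per second. The jitter definition used here (mean absolute difference of consecutive latency samples) and the Bloomington-area coordinates are assumptions; the deck does not spell them out.

# Sketch: a virtual GPS sensor emits one timestamped reading per
# second; a consumer computes per-message latency and jitter.
import random
import time

def virtual_gps_sensor(n_readings):
    """Yield (send_time, lat, lon) once per second (illustrative coords)."""
    for _ in range(n_readings):
        yield (time.time(),
               39.17 + random.uniform(-0.01, 0.01),
               -86.52 + random.uniform(-0.01, 0.01))
        time.sleep(1.0)

# In the real experiments the receive timestamps are taken on another
# cloud instance; here receipt is local, so latencies are near zero.
latencies = []
for sent_at, lat, lon in virtual_gps_sensor(10):
    latencies.append(time.time() - sent_at)

mean_latency = sum(latencies) / len(latencies)
jitter = sum(abs(a - b) for a, b in zip(latencies, latencies[1:])) / (
    len(latencies) - 1
)
print("mean latency %.2f ms, jitter %.2f ms"
      % (mean_latency * 1000, jitter * 1000))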

Indiana University/Anabas, Inc. Collaborative Sensor-Centric Application Level – Round-trip Latency

Indiana University/Anabas, Inc. Collaborative Sensor-Centric Application Level – Jitter

Indiana University/Anabas, Inc. Collaborative Sensor-Centric Application Level Measurement Observations: In the case of a single VM in a cloud, we could stretch to support 100 virtual GPS sensors, with critically low idle CPU at 7% and unused RAM at 1 GB – not good for long-running applications or simulations. Average round-trip latency and jitter grow rapidly beyond 60 sensors. When using four geographically distributed clouds of two different types to run a total of 200 virtual GPS sensors, average round-trip latency and jitter remain quite stable, with average idle CPU at about the 35% level, which is more suitable for long-running simulations or applications.

Indiana University/Anabas, Inc. Preliminary Results Network level: FutureGrid can sustain at least 1 Gbps inter-cloud throughput and is a reliable network with a low packet loss rate. Message level: FutureGrid can sustain throughput close to its implemented capacity of 1 Gbps between Foxtrot and Hotel, and the multiple video conference sessions show that clouds can support publish/subscribe brokers effectively. Note that the limit of around 3,000 participants in the figure was reported as 800 in earlier work, showing that any degradation from running the broker in clouds is more than compensated by improved server performance. Application level: distributed clouds have encouraging potential to support scalable collaborative sensor-centric applications with stringent throughput, latency, jitter and reliability requirements.

Indiana University/Anabas, Inc. Future Plan Repeat current experiments to get better statistics; include scalability in the number of instances in each cloud; study the impact on latency of bare metal vs. VMs, commercial vs. academic clouds, and different cloud infrastructures (OpenStack, Nimbus, Eucalyptus); look at server-side limits with distributed brokers versus the number of clients (placing virtual clients so the client side is not the bottleneck); and look at the effect of using secure communication mechanisms.

Indiana University/Anabas, Inc. Acknowledgments We thank Bill McQuay of AFRL, Ryan Hartman of Indiana University and Gary Whitted of Ball Aerospace for their important support of this work. This material is based on work supported in part by the National Science Foundation under Grant No. 0910812 to Indiana University for “FutureGrid: An Experimental, High-Performance Grid Test-bed.” Other partners in the FutureGrid project include U. Chicago, U. Florida, U. Southern California, U. Texas at Austin, U. Tennessee at Knoxville and U. of Virginia.