Distributed FutureGrid Clouds for Scalable Collaborative Sensor-Centric Grid Applications
AMSA TO 4 Sensor Grid Technical Interchange Meeting
Anabas, Inc. & Indiana University
July 27, 2011

Anabas, Inc. & Indiana University Agenda of the Talk

Anabas, Inc. & Indiana University
Our Effort: We focus on understanding the characteristics of distributed cloud computing infrastructure for collaborative sensor-centric applications on FutureGrid.
Our Results and Future Plan: We report our preliminary findings on measured performance, scalability and reliability, and discuss the follow-on plan.

Anabas, Inc. & Indiana University
Methodology to measure the performance, scalability and reliability characteristics of FutureGrid:
- Use standard network performance tools at the network level.
- Use the IU NaradaBrokering system, which supports many practical communication protocols, to measure data at the message level.
- Use the Anabas sensor-centric grid framework, a message-based sensor service management and application development framework, to measure data at the collaborative application level.

Anabas, Inc. & Indiana University Overview of FutureGrid

Anabas, Inc. & Indiana University
An Overview of FutureGrid
- FutureGrid is an experimental testbed that supports large-scale research on distributed and parallel systems, algorithms, middleware and applications running on virtual machines (VMs) or bare metal.
- It supports several cloud environments, including Eucalyptus, Nimbus and OpenStack, all open-source software platforms that implement IaaS-style cloud computing.
- Both Eucalyptus and Nimbus support an AWS-compliant, EC2-based web service interface (see the sketch below). Eucalyptus also supports an AWS-compliant storage service, and Nimbus supports saving customized VMs to the Nimbus image repository.
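
Because the interface is EC2-compatible, ordinary EC2 client libraries can drive these clouds. Below is a minimal sketch, assuming the boto 2.x Python library; the endpoint hostname, port, path and credentials are placeholders rather than actual FutureGrid values.

    # Minimal sketch: list instances on an EC2-compatible cloud (Eucalyptus or
    # Nimbus) using boto 2.x.  Endpoint, port, path and credentials below are
    # placeholders, not the real FutureGrid settings.
    import boto
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="cloud-head-node.example.org")
    conn = boto.connect_ec2(
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
        is_secure=False,                 # many Eucalyptus 2.x installs used plain HTTP
        region=region,
        port=8773,                       # default Eucalyptus web-service port
        path="/services/Eucalyptus",     # a Nimbus endpoint would use its own path/port
    )

    # The same EC2 calls work regardless of which IaaS stack answers them.
    for reservation in conn.get_all_instances():
        for instance in reservation.instances:
            print(instance.id, instance.state, instance.ip_address)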

Anabas, Inc. & Indiana University
General Experimental Setup Using Nimbus & Eucalyptus
- We use four distributed, heterogeneous clouds on FutureGrid clusters: Hotel (Nimbus at University of Chicago), Foxtrot (Nimbus at University of Florida), India (Eucalyptus at Indiana University) and Sierra (Eucalyptus at UCSD).
- Distributed cloud scenarios are either pairs of clouds or a group of all four clouds.
- In the Nimbus clouds, each instance uses 2 cores and 12 GB RAM in a CentOS VM.
- In the Eucalyptus clouds, we use m1.xlarge instances; each m1.xlarge instance is roughly equivalent to a 2-core Intel Xeon X5570 with 12 GB RAM.
- We use ntp to synchronize the cloud instances before experiments (a clock-offset check is sketched below).
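
Because cross-cloud latency comparisons rely on the instance clocks agreeing, a pre-flight offset check is useful. The slide only says ntp is used; the sketch below is one illustrative way to verify the offset, assuming the third-party ntplib package, with the server name and the 10 ms threshold being arbitrary choices rather than values from the experiment.

    # Minimal sketch: check this instance's clock offset against an NTP server
    # before starting a measurement run.  Requires the ntplib package
    # (pip install ntplib); pool.ntp.org and the 10 ms limit are illustrative.
    import sys
    import ntplib

    MAX_OFFSET_SECONDS = 0.010   # abort the run if the clock is off by > 10 ms

    response = ntplib.NTPClient().request("pool.ntp.org", version=3)
    print("clock offset: %.3f ms" % (response.offset * 1000.0))

    if abs(response.offset) > MAX_OFFSET_SECONDS:
        sys.exit("clock offset too large; re-synchronize with ntp before measuring")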

Anabas, Inc. & Indiana University
Network Level Measurement
We run two types of experiments on pairs of cloud instances, with one instance on each cloud in the pair:
- Use iperf to measure bi-directional throughput.
- Use ping in conjunction with iperf to measure packet loss and round-trip latency on loaded and unloaded networks.
One such loaded-network probe is sketched below.
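
The probes are driven by the stock command-line tools. The following is a minimal sketch of a single loaded-network measurement, assuming Linux iperf 2.x and iputils ping are installed, an iperf server (iperf -s) is already running on the remote instance, and the peer hostname is a placeholder.

    # Minimal sketch: measure ping RTT and packet loss while 32 parallel iperf
    # streams load the link, as described on this slide.  The peer hostname is
    # a placeholder; an iperf server must already be running on that instance.
    import re
    import subprocess

    PEER = "remote-cloud-instance.example.org"

    # Start the load: 32 parallel TCP streams for 60 seconds.
    load = subprocess.Popen(["iperf", "-c", PEER, "-P", "32", "-t", "60"],
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    # Probe the loaded link with 50 ping packets.
    ping_out = subprocess.run(["ping", "-c", "50", PEER],
                              capture_output=True, text=True).stdout
    load.wait()

    # Typical Linux ping summary lines:
    #   "50 packets transmitted, 50 received, 0% packet loss, time 49070ms"
    #   "rtt min/avg/max/mdev = 27.1/27.9/31.2/0.8 ms"
    loss = re.search(r"([\d.]+)% packet loss", ping_out)
    avg_rtt = re.search(r"= [\d.]+/([\d.]+)/", ping_out)
    print("packet loss: %s%%" % loss.group(1))
    print("average RTT under load: %s ms" % avg_rtt.group(1))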

Anabas, Inc. & Indiana University Network Level – Throughput

Anabas, Inc. & Indiana University
Network Level – Packet Loss Rate

Instance Pair     Unloaded Packet Loss Rate    Loaded (32 iperf connections) Packet Loss Rate
India-Sierra      0%                           0.33%
India-Hotel       0%                           0.67%
India-Foxtrot     0%
Sierra-Hotel      0%                           0.33%
Sierra-Foxtrot    0%
Hotel-Foxtrot     0%                           0.33%

Anabas, Inc. & Indiana University Network Level – Round-trip Latency Due to VM

Anabas, Inc. & Indiana University Network Level – Round-trip Latency Due to Distance
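
A rough sanity check on why distance matters here: propagation delay alone sets a floor under the inter-site round-trip time. The sketch below uses an illustrative path length of about 3,000 km (roughly the Chicago-to-San Diego scale; not a value from the slides).

    # Back-of-the-envelope lower bound on RTT due to distance alone.  Light in
    # fiber travels at roughly 2/3 of c, about 200,000 km/s; the 3,000 km path
    # length is an illustrative figure, not taken from the slides.
    SPEED_IN_FIBER_KM_PER_S = 200_000.0
    path_km = 3_000.0

    rtt_floor_ms = 2 * path_km / SPEED_IN_FIBER_KM_PER_S * 1000.0
    print("propagation-only RTT floor: %.0f ms" % rtt_floor_ms)   # about 30 ms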

Anabas, Inc. & Indiana University Network Level – Ping RTT with 32 iperf connections

Anabas, Inc. & Indiana University Network Level – Ping RTT with 32 iperf connections

Anabas, Inc. & Indiana University
Message Level Measurement
We run a 2-cloud distributed experiment using the Nimbus clouds on Foxtrot and Hotel:
- A NaradaBrokering (NB) broker runs on Foxtrot.
- Simulated participants for single and multiple video conference sessions run on Hotel.
- NB clients generate the video traffic patterns, instead of the Anabas Impromptu multipoint conferencing platform, to allow large-scale and practical experimentation.
- A single video conference session has up to 2,400 participants.
- Up to 150 video conference sessions run with 20 participants each.
The timestamping idea behind the latency measurement is sketched below.
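
NaradaBrokering's actual client API is not reproduced here; the sketch below only illustrates the measurement idea: each simulated participant stamps outgoing "video" messages with a send time, and round-trip latency is the gap between that stamp and the arrival of the echoed copy. The UDP echo stub stands in for the broker, and every name is illustrative.

    # Illustrative sketch of the round-trip timing at the message level.  A toy
    # UDP echo loop stands in for a NaradaBrokering broker topic; none of this
    # is the NB API.
    import socket
    import struct
    import threading
    import time

    ECHO_ADDR = ("127.0.0.1", 9999)    # stand-in for the broker on Foxtrot

    def echo_server():
        """Toy 'broker': reflect every datagram back to its sender."""
        srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        srv.bind(ECHO_ADDR)
        while True:
            data, addr = srv.recvfrom(2048)
            srv.sendto(data, addr)

    threading.Thread(target=echo_server, daemon=True).start()
    time.sleep(0.2)                    # give the stub time to start

    def participant(n_messages=100, payload_size=1024):
        """Simulated participant: send timestamped frames, record round trips."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        padding = b"x" * (payload_size - 8)
        rtts_ms = []
        for _ in range(n_messages):
            sock.sendto(struct.pack("!d", time.time()) + padding, ECHO_ADDR)
            data, _ = sock.recvfrom(2048)
            sent = struct.unpack("!d", data[:8])[0]
            rtts_ms.append((time.time() - sent) * 1000.0)
            time.sleep(1.0 / 30)       # roughly 30 'video frames' per second
        return sum(rtts_ms) / len(rtts_ms)

    print("average round-trip latency: %.2f ms" % participant())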

Anabas, Inc. & Indiana University Message Level Measurement – Round-trip Latency

Anabas, Inc. & Indiana University
Message Level Measurement
- The average inter-cloud round-trip latency between Hotel and Foxtrot in a single video conference session with up to 2,400 participants is about 50 ms.
- Average round-trip latency jumps when there are more than 2,400 participants in a single session, and message backlog is observed at the broker beyond that point.
- Average round-trip latency can be maintained at about 50 ms with 150 simultaneous sessions of 20 participants each, an aggregate total of 3,000 participants; multiple smaller sessions allow the NB broker to balance its work better.
- The limits shown are due to the use of a single broker, not of the system.

Anabas, Inc. & Indiana University
Collaborative Sensor-Centric Application Level Measurement
We report initial observations of an application built with the Anabas collaborative sensor-centric grid framework:
- Virtual GPS sensors stream information to a sensor-centric grid at a rate of 1 message per second.
- A sensor-centric application consumes all the GPS sensor streams and computes latency and jitter (one way of computing these is sketched below).
- We run two types of experiments: a single VM in one cloud (India) to establish a baseline, and four clouds (India, Hotel, Foxtrot, Sierra), each with a single VM.
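
The slides do not spell out the jitter metric; the sketch below shows one common choice, the standard deviation of per-message latency, applied to a stream of timestamped sensor messages. The function and message layout are illustrative, not the Anabas framework API.

    # Illustrative sketch: average latency and jitter for a stream of
    # timestamped virtual-GPS messages.  Jitter here is the standard deviation
    # of per-message latency; the Anabas framework may define it differently.
    import statistics
    import time

    def consume(messages):
        """messages: iterable of (send_time, payload) tuples, e.g. virtual GPS
        fixes arriving at roughly 1 message per second per sensor."""
        latencies_ms = [(time.time() - send_time) * 1000.0
                        for send_time, _payload in messages]
        return statistics.mean(latencies_ms), statistics.pstdev(latencies_ms)

    # Toy usage with locally generated 'sensor' messages sent 50 ms ago:
    demo = [(time.time() - 0.05, "virtual GPS fix") for _ in range(10)]
    avg_ms, jitter_ms = consume(demo)
    print("average latency: %.1f ms, jitter: %.1f ms" % (avg_ms, jitter_ms))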

Anabas, Inc. & Indiana University Collaborative Sensor-Centric Application Level – Round-trip Latency

Anabas, Inc. & Indiana University Collaborative Sensor-Centric Application Level – Jitter

Anabas, Inc. & Indiana University
Collaborative Sensor-Centric Application Level Measurement
Observations:
- With a single VM in one cloud we could stretch to support 100 virtual GPS sensors, but with critically low idle CPU (7%) and unused RAM (1 GB), which is not good for long-running applications or simulations. The average round-trip latency and jitter grow rapidly beyond 60 sensors.
- With four geographically distributed clouds of two different types running a total of 200 virtual GPS sensors, average round-trip latency and jitter remain quite stable, and average idle CPU stays at about the 35% level, which is more suitable for long-running simulations or applications.

Anabas, Inc. & Indiana University
Preliminary Results
Network Level Measurement
- FutureGrid can sustain at least 1 Gbps inter-cloud throughput and provides a reliable network with a low packet loss rate.
Message Level Measurement
- FutureGrid can sustain a throughput close to its implemented capacity of 1 Gbps between Foxtrot and Hotel.
- The multiple video conference sessions show that clouds can support publish/subscribe brokers effectively. Note that the limit of around 3,000 participants in the figure was reported as 800 in earlier work, showing that any degradation from using clouds is more than compensated by improved server performance.
Collaborative Sensor-Centric Application Level Measurement
- Distributed clouds have encouraging potential to support scalable collaborative sensor-centric applications with stringent throughput, latency, jitter and reliability requirements.

Anabas, Inc. & Indiana University
Future Plan
- Repeat the current experiments to get better statistics.
- Include scalability in the number of instances in each cloud.
- Research the impact on latency of bare metal vs. VMs, commercial vs. academic clouds, and different cloud infrastructures (OpenStack, Nimbus, Eucalyptus).
- Research hybrid clouds for collaborative sensor grids.
- Research server-side limits with distributed brokers versus the number of clients (with virtual clients placed so that the client side is not a bottleneck).
- Research the effect of using secure communication mechanisms.

Anabas, Inc. & Indiana University Hybrid Clouds (diagram combining a community cloud, a private internal cloud and a public cloud)

Anabas, Inc. & Indiana University
- Private Cloud: infrastructure operated solely by a single organization.
- Community Cloud: infrastructure shared among several organizations from a specific community of interest (COI) with common concerns.
- Public Cloud: infrastructure shared by the public.
- Hybrid Cloud: a composition of two or more clouds that remain unique entities but are integrated together at some levels.

Anabas, Inc. & Indiana University
Preliminary Hybrid Clouds Experiment: Scalability & Interoperability
The hybrid setup combines a private community cloud, three private FutureGrid clouds and a public cloud:
- Private community cloud: OpenStack (IU)
- FutureGrid cloud Alamo: OpenStack (UT), 88 VMs
- FutureGrid cloud Sierra: Nimbus (UCSD), 11 VMs
- FutureGrid cloud Foxtrot: Nimbus (UFL), 10 VMs
- Public cloud: Amazon EC2 (N. Virginia), 1 VM

Anabas, Inc. & Indiana University Network Level – Round-trip Latency Due to VM (OpenStack). Number of iperf connections = 0; ping RTT = 0.58 ms.

Anabas, Inc. & Indiana University Network Level – Round-trip Latency Due to Distance

Anabas, Inc. & Indiana University
Acknowledgments
We thank Bill McQuay of AFRL, Ryan Hartman of Indiana University and Gary Whitted of Ball Aerospace for their important support of this work. This material is based on work supported in part by the National Science Foundation under its grant to Indiana University for "FutureGrid: An Experimental, High-Performance Grid Test-bed." Other partners in the FutureGrid project include U. Chicago, U. Florida, U. Southern California, U. Texas at Austin, U. Tennessee at Knoxville and U. of Virginia.