Future Usage Environments & Systems Integration. November 16th 2004, HCMDSS planning workshop. Douglas C. Schmidt (moderator); David Forslund, Cognition Group.


Future Usage Environments & Systems Integration
November 16th 2004, HCMDSS planning workshop
Douglas C. Schmidt (moderator); David Forslund, Cognition Group; Jane W. S. Liu, Academia Sinica; Raj Rajkumar, CMU; Victoria L. Rich, U. Penn; Douglas Rosendale, Veterans Affairs; John R. Zaleski, Siemens

The Need: Better Tools

Software developers depend on complex platforms, and increasingly work by extending or customizing them with extra code:
– Web Services, J2EE, CORBA
– Even Microsoft .NET

The quality of these platforms and "tools" is a direct determinant of the quality of the applications and solutions built on them.

TRUST, September 13th 2004, NSF STC Review

Existing Platforms Are Inadequate

But existing platforms:
– Scale poorly and can "melt down" under stress
– Are insecure; child's play to disrupt or intrude
– Are human-intensive to deploy and configure
– Are hard to repair when disruption occurs
– Are costly to own and operate

Example: publish-subscribe scalability issues within Yahoo, Amazon, and other big data centers.

Why Don't Platforms Scale?

The prevailing client-server communications model, combined with strong reliability goals, imposes O(N) delays and O(N²) performance degradation.

New technologies offer hope:
– Based on peer-to-peer interaction styles
– Substitute probabilistic objectives for classic deterministic ones
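The scaling contrast on this slide can be illustrated with a small simulation. In push-style epidemic gossip, the set of informed nodes roughly doubles each round, so an update reaches all N nodes in O(log N) rounds, unlike the O(N) fan-out of a single server contacting every client. This sketch is not from the slides; the `fanout` parameter and the fixed seed are arbitrary illustrative choices.

```python
import random

def gossip_rounds(n_nodes, fanout=1, seed=42):
    """Simulate push gossip: each round, every informed node forwards
    the update to `fanout` uniformly random peers. Returns the number
    of rounds until every node is informed."""
    rng = random.Random(seed)
    informed = {0}  # node 0 starts with the update
    rounds = 0
    while len(informed) < n_nodes:
        newly = set()
        for _node in informed:
            for _ in range(fanout):
                newly.add(rng.randrange(n_nodes))
        informed |= newly
        rounds += 1
    return rounds

# Epidemic dissemination converges in O(log N) rounds, so multiplying
# N by 100 adds only a handful of rounds.
for n in (100, 1000, 10000):
    print(n, gossip_rounds(n))
```

Because growth is logarithmic, the round count barely moves as N grows by orders of magnitude, which is the intuition behind trading deterministic guarantees for probabilistic ones.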

Kelips

Affinity groups: peer membership through a consistent hash. Nodes are mapped to one of √N affinity groups, so each group holds roughly √N members.

Each node maintains three soft-state tables:
– An affinity group view listing (id, heartbeat, rtt) entries for the other members of its own group; for example, node 110 knows about members 230, 30, ...
– Contact pointers: for each foreign group, one or more contact nodes; for example, node 202 is a "contact" for node 110 in group 2
– Resource tuples, replicated cheaply by a gossip protocol; "cnn.com" maps to group 2, so node 110 tells group 2 to "route" inquiries about cnn.com to it

A lookup is routed by a contact node in the target group (group 2 in the example), then handled by the home node (node 110).
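The Kelips structures above can be sketched in a few lines. This is an illustrative model, not the actual Kelips implementation: the hash function, node names, and table layouts are assumptions, and replication within a group is done directly rather than by gossip.

```python
import hashlib
import math

def group_of(key, num_groups):
    """Map a node id or resource name to one of the affinity groups."""
    digest = int(hashlib.sha1(key.encode()).hexdigest(), 16)
    return digest % num_groups

class KelipsNode:
    def __init__(self, node_id, num_groups):
        self.node_id = node_id
        self.group = group_of(node_id, num_groups)
        self.tuples = {}    # resource name -> home node id (soft state)
        self.contacts = {}  # foreign group -> a contact KelipsNode

def build(n):
    """Create n nodes partitioned into ~sqrt(n) affinity groups,
    and give every node one contact per group."""
    k = int(math.sqrt(n))
    nodes = [KelipsNode("node-%d" % i, k) for i in range(n)]
    groups = {}
    for nd in nodes:
        groups.setdefault(nd.group, []).append(nd)
    for nd in nodes:
        for g, members in groups.items():
            nd.contacts[g] = members[0]
    return nodes, groups, k

def publish(groups, k, resource, home_id):
    """In Kelips a gossip protocol replicates the tuple; here we
    simply install it at every member of the resource's group."""
    for member in groups.get(group_of(resource, k), []):
        member.tuples[resource] = home_id

def lookup(node, resource, k):
    """One-hop lookup: ask a contact in the resource's group."""
    g = group_of(resource, k)
    contact = node if node.group == g else node.contacts.get(g)
    return contact.tuples.get(resource) if contact else None
```

For example, after `publish(groups, k, "cnn.com", "node-10")`, any node can resolve `cnn.com` in a single hop via its contact for that resource's group, which is what keeps Kelips lookups O(1) in network size.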

Astrolabe: Scalable Monitoring Infrastructure

Captures system state hierarchically, using a P2P protocol that "assembles a puzzle" without any servers.

Each leaf zone (e.g., Ithaca with hosts swift, falcon, cardinal; San Francisco with hosts gazelle, zebra, gnu) maintains a table of per-host attributes: Name, Load, Weblogic?, SMTP?, Word Version, ... A SQL query "summarizes" each child table into a single row of the parent table, whose columns are Name, Avg Load, WL contact, and SMTP contact for zones such as SF, Ithaca, and Paris. The dynamically changing query output is visible system-wide.
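The hierarchical summarization can be sketched as follows. The host attribute values here are illustrative placeholders (the original table values did not survive extraction), and a plain Python function stands in for Astrolabe's SQL aggregation queries.

```python
# Leaf-zone tables: each host reports a row of attributes.
ithaca = [
    {"name": "swift",    "load": 2.0, "smtp": True},
    {"name": "falcon",   "load": 1.5, "smtp": False},
    {"name": "cardinal", "load": 4.5, "smtp": True},
]
san_francisco = [
    {"name": "gazelle", "load": 1.7, "smtp": False},
    {"name": "zebra",   "load": 3.2, "smtp": True},
    {"name": "gnu",     "load": 0.5, "smtp": True},
]

def summarize(zone_name, rows):
    """One aggregation step: collapse a child zone's table into a
    single row of the parent table (avg load, a contact host)."""
    smtp_hosts = [r["name"] for r in rows if r["smtp"]]
    return {
        "zone": zone_name,
        "avg_load": round(sum(r["load"] for r in rows) / len(rows), 2),
        "smtp_contact": smtp_hosts[0] if smtp_hosts else None,
    }

# The root table is built from the summaries, and it changes
# dynamically as leaf rows change.
root = [summarize("Ithaca", ithaca), summarize("SF", san_francisco)]
print(root)
```

Applying the same step recursively gives the multi-level hierarchy on the slide: any node can see the aggregated state of the whole system without a central server holding the raw rows.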

Our team has many ideas like the two you've seen. We're challenging assumptions that industry accepts as dogma (like client-server structure), building real solutions that really work, and finding ways to apply them in critical infrastructure settings:
– For example, we've worked with EPRI to explore applications of Astrolabe and Kelips to monitoring the electric power grid
– Poor monitoring contributed to the 2003 blackout

Bio

Dr. Douglas C. Schmidt is a Full Professor in the Electrical Engineering and Computer Science Department, Associate Chair of the Computer Science and Engineering programs, and a Senior Researcher at the Institute for Software Integrated Systems (ISIS) at Vanderbilt University. His research has facilitated the development of distributed real-time and embedded (DRE) middleware and applications on parallel platforms running over high-speed networks and embedded system interconnects.

Dr. Schmidt is an internationally renowned and widely cited expert on distributed computing middleware patterns, middleware frameworks, and Real-time CORBA. He has published over 300 works in technical journals, conferences, and books; has served as guest editor and co-editor for various publications; was editor of C++ Report magazine; and was a co-author for the C/C++ Users Journal.

Dr. Schmidt was a Program Manager in the DARPA Information Exploitation Office (IXO) and Information Technology Office (ITO), where he led the national effort on QoS-enabled component middleware research in the Program Composition for Embedded Systems (PCES) program. He was Co-Chair of the Software Design and Productivity (SDP) Coordinating Group of the Federal government's multi-agency Information Technology Research and Development (IT R&D) Program, and served as Deputy Director of the DARPA ITO, where he helped set the national IT research and development agenda.

Dr. Schmidt was formerly an Associate Professor in the Electrical and Computer Engineering Department at the University of California, Irvine, and an Associate Professor and Director of the Center for Distributed Object Computing at Washington University in St. Louis. He has served in many capacities, up to program chair, at leading industry conferences, and has presented nearly 300 keynote addresses, invited talks, and tutorials. Dr. Schmidt has over fifteen years of experience developing DRE middleware and model-driven development tools.
He led development of the ADAPTIVE Communication Environment (ACE) and, with colleagues at ISIS, Vanderbilt, used ACE to develop a high-performance, real-time CORBA ORB endsystem called The ACE ORB (TAO). In turn, ACE and TAO form the basis for the Component-Integrated ACE ORB (CIAO), a real-time CORBA Component Model implementation built by the DOC Group (which also developed CoSMIC). ACE, TAO, CIAO, and CoSMIC have been used successfully by developers at hundreds of companies, and Dr. Schmidt and colleagues have applied these middleware platforms and tools on large-scale projects at Fortune 500 companies.