1
Engineering Assistant for the System Execution Laboratory (EASEL)
Sumant Tambe, Nilabja Roy, James Hill, Will Otte, Krishnakumar Balasubramanian
2
New Demands on Distributed Real-time & Embedded (DRE) Systems
Mapping & integrating problem artifacts to solution artifacts is very hard.
Key challenges in the problem space:
- Network-centric, dynamic, very large-scale "systems of systems"
- Stringent simultaneous quality of service (QoS) demands
- Highly diverse & complex problem domains
Key challenges in the solution space:
- Enormous accidental & inherent complexities
- Continuous evolution & change
- Highly heterogeneous platform, language, & tool environments
3
Evolution of DRE System Development
Mission-critical DRE systems have historically been built directly atop hardware: tedious, error-prone, & costly over lifecycles.
Technology problems – legacy DRE systems tend to be:
- Stovepiped
- Proprietary
- Brittle & non-adaptive
- Expensive
- Vulnerable
[Figure: stovepiped legacy avionics applications (Air Frame, Nav, HUD, FLIR, GPS, IFF, etc.) built directly on a cyclic executive & RTOS]
Consequence: small changes to legacy software often have a big impact on DRE system QoS, integration, & maintenance.
4
Evolution of DRE System Development
Mission-critical DRE systems have historically been built directly atop hardware: tedious, error-prone, & costly over lifecycles; legacy DRE systems tend to be stovepiped, proprietary, brittle & non-adaptive, expensive, & vulnerable.
Middleware factors out many reusable services from DRE application responsibility:
- Essential for product-line architectures
- Middleware is no longer the primary DRE system performance bottleneck
[Figure: layered stack – DRE Applications, Middleware Services, Operating Systems & Protocols, Hardware & Networks]
Middleware alone is insufficient to solve key large-scale DRE system challenges!
5
DRE Systems: The Challenges Ahead
There are many middleware platform technologies to choose from, e.g., RT/DP CORBA + DRTSJ, RT-CORBA, J2ME, FT CORBA, load balancers, IntServ + DiffServ, RTOS + RT Java, Gigabit Ethernet.
- There is a limit to how much application functionality can be refactored into reusable COTS middleware
- Middleware itself has become very hard to use & provision statically & dynamically, e.g., network latency & bandwidth, workload & replicas, CPU & memory, connections & priority bands
- Component-based DRE systems are also very hard to deploy & configure
[Figure: layered stack – DRE Applications, Middleware Services, Middleware, Operating System & Protocols, Hardware & Networks]
6
Promising Solution: Model Driven Development (MDD)
Develop, validate, & standardize generative software technologies that model, analyze, synthesize, & provision multiple layers of middleware & application components that require simultaneous control of multiple QoS properties end-to-end.
Partial specialization is essential for inter-/intra-layer optimization & advanced product-line architectures.
Example configuration metadata (from the slide figure):
  <CONFIGURATION_PASS>
    <HOME>
      <…>
      <COMPONENT>
        <ID> <…></ID>
        <EVENT_SUPPLIER>
          <…events this component supplies…>
        </EVENT_SUPPLIER>
      </COMPONENT>
    </HOME>
  </CONFIGURATION_PASS>
[Figure: layered stack – DRE Applications, Middleware Services, Middleware, Operating System & Protocols, Hardware & Networks (RT-CORBA, J2ME, DRTSJ, Gigabit Ethernet)]
Goal: enhance developer productivity & software quality by providing higher-level languages & tools for middleware/application developers & users.
7
Introduction to EASEL
Challenge – lack of tools for:
- Effectively analyzing the behavior of software systems
- Predicting the performance of different execution architectures
Goal/Objective: provide an integrated means to co-evolve static and dynamic elements of system design.
8
EASEL Research Challenges (1/2)
Distributed system deployment & execution architectures are:
- Hard to simulate accurately due to inherent complexities, e.g., managing shared resources in distributed systems under various time/space constraints
- Closely coupled to the underlying hardware/software infrastructure & therefore affected directly by changes in platforms, e.g., the need to support emerging & legacy platform technologies
- Complex, varied, & continuously evolving, e.g., middleware, component technology, programming languages, operating systems, networks, & hardware
- Largely designed & evaluated using manual and/or ad hoc techniques that are tedious, error-prone, & non-scalable
Existing solutions often have no formal basis for validating & verifying that the configured software will meet its performance requirements throughout a distributed system.
9
EASEL Research Challenges (2/2)
Distributed systems also have their own concerns:
- Geographical dispersion increases complexity
- Network behavior can be very unpredictable
- Distributed fault tolerance & security mechanisms may incur significant overhead in distributed deployments compared with local deployments
Software technologies that enhance system decomposition also complicate system integration by yielding new unresolved challenges, including:
- Lack of tools for composing, configuring, & deploying distributed system components on heterogeneous platforms
- Distributed resource allocation & control policies (e.g., first-fit decreasing) that often lead to less-than-optimal performance (see the sketch below)
- Contention for resources shared by distributed components
- Need for global synchronization mechanisms
Deployment & execution architectures are often treated as second-class concerns, but they are first-class problems!
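As a rough illustration of why a greedy policy such as first-fit decreasing can be suboptimal, here is a minimal sketch of the heuristic; the Component struct, demand values, and node capacities are hypothetical and are not taken from RACE or any EASEL tool.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Hypothetical component with a single resource demand (e.g., a CPU share).
struct Component { std::string name; double demand; };

// First-fit decreasing: sort components by descending demand, then place each
// on the first node with enough remaining capacity. Fast, but the resulting
// packing is generally not optimal, which is the point made on this slide.
std::vector<int> firstFitDecreasing(std::vector<Component> comps,
                                    std::vector<double> capacity) {
  std::sort(comps.begin(), comps.end(),
            [](const Component& a, const Component& b) { return a.demand > b.demand; });
  std::vector<int> placement(comps.size(), -1);
  for (std::size_t i = 0; i < comps.size(); ++i) {
    for (std::size_t node = 0; node < capacity.size(); ++node) {
      if (comps[i].demand <= capacity[node]) {
        capacity[node] -= comps[i].demand;
        placement[i] = static_cast<int>(node);
        break;
      }
    }
  }
  return placement;  // -1 means the component could not be placed anywhere
}

int main() {
  std::vector<Component> comps = {{"Planner", 0.6}, {"Sensor", 0.5}, {"Logger", 0.4}};
  std::vector<double> nodes = {1.0, 1.0};  // two nodes with unit CPU capacity
  for (int p : firstFitDecreasing(comps, nodes)) std::cout << p << ' ';
  std::cout << '\n';  // prints the node index chosen for each (sorted) component
}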
10
EASEL Approach
EASEL focuses on modeling the structural & dynamic behavior of application & infrastructure components in distributed systems, i.e., networked component integration.
EASEL MDD tools will:
- Enable software developers & integrators to improve their system deployment & configuration activities by capturing, validating, & synthesizing "correct-by-construction" system component deployments
- Provide the ability to model distributed execution architectures & conduct "what if" experiments on behavior to identify bottlenecks & quantify scalability under various workloads
- Support code instrumentation for run-time performance analysis
- Support different component technologies (e.g., .NET, CCM, EJB)
- Leverage best-of-breed MDD technologies (e.g., GME, Eclipse, Microsoft)
11
Modeling Distributed Systems
The EASEL models will address the following aspects:
- Expressing system architecture
- Expressing inter-/intra-component behavior
- Configuring system parameters
- Upstream integration testing & evaluation
12
EASEL Technologies
Build a proof-of-concept tool based on existing CCM technologies:
- CIAO – QoS-enabled middleware based on the Lightweight CCM spec
- CoSMIC – MDD tool-chain supporting development, configuration, & deployment
- DAnCE – Deployment & Configuration Engine
- RACE – Resource Allocation & Control Engine
- CUTS – Component workload emulator & Utilization Test Suite
All platforms/tools are influenced by – & influence – open standards.
13
EASEL Technologies: Integration of Existing Work
- CoSMIC/GME modeling tools (Vanderbilt University & LMCO ATL)
- CIAO/DAnCE/CUTS/RACE real-time CCM C++ component middleware platforms (Vanderbilt University & LMCO ATL)
- System evaluation using ISISlab: 4x14 dual-processor 2.8 GHz Xeon blades running Linux, connected via Gigabit Ethernet
[Figure: CoSMIC & CUTS layered over ISISlab – application plane, software infrastructure, hardware, & resource pool layers]
14
EASEL Objectives – Use the prototype EASEL tool to:
- Construct a PICML model for a representative application, e.g., component interfaces, component logical interconnections, target domain nodes, static deployment planning
- Develop a CUTS-based execution emulation architecture that defines the mapping of emulated components to logical & physical distributed computing resources
- Demonstrate the MDD tool-based design & experimentation process & its flow down into the CIAO/DAnCE/RACE middleware platform
- Demonstrate "what if" performance speculation for a given design by manual analysis using LMCO ATL scheduling tools
- Augment & validate speculation with actual run-time performance analysis using MDD-based tools & middleware platforms
15
EASEL Workflow – Demonstrate the workflow:
1. Use PICML to define component behavior & interactions & to generate system artifacts & deployment metadata
2. Deploy the system using DAnCE
3. Perform resource allocations using RACE
4. Evaluate system behavior & performance using CUTS
5. Monitor application performance & provide feedback
6. Use feedback from CUTS to modify application behavior & reconfigure the system
7. Redeploy the system using DAnCE
16
Model Distributed Components & Interconnections
Use the Platform-Independent Component Modeling Language (PICML):
- Developed using GME; core of the CoSMIC toolchain
- Capture elements & dependencies visually
- Define "static semantics" using the Object Constraint Language (OCL)
- Define "dynamic semantics" via model interpreters, which generate system artifacts and metadata (see the sketch below)
- Goal: "correct-by-construction"
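To illustrate the model-interpreter role in rough terms, the sketch below walks an in-memory model and emits a fragment of deployment metadata; the ComponentModel struct, the element names, and emitDeploymentMetadata() are invented for illustration and do not reflect PICML's actual interpreters or descriptor schema.

#include <iostream>
#include <string>
#include <vector>

// Hypothetical in-memory model, as a visual modeling tool might hold it.
struct ComponentModel {
  std::string id;
  std::string node;                        // target node chosen in the deployment plan
  std::vector<std::string> suppliedEvents; // events this component supplies
};

// A toy "model interpreter": walk the model and synthesize metadata from it.
void emitDeploymentMetadata(const std::vector<ComponentModel>& model, std::ostream& out) {
  out << "<deploymentPlan>\n";
  for (const auto& c : model) {
    out << "  <instance id=\"" << c.id << "\" node=\"" << c.node << "\">\n";
    for (const auto& e : c.suppliedEvents)
      out << "    <suppliesEvent>" << e << "</suppliesEvent>\n";
    out << "  </instance>\n";
  }
  out << "</deploymentPlan>\n";
}

int main() {
  std::vector<ComponentModel> model = {
      {"Planner", "node-1", {"RouteReady"}},
      {"Sensor",  "node-2", {"TrackUpdate"}}};
  emitDeploymentMetadata(model, std::cout);  // print the synthesized descriptor
}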
17
Defining Intra-Component Behavior
Motivation: representing component behavior is crucial to system analysis.
Problems:
- PICML had no means for capturing behavior
- Cannot capture dependencies between different ports of a component
- The Workload Modeling Language (WML) is a point solution & is not generalizable
18
Enhancement to PICML: Define behavior using I/O Automata
I/O automata are a formal method for defining the behavior of discrete event-based systems:
- Define behavior as a sequence of alternating actions and states
- Represent "execution flow": receive inputs on ports, execute actions, send output on ports
- Emphasis on execution "trace semantics", as opposed to state transitions
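A minimal sketch of the trace-oriented view described above: one component alternates actions and states, and the artifact of interest is the recorded action sequence (the trace). The port names, action labels, and the Automaton type are illustrative assumptions, not part of PICML or the I/O automata formalism as published.

#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Hypothetical trace-oriented view of one component's behavior.
struct Automaton {
  std::string state = "idle";
  std::vector<std::string> trace;                        // observed action sequence
  std::map<std::string, std::function<void()>> onInput;  // input port -> handler

  void input(const std::string& port) {                  // receive on an input port
    trace.push_back("in:" + port);
    if (auto it = onInput.find(port); it != onInput.end()) it->second();
  }
  void internal(const std::string& action, const std::string& next) {
    trace.push_back(action);                             // execute an internal action
    state = next;                                        // move to the next state
  }
  void output(const std::string& port) { trace.push_back("out:" + port); }
};

int main() {
  Automaton planner;
  planner.onInput["track_ready"] = [&planner] {
    planner.internal("compute_route", "planning");
    planner.output("route_out");                         // send on an output port
    planner.internal("reset", "idle");
  };
  planner.input("track_ready");
  for (const auto& a : planner.trace) std::cout << a << '\n';  // the execution trace
}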
19
Behavior Modeling Benefits
- Trace "execution flow" at the system level
- Enables analysis of compositions, i.e., whole-system analysis
- Exchange behavioral information with other tools for sophisticated analysis, e.g., generate a characterization file
20
Defining Inter-Component Behavior
Motivation: analysis of multiple concurrent execution flows in complex systems.
Problems:
- Lack of tools for visualizing behavior at multiple levels of abstraction
- Lack of tools for specifying context-dependent personalities for application components, i.e., specifying roles in multiple different simultaneous flows
Background: there has been massive growth in the development of large-scale distributed applications, supported by techniques such as reusable patterns, libraries, frameworks, & layering, but corresponding debugging support is lacking. Causes include inherent complexity (the distributed global context of an application, synchronization of components), accidental complexity (new software techniques make debugging more challenging), lack of research, & lack of tool support for capturing the top-level view of a system. A distributed architecture implies distributed application contexts.
21
Enhancement to PICML: Path Diagram
- Build a graph with (component, port) tuples as vertices & inter-component connections as edges
- Generate "execution flows", i.e., path diagrams, using standard graph algorithms
- Allow assignment of properties to each path, e.g., deadlines & criticality
- Enables system analysis at multiple levels of detail
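A sketch of how such a path diagram could be derived with a standard traversal: vertices are (component, port) pairs, edges are connections, and a depth-first walk enumerates the execution flows. The vertex names, the intra-component edge, and the per-path deadline shown are assumptions for illustration, not the PICML implementation.

#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Vertex = (component, port); edge = an inter-component (or intra-component) connection.
using Vertex = std::pair<std::string, std::string>;
using Graph  = std::map<Vertex, std::vector<Vertex>>;

// Depth-first enumeration of all simple paths from a source vertex to the sinks.
void enumeratePaths(const Graph& g, const Vertex& v,
                    std::vector<Vertex>& path, std::vector<std::vector<Vertex>>& out) {
  path.push_back(v);
  auto it = g.find(v);
  if (it == g.end() || it->second.empty()) {
    out.push_back(path);                 // sink reached: record one execution flow
  } else {
    for (const Vertex& next : it->second) enumeratePaths(g, next, path, out);
  }
  path.pop_back();
}

int main() {
  Graph g;
  g[{"Sensor", "track_out"}]  = {{"Planner", "track_in"}};
  g[{"Planner", "track_in"}]  = {{"Planner", "route_out"}};   // intra-component dependency
  g[{"Planner", "route_out"}] = {{"Effector", "cmd_in"}, {"Logger", "log_in"}};

  std::vector<Vertex> path;
  std::vector<std::vector<Vertex>> flows;
  enumeratePaths(g, {"Sensor", "track_out"}, path, flows);
  for (const auto& f : flows) {
    for (const auto& [comp, port] : f) std::cout << comp << '.' << port << ' ';
    std::cout << "(deadline: 50 ms)\n";  // a property that could be attached per path
  }
}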
22
Path Diagram Benefits (1/2)
- Calculates execution flow between components; key to visualizing system-level flows
- Enables design-time detection of system anomalies: detect the critical path, detect bottlenecks, detect critical components
[Figure annotations: "Critical component", "Critical path: assign extra resources"]
23
Path Diagram Benefits (2/2)
- Configure components in a flow-specific fashion, e.g., criticality, resource usage, deadline
- Enables a top-level view of the application components
- Differentiate components based on load levels
- Useful for visualizing information obtained from emulation runs (see the sketch below)
[Figure annotations: "Possible bottleneck?", "Longest path"]
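A small sketch of how the longest (critical) path and a possible bottleneck might be picked out of per-flow measurements, e.g., from an emulation run; the Hop struct, latency values, and component names are hypothetical.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// One hop in an execution flow, with a measured or estimated latency.
struct Hop { std::string component; double latencyMs; };
using Flow = std::vector<Hop>;

// End-to-end latency of one flow.
double total(const Flow& f) {
  double sum = 0;
  for (const auto& h : f) sum += h.latencyMs;
  return sum;
}

int main() {
  // Hypothetical per-flow measurements for two execution flows.
  std::vector<Flow> flows = {
      {{"Sensor", 4}, {"Planner", 22}, {"Effector", 6}},
      {{"Sensor", 4}, {"Logger", 9}}};

  // Longest (critical) path: the flow with the largest end-to-end latency.
  auto critical = std::max_element(flows.begin(), flows.end(),
      [](const Flow& a, const Flow& b) { return total(a) < total(b); });

  // Possible bottleneck: the heaviest component on that path.
  auto bottleneck = std::max_element(critical->begin(), critical->end(),
      [](const Hop& a, const Hop& b) { return a.latencyMs < b.latencyMs; });

  std::cout << "critical path latency: " << total(*critical) << " ms, "
            << "possible bottleneck: " << bottleneck->component << '\n';
}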
24
Defining Application Quality of Service (QoS)
Motivation:
- Declarative specification of application QoS parameters
- Platform-independent specification
Challenges:
- Existing tools are tied tightly to platforms and applications
- The compromise between high-level generality and platform specificity may be difficult
- Specializing to a target platform may be difficult
[Figure annotations: "This component requires a classified environment", "Critical component: requires higher priority", "This component requires the VxWorks operating system", "Potential bottleneck: this link should be allocated additional bandwidth"]
25
Enhancement to PICML: provide QoS modeling capability at two levels of abstraction.
- High-level modeling elements from the UML Profile for Modeling QoS, used to define characterizations that describe QoS
- A set of common QoS characterizations (CPU, network, memory); pre-defined characterizations may be extended using the high-level elements
- A facility for plugging in platform-specific mappings to low-level enforcement mechanisms, initially mapping to QoSPML (e.g., OS_Type: VxWorks, OS_Sched: FIFO, OS_Prio: +10, Net_Link: 10mbps, Sec_Level: CLASSIFIED)
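A sketch of the two-level idea: a platform-independent QoS characterization on top and a pluggable mapping to platform-specific settings below. The struct fields and the mapToVxWorks() helper are illustrative assumptions modeled on the slide's example keys (OS_Type, OS_Sched, OS_Prio, Net_Link, Sec_Level); they are not the actual QoSPML schema.

#include <iostream>
#include <map>
#include <string>

// Platform-independent characterization, as a modeler would specify it.
struct QoSCharacterization {
  int relativePriority = 0;           // e.g., +10 for a critical component
  double linkBandwidthMbps = 0.0;     // bandwidth reserved along a critical link
  std::string securityLevel = "UNCLASSIFIED";
};

// One pluggable platform mapping; mappings for other platforms would follow
// the same shape and be selected when descriptors are generated.
std::map<std::string, std::string> mapToVxWorks(const QoSCharacterization& q) {
  return {
      {"OS_Type",   "VxWorks"},
      {"OS_Sched",  "FIFO"},
      {"OS_Prio",   (q.relativePriority >= 0 ? "+" : "") + std::to_string(q.relativePriority)},
      {"Net_Link",  std::to_string(static_cast<int>(q.linkBandwidthMbps)) + "mbps"},
      {"Sec_Level", q.securityLevel},
  };
}

int main() {
  QoSCharacterization critical{+10, 10.0, "CLASSIFIED"};
  for (const auto& [key, value] : mapToVxWorks(critical))
    std::cout << key << ": " << value << '\n';   // platform-specific settings
}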
26
QoS Modeling Benefits
- Raises the level of abstraction: modelers need not be familiar with platform-specific QoS mechanisms
- Avoids tying the model to a particular platform/technology, enabling concurrent deployment across multiple middleware technologies
- Enables enforcement of decisions made during earlier phases of model analysis, e.g., raise the priority of critical components, reserve CPU/bandwidth along the critical path
27
Emulation of Target System Behavior with CUTS
The Component Workload Emulator (CoWorkEr) Utilization Test Suite (CUTS) consists of a test network of CoWorkEr nodes.
Outside the test network is a Benchmark Data Collector (BDC) for collecting test metrics.
CoWorkErs emulate application components:
- Emulate CPU operations, database operations, memory (de)allocations, & background workloads
- Participate in component interactions
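A minimal sketch of what a CoWorkEr-style workload step might do when emulating CPU operations and memory (de)allocations; the WorkloadStep struct, the loop counts, and the emulate() helper are hypothetical stand-ins for the real CUTS workload generators.

#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical description of one step of emulated work.
struct WorkloadStep {
  long cpuOps = 0;       // number of busy-loop operations to burn
  long memAllocKB = 0;   // memory to allocate, touch, then release
};

// Emulate the step and return how long it took.
std::chrono::microseconds emulate(const WorkloadStep& step) {
  auto start = std::chrono::steady_clock::now();

  volatile long sink = 0;                               // keep the loop from being optimized away
  for (long i = 0; i < step.cpuOps; ++i) sink = sink + i;

  std::vector<char> block(step.memAllocKB * 1024, 1);   // allocate & touch memory
  sink = sink + (block.empty() ? 0 : block.back());     // deallocation happens at scope exit

  auto end = std::chrono::steady_clock::now();
  return std::chrono::duration_cast<std::chrono::microseconds>(end - start);
}

int main() {
  WorkloadStep step{5000000, 256};                      // 5M CPU ops, 256 KB allocation
  std::cout << "emulated step took " << emulate(step).count() << " us\n";
}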
28
Defining Behavior of System Components
- The Workload Modeling Language (WML) is used to define CoWorkEr behavior, e.g., a series of actions to complete
- Properties associated with each element define the amount of work to perform, e.g., number of CPU operations, database transactions, & memory allocations
- WML is translated to XML metadata descriptors
- The behavior file is assigned using an attribute in CoWorkEr models
29
Measuring Performance of System Components
- All data is collected by the Benchmark Data Collector (BDC) & stored in a database for offline evaluation
- Critical paths can be selected & evaluated to determine if end-to-end QoS deadlines are met
- Time to transmit an event is measured
- Time to process an event is measured
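A sketch of the two measurements named above, event transmission time and event processing time, checked against an end-to-end deadline; the timestamps, sleeps, 50 ms deadline, and HopRecord layout are illustrative assumptions, not the BDC's actual schema.

#include <chrono>
#include <iostream>
#include <thread>

using Clock = std::chrono::steady_clock;

// One hypothetical record per event hop, as a collector might store it.
struct HopRecord {
  std::chrono::microseconds transmit;   // time on the wire (send -> receive)
  std::chrono::microseconds process;    // time inside the receiving component
};

int main() {
  auto sent = Clock::now();                                   // sender timestamps the event
  std::this_thread::sleep_for(std::chrono::milliseconds(3));  // stand-in for network delay
  auto received = Clock::now();                               // receiver timestamps arrival

  std::this_thread::sleep_for(std::chrono::milliseconds(7));  // stand-in for processing
  auto done = Clock::now();

  HopRecord rec{
      std::chrono::duration_cast<std::chrono::microseconds>(received - sent),
      std::chrono::duration_cast<std::chrono::microseconds>(done - received)};

  auto endToEnd = rec.transmit + rec.process;
  auto deadline = std::chrono::milliseconds(50);              // assumed path deadline
  std::cout << "transmit: " << rec.transmit.count() << " us, "
            << "process: " << rec.process.count() << " us, "
            << (endToEnd <= deadline ? "deadline met" : "deadline missed") << '\n';
}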
30
CUTS Workflow
1. Use PICML to define system artifacts and generate deployment metadata
2. Use WML to define component behavior
3. Deploy the system using DAnCE & evaluate system behavior & performance with the CUTS performance analysis & visualization tools
4. Use feedback from monitoring application performance to modify CoWorkEr behavior & change the QoS specification
5. Use PICML to redefine deployment strategies & DAnCE to redeploy the system
31
Current Status of CUTS
- Application strings can be composed by connecting multiple CoWorkEr components to each other, which is done at deployment time
- Events transmitted between CoWorkEr components carry a data payload
- Combinations of events can be specified, which act as workload conditions/guards (i.e., workload X may require Events A & B to begin processing; see the sketch below)
- Background workloads can be created in a CoWorkEr that are triggered periodically & at a user-defined probability & rate, which can emulate non-deterministic, or fault, behavior
- All performance data is pushed to the Benchmark Data Collector & stored in a database for offline data analysis
- End-to-end deadlines are analyzed along a critical path, instead of component-to-component deadlines
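A small sketch of the event-combination guard described above, where a workload starts processing only after all required event types have been observed; the WorkloadGuard class and event names are hypothetical.

#include <iostream>
#include <set>
#include <string>
#include <utility>

// Hypothetical guard: processing may begin only once every required event
// type has been observed at least once.
class WorkloadGuard {
 public:
  explicit WorkloadGuard(std::set<std::string> required) : required_(std::move(required)) {}

  // Record an arriving event; returns true when the guard is satisfied.
  bool onEvent(const std::string& eventType) {
    seen_.insert(eventType);
    for (const auto& r : required_)
      if (seen_.count(r) == 0) return false;
    return true;
  }

 private:
  std::set<std::string> required_;
  std::set<std::string> seen_;
};

int main() {
  WorkloadGuard guard({"EventA", "EventB"});
  std::cout << guard.onEvent("EventA") << '\n';  // 0: still waiting for EventB
  std::cout << guard.onEvent("EventB") << '\n';  // 1: both seen, workload may begin
}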
32
Assumptions & Required Extensions to CUTS Architecture
Assumptions:
- All CoWorkErs are in the "critical path" of an operational string
- Pull model for data acquisition from the individual CoWorkErs that make up the operational string
- One end-to-end data collector per operational string
- No logging of performance information – no database
Required extensions:
- Redefine CoWorkEr behavior to include user-defined workers that operate on the payload as well as generate payload
- Implement the pull model for data acquisition by the BDC
- Perform online end-to-end data analysis
- Modify CoWorkEr resource utilization: one thread per event type, or multiple threads per event type
33
EASEL Benefits
- More effective reuse of distributable components on emerging & legacy platforms
- Organizations can maintain coherent & correct coupling of design choices across the iterative software lifecycle
- Reduced cost & time-to-market by alleviating numerous integration problems that arise today, e.g., inherent & accidental complexities in developing large-scale software in large multi-organization & iterative development processes with legacy system integration & reuse
- Improved performance through quantitative evaluation of the distributed system design-time & run-time space
- Improved quality & reliability through enhanced understanding of the concurrent execution behavior of distributed components in integrated systems early in the lifecycle
MDD tools will decrease software defect rates, enhance performance, & automate tracing from logical design artifacts to deployed software.
34
EASEL Roadmap
- Enrich the model of the distributed deployment & execution architecture
- Additional MDD tools for deployment & configuration
- Enhance CUTS modeling & analysis capabilities, e.g., specification of performance properties, constraints on resource availability & usage, & specification of adaptive behavior for dynamic performance evaluation
- Broaden support for multi-platform run-time analysis, e.g., Windows XP, Linux(es), & Solaris operating systems
- Add support for EJB & .NET, e.g., enable applications to be written in C++, C#, or Java
- Add support for other COTS MDD tools as available, e.g., Eclipse Graphical Modeling Framework (GMF) & Microsoft Software Factory DSL tools
- Demonstrate MDD tools on an IS&S-related application & code base
35
Questions?