Evaluating the Correctness and Effectiveness of a Middleware QoS Configuration Process in DRE Systems

Amogh Kavimandan, Anantha Narayanan, Aniruddha Gokhale, Gabor Karsai
Institute for Software Integrated Systems, Vanderbilt University, Nashville, Tennessee
a.gokhale@vanderbilt.edu
www.dre.vanderbilt.edu/~gokhale
Distributed Real-time and Embedded (DRE) Systems

DRE systems:
– Are highly dynamic, composed from diverse, complex sub-systems; large-scale
– Have strict network, computing resource, and Quality of Service (QoS) requirements at both the local (sub-system) and global (application) level
– Are increasingly built as component-based applications
– Exhibit heterogeneity with respect to execution platform
– Have QoS requirements that must be configured for individual QoS dimensions (e.g., fault tolerance, security) as well as for each execution platform
Automated Middleware QoS Configuration: Hard Challenges

– Realizing DRE system QoS requirements necessitates performing middleware QoS configuration
– DRE systems span varied application domains (e.g., shipboard computing environments (SCE), emergency response services (ERS)) and exhibit various domain-specific QoS requirements
– Need to express QoS requirements in a platform-independent manner – generalizing is hard
– Must deal with the plethora of configuration mechanisms of the hosting middleware platforms
– Need to bridge the gap between domain-specific requirements and configuration mechanisms
Automated Middleware QoS Configuration: Hard Challenges

– What vs. how – middleware platforms provide what is required to achieve system QoS, but not always how it can be achieved
– No centralized orchestrator to realize QoS from options that provide individual QoS control
– Performing the QoS configuration activity manually is non-trivial
  – Appropriate configuration mechanisms must be chosen in an application-specific manner, particularly for large applications
– Middleware does not prevent developers from choosing semantically invalid configurations to achieve QoS
– Lack of effective QoS configuration tools results in QoS policy misconfigurations that are hard to analyze & debug
Solution Approach: QUICKER

QUality of service pICKER (QUICKER):
– Domain-independent QoS modeling languages – express system QoS in terms of requirement semantics
  – Easier to model and evolve, less modeling effort
  – Shields developers from configuration semantics
– Automated translation using model transformation to generate system QoS configurations
  – Reusable, one-step translation
  – Encodes best practices in QoS mapping
Solution Approach: QUICKER

QUality of service pICKER (QUICKER) (contd.):
– Design-time verification of the generated QoS configurations using model checking
  – Resolves sub-system non-functional dependencies, verifies the correctness of configurations
– Verification of the correctness of the transformation algorithms using structural correspondence techniques
– Considerably faster system QoS design and evolution than the manual approach

We focus on the correctness & effectiveness of our QoS configuration process
Overview of QUICKER: Specifying QoS Requirements

Challenge 1. QoS requirements specification
– DRE developers are domain experts who understand domain-level issues; system QoS specification must be expressible at the same level of abstraction
– Large gap between what is required (by the application) and how it can be achieved (by the middleware platform)
– Configurations cannot be reused; difficult to scale to large-scale systems
Overview of QUICKER: Specifying QoS Requirements

– Application requirements are expressed as QUICKER QoS policy models
– QUICKER captures policies in a platform-independent manner
  – Specifying QoS is tantamount to answering questions about the application, rather than using low-level mechanisms (such as the type of publisher proxy collection, the event dispatching mechanism, etc.) to achieve QoS
– Representation at multiple levels of granularity, e.g., component- or assembly-level (see the sketch below)
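To make the idea of a platform-independent policy model more concrete, the sketch below shows one way such a model could be represented in code. It is a hypothetical illustration only; the class and field names (QoSPolicy, Criticality, ModelElement, and so on) are assumptions and do not correspond to QUICKER's actual modeling languages.

```python
# Hypothetical sketch of a platform-independent QoS policy model.
# Names and fields are illustrative; they are not QUICKER's metamodel.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class Criticality(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class QoSPolicy:
    """Answers domain-level questions about an application element
    instead of naming low-level middleware mechanisms."""
    name: str
    criticality: Criticality = Criticality.MEDIUM
    max_latency_ms: Optional[float] = None      # end-to-end deadline, if any
    invocation_rate_hz: Optional[float] = None  # expected request rate
    fault_tolerant: bool = False


@dataclass
class ModelElement:
    """A component or assembly; a policy can attach at either granularity."""
    name: str
    policy: Optional[QoSPolicy] = None
    children: List["ModelElement"] = field(default_factory=list)


# A policy attached at assembly level describes the whole assembly.
assembly = ModelElement(
    name="SensorAssembly",
    policy=QoSPolicy("sensor_qos", Criticality.HIGH, max_latency_ms=50.0,
                     invocation_rate_hz=20.0),
    children=[ModelElement("GPS"), ModelElement("NavDisplay")],
)
print(assembly.policy.criticality)  # Criticality.HIGH
```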
Overview of QUICKER: Specifying QoS Requirements

Benefits of QUICKER QoS policy modeling:
– QoS policy specifications can be containment-inherited and reused
  – A QoS policy is inherited by all contained objects
  – More than one connection can share QoS policies (see the sketch below)
– Scalable, flexible QoS policy models
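A minimal sketch of how containment-based inheritance could be resolved: a policy set on a container applies to everything it contains unless overridden closer to the element. The tree representation and function name are hypothetical, not QUICKER's implementation.

```python
# Hypothetical sketch of containment-based QoS policy inheritance.
# An element inherits the nearest enclosing policy unless it sets its own.

def effective_policy(element, parent_of, policies):
    """Walk up the containment hierarchy until a policy is found.

    element:   name of a component, connection, or assembly
    parent_of: dict mapping each element to its container (None at the root)
    policies:  dict mapping elements that explicitly set a policy to that policy
    """
    current = element
    while current is not None:
        if current in policies:
            return policies[current]
        current = parent_of.get(current)
    return None  # no policy anywhere along the containment path


# Example: a policy set on "Assembly1" is shared by both contained connections.
parent_of = {"ConnA": "Assembly1", "ConnB": "Assembly1", "Assembly1": None}
policies = {"Assembly1": {"priority_model": "SERVER_DECLARED"}}
assert effective_policy("ConnA", parent_of, policies) == policies["Assembly1"]
assert effective_policy("ConnB", parent_of, policies) == policies["Assembly1"]
```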
Overview of QUICKER: Realizing System QoS

Challenge 2. QoS realization
– Very large configuration space providing a high degree of flexibility and configurability (example options: iterator – COPY_ON_READ, COPY_ON_WRITE, DELAYED, IMMEDIATE; dispatching – REACTIVE, PRIORITY, MT; scheduling – NULL, PRIORITY; bands – low_prio, high_prio; fltrgrp – DISJUNCTION, CONJUNCTION, LOGICAL_AND; TPool – stacksize, lane_borrowing, request_buffering; lanes – static_thrds, dyna_thrds)
– Semantic compatibility of QoS configurations is enforced via low-level mechanisms – tedious, error-prone
– Must prune the middleware configuration space, instantiate the selected configuration set, and validate their values (see the sketch below)
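The kind of validation described above could look roughly like the sketch below: each option has a small set of legal values, and a candidate configuration is rejected if any selected value falls outside it. The option names and value sets are taken from the list on this slide; the checking code itself is an assumed illustration, not QUICKER's.

```python
# Illustrative validation of selected middleware configuration options
# against their legal value sets (option names from the slide above).

ALLOWED = {
    "iterator":    {"COPY_ON_READ", "COPY_ON_WRITE", "DELAYED", "IMMEDIATE"},
    "dispatching": {"REACTIVE", "PRIORITY", "MT"},
    "scheduling":  {"NULL", "PRIORITY"},
    "fltrgrp":     {"DISJUNCTION", "CONJUNCTION", "LOGICAL_AND"},
}


def validate(config):
    """Return the (option, value) pairs whose value is not legal."""
    errors = []
    for option, value in config.items():
        legal = ALLOWED.get(option)
        if legal is not None and value not in legal:
            errors.append((option, value))
    return errors


candidate = {"iterator": "COPY_ON_READ", "dispatching": "PRIORITY",
             "scheduling": "PRIORITY", "fltrgrp": "AND"}  # "AND" is invalid
print(validate(candidate))  # [('fltrgrp', 'AND')]
```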
Overview of QUICKER: Realizing System QoS

– Mapping application QoS policies onto configuration options using model transformations developed in GReAT
– Semantic translation algorithms are specified in terms of input & output languages
  – e.g., rules that translate multiple application service requests & service-level policies to corresponding QoS options (see the sketch below)
– The transformation output is itself a system model – allows for further analysis & translation
– Simplifies application development & enhances traceability

[Figure: multiple service requests and provider service levels (Level 1–3) mapped to a priority model policy and thread pool lanes]
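The figure's mapping of service requests and provider service levels onto a priority model policy and thread pool lanes could be sketched as a simple rule like the one below. This is not the GReAT transformation itself, only an assumed illustration of the shape of such an input-to-output rule; the SERVER_DECLARED choice and the per-level thread counts are assumptions.

```python
# Illustrative translation rule: service levels in the input model become
# one thread-pool lane per level plus a priority-model policy in the output.

def map_service_levels(service_levels, requests_per_level):
    """service_levels: e.g. ["Level 1", "Level 2", "Level 3"]
    requests_per_level: dict mapping a level to the number of client requests."""
    lanes = []
    for rank, level in enumerate(service_levels):
        lanes.append({
            "lane_priority": rank,                      # one lane per service level
            "static_threads": requests_per_level.get(level, 1),
        })
    return {
        "priority_model": "SERVER_DECLARED",  # assumed choice for illustration
        "thread_pool": {"lanes": lanes},
    }


config = map_service_levels(
    ["Level 1", "Level 2", "Level 3"],
    {"Level 1": 2, "Level 2": 4, "Level 3": 1},
)
print(config["thread_pool"]["lanes"])
```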
Challenge 2 Resolved: Realizing System QoS

– The algorithm shown (not reproduced in this transcript) is the RT-CCM QoS mapping, which uses application structural properties to automatically deduce configurations
  – Lines 7-16 show the thread resource allocation scheme
  – Line 27 shows the client-side QoS configurations
– The specified QoS policies are used for the remaining configurations
  – The service invocation profile is used to assign thread resources
  – Line 28 resolves the priority dependency of connected components
– The number of client components and interface operations is used to calculate the number of threads required (see the sketch below)
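Because the referenced algorithm listing is not reproduced here, the sketch below only illustrates the stated idea that the number of client components and interface operations drives the number of threads allocated. The sizing formula, the cap, and all names are assumptions, not the RT-CCM mapping from the paper.

```python
# Hypothetical thread-resource sizing based on application structure.
# This only illustrates "threads derived from clients x interface operations".

def threads_required(num_clients, ops_per_interface, max_threads=64):
    """Assume each client may concurrently invoke each operation once."""
    return min(num_clients * ops_per_interface, max_threads)


def allocate_lanes(clients_by_priority, ops_per_interface):
    """clients_by_priority: dict mapping priority -> number of client components."""
    return [
        {"priority": prio,
         "static_threads": threads_required(n_clients, ops_per_interface)}
        for prio, n_clients in sorted(clients_by_priority.items())
    ]


print(allocate_lanes({0: 3, 5: 1}, ops_per_interface=2))
# [{'priority': 0, 'static_threads': 6}, {'priority': 5, 'static_threads': 2}]
```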
Overview of QUICKER: Realizing System QoS

Challenge 3. Dependency resolution of QoS options across application sub-systems
– Configurations of connected components may exhibit a dependency relationship
  – e.g., the server-side priority model and the client-side priority bands must match
– Manually tracking dependencies between components is hard
Verifying the Generated QoS Configurations

[Figure: two CCM component servers – each with a container, component executors, component home, POA, component reference/context, facets, receptacles, and event sources/sinks – connected through the ORB, annotated with end-to-end priority propagation, thread pools, portable priorities, protocol properties, priority bands, and the priority model across Assembly 1 … Assembly n]

– Dependencies may span beyond “immediate neighbors”, e.g.:
  – along the application execution path
  – across components belonging to separate assemblies
– Empirically validating configuration changes slows down the development & QA process considerably
  – Several iterations before the desired QoS is achieved (if at all)
Verifying the Generated QoS Configurations

– Leverages the Bogor model checking framework
– A dependency structure maintained in Bogor is used to track dependencies between the QoS options of components, e.g.:
  – Analysis & Comm are connected
  – Gizmo & Comm are dependent
– Change(s) in the QoS options of dependent component(s) trigger detection of potential mismatches (see the sketch below)
  – e.g., for the dependency between the Gizmo invocation priority & the Comm lane priority, a mismatch is detected if either value changes
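A minimal sketch of this dependency-tracking idea: dependent QoS options of connected components are recorded, and a change to either side triggers a mismatch check. The dictionary-based structure and the simple equality check are assumptions for illustration; in QUICKER this bookkeeping is done inside Bogor.

```python
# Illustrative dependency tracking between QoS options of connected components.

options = {
    ("Gizmo", "invocation_priority"): 5,
    ("Comm",  "lane_priority"):       5,
}

# Each entry says: these two (component, option) pairs must agree.
dependencies = [
    (("Gizmo", "invocation_priority"), ("Comm", "lane_priority")),
]


def set_option(component, option, value):
    """Update an option and report any dependency that no longer holds."""
    options[(component, option)] = value
    mismatches = []
    for left, right in dependencies:
        if (component, option) in (left, right) and options[left] != options[right]:
            mismatches.append((left, right))
    return mismatches


print(set_option("Comm", "lane_priority", 7))
# [(('Gizmo', 'invocation_priority'), ('Comm', 'lane_priority'))]
```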
Verifying the Generated QoS Configurations

Representation of middleware QoS options in the Bogor model checker
– BIR extensions allow representing domain-level concepts in a system model
– QUICKER defines new BIR extensions for QoS options
  – Allows representing QoS options & domain entities directly in a Bogor input model
  – e.g., CCM components and Real-time CORBA lanes/bands are first-class Bogor data types
– Reduces the size of the system model by avoiding multiple low-level variables to represent domain concepts & QoS options
Verifying the Generated QoS Configurations

Representation of properties (that a system should satisfy) in Bogor
– BIR primitives define language constructs to access & manipulate the domain-level data types
  – Used to define rules that validate QoS options & check whether a property is satisfied (see the sketch below)

Automatic generation of the BIR of a DRE system from QUICKER-generated output models
– Model interpreters auto-generate the Bogor Input Representation of a system from its model
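The kind of validation rule described above, phrased over domain-level data rather than many low-level variables, could look like the sketch below. It is written in Python purely for illustration; the real rules are expressed with BIR primitives inside Bogor, whose syntax is not shown in this transcript, and the data layout here is assumed.

```python
# Illustrative property check over a generated configuration model:
# "every priority-banded connection must target a lane whose priority
# falls inside one of the declared bands."

def bands_cover_lanes(connection):
    lanes = connection["server_lanes"]   # e.g. [{"priority": 5}, ...]
    bands = connection["client_bands"]   # e.g. [{"low": 0, "high": 10}]
    return all(
        any(band["low"] <= lane["priority"] <= band["high"] for band in bands)
        for lane in lanes
    )


conn = {
    "server_lanes": [{"priority": 5}, {"priority": 20}],
    "client_bands": [{"low": 0, "high": 10}],
}
print(bands_cover_lanes(conn))  # False: the priority-20 lane is not covered
```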
Verifying the Correctness of QoS Mapping Algorithms

Verification framework:
– Specify correctness properties at the meta-level
– Add annotations for each instance (correspondence rules)
– Use the annotations to automatically verify whether the instances satisfy the correctness properties
– We do not attempt to prove the general correctness of the transformation itself

[Figure: source & target metamodels, source & target models, correctness specification, model transformation, correctness checker, annotations, and the resulting certificate]
Verifying the Correctness of QoS Mapping Algorithms

– Add cross links to identify corresponding elements
– Rules specify correspondence conditions for selected types
– At the end of the transformation, the instance models are checked to determine whether they satisfy all the correspondence conditions (see the sketch below)

[Figure: input model and output model connected by a crosslink and checked against the correspondence rules]
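The correspondence check could be sketched as follows: each crosslink pairs an input-model element with the output-model element it produced, and a rule per type states the condition that must hold across the pair. The example rule, element shapes, and function name are assumptions; the actual framework operates on the metamodels and annotations described above.

```python
# Illustrative structural-correspondence check: crosslinks pair source and
# target elements, and per-type rules are evaluated after the transformation.

def check_correspondence(crosslinks, rules):
    """crosslinks: list of (source_element, target_element) pairs of dicts.
    rules: dict mapping a source 'type' to a predicate over (source, target).
    Returns the crosslinks that violate their rule."""
    violations = []
    for source, target in crosslinks:
        rule = rules.get(source["type"])
        if rule is not None and not rule(source, target):
            violations.append((source, target))
    return violations


# Example rule (assumed): a source policy marked HIGH criticality must map to
# a target configuration that uses the PRIORITY dispatching option.
rules = {
    "QoSPolicy": lambda s, t: s["criticality"] != "HIGH"
                              or t["dispatching"] == "PRIORITY",
}
crosslinks = [
    ({"type": "QoSPolicy", "criticality": "HIGH"}, {"dispatching": "REACTIVE"}),
]
print(check_correspondence(crosslinks, rules))  # reports the violating pair
```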
DRE System Case Study

– Basic single processor (BasicSP) scenario
– Components use an event-based communication paradigm
– Position is updated periodically at 20 Hz
– GPS generates data that is ultimately consumed by NavDisplay in an event-push, data-pull fashion (see the sketch below)
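As a rough illustration of the event-push, data-pull interaction at 20 Hz described above: a GPS-like source pushes a lightweight "data available" notification each period, and the display pulls the actual reading only when notified. The component and method names are invented for this sketch; they are not BasicSP's actual interfaces.

```python
# Illustrative event-push, data-pull interaction at a 20 Hz period.
import time


class GpsSource:
    def __init__(self):
        self._position = (0.0, 0.0)

    def refresh(self, tick):
        self._position = (tick * 0.1, tick * 0.2)  # stand-in for a real fix

    def get_position(self):                        # consumers pull the data
        return self._position


class NavDisplay:
    def __init__(self, source):
        self._source = source

    def on_data_available(self):                   # reacts to the pushed event
        print("display:", self._source.get_position())


gps = GpsSource()
display = NavDisplay(gps)
PERIOD_S = 1.0 / 20.0                              # 20 Hz update rate

for tick in range(3):                              # a few iterations for brevity
    gps.refresh(tick)
    display.on_data_available()                    # event push -> data pull
    time.sleep(PERIOD_S)
```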
Empirically Evaluating QoS Configurations

– Evaluation conducted on ISISlab
– Each node had 2.8 GHz dual Intel Xeon processors, 1 GB of physical memory, a 1 Gbps network interface, and a 40 GB hard disk
– Used the CIAO version 0.6 middleware platform
– Applied QUICKER to BasicSP to generate its configurations, which were used in our evaluations
Empirically Evaluating QoS Configurations

– Average latency ≈ 1925 μs
– Variation in the standard deviation was quite small
Concluding Remarks

– We discussed verification of the correctness of QUICKER's QoS configuration process
– The QUICKER toolchain provides:
  – QoS requirements modeling languages
  – QoS mapping algorithms for mapping requirements to middleware QoS options
– Verified the generated QoS configurations through model checking
– Verified the correctness of the QoS mapping algorithms through structural correspondence
– Empirically validated the configurations by applying the QUICKER process to a representative DRE system case study

QUICKER can be downloaded from www.dre.vanderbilt.edu/CoSMIC/
Questions?