Presentation on theme: "1 Key design time challenges Convert commander’s intent, along with static/dynamic environment, into QoS policies Quantitatively evaluate & explore complex."— Presentation transcript:

1 Pollux & RACE R&D Challenges: Design Time

Key design-time challenges:
– Convert commander's intent, along with static/dynamic environment, into QoS policies
– Quantitatively evaluate & explore complex & dynamic QoS problem & solution spaces to evolve effective solutions
– Assure QoS in the face of interactive and/or autonomous adaptation to a fluid environment

Goal: Significantly ease the task of creating new QoS-enabled information management TSoS & integrating them with existing artifacts in new/larger contexts/constraints.

(Slide figure: Configuration Specification → Analysis Tool → Artifact Generator → Code, illustrated with a generated servlet-session code fragment:)

```java
if (inactiveInterval != -1) {
    int thisInterval =
        (int) (System.currentTimeMillis() - lastAccessed) / 1000;
    if (thisInterval > inactiveInterval) {
        invalidate();
        ServerSessionManager ssm = ServerSessionManager.getManager();
        ssm.removeSession(this);
    }
}

private long lastAccessedTime = creationTime;

/**
 * Return the last time the client sent a request associated with this
 * session, as the number of milliseconds since midnight, January 1, 1970
 * GMT.  Actions that your application takes, such as getting or setting
 * a value associated with the session, do not affect the access time.
 */
public long getLastAccessedTime() {
    return (this.lastAccessedTime);
}
```

2 Pollux & RACE R&D Challenges: Run Time

Key run-time challenges:
– Convert commander's intent, along with static/dynamic environment, into QoS policies
– Enforce integrated QoS policies at all layers (e.g., application, middleware, OS, transport, network) to support COIs within multiple domains
– Manage resources in the face of intermittent communication connectivity, e.g., power, mission, environments, silence/chatter
– Compensate for limited resources in tactical environments, e.g., bandwidth, compute cycles, primary/secondary storage

Goal: Regulating & adapting to (dis)continuous changes in difficult runtime environments

3 Resource Allocation & Control Engine (RACE)

Resource management framework atop CORBA Component Model (CCM) middleware (CIAO/DAnCE)

Motivating applications:
– NASA's Magnetospheric Multi-scale (MMS) mission: spacecraft constellation; adaptation to varying regions of interest (ROI) & modes of operation
– Total Ship Computing Environment (TSCE): ~1000 nodes, ~5000 applications; task (re)distribution; switching modes of operation; adaptation to loss of resources & changing task priorities

4 Performance Evaluation of RACE

Overhead of the RACE framework:
– Monitoring overhead: 37.97 microseconds
– Control overhead: 799.82 nanoseconds

(Slide figures: baseline system performance without RACE vs. system performance with RACE)

5 RACE MDD Tools: Design Time Challenges

Carry out commander's intent by
– focusing on generic functionality in a Platform Independent Model (PIM)
– using a transformation engine to generate a detailed Platform Specific Model (PSM)

(Slide figure: Platform Independent Real-Time Policies Model → Platform Specific (CCM) Real-Time Policies Model)

6 RACE MDD Tools: Design Time Challenges (cont.)

Carry out commander's intent by
– focusing on generic functionality in a Platform Independent Model (PIM)
– using a transformation engine to generate a detailed Platform Specific Model (PSM)

Explore the problem & solution space by
– easily modifying the visual model
– passing generated artifacts to the Bogor model checker
– getting all possible valid & invalid states

7 RACE MDD Tools: Design Time Challenges (cont.)

Carry out commander's intent by
– focusing on generic functionality in a Platform Independent Model (PIM)
– using a transformation engine to generate a detailed Platform Specific Model (PSM)

Explore the problem & solution space by
– easily modifying the visual model
– passing generated artifacts to the Bogor model checker (with RT extensions)
– getting all possible valid & invalid states

Assure QoS by
– performing safety, validity & behavioral checks
– passing the set of valid states to the RACE middleware

(Slide figure: distributed application with RT config description, system states SS1–SS5, components A/B/C)

8 RACE Middleware: Run Time Challenges

Carry out commander's intent by executing the deployment plan from the mission planner

Enforce QoS policies (at the middleware level) with:
– multiple, pluggable algorithms for allocation & control
– automation of deployment & configuration (D&C) with uniform interfaces

Manage resources by monitoring & adapting component resource allocation

Compensate for limited resources by
– migrating/swapping components
– adjusting application parameters (QoS settings)
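The "pluggable algorithms for allocation & control" idea above can be sketched as an allocator interface with interchangeable implementations. This is only an illustration: the class and method names are hypothetical, not RACE's actual CCM-based interfaces.

```python
# Sketch of pluggable resource allocation (hypothetical interface names;
# RACE's real allocators are CCM components, not plain Python classes).
class Allocator:
    def allocate(self, components, nodes):
        raise NotImplementedError

class BinPackingAllocator(Allocator):
    """First-fit decreasing by CPU demand."""
    def allocate(self, components, nodes):
        # components: {name: cpu_demand}; nodes: {name: cpu_capacity}
        placement, free = {}, dict(nodes)
        for comp, demand in sorted(components.items(), key=lambda kv: -kv[1]):
            node = next((n for n, cap in free.items() if cap >= demand), None)
            if node is None:
                raise RuntimeError(f"no capacity for {comp}")
            placement[comp] = node
            free[node] -= demand
        return placement

# Swapping in a different Allocator subclass changes the control strategy
# without touching the deployment machinery that calls it.
race_allocator: Allocator = BinPackingAllocator()
plan = race_allocator.allocate({"sensor": 0.5, "planner": 0.4, "gui": 0.3},
                               {"node1": 0.8, "node2": 0.6})
print(plan)
```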

9 Applying RACE to DDS-Based DRE Systems

All DRE systems have architectural features in common, so adapting RACE to DDS-based DRE systems won't require major modifications

DDS will make some of RACE's tasks simpler:
– QoS validation & matching
– QoS enforcement

10 DDS Implementation Architectures

Decentralized architecture:
– embedded threads to handle communication, reliability, QoS, etc.

(Slide figure: nodes communicating directly over the network)

11 DDS Implementation Architectures (cont.)

Decentralized architecture:
– embedded threads to handle communication, reliability, QoS, etc.

Federated architecture:
– a separate daemon process to handle communication, reliability, QoS, etc.

(Slide figure: nodes with per-node daemons communicating over the network)

12 DDS Implementation Architectures (cont.)

Decentralized architecture:
– embedded threads to handle communication, reliability, QoS, etc.

Federated architecture:
– a separate daemon process to handle communication, reliability, QoS, etc.

Centralized architecture:
– one single daemon process for the domain

(Slide figure: the three topologies, showing control & data flows over the network)

13 Pub/Sub Benchmarking Lessons Learned

DDS is significantly faster than other pub/sub architectures; even the slowest DDS implementation was 2x faster than other pub/sub services

DDS scales better to larger payloads, especially for simple data types

14 Pub/Sub Benchmarking Lessons Learned (cont.)

DDS is significantly faster than other pub/sub architectures; even the slowest DDS implementation was 2x faster than other pub/sub services

DDS scales better to larger payloads, especially for simple data types

DDS implementations are optimized for different use cases & design spaces (evaluated dimensions: payload size, number of subscribers, collocation)

http://www.dre.vanderbilt.edu/DDS/DDS_RTWS06.pdf

15 Configuration Aspect Problems

Middleware developers:
– documentation & capability synchronization
– semantic constraints & QoS evaluation of specific configurations

Application developers:
– must understand middleware constraints & semantics, which increases accidental complexity
– different middleware uses different configuration mechanisms

CIAO/CCM provides ~500 configuration options; DDS has 21 interrelated QoS policies

(Slide figure: XML configuration files & XML property files)

16 QoS Policies Supported by DDS

DCPS entities (e.g., topics, data readers/writers) are configurable via QoS policies, tailored to data distribution in tactical information systems. Request/offered compatibility and consistency are checked by DDS at run time.

– DEADLINE: establishes a contract regarding the rate at which periodic data is refreshed
– LATENCY_BUDGET: establishes guidelines for acceptable end-to-end delays
– TIME_BASED_FILTER: mediates exchanges between slow consumers & fast producers
– RESOURCE_LIMITS: controls resources utilized by the service
– RELIABILITY (BEST_EFFORT, RELIABLE): enables use of real-time transports for data
– HISTORY (KEEP_LAST, KEEP_ALL): controls which (of multiple) data values are delivered
– DURABILITY (VOLATILE, TRANSIENT, PERSISTENT): determines whether data outlives the time when it was written
– … and 15 more …

These policies have implications for trustworthiness.

17 DDS QoS Policies

Interactions of QoS policies have implications for:

Consistency/validity: e.g., a DataReader whose DEADLINE period (10ms) is less than its TIME_BASED_FILTER minimum separation (15ms). Will the settings be consistent, or will the QoS settings need updating?

Compatibility/connectivity: e.g., best-effort communication offered (by a DataWriter) while reliable communication is requested (by a DataReader). Will data flow, or will the QoS settings need updating?

(Slide figure: a DataWriter & DataReaders on a topic with Durability Volatile/Transient, Reliability Best Effort/Reliable, Deadline 10ms/20ms, Liveliness Manual-By-Topic/Automatic, and Time-Based Filter 15ms settings)
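The two kinds of checks this slide illustrates reduce to simple predicates. The sketch below is hypothetical code, not the DDS or DQML API; it only encodes the two rules named above (deadline period vs. time-based filter separation, and request/offered reliability strength).

```python
# Sketch of DQML-style design-time checks (helper names are hypothetical).

RELIABILITY_ORDER = {"BEST_EFFORT": 0, "RELIABLE": 1}

def consistent(deadline_period_ms, time_based_filter_ms):
    """Consistency check for a single DataReader: the DEADLINE period
    must not be shorter than the TIME_BASED_FILTER minimum separation."""
    return deadline_period_ms >= time_based_filter_ms

def compatible(offered, requested):
    """Request/offered compatibility: a DataWriter must offer a
    reliability level at least as strong as the DataReader requests."""
    return RELIABILITY_ORDER[offered] >= RELIABILITY_ORDER[requested]

# The slide's examples: a 10ms deadline with a 15ms time-based filter is
# inconsistent; BEST_EFFORT offered with RELIABLE requested is incompatible.
print(consistent(10, 15))                      # False
print(compatible("BEST_EFFORT", "RELIABLE"))   # False
print(compatible("RELIABLE", "BEST_EFFORT"))   # True
```

Running such predicates over a whole model at design time, rather than discovering mismatches at run time, is the point of DQML's constraint checking.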

18 DDS Trustworthiness Needs (1/2)

Compatibility & consistency of QoS settings:
– data needs to flow as intended
– close software loopholes that might be maliciously exploited

Fixing at code time is untenable:
– implies long turnaround times (code, compile, run, check status, iterate)
– introduces accidental complexity

Fixing at run time is untenable:
– updating QoS settings on the fly introduces inherent complexity
– unacceptable for certain systems (e.g., real-time, mission-critical, provable properties)

The DDS QoS Modeling Language (DQML) models QoS configurations and allows checking at design/modeling time:
– supports quick & easy fixes by "sharing" QoS policies
– supports correct-by-construction configurations

19 DDS Trustworthiness Needs (2/2)

QoS configurations generated automatically:
– eliminate accidental complexities (close configuration loopholes for malicious exploitation)
– decouple configurations from application logic (refinement of configuration separate from refinement of code)

DQML generates QoS settings files for DDS applications:
– creates consistent configurations
– promotes separation of concerns (configuration changes unentangled with business-logic changes)
– increases confidence

20 Typical DDS Application Development

Business/application logic is typically mixed with QoS configuration code (e.g., DataWriter QoS configuration & DataWriter creation, Publisher QoS configuration & Publisher creation):
– accidental complexity
– obfuscation of configuration concerns

DQML decouples QoS configuration from business logic:
– facilitates configuration analysis
– reduces accidental complexity
– result: a higher-confidence DDS application
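The decoupling described above can be sketched as loading QoS settings from an external file, so that configuration edits (or regeneration by a tool) never touch the business logic. The file format and key names here are hypothetical, not DQML's actual output format.

```python
# Sketch of QoS/logic separation (hypothetical file format & helper names;
# DQML's real generated settings files are DDS-implementation-specific).
import json
import os
import tempfile

def load_qos(path):
    """Load QoS settings from a file; business logic only sees the result."""
    with open(path) as f:
        return json.load(f)

def create_datawriter(qos):
    """Stand-in for DDS entity creation: applies whatever QoS it is given."""
    return {"entity": "DataWriter", **qos}

# Configuration is edited here (or regenerated by a tool), not in code:
settings = {"reliability": "RELIABLE", "history": "KEEP_LAST", "depth": 1}
path = os.path.join(tempfile.mkdtemp(), "writer_qos.json")
with open(path, "w") as f:
    json.dump(settings, f)

writer = create_datawriter(load_qos(path))
print(writer["reliability"])   # RELIABLE
```

Changing `writer_qos.json` reconfigures the writer with no change to, or recompilation of, the application code.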

21 DQML Design Decisions

No abortive errors:
– user can ignore constraint errors
– useful for developing pieces of a distributed application
– initially focused on flexibility

QoS associations vs. containment:
– entities and QoS policies are associated via connections rather than containment
– provides flexibility & reusability
– eases resolution of constraint violations

22 Use Case: DDS Benchmark Environment (DBE)

Part of the Real-Time DDS Examination & Evaluation Project (RT-DEEP), developed by the DRE Group at ISIS: http://www.dre.vanderbilt.edu/DDS

DBE runs Perl scripts to deploy DataReaders and DataWriters onto nodes, passing QoS settings files (generated by hand)

Requirement: testing and evaluating non-trivial QoS configurations

23 DBE Interpreter

– Model the desired QoS policies via DQML
– Invoke the DBE interpreter, which generates one QoS settings file for each DBE DataReader and DataWriter to use
– Have DBE launch DataReaders and DataWriters with the generated QoS settings files

No manual intervention required

24 DQML Demonstration

– Create DDS entities, QoS policies, and connections
– Run constraint checking (consistency check, compatibility check); fix at design time
– Invoke the DBE interpreter to automatically generate QoS settings files

25 Future Work

Incorporate into larger-scale tool chains:
– e.g., Deployment and Configuration Engine (DAnCE) in the CoSMIC tool chain

Incorporate with TRUST trustworthy systems:
– combine QoS policies and patterns to provide higher-level services

Build on DDS patterns 1 (continuous data, state data, alarm/event data, hot-swap and failover, controlled data access, filtered by data content):
– fault-tolerance service (e.g., using ownership/ownership strength, durability policies, multiple readers and writers, hot-swap and failover pattern)
– security service (e.g., using time-based filter, liveliness policies, controlled data access pattern)
– real-time data service (e.g., using deadline, transport priority, latency budget policies, continuous data pattern)

1 Gordon Hunt, OMG Workshop Presentation, 10-13 July, 2006

26 MDD Solutions for Configuration Options

The Options Configuration Modeling Language (OCML) ensures semantic consistency of option configurations

OCML is used by:
– application developers to configure the middleware for a specific application
– middleware developers to design the configuration model

The OCML metamodel is platform-independent; OCML models are platform-specific. The configuration model validates the application model.

27 Applying OCML

Middleware developers specify:
– the configuration space
– constraints

OCML generates the configuration model

28 Applying OCML (cont.)

Middleware developers specify:
– the configuration space
– constraints

OCML generates the configuration model

Application developers provide a model of desired options & their values, e.g.:
– network resources
– concurrency & connection management strategies

29 Applying OCML (cont.)

Middleware developers specify:
– the configuration space
– constraints

OCML generates the configuration model

Application developers provide a model of desired options & their values, e.g.:
– network resources
– concurrency & connection management strategies

The OCML constraint checker flags incompatible options & then:
– synthesizes XML descriptors for middleware configuration
– generates documentation for middleware configuration
– validates the configurations
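The OCML flow just described (developers specify a space & constraints, users supply option values, the checker flags violations and then synthesizes an XML descriptor) can be sketched in a few lines. All option names, the constraint, and the XML layout below are hypothetical illustrations, not OCML's actual configuration space or descriptor format.

```python
# Sketch of an OCML-style check-then-generate flow (all names hypothetical).
import xml.etree.ElementTree as ET

# 1) Middleware developers specify the configuration space & constraints.
SPACE = {"concurrency": {"reactive", "thread-per-connection", "thread-pool"},
         "transport": {"tcp", "shmem"}}
CONSTRAINTS = [
    # Example constraint: shared memory requires collocated endpoints.
    lambda cfg: not (cfg["transport"] == "shmem" and not cfg["collocated"]),
]

# 2) Application developers provide desired option values; the checker
#    flags unknown values and constraint violations.
def validate(cfg):
    errors = [k for k in SPACE if cfg.get(k) not in SPACE[k]]
    errors += [f"constraint {i}" for i, c in enumerate(CONSTRAINTS)
               if not c(cfg)]
    return errors

# 3) A valid model is turned into an XML descriptor for the middleware.
def to_xml(cfg):
    root = ET.Element("middlewareConfig")
    for k, v in cfg.items():
        ET.SubElement(root, "option", name=k, value=str(v))
    return ET.tostring(root, encoding="unicode")

cfg = {"concurrency": "thread-pool", "transport": "tcp", "collocated": False}
if not validate(cfg):
    print(to_xml(cfg))
```

The key property mirrored here is that generation only happens after validation succeeds, so every emitted descriptor is correct by construction with respect to the declared constraints.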

30 Supporting DDS QoS Modeling With OCML

Integrate OCML with DRE system modeling languages:
– PICML: ORB/POA/container options; ports using DDS (proposed DDS-4-LWCCM spec)
– DDS-specific modeling language: DDS entities

More generation options:
– other config file formats (XML, C++)
– parameters for simulations
– code blocks

Enable association of option sets with system model elements (e.g., a DDS option set attached to a CIAO pub port)

31 Modeling QoS With Design Patterns

Continuous data pattern (constant updates, many-to-many, last value is best, seamless failover):
– Reliability = BEST_EFFORT
– Time-Based Filter = X
– use keys & multicast
– History = KEEP_LAST, 1
– Ownership = EXCLUSIVE
– Deadline = X

32 Modeling QoS With Design Patterns (cont.)

State information pattern (persistent data, occasional modifications, latest & greatest, must deliver, must process):
– Durability = PERSISTENT
– Lifespan = X
– Reliability = RELIABLE
– Publisher History = KEEP_ALL
– Subscriber History = KEEP_LAST, n

33 Modeling QoS With Design Patterns (cont.)

Alarms & events pattern (asynchronous, must deliver, authorized sender):
– Liveliness = MANUAL
– Reliability = RELIABLE
– Publisher History = KEEP_ALL
– Ownership = EXCLUSIVE
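The three pattern slides above amount to a lookup table from pattern name to a bundle of QoS settings, optionally overridden per topic. The sketch below encodes them that way; the pattern names follow the slides, but the dictionary layout and `qos_for` helper are hypothetical, not a real DDS or DQML API.

```python
# The three design patterns above as a QoS lookup table (a sketch;
# dict layout & helper name are hypothetical).
PATTERNS = {
    "continuous_data": {           # constant updates, last value is best
        "reliability": "BEST_EFFORT",
        "history": ("KEEP_LAST", 1),
        "ownership": "EXCLUSIVE",  # enables seamless failover
    },
    "state_information": {         # persistent data, must deliver
        "durability": "PERSISTENT",
        "reliability": "RELIABLE",
        "pub_history": "KEEP_ALL",
        "sub_history": ("KEEP_LAST", "n"),
    },
    "alarms_and_events": {         # asynchronous, authorized sender
        "liveliness": "MANUAL",
        "reliability": "RELIABLE",
        "pub_history": "KEEP_ALL",
        "ownership": "EXCLUSIVE",
    },
}

def qos_for(pattern, **overrides):
    """Start from a pattern's defaults, then apply per-topic overrides
    (the pattern's X / n placeholders become concrete values here)."""
    return {**PATTERNS[pattern], **overrides}

alarm_qos = qos_for("alarms_and_events", deadline_ms=100)
print(alarm_qos["reliability"])   # RELIABLE
```

Starting from a pattern and overriding only the placeholders is much less error-prone than picking each of the 22 DDS QoS policies individually.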

34 Pollux MDD Tools: Design Time Challenges

Carry out commander's intent by automated mapping of familiar scenarios to models

Assure QoS by
– explicit representation in the model
– automatic consistency checks

Explore the problem & solution space with
– an easily grokable/modifiable visual language
– multiple artifact generators

35 Pollux Performance Evaluation: Run Time Challenges

Carry out commander's intent by having DDS get the right information to the right place at the right time

Enforce QoS policies: built in to DDS implementations

Manage resources with:
– Resource Limits policy
– Time-Based Filter policy
– Lifespan policy
– History policy
– filter migration to the source

Compensate for limited resources by:
– leveraging mutable QoS policies
– detecting & acting on meta-events (built-in QoS policies)
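Of the resource-management policies listed above, TIME_BASED_FILTER is the easiest to illustrate: a reader sees at most one sample per minimum-separation interval, which throttles a fast producer for a slow or bandwidth-limited consumer. The sketch below is plain Python modeling that behavior, not a DDS API.

```python
# Sketch of TIME_BASED_FILTER semantics (plain-Python model, not DDS code):
# deliver a sample only if at least `min_separation` has elapsed since
# the previously delivered one.
def time_based_filter(samples, min_separation):
    """samples: list of (timestamp, value) in time order."""
    delivered, last = [], None
    for t, v in samples:
        if last is None or t - last >= min_separation:
            delivered.append((t, v))
            last = t
    return delivered

# A 10 Hz producer feeding a reader that only wants ~2 Hz:
samples = [(i / 10, i) for i in range(10)]        # t = 0.0 ... 0.9
print(time_based_filter(samples, 0.5))
```

This is how a single policy lets the middleware, rather than application code, compensate for limited bandwidth or compute cycles at the consumer.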

