1
Agent-Based Acceptability-Oriented Computing International Symposium on Software Reliability Engineering Fast Abstract by Shana Hyvat
2
Reliability vs. Functionality As software becomes more complex, ensuring its reliability becomes more challenging. Increasing functionality increases the potential for errors and for the complications that may arise from those errors.
3
Proposed Solutions ► Rinard’s [2003] Acceptability-Oriented Computing. Goal: to achieve flexibility in programming while ensuring the system runs reliably.
4
Correct vs. Acceptable Behavior As systems become more complex, it becomes unrealistic to presume “correct” functionality. Maintaining a system with “acceptable” functionality is more realistic.
5
Acceptable Behavior The program designer must specify the functionality. States of acceptable behavior must be identified. Example: a particular error does not lead to a crash but to a controlled stop.
6
Rinard’s Architecture The core specifies the functionality. It is intended to completely specify both the behavior of the system and the structure required to implement that behavior. It remains unreliable by itself.
7
Rinard’s Architecture The outer layers enforce acceptable system behavior and structure properties, identify impending violations of the desired acceptability properties, and restore and maintain program behavior within the acceptability envelope.
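A minimal Python sketch of this idea, with an invented state shape and an invented acceptability property (the abstract does not give concrete code): the outer layer keeps a restorable copy of the state, lets the core act, and restores the state if the acceptability property is about to be violated.

# Hypothetical outer-layer sketch; the "buffer never exceeds its bound"
# property and the state dictionary are invented for illustration.

def acceptable(state):
    # Acceptability property: the buffer never grows past its bound.
    return len(state["buffer"]) <= state["bound"]

def outer_layer(core_step, state):
    # Keep a restorable copy before letting the (unreliable) core act.
    previous = {"buffer": list(state["buffer"]), "bound": state["bound"]}
    core_step(state)                      # the core specifies the functionality
    if not acceptable(state):             # impending violation detected
        state.clear()
        state.update(previous)            # restore behavior within the envelope
    return state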
8
Enforcement Resilient Approach: takes actions to restore the system to an acceptable state. Example: memory is full, so release old data. Safe Exit Approach: allows a program to stop before executing improperly. These approaches are specific to each system and depend on what the system designer decides is acceptable.
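As a hedged illustration of the two approaches (the Cache class and its capacity are invented for this sketch, not taken from the abstract):

class Cache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.items = []

    def full(self):
        return len(self.items) >= self.capacity

def resilient_store(cache, item):
    # Resilient approach: memory is full, so release old data and continue.
    if cache.full():
        cache.items.pop(0)                # evict the oldest entry
    cache.items.append(item)

def safe_exit_store(cache, item):
    # Safe-exit approach: stop before executing improperly.
    if cache.full():
        raise SystemExit("stopping safely: the store would exceed capacity")
    cache.items.append(item)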
9
Components Components are modules within the outer layers that monitor, correct, and record errors during execution. We introduce intelligent components in the form of “monitor-agents” that perform similarly to Rinard’s components.
10
Properties of Agents Autonomous/independent. Reactive to their environment. Pro-active, working towards a goal. Social ability that allows them to communicate with other agents (which may be human or software entities).
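One way to picture these four properties (a sketch with assumed method names; the abstract does not define such an interface) is as the surface of a monitor-agent:

from abc import ABC, abstractmethod

class Agent(ABC):
    @abstractmethod
    def perceive(self, event):
        """Reactive: observe errors and changes in the environment."""

    @abstractmethod
    def act(self):
        """Autonomous/independent: decide and act without outside intervention."""

    @abstractmethod
    def pursue_goal(self):
        """Pro-active: work towards keeping the core within acceptable behavior."""

    @abstractmethod
    def communicate(self, message, recipient):
        """Social ability: exchange messages with other agents, human or software."""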
11
It is easy to see how Rinard’s concept matches the properties of agents.
12
Autonomy Re-engineering an entire system for greater reliability may not be practical in many instances, but amending a system with a separate, autonomous component, such as an agent, can prove to be a more viable solution.
13
Reactive When an error occurs, the agent will be able to detect it and choose a solution for repair or for a safe exit. In addition, its behavior will be “intelligent”; pattern recognition will be a characteristic of this intelligence.
14
Pro-active The pro-active, goal-driven behavior of an agent will be to acquire intelligence through learning in the form of pattern recognition. Neural networks have been shown to be viable in the field of pattern recognition.
15
Pattern Recognition ► In Rinard’s method, errors are logged for the system designer’s use. ► We give the monitor-agent the ability to log errors as well as recognize patterns in them. ► It can recognize a sequence of errors that always emerges when a particular resilient approach is taken and, to avoid that sequence, may choose a different resilient approach to an error.
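A small sketch of what such logging and sequence detection could look like (the fixed-length window and the recurrence threshold are assumptions made for illustration, not part of the abstract):

from collections import Counter

class ErrorLog:
    def __init__(self, window=3):
        self.entries = []                 # (error_kind, resilient_approach) pairs
        self.window = window
        self.sequence_counts = Counter()

    def record(self, error_kind, approach):
        # Log the error, as in Rinard's method, tagged with the approach taken.
        self.entries.append((error_kind, approach))
        if len(self.entries) >= self.window:
            recent = tuple(self.entries[-self.window:])
            self.sequence_counts[recent] += 1

    def recurring(self, threshold=2):
        # Sequences that keep reappearing suggest the current resilient
        # approach should be avoided in favor of a different one.
        return [seq for seq, count in self.sequence_counts.items() if count >= threshold]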
16
Interaction “interaction is the single most important characteristic in complex systems” - Wooldridge and Ciancarini
17
Errors in Interactions A request to access non-existent memory or a simple spelling error are examples. We utilize the social ability of the agent to “translate”, or repair, messages sent to the core from the system’s environment. The translation is from an input that could lead to an error-prone execution to one that will lead to an acceptable execution.
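A hedged sketch of this translation step; the command list, the memory size, and the use of difflib for spelling repair are all choices made here for illustration:

import difflib

MEMORY_SIZE = 1024
KNOWN_COMMANDS = ["read", "write", "delete"]

def translate(command, address):
    # Repair a simple spelling error by matching against known commands.
    if command not in KNOWN_COMMANDS:
        matches = difflib.get_close_matches(command, KNOWN_COMMANDS, n=1)
        if matches:
            command = matches[0]
    # Clamp a request for non-existent memory into the valid address range.
    address = max(0, min(address, MEMORY_SIZE - 1))
    return command, address

# Example: translate("wrte", 5000) yields ("write", 1023), an acceptable input.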
18
Designing the System The Gaia method [Wooldridge and Ciancarini] is a method for designing agent-based systems. It focuses on the problem-solving nature of agents and organizes how agents communicate with each other and with their world.
19
Gaia Analysis Process Define roles in the system: ► detect an error when it occurs ► log the error ► learn from the error ► choose a resilient solution or a safe exit solution
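For illustration, these roles might map onto a single monitor-agent as follows (method names and the choice rule are assumed for this sketch, not prescribed by Gaia or the abstract):

class MonitorAgent:
    def __init__(self):
        self.log = []

    def detect(self, execution_state):
        # Role: detect an error when it occurs.
        return execution_state.get("error")

    def record(self, error):
        # Role: log the error for the designer and for learning.
        self.log.append(error)

    def learn(self):
        # Role: learn from the log, here simply the set of recurring errors.
        return {e for e in self.log if self.log.count(e) > 1}

    def respond(self, error):
        # Role: choose a resilient solution or a safe exit solution.
        return "resilient" if error in self.learn() else "safe_exit"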
20
Defining Gaia Models In addition to goals, we define models for the system. Since we are using a single-agent system, there is one model: the services model, which is the interface to the core.
21
Services Model The services model is a separate component from the monitor-agent, and it will be independent of any particular core. In this way we can create a monitor-agent that can be added to any existing system. We refer to any existing system as an unreliable core, given the assumption that we are augmenting the system because of reliability issues.
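A sketch of how this decoupling could look, assuming the core is wrapped behind a small, generic interface (the Protocol methods here are illustrative, not specified in the abstract):

from typing import Protocol

class CoreServices(Protocol):
    """The only view of the (unreliable) core that the monitor-agent needs."""
    def execute(self, request): ...
    def snapshot(self): ...
    def restore(self, snapshot): ...

class MonitorAgent:
    def __init__(self, core: CoreServices):
        self.core = core                  # any existing system behind the services model

    def run(self, request):
        saved = self.core.snapshot()
        try:
            return self.core.execute(request)
        except Exception:
            self.core.restore(saved)      # resilient approach: return to an acceptable state
            return None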