Introduction to IRRIIS testing platform IRRIIS MIT Conference ROME 8 February 2007 Claudio Balducelli.


IRRIIS Summary
- Design a testing environment for MIT
- Modelling and running attack and fault behaviours
- Testing strategies for MIT components
- Proposed test-bed configuration
- Conclusions

IRRIIS Design a testing environment for MIT
- Models of the target infrastructures
- Vulnerabilities of the target infrastructures
- Fault/attack scenario generation: models of faults & attacks that use domain knowledge and consider vulnerabilities

IRRIIS Meaning of attacks and faults
Attacks: a disturbance of the LCCI generated by events coming from outside the LCCI.
Faults: a disturbance of the LCCI generated by events coming from components that are part of the LCCI.

IRRIIS Meaning of attacks and faults
Attacks: natural disaster (earthquake, flood, etc.), premeditated terrorist attack, cyber attacks (cyber-intrusion), operator errors, ...
Faults: physical component failure (aging, stress, etc.), software component failure (bug, wrong installation, etc.), wrong component activation, ...

IRRIIS Normal behavior & fault behavior in SimCIP
Normal behavior consists of an initial state and a sequence of events, represented in the form of a Petri-net-oriented graph (diagram: activation events fire transitions t1 and t2, which start Comp. 1, Comp. 2 and Comp. 3 and lead to the end states).
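The initial-state-plus-events representation above can be sketched as a minimal Petri-net token game. This is an illustration only, assuming nothing about SimCIP internals: the PetriNet class is invented for the sketch, and the place/transition names are taken from the slide.

```python
from collections import Counter

class PetriNet:
    """Minimal Petri net: places hold tokens, a transition fires when all
    its input places are marked, consuming and producing tokens."""
    def __init__(self, initial_places):
        self.marking = Counter(initial_places)   # initial state
        self.transitions = {}                    # name -> (inputs, outputs)

    def add(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        if any(self.marking[p] < 1 for p in inputs):
            raise RuntimeError(f"transition {name} is not enabled")
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Normal behavior from the slide: an activation event starts Comp. 1 and
# Comp. 2; once both have started, Comp. 3 starts.
net = PetriNet(["Activation event"])
net.add("t1", ["Activation event"], ["Start Comp. 1", "Start Comp. 2"])
net.add("t2", ["Start Comp. 1", "Start Comp. 2"], ["Start Comp. 3"])
net.fire("t1")
net.fire("t2")
print(+net.marking)   # only 'Start Comp. 3' remains marked
```

A fault behavior would be sketched the same way, with failure and loss-of-service places instead of start events.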

IRRIIS Normal behavior & fault behavior in SimCIP
Fault behavior may be represented in a similar way (diagram: an initiating event fires transitions t1..t7, covering failures of Comp. 1 and Comp. 2 and a restart of Comp. 1, with fault events in LCCI-1 and LCCI-2 leading to the loss of Service 1 and Service 2).

IRRIIS Normal behavior & fault behavior in SimCIP
For a certain LCCI, normal behaviors are well known and their number is limited; the number and combinations of fault behaviors are very high and not always known in advance. How to design fault behaviors? How to select fault behaviors? A model based on attack/fault trees seems useful to formalise and manage the knowledge needed to generate attack/fault behaviours.

IRRIIS Modelling attack knowledge: attack/fault trees
The root of the tree (G0) represents an event that could significantly harm the infrastructure's mission. The terminal leaves (A1, A2, A3) represent the actions to execute to reach the high-level goal. Every node can be decomposed into lower-level nodes using AND or OR decomposition types. Every path in the attack tree represents a unique type of attack. Attack trees can also be visualised in textual form (Goal G0 AND A1 A2 A3; Goal G0 OR A1 A2 A3).

IRRIIS Modelling attack knowledge: attack/fault trees
The attack tree (G0, S1, S2, A2..A6) generates attack patterns (attack behaviors), composed of sequences of actions. The terminal leaves of the tree (A1..An) represent the action steps needed to execute the attack; the intermediate nodes (S1..Sn) represent the steps at which a decision has to be taken. The tree shown generates two attack patterns.
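The pattern-generation idea can be sketched as follows. This is an illustrative reading of attack trees, not IRRIIS code: the Node class, the patterns function and the example tree shape (goal G0 with decision steps S1, S2 and leaf actions A2..A6) are assumptions for the sketch. An OR node contributes one alternative per child; an AND node concatenates the patterns of all its children.

```python
from dataclasses import dataclass, field
from itertools import product
from typing import List

@dataclass
class Node:
    """A node of an attack/fault tree: a leaf action, or an AND/OR decomposition."""
    name: str
    gate: str = "LEAF"                  # "LEAF", "AND" or "OR"
    children: List["Node"] = field(default_factory=list)

def patterns(node):
    """Enumerate the attack patterns (sequences of leaf actions) the tree generates."""
    if node.gate == "LEAF":
        return [[node.name]]
    if node.gate == "OR":               # any single child reaches the (sub)goal
        return [p for child in node.children for p in patterns(child)]
    # AND: every child must be satisfied; concatenate one pattern per child
    combos = product(*(patterns(c) for c in node.children))
    return [[a for part in combo for a in part] for combo in combos]

# Hypothetical tree in the spirit of the slide: goal G0 requires both decision
# steps S1 and S2, each offering alternative actions.
tree = Node("G0", "AND", [
    Node("S1", "OR", [Node("A2"), Node("A3")]),
    Node("S2", "OR", [Node("A4"), Node("A5"), Node("A6")]),
])
for p in patterns(tree):
    print(" -> ".join(p))
```

The same enumeration applies to fault trees, where leaves are elementary component failures and intermediate nodes are subsystem or service failures.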

IRRIIS Modelling attack knowledge: attack/fault trees
Fault trees: the fault tree (top event TE, with nodes S1, S3, C11, C12, C2, C31, C32) generates fault patterns (fault behaviors), composed of sequences of elementary failures. The terminal leaves of the tree (C..) represent the elementary failures of the single components of the LCCI; the intermediate nodes (S..) represent failures of subsystems or services to which the components contribute. The tree shown generates two fault patterns.

IRRIIS Example of attack tree to model an attack in a local area network (tree structure, with AND and OR gates). The reference model takes into account the Fault Tree Handbook of the US Nuclear Regulatory Commission.

IRRIIS Example of attack tree to model an attack in a local area network (tree structure): verify the accessibility to a subnet.

IRRIIS Example of attack tree to model an attack in a local area network (tree structure): discover the target locations & addresses.

IRRIIS Example of attack tree to model an attack in a local area network (tree structure): perform sniffing activity or cause damage.

IRRIIS Example of attack tree to model an attack in a local area network (tree structure): generated behaviours table, listing attack behaviours 0 to 7.

IRRIIS Example of attack tree to model an attack: associating difficulties to the actions (OR and AND gates; 1.0 = minimum difficulty). Generated behaviours table, ordered by action difficulty:
Attack behaviour 0 with 0.39 difficulty
Attack behaviour 2 with 0.24 difficulty
Attack behaviour 1 with 0.12 difficulty
Attack behaviour 3 with 0.08 difficulty
Attack behaviour 4 with 0.08 difficulty
Attack behaviour 6 with 0.05 difficulty
Attack behaviour 5 with 0.03 difficulty
Attack behaviour 7 with 0.02 difficulty
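The ordering of generated behaviours can be sketched by scoring each behaviour from its actions and sorting, easiest first. This is a hedged sketch under stated assumptions: the per-action scores, the action names, the behaviour compositions, and the product-of-scores combination rule are all hypothetical, not the slide's actual values or method.

```python
from math import prod

# Hypothetical per-action difficulty scores in [0, 1] (1.0 = minimum
# difficulty, as on the slide); real values would come from domain experts.
action_difficulty = {"A2": 0.9, "A3": 0.4, "A4": 0.8, "A5": 0.5, "A6": 0.3}

# Hypothetical behaviours generated from an attack tree (action sequences).
behaviours = {
    "Attack behaviour 0": ["A2", "A4"],
    "Attack behaviour 1": ["A2", "A5"],
    "Attack behaviour 2": ["A2", "A6"],
    "Attack behaviour 3": ["A3", "A4"],
}

# Score each behaviour as the product of its action scores and rank the
# behaviours from easiest (highest score) to hardest, as in the table.
scored = sorted(
    ((name, prod(action_difficulty[a] for a in steps))
     for name, steps in behaviours.items()),
    key=lambda kv: kv[1], reverse=True)
for name, score in scored:
    print(f"{name}: {score:.2f}")
```

Ranking behaviours this way lets the tester run the most plausible (least difficult) attack patterns first when exhaustive execution is too expensive.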

IRRIIS Macro scenarios: how to compose attack and fault trees (diagram: an attack tree linked, via a "wait for malfunction" step, to a fault tree and a further attack tree).

IRRIIS Composite attack and fault behavior (diagram: an attack behavior of basic and final actions through transitions t1..t4, a network malfunction triggering a fault behavior from Basic Event 0, and an attack escalation).

IRRIIS Testing MIT components (meaning)
REQUIREMENTS:
Risk Ass. (1) - The Risk estimator assessment of cascading and escalating effects shall be performed in near real-time.
Risk Ass. (2) - The Risk estimator assessment of cascading and escalating effects shall be performed in a predictive way.
Risk Ass. (3) - The Risk estimator shall estimate immediate risk to the LCCI.
Risk Ass. (4) - The Risk estimator may estimate expected risk to the LCCI.
Risk Ass. (5) - The Risk estimator shall estimate potential cascading effects.
Objective of the TEST: validate the requirements.
Risk Ass. (1) - OK
Risk Ass. (2) - OK
Risk Ass. (3) - OK
Risk Ass. (4) - NOT OK
Risk Ass. (5) - NOT OK

IRRIIS Testing MIT components (meaning)
One of the main objectives of testing the MIT components inside the SimCIP simulated environment is to evaluate the rate of false/true alarms. The second is to evaluate whether the rate of false alarms is acceptable for the LCCI operators.

IRRIIS Detecting interdependency alarms
Evaluation table (predicted states vs. real states):
                 Real: Alarm    Real: No Alarm
P(Alarm)             A               B
P(No Alarm)          C               D
A = number of alarm states correctly predicted
B = number of no-alarm states predicted as alarms (FALSE POSITIVE)
C = number of alarm states not predicted (FALSE NEGATIVE)
D = number of no-alarm states correctly predicted
The goal is: max(A + D), min(B + C)

IRRIIS Detecting interdependency alarms
From the same evaluation table (A, B, C, D):
Fn = C / (C + D)   Observed False Negative Ratio (FNR)
Fp = B / (A + B)   Observed False Positive Ratio (FPR)
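The table counts and the two observed ratios can be computed as below. Note that the slide's formulas condition on the predicted state (Fn on the predicted no-alarm states, Fp on the predicted alarm states), and the sketch reproduces them exactly as given; the function names and the sample alarm sequences are illustrative, not part of the MIT tooling.

```python
def evaluation_table(real, predicted):
    """Count A, B, C, D from paired real/predicted alarm states (1 = alarm)."""
    A = sum(r and p for r, p in zip(real, predicted))              # alarm correctly predicted
    B = sum((not r) and p for r, p in zip(real, predicted))        # false positive
    C = sum(r and (not p) for r, p in zip(real, predicted))        # false negative (missed alarm)
    D = sum((not r) and (not p) for r, p in zip(real, predicted))  # no-alarm correctly predicted
    return A, B, C, D

def observed_ratios(A, B, C, D):
    """The slide's observed ratios: Fn over predicted no-alarm states,
    Fp over predicted alarm states."""
    Fn = C / (C + D) if (C + D) else 0.0
    Fp = B / (A + B) if (A + B) else 0.0
    return Fn, Fp

# Illustrative run: real vs. predicted alarm states over eight test steps.
real      = [1, 1, 0, 0, 1, 0, 0, 1]
predicted = [1, 0, 1, 0, 1, 0, 0, 1]
A, B, C, D = evaluation_table(real, predicted)
print(A, B, C, D)                     # 3 1 1 3
print(observed_ratios(A, B, C, D))    # (0.25, 0.25)
```

Running the same computation across many alternative attack/fault behaviors, as the next slide recommends, gives ratios that are far more meaningful than those from a single behavior.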

IRRIIS Detecting interdependency alarms
Do not be afraid to discover false alarms during the tests: that is the objective of the tests! In many cases false alarms can be reduced simply by tuning the "sensitivity" level of a MIT component. A single attack/fault behavior is not sufficient to evaluate the true/false alarm ratio; many alternative behaviors are needed. Logging facilities are very important during experimentation, as the test results must be archived and documented.

IRRIIS Proposed testing strategy: test design
IRRIIS testing operator
Attack/Fault tree editor: design or modify a scenario tree (GA S1 A2 S2 A3 A4 A5 A6)
Fault behaviors editor: generate & modify fault behaviors, insert timing information, etc.
Documentation console: view logs, edit test documents (logs, test documents)
Fault behavior execution: execute behaviours, set monitors; attacks/faults execution in SimCIP
Test design entry point / test design exit point

IRRIIS Proposed testing strategy: fast testing
IRRIIS testing operator
Attack/Fault tree editor: design or modify a scenario tree (GA S1 A2 S2 A3 A4 A5 A6)
Fault behaviors editor: generate & modify fault behaviors, insert timing information, etc.
Documentation console: view logs, edit test documents (logs, test documents)
Fault behavior execution: execute behaviours, set monitors; attacks/faults execution in SimCIP
Test execution entry point / test execution exit point

IRRIIS Proposed testing strategy: exhaustive testing
IRRIIS testing operator
Attack/Fault tree editor: design or modify a scenario tree (GA S1 A2 S2 A3 A4 A5 A6)
Fault behaviors editor: generate & modify fault behaviors, insert timing information, etc.
Documentation console: view logs, edit test documents (logs, test documents)
Fault behavior execution: execute behaviours, set monitors; attacks/faults execution in SimCIP
Test entry point / test exit point

IRRIIS Physical TESTBED configurations: SimCIP architecture (diagram: LAMPSSys RTI, GUI, Logger, Tool 1, Tool 2, Electricity Simulator, Com Simulator, LCCI Data, Agent / Scenario Behaviours, Fault / Attack Tool, MIT, Analysis 1, Analysis 2, Analysis 3).

IRRIIS Physical TESTBED configurations: simple SimCIP configuration (diagram: GUI, Logger, LAMPSSys RTI, Agent / Scenario Behaviours, Electricity Simulator, Com Simulator, LCCI Electricity Data Base, LCCI Telecom Data Base, Tool 1, Tool 2, Analysis 1, 2, 3...).

IRRIIS Physical TESTBED configuration: SimCIP for testing attacks and faults without MIT (diagram: GUI, Logger, LAMPSSys RTI, Agent / Scenario Behaviours, Electricity Simulator, Com Simulator, LCCI Electricity Data Base, LCCI Telecom Data Base, Fault / Attack Tool, Tool 1, Tool 2, Analysis 1, 2, 3...).

IRRIIS Physical TESTBED configuration: SimCIP for testing MIT with normal behaviors, to detect false positive alarms (diagram: GUI, Logger, LAMPSSys RTI, Agent / Scenario Behaviours, Electricity Simulator, Com Simulator, LCCI Electricity Data Base, LCCI Telecom Data Base, MIT communication, Electricity Add-on, Telecom Add-on).

IRRIIS Physical TESTBED configuration: SimCIP for testing MIT in the presence of attacks/faults, to detect false negative alarms (diagram: GUI, Logger, LAMPSSys RTI, Agent / Scenario Behaviours, Electricity Simulator, Com Simulator, LCCI Electricity Data Base, LCCI Telecom Data Base, MIT communication, Electricity Add-on, Telecom Add-on, Fault / Attack Tool, Tool 1, Tool 2, Analysis 1, 2, 3...).

IRRIIS Conclusions
Testing of MIT components will be a continuous and iterative process. It is necessary to distinguish between fast tests of the simpler requirements and the exhaustive test process aimed at evaluating the MIT's efficiency in detecting interdependency alarms. Designing tests and logging/archiving reports in a standard way, with the support of a common tool, will help to obtain sets of comparable tests even when they are produced in different SimCIP installations. The testing environment will be one of the major research products of the project, where experimentation may continue after the end of the project. QUESTIONS?