
Automated Software Engineering Research Group 1 Fix Slide 12: title should be "Limitations" (not "Challenges"). Slide 18: Verification -> counterexample collection

Computer Science 2 Mining Likely Properties of Access Control Policies via Association Rule Mining JeeHyun Hwang Advisor: Dr. Tao Xie Preliminary Oral Examination Department of Computer Science North Carolina State University, Raleigh

Automated Software Engineering Research Group 3 Access Control Mechanism
Access control mechanisms control which subjects (such as users or processes) have access to which resources.
A policy defines the rules according to which access control must be regulated.
[Figure: a request goes to the policy evaluation engine, which consults the policy and returns a response (Permit or Deny).]

Automated Software Engineering Research Group 4 Access Control Mechanism
Access control mechanisms control which subjects (such as users or processes) have access to which resources.
A policy defines the rules according to which access control must be regulated.
[Figure: the same request/response pipeline as slide 3, now with faults present in the policy.]

Automated Software Engineering Research Group 5 Research Accomplishments
Quality of Access Control
– Automated test generation [SRDS 08][SSIRI 09]
– Likely-property mining [DBSec 10]
– Property quality assessment [ACSAC 08]
– (Tool) Access Control Policy Tool (ACPT) [POLICY Demo 10]
Debugging
– Fault localization for firewall policies [SRDS 09 SP]
– Automated fault correction for firewall policies [USENIX LISA 10]
Performance
– Efficient policy evaluation engine [Sigmetrics 08]

Automated Software Engineering Research Group 6 Outline Motivation Our approach Future work

Automated Software Engineering Research Group 7 Outline Motivation Our approach Future work

Automated Software Engineering Research Group 8 Motivation
Access control is used to control access to a large number of resources [1,2]
Specifying and maintaining correct access control policies is challenging [1,2]
– Authorized users should have access to the data
– Unauthorized users should not have access to the data
Faults in access control policies lead to security problems [1,3]
1. A. Wool. A quantitative study of firewall configuration errors. Computer, 37(6):62–67, 2004.
2. Lujo Bauer, Lorrie Cranor, Robert W. Reeder, Michael K. Reiter, and Kami Vaniea. Real Life Challenges in Access-control Management. CHI 2009.
3. Sara Sinclair, Sean W. Smith. What's Wrong with Access Control in the Real World? IEEE Security and Privacy 2010.

Automated Software Engineering Research Group 9 Motivation – cont.
Need to ensure the correct behaviors of policies
– Property verification [1,2]
Model a policy and verify properties against the policy
Check whether properties are satisfied by a policy
Violations of a property expose policy faults
[Figure: Policy + Property → Verification → Satisfied? (True, False)]
Example properties [3]:
1. A faculty member is permitted to assign grades
2. A subject (who is not a faculty member) is permitted to enroll in courses
1. Kathi Fisler, Shriram Krishnamurthi, Leo A. Meyerovich, Michael Carl Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005.
2. Vladimir Kolovski, James Hendler, Bijan Parsia. Analyzing Web Access Control Policies. WWW 2007.
3. Michael Carl Tschantz, Shriram Krishnamurthi. Towards Reasonability Properties for Access-Control Policy Languages. SACMAT 2006.

Automated Software Engineering Research Group 10 Problem
Quality of properties is assessed in terms of fault-detection capability [1]
– Properties help detect faults
– Confidence in policy correctness depends on the quality of the specified properties
CONTINUE subject [2]: 25% fault-detection capability with its seven properties
In practice, writing properties of high quality is not trivial
1. Evan Martin, JeeHyun Hwang, Tao Xie, and Vincent C. Hu. Assessing quality of policy properties in verification of access control policies. ACSAC 2008.
2. Kathi Fisler, Shriram Krishnamurthi, Leo A. Meyerovich, Michael Carl Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005.

Automated Software Engineering Research Group 11 Proposed Solution
Mine likely properties automatically based on correlations of attribute values (e.g., write and modify)
– Aim for properties of high quality (i.e., high fault-detection capability)
Our assumption: the policy may include faults
Mine likely properties, which hold for all or most of the policy behaviors (>= threshold)
[Figure: likely properties are mined from the (possibly faulty) policy and then used to detect faults.]

Automated Software Engineering Research Group 12 Limitations
Policy is domain-specific
– Mine likely properties within a given policy
Limited set of decisions
– Two decisions (Permit or Deny) for any request
Prioritization
– Which counterexamples should be inspected first?
Expressiveness of likely properties
– How to find counterexamples?

Automated Software Engineering Research Group 13 XACML Policy Example
RBAC_school policy
[Figure: Rule 1: Faculty may View/Assign InternalGrade and ExternalGrade; Rule 2: Student may Receive ExternalGrade; Rule 3: FacultyFamily may Receive ExternalGrade]
Rule 1 reads: If role = Faculty and resource = (ExternalGrade or InternalGrade) and action = (View or Assign) then Permit
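The rules above are XACML (XML) in the original policy; as a loose illustration only, here is a minimal Python sketch of the same rules under a first-applicable combining assumption. Rule contents come from the slide; the encoding and the default Deny are assumptions.

```python
# Hypothetical sketch of the RBAC_school rules above, encoded as
# (roles, resources, actions) -> decision entries. The real policy is XACML
# (XML); this Python form and the default Deny are only for illustration.
RULES = [
    ({"Faculty"}, {"InternalGrade", "ExternalGrade"}, {"View", "Assign"}, "Permit"),  # Rule 1
    ({"Student"}, {"ExternalGrade"}, {"Receive"}, "Permit"),                          # Rule 2
    ({"FacultyFamily"}, {"ExternalGrade"}, {"Receive"}, "Permit"),                    # Rule 3
]

def evaluate(role, resource, action, rules=RULES, default="Deny"):
    """First-applicable evaluation: the first matching rule decides the request."""
    for roles, resources, actions, decision in rules:
        if role in roles and resource in resources and action in actions:
            return decision
    return default  # assumed fall-through decision

print(evaluate("Faculty", "ExternalGrade", "Assign"))  # Permit (Rule 1)
print(evaluate("Student", "InternalGrade", "View"))    # Deny (no rule applies)
```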

Automated Software Engineering Research Group 14 XACML Policy Example – cont.
RBAC_school policy
[Figure: Rule 4: Lecturer may View/Assign InternalGrade and ExternalGrade; Rules 5–6: TA may View/Assign InternalGrade]

Automated Software Engineering Research Group 15 XACML Policy Example – cont.
RBAC_school policy
[Figure: Rule 4: Lecturer may View/Assign InternalGrade and ExternalGrade; Rules 5–6: TA may View/Assign InternalGrade]
Inject a fault into Rule 5: Receive instead of View
Incorrect policy behaviors:
1. TA is denied to View InternalGrade
2. TA is permitted to Receive InternalGrade
Likely properties mined without the fault:
(View) Permit → (Assign) Permit : frequency 5 (100%)
(Assign) Permit → (View) Permit : frequency 5 (100%)
(Assign) Permit → (Receive) Deny : frequency 5 (100%)
Likely properties mined with the fault:
(View) Permit → (Assign) Permit : frequency 4 (100%)
(Assign) Permit → (View) Permit : frequency 4 (80%)
(Assign) Permit → (Receive) Deny : frequency 4 (80%)

Automated Software Engineering Research Group 16 Policy Model
Role-Based Access Control policy [1]
– Permissions are associated with roles
– A subject (role) is allowed or denied access to certain objects (i.e., resources) in a system
Subject: role of a person
Action: command that a subject executes on the resource
Resource: object
Environment: any other related constraints (e.g., time, location, etc.)
1. XACML Profile for Role Based Access Control (RBAC), 2004.

Automated Software Engineering Research Group 17 Likely-Property Model
Implication relation
– Correlates the decision (dec1) for an attribute value (v1) with the decision (dec2) for another attribute value (v2): (v1) dec1 → (v2) dec2
Types
– Subject attribute: (TA) Permit → (Faculty) Permit
– Action attribute: (Assign) Permit → (View) Permit
– Subject-action attribute: (TA & Assign) Permit → (Faculty & View) Permit

Automated Software Engineering Research Group 18 Framework
Our assumption: the policy may include faults
Mine likely properties, which hold for all or most of the policy behaviors (>= threshold)
[Figure: framework pipeline — relation table generation (slide 19), association rule mining (slide 20), and likely-property verification, i.e., counterexample collection (slide 21).]

Automated Software Engineering Research Group 19 Relation Table Generation
Find all possible request-response pairs in a policy
Generate relation tables (including all request-response pairs) of interest
– Input for an association rule mining tool
Example pairs:
1. Faculty is permitted to Assign ExternalGrade
2. Faculty is permitted to View ExternalGrade
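A hedged sketch of this step, assuming (consistently with the support counts worked out on slide 20) a relation table with one row per (role, resource) pair and one column per action holding the policy's decision; `evaluate` is the toy evaluator sketched after slide 13.

```python
from itertools import product

# Attribute domains taken from the RBAC_school example on slides 13-15.
ROLES = ["Faculty", "Student", "FacultyFamily", "Lecturer", "TA"]
RESOURCES = ["InternalGrade", "ExternalGrade"]
ACTIONS = ["View", "Assign", "Receive"]

def relation_table(evaluate):
    """One row per (role, resource); columns map each action to its decision."""
    rows = []
    for role, resource in product(ROLES, RESOURCES):  # 5 x 2 = 10 rows
        rows.append({action: evaluate(role, resource, action) for action in ACTIONS})
    return rows
```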

Automated Software Engineering Research Group 20 Association Rule Mining
Given a relation table, find implication relations of attributes via association rule mining [1,2]
– Find the three types of likely properties
– Report likely properties with confidence values over a given threshold
Support: Supp(X) = D / T, the percentage of the total number of records
– T is the total number of rows
– D is the number of rows that include attribute-decision X
Example: X = (Assign) Permit, Y = (View) Permit
– Supp(X) = 5/10 = 0.5, Supp(Y) = 4/10 = 0.4
– Supp(X ∪ Y) = 4/10 = 0.4
Confidence: Confidence(X → Y) = Supp(X ∪ Y) / Supp(X)
– The likelihood of a likely property
– Confidence(X → Y) = 4/5 = 0.8
1. Agrawal, R., Srikant, R. Fast algorithms for mining association rules in large databases. VLDB 1994.
2. Borgelt, C. Apriori – Association Rule Induction / Frequent Item Set Mining.
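A minimal sketch of the support/confidence computation above, over the relation-table rows produced by the previous sketch. The authors used an Apriori tool [2]; this hand-rolled version only makes the arithmetic concrete.

```python
def support(rows, item):
    """item = (action, decision): fraction of rows where that action has that decision."""
    action, decision = item
    return sum(1 for row in rows if row.get(action) == decision) / len(rows)

def confidence(rows, x, y):
    """Confidence(X -> Y) = Supp(X and Y) / Supp(X)."""
    (ax, dx), (ay, dy) = x, y
    both = sum(1 for r in rows if r.get(ax) == dx and r.get(ay) == dy) / len(rows)
    return both / support(rows, x)

# With the faulty policy of slide 15: Supp(("Assign", "Permit")) = 0.5 and
# Confidence(("Assign", "Permit") -> ("View", "Permit")) = 0.4 / 0.5 = 0.8.
```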

Automated Software Engineering Research Group 21 Likely Property Verification
Verify a policy against the given likely properties and find counterexamples
– Counterexample: (v1) dec1 → (v2) ¬dec2
Inspect counterexamples to determine whether they expose a fault
Rationale: counterexamples (which do not satisfy the likely properties) deviate from the policy's normal behaviors and are special cases worth inspecting
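A sketch of counterexample collection (the step the fix note on slide 1 renames "Verification -> counterexample collection") under the same toy representation: rows satisfying the premise of a likely property but not its conclusion are the counterexamples to inspect. This is an illustration, not the framework's actual verifier.

```python
def counterexamples(rows, premise, conclusion):
    """premise/conclusion = (action, decision); rows with (v1) dec1 but not (v2) dec2."""
    (a1, d1), (a2, d2) = premise, conclusion
    return [row for row in rows if row.get(a1) == d1 and row.get(a2) != d2]
```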

Automated Software Engineering Research Group 22 Basic and Prioritization Techniques
Basic technique: inspect counterexamples in no particular order
Prioritization technique: designed to reduce inspection effort
– Inspect counterexamples in order of their fault-detection likelihood
– Duplicate counterexamples first
– Then counterexamples produced from likely properties with fewer counterexamples
[Figure: counterexamples (CE) collected from likely properties, with duplicates removed before inspection.]
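A hypothetical sketch of that inspection order: duplicated counterexamples (produced by several likely properties) come first, and ties are broken in favor of properties that produced fewer counterexamples. Any detail beyond the slide (e.g., the exact tie-break) is an assumption.

```python
from collections import Counter

def prioritize(ce_by_property):
    """ce_by_property: {property: [counterexample, ...]} with hashable counterexamples."""
    dup = Counter(ce for ces in ce_by_property.values() for ce in ces)
    best_key = {}
    for ces in ce_by_property.values():
        for ce in ces:
            key = (-dup[ce], len(ces))  # more duplicates first, then fewer sibling CEs
            best_key[ce] = min(best_key.get(ce, key), key)
    return sorted(best_key, key=best_key.get)  # deduplicated, highest priority first
```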

Automated Software Engineering Research Group 23 Evaluation
RQ1 (fault-detection capability): how much higher a percentage of faults is detected by our approach compared to an existing related approach [1]?
RQ2 (cost): how much lower a percentage of distinct counterexamples is generated by our approach compared to the existing approach [1]?
RQ3 (cost): for cases where a fault in a faulty policy is detected by our approach, how high a percentage of distinct counterexamples (for inspection) is reduced by our prioritization?
1. Evan Martin and Tao Xie. Inferring Access-Control Policy Properties via Machine Learning. POLICY 2006.

Automated Software Engineering Research Group 24 Metrics
Fault-detection ratio (FR)
Counterexample count (CC)
Counterexample-reduction ratio (CRB) for our approach over the existing approach
Counterexample-reduction ratio (CRP) for the prioritization technique over the basic technique

Automated Software Engineering Research Group 25 Mutation Testing
Fault-detection capability [1,2]
– Seed a fault into a policy to generate a mutant (a faulty version)
– FR = number of detected faults / total number of faults
[Figure: counterexamples are evaluated against the policy (correct version), giving expected decisions, and against the mutant (faulty version), giving actual decisions]
1. Evan Martin, JeeHyun Hwang, Tao Xie, and Vincent C. Hu. Assessing quality of policy properties in verification of access control policies. ACSAC 2008.
2. Evan Martin and Tao Xie. A Fault Model and Mutation Testing of Access Control Policies. WWW 2007.
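A minimal sketch of the kill check, reusing the toy `evaluate` sketched after slide 13: a mutant is detected ("killed") when any test request gets a different decision from the mutant than from the correct policy.

```python
def kills(evaluate, correct_rules, mutant_rules, requests):
    """True if some (role, resource, action) request exposes the seeded fault."""
    return any(
        evaluate(*req, rules=correct_rules) != evaluate(*req, rules=mutant_rules)
        for req in requests
    )

def fault_detection_ratio(evaluate, correct_rules, mutants, requests):
    """FR = number of detected (killed) mutants / total number of mutants."""
    killed = sum(1 for m in mutants if kills(evaluate, correct_rules, m, requests))
    return killed / len(mutants)
```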

Automated Software Engineering Research Group 26 Evaluation Setup
Seed a policy with faults to synthesize faulty policies
– One fault in each faulty policy, for ease of evaluation
– Four fault types [1]: Change-Rule Effect (CRE), Rule-Target True (RTT), Rule-Target False (RTF), Removal Rule (RMR)
Compare results of our approach with those of the previous DT approach based on decision trees [2]
1. Evan Martin and Tao Xie. A Fault Model and Mutation Testing of Access Control Policies. WWW 2007.
2. Evan Martin and Tao Xie. Inferring Access-Control Policy Properties via Machine Learning. POLICY 2006.

Automated Software Engineering Research Group 27 4 XACML Policy Subjects
Real-life access control policies
– codeD2: a modified version of codeD [1]
– continue-a, continue-b [1]: policies for a conference review system
– Univ [2]: policies for a university
The number of rules ranges from … to … rules
1. Kathi Fisler, Shriram Krishnamurthi, Leo A. Meyerovich, Michael Carl Tschantz. Verification and change-impact analysis of access-control policies. ICSE 2005.
2. Stoller, S.D., Yang, P., Ramakrishnan, C., Gofman, M.I. Efficient policy analysis for administrative role based access control. CCS 2007.

Automated Software Engineering Research Group 28 Evaluation Results (1/2) – CRE Mutants
FR: fault-detection ratio; CC: counterexample count
CRB: counterexample-reduction ratio for our approach over the DT approach
CRP: counterexample-reduction ratio for the prioritization technique over the basic technique
Fault-detection ratios: DT (25.9%), Basic (62.3%), Prioritization (62.3%)
Our approach (including the Basic and Prioritization techniques) outperforms DT in terms of fault-detection capability

Automated Software Engineering Research Group 29 Evaluation Results (1/2) – CRE Mutants
FR: fault-detection ratio; CC: counterexample count
CRB: counterexample-reduction ratio for our approach over the DT approach
CRP: counterexample-reduction ratio for the prioritization technique over the basic technique
Our approach reduced the number of counterexamples by 55.5% over DT
– Our approach reduced counterexamples while detecting a higher percentage of faults (addressed in RQ1)
Prioritization reduced, on average, 38.5% of the counterexamples (for inspection) over Basic (Column "% CRP")

Automated Software Engineering Research Group 30 Evaluation Results (2/2) – Other Mutants
Prioritization and Basic achieve the highest fault-detection capability for policies with RTT, RTF, or RMR faults
[Table: fault-detection ratios of faulty policies]

Automated Software Engineering Research Group 31 Conclusion
A new approach that mines likely properties characterizing correlations of policy behaviors w.r.t. attribute values
An evaluation on 4 real-world XACML policies
– Our approach achieved >30% higher fault-detection capability than the previous related approach based on decision trees
– Our approach helped reduce >50% of the counterexamples for inspection compared to the previous approach

Automated Software Engineering Research Group 32 Outline Motivation Our approach Future work

Automated Software Engineering Research Group 33 Future Work
Dissertation goal
– Improving quality of access control: automated test generation, likely-property mining
– Debugging: fault localization
Policy combination
Access Control Policy Tool (ACPT)
Testing of policies in healthcare systems, e.g., interoperability and regulatory compliance (e.g., HIPAA)

Automated Software Engineering Research Group 34 Questions?

Automated Software Engineering Research Group 35 Other Challenges
Generate properties of high quality
– Cover a large portion of policy behaviors
Obligation / Delegation / Environments

Automated Software Engineering Research Group 36 Related Work
Assessing quality of policy properties in verification of access control policies [Martin et al. ACSAC 2008]
Inferring access-control policy properties via machine learning [Martin & Xie POLICY 2006]
Detecting and resolving policy misconfigurations in access-control systems [Bauer et al. SACMAT 2008]

Automated Software Engineering Research Group 37 My Other Research Work

Automated Software Engineering Research Group 38 Systematic Structural Testing of Firewall Policies JeeHyun Hwang 1, Tao Xie 1, Fei Chen 2, and Alex Liu 2 North Carolina State University 1 Michigan State University 2 (SRDS 2008)

Automated Software Engineering Research Group 39 Problem
Factors for misconfiguration
– Conflicts among rules
– Rule-set complexity
– Mistakes in handling corner cases
Systematic testing of firewall policies
– Exhaustive testing is impractical
– Consider test effort and effectiveness together
– Complements firewall verification
How to test a firewall?

Automated Software Engineering Research Group 40 Firewall Policy Structure
A policy is expressed as a set of rules:
Rule | Src | SPort | Dest | DPort | Protocol | Decision
r1 | * | * | …*.* | * | * | accept
r2 | 1.2.*.* | * | * | * | TCP | discard
r3 | * | * | * | * | * | …
A rule is represented as <predicate> → <decision>
– <predicate> is a set of constraints over the fields Src, SPort, Dest, DPort, and Protocol, each represented as an integer range
– <decision> is "accept" or "discard"
Given a packet (Src, SPort, Dest, DPort, Protocol), <predicate> evaluates to "True" or "False"; when it evaluates to "True", <decision> is returned
Firewall format: Cisco reflexive ACLs
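A minimal sketch (an assumption, not the paper's tooling) of this structure: each field constraint is an integer range, with '*' standing for the whole field domain, and the first matching rule decides the packet. The concrete ranges for r1 and r3 were lost above, so stand-ins are used.

```python
IP_ANY, PORT_ANY, PROTO_ANY = (0, 2**32 - 1), (0, 2**16 - 1), (0, 255)

RULES = [
    # (src, sport, dest, dport, protocol) ranges -> decision; r1's elided Dest
    # range is replaced by a stand-in, and r3 is assumed to be a catch-all.
    ((IP_ANY, PORT_ANY, IP_ANY, PORT_ANY, PROTO_ANY), "accept"),              # r1
    (((16908288, 16973823), PORT_ANY, IP_ANY, PORT_ANY, (6, 6)), "discard"),  # r2: 1.2.*.*, TCP
    ((IP_ANY, PORT_ANY, IP_ANY, PORT_ANY, PROTO_ANY), "discard"),             # r3
]

def evaluate(packet, rules=RULES):
    """packet: (src, sport, dest, dport, protocol) as ints; first match wins."""
    for ranges, decision in rules:
        if all(lo <= v <= hi for v, (lo, hi) in zip(packet, ranges)):
            return decision
    return "discard"  # assumed default when no rule matches
```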

Automated Software Engineering Research Group 41 Random Packet Generation
Given a field's domain (e.g., IP addresses [0, 2^32 − 1]), random packets are generated within the domain
[Table: the all-* domain row and one randomly generated TCP packet]
– Easy to generate packets
– Due to its randomness, difficult to achieve high structural coverage
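A tiny sketch of this generator over assumed per-field domains (IPv4 addresses, 16-bit ports, 8-bit protocol numbers).

```python
import random

# Per-field domains: (src, sport, dest, dport, protocol).
DOMAINS = [(0, 2**32 - 1), (0, 2**16 - 1), (0, 2**32 - 1), (0, 2**16 - 1), (0, 255)]

def random_packet(domains=DOMAINS):
    """Draw each field value uniformly from its domain."""
    return tuple(random.randint(lo, hi) for lo, hi in domains)
```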

Automated Software Engineering Research Group 42 Packet Generation Based on Local Constraint Solving
Considering an individual rule, generate packets that evaluate the constraints of its clauses in a specified way
– For example, every clause is evaluated to true (T T T T T)
– For example, the Dest field clause is evaluated to false and the remaining clauses to true (T T F T T)
Drawback: conflicts among rules are ignored
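A hedged sketch of local constraint solving for one rule: each field value is drawn from inside its range (clause true) or just outside it (clause false), with no regard for preceding rules.

```python
def local_packet(ranges, truth, field_max=2**32 - 1):
    """ranges: per-field (lo, hi); truth: per-field bool (True = make the clause true).
    Assumes a clause to be falsified does not span its whole field domain."""
    packet = []
    for (lo, hi), want_true in zip(ranges, truth):
        if want_true:
            packet.append(lo)                                    # any value in [lo, hi]
        else:
            packet.append(hi + 1 if hi < field_max else lo - 1)  # value outside the range
    return tuple(packet)

# All-true packet for r2, then one falsifying only its Protocol clause:
r2 = [(16908288, 16973823), (0, 65535), (0, 2**32 - 1), (0, 65535), (6, 6)]
print(local_packet(r2, [True] * 5))
print(local_packet(r2, [True, True, True, True, False]))
```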

Automated Software Engineering Research Group 43 Packet Generation Based on Global Constraint Solving
Considering that the preceding rules are not applicable, generate packets that evaluate the constraints of a given rule's clauses in a specified way
– Example: a packet applicable to r3 (considering that r1 and r2 are not applicable): r3's clauses all evaluate to true while r1 and r2 each have a clause evaluate to false
Resolves conflicts among rules, but requires analysis time to solve such conflicts
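A brute-force sketch of the global variant on a toy domain: find a packet that matches rule i while matching none of the rules before it. The paper solves these constraints analytically; exhaustive search is only for illustration on tiny domains.

```python
from itertools import product

def matches(packet, ranges):
    return all(lo <= v <= hi for v, (lo, hi) in zip(packet, ranges))

def global_packet(rules, i, domains):
    """rules: list of per-field (lo, hi) tuples; domains: small per-field value lists."""
    for packet in product(*domains):
        if matches(packet, rules[i]) and not any(matches(packet, rules[j]) for j in range(i)):
            return packet
    return None  # rule i is unreachable (shadowed by the preceding rules)
```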

Automated Software Engineering Research Group 44 Mutation Testing
Why mutation testing?
– To measure the quality of a test packet set (i.e., its fault-detection capability)
Seed a fault into a firewall policy to generate a mutant (a faulty version)
[Figure: test packets are evaluated against the firewall (correct version), giving expected decisions, and against the mutant (faulty version), giving actual decisions]
Compare their decisions
– If they differ, the fault is detected (i.e., the mutant is "killed")

Automated Software Engineering Research Group 45 Mutation Operators
Operator | Description
RPT | Rule Predicate True
RPF | Rule Predicate False
RCT | Rule Clause True
RCF | Rule Clause False
CRSV | Change Range Start point Value
CREV | Change Range End point Value
CRSO | Change Range Start point Operator
CREO | Change Range End point Operator
CRO | Change Rule Order
CRD | Change Rule Decision
RMR | Remove Rule

Automated Software Engineering Research Group 46 Experiment
Given a firewall policy (assumed correct!)
– Generate mutants
– Generate packet sets (one per technique)
Investigate the following correlations:
– Packet sets and their achieved structural coverage
– Structural coverage criteria and fault-detection capability
– Packet sets and their reduced packet sets in terms of fault-detection capability
– Characteristics of each mutation operator

Automated Software Engineering Research Group 47 Experiment (Contd.)
Notations
– Rand: packet set generated by the random packet generation technique
– Local: packet set generated by the packet generation technique based on local constraint solving
– Global: packet set generated by the packet generation technique based on global constraint solving
– R-Rand, R-Local, and R-Global: their reduced packet sets

Automated Software Engineering Research Group 48 Subjects
We used 14 firewall policies
Number of test packets: approximately 2 packets per rule
[Table legend]
– # Rules: number of rules
– # Mutants: number of mutants
– Gen time (milliseconds): packet generation time (particularly for Global)
– Global: global constraint solving

Automated Software Engineering Research Group 49 Measuring Rule Coverage
Rand < Local ≤ Global
– Rand achieves the lowest rule coverage
– In general, Global achieves slightly higher rule coverage than Local

Automated Software Engineering Research Group 50 Reducing the number of packet sets
Reduced packet set (e.g., R-Rand)
– Maintains the same level of structural coverage
– R-Rand (5% of Rand), R-Local (66% of Local), and R-Global (60% of Global)
– Compare their fault-detection capabilities

Automated Software Engineering Research Group 51 Fault detection capability by subject policies
R-Rand ≤ Rand < R-Local ≤ Local < R-Global ≤ Global
A packet set with higher structural coverage has higher fault-detection capability

Automated Software Engineering Research Group 52 Fault detection capability by mutation operators
Mutant-killing ratios vary by mutation operator
– Above 85%: RPT
– 30%–40%: RPF, RMR
– 10%–20%: CRSV, CRSO
– 0%–10%: RCT, RCF, CREV, CREO, CRO

Automated Software Engineering Research Group 53 Related Work
Testing of XACML access control policies [Martin et al. ICICS 2006, WWW 2007]
Specification-based testing of firewalls [Jürjens et al. PSI 2001]
– State transition model between the firewall and its surrounding network
Defining policy criteria identified by interactions between rules [El-Atawy et al. POLICY 2007]

Automated Software Engineering Research Group 54 Conclusion
Firewall policy testing helps improve our confidence in firewall policy correctness
Systematic testing of firewall policies
– Structural coverage criteria
– Three automated packet generation techniques
– Measured coverage: Rand < Local ≤ Global
Mutation testing to show fault-detection capability
– Generally, a packet set with higher structural coverage has higher fault-detection capability
– It is worthwhile to generate test packet sets that achieve high structural coverage

Automated Software Engineering Research Group 55 Fault Localization for Firewall Policies
JeeHyun Hwang 1, Tao Xie 1, Fei Chen 2, and Alex Liu 2
North Carolina State University 1, Michigan State University 2
Symposium on Reliable Distributed Systems (SRDS 2009)

Automated Software Engineering Research Group 56 Fault Model
Faults in an attribute in a rule
– Rule Decision Change (RDC): change the rule's decision
R1: F1 ∈ [0,10] ∧ F2 ∈ [3, 5] → accept
R1': F1 ∈ [0,10] ∧ F2 ∈ [3, 5] → deny
– Rule Field Interval Change (RFC): change the selected rule's interval randomly
R1: F1 ∈ [0,10] ∧ F2 ∈ [3, 5] → accept
R1': F1 ∈ [2,7] ∧ F2 ∈ [3, 5] → accept
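A sketch of the two fault types, with rules represented as (per-field interval list, decision) pairs as in the example above; the representation is an assumption for illustration.

```python
import random

def rdc(rule):
    """Rule Decision Change: flip the decision (accept <-> deny)."""
    intervals, decision = rule
    return (intervals, "deny" if decision == "accept" else "accept")

def rfc(rule, field, domain=(0, 10)):
    """Rule Field Interval Change: replace one field's interval at random."""
    intervals, decision = rule
    lo, hi = sorted(random.randint(*domain) for _ in range(2))
    mutated = list(intervals)
    mutated[field] = (lo, hi)
    return (mutated, decision)

r1 = ([(0, 10), (3, 5)], "accept")
print(rdc(r1))     # ([(0, 10), (3, 5)], 'deny')
print(rfc(r1, 0))  # e.g. ([(2, 7), (3, 5)], 'accept')
```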

Automated Software Engineering Research Group 57 Overview of Approach
Input
– Faulty firewall policy
– Failed and passed test packets
Techniques
– Covered-rule-fault localization
– Rule reduction technique
– Rule ranking technique
Output
– Set of likely faulty rules (with their ranking)

Automated Software Engineering Research Group 58 Covered-Rule-Fault Localization
Inject a Rule Decision Change fault in R4: "accept" rather than "discard"
R1: F1 ∈ [0,10] ∧ F2 ∈ [3, 5] ∧ F3 ∈ [3, 5] → accept
R2: F1 ∈ [5, 7] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [3, 5] → discard
R3: F1 ∈ [5, 7] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [6, 7] → accept
R4: F1 ∈ [2,10] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [5,10] → discard (mutated to accept)
R5: F1 ∈ [0,10] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [0,10] → discard
Inspect the rules covered by failed tests: R4 is selected for inspection
The RDC faulty rule is effectively filtered out
[Table: per-rule coverage, selection, and numbers of failed/passed tests]
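A sketch of this technique under the same rule representation: a packet's covered rule is the first rule matching it, and rules covered by at least one failed test become the inspection candidates.

```python
def covered_rule(rules, packet):
    """Index of the first rule whose every field interval contains the packet."""
    for idx, (intervals, _) in enumerate(rules):
        if all(lo <= v <= hi for v, (lo, hi) in zip(packet, intervals)):
            return idx
    return None

def candidates(rules, failed_packets):
    """Rules covered by at least one failed test packet, in policy order."""
    covered = {covered_rule(rules, p) for p in failed_packets}
    return sorted(i for i in covered if i is not None)
```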

Automated Software Engineering Research Group 59 Rule Reduction Technique
Reduce the number of rules for inspection
Inject a Field Interval Change fault in R1's F3: F3 ∈ [3, 3] rather than F3 ∈ [3, 5]
R1: F1 ∈ [0,10] ∧ F2 ∈ [3, 5] ∧ F3 ∈ [3, 3] → accept (faulty)
R2: F1 ∈ [5, 7] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [3, 5] → discard
R3: F1 ∈ [5, 7] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [6, 7] → accept
R4: F1 ∈ [2,10] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [5,10] → discard
R5: F1 ∈ [0,10] ∧ F2 ∈ [0, 10] ∧ F3 ∈ [0,10] → discard
r' = the earliest-placed rule covered by failed tests: R4
Keep other rules by the following criterion:
– rules above r': R1, R2, R3
– of these, rules whose decision differs from r''s: R1, R3
[Table: per-rule coverage and numbers of failed/passed tests]
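A sketch of the reduction criterion stated above; `failed_covered` is the ordered candidate list from the previous sketch. On the example, this keeps {R1, R3, R4}.

```python
def reduced_set(rules, failed_covered):
    """Keep the earliest failed-covered rule r' plus every earlier rule whose
    decision differs from r''s (a shrunken earlier rule can leak packets to r')."""
    if not failed_covered:
        return []
    r = failed_covered[0]              # earliest-placed rule covered by failed tests
    _, dec = rules[r]
    return [i for i in range(r) if rules[i][1] != dec] + [r]
```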

Automated Software Engineering Research Group 60 Rule Ranking Technique
Rank rules based on their likelihood of being faulty, using clause coverage
– FC1 ≤ FC2
FC1: number of clauses that are evaluated to false in a faulty rule
FC2: number of clauses that are evaluated to false in other rules
– The ranking is calculated from the following quantities:
FF(r): number of clauses evaluated to false
FT(r): number of clauses evaluated to true
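The ranking formula itself did not survive transcription, so the following score is only a loose stand-in consistent with the stated intuition (rules reached with fewer false clauses are more suspicious); it is an assumption, not the paper's formula.

```python
# ASSUMPTION: stand-in score, not the formula from the SRDS 2009 paper.
def suspiciousness(ff, ft):
    """ff = FF(r), clauses evaluated to false; ft = FT(r), clauses evaluated to true.
    Higher score = fewer false clauses relative to true ones = inspect earlier."""
    return ft / (ff + ft) if (ff + ft) else 0.0
```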

Automated Software Engineering Research Group 61 Experiments
14 firewall policies
– # Rules: number of rules
– # Tests: number of generated test packets
– # RDC: number of RDC faulty policies
– # RFC: number of RFC faulty policies

Automated Software Engineering Research Group 62 Results: Covered-Rule-Fault Localization
100% of RDC faulty rules are detected
69% of RFC faulty rules are detected
– 31% of RFC faulty rules are not covered by any failed test

Automated Software Engineering Research Group 63 Results: Rule Reduction for Inspection
Rule reduction percentage: % Reduce (30.63% of rules)
Ranking-based rule reduction percentage: % R-Reduce (66% of rules)

Automated Software Engineering Research Group 64 Conclusion and Future Work
Our techniques help policy authors locate faults effectively by reducing the number of rules for inspection
– 100% of RDC faulty rules and 69% of RFC faulty rules can be detected by inspecting covered rules
– On average, 30.63% of rules are reduced for inspection by our rule reduction technique, and 56.53% by the rule ranking technique
We plan to investigate fault localization for multiple faults in a firewall policy