1
DOWeR: Detecting Outliers in Web Service Requests
Master's Presentation of Christian Blass
2
Overview
1. Motivation: Web Services (SOAP and REST), vulnerabilities, existing mitigation methods
2. Key Ideas of DOWeR: detection proxy, outlier detection, n-gram tries as feature vectors
3. Testing Environment: ROC vs. precision and recall, architecture of the evaluation engine
4. Tested Properties and Potential Issues
5. Outlook and Conclusion
6. Demonstration of DOWeR
3
1. Motivation
Rising popularity of Web Services: Amazon, eBay, Google
Hard to develop securely:
- Developers often lack knowledge of security issues
- Vulnerabilities in the application logic
Aims of attackers:
- Overloading the service
- Theft of private data
- Compromising client machines
Aims of this research: securing existing Web Services
- No insight into the Web Service structure required
- No alteration of code
- No deep understanding of security required
4
Web Services and Protocols for Implementation
Definition of Web Services: "A Web Service is a software system designed to support interoperable machine-to-machine interaction over a network." (W3C)
Protocols:
- SOAP: XML based; envelope, header, body; attributes embedded in the SOAP body
- REST: based on HTTP commands (GET, POST, DELETE, ...) applied to resources
Example: http://example.org/service?operation=search&term=Security (a Search operation with the term "Security")
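As a rough illustration of the two styles, the sketch below assembles the same search call as a REST request and as a SOAP envelope. The endpoint and the element names inside the envelope are assumptions derived from the example URL above, not the presented service's actual interface; the requests library is used only to build the messages.

```python
import requests

# REST style: operation and parameters are encoded directly in the URL.
rest = requests.Request(
    "GET", "http://example.org/service",
    params={"operation": "search", "term": "Security"},
).prepare()
print(rest.url)  # http://example.org/service?operation=search&term=Security

# SOAP style: the same call wrapped in an XML envelope and sent via POST.
# The <Search>/<term> element names are illustrative assumptions.
soap_envelope = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header/>
  <soap:Body>
    <Search>
      <term>Security</term>
    </Search>
  </soap:Body>
</soap:Envelope>"""

soap = requests.Request(
    "POST", "http://example.org/service",
    data=soap_envelope,
    headers={"Content-Type": "application/soap+xml"},
).prepare()
print(soap.body[:50])
```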
5
Vulnerabilities and Countermeasures
Vulnerabilities:
- SOAP vulnerabilities: exploiting XML parsers, DTD entity reference attack
- SQL injection: malicious alteration of an SQL query (a sketch follows below), e.g. the template SELECT * FROM data WHERE user = '<user input>' becomes SELECT * FROM data WHERE user = 'A' OR user = 'B'
- Cross-site scripting: the attacker forces execution of a script on a third-party client
Countermeasures:
- SOAP standards such as WS-Security, WS-Policy, and XML Schema
- Intrusion detection
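A minimal sketch of the SQL injection example above, using an in-memory SQLite database with made-up table contents; it only illustrates the mechanism, not any service discussed in the presentation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE data (user TEXT, secret TEXT)")
conn.execute("INSERT INTO data VALUES ('A', 'a-secret'), ('B', 'b-secret')")

user_input = "A' OR user = 'B"  # malicious value sent in the request

# Vulnerable: the input is concatenated into the query, so the quote in the
# payload changes the query structure and B's row is returned as well.
query = "SELECT * FROM data WHERE user = '" + user_input + "'"
print(conn.execute(query).fetchall())

# Safer: a parameterized query treats the input purely as a value.
print(conn.execute("SELECT * FROM data WHERE user = ?", (user_input,)).fetchall())
```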
6
NetShield: Extension of Mitigator
NetShield / Mitigator:
- Monitor traffic (packet types) on the network layer
- Generate firewall rules
- Can only react to attacks already in progress
Extension:
- Network layer vs. application layer: attacks hide in the payload, so traffic on the network layer looks normal
- Approach: a proxy on the application layer
7
2. Key Ideas of DOWeR: Architecture
[Architecture diagram: clients, and potentially attackers, send requests to the Web Service through the DOWeR proxy, which passes each request to a classifier.]
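A minimal sketch of the detection-proxy idea shown in the diagram: every incoming request is scored before it reaches the service. The function names, the threshold value, and the scoring logic are placeholders, not DOWeR's actual implementation.

```python
THRESHOLD = 0.5  # assumed outlier-score threshold

def score_request(payload: bytes) -> float:
    """Placeholder for the outlier score produced by the classifier."""
    return 0.0  # a real classifier would compare the payload to its model

def forward_to_service(payload: bytes) -> str:
    """Placeholder for passing the request on to the real Web Service."""
    return "service response"

def handle_request(payload: bytes) -> str:
    # The proxy scores every request; outliers never reach the service.
    if score_request(payload) > THRESHOLD:
        return "rejected: request looks like an outlier"
    return forward_to_service(payload)

print(handle_request(b"operation=search&term=Security"))
```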
8
k-NN Outlier Detection
- Unlabeled (real) data: does a new document fit the monitored pattern?
- Compute the distance to the k nearest instances (here k = 3), as sketched below
- Compare the score to a threshold computed from the other documents
[Figure: instances plotted in a two-dimensional feature space with axes Feature A and Feature B]
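A minimal sketch of k-NN outlier scoring under the assumptions on this slide: the score of a new instance is its distance to the k-th nearest instance in the model, compared against a precomputed threshold. The two-dimensional points and the threshold value are made-up illustrative data.

```python
from typing import List, Tuple

def knn_score(instance: Tuple[float, float],
              model: List[Tuple[float, float]],
              k: int = 3) -> float:
    # Euclidean distance to every instance in the model, sorted ascending.
    distances = sorted(
        ((instance[0] - m[0]) ** 2 + (instance[1] - m[1]) ** 2) ** 0.5
        for m in model
    )
    return distances[k - 1]  # distance to the k-th nearest neighbor

model = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.2)]
threshold = 0.6  # assumed to be derived from the instances already in the model
new_instance = (3.0, 3.0)
print(knn_score(new_instance, model) > threshold)  # True: flagged as an outlier
```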
9
Language Models as Features
- Payload of a REST request
- Byte distribution (PAYL, NIDES)
- Properties of certain attributes
- N-gram frequencies
[Figure: difference between the 3-gram frequencies of an IIS unicode attack and of normal HTTP traffic]
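A minimal sketch of extracting n-gram frequencies from a request payload, the feature representation described on this slide. The payloads and the choice n = 3 are illustrative assumptions, not DOWeR's actual configuration.

```python
from collections import Counter

def ngram_frequencies(payload: str, n: int = 3) -> Counter:
    """Count every contiguous substring of length n in the payload."""
    return Counter(payload[i:i + n] for i in range(len(payload) - n + 1))

normal = ngram_frequencies("operation=search&term=Security")
attack = ngram_frequencies("operation=search&term=%u0041%u0041%u0041")
# Comparing the two distributions shows how an encoded payload (e.g. an IIS
# unicode attack) shifts the 3-gram frequencies away from normal traffic.
print(normal.most_common(5))
print(attack.most_common(5))
```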
10
Storing N-grams in a Trie Data Structure
[Figure: two example tries storing n-grams along character paths such as b-a-r-n, c-a-r-d and b-a-n-d, c-a-r-d, with a frequency count at each node]
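A minimal sketch of a trie that stores n-grams together with how often each prefix was seen, matching the node counts suggested by the figure. The class and method names are illustrative, not DOWeR's implementation.

```python
class TrieNode:
    def __init__(self):
        self.count = 0
        self.children = {}

class NgramTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, ngram: str) -> None:
        # Increment the count of every node along the n-gram's path.
        node = self.root
        node.count += 1
        for char in ngram:
            node = node.children.setdefault(char, TrieNode())
            node.count += 1

trie = NgramTrie()
for ngram in ["barn", "card", "card", "band"]:
    trie.insert(ngram)
# Shared prefixes (e.g. "ca" in "card") are stored only once, with counts.
print(trie.root.children["c"].children["a"].count)  # 2
```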
11
Distance Between Two Tries
[Figure: the two tries from the previous slide, x and y, compared node by node; counts that appear in only one trie (here 4 and 8) contribute their absolute values, |4| + |8|, to the distance]
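A minimal sketch of one plausible reading of the figure: the distance between two tries is the sum of absolute differences of the stored n-gram counts, with n-grams present in only one trie contributing their full count. This metric and the representation of each trie as a flat count table are assumptions for illustration, not necessarily DOWeR's exact distance.

```python
from collections import Counter

def trie_distance(counts_x: Counter, counts_y: Counter) -> int:
    # Sum of absolute count differences over all n-grams in either trie.
    all_ngrams = set(counts_x) | set(counts_y)
    return sum(abs(counts_x[g] - counts_y[g]) for g in all_ngrams)

x = Counter({"barn": 4, "card": 3})
y = Counter({"band": 8, "card": 2})
# "barn" only in x (|4|), "band" only in y (|8|), "card" differs by 1.
print(trie_distance(x, y))  # 13
```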
12
3. Testing Environment: Receiver Operating Characteristic (ROC)
[Figure: instances ordered by score along an axis, marked as negatives (-) and positives (+), with a threshold that can be slid across the scores]
13
ROC vs. Precision and Recall
ROC and Area Under the Curve (AUC):
- Consider all possible thresholds
- Estimate how well a method separates the data
- Good for comparing methods with each other
Precision and Recall:
- A specific threshold has to be fixed
- Estimate the accuracy of the resulting classifier
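A minimal sketch of the two evaluation views described above, using scikit-learn; the labels, scores, and the fixed threshold are made-up illustrative values, not results from the evaluation.

```python
from sklearn.metrics import roc_auc_score, precision_score, recall_score

labels = [0, 0, 0, 0, 1, 1, 0, 1, 1, 1]            # 1 = attack
scores = [0.1, 0.2, 0.3, 0.35, 0.4, 0.6, 0.65, 0.7, 0.8, 0.9]

# ROC / AUC: considers every possible threshold, measures separability.
print("AUC:", roc_auc_score(labels, scores))

# Precision and recall: require fixing one concrete threshold first.
threshold = 0.5
predictions = [1 if s > threshold else 0 for s in scores]
print("Precision:", precision_score(labels, predictions))
print("Recall:", recall_score(labels, predictions))
```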
14
Architecture of the Testing Environment
[Block diagram: a traffic generator produces a training corpus of attack-free instances and an evaluation corpus of mixed, labeled instances; the classifier is trained on the attack-free instances, a checker compares its output labels against the true labels, and the evaluation computes the quality measures ROC, AUC, precision, and recall.]
15
4. Tested Properties and Potential Issues
16
Tested Properties and Potential Issues
Threshold estimation (one possible estimation strategy is sketched below):
- Good thresholds are hard to find, as they depend on the domain and on the configuration used
- Thresholds might change over time, as the instances in the model and the distribution of requests change
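A minimal sketch of one conceivable way to derive a threshold from the instances already in the model: take the leave-one-out outlier scores of the attack-free training instances and pick a percentile. Both the strategy and the numbers are assumptions for illustration, not DOWeR's actual threshold estimation.

```python
from typing import List

def estimate_threshold(training_scores: List[float], percentile: float = 0.95) -> float:
    """Score below which the given fraction of known-good instances fall."""
    ordered = sorted(training_scores)
    index = min(int(len(ordered) * percentile), len(ordered) - 1)
    return ordered[index]

# Leave-one-out scores of the attack-free training instances (made-up values).
training_scores = [0.12, 0.08, 0.15, 0.11, 0.30, 0.09, 0.14, 0.10]
print(estimate_threshold(training_scores))  # 0.30
```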
17
Tested Properties and Potential Issues
Impact of free text:
- Short terms with a fixed syntax lead to lower scores
- Long terms with no constraints lead to higher scores
Robustness to changes:
[Figure: monitored instances belonging to Service A and Service B, and instances from a new service]
18
Tested Properties and Potential Issues
Possibility of slow alteration of the model: intermediate instances lie close to the normal instances and to the attack to be launched.
Example using k = 1 (panels a, b, c of the figure):
1. The red malicious instance is further away from its nearest neighbor than allowed by the threshold.
2. An orange, specially crafted instance is added to the model.
3. The orange instance becomes the nearest neighbor of the attack; the score of the attack drops below the threshold, resulting in a successful attack.
19
Tested Properties and Potential Issues
Runtime:
- Purely memory-based approach: new instances need to be matched against all instances in the model
- Critical, as overloading the classifier might itself result in a DoS
Runtime in ms by model size and n-gram length:
Model size   n=1   n=2   n=3   n=4
1000          38   128   226   314
2500          89   314   552   769
5000         181   616  1084  1542
20
5. Outlook and Conclusion
Aim: securing existing Web Services without in-depth security knowledge
- In general, the method works well: 90% accuracy with no false positives
- Requires manual tuning of attributes
Possible improvements:
- Threshold estimation: adjustment to changes in the domain, adaptive threshold
- Outlier score and algorithm
- Runtime: pre-clustering of the model
21
6. Demo