Data Mining for Malware Detection
Lecture #2, May 27, 2011
Dr. Bhavani Thuraisingham, The University of Texas at Dallas
What is Data Mining?
- Synonyms and related terms: knowledge mining, knowledge discovery in databases, data archaeology, data dredging, database mining, knowledge extraction, data pattern processing, information harvesting, "siftware".
- Definition: the process of discovering meaningful new correlations, patterns, and trends, often previously unknown, by sifting through large amounts of data using pattern recognition technologies and statistical and mathematical techniques (Thuraisingham, Data Mining, CRC Press, 1998).
What's Going On in Data Mining?
- What are the technologies for data mining? Database management, data warehousing, machine learning, statistics, pattern recognition, visualization, parallel processing.
- What can data mining do for you? Outcomes: classification, clustering, association, anomaly detection, prediction, estimation, ...
- How do you carry out data mining? Techniques: decision trees, neural networks, market-basket analysis, link analysis, genetic algorithms, ...
- What is the current status? Many commercial products mine relational databases.
- What are some of the challenges? Mining unstructured data, extracting useful patterns, web mining, and data mining security and privacy.
Data Mining for Intrusion Detection: Problem
- An intrusion can be defined as "any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource".
- Attacks are host-based or network-based.
- Intrusion detection systems fall into two groups: anomaly detection systems and misuse detection systems.
- Audit logs capture all activities in the network and on hosts, but the amount of data is huge!
Misuse Detection
(figure-only slide)
Problem: Anomaly Detection
(figure-only slide)
Our Approach: Overview
(flow diagram: training data feeds hierarchical clustering with DGSOT and SVM training to produce class models; testing data is classified by the trained SVM)
DGSOT: dynamically growing self-organizing tree
Our Approach: Hierarchical Clustering
(flow chart: hierarchical clustering combined with SVM)
Results: Training Time, FP and FN Rates of Various Methods

Method                  Average Accuracy  Total Training Time  Avg FP Rate (%)  Avg FN Rate (%)
Random Selection        52%               0.44 hours           40               47
Pure SVM                57.6%             17.34 hours          35.5             42
SVM + Rocchio Bundling  51.6%             26.7 hours           44.2             48
SVM + DGSOT             69.8%             13.18 hours          37.8             29.8
Introduction: Detecting Malicious Executables Using Data Mining
- What are malicious executables?
  - Programs that harm computer systems: viruses, exploits, denial of service (DoS), flooders, sniffers, spoofers, Trojans, etc.
  - They exploit a software vulnerability on a victim and may remotely infect other victims.
  - They incur great losses; for example, the Code Red epidemic cost $2.6 billion.
- Malicious code detection, traditional approach:
  - Signature based; signatures must be generated by human experts, so it is not effective against "zero day" attacks.
State of the Art in Automated Detection
- Behavioural approaches: analyse behaviours such as source and destination address, attachment type, statistical anomaly, etc.
- Content-based approaches: analyse the content of the malicious executable.
  - Autograph (H. Ah-Kim, CMU): based on an automated signature generation process.
  - N-gram analysis (Maloof, M. A., et al.): based on mining features and using machine learning.
Our New Ideas (Khan, Masud and Thuraisingham)
- Content-based approaches consider only machine code (byte code).
- Is it possible to consider higher-level source code for malicious code detection?
- Yes: disassemble the binary executable and retrieve the assembly program.
- Extract important features from the assembly program and combine them with machine-code features.
The Hybrid Feature Retrieval Model
- Feature extraction:
  - Binary n-gram features: sequences of n consecutive bytes of the binary executable.
  - Assembly n-gram features: sequences of n consecutive assembly instructions.
  - System API call features.
- Collect training samples of normal and malicious executables and extract features.
- Train a classifier and build a model.
- Test the model against test samples.
Hybrid Feature Retrieval (HFR): Training and Testing
(figure-only slide)
Feature Extraction: Binary n-gram Features
- Features are extracted from the byte code in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
- Example: given the 11-byte sequence 0123456789abcdef012345,
  - the 2-grams (2-byte sequences) are: 0123, 2345, 4567, 6789, 89ab, abcd, cdef, ef01, 0123, 2345;
  - the 4-grams (4-byte sequences) are: 01234567, 23456789, 456789ab, ..., ef012345; and so on.
- Problem: large dataset; too many features (millions!).
- Solution: use secondary memory and efficient data structures; apply feature selection.
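The sliding-window extraction just described can be sketched in a few lines (a minimal Python illustration, not the authors' implementation):

```python
def byte_ngrams(data, n):
    """All n-byte windows, slid one byte at a time, over a byte sequence."""
    return [data[i:i + n] for i in range(len(data) - n + 1)]

# The slide's 11-byte example sequence.
sample = bytes.fromhex("0123456789abcdef012345")
two_grams = byte_ngrams(sample, 2)    # 10 windows: 0123, 2345, ..., 2345
four_grams = byte_ngrams(sample, 4)   # 8 windows: 01234567, ..., ef012345
```

Note that windows overlap: an 11-byte input yields ten 2-grams, which is why the feature space explodes to millions of candidates on real executables.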
Feature Extraction: Assembly n-gram Features
- Features are extracted from the assembly programs in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
- Example: given the three instructions "push eax"; "mov eax, dword[0f34]"; "add ecx, eax"; the 2-grams are:
  (1) "push eax"; "mov eax, dword[0f34]"
  (2) "mov eax, dword[0f34]"; "add ecx, eax"
- Problem: the same as for binary n-grams.
- Solution: the same as for binary n-grams.
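The assembly variant is the same sliding window, but over disassembled instructions rather than raw bytes (again a sketch, not the authors' code):

```python
def instruction_ngrams(instructions, n):
    """All runs of n consecutive instructions in a disassembled program."""
    return [tuple(instructions[i:i + n])
            for i in range(len(instructions) - n + 1)]

# The slide's three-instruction example.
prog = ["push eax", "mov eax, dword[0f34]", "add ecx, eax"]
grams = instruction_ngrams(prog, 2)
```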
Feature Selection
- Select the best K features.
- Selection criterion: information gain.
- The gain of an attribute A on a collection of examples S is given by
  Gain(S, A) = Entropy(S) - Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)
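The gain criterion can be computed directly from its definition (a generic sketch; the dict-of-attributes representation and key names are illustrative, not the paper's data format):

```python
from math import log2

def entropy(labels):
    """Shannon entropy of a sequence of class labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(examples, attr, label):
    """Gain(S, A) = Entropy(S) - sum over values v of A of
    (|S_v| / |S|) * Entropy(S_v). Examples are dicts keyed by
    attribute names; `attr` and `label` select the columns."""
    labels = [e[label] for e in examples]
    gain = entropy(labels)
    n = len(examples)
    for v in set(e[attr] for e in examples):
        subset = [e[label] for e in examples if e[attr] == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain
```

A feature that perfectly separates malicious from benign samples achieves the maximum gain (the full label entropy), while a constant feature scores zero, which is what makes the criterion usable for ranking millions of candidate n-grams.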
Experiments
- Dataset:
  - Dataset1: 838 malicious and 597 benign executables.
  - Dataset2: 1082 malicious and 1370 benign executables.
  - Malicious code was collected from VX Heavens (http://vx.netlux.org).
- Disassembly: Pedisassem (http://www.geocities.com/~sangcho/index.html).
- Training and testing: Support Vector Machine (SVM); C-support vector classifiers with an RBF kernel.
Results (figure-only slides)
- HFS = Hybrid Feature Set
- BFS = Binary Feature Set
- AFS = Assembly Feature Set
Future Plans
- System API calls seem to be very useful; we need to consider:
  - the frequency of calls,
  - call sequence patterns (following program paths),
  - actions immediately preceding or following a call.
- Detect malicious code by program slicing (requires further analysis).
Data Mining for Buffer Overflow: Introduction
- Goal: intrusion detection, e.g. worm attacks and buffer overflow attacks.
- Main contributions:
  - 'Worm' code detection by data mining coupled with 'reverse engineering'.
  - Buffer overflow detection by combining data mining with static analysis of assembly code.
Background
- What is a 'buffer overflow'? A situation in which a fixed-size buffer is overflown by a larger input.
- How does it happen? For example:

    char buff[100];
    gets(buff);

  (diagram: the input string overruns buff in stack memory)
Background (cont.)
- Then what? With the code above, an over-long input to gets(buff) runs past buff and overwrites the return address on the stack; the new return address points to the memory location holding the attacker's code.
Background (cont.)
- So what? The program may crash, or the attacker can execute arbitrary code, which can then:
  - execute any system function;
  - communicate with some host, download some 'worm' code and install it;
  - open a backdoor to take full control of the victim.
- How do we stop it?
Background (cont.)
- Stopping buffer overflow: preventive approaches and detection approaches.
- Preventive approaches:
  - Finding bugs in source code. Problem: only works when source code is available.
  - Compiler extensions. Same problem.
  - OS/hardware modification.
- Detection approaches:
  - Capturing code-running symptoms. Problem: may require long running times.
  - Automatically generating signatures of buffer overflow attacks.
CodeBlocker (Our Approach)
- A detection approach based on the observation that attack messages usually contain code while normal messages contain data.
- Main idea: check whether a message contains code.
- Problem to solve: distinguishing code from data.
Some Statistics
Statistics supporting this observation:
(a) On Windows platforms:
  - most web servers (port 80) accept data only;
  - remote access services (ports 111, 137, 138, 139) accept data only;
  - Microsoft SQL Servers (port 1434) accept data only;
  - workstation services (ports 139 and 445) accept data only.
(b) On Linux platforms:
  - most Apache web servers (port 80) accept data only;
  - BIND (port 53) accepts data only;
  - SNMP (port 161) accepts data only;
  - most mail transport (port 25) accepts data only;
  - database servers (Oracle, MySQL, PostgreSQL) at ports 1521, 3306 and 5432 accept data only.
Severity of the Problem
- It is not easy to detect the actual instruction sequence in a given string of bits.
Our Solution
- Apply data mining: formulate the problem as a two-class classification problem (code vs. data).
- Collect a set of training examples containing both classes.
- Train on the data with a machine learning algorithm to get a model.
- Test the model against new messages.
CodeBlocker Model
(figure-only slide)
Feature Extraction
(figure-only slide)
Disassembly
- We apply the SigFree tool, implemented by Xinran Wang et al. (Penn State).
Feature Extraction
- Features are extracted using n-gram analysis and control-flow analysis.
- N-gram analysis (slide shows an assembly program and its corresponding IFG):
  - What is an n-gram? A sequence of n instructions.
  - Traditional approach: the flow of control is ignored.
  - The 2-grams are: 02, 24, 46, ..., CE.
Feature Extraction (cont.)
- Control-flow based n-gram analysis (slide shows an assembly program and its corresponding IFG):
  - What is an n-gram? A sequence of n instructions.
  - Proposed control-flow based approach: the flow of control is considered.
  - The 2-grams are: 02, 24, 46, ..., CE, E6.
Feature Extraction (cont.)
- Control-flow analysis generates these features:
  - Invalid Memory Reference (IMR)
  - Undefined Register (UR)
  - Invalid Jump Target (IJT)
- Checking IMR: memory is referenced using register addressing while the register value is undefined, e.g. mov ax, [dx + 5].
- Checking UR: check whether the register value is set properly.
- Checking IJT: check whether the jump target violates an instruction boundary.
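To make the UR/IMR idea concrete, here is a toy checker over symbolic instructions; the tuple encoding and register parsing are illustrative assumptions, not SigFree's or CodeBlocker's actual representation:

```python
def undefined_register_refs(instructions):
    """Walk (op, dest, src) tuples, track which registers have been
    assigned, and flag any memory reference (a src like '[dx + 5]')
    made through a register that was never defined."""
    defined = set()
    flagged = []
    for i, (op, dest, src) in enumerate(instructions):
        if src and src.startswith("["):
            reg = src.strip("[]").split("+")[0].strip()
            if reg not in defined:
                flagged.append(i)  # invalid memory reference via undefined reg
        if op == "mov":
            defined.add(dest)     # dest register now holds a defined value
    return flagged

prog = [("mov", "ax", "[dx + 5]"),  # dx never set: should be flagged
        ("mov", "bx", "5"),
        ("mov", "cx", "[bx + 2]")]  # bx was defined: fine
```

The intuition, as on the slide, is that disassembled data tends to produce such anomalies far more often than genuine code.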
Putting It Together
- Why n-gram analysis? Intuition: in general, disassembled executables should show a different pattern of instruction usage than disassembled data.
- Why control-flow analysis? Intuition: genuine code should contain no invalid memory references or invalid jump targets.
- Approach:
  - Compute all possible n-grams.
  - Select the best k of them.
  - Compute a binary feature vector for each training example.
  - Supply these vectors to the training algorithm.
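The select-best-k and binary-vector steps can be sketched as follows; note that a simple document-frequency difference stands in here for the information-gain criterion the approach actually uses:

```python
from collections import Counter

def select_top_k(code_samples, data_samples, k):
    """Rank candidate n-grams by how much more often they occur in
    code samples than in data samples, and keep the top k.
    (Stand-in criterion for illustration only.)"""
    code_df = Counter(g for s in code_samples for g in set(s))
    data_df = Counter(g for s in data_samples for g in set(s))
    candidates = set(code_df) | set(data_df)
    return sorted(candidates, key=lambda g: data_df[g] - code_df[g])[:k]

def binary_vector(sample_ngrams, selected):
    """Binary feature vector: 1 iff the selected n-gram occurs in the sample."""
    present = set(sample_ngrams)
    return [1 if g in present else 0 for g in selected]

sel = select_top_k([["02", "24"], ["02", "46"]],   # toy code samples
                   [["99"], ["99", "24"]], 2)      # toy data samples
vec = binary_vector(["02", "99"], sel)
```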
Experiments
- Dataset: real traces of normal messages, real attack messages, and polymorphic shellcodes.
- Training and testing: Support Vector Machine (SVM).
Results (figure-only slide)
- CFBn: control-flow based n-gram feature
- CFF: control-flow feature
Novelty, Advantages, Limitations, Future Work
- Novelty:
  - We introduce the notion of control-flow based n-grams.
  - We combine control-flow analysis with data mining to distinguish code from data.
  - Significant improvement over other methods (e.g. SigFree).
- Advantages: fast testing; signature-free operation; low overhead; robust against many obfuscations.
- Limitations:
  - Needs samples of attack and normal messages.
  - May not be able to detect a completely new type of attack.
- Future work: find more features; apply dynamic analysis techniques; semantic analysis.
Worm Detection: Introduction
- What are worms? Self-replicating programs that exploit a software vulnerability on a victim and remotely infect other victims.
- Evil worms have severe effects; the Code Red epidemic cost $2.6 billion.
- Goal of worm detection: real-time detection.
- Issues: substantial volumes of identical traffic; random probing.
- Methods for worm detection: count the number of sources/destinations; count the number of failed connection attempts.
- Worm types: email worms, instant messaging worms, Internet worms, IRC worms, file-sharing network worms.
- Automatic signature generation is possible: EarlyBird System (S. Singh, UCSD); Autograph (H. Ah-Kim, CMU).
Email Worm Detection Using Data Mining
(flow diagram: outgoing emails -> feature extraction -> classifier -> clean or infected; training data plus machine learning produce the model, which is applied to test data)
- Task: given training instances of both "normal" and "viral" emails, induce a hypothesis to detect "viral" emails.
- We used Naïve Bayes and SVM.
Assumptions
- Features are based on outgoing emails.
- Different users have different "normal" behaviour, so analysis should be on a per-user basis.
- Two groups of features:
  - Per email (e.g. number of attachments, HTML in body, text/binary attachments).
  - Per window (e.g. mean words in body, variance of words in subject).
- A total of 24 features were identified.
- Goal: identify "normal" and "viral" emails based on these features.
Feature Sets
- Per-email features:
  - Binary-valued: presence of HTML; script tags/attributes; embedded images; hyperlinks; presence of binary or text attachments; MIME types of file attachments.
  - Continuous-valued: number of attachments; number of words/characters in the subject and body.
- Per-window features: number of emails sent; number of unique email recipients; number of unique sender addresses; average number of words/characters per subject and body; average word length; variance in the number of words/characters per subject and body; variance in word length; ratio of emails with attachments.
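A subset of the per-window features listed above could be computed along these lines; the email field names ("subject", "body", "to", "attachments") are hypothetical, since the slide does not specify the email representation:

```python
from statistics import mean, pvariance

def window_features(emails):
    """Per-window features over a window of outgoing emails: counts,
    unique recipients, mean/variance of words per subject and body,
    and the ratio of emails carrying attachments."""
    subj_words = [len(e["subject"].split()) for e in emails]
    body_words = [len(e["body"].split()) for e in emails]
    return {
        "num_emails": len(emails),
        "unique_recipients": len({r for e in emails for r in e["to"]}),
        "mean_subject_words": mean(subj_words),
        "var_subject_words": pvariance(subj_words),
        "mean_body_words": mean(body_words),
        "var_body_words": pvariance(body_words),
        "attachment_ratio": sum(1 for e in emails if e["attachments"]) / len(emails),
    }
```

Per-email features would be computed the same way but from a single message, and the two groups concatenated into the 24-feature vector the slides mention.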
Data Mining Approach
(flow diagram: a test instance goes to the classifier, SVM or Naïve Bayes, which labels it clean or infected)
Data Set
- Collected from UC Berkeley; contains instances of both normal and viral emails.
- Six worm types: bagle.f, bubbleboy, mydoom.m, mydoom.u, netsky.d, sobig.f.
- Originally six sets of data:
  - training instances: normal (400) + five worms (5 x 200);
  - testing instances: normal (1200) + the sixth worm (200).
- Problem: not balanced, and no cross-validation reported.
- Solution: rearrange the data and apply cross-validation.
Our Implementation and Analysis
- Implementation:
  - Naïve Bayes: assume a normal distribution of numeric and real data; smoothing applied.
  - SVM: one-class SVM with a radial basis function kernel, using "gamma" = 0.015 and "nu" = 0.1.
- Analysis:
  - NB alone performs better than the other techniques.
  - SVM alone also performs better if its parameters are set correctly.
  - The mydoom.m and VBS.Bubbleboy data sets are not sufficient (very low detection accuracy in all classifiers).
  - The feature-based approach seems to be useful only when we have identified the relevant features and gathered enough training data.
  - Implement classifiers with the best parameter settings.
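The Naïve Bayes variant described above, a per-class normal distribution on each numeric feature, can be sketched from scratch; this is an illustration, not the authors' code, and the small floor on the standard deviation is one possible stand-in for the smoothing the slide mentions:

```python
from math import log, pi, sqrt
from statistics import mean, pstdev

class GaussianNB:
    """Naive Bayes fitting a Gaussian to each numeric feature per class."""

    def fit(self, X, y):
        self.priors, self.params = {}, {}
        for c in set(y):
            rows = [x for x, label in zip(X, y) if label == c]
            self.priors[c] = len(rows) / len(y)
            # Per-feature (mean, std dev); the 1e-3 floor avoids
            # zero-variance features and acts as crude smoothing.
            self.params[c] = [(mean(col), max(pstdev(col), 1e-3))
                              for col in zip(*rows)]
        return self

    def predict(self, x):
        # Work in log space so tiny densities cannot underflow to zero.
        def log_posterior(c):
            s = log(self.priors[c])
            for v, (m, sd) in zip(x, self.params[c]):
                s += -((v - m) ** 2) / (2 * sd * sd) - log(sqrt(2 * pi) * sd)
            return s
        return max(self.params, key=log_posterior)
```

A one-class SVM with an RBF kernel (the slide's gamma = 0.015, nu = 0.1 setting) would instead be trained on "clean" emails only and flag outliers; that requires an SVM library and is omitted here.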