Malicious Code Detection and Security Applications


Malicious Code Detection and Security Applications Prof. Bhavani Thuraisingham The University of Texas at Dallas September 8, 2008 Lecture #5

Outline Data mining overview Intrusion detection and malicious code detection (worms and viruses) Digital forensics and UTD work Algorithms for digital forensics

What is Data Mining? Also known as: Information Harvesting, Knowledge Mining, Knowledge Discovery in Databases, Data Archaeology, Data Dredging, Database Mining, Knowledge Extraction, Data Pattern Processing, Siftware. The process of discovering meaningful new correlations, patterns, and trends, often previously unknown, by sifting through large amounts of data using pattern recognition technologies and statistical and mathematical techniques (Thuraisingham, Data Mining, CRC Press 1998)

What’s going on in data mining?
What are the technologies for data mining? Database management, data warehousing, machine learning, statistics, pattern recognition, visualization, parallel processing
What can data mining do for you? Data mining outcomes: classification, clustering, association, anomaly detection, prediction, estimation, . . .
How do you carry out data mining? Data mining techniques: decision trees, neural networks, market-basket analysis, link analysis, genetic algorithms, . . .
What is the current status? Many commercial products mine relational databases
What are some of the challenges? Mining unstructured data, extracting useful patterns, web mining; data mining, security and privacy

Data Mining for Intrusion Detection: Problem An intrusion can be defined as “any set of actions that attempt to compromise the integrity, confidentiality, or availability of a resource.”
Attacks fall into two categories: host-based attacks and network-based attacks
Intrusion detection systems are split into two groups: anomaly detection systems and misuse detection systems
Both use audit logs, which capture all activities in the network and hosts. But the amount of data is huge!

Misuse Detection

Problem: Anomaly Detection

Our Approach: Overview (flow chart): training data → hierarchical clustering (DGSOT: dynamically growing self-organizing tree) → SVM training per class; the trained model is then applied to the testing data

Our Approach: Hierarchical Clustering (flow chart: hierarchical clustering with SVM)

Results Training time, FP and FN rates of various methods

Method                  Average Accuracy   Total Training Time   Avg FP Rate (%)   Avg FN Rate (%)
Random Selection        52%                0.44 hours            40                47
Pure SVM                57.6%              17.34 hours           35.5              42
SVM + Rocchio Bundling  51.6%              26.7 hours            44.2              48
SVM + DGSOT             69.8%              13.18 hours           37.8              29.8

Introduction: Detecting Malicious Executables using Data Mining What are malicious executables? Programs that harm computer systems: viruses, exploits, denial of service (DoS), flooders, sniffers, spoofers, Trojans, etc. They exploit software vulnerabilities on a victim, may remotely infect other victims, and incur great loss. Example: the Code Red epidemic cost $2.6 billion. Malicious code detection, the traditional approach: signature based. Signatures must be generated by human experts, so it is not effective against “zero-day” attacks.

State of the Art in Automated Detection Automated detection approaches: Behavioral: analyze behaviors like source and destination address, attachment type, statistical anomaly, etc. Content-based: analyze the content of the malicious executable. Autograph (H. Ah-Kim, CMU): based on an automated signature generation process. N-gram analysis (Maloof, M. A. et al.): based on mining features and using machine learning.

Our New Ideas (Khan, Masud and Thuraisingham) Content-based approaches consider only machine code (byte code). Is it possible to consider higher-level source code for malicious code detection? Yes: Disassemble the binary executable and retrieve the assembly program Extract important features from the assembly program Combine with machine-code features

Feature Extraction Binary n-gram features Sequence of n consecutive bytes of binary executable Assembly n-gram features Sequence of n consecutive assembly instructions System API call features DLL function call information

The Hybrid Feature Retrieval Model Collect training samples of normal and malicious executables; extract features; train a classifier and build a model; test the model against test samples

Hybrid Feature Retrieval (HFR) Training

Hybrid Feature Retrieval (HFR) Testing

Feature Extraction Binary n-gram features Features are extracted from the byte code in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
Example: given the 11-byte sequence 0123456789abcdef012345,
the 2-grams (2-byte sequences) are: 0123, 2345, 4567, 6789, 89ab, abcd, cdef, ef01, 0123, 2345
the 4-grams (4-byte sequences) are: 01234567, 23456789, 456789ab, ..., ef012345, and so on
Problem: large dataset, too many features (millions!)
Solution: use secondary memory and efficient data structures; apply feature selection
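As a quick illustration of the binary n-gram extraction above, here is a minimal sketch in Python (the function name is mine, not the authors'); each byte prints as two hex characters, matching the slide's notation:

```python
# Minimal sketch of binary n-gram extraction (illustrative, not the
# authors' implementation). A sliding window of n consecutive bytes
# is emitted as a hex string.
def byte_ngrams(data: bytes, n: int):
    """Yield every n consecutive bytes as a hex string."""
    for i in range(len(data) - n + 1):
        yield data[i:i + n].hex()

seq = bytes.fromhex("0123456789abcdef012345")  # the 11-byte example above
print(list(byte_ngrams(seq, 2)))
# ['0123', '2345', '4567', '6789', '89ab', 'abcd', 'cdef', 'ef01', '0123', '2345']
```

Because the windows overlap, the number of distinct n-grams grows very quickly with corpus size, which is why the slide stresses efficient data structures and feature selection.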

Feature Extraction Assembly n-gram features Features are extracted from the assembly programs in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on.
Example: given the three instructions “push eax”; “mov eax, dword[0f34]”; “add ecx, eax”, the 2-grams are (1) “push eax”; “mov eax, dword[0f34]” and (2) “mov eax, dword[0f34]”; “add ecx, eax”
Problem: same as for binary n-grams. Solution: same as well.

Feature Selection Select the best K features. Selection criterion: information gain. The gain of an attribute A on a collection of examples S is given by Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)
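For concreteness, here is a minimal information-gain computation over discrete feature values (the standard definition; the function names and toy data are mine, for illustration only):

```python
# Information gain for feature selection: an illustrative implementation
# of Gain(S, A) = Entropy(S) - sum_v |S_v|/|S| * Entropy(S_v).
from collections import Counter
from math import log2

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def info_gain(examples, labels, feature):
    total = len(labels)
    gain = entropy(labels)
    by_value = {}  # partition the labels by the feature's value
    for x, y in zip(examples, labels):
        by_value.setdefault(x[feature], []).append(y)
    for subset in by_value.values():
        gain -= len(subset) / total * entropy(subset)
    return gain

# Feature "f" perfectly separates the two classes, so its gain equals
# the full entropy of the collection: 1 bit.
X = [{"f": 1}, {"f": 1}, {"f": 0}, {"f": 0}]
y = ["malicious", "malicious", "benign", "benign"]
print(info_gain(X, y, "f"))  # 1.0
```

Ranking every candidate n-gram by this score and keeping the top K is what "select best K features" amounts to.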

Experiments Dataset1: 838 malicious and 597 benign executables; malicious code collected from VX Heavens (http://vx.netlux.org) Disassembly: Pedisassem ( http://www.geocities.com/~sangcho/index.html ) Training, testing: Support Vector Machine (SVM), C-Support Vector Classifiers with an RBF kernel

Results (three charts) HFS = Hybrid Feature Set BFS = Binary Feature Set AFS = Assembly Feature Set

Future Plans System calls seem to be very useful. Need to consider: frequency of calls, call sequence patterns (following program paths), and actions immediately preceding or following a call. Detecting malicious code by program slicing requires further analysis.

Data Mining for Buffer Overflow Introduction Goal: intrusion detection, e.g. worm attacks, buffer overflow attacks. Main contributions: 'worm' code detection by data mining coupled with 'reverse engineering'; buffer overflow detection by combining data mining with static analysis of assembly code.

Background What is a 'buffer overflow'? A situation where a fixed-size buffer is overflown by a larger input. How does it happen? Example: char buff[100]; gets(buff); — the input string is written past the end of buff in stack memory.

Background (cont...) Then what? (diagram) The return address on the stack is overwritten; the new return address points to the memory location holding the attacker's code.

Background (cont...) So what? The program may crash, or the attacker can execute arbitrary code, which can then execute any system function, communicate with some host to download and install 'worm' code, or open a backdoor to take full control of the victim. How to stop it?

Background (cont...) Stopping buffer overflow
Preventive approaches: finding bugs in source code (problem: can only work when source code is available); compiler extensions (same problem); OS/HW modification
Detection approaches: capture code running symptoms (problem: may require long running time); automatically generating signatures of buffer overflow attacks

CodeBlocker (our approach) A detection approach based on the observation that attack messages usually contain code while normal messages contain data. Main idea: check whether a message contains code. Problem to solve: distinguishing code from data.

Severity of the problem It is not easy to recover the actual instruction sequence from a given string of bits

Our solution Apply data mining: formulate the problem as a classification problem (code vs. data), collect a set of training examples containing both kinds of instances, train with a machine learning algorithm to obtain a model, then test the model against new messages

CodeBlocker Model

Feature Extraction

Disassembly We apply the SigFree tool implemented by Xinran Wang et al. (Penn State)

Feature extraction What is an n-gram? A sequence of n instructions. Features are extracted using n-gram analysis and control flow analysis. Traditional approach: flow of control is ignored. 2-grams are: 02, 24, 46, ..., CE (figure: assembly program and corresponding IFG)

Feature extraction (cont...) Control-flow based n-gram analysis Proposed control-flow based approach: flow of control is considered. 2-grams are: 02, 24, 46, ..., CE, E6 (figure: assembly program and corresponding IFG)

Feature extraction (cont...) Control flow analysis. Generated features: Invalid Memory Reference (IMR), Undefined Register (UR), Invalid Jump Target (IJT)
Checking IMR: memory is referenced using register addressing and the register value is undefined, e.g.: mov ax, [dx + 5]
Checking UR: check whether the register value is set properly
Checking IJT: check whether the jump target violates an instruction boundary
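The IJT check can be pictured with a toy example: a jump target is flagged when it does not land on a decoded instruction boundary. This is a sketch only; the offsets and function name are made up for illustration:

```python
# Toy version of the Invalid Jump Target (IJT) check described above:
# a target is invalid if it falls inside an instruction rather than on
# a decoded instruction boundary. Offsets are illustrative.
def invalid_jump_targets(instruction_offsets, jump_targets):
    boundaries = set(instruction_offsets)
    return [t for t in jump_targets if t not in boundaries]

offsets = [0, 2, 4, 7, 9]  # start offsets of the decoded instructions
print(invalid_jump_targets(offsets, [4, 5, 9]))  # [5]
```

Here the jump to offset 5 lands in the middle of the instruction that starts at offset 4, so it is reported as invalid.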

Putting it together Why n-gram analysis? Intuition: in general, disassembled executables should have a different pattern of instruction usage than disassembled data. Why control flow analysis? Intuition: there should be no invalid memory references or invalid jump targets. Approach Compute all possible n-grams Select best k of them Compute feature vector (binary vector) for each training example Supply these vectors to the training algorithm
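The steps above can be sketched as follows. This is illustrative only: a simple frequency ranking stands in for the information-gain selection, and all names are mine:

```python
# Sketch of the pipeline above: compute n-grams over instruction
# sequences, keep the best k, and encode each training example as a
# binary presence/absence feature vector for the learning algorithm.
from collections import Counter

def ngrams(tokens, n=2):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def select_best_k(examples, k):
    # Stand-in for information-gain ranking: keep the k grams that
    # occur in the most examples.
    counts = Counter(g for ex in examples for g in set(ngrams(ex)))
    return [g for g, _ in counts.most_common(k)]

def to_vector(example, selected):
    present = set(ngrams(example))
    return [1 if g in present else 0 for g in selected]

examples = [["push", "eax", "mov", "eax"], ["mov", "eax", "add", "ecx"]]
feats = select_best_k(examples, k=3)
vectors = [to_vector(ex, feats) for ex in examples]  # fed to the trainer
```

Each binary vector then becomes one training (or test) instance for the classifier.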

Experiments Dataset Real traces of normal messages Real attack messages Polymorphic shellcodes Training, Testing Support Vector Machine (SVM)

Results CFBn: Control-Flow Based n-gram feature CFF: Control-flow feature

Novelty, Advantages, Limitations, Future
Novelty: we introduce the notion of control-flow based n-grams and combine control flow analysis with data mining to distinguish code from data; significant improvement over other methods (e.g. SigFree)
Advantages: fast testing; signature-free operation; low overhead; robust against many obfuscations
Limitations: needs samples of attack and normal messages; may not be able to detect a completely new type of attack
Future: find more features; apply dynamic analysis techniques; semantic analysis

Analysis of Firewall Policy Rules Using Data Mining Techniques The firewall is the de facto core technology of today's network security and the first line of defense against external network attacks and threats. A firewall controls or governs network access by allowing or denying incoming or outgoing network traffic according to firewall policy rules. Manual definition of rules often results in anomalies in the policy, and detecting and resolving these anomalies manually is a tedious and error-prone task. Solutions: Anomaly detection: a theoretical framework for the resolution of anomalies; a new algorithm simultaneously detects and resolves any anomaly present in the policy rules. Traffic mining: mine the traffic and detect anomalies.

Traffic Mining To bridge the gap between what is written in the firewall policy rules and what is observed in the network, analyze the traffic and the packet log: traffic mining. Network traffic trends may show that some rules are outdated or not used recently. (Flow chart: firewall log file → mining log file using frequency → filtering → rule generalization → generic rules → identify decaying and dominant rules → edit firewall rules → firewall policy rule)

Anomaly Discovery Result Traffic Mining Results
1: TCP,INPUT,129.110.96.117,ANY,*.*.*.*,80,DENY
2: TCP,INPUT,*.*.*.*,ANY,*.*.*.*,80,ACCEPT
3: TCP,INPUT,*.*.*.*,ANY,*.*.*.*,443,DENY
4: TCP,INPUT,129.110.96.117,ANY,*.*.*.*,22,DENY
5: TCP,INPUT,*.*.*.*,ANY,*.*.*.*,22,ACCEPT
6: TCP,OUTPUT,129.110.96.80,ANY,*.*.*.*,22,DENY
7: UDP,OUTPUT,*.*.*.*,ANY,*.*.*.*,53,ACCEPT
8: UDP,INPUT,*.*.*.*,53,*.*.*.*,ANY,ACCEPT
9: UDP,OUTPUT,*.*.*.*,ANY,*.*.*.*,ANY,DENY
10: UDP,INPUT,*.*.*.*,ANY,*.*.*.*,ANY,DENY
11: TCP,INPUT,129.110.96.117,ANY,129.110.96.80,22,DENY
12: TCP,INPUT,129.110.96.117,ANY,129.110.96.80,80,DENY
13: UDP,INPUT,*.*.*.*,ANY,129.110.96.80,ANY,DENY
14: UDP,OUTPUT,129.110.96.80,ANY,129.110.10.*,ANY,DENY
15: TCP,INPUT,*.*.*.*,ANY,129.110.96.80,22,ACCEPT
16: TCP,INPUT,*.*.*.*,ANY,129.110.96.80,80,ACCEPT
17: UDP,INPUT,129.110.*.*,53,129.110.96.80,ANY,ACCEPT
18: UDP,OUTPUT,129.110.96.80,ANY,129.110.*.*,53,ACCEPT
Rule 1, Rule 2: ==> GENERALIZATION
Rule 1, Rule 16: ==> CORRELATED
Rule 2, Rule 12: ==> SHADOWED
Rule 4, Rule 5: ==> GENERALIZATION
Rule 4, Rule 15: ==> CORRELATED
Rule 5, Rule 11: ==> SHADOWED
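One of the anomaly classes above, shadowing, can be checked pairwise: a later rule is shadowed when an earlier rule with a different action already matches everything the later rule matches. The sketch below is a deliberately simplified toy (the field layout follows the rule listing above — protocol, direction, source, source port, destination, destination port, action — but the matching logic is mine and ignores subnet wildcards like 129.110.*.*):

```python
# Toy pairwise check for the "shadowed" firewall-policy anomaly.
# Simplified, illustrative matching: a field is covered only by an
# exact match or a full wildcard ("*.*.*.*" / "ANY").
def field_covers(general, specific):
    return general in ("*.*.*.*", "ANY") or general == specific

def covers(r1, r2):
    """True if rule r1 matches every packet that rule r2 matches."""
    return all(field_covers(a, b) for a, b in zip(r1[:6], r2[:6]))

def shadowed_pairs(rules):
    out = []
    for i, r1 in enumerate(rules):
        for j in range(i + 1, len(rules)):
            r2 = rules[j]
            if covers(r1, r2) and r1[6] != r2[6]:
                out.append((i + 1, j + 1))  # 1-based, like the listing
    return out

rules = [
    ("TCP", "INPUT", "*.*.*.*", "ANY", "*.*.*.*", "80", "ACCEPT"),
    ("TCP", "INPUT", "129.110.96.117", "ANY", "129.110.96.80", "80", "DENY"),
]
print(shadowed_pairs(rules))  # [(1, 2)]
```

This mirrors the "Rule 2, Rule 12: ==> SHADOWED" finding above: the specific DENY can never fire because the earlier general ACCEPT matches first.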

Worm Detection: Introduction What are worms? Self-replicating programs that exploit software vulnerabilities on a victim and remotely infect other victims. Worms can have severe effects; the Code Red epidemic cost $2.6 billion.
Goals of worm detection: real-time detection. Issues: substantial volume of identical traffic, random probing
Methods for worm detection: count number of sources/destinations; count number of failed connection attempts
Worm types: email worms, instant messaging worms, Internet worms, IRC worms, file-sharing network worms
Automatic signature generation is possible: EarlyBird system (S. Singh, UCSD); Autograph (H. Ah-Kim, CMU)

Email Worm Detection using Data Mining Task: given some training instances of both “normal” and “viral” emails, induce a hypothesis to detect “viral” emails. We used: Naïve Bayes, SVM. (Flow chart: outgoing emails → feature extraction → machine learning over training data → the model → classifier over test data → clean or infected?)

Assumptions Features are based on outgoing emails. Different users have different “normal” behaviour, so analysis should be done on a per-user basis. Two groups of features: per email (# of attachments, HTML in body, text/binary attachments) and per window (mean words in body, variable words in subject). A total of 24 features were identified. Goal: identify “normal” and “viral” emails based on these features.

Feature sets
Per-email features. Binary-valued: presence of HTML; script tags/attributes; embedded images; hyperlinks; presence of binary or text attachments; MIME types of file attachments. Continuous-valued: number of attachments; number of words/characters in the subject and body.
Per-window features: number of emails sent; number of unique email recipients; number of unique sender addresses; average number of words/characters per subject and body; average word length; variance in number of words/characters per subject and body; variance in word length; ratio of emails with attachments.
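A few of the per-email features above can be sketched as follows; the dictionary field names and the toy email are my assumptions for illustration, not the authors' schema:

```python
# Illustrative extraction of a handful of the per-email features listed
# above: attachment count, HTML presence, and word counts for the
# subject and body. Field names are assumed, not the authors' schema.
def email_features(email: dict) -> dict:
    body = email.get("body", "")
    subject = email.get("subject", "")
    return {
        "num_attachments": len(email.get("attachments", [])),
        "has_html": int("<html" in body.lower()),
        "words_in_subject": len(subject.split()),
        "words_in_body": len(body.split()),
    }

f = email_features({
    "subject": "Re: invoice",
    "body": "<html>please see attachment</html>",
    "attachments": ["invoice.exe"],
})
print(f)
# {'num_attachments': 1, 'has_html': 1, 'words_in_subject': 2, 'words_in_body': 3}
```

The per-window features would be computed the same way, but aggregated (means and variances) over the last N outgoing emails of a given user.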

Data Mining Approach (flow chart) A test instance passes through the SVM classifier (infected?) and the Naïve Bayes classifier (clean?), producing a Clean or Infected label

Data set Collected from UC Berkeley; contains instances of both normal and viral emails. Six worm types: bagle.f, bubbleboy, mydoom.m, mydoom.u, netsky.d, sobig.f. Originally six sets of data: training instances of normal (400) + five worms (5x200); testing instances of normal (1200) + the sixth worm (200). Problem: not balanced, no cross-validation reported. Solution: re-arrange the data and apply cross-validation.

Our Implementation and Analysis Naïve Bayes: assume a normal distribution of numeric and real data; smoothing applied. SVM: one-class SVM with the radial basis function kernel, using “gamma” = 0.015 and “nu” = 0.1.
Analysis: NB alone performs better than other techniques; SVM alone also performs better if parameters are set correctly. The mydoom.m and VBS.Bubbleboy data sets are not sufficient (very low detection accuracy in all classifiers). The feature-based approach seems to be useful only when we have identified the relevant features, gathered enough training data, and implemented classifiers with the best parameter settings.

Digital Forensics and UTD Work Machines are infected through unauthorized intrusions, worms and viruses. Data therefore has to be acquired from the machine; we skip this step, as we get the data from open-source web sites. We then apply our analysis tools based on data mining. Our current research at UTD is focusing mainly on “botnets” and, to some extent, “honeypots”. We are also conducting research on “active defense”: trying to find out what the adversary is up to.

Algorithms for Digital Forensics http://www.dfrws.org/2007/proceedings/p49-beebe.pdf http://portal.acm.org/citation.cfm?id=1113034.1113074&coll=GUIDE&dl=GUIDE&idx=J79&part=periodical&WantType=periodical&title=Communications%20of%20the%20ACM