Detecting Malicious Executables


Detecting Malicious Executables. Mr. Mehedy Masud (PhD student), Prof. Latifur Khan, Prof. Bhavani Thuraisingham, Department of Computer Science, The University of Texas at Dallas. Lecture #7

Introduction: Detecting Malicious Executables. What are malicious executables? Programs that harm computer systems: viruses, exploits, denial-of-service (DoS) tools, flooders, sniffers, spoofers, Trojans, etc. They exploit software vulnerabilities on a victim, may remotely infect other victims, and incur great losses; for example, the Code Red epidemic cost $2.6 billion. The traditional approach to malicious code detection is signature based: it requires signatures to be generated by human experts, so it is not effective against "zero-day" attacks.

State of the Art: Automated Detection. Automated detection approaches: behavioural approaches analyse behaviours such as source and destination addresses, attachment types, statistical anomalies, etc.; content-based approaches analyse the content of the malicious executable. Autograph (H.-A. Kim, CMU) is based on an automated signature-generation process; n-gram analysis (Maloof, M. A., et al.) is based on mining features and applying machine learning.

New Ideas. Content-based approaches consider only machine code (byte code). Is it possible to use higher-level code for malicious code detection? Yes: disassemble the binary executable and retrieve the assembly program, extract important features from the assembly program, and combine them with machine-code features.

Feature Extraction. Binary n-gram features: sequences of n consecutive bytes of the binary executable. Assembly n-gram features: sequences of n consecutive assembly instructions. System API call features: DLL function call information.
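
The system API / DLL function call features can be read from a PE binary's import table. A minimal sketch, assuming the third-party pefile library and a hypothetical sample path (neither is named in the slides):

    # Sketch: list imported DLL functions of a PE executable.
    # Assumes the third-party 'pefile' library; the slides do not name a tool.
    import pefile

    def dll_call_features(path):
        """Return a set of 'dll!function' strings found in the import table."""
        pe = pefile.PE(path)
        features = set()
        for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
            dll = entry.dll.decode(errors="ignore").lower()
            for imp in entry.imports:
                if imp.name:  # some imports are by ordinal only and have no name
                    features.add(dll + "!" + imp.name.decode(errors="ignore"))
        return features

    # Hypothetical usage:
    # print(sorted(dll_call_features("samples/suspect.exe"))[:10])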

The Hybrid Feature Retrieval Model. Collect training samples of normal and malicious executables, extract features, train a classifier to build a model, and test the model against test samples.

Hybrid Feature Retrieval (HFR) Training

Hybrid Feature Retrieval (HFR) Testing

Feature Extraction: Binary n-gram features. Features are extracted from the byte codes in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on. Example: given the 11-byte sequence 0123456789abcdef012345, the 2-grams (2-byte sequences) are 0123, 2345, 4567, 6789, 89ab, abcd, cdef, ef01, 0123, 2345; the 4-grams (4-byte sequences) are 01234567, 23456789, 456789ab, ..., ef012345; and so on. Problem: large dataset, too many features (millions!). Solution: use secondary memory and efficient data structures, and apply feature selection.
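
A minimal sketch of binary n-gram extraction (our own illustration, not the authors' code); run on the example sequence above, it reproduces the listed 2-grams and 4-grams:

    # Sketch: extract byte n-grams (as hex strings) from a binary executable.
    def byte_ngrams(data: bytes, n: int):
        """Yield every sequence of n consecutive bytes as a hex string."""
        for i in range(len(data) - n + 1):
            yield data[i:i + n].hex()

    sample = bytes.fromhex("0123456789abcdef012345")  # the 11-byte example above
    print(list(byte_ngrams(sample, 2)))
    # ['0123', '2345', '4567', '6789', '89ab', 'abcd', 'cdef', 'ef01', '0123', '2345']
    print(list(byte_ngrams(sample, 4))[:3])
    # ['01234567', '23456789', '456789ab']

For real binaries one would stream the file and keep n-gram counts in disk-backed structures, since, as noted above, the number of distinct features runs into the millions.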

Feature Extraction: Assembly n-gram features. Features are extracted from the assembly programs in the form of n-grams, where n = 2, 4, 6, 8, 10 and so on. Example: given the three instructions "push eax"; "mov eax, dword[0f34]"; "add ecx, eax", the 2-grams are (1) "push eax"; "mov eax, dword[0f34]" and (2) "mov eax, dword[0f34]"; "add ecx, eax". Problem: same as for binary n-grams. Solution: same as for binary n-grams.
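
The same idea at the assembly level, sketched over a list of already-disassembled instruction strings (illustrative only, reusing the three-instruction example above):

    # Sketch: extract instruction n-grams from a disassembled program,
    # where each instruction is already one normalized string.
    def asm_ngrams(instructions, n):
        return [tuple(instructions[i:i + n]) for i in range(len(instructions) - n + 1)]

    prog = ["push eax", "mov eax, dword[0f34]", "add ecx, eax"]
    print(asm_ngrams(prog, 2))
    # [('push eax', 'mov eax, dword[0f34]'), ('mov eax, dword[0f34]', 'add ecx, eax')]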

Feature Selection. Select the best K features. Selection criterion: information gain. The gain of an attribute A on a collection of examples S is given by

    Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} ( |S_v| / |S| ) · Entropy(S_v),

where S_v is the subset of S for which attribute A has value v.
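
A small sketch of information-gain-based selection for set-valued examples, transcribing the formula above (not the authors' implementation; the toy data is made up):

    # Sketch: information gain of a binary attribute (feature present / absent)
    # over a collection of labelled examples, used to rank and keep the best K features.
    from math import log2

    def entropy(labels):
        total = len(labels)
        probs = [labels.count(c) / total for c in set(labels)]
        return -sum(p * log2(p) for p in probs if p > 0)

    def info_gain(examples, labels, attribute):
        """examples: list of feature sets; labels: parallel list of class labels."""
        gain = entropy(labels)
        for value in (True, False):  # attribute present / absent
            subset = [lab for ex, lab in zip(examples, labels) if (attribute in ex) is value]
            if subset:
                gain -= len(subset) / len(labels) * entropy(subset)
        return gain

    # Toy usage: rank features by gain and keep the best K.
    X = [{"0123", "2345"}, {"89ab"}, {"0123"}, {"cdef"}]
    y = ["malicious", "benign", "malicious", "benign"]
    ranked = sorted({f for ex in X for f in ex}, key=lambda f: info_gain(X, y, f), reverse=True)
    print(ranked[:2])  # the two most informative n-grams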

Experiments. Dataset: Dataset 1 contains 838 malicious and 597 benign executables; Dataset 2 contains 1082 malicious and 1370 benign executables; the malicious code was collected from VX Heavens (http://vx.netlux.org). Disassembly: Pedisassem (http://www.geocities.com/~sangcho/index.html). Training and testing: Support Vector Machine (SVM), C-support vector classifiers with an RBF kernel.
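
A sketch of the training/testing step, assuming scikit-learn and an already-built binary feature matrix; the slides specify only C-support vector classification with an RBF kernel, not the actual SVM package used:

    # Sketch: train and test a C-SVC with an RBF kernel on binary feature vectors.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    X = np.random.randint(0, 2, size=(200, 500))   # placeholder binary feature vectors
    y = np.random.randint(0, 2, size=200)          # placeholder labels: 1 = malicious

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = SVC(C=1.0, kernel="rbf", gamma="scale")  # C-support vector classifier, RBF kernel
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))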

Results (three slides of figures). HFS = Hybrid Feature Set; BFS = Binary Feature Set; AFS = Assembly Feature Set.

Future Plans. System calls seem to be very useful; we need to consider the frequency of calls, call-sequence patterns (following program paths), and the actions immediately preceding or following a call. Detecting malicious code by program slicing requires further analysis.

Buffer Overflow Attack Detection Mohammad M. Masud, Latifur Khan, Bhavani Thuraisingham Department of Computer Science The University of Texas at Dallas

Introduction. Goal: intrusion detection, e.g. worm attacks and buffer overflow attacks. Main contributions: 'worm' code detection by data mining coupled with reverse engineering, and buffer overflow detection by combining data mining with static analysis of assembly code.

Background. What is a 'buffer overflow'? A situation in which a fixed-size buffer is overflowed by a larger input. How does it happen? Example:

    ........
    char buff[100];
    gets(buff);

(Figure: memory layout showing buff on the stack and the input string being written into it.)

Background (cont.). Then what? The input overruns buff, the return address on the stack is overwritten, and the new return address points to the memory location holding the attacker's code. (Figure: stack layout before and after the overflow, with buff, the overwritten return address, and the attacker's code.)

Background (cont.). So what? The program may crash, or the attacker can execute arbitrary code, which can then invoke any system function, communicate with some host to download and install 'worm' code, or open a backdoor to take full control of the victim. How can it be stopped?

Background (cont.). Stopping buffer overflows. Preventive approaches: finding bugs in source code (problem: only works when source code is available); compiler extensions (same problem); OS/hardware modification. Detection approaches: capturing symptoms of running code (problem: may require long running time); automatically generating signatures of buffer overflow attacks.

CodeBlocker (our approach). A detection approach based on the observation that attack messages usually contain code while normal messages contain data. Main idea: check whether a message contains code. Problem to solve: distinguishing code from data.

Severity of the problem: it is not easy to recover the actual instruction sequence from a given string of bits.

Our solution: apply data mining. Formulate the problem as a classification problem (code vs. data), collect a set of training examples containing both kinds of instances, train a model with a machine learning algorithm, and test the model against new messages.

CodeBlocker Model

Feature Extraction

Disassembly: we apply the SigFree tool implemented by Xinran Wang et al. (Penn State).

Feature extraction. Features are extracted using n-gram analysis and control-flow analysis. What is an n-gram? A sequence of n instructions. In the traditional approach the flow of control is ignored; for the example program the 2-grams are 02, 24, 46, ..., CE. (Figure: assembly program and its corresponding instruction flow graph, IFG.)

Feature extraction (cont.). Control-flow based n-gram analysis: in the proposed approach the flow of control is considered, so for the same example the 2-grams are 02, 24, 46, ..., CE, E6. (Figure: assembly program and its corresponding IFG.)
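
A toy sketch of the control-flow based variant (our illustration; the instruction labels and the single jump edge are hypothetical, chosen to mirror the 02, 24, ..., CE, E6 example above):

    # Sketch: control-flow based 2-grams are taken along the edges of the
    # instruction flow graph (IFG) rather than along sequential order only.
    def cfg_2grams(successors):
        """successors: dict mapping an instruction id to the ids control can flow to."""
        return sorted(f"{a:X}{b:X}" for a, succ in successors.items() for b in succ)

    # Hypothetical IFG: fall-through edges 0->2->4->...->E, plus one jump edge E->6.
    ifg = {0x0: [0x2], 0x2: [0x4], 0x4: [0x6], 0x6: [0x8],
           0x8: [0xA], 0xA: [0xC], 0xC: [0xE], 0xE: [0x6]}
    print(cfg_2grams(ifg))
    # ['02', '24', '46', '68', '8A', 'AC', 'CE', 'E6']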

Feature extraction (cont.). Control-flow analysis generates the following features: Invalid Memory Reference (IMR), Undefined Register (UR), and Invalid Jump Target (IJT). Checking IMR: memory is referenced using register addressing while the register value is undefined, e.g. mov ax, [dx + 5]. Checking UR: check whether the register value is set properly before use. Checking IJT: check whether a jump target violates an instruction boundary.
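
A toy sketch of the undefined-register part of these checks, over a hand-parsed instruction list (our illustration, not the actual CodeBlocker analysis; a real implementation would work on a proper disassembly):

    # Sketch: flag memory references made through a register whose value has
    # not been defined earlier in the instruction sequence (UR / IMR check).
    import re

    def check_undefined_registers(instructions):
        defined = set()
        findings = []
        for idx, ins in enumerate(instructions):
            mnemonic, _, operands = ins.partition(" ")
            dst, _, src = (o.strip() for o in operands.partition(","))
            # Registers used inside a memory operand like [dx + 5] must already be defined.
            for reg in re.findall(r"\[([a-z]{2,3})", src + " " + dst):
                if reg not in defined:
                    findings.append((idx, ins, f"register {reg} undefined"))
            if mnemonic in ("mov", "pop", "lea") and dst and not dst.startswith("["):
                defined.add(dst)  # destination register is now defined
        return findings

    prog = ["mov ax, [dx + 5]", "mov dx, 7", "mov bx, [dx + 5]"]
    print(check_undefined_registers(prog))
    # [(0, 'mov ax, [dx + 5]', 'register dx undefined')]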

Feature extraction (cont.). Why n-gram analysis? Intuition: in general, disassembled executables should show a different pattern of instruction usage than disassembled data. Why control-flow analysis? Intuition: actual code should contain no invalid memory references or invalid jump targets.

Putting it together: compute all possible n-grams, select the best k of them, compute a (binary) feature vector for each training example, and supply these vectors to the training algorithm, as sketched below.
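
A compact sketch of that pipeline, reusing the byte_ngrams and info_gain helpers sketched earlier and assuming scikit-learn; note that CodeBlocker itself works on control-flow based instruction n-grams rather than raw bytes:

    # Sketch: enumerate n-grams, keep the best k by information gain,
    # build binary feature vectors, and train an SVM on them.
    from sklearn.svm import SVC

    def build_vectors(samples, labels, n=2, k=500):
        grams = [set(byte_ngrams(s, n)) for s in samples]        # n-grams per message
        vocab = {g for gset in grams for g in gset}              # all candidate features
        best = sorted(vocab, key=lambda g: info_gain(grams, labels, g), reverse=True)[:k]
        vectors = [[1 if g in gset else 0 for g in best] for gset in grams]
        return vectors, best

    # Hypothetical usage, with `messages` a list of raw byte strings and
    # `labels` marking whether each message contains code:
    # X, selected = build_vectors(messages, labels)
    # clf = SVC(kernel="rbf").fit(X, labels)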

Experiments. Dataset: real traces of normal messages, real attack messages, and polymorphic shellcodes. Training and testing: Support Vector Machine (SVM).

Results (figures). CFBn = control-flow based n-gram features; CFF = control-flow features.

Novelty / contribution. We introduce the notion of control-flow based n-grams and combine control-flow analysis with data mining to distinguish code from data, with significant improvement over other methods (e.g., SigFree).

Advantages: 1) fast testing; 2) signature-free operation; 3) low overhead; 4) robust against many obfuscations.

Limitations Need samples of attack and normal messages. May not be able to detect a completely new type of attack.

Future Work: find more features, apply dynamic analysis techniques, and perform semantic analysis.

References / suggested readings. X. Wang, C. Pan, P. Liu, and S. Zhu. SigFree: A signature-free buffer overflow attack blocker. In Proceedings of the USENIX Security Symposium, July 2006. J. Z. Kolter and M. A. Maloof. Learning to detect malicious executables in the wild. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Seattle, WA, USA, pages 470–478, 2004.

Email Worm Detection (behavioural approach). (Figure: the model — outgoing emails go through feature extraction; the training data is used to build a machine-learning classifier, which labels test data as clean or infected.)

Feature Extraction. Per-email features. Binary-valued: presence of HTML; script tags/attributes; embedded images; hyperlinks; presence of binary or text attachments; MIME types of file attachments. Continuous-valued: number of attachments; number of words/characters in the subject and body. Per-window features: number of emails sent; number of unique email recipients; number of unique sender addresses; average number of words/characters per subject and body; average word length; variance in number of words/characters per subject and body; variance in word length; ratio of emails with attachments.
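
A sketch computing a few of the per-email features with Python's standard email library (the feature names follow the list above; the choice of library is our assumption):

    # Sketch: extract some binary- and continuous-valued per-email features.
    from email import message_from_string

    def per_email_features(raw_email: str):
        msg = message_from_string(raw_email)
        parts = list(msg.walk())
        attachments = [p for p in parts if p.get_content_disposition() == "attachment"]
        body = next((p.get_payload() for p in parts
                     if p.get_content_type() == "text/plain"), "") or ""
        subject = msg.get("Subject", "")
        return {
            "has_html": any(p.get_content_type() == "text/html" for p in parts),
            "num_attachments": len(attachments),
            "attachment_mime_types": sorted({p.get_content_type() for p in attachments}),
            "words_in_subject": len(subject.split()),
            "words_in_body": len(body.split()),
        }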

Feature Reduction and Selection. Principal Component Analysis (PCA): reduces higher-dimensional data to a lower dimension, which helps reduce noise and overfitting. Decision tree: used to select the best features.
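
A sketch of both steps, assuming scikit-learn and a numeric per-email feature matrix (placeholder data, not the experimental dataset):

    # Sketch: reduce dimensionality with PCA, and rank features with a decision tree.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier

    X = np.random.rand(300, 40)            # placeholder per-email feature matrix
    y = np.random.randint(0, 2, size=300)  # placeholder labels: 1 = infected

    X_reduced = PCA(n_components=10).fit_transform(X)         # project onto 10 components

    tree = DecisionTreeClassifier(random_state=0).fit(X, y)
    best = np.argsort(tree.feature_importances_)[::-1][:10]   # indices of the 10 best features
    print("selected feature indices:", best)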

Experiments. Dataset: contains instances of both normal and viral emails; six worm types (bagle.f, bubbleboy, mydoom.m, mydoom.u, netsky.d, sobig.f); collected from UC Berkeley. Training and testing: decision tree (C4.5 algorithm, J48 in the Weka system), Support Vector Machine (SVM), and Naïve Bayes (NB).

Results

Conclusion and Future Work. Three approaches have been tested: applying the classifier directly; applying dimension reduction (PCA) and then classifying; and applying feature selection (decision tree) and then classifying. The decision tree gives the best performance. Future plans: combine content-based with behavioural approaches; offensive operations (honeypots, information operations).