1
Web-Phishing – Techniques and Countermeasures
CIS5370 Computer Security, Fall 2008
Muhammad Khalil / Marcus Wolff
2
Agenda
1. The Problem Definition
2. Phishing Techniques
3. Social Engineering Aspects
4. Countermeasures
5. Our Solution Approach
6. Current Implementation
7. Future Work
3
1. Problem Definition
- Cyber criminals try to obtain sensitive data from end users (credit card data, passwords for online services, etc.)
- Users only give out such data voluntarily where a trust relationship exists
- Criminals exploit this by faking websites to which users have already built up trust relationships
- Users are mostly led to these phishing sites by phishing e-mails that contain links to them
- Examples of heavily targeted phishing objects: websites of banks, insurers, auction sites, employers, government (social security)
4
1. Problem Definition
(Chart: number of password-theft victims up to March 2008, an increase of 337%. Source: Anti-Phishing Working Group, antiphishing.org)
5
2. Phishing Techniques
Creating legitimate-seeming reasons:
- Spammers need to entice users into believing something.
- For example, they claim that the user's account is expiring and must be reinstated by providing certain information.
- They use false time constraints to create urgency.
6
2. Phishing Techniques
Webpage modifications 1
- Altering the phishing website URL to look similar to the real website, e.g. the 'l' in paypal becomes a '1' in paypa1 (see the sketch below)
- Partial URL identities: www.ebay-security.com
- Fooling the user with a misleading link text: <a href="http://www.badguy.com">http://account.earthlink.com</a>
- IP addresses can be used to hide the source domain
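Look-alike and partial-identity domains can be checked mechanically. The following is a minimal Python sketch, assuming a small hypothetical list of protected brand domains; the substitution table and brand list are illustrative, not part of the implementation described in these slides.

```python
# Minimal sketch: flag domains that only resemble a protected brand once
# common digit-for-letter substitutions are undone ('1' -> 'l', '0' -> 'o').
# PROTECTED_DOMAINS and the substitution table are illustrative assumptions.
PROTECTED_DOMAINS = {"paypal.com", "ebay.com"}

SUBSTITUTIONS = str.maketrans({"1": "l", "0": "o", "5": "s"})

def looks_like_protected(domain: str) -> bool:
    domain = domain.lower()
    if domain in PROTECTED_DOMAINS:
        return False  # the exact legitimate domain, nothing suspicious
    normalized = domain.translate(SUBSTITUTIONS)
    if normalized in PROTECTED_DOMAINS:
        return True   # e.g. "paypa1.com" normalizes to "paypal.com"
    # partial URL identity: a brand name embedded in an unrelated domain
    return any(brand.split(".")[0] in domain for brand in PROTECTED_DOMAINS)

print(looks_like_protected("paypa1.com"))         # True
print(looks_like_protected("ebay-security.com"))  # True
print(looks_like_protected("paypal.com"))         # False
```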
7
2. Phishing Techniques
Webpage modifications 2
- Inserting the '@' symbol in the URL redirects the browser to whatever follows the '@', as in http://cgi1.ebay.com.awcgiebayISAPI.dll@210.93.131.250/my/index.htm (see the parsing sketch below)
- Inserting a NULL value just before the '@', as in http://cgi1.ebay.com.awcgiebayISAPI.dll%00@210.93.131.250/my/index.htm, makes the browser appear to show the legitimate eBay address
- Using hex notation instead of the IP address/domain name
- Using special ports after site break-ins (usually 8000 and above), e.g. http://www.citibankonline.com:8000
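The '@' trick works because everything before the '@' in the authority part of a URL is treated as user info, not as the host. A standard URL parser makes the real destination visible, as this short Python sketch of the eBay-style example above shows.

```python
from urllib.parse import urlsplit

# Everything before '@' in the authority component is user info;
# the real host is whatever follows the '@'.
url = "http://cgi1.ebay.com.awcgiebayISAPI.dll@210.93.131.250/my/index.htm"
parts = urlsplit(url)

print(parts.username)  # cgi1.ebay.com.awcgiebayISAPI.dll  (the decoy text)
print(parts.hostname)  # 210.93.131.250                    (the actual destination)
print(parts.port)      # None here; would be 8000 for a ...:8000 URL
```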
8
3. Social Engineering Aspects
- Users lack a proper understanding of security; they tend to trust everything they see.
- They believe messages claiming that an account has expired and requires user action, without verifying them.
9
3. Social Engineering Aspects
Example: (screenshot)
10
3. Social Engineering Aspects
Users do not realize the absence of security: visual security indicators in the Mozilla Firefox 3 browser
11
3. Social Engineering Aspects Lack of attention to security indicators:
12
3. Social Engineering Aspects Not being aware of trust implications:
13
4. Countermeasures
- Users have to be on alert: apply security patches and install anti-spam software
- Banks offer user-customizable login pages so that users notice any change in the page
- The "Petname" toolbar plug-in for Mozilla Firefox helps keep end users from falling prey to phishing attacks; it allows users to protect important websites
14
4. Countermeasures
- Anti-phishing filters already exist: integrated into web browsers (MS Internet Explorer 7.0, Mozilla Firefox 3.0) or as external tools (SpoofGuard)
- These approaches are only reactive; they cannot act proactively due to their static input
- Good overview: the paper "Phinding Phish: Evaluating Anti-Phishing Tools", Carnegie Mellon, 2007 [3]
- Most tools use blacklists: high ratio of false negatives (> 50%)
- Some tools use heuristics: high ratio of false positives (> 40%)
15
5. Our Solution Approach
- A combination of whitelist, blacklist and heuristic behavioral analysis guarantees a reactive as well as proactive approach and a low ratio of false positives and false negatives
- The whitelist stores distinct identifiers of legitimate websites, grouped by business type (banks, insurances, etc.)
- The blacklist stores distinct identifiers of known phishing sites
- The heuristics are algorithms that detect suspicious set-up or suspicious behavior of websites
16
5. Our Heuristic Approach
Heuristic algorithms detect anomalies that strongly indicate phishing behavior. Indicators:
- Obfuscated URLs in the e-mail (hiding the real URL destination)
- Strong visual similarity to an existing legitimate website (main approach)
- Direct URL link to sensitive sub-pages (login page)
- Language specifics (broken language, wrong form of address)
- No SSL, or use of fake/one-day SSL certificates
- Website name only recently registered
- Anonymous registrar
- Mismatch between the country information of the website and the country of origin of the represented company (taken from the extended whitelist)
A sketch of how such indicators can be combined into a score follows below.
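One simple way to turn these indicators into the probability used in the detection process is a weighted sum. The Python sketch below is purely illustrative: the indicator names and weights are assumptions, not the values used in the actual implementation.

```python
# Illustrative weights for the heuristic indicators listed above;
# these numbers are assumptions, not taken from the implementation.
WEIGHTS = {
    "obfuscated_url": 0.25,
    "visual_similarity": 0.30,
    "direct_login_link": 0.10,
    "no_or_fake_ssl": 0.15,
    "recently_registered": 0.10,
    "anonymous_registrar": 0.05,
    "country_mismatch": 0.05,
}

def phishing_probability(indicators: dict) -> float:
    """Sum the weights of all indicators that fired, capped at 1.0."""
    score = sum(w for name, w in WEIGHTS.items() if indicators.get(name))
    return min(score, 1.0)

p = phishing_probability({"obfuscated_url": True, "no_or_fake_ssl": True})
print(p)  # 0.40 under the assumed weights
```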
17
5. Detection Process
General procedure (PW = phishing website):
1. Extract URLs from potential phishing e-mails (in real time, while the URL is still resolvable)
2. Look for a hit in the whitelist; if found, stop (not a PW)
3. Look for a hit in the blacklist; if found, classify as PW
4. Otherwise, use heuristics to return the probability P(PW) of the target being a phishing website (unlikely .. very likely); see the sketch below
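A minimal Python sketch of this procedure, assuming that whitelist_hit, blacklist_hit and phishing_probability_for are helpers backed by the whitelist DB, the blacklist DB and the heuristic engine; the 0.5 threshold matches the schema on the next slide.

```python
# Sketch of the general detection procedure; the three helpers are
# assumed to exist and wrap the whitelist DB, blacklist DB and heuristics.
def classify(url: str) -> str:
    if whitelist_hit(url):
        return "legitimate"             # step 2: whitelist hit, stop
    if blacklist_hit(url):
        return "phishing"               # step 3: known phishing site
    p = phishing_probability_for(url)   # step 4: heuristic engine
    return "phishing" if p >= 0.5 else "legitimate"
```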
18
5. Our Detection Process Schema
(Flow chart: an input URL is checked against the whitelist DB and the blacklist DB, which are fed from external sources; if no match is found, the heuristic engine computes a probability P, with P >= 0.5 classified as phishing and P < 0.5 as legitimate, and a feedback loop writes results back into the lists. Output: classification of the input URL as a phishing or legitimate website.)
19
5. Current Implementation: Whitelist
Whitelist approach:
- Retrieve unique identifiers from all major websites, grouped by the company types that are affected most by phishing (banks, insurances, shopping websites)
- Updates automated with the help of scripting tools
- Unique identifiers: UID1 = URL(s), UID2 = SSL certificate(s)
- UID1/UID2 values stored in PostgreSQL relations, grouped by branch (banks, insurances, etc.)
- Extended information stored in the whitelist for later heuristics (company type, country, official login page, ...)
- If UID1/UID2 of the target site match a whitelist entry, classify it as a legitimate website (a lookup sketch follows below)
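A whitelist lookup might look like the following Python sketch; the table and column names (whitelist, uid1_url, uid2_cert_fingerprint) and the connection string are assumptions, since the actual PostgreSQL schema is not shown in the slides.

```python
import psycopg2

# Hypothetical table: whitelist(uid1_url, uid2_cert_fingerprint, branch,
# country, login_page). Names are assumptions, not the real schema.
def whitelist_hit(conn, url: str, cert_fingerprint: str) -> bool:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT 1 FROM whitelist "
            "WHERE uid1_url = %s OR uid2_cert_fingerprint = %s",
            (url, cert_fingerprint),
        )
        return cur.fetchone() is not None

conn = psycopg2.connect("dbname=antiphish")  # assumed connection settings
print(whitelist_hit(conn, "https://www.paypal.com/", "ab:cd:ef:..."))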
20
5. Current Implementation: Blacklist
Blacklist approach:
- Retrieve unique identifiers from all reported phishing websites (PW), with frequent updates
- Designed software to manage automatic retrieval using the API features of phishing DB web services: phishtank.com, Millersmiles.uk, Fraudwatchinternational.com
- In addition, an own DB is maintained for storing discovered phishing cases
- Unique identifier (UID1) = URL (entry point) of the PW
- If the UID of the target site matches an entry in the own or an external phishing blacklist DB, classify it as a phishing website (see the sketch below)
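The local blacklist lookup mirrors the whitelist check; again the table and column names are assumptions, and entries from the external feeds are assumed to have been merged into the local table by the retrieval software described above.

```python
# Sketch of the local blacklist lookup; external feed entries are assumed
# to have been merged into the same table by the retrieval software.
def blacklist_hit(conn, url: str) -> bool:
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM blacklist WHERE uid1_url = %s", (url,))
        return cur.fetchone() is not None
```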
21
5. Current Implementation: Heuristics
Heuristic approaches:
- Classify the target website by company type/company using text and graphics analysis
- For graphical site components, an OCR approach detects modified company logos so that they still match the original site graphics; uses the open-source component J/G-OCR; results are still improvable for subtle changes
- Traverse the directory structure of the website and find similarities to whitelist entries of the same company type, based on: same file size OR same file name OR same content (hash-based, excluding modified logos); a sketch of this check follows below
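The hash-based part of the directory-structure comparison can be sketched as follows; the tuple layout (filename, size, raw bytes) and the use of SHA-256 are assumptions made for illustration.

```python
import hashlib

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def files_match(target, reference) -> bool:
    # target/reference are assumed (filename, size, raw_bytes) tuples
    t_name, t_size, t_data = target
    r_name, r_size, r_data = reference
    return (t_name == r_name
            or t_size == r_size
            or content_hash(t_data) == content_hash(r_data))

def similarity(target_files, reference_files) -> float:
    """Fraction of target-site files matching at least one reference file."""
    if not target_files:
        return 0.0
    hits = sum(any(files_match(t, r) for r in reference_files)
               for t in target_files)
    return hits / len(target_files)
```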
22
6. Future Work
Intended future features:
- More reliable full automation of the whitelist/blacklist update procedure
- Improved and more flexible heuristics: using specialized CAPTCHA OCRs for better results (PWNtcha; UC Berkeley: "Breaking a Visual CAPTCHA" [4]); testing for similarities in code and non-textual graphics; general GUI layout matching
- Implementing further heuristic indicators
- Optional independence from Unmask
- Comfortable GUI and installer options for end users of the system
23
Bibliography
[1] Christine E. Drake, Jonathan J. Oliver, and Eugene J. Koontz: "Anatomy of a Phishing Email", MailFrontier, Inc., Palo Alto, CA
[2] Rachna Dhamija (Harvard University), J. D. Tygar (UC Berkeley), and Marti Hearst (UC Berkeley): "Why Phishing Works"
[3] Yue Zhang, Serge Egelman, Lorrie Cranor, and Jason Hong: "Phinding Phish: Evaluating Anti-Phishing Tools", Carnegie Mellon University, 2007
[4] Greg Mori and Jitendra Malik: "Recognizing Objects in Adversarial Clutter: Breaking a Visual CAPTCHA", Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR), 2003