1 Network-Level Spam Filtering
Nick Feamster, Georgia Tech
with Anirudh Ramachandran, Shuang Hao, Maria Konte, Nadeem Syed, Alex Gray, Santosh Vempala, Jaeyeon Jung
2 Spam: More than Just a Nuisance
Spam accounts for ~95% of all email traffic
–Image and PDF spam (PDF spam ~12%)
As of August 2007, one in every 87 emails constituted a phishing attack
Targeted attacks on the rise
–20k-30k unique phishing attacks per month
Source: CNET (January 2008), APWG
3 Filtering
Prevent unwanted traffic from reaching a user's inbox by distinguishing spam from ham
Question: What features best differentiate spam from legitimate mail?
–Content-based filtering: What is in the mail?
–IP address of sender: Who is the sender?
–Behavioral features: How is the mail sent?
4 Conventional Approach: Content Filters
Trying to hit a moving target: images, PDFs, Excel sheets... and even MP3s!
5 Problems with Content Filtering
Low cost to evasion: Spammers can easily alter the features of an email's content
Customized emails are easy to generate: Content-based filters need fuzzy hashes over content, etc.
High cost to filter maintainers: Filters must be continually updated as content-changing techniques become more sophisticated
6 Another Approach: IP Addresses
Problem: IP addresses are ephemeral
Every day, 10% of senders are from previously unseen IP addresses
Possible causes
–Dynamic addressing
–New infections
7 Problem: Addresses Keep Changing
[Figure: fraction of IP addresses seen over time]
About 10% of IP addresses were never seen before in the trace
8 Key Idea: Network-Based Filtering
Filter email based on how it is sent, in addition to simply what is sent.
Network-level properties are less malleable
–Set of target recipients
–Hosting or upstream ISP (AS number)
–Membership in a botnet (spammer, hosting infrastructure)
–Network location of sender and receiver
9 Challenges (Talk Outline)
Understanding the network-level behavior
–What behaviors do spammers have?
–How well do existing techniques work?
Building classifiers using network-level features
–Key challenge: Which features to use?
–Two algorithms: SpamTracker and SNARE
Building the system
–Dynamism: Behavior itself can change
–Scale: Lots of email messages (and spam!) out there
10 Understanding the Network-Level Behavior of Spammers
11 Data: Spam and BGP
Spam traps: Domains that receive only spam
BGP monitors: Watch network-level reachability
17-month study: August 2004 to December 2005
12 Data Collection: MailAvenger
Highly configurable SMTP server
Collects many useful statistics
13 BGP Spectrum Agility
Hijack IP address space using BGP, send spam, withdraw the IP address space (announcements often last ~10 minutes)
A small club of persistent players appears to be using this technique.
Common short-lived prefixes and ASes:
–61.0.0.0/8 (AS 4678)
–66.0.0.0/8 (AS 21562)
–82.0.0.0/8 (AS 8717)
Accounts for somewhere between 1-10% of all spam (some clearly intentional, others might be route flapping)
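The agility pattern above suggests a simple detector: flag prefixes whose announcements are withdrawn shortly after they appear. A minimal sketch over a pre-parsed update feed; the record format, sample updates, and 30-minute threshold are illustrative assumptions, not from the study:

```python
from collections import defaultdict

# Hypothetical, simplified BGP updates: (unix_time, prefix, event). Real
# analyses replay archived RouteViews/RIPE RIS update feeds.
updates = [
    (0,     "61.0.0.0/8", "announce"),
    (600,   "61.0.0.0/8", "withdraw"),   # ~10-minute announcement
    (0,     "66.0.0.0/8", "announce"),
    (86400, "66.0.0.0/8", "withdraw"),   # day-long announcement
]
SHORT_LIVED_SECS = 30 * 60  # assumed threshold for "short-lived"

last_announce, lifetimes = {}, defaultdict(list)
for ts, prefix, event in sorted(updates):
    if event == "announce":
        last_announce[prefix] = ts
    elif event == "withdraw" and prefix in last_announce:
        # Lifetime of one announcement: withdraw time minus announce time.
        lifetimes[prefix].append(ts - last_announce.pop(prefix))

for prefix, spans in lifetimes.items():
    if any(s <= SHORT_LIVED_SECS for s in spans):
        print(f"{prefix}: short-lived announcement(s) {spans}s -> spectrum agility?")
```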
14 Why Such Big Prefixes?
Visibility: Route typically won't be filtered (nice and short)
Flexibility: Client IPs can be scattered throughout dark space within a large /8
–Same sender usually returns with different IP addresses
15 Other Findings
Top senders: Korea, China, Japan
–Still about 40% of spam coming from the U.S.
More than half of sender IP addresses appear only once
~90% of spam sent to traps came from Windows hosts
16 What about IP-based blacklists?
17 Two Metrics
Completeness: The fraction of spamming IP addresses that are listed in the blacklist
Responsiveness: The time for the blacklist to list the IP address after the first occurrence of spam
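Both metrics are easy to compute given spam arrival times and blacklist listing times; a minimal sketch, with hypothetical inputs:

```python
# first_spam: sender IP -> time of first spam received at the trap.
# listed_at:  sender IP -> time the blacklist first listed it (absent if never).
first_spam = {"198.51.100.7": 100, "203.0.113.9": 150, "192.0.2.44": 200}
listed_at  = {"198.51.100.7": 130, "203.0.113.9": 145}  # 192.0.2.44 never listed

# Completeness: fraction of spamming IPs the blacklist ever lists.
completeness = sum(ip in listed_at for ip in first_spam) / len(first_spam)

# Responsiveness: delay from first spam to listing, for IPs listed afterwards.
delays = [listed_at[ip] - first_spam[ip] for ip in first_spam
          if ip in listed_at and listed_at[ip] >= first_spam[ip]]

print(f"completeness = {completeness:.2f}")                       # 0.67
print(f"mean responsiveness = {sum(delays) / len(delays):.0f}s")  # 30s
```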
18 Completeness and Responsiveness
10-35% of spam is unlisted at the time of receipt
8.5-20% of these IP addresses remain unlisted even after one month
Data: Trap data from March 2007, Spamhaus from March and April 2007
19 What's Wrong with IP Blacklists?
Based on an ephemeral identifier (the IP address)
–More than 10% of all spam comes from IP addresses not seen within the past two months
–Dynamic renumbering of IP addresses
–Stealing of IP addresses and IP address space
–Compromised machines
IP addresses of senders have considerable churn
Often require a human to notice/validate the behavior
–Spamming is compartmentalized by domain and not analyzed across domains
20 How to Fix This Problem?
Option 1: Stronger sender identity
–Stronger sender identity/authentication may make reputation systems more effective
–May require changes to hosts, routers, etc.
Option 2: Filtering based on sender behavior
–Can be done on today's network
–Identifying features may be tricky, and some may require network-wide monitoring capabilities
21 Outline
Understanding the network-level behavior
–What behaviors do spammers have?
–How well do existing techniques work?
Building classifiers using network-level features
–Key challenge: Which features to use?
–Algorithms: SpamTracker and SNARE
Building the system (SpamSpotter)
–Dynamism: Behavior itself can change
–Scale: Lots of email messages (and spam!) out there
22 SpamTracker
Idea: Blacklist sending behavior (behavioral blacklisting)
–Identify sending patterns commonly used by spammers
Intuition: It is much more difficult for a spammer to change the technique by which mail is sent than to change the content
23 SpamTracker Approach
Construct a behavioral fingerprint for each sender
Cluster senders with similar fingerprints
Filter new senders that map to existing clusters
24 SpamTracker: Identify the Invariant
[Diagram: A known spammer at 76.17.114.xxx sends spam to domain1.com, domain2.com, and domain3.com. After DHCP reassignment (or a new infection), an unknown sender at 24.99.146.xxx spams the same domains. Clustering on sending behavior shows the two have similar behavioral fingerprints.]
25 Building the Classifier: Clustering
Feature: Distribution of email sending volumes across recipient domains
Clustering approach (a sketch follows below)
–Build an initial seed list of bad IP addresses
–For each IP address, compute a feature vector: volume per domain per time interval
–Collapse into a single IP × domain matrix
–Compute clusters
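A minimal Python sketch of this pipeline, assuming scikit-learn is available. The toy IPs, volume matrix, and cluster count are illustrative; spectral clustering is used here because the summary slide names it, but the exact parameters are not the paper's:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Rows: sender IPs from a seed list of bad senders; columns: recipient domains.
# Entry [i, j] is the volume of mail IP i sent to domain j in the training window.
ips = ["10.0.0.1", "10.0.0.2", "10.9.9.9", "10.9.9.8"]  # hypothetical senders
volumes = np.array([
    [50,  0, 40,  1],   # two senders hammering domains 0 and 2 ...
    [45,  2, 38,  0],
    [ 0, 30,  0, 25],   # ... two others targeting domains 1 and 3
    [ 1, 28,  0, 30],
], dtype=float)

# Normalize rows so clustering keys on the *pattern* of domains hit, not raw volume.
patterns = volumes / volumes.sum(axis=1, keepdims=True)

labels = SpectralClustering(n_clusters=2, random_state=0).fit_predict(patterns)

# One fingerprint per cluster: the mean sending pattern of its members.
for c in sorted(set(labels)):
    members = [ips[i] for i in np.where(labels == c)[0]]
    fingerprint = patterns[labels == c].mean(axis=0)
    print(f"cluster {c}: {members}, fingerprint={np.round(fingerprint, 2)}")
```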
26 Clustering: Output and Fingerprint
For each cluster, compute a fingerprint vector; new IPs will be compared to this fingerprint
[Figure: IP × IP matrix; intensity indicates pairwise similarity]
27 Evaluation
Emulate the performance of a system that could observe sending patterns across many domains
–Build clusters/train on a given time interval
Evaluate classification
–Relative to labeled logs
–Relative to IP addresses that were eventually listed
28 Data
30 days of Postfix logs from an email hosting service
–Time, remote IP, receiving domain, accept/reject
–Allows us to observe sending behavior over a large number of domains
–Problem: About 15% of accepted mail is also spam, which creates problems for validating SpamTracker
30 days of the SpamHaus database in the month following the Postfix logs
–Allows us to determine whether SpamTracker detects some sending IPs earlier than SpamHaus
29 Classification Results
[Figure: distribution of SpamTracker scores for ham vs. spam]
Not always so accurate!
30 Improving Classification
Goals: Lower overhead, faster detection, better robustness (e.g., to evasion and dynamism)
Use additional features and combine them for more robust classification
–Temporal: interarrival times, diurnal patterns
–Spatial: sending patterns of groups of senders
31 Outline
Understanding the network-level behavior
–What behaviors do spammers have?
–How well do existing techniques work?
Building classifiers using network-level features
–Key challenge: Which features to use?
–Two algorithms: SpamTracker and SNARE
Building the system
–Dynamism: Behavior itself can change
–Scale: Lots of email messages (and spam!) out there
32 SNARE: Automated Sender Reputation
Goal: Sender reputation from a single packet? (or at least as little information as possible)
–Lower overhead
–Faster classification
–Less malleable
Key challenge
–What features satisfy these properties and can distinguish spammers from legitimate senders?
33 Sender-Receiver Geodesic Distance 90% of legitimate messages travel 2,200 miles or less
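For reference, the geodesic distance behind this feature is the great-circle distance between the sender's and receiver's geolocated coordinates. A small sketch using the haversine formula; the coordinates below stand in for a hypothetical IP-geolocation lookup:

```python
from math import radians, sin, cos, asin, sqrt

def geodesic_miles(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in miles between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # mean Earth radius ~3,958.8 miles

# Hypothetical coordinates from geolocating a sender and a receiver IP:
print(round(geodesic_miles(33.75, -84.39, 47.61, -122.33)))  # Atlanta -> Seattle, ~2,180
```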
34 Density of Senders in IP Space
For spammers, the k nearest senders are much closer in IP space
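One plausible realization of this feature is the average numeric distance from a sender's 32-bit address to its k nearest neighbors among recently seen senders; the neighbor list and k below are illustrative assumptions:

```python
import ipaddress

def knn_ip_distance(sender, recent_senders, k=20):
    """Mean numeric distance from sender to its k nearest recent senders in
    32-bit IP space; a small value suggests a dense (likely bot) neighborhood."""
    s = int(ipaddress.ip_address(sender))
    dists = sorted(abs(s - int(ipaddress.ip_address(r)))
                   for r in recent_senders if r != sender)
    return sum(dists[:k]) / min(k, len(dists))

recent = [f"198.51.100.{i}" for i in range(1, 50)]     # a tightly packed /24
print(knn_ip_distance("198.51.100.25", recent, k=10))  # small -> dense region
```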
35 Local Time of Day at Sender Spammers peak at different local times of day
36 Combining Features: RuleFit
Put the features into the RuleFit classifier
10-fold cross validation on one day of query logs from a large spam filtering appliance provider
Uses only network-level features
Completely automated
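The slides use RuleFit, a rule-ensemble learner. As a rough stand-in (an assumption, since a RuleFit implementation may not be at hand), the sketch below runs 10-fold cross validation with a gradient-boosted tree ensemble over the three network-level features above; the synthetic data and labels are purely illustrative:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 12000, n),  # sender-receiver geodesic distance (miles)
    rng.uniform(0, 1, n),      # density of nearby senders in IP space
    rng.uniform(0, 24, n),     # local time of day at the sender
])
# Synthetic labels loosely mimicking the slides' findings (illustrative only):
y = ((X[:, 0] > 2200) & (X[:, 1] > 0.5)).astype(int)

clf = GradientBoostingClassifier(random_state=0)  # stand-in for RuleFit
scores = cross_val_score(clf, X, y, cv=10)        # 10-fold cross validation
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```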
37 Outline
Understanding the network-level behavior
–What behaviors do spammers have?
–How well do existing techniques work?
Building classifiers using network-level features
–Key challenge: Which features to use?
–Algorithms: SpamTracker and SNARE
Building the system (SpamSpotter)
–Dynamism: Behavior itself can change
–Scale: Lots of email messages (and spam!) out there
38 Deployment: Real-Time Blacklist
Approach:
–As mail arrives, lookups are received at the blacklist
–Queries provide a proxy for sending behavior
–Train based on the received data
–Return a score
39 Challenges
Scalability: How to collect and aggregate data and form the signatures without imposing too much overhead?
Dynamism: When to retrain the classifier, given that sender behavior changes?
Reliability: How should the system be replicated to better defend against attack or failure?
Evasion resistance: Can the system still detect spammers when they are actively trying to evade?
40 Design Choice: Augment DNSBL
Expressive queries
–SpamHaus: $ dig 55.102.90.62.zen.spamhaus.org → Answer: 127.0.0.3 (listed in the exploits block list)
–SpamSpotter: $ dig receiver_ip.receiver_domain.sender_ip.rbl.gtnoise.net
e.g., $ dig 120.1.2.3.gmail.com.-.1.1.207.130.rbl.gtnoise.net → Answer: 127.1.3.97 (SpamSpotter score = -3.97)
Queries are also a source of data
–Unsupervised algorithms work with unlabeled data
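The lookup format can be reproduced from the example above. In this sketch the query-name construction follows the slide's example (the "-" separator and the octet-reversed sender IP); the score decoding, sign in the second octet and magnitude in the last two, is inferred from the single sample answer and is an assumption, not a documented spec:

```python
def spamspotter_qname(receiver_ip, receiver_domain, sender_ip):
    """Build a SpamSpotter query name as in the slide's example."""
    reversed_sender = ".".join(reversed(sender_ip.split(".")))
    return f"{receiver_ip}.{receiver_domain}.-.{reversed_sender}.rbl.gtnoise.net"

def decode_score(answer):
    """Decode a 127.x.y.z answer; sign/magnitude layout is an assumption."""
    _, sign, units, hundredths = (int(o) for o in answer.split("."))
    score = units + hundredths / 100
    return -score if sign == 1 else score

print(spamspotter_qname("120.1.2.3", "gmail.com", "130.207.1.1"))
# -> 120.1.2.3.gmail.com.-.1.1.207.130.rbl.gtnoise.net
print(decode_score("127.1.3.97"))  # -3.97
```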
41 Design Choice: Sampling
Relatively small samples can achieve low false positive rates
42 Dynamism: Accuracy over Time
43 Improvements
Accuracy
–Synthesizing multiple classifiers
–Incorporating user feedback
–Learning algorithms with bounded false positives
Performance
–Caching/sharing
–Streaming
Security
–Learning in adversarial environments
44 Next Steps: Applications to Scams
Scammers host Web sites on dynamic scam hosting infrastructure
They use the DNS to redirect users to different sites when the hosting location moves
State of the art: Blacklist the URL
Our approach: Blacklist based on network-level fingerprints
45 Example: Time Between Record Changes
Fast-flux domains tend to change their DNS records much more frequently than legitimately hosted sites
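A crude way to measure this churn is to resolve a domain repeatedly and count changes in its A-record set. A sketch assuming the dnspython library (>= 2.0); the polling interval and count are arbitrary:

```python
import time
import dns.resolver  # dnspython, assumed available

def record_changes(domain, polls=5, interval=60):
    """Resolve `domain` repeatedly; return how often its A-record set changed.
    Fast-flux domains rotate records far more often than stable sites."""
    changes, prev = 0, None
    for i in range(polls):
        answers = dns.resolver.resolve(domain, "A")
        current = frozenset(r.address for r in answers)
        if prev is not None and current != prev:
            changes += 1
        prev = current
        if i < polls - 1:
            time.sleep(interval)
    return changes

print(record_changes("example.com"))  # a stable site: expect 0
```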
46 Summary: Network-Based Behavioral Filtering
Spam is increasing, and spammers are becoming more agile
–Content filters are falling behind
–IP-based blacklists are evadable: up to 30% of spam is not listed in common blacklists at receipt, and ~20% remains unlisted after a month
Complementary approach: behavioral blacklisting based on network-level features
–Blacklist based on how messages are sent
–SpamTracker: Spectral clustering catches a significant amount of spam faster than existing blacklists
–SNARE: Automated sender reputation achieves roughly 90% of the accuracy of existing systems with lightweight features
–SpamSpotter: Putting it all together in an RBL-style system
47 References
Anirudh Ramachandran and Nick Feamster, Understanding the Network-Level Behavior of Spammers, ACM SIGCOMM, 2006
Anirudh Ramachandran, Nick Feamster, and Santosh Vempala, Filtering Spam with Behavioral Blacklisting, ACM CCS, 2007
Nadeem Syed, Shuang Hao, Nick Feamster, Alex Gray, and Sven Krasser, SNARE: Spatio-temporal Network-level Automatic Reputation Engine, Georgia Tech Technical Report GT-CSE-08-02
Anirudh Ramachandran, Shuang Hao, Hitesh Khandelwal, Nick Feamster, and Santosh Vempala, A Dynamic Reputation Service for Spotting Spammers, Georgia Tech Technical Report GT-CS-08-09
49 Classifying IP Addresses
Given a new IP address, build a feature vector based on its sending pattern across domains
Compute the similarity of this sending pattern to that of each known spam cluster
–Normalized dot product of the two feature vectors
–Spam score is the maximum similarity to any cluster
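A minimal sketch of this scoring step; the cluster fingerprints and the new sender's pattern are toy values:

```python
import numpy as np

def spamtracker_score(pattern, fingerprints):
    """Spam score = max normalized-dot-product (cosine) similarity between the
    new IP's sending pattern and any known spam-cluster fingerprint."""
    p = np.asarray(pattern, dtype=float)
    p = p / np.linalg.norm(p)
    return max(float(p @ (np.asarray(f, float) / np.linalg.norm(f)))
               for f in fingerprints)

fingerprints = [[0.50, 0.00, 0.45, 0.05],   # learned from training clusters
                [0.02, 0.50, 0.00, 0.48]]
print(round(spamtracker_score([48, 1, 40, 0], fingerprints), 2))  # ~0.99 -> spammy
```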
50 Sampling: Training Time
51 Additional History: Message Size Variance
Senders of legitimate mail have a much higher variance in the sizes of the messages they send
[Figure: message size range mapped to certain spam, likely spam, likely ham, certain ham]
Surprising: Including this feature (and others requiring more history) can actually decrease the accuracy of the classifier
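The feature itself is just the per-sender variance of observed message sizes; a toy sketch with hypothetical senders and sizes:

```python
from statistics import pvariance

# Hypothetical message sizes (bytes) seen per sender.
history = {
    "newsletter@corp.example": [800, 15_000, 2_300, 420_000],  # varied: ham-like
    "bot@spam.example":        [900, 905, 898, 902],           # uniform: spam-like
}
for sender, sizes in history.items():
    print(f"{sender}: size variance = {pvariance(sizes):,.0f}")
```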
52 Completeness of IP Blacklists
~80% of spamming IPs listed on average; ~95% of bots listed in one or more blacklists
Only about half of the IPs spamming from short-lived BGP routes are listed in any blacklist
Spam from IP-agile senders tends to be listed in fewer blacklists
[Figure: fraction of all spam received vs. number of DNSBLs listing the spammer]
53 Low Volume to Each Domain
[Figure: amount of spam vs. sender lifetime (seconds)]
Most spammers send very little spam, regardless of how long they have been spamming.
54 Some Patterns of Sending are Invariant
[Diagram: The same spammer, first at 76.17.114.xxx and then at 24.99.146.xxx after DHCP reassignment, spams domain1.com, domain2.com, and domain3.com. The sending pattern has not changed, but IP blacklists cannot make this connection.]
55 Characteristics of Agile Senders
IP addresses are widely distributed across the /8 space
IP addresses typically appear only once at our sinkhole
Depending on the /8, 60-80% of these IP addresses were not reachable by traceroute when we spot-checked
Some IP addresses were in allocated, albeit unannounced, space
Some AS paths associated with the routes contained reserved AS numbers
56 Early Detection Results
Compare SpamTracker scores on accepted mail to the SpamHaus database
–About 15% of accepted mail was later determined to be spam
–Can SpamTracker catch this?
Of 620 emails that were accepted but sent from IPs that were blacklisted within one month, 65 had a score larger than 5 (85th percentile)
57 Evasion
Problem: Malicious senders could add noise
–Solution: Use a smaller number of trusted domains
Problem: Malicious senders could change their sending behavior to emulate normal senders
–Need a more robust set of features…