
1 SPAM FILTERING
By Ankur Khator (01005028), Gaurav Sharma (01005029), Arpit Mathur (01D05014)

2 What is Spam Email?
Spam is "junk email" or "unsolicited commercial email". Spam filtering is a special case of email classification with only two classes: spam and non-spam.

3 Various Approaches
Bayesian learning - a probabilistic model for spam filtering using the bag-of-words representation.
Ripper algorithm - context-sensitive rule learning.
Boosting algorithm - improving accuracy by combining weaker hypotheses.

4 Term Vectors
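As an illustration of the bag-of-words idea behind term vectors, here is a minimal Python sketch that maps a message to a binary term vector over a small vocabulary; the vocabulary and messages are made-up examples, not data from the slides.

```python
# A minimal sketch of the binary bag-of-words ("term vector") representation.

def term_vector(text, vocabulary):
    """Map a message to a binary vector: 1 if the word occurs, 0 otherwise."""
    words = set(text.lower().split())
    return [1 if w in words else 0 for w in vocabulary]

vocabulary = ["free", "click", "quick", "rabbit", "rests", "meeting"]
print(term_vector("Click here for a FREE gift", vocabulary))   # [1, 1, 0, 0, 0, 0]
print(term_vector("The quick rabbit rests", vocabulary))       # [0, 0, 1, 1, 1, 0]
```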

5 Naive Bayes for Spam
We seek a model for P(Y=1 | X_1=x_1, X_2=x_2, ..., X_d=x_d).
From Bayes' theorem:
P(Y=1 | X_1=x_1, ..., X_d=x_d) = P(Y=1) * P(X_1=x_1, ..., X_d=x_d | Y=1) / P(X_1=x_1, ..., X_d=x_d)
P(Y=0 | X_1=x_1, ..., X_d=x_d) = P(Y=0) * P(X_1=x_1, ..., X_d=x_d | Y=0) / P(X_1=x_1, ..., X_d=x_d)

6 Justification for using Bayes' Theorem
Because of the sparseness of the data, P(B|A) can be estimated far more easily and accurately than P(A|B).

7 Naive Bayes for Spam (contd.)
Assume conditional independence: P(X_1=x_1, ..., X_d=x_d | Y=k) = ∏_i P(X_i=x_i | Y=k).
Also assume binary features: X_i = 1 if word i occurs at least once in the message, 0 otherwise.

8 Naive Bayes for Spam (contd.)
The resulting per-word terms (log-likelihood ratios) are referred to as weights of evidence.
There is an inconsistency when some estimated probability is zero, so smooth the estimates by adding a small positive constant to both the numerator and denominator of each probability estimate.
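A hedged sketch of how the smoothed estimates and the resulting weights of evidence might be computed; the counts, the smoothing constant alpha, and the helper names are illustrative assumptions, not values from the slides.

```python
import math

def smoothed_prob(word_count, class_count, alpha=1.0):
    """P(X_i = 1 | Y = k) with add-alpha smoothing to avoid zero estimates."""
    return (word_count + alpha) / (class_count + 2 * alpha)

def weight_of_evidence(word, spam_counts, ham_counts, n_spam, n_ham):
    """Log-ratio of the word's likelihood under spam vs. non-spam."""
    p_spam = smoothed_prob(spam_counts.get(word, 0), n_spam)
    p_ham = smoothed_prob(ham_counts.get(word, 0), n_ham)
    return math.log(p_spam / p_ham)

spam_counts = {"free": 40, "click": 35}   # spam messages containing the word
ham_counts  = {"free": 3,  "click": 2}    # legitimate messages containing the word
print(weight_of_evidence("free", spam_counts, ham_counts, n_spam=50, n_ham=50))
```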

9 Classifying
Assume a new mail with the text "The quick rabbit rests".
Summing the weights of evidence: 0.51 + 0.51 + 0 + 0.51 + 1.10 + 0 = 2.63, giving a probability of 0.93.
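The worked example can be reproduced in a few lines: summing the listed weights of evidence and passing the score through the logistic (sigmoid) function recovers the quoted probability, assuming any class-prior term is already folded into the score.

```python
import math

weights = [0.51, 0.51, 0.0, 0.51, 1.10, 0.0]   # per-word weights of evidence
score = sum(weights)                            # 2.63
prob_spam = 1.0 / (1.0 + math.exp(-score))      # logistic of the total evidence
print(round(score, 2), round(prob_spam, 2))     # 2.63 0.93
```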

10 Threshold
A lower threshold gives a higher false positive rate.
A higher threshold gives a higher false negative rate, which is preferred, since misclassifying legitimate mail is costlier than letting some spam through.

11 Non-Linear Classification
A linear classifier ignores the effect of a word's context on its meaning, which is unrealistic.
We could build a linear classifier that tests for more complex features, such as simultaneous occurrences of words, but the computation cost is high.
Non-linear classification is the solution.

12 Ripper
The classifier is a disjunction of different contexts; each context is a conjunction of simple terms.
Example: the context of w1 is "w2 belongs to the document and w3 belongs to the document", i.e. for the context to be true, w1 must occur together with w2 and w3.
Three components of the Ripper algorithm:

13 Rule Learning
Spam ← spam Є Subject
Spam ← Free Є Subject, Spam Є Subject
Spam ← Gift!! Є Subject, Click Є Subject
The learned rule is the disjunction of the three statements above (see the sketch below). There is also an initial set of rules.
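A minimal sketch (not the actual RIPPER implementation) of how such rules can be represented and applied: each rule is a conjunction of word-presence tests on the subject, and the rule set predicts spam if any disjunct fires. The rules mirror the three statements above; the test subjects are invented.

```python
def contains(word):
    """Condition: the given word appears in the subject (case-insensitive)."""
    return lambda subject: word.lower() in subject.lower()

# Each rule is a list of conditions that must all hold (a context / conjunction).
rules = [
    [contains("spam")],
    [contains("free"), contains("spam")],
    [contains("gift!!"), contains("click")],
]

def classify(subject):
    """Predict spam if any rule's conditions are all satisfied (disjunction)."""
    return any(all(cond(subject) for cond in rule) for rule in rules)

print(classify("Click here for your Gift!!"))   # True
print(classify("Project meeting tomorrow"))     # False
```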

14 Constructing the Rule Set
The initial rule set is constructed using a greedy strategy, based on IREP (Incremental Reduced Error Pruning).
To construct a new rule, the dataset is partitioned into two parts: a growing (training) set and a pruning set.
Each step adds a single condition to the rule (a sketch follows below).
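A hedged sketch of the greedy grow step: conditions (word-presence tests) are added one at a time, each time choosing the word that most increases the density of positive (spam) examples covered. The toy data set, the purity-based stopping test, and the function names are assumptions for illustration, not the slides' exact procedure.

```python
def grow_rule(examples, vocabulary):
    """examples: list of (set_of_words, label) pairs, label 1 = spam, 0 = legitimate."""
    rule = []                                   # conjunction of required words
    covered = examples
    while True:
        positives = [e for e in covered if e[1] == 1]
        if not positives or len(positives) == len(covered):
            break                               # empty or pure coverage: stop growing
        best_word, best_density = None, len(positives) / len(covered)
        for word in sorted(vocabulary):
            if word in rule:
                continue
            subset = [e for e in covered if word in e[0]]
            if subset:
                density = sum(lab for _, lab in subset) / len(subset)
                if density > best_density:
                    best_word, best_density = word, density
        if best_word is None:
            break                               # no condition improves positive density
        rule.append(best_word)
        covered = [e for e in covered if best_word in e[0]]
    return rule

data = [({"free", "click"}, 1), ({"free", "meeting"}, 0),
        ({"click", "gift"}, 1), ({"meeting", "agenda"}, 0)]
print(grow_rule(data, {"free", "click", "gift", "meeting", "agenda"}))   # ['click']
```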

15 Simplification and Optimization
At every step, the condition that most increases the density of positive examples covered is added.
Adding stops when the clause covers no negative examples or there is no further positive gain.
After this, pruning (simplification) is done, again following a greedy strategy at every stage.

16 Reaching Sufficient Rules
During pruning, the condition whose deletion maximizes the pruning function is removed, where U+ and U- denote the positive and negative examples covered by the pruned rule.
Rule addition terminates when the positive examples are covered, i.e. no rule with positive information gain remains to be added.
If the data is noisy, however, the number of rules grows large.

17 MDL
Several heuristics are applied to solve this problem; MDL (Minimum Description Length) is one of them.
After each rule is added, the total description length of the current rule set plus the examples is calculated.
Rule addition stops when this length is more than d bits larger than the shortest length found so far.
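A small sketch of the MDL-style stopping test, assuming the description lengths have already been computed by some encoding; the default of 64 bits for d follows the value commonly used in RIPPER implementations, but both it and the function name are illustrative.

```python
def should_stop(lengths_so_far, current_length, d=64):
    """Stop adding rules once the current description length exceeds the best by more than d bits."""
    return current_length > min(lengths_so_far) + d

print(should_stop([900, 850, 870], 930))   # True: 930 > 850 + 64
```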

18 AdaBoost
It is easy to find rules of thumb that are often correct, e.g. "if 'buy now' occurs in the message, then predict spam", but hard to find one single rule that is very accurate.
AdaBoost helps here: it is a general method for converting rough rules of thumb into a highly accurate prediction rule by concentrating on the hard examples.

19 Pictorially

20 Algorithm
Input: S = { (x_i, y_i) }, i = 1..m
Initialize D_1(i) = 1/m for all i
For t = 1 to T:
  h_t = WeakLearner(S, D_t)
  Choose β_t = ½ ln((1 - ε_t) / ε_t) (proven to minimize the error for the 2-class case) [2]
  Update D_{t+1}(i) = D_t(i) exp(-β_t y_i h_t(x_i)) and normalize
Final hypothesis: f(x) = Σ_t β_t h_t(x) (a Python sketch follows below)
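A compact Python sketch of the algorithm above, using a one-word decision stump as the WeakLearner and labels in {-1, +1}. The toy data set, the stump learner, and the clamping of ε away from 0 and 1 are assumptions added for the example, not details from the slides.

```python
import math

def stump_learner(examples, weights, vocabulary):
    """Pick the (word, sign) stump with the lowest weighted error."""
    best = None
    for word in sorted(vocabulary):
        for sign in (+1, -1):
            h = lambda x, w=word, s=sign: s if w in x else -s
            err = sum(wt for (x, y), wt in zip(examples, weights) if h(x) != y)
            if best is None or err < best[0]:
                best = (err, h)
    return best                                     # (weighted error, hypothesis)

def adaboost(examples, vocabulary, T=10):
    m = len(examples)
    D = [1.0 / m] * m                               # D_1(i) = 1/m
    ensemble = []                                   # list of (beta_t, h_t)
    for _ in range(T):
        eps, h = stump_learner(examples, D, vocabulary)
        eps = min(max(eps, 1e-10), 1 - 1e-10)       # keep the log finite
        beta = 0.5 * math.log((1 - eps) / eps)
        ensemble.append((beta, h))
        D = [d * math.exp(-beta * y * h(x)) for (x, y), d in zip(examples, D)]
        Z = sum(D)
        D = [d / Z for d in D]                      # normalize
    return lambda x: sum(b * h(x) for b, h in ensemble)   # f(x)

data = [({"buy", "now"}, +1), ({"free", "gift"}, +1),
        ({"meeting", "notes"}, -1), ({"lunch", "now"}, -1)]
f = adaboost(data, {"buy", "now", "free", "gift", "meeting", "notes", "lunch"})
print("spam" if f({"buy", "free", "now"}) > 0 else "legitimate")   # spam
```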

21 Example

22

23 Accuracy
Weighted accuracy measure: (λ L- + S+) / (λ L + S), where
  λ  : strictness measure
  L  : number of legitimate messages
  S  : number of spam messages
  L- : number of legitimate messages classified as legitimate
  S+ : number of spam messages classified as spam
Improving accuracy:
  Increase λ.
  Introduce a threshold θ: an example is classified positive only if f(x) > θ (the default is zero).
Recall: correctly predicted spam out of the total spam in the corpus.
Precision: correctly classified spam out of the messages predicted as spam.
(A small sketch of these measures follows below.)
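A short sketch of the weighted accuracy measure together with recall and precision; the counts in the example call are invented for illustration.

```python
def weighted_accuracy(L_ok, S_ok, L, S, lam):
    """(lam * legitimate-kept + spam-caught) / (lam * L + S)."""
    return (lam * L_ok + S_ok) / (lam * L + S)

def recall(S_ok, S):
    return S_ok / S                 # correctly predicted spam out of all spam

def precision(S_ok, predicted_spam):
    return S_ok / predicted_spam    # correctly predicted spam out of predicted spam

# Example: 480 of 500 legitimate kept, 230 of 250 spam caught, 235 predicted spam.
print(weighted_accuracy(L_ok=480, S_ok=230, L=500, S=250, lam=9))
print(recall(230, 250), precision(230, 235))
```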

24 Results on corpus PU1... [1]
Configuration                      T     RECALL   PRECISION   ACC
Tree Depth 1, θ = 10.2, λ = 9      525   93.55    98.71       98.59
Tree Depth 1, θ = 46.9, λ = 999    550   74.43    100         99.98
Tree Depth 5, θ = 37.4, λ = 9      525   93.97    99.12       98.92
Tree Depth 5, θ = 178,  λ = 999    550   66.53    100         99.97

25 Pros and Cons
Pros:
  Fast and simple, with no parameters to tune.
  Flexible: can be combined with any learning algorithm; no knowledge of the WeakLearner is needed.
  Error reduces exponentially.
  Robust to overfitting.
Cons:
  Data driven: requires lots of data.
  Performance depends on the WeakLearner; may fail if the WeakLearner is too weak.

26 Conclusion
RIPPER as a text categorization algorithm works better than Naive Bayes (the advantage grows with more classes); the two are comparable for spam filtering (2 classes).
Boosting performs better than any weak learner it is built on.

27 References
[1] X. Carreras and L. Marquez. Boosting Trees for Anti-Spam Email Filtering. 2001.
[2] R. E. Schapire. The Boosting Approach to Machine Learning: An Overview. In MSRI Workshop on Nonlinear Estimation and Classification, 2002.
[3] D. Madigan. Statistics and the War on Spam. 2004.
[4] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos, G. Paliouras, and C. D. Spyropoulos. An Evaluation of Naive Bayesian Anti-Spam Filtering. In Proc. of the Workshop on Machine Learning in the New Information Age, 2000. http://citeseer.ist.psu.edu/androutsopoulos00evaluation.html
[5] W. W. Cohen and Y. Singer. Context-Sensitive Learning Methods for Text Categorization. SIGIR 1996: 307-315.

