SPAM FILTERING
By Ankur Khator, Gaurav Sharma, Arpit Mathur (01D05014)
What is Spam?
"Junk email" or "unsolicited commercial email".
Spam filtering is a special case of classification, with only 2 classes: spam and non-spam.
Various Approaches
Bayesian Learning – probabilistic model for spam filtering; bag-of-words representation.
RIPPER algorithm – context-sensitive learning.
Boosting algorithm – improving accuracy by combining weaker hypotheses.
Term Vectors
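The slide's illustration of term vectors is not reproduced here; below is a minimal sketch of the bag-of-words idea it refers to, with an assumed toy vocabulary and whitespace tokenization (both illustrative, not from the slides).

```python
# Illustrative sketch: binary bag-of-words term vectors (vocabulary is assumed).
VOCABULARY = ["free", "gift", "click", "quick", "rabbit", "meeting"]

def term_vector(message: str) -> list[int]:
    """Map a message to a binary vector: x_i = 1 if word i occurs at least once."""
    words = set(message.lower().split())
    return [1 if w in words else 0 for w in VOCABULARY]

print(term_vector("Click now for a FREE gift"))   # [1, 1, 1, 0, 0, 0]
print(term_vector("The quick rabbit rests"))      # [0, 0, 0, 1, 1, 0]
```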
Naive Bayes for Spam
Seeking a model for P(Y=1 \mid X_1=x_1, X_2=x_2, \ldots, X_d=x_d).
From Bayes' theorem:
P(Y=1 \mid X_1=x_1,\ldots,X_d=x_d) = \frac{P(Y=1)\, P(X_1=x_1,\ldots,X_d=x_d \mid Y=1)}{P(X_1=x_1,\ldots,X_d=x_d)}
P(Y=0 \mid X_1=x_1,\ldots,X_d=x_d) = \frac{P(Y=0)\, P(X_1=x_1,\ldots,X_d=x_d \mid Y=0)}{P(X_1=x_1,\ldots,X_d=x_d)}
Justification for Using Bayes' Theorem
Because of the sparseness of the data, P(B | A) can be determined easily and accurately, compared to estimating P(A | B) directly.
Naive Bayes for Spam (contd.)
Assume conditional independence of the words:
P(X_1=x_1,\ldots,X_d=x_d \mid Y=k) = \prod_{i=1}^{d} P(X_i=x_i \mid Y=k)
Also assume binary features: X_i = 1 if the number of occurrences of word i is >= 1, and 0 otherwise.
Naive Bayes for Spam (contd.)
The per-word log-likelihood ratios are referred to as weights of evidence.
There is an inconsistency when some estimated probability is zero: smooth the estimates by adding a small positive constant to both the numerator and denominator of each probability estimate.
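A minimal sketch of this estimator, assuming binary word features and additive (Laplace-style) smoothing as described above; the tokenization, the smoothing constant alpha, and the function names are illustrative assumptions rather than the exact model from the slides.

```python
import math
from collections import defaultdict

def train_naive_bayes(messages, labels, alpha=1.0):
    """Estimate smoothed P(word present | class) for class 1 (spam) and 0 (non-spam)."""
    docs = [set(m.lower().split()) for m in messages]
    vocab = set().union(*docs)
    counts = {0: defaultdict(int), 1: defaultdict(int)}
    n = {0: 0, 1: 0}
    for words, y in zip(docs, labels):
        n[y] += 1
        for w in words:
            counts[y][w] += 1
    # Smoothed estimates: (count + alpha) / (n_class + 2*alpha) avoids zero probabilities.
    probs = {y: {w: (counts[y][w] + alpha) / (n[y] + 2 * alpha) for w in vocab}
             for y in (0, 1)}
    prior = n[1] / (n[0] + n[1])
    return vocab, probs, prior

def log_odds(message, vocab, probs, prior):
    """Sum the 'weights of evidence' (log likelihood ratios) plus the prior log-odds."""
    words = set(message.lower().split())
    score = math.log(prior / (1 - prior))
    for w in vocab:
        present = w in words
        p1 = probs[1][w] if present else 1 - probs[1][w]
        p0 = probs[0][w] if present else 1 - probs[0][w]
        score += math.log(p1 / p0)
    return score
```

The posterior spam probability is then 1 / (1 + e^(-score)).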
Classifying
Assume a new mail with text "The quick rabbit rests".
Summed weight of evidence (log-odds score) = 2.63, giving a spam probability = 0.93.
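Assuming the 2.63 is the summed log-odds score, the quoted probability is its logistic transform:

```latex
P(Y=1 \mid x) = \frac{1}{1 + e^{-2.63}} \approx 0.93
```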
Threshold
A lower threshold gives a higher false positive rate (legitimate mail classified as spam).
A higher threshold gives a higher false negative rate (spam let through); this is the preferred trade-off, since false positives are more costly.
Non-Linear Classification
A linear classifier ignores the effect of a word's context on its meaning, which is unrealistic.
One could build a linear classifier that tests more complex features, such as simultaneous occurrences of words, but at a high computational cost.
Non-linear classification is the solution.
RIPPER
A hypothesis is a disjunction of different contexts; each context is a conjunction of simple terms.
Example – the context of w1 is: w2 Є document and w3 Є document, i.e. for the context to be true, w1 must occur together with w2 and w3.
Three components of the RIPPER algorithm:
Rule Learning:
Spam ← "spam" Є Subject
Spam ← "Free" Є Subject, "spam" Є Subject
Spam ← "Gift!!" Є Subject, "Click" Є Subject
The learned rule is the disjunction of the three statements stated above. There is an initial set of rules too.
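A minimal sketch (the word lists and helper names are illustrative, not the slides' actual rule set) of how such a disjunction of conjunctive contexts classifies a subject line:

```python
# Each rule is a conjunction of required words; the rule set is their disjunction.
RULES = [
    {"spam"},            # Spam if "spam" in Subject
    {"free", "spam"},    # Spam if "free" and "spam" in Subject
    {"gift!!", "click"}, # Spam if "gift!!" and "click" in Subject
]

def predict_spam(subject: str) -> bool:
    words = set(subject.lower().split())
    return any(rule <= words for rule in RULES)  # any satisfied conjunction fires

print(predict_spam("Click here for your Gift!!"))  # True
print(predict_spam("Meeting agenda for Monday"))   # False
```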
Constructing the Rule Set
The initial rule set is constructed using a greedy strategy, based on IREP (Incremental Reduced Error Pruning).
To construct a new rule, the dataset is partitioned into two parts: a growing (training) set and a pruning set.
At each step, a single condition is added to the rule.
Simplification and Optimization
At every step, the density of positive examples covered is increased.
Adding conditions stops when the clause covers no negative example or there is no positive gain.
After this, pruning (i.e. simplification) is done, at every stage again following a greedy strategy.
Reaching Sufficient Rules
The condition whose deletion maximizes the pruning function is removed, where U+ and U- are the positive and negative examples covered on the pruning set.
Rule addition continues while the information gain is non-zero, i.e. while new rules still cover positive examples.
But if the data is noisy, the number of rules grows large.
MDL
Several heuristics are applied to solve this problem; MDL (Minimum Description Length) is one of them.
After adding each rule, the total description length of the current rule set plus the examples is calculated.
Rule addition is stopped when this length is d bits larger than the shortest length found so far.
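A rough sketch of the grow-and-prune cycle described on the preceding slides, under several assumptions: documents are sets of words, rules are conjunctions of required words, the growing gain is the increase in the density of positives covered, and the pruning score is precision on the pruning set. The slides' exact gain and pruning functions, and the MDL stopping test, are not reproduced here.

```python
def covers(rule, doc):
    """A rule (set of required words) covers a document (set of words) if it is a subset."""
    return rule <= doc

def precision(rule, examples):
    """Density of positive examples among those the rule covers."""
    covered = [(d, y) for d, y in examples if covers(rule, d)]
    pos = sum(1 for _, y in covered if y == 1)
    return pos / len(covered) if covered else 0.0

def grow_rule(grow_set):
    """Greedily add one word at a time while the density of positives improves
    and the rule still covers some negative example."""
    rule = set()
    candidates = set().union(*(d for d, _ in grow_set))
    while any(covers(rule, d) and y == 0 for d, y in grow_set):
        best, best_gain = None, 0.0
        for w in candidates - rule:
            gain = precision(rule | {w}, grow_set) - precision(rule, grow_set)
            if gain > best_gain:
                best, best_gain = w, gain
        if best is None:            # no condition gives positive gain
            break
        rule.add(best)
    return rule

def prune_rule(rule, prune_set):
    """Repeatedly delete the condition whose removal gives the best pruning score
    (here: precision on the pruning set; the slides' exact function may differ)."""
    while len(rule) > 1:
        best_w, best_score = None, precision(rule, prune_set)
        for w in rule:
            score = precision(rule - {w}, prune_set)
            if score >= best_score:
                best_w, best_score = w, score
        if best_w is None:
            break
        rule = rule - {best_w}
    return rule
```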
AdaBoost
Easy to find rules of thumb that are often correct, e.g. if "buy now" occurs in a message, then predict "spam".
Hard to find one rule which is very accurate.
AdaBoost helps here: a general method of converting rough rules of thumb into a highly accurate prediction rule, by concentrating on hard examples.
Pictorially
Algorithm
Input: S = {(x_i, y_i)}, i = 1..m
Initialize D_1(i) = 1/m for all i
For t = 1 to T:
  h_t = WeakLearner(S, D_t)
  Choose β_t = ½ ln((1 − ε_t)/ε_t) (proven to minimize the error for the 2-class case) [2]
  Update D_{t+1}(i) = D_t(i) exp(−β_t y_i h_t(x_i)) and normalize
Final hypothesis: f(x) = Σ_t β_t h_t(x)
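A compact sketch of the loop above for labels in {-1, +1}, together with an illustrative decision-stump WeakLearner over binary term vectors; the helper names and the stump are assumptions, not part of the slides.

```python
import math

def adaboost(X, y, T, weak_learner):
    """Sketch of the boosting loop above, for labels y[i] in {-1, +1}.
    weak_learner(X, y, D) must return a hypothesis h(x) in {-1, +1}."""
    m = len(X)
    D = [1.0 / m] * m                                          # D_1(i) = 1/m
    hypotheses = []
    for _ in range(T):
        h = weak_learner(X, y, D)
        eps = sum(D[i] for i in range(m) if h(X[i]) != y[i])   # weighted error
        eps = min(max(eps, 1e-10), 1 - 1e-10)                  # keep the log finite
        beta = 0.5 * math.log((1 - eps) / eps)                 # beta_t = 1/2 ln((1-eps)/eps)
        # D_{t+1}(i) = D_t(i) * exp(-beta_t * y_i * h_t(x_i)), then normalize.
        D = [D[i] * math.exp(-beta * y[i] * h(X[i])) for i in range(m)]
        Z = sum(D)
        D = [d / Z for d in D]
        hypotheses.append((beta, h))
    return lambda x: sum(beta * h(x) for beta, h in hypotheses)  # f(x) = sum beta_t h_t(x)

def stump_learner(X, y, D):
    """Illustrative WeakLearner: the single binary feature (with sign) that has
    the lowest weighted error -- a 'rule of thumb' such as 'buy now' => spam."""
    m, d = len(X), len(X[0])
    best = None
    for j in range(d):
        for s in (+1, -1):
            err = sum(D[i] for i in range(m) if s * (2 * X[i][j] - 1) != y[i])
            if best is None or err < best[0]:
                best = (err, j, s)
    _, j, s = best
    return lambda x, j=j, s=s: s * (2 * x[j] - 1)
```

Usage: given binary term vectors X and labels y in {-1, +1}, f = adaboost(X, y, T=10, weak_learner=stump_learner); the predicted class is the sign of f(x).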
Example
Accuracy
Weighted accuracy measure: (λL⁻ + S⁺) / (λL + S)
  λ : strictness measure
  L : number of legitimate messages, S : number of spam messages
  L⁻ : legitimate messages classified as legitimate, S⁺ : spam classified as spam
Improving accuracy:
  Increase λ
  Introduce a threshold θ: an example is classified positive only if f(x) > θ (default is zero)
Recall: correctly predicted spam out of the number of spam messages in the corpus.
Precision: correctly classified spam out of the number of messages predicted as spam.
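A small sketch of these measures, assuming 1 = spam and 0 = legitimate; the function and variable names are illustrative:

```python
def spam_metrics(y_true, y_pred, lam=1.0):
    """Weighted accuracy (lam*L- + S+) / (lam*L + S), recall and precision.
    y_true / y_pred use 1 = spam, 0 = legitimate; lam is the strictness lambda."""
    L = sum(1 for t in y_true if t == 0)                                   # legitimate messages
    S = sum(1 for t in y_true if t == 1)                                   # spam messages
    L_keep = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)   # L-: legit kept
    S_catch = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # S+: spam caught
    predicted_spam = sum(1 for p in y_pred if p == 1)
    weighted_acc = (lam * L_keep + S_catch) / (lam * L + S)
    recall = S_catch / S if S else 0.0
    precision = S_catch / predicted_spam if predicted_spam else 0.0
    return weighted_acc, recall, precision
```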
Results on corpus PU1 [1]
Table (columns: T, Recall, Precision, Accuracy) comparing boosted trees of depth 1 and depth 5 for different settings of θ and λ.
Pros and Cons
Pros:
  Fast and simple; no parameters to tune.
  Flexible: can be combined with any learning algorithm; no knowledge of the WeakLearner needed.
  Error reduces exponentially.
  Robust to overfitting.
Cons:
  Data driven: requires lots of data.
  Performance depends on the WeakLearner; may fail if the WeakLearner is too weak.
Conclusion
RIPPER as a text categorization algorithm works better than Naïve Bayes when there are more classes; the two are comparable for spam filtering (2 classes).
Boosting performs better than any weak learner it is built on.
References
[1] Xavier Carreras and Lluís Màrquez. Boosting Trees for Anti-Spam Email Filtering.
[2] Robert E. Schapire. The Boosting Approach to Machine Learning: An Overview. In MSRI Workshop on Nonlinear Estimation and Classification.
[3] David Madigan. Statistics and The War on Spam.
[4] I. Androutsopoulos, J. Koutsias, K. V. Chandrinos, G. Paliouras, and C. D. Spyropoulos. An Evaluation of Naive Bayesian Anti-Spam Filtering. In Proc. of the Workshop on Machine Learning in the New Information Age.
[5] William W. Cohen and Yoram Singer. Context-Sensitive Learning Methods for Text Categorization. SIGIR 1996.