
Fraud Detection CNN designed for anti-phishing

Contents Recap PhishZoo Approach Initial Approach Early Results On Deck

Recap GOAL: build a real-time phish detection API public boolean isItPhish(String url) { }
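The Java-style signature above can be sketched in Python as a stub pipeline. Everything here is a placeholder: `fetch_screenshot` and `detect_trademarks` are hypothetical names standing in for the rendering and detection components described in the rest of the deck.

```python
def fetch_screenshot(url):
    # Hypothetical placeholder: a real implementation would render the page
    # at `url` and return its screenshot as an image array.
    return [[0] * 200 for _ in range(200)]  # blank 200x200 grayscale "page"

def detect_trademarks(image):
    # Hypothetical placeholder for the CNN+SVM detector described later;
    # returns a list of (brand, bounding_box) hits. A blank page has none.
    return []

def is_it_phish(url):
    # Target API shape from the slide: a boolean verdict for a single URL.
    return len(detect_trademarks(fetch_screenshot(url))) > 0
```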

Recap Phishing is a BIG problem that costs the global private economy >$11B/yr

Recap Previous inspiration: vision-based detection frameworks – J. Chen, C. Huang et al., Fighting Phishing with Discriminative Keypoint Features, IEEE Computer Society, 2009 – G. Wang, H. Liu et al., Verilogo: Proactive Phishing Detection via Logo Recognition, Technical Report CS – S. Afroz, R. Greenstadt, PhishZoo: Detecting Phishing Websites by Looking at Them, Semantic Computing, 2011

Recap Why deep CNNs? – Traditional computer vision has failed – Many objects – Sparse data – Performance – Adversarial problem

Recap Bottom Line: detect targeted brand trademarks on web images

Recap So we might miss this…

Recap But we’ll catch this…

Recap Bottom Line: detect targeted brand trademarks on web images – Accuracy – False Positives – Performance

PhishZoo Approach Model the PhishZoo (Afroz et al.) approach – 1,000 phishing examples from PhishTank.com – 200 'false positive' sites from Alexa.com/topsites – 'Profile brands' – Benchmark results: compare with pure images for an "apples to apples" comparison

Initial Approach - Training Collect all logo variants for 'profiled' brands, and synthesize training data – 11 brands – Each brand had multiple logo variants (27 total; e.g. PayPal has five) – Decouple text logos from picture logos – Synthesize training data for each case by randomly re-cropping (100 crops per logo variant) – Each re-crop is taken from the full screen
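The re-cropping step above can be sketched as follows. This is a minimal sketch under assumed details: crop rectangles are drawn uniformly at random from the full screen, constrained to contain the logo's bounding box, so the logo appears at varying positions and relative scales.

```python
import random

def synthesize_crops(screen_w, screen_h, logo_box, n=100, seed=0):
    """Generate `n` random crop rectangles from a full-screen capture,
    each fully containing the logo bounding box (x0, y0, x1, y1)."""
    rng = random.Random(seed)
    lx0, ly0, lx1, ly1 = logo_box
    crops = []
    for _ in range(n):
        # Left/top corner anywhere before the logo, right/bottom anywhere
        # after it, so every crop keeps the whole logo in view.
        x0 = rng.randint(0, lx0)
        y0 = rng.randint(0, ly0)
        x1 = rng.randint(lx1, screen_w)
        y1 = rng.randint(ly1, screen_h)
        crops.append((x0, y0, x1, y1))
    return crops

# Usage: 100 crops of a 1280x800 screen around a logo at (500, 300)-(600, 340).
crops = synthesize_crops(1280, 800, (500, 300, 600, 340), n=100)
```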

Initial Approach - Training Use a 2,000-image ImageNet sample for false positives Install Caffe and use the standard ILSVRC model – Python interface – CPU mode – bvlc_reference_caffenet Generate fc7 features with the pre-trained ILSVRC model to train an SVM – Similar approach to Simonyan et al. – Use sklearn's multi-class SVM – Apply grid search to find optimal C and gamma
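The SVM stage might look like the sketch below, written against current scikit-learn (`GridSearchCV`, `SVC`); the grid values are assumptions, and random vectors stand in for fc7 activations (4096-D in the real CaffeNet, 16-D here for speed).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def train_logo_svm(fc7_features, labels):
    """Fit a multi-class RBF SVM on fc7 features, grid-searching C and
    gamma by cross-validation. The candidate grids are assumptions."""
    param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
    search.fit(fc7_features, labels)
    return search.best_estimator_

# Usage with synthetic stand-ins for fc7 features: two separated clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 16)), rng.normal(3, 1, (30, 16))])
y = np.array([0] * 30 + [1] * 30)
clf = train_logo_svm(X, y)
```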

Initial Approach - Testing Get PhishZoo data from Sadia et al. – Sadia sent a PhishZoo dataset 'similar' to the one used in the paper; it contains 945 phish pages – 730 of the 945 pages contained brand logo images – the 730 pages consist of replicates of a smaller set of 23 logo images Use an ImageNet sample for false positives Apply a sliding-window search for testing – Four degrees of freedom [x, y, scale, shape] – Vertical and horizontal stride lengths of 5 pixels – 5 scales ranging from 0.3 to 0.9 of the height – 5 box shapes
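The search grid above can be enumerated with a simple generator. The strides and scale fractions come from the slide; the specific aspect-ratio values for the 5 box shapes are assumptions.

```python
def sliding_windows(img_w, img_h, stride=5,
                    scales=(0.3, 0.45, 0.6, 0.75, 0.9),
                    aspects=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Yield (x, y, w, h) boxes: 5-pixel strides in both directions,
    5 scales as fractions of image height, 5 box shapes (aspect ratios)."""
    for scale in scales:
        h = int(scale * img_h)
        for aspect in aspects:
            w = int(h * aspect)
            if w > img_w or h > img_h:
                continue  # shape does not fit at this scale
            for y in range(0, img_h - h + 1, stride):
                for x in range(0, img_w - w + 1, stride):
                    yield (x, y, w, h)

# Usage: count the sub-images this grid produces for a 200x200 page.
n_windows = sum(1 for _ in sliding_windows(200, 200))
```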

Early Results Detect 18/23 logo images accurately (78%) – Detection errors False positives on the sliding-window images in 5 cases No false positives on the 8,000-image ImageNet sample

Early Results Performance – CNN + SVM runs at ~250 ms/image on CPU – A sliding window with 4 degrees of freedom on a 200x200 image must search ~10^6 sub-images – Lesson: CPU mode + sliding window = death

Early Results Observations – All of the detection errors were < 25 pixels; the smallest training example is ~40 pixels – Our big mistake: we trained/tested the model on ImageNet negative examples that were also used to pre-train the ILSVRC model! – The SVM grid search had a perfect cross-validation region, which should have been a giveaway

On Deck Fix mistake; get 'other' data for false-positive training Synthesize training data at the scale at which the human eye can detect trademarks Synthesize data for adversarial effects Use training data that is graphically rendered, not photographed

On Deck Data 2.0: download phish pages directly from the PhishTank.com database (~1,500/day) Real web-image data x phish size CPU → GPU (250 ms/image → 1 ms/image) Sliding window → R-CNN – 10^6 regions/image → 10^2 or 10^3 regions/image – Koen van de Sande's code is available for Caffe – May re-use our own region-proposal code
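The R-CNN shift above amounts to scoring only proposed boxes instead of an exhaustive grid. A minimal sketch, assuming a proposal list is already available (e.g. from selective search) and using a hypothetical `score_fn` in place of the CNN+SVM scorer:

```python
def classify_regions(image, proposals, score_fn, threshold=0.5):
    """Score only the proposal boxes (x, y, w, h) rather than ~10^6
    sliding windows; return the boxes whose score clears `threshold`.
    `image` is a nested list of pixel rows; `score_fn` is a stand-in
    for the expensive CNN+SVM stage."""
    hits = []
    for (x, y, w, h) in proposals:
        crop = [row[x:x + w] for row in image[y:y + h]]
        score = score_fn(crop)
        if score >= threshold:
            hits.append(((x, y, w, h), score))
    return hits
```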

On Deck Try a sliding-cascade approach – e.g. color histograms, Haar filters (Farfade et al.)
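The cascade idea is that a cheap first stage rejects most windows so the expensive CNN only sees a few survivors. A sketch of a histogram-based first stage; the bin count, the histogram-intersection metric, and the threshold are all assumptions.

```python
def histogram(pixels, bins=8):
    """Normalized 8-bin intensity histogram: a cheap first-stage feature."""
    counts = [0] * bins
    for v in pixels:
        counts[min(v * bins // 256, bins - 1)] += 1
    total = float(len(pixels))
    return [c / total for c in counts]

def cascade_filter(windows, target_hist, similarity=0.5):
    """Keep only windows whose histogram-intersection overlap with the
    target logo's histogram clears `similarity`; the CNN then runs on
    these survivors only. `windows` is a list of (box, pixels) pairs."""
    survivors = []
    for box, pixels in windows:
        h = histogram(pixels)
        overlap = sum(min(a, b) for a, b in zip(h, target_hist))
        if overlap >= similarity:
            survivors.append(box)
    return survivors
```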

On Deck Target for API (big picture) – Accuracy* (99.9%) – False positives (2%) – Performance (<10 ms/page) *Accuracy defined over sites using trademarked visual content illicitly; measured daily on 1,500 PhishTank examples and 1,000,000+ web images

On Deck Finalists at the Hi-Tech Venture Challenge, April 23rd at 4pm in Davis Auditorium