WiFi-Reports: Improving Wireless Network Selection with Collaboration Presented By Tim McDowell
WiFi-Reports Authors: ◦ Jeffrey Pang, Carnegie Mellon University ◦ Srinivasan Seshan, Carnegie Mellon University ◦ Michael Kaminsky, Intel Research Pittsburgh ◦ Ben Greenstein, Intel Research Seattle ◦ Damon McCoy, University of Colorado
Overview Background Setup Experimental Results Discussion Future Work Conclusions
Background: The Problem Figure 1: Measured Hotspots Near the University of Washington
Background: The Problem Commercial Access Points ◦ May block certain applications (e.g. SSH, file sharing, online games) ◦ Often exhibit poorer-than-advertised performance ◦ The ‘official’ AP is not always the best choice!
Background: The Problem Figure 2: (a) AP Connection Success Rate, (b) TCP Download Speed, (c) google.com Fetch Time
Setup The official AP has the lowest median latency only ~70% of the time. Why should we care about the remaining 30%?
Setup Performance depends on many factors ◦ Ranking depends on application requirements ◦ Reports include estimated back-haul capacity, ports blocked, and connectivity failures ◦ Historical information, {AP, SNR, date, connectivity}, is collected and sent to a central database
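As a rough illustration of what a report might carry and how it could drive application-aware ranking, here is a sketch; the field names (downlink_kbps, blocked_ports, etc.) and the ranking rule are assumptions for exposition, not the paper's actual schema:

```python
# Hypothetical sketch of a WiFi-Reports measurement record and a simple
# application-aware ranking. Field names are illustrative assumptions,
# not the schema used by the WiFi-Reports system itself.
from dataclasses import dataclass
from statistics import median

@dataclass
class Report:
    ap_bssid: str          # which AP the report describes
    snr_db: float          # signal-to-noise ratio observed by the client
    date: str              # when the measurement was taken
    connected: bool        # did association/DHCP succeed?
    downlink_kbps: float   # estimated back-haul capacity
    blocked_ports: set     # ports the AP was observed to block

def rank_aps(reports_by_ap, required_port=None):
    """Rank APs by median reported download speed, skipping APs that
    block a port the application needs (e.g. 22 for SSH)."""
    scores = {}
    for bssid, reports in reports_by_ap.items():
        usable = [r for r in reports
                  if r.connected
                  and (required_port is None
                       or required_port not in r.blocked_ports)]
        if usable:
            scores[bssid] = median(r.downlink_kbps for r in usable)
    return sorted(scores, key=scores.get, reverse=True)
```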
Setup Figure 3: Wifi-Reports Components & Procedure
Setup: Key Challenges ◦ Maintain user privacy ◦ Limit fraudulent reports
Setup: Privacy Issues ◦ Goal: report on an AP without revealing the user’s location ◦ A ‘blind signature’ lets users remain anonymous while still preventing fraud ◦ Anonymous reports are matched to locations using existing war-driving databases ◦ Introduces processing overhead
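To make the blind-signature step concrete, here is a minimal RSA blind-signature sketch with toy parameters; it illustrates how a client can get a report token signed without the account authority learning which AP the token describes. It is not the exact scheme or key sizes used by WiFi-Reports.

```python
# Minimal RSA blind-signature sketch (toy key, Python 3.8+ for pow(x, -1, n)).
# Illustrative only; never use key sizes like this in practice.
import secrets
from math import gcd

# Toy RSA key for the account authority.
p, q = 61, 53
n = p * q        # modulus
e = 17           # public exponent
d = 2753         # private exponent (e * d = 1 mod phi(n))

def blind(m):
    """Client: blind the token m with a random factor r coprime to n."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (m * pow(r, e, n)) % n, r

def sign_blinded(m_blind):
    """Authority: sign the blinded token without learning m."""
    return pow(m_blind, d, n)

def unblind(s_blind, r):
    """Client: strip the blinding factor to recover a valid signature on m."""
    return (s_blind * pow(r, -1, n)) % n

def verify(m, s):
    """Anyone: check the signature against the authority's public key."""
    return pow(s, e, n) == m % n

token = 42                           # e.g. a hash of the AP identifier, reduced mod n
m_blind, r = blind(token)
signature = unblind(sign_blinded(m_blind), r)
assert verify(token, signature)
```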
Setup: Privacy Issues Table 1: Blind Signature Method Processing Times (ms)
Setup: Preventing Fraud ◦ A client must contact the account authority to obtain credentials (verification, text message, etc.) before reporting on an AP ◦ Prevents a single user from submitting many fraudulent reports ◦ Using median performance values makes aggregated reports robust to a small fraction of outliers
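A quick, made-up example of why median aggregation tolerates a small number of fraudulent reports where a mean would not (the throughput values below are illustrative only):

```python
# A few inflated reports barely move the median, while they drag the mean.
from statistics import mean, median

honest = [850, 900, 920, 880, 870, 910]   # kbps reported by honest users
fraudulent = [10_000, 10_000]             # attacker claims a 10 Mbps AP

reports = honest + fraudulent
print(f"mean:   {mean(reports):.0f} kbps")    # pulled far upward by the fraud
print(f"median: {median(reports):.0f} kbps")  # stays near the honest estimate
```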
Experimental Results Figure 4: Box Plots of (a) TCP Download Speed & (b) google.com Fetch Time for Five Different AP Selection Methods
Experimental Results: Handling Fraudulent Reports Figure 5: CDF of AP Prediction Accuracy for Varying Percentages of Fraudulent Reports ◦ Robust enough to handle a small fraction of fraudulent reports ◦ Users lured to poor APs will themselves submit accurate reports, counteracting the fraud
Discussion: Limitations ◦ AP performance still varies over time ◦ The blind-signature scheme introduces significant computational overhead in order to maintain privacy and prevent fraud ◦ Persistent attackers can collude, each submitting fraudulent reports under their own credentials
Discussion: Limitations ◦ Selection is subjective (depends on cost, venue, service providers’ reputations) ◦ Historical information loses accuracy over time and does not account for new APs ◦ Not beneficial in regions with sparse Wi-Fi coverage ◦ Heavily reliant on war-driving databases
Future Work ◦ Geo-locating moving access points to deal with mobile users ◦ Improving robustness to fraud ◦ More accurate location determination
Conclusions ◦ WiFi-Reports allows users to make informed AP choices in areas of dense AP coverage ◦ Users must decide for themselves whether that freedom of choice is worth the extra overhead
Questions?