1
Privacy-Preserving Support Vector Machines via Random Kernels
Olvi Mangasarian, UW Madison & UCSD La Jolla
Edward Wild, UW Madison
November 14, 2015
2
Vertically Partitioned Data vs. Horizontally Partitioned Data
[Figure: the data matrix A, with examples 1, …, m as rows and features 1, …, n as columns. Horizontal partitioning splits the rows into blocks A1, A2, A3; vertical partitioning splits the columns into blocks A·1, A·2, A·3.]
3
Problem Statement
Entities with related data wish to learn a classifier based on all of the data
The entities are unwilling to reveal their data to each other
–If each entity holds a different set of features for all examples, the data is said to be vertically partitioned
–If each entity holds a different set of examples with all features, the data is said to be horizontally partitioned
Our approach: a privacy-preserving support vector machine (PPSVM) using random kernels
–Provides accurate classification
–Does not reveal private information
4
Outline
Support vector machines (SVMs)
Reduced and random kernel SVMs
Privacy-preserving SVM for vertically partitioned data
Privacy-preserving SVM for horizontally partitioned data
Summary
5
Support Vector Machines
x ∈ R^n is classified by the nonlinear surface K(x', A')u = γ; the SVM determines the parameters u and the threshold γ
A contains all data points: {+…+} ⊂ A+, {−…−} ⊂ A−; e is a vector of ones
Bounding surfaces: K(x', A')u = γ + 1 and K(x', A')u = γ − 1
K(A+, A')u ≥ eγ + e and K(A−, A')u ≤ eγ − e
A slack variable y ≥ 0 allows points to be on the wrong side of the bounding surfaces
Minimize e's (= ||u||_1 at the solution) to reduce overfitting
Minimize e'y (the hinge loss, i.e. the plus function max{·, 0}) to fit the data
Linear kernel: (K(A, B))_ij = (AB)_ij = A_i B·j = K(A_i, B·j)
Gaussian kernel with parameter μ: (K(A, B))_ij = exp(−μ ||A_i' − B·j||²)
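To make the kernel definitions above concrete, here is a minimal NumPy sketch (not from the slides). The second argument C plays the role of B (or A') in the formulas, and mu stands for the Gaussian parameter whose symbol did not survive on the slide.

```python
import numpy as np

def linear_kernel(A, C):
    """(K(A, C))_ij = (AC)_ij = A_i C.j  (the slide's linear kernel)."""
    return A @ C

def gaussian_kernel(A, C, mu):
    """(K(A, C))_ij = exp(-mu * ||A_i' - C.j||^2)  (the slide's Gaussian kernel)."""
    # Squared distances between the rows of A and the columns of C.
    sq = (np.sum(A * A, axis=1)[:, None]
          + np.sum(C * C, axis=0)[None, :]
          - 2.0 * A @ C)
    return np.exp(-mu * sq)

# For the SVM above the kernel is evaluated as K(A, A'), e.g. gaussian_kernel(A, A.T, mu).
```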
6
From the Support Vector Machine to the Random Reduced Support Vector Machine
Reduced SVM (L&M, 2001): replace the kernel matrix K(A, A') with K(A, Ā'), where Ā consists of a randomly selected subset of the rows of A
Random reduced SVM (M&T, 2006): replace the kernel matrix K(A, A') with K(A, B'), where B is a completely random matrix
Using the random kernel K(A, B') is a key result for generating a simple and accurate privacy-preserving SVM
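A hedged sketch of the random-kernel idea: replace K(A, A') with K(A, B') for a completely random B with about 10% as many rows as A (the setting used on the next slide), and fit a standard linear classifier on those kernel columns. scikit-learn's LinearSVC (L2-regularized hinge loss) is used here only as a stand-in for the 1-norm SVM of the previous slide.

```python
import numpy as np
from sklearn.svm import LinearSVC

def fit_random_kernel_svm(A, d, frac=0.1, seed=0):
    """A: m x n data matrix, d: vector of +/-1 labels."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    B = rng.standard_normal((max(1, int(frac * m)), n))  # completely random matrix
    K = A @ B.T                                           # random linear kernel K(A, B')
    clf = LinearSVC().fit(K, d)                           # stand-in for the SVM of the previous slide
    return B, clf

def predict(B, clf, X):
    return clf.predict(X @ B.T)                           # classify new points via K(x', B')
```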
7
Error of Random Kernels is Comparable to Full Kernels: Linear Kernels
[Figure: scatter plot of random kernel AB' error vs. full kernel AA' error; each point represents one of 7 datasets from the UCI repository; the diagonal marks equal error for random and full kernels. B is a random matrix with the same number of columns as A and 10% as many rows, so dim(AB') << dim(AA').]
8
Error of Random Kernels is Comparable to Full Kernels: Gaussian Kernels
[Figure: scatter plot of random kernel K(A, B') error vs. full kernel K(A, A') error.]
9
Vertically Partitioned Data: Each Entity Holds Different Features for the Same Examples
[Figure: the columns of A split into blocks A·1, A·2, A·3, one block per entity.]
10
Serial Secure Computation of the Linear Kernel AA' (Yu-Vaidya-Jiang 2006)
[Figure: the kernel is accumulated serially: entity 1 computes A·1 A·1' + R1 with a random matrix R1, entity 2 adds A·2 A·2' to give (A·1 A·1' + R1) + A·2 A·2', and entity 3 adds A·3 A·3'.]
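A toy sketch of the serial accumulation pictured above. This mirrors only the data flow on the slide; the actual Yu-Vaidya-Jiang protocol involves more machinery, and the final unmasking step (entity 1 subtracting its random R1) is an assumption made here so that the result equals AA'.

```python
import numpy as np

def serial_linear_kernel(vertical_blocks, seed=0):
    """vertical_blocks: [A.1, A.2, ...], each m x n_j, held by different entities."""
    rng = np.random.default_rng(seed)
    m = vertical_blocks[0].shape[0]
    R1 = rng.standard_normal((m, m))              # entity 1's random mask
    running = vertical_blocks[0] @ vertical_blocks[0].T + R1
    for Aj in vertical_blocks[1:]:                # passed serially; each entity adds A.j A.j'
        running = running + Aj @ Aj.T
    return running - R1                           # assumption: entity 1 removes its mask, yielding AA'
```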
11
Our Parallel Secure Computation of the Random Linear Kernel AB'
[Figure: entities 1, 2, 3 compute their blocks A·1 B·1', A·2 B·2', A·3 B·3' in parallel and distribute them.]
12
Privacy-Preserving SVMs for Vertically Partitioned Data via Random Kernels
Each of p entities privately owns a block of data A·1, …, A·p that it is unwilling to share with the others
Each entity j picks its own random matrix B·j and distributes K(A·j, B·j') to the other p − 1 entities
K(A, B') = K(A·1, B·1') ⊕ … ⊕ K(A·p, B·p')
–⊕ is + for the linear kernel
–⊕ is the Hadamard (element-wise) product for the Gaussian kernel
A new point x = (x1', …, xp')' can be distributed amongst the entities by similarly computing K(x', B') = K(x1', B·1') ⊕ … ⊕ K(xp', B·p')
Recovering A·j from K(A·j, B·j') without knowing B·j is essentially impossible
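A minimal sketch of this parallel scheme. The function names are illustrative; each entity keeps A·j and its private B·j, publishes only K(A·j, B·j'), and the blocks are combined with + (linear kernel) or an element-wise product (Gaussian kernel). All B·j are assumed to share the same agreed number of rows so the blocks have matching shapes.

```python
import numpy as np

def entity_share(A_j, B_j, mu=None):
    """One entity's public block K(A.j, B.j'): linear if mu is None, Gaussian otherwise."""
    if mu is None:
        return A_j @ B_j.T
    sq = (np.sum(A_j * A_j, 1)[:, None] + np.sum(B_j * B_j, 1)[None, :]
          - 2.0 * A_j @ B_j.T)
    return np.exp(-mu * sq)

def combine_vertical(shares, mu=None):
    """K(A, B') = sum of the blocks (linear) or their Hadamard product (Gaussian)."""
    out = shares[0].copy()
    for S in shares[1:]:
        out = out + S if mu is None else out * S
    return out
```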
13
Results for PPSVM on Vertically Partitioned Data
Compare classifiers which share feature data with classifiers which do not share
–Seven datasets from the UCI repository
Simulate situations in which each entity has only a subset of the features
–In the first situation, the features are evenly divided between 5 entities
–In the second situation, each entity receives about 3 features
14
Error Rate of Sharing Data is Generally Better than Not Sharing: Linear Kernels
[Figure: error rate without sharing data vs. error rate with sharing data; the 7 datasets are represented by two points each.]
15
Error Rate of Sharing Data is Generally Better than Not Sharing: Nonlinear Kernels
[Figure: error rate without sharing data vs. error rate with sharing data.]
16
Horizontally Partitioned Data: Each Entity Holds Different Examples with the Same Features
[Figure: the rows of A split into blocks A1, A2, A3, one block per entity.]
17
Privacy-Preserving SVMs for Horizontally Partitioned Data via Random Kernels
Each of q entities privately owns a block of data A1, …, Aq that it is unwilling to share with the other q − 1 entities
The entities all agree on the same random basis matrix B, and each distributes K(Aj, B') to all entities
K(A, B') is obtained by stacking the blocks K(A1, B'), …, K(Aq, B') vertically
Aj cannot be recovered uniquely from K(Aj, B')
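A minimal sketch of the horizontal case, under the assumptions that the agreed-upon B is generated from a shared random seed and has k_bar < n rows (k_bar is an illustrative name); each entity publishes only K(A_j, B') and the blocks are stacked.

```python
import numpy as np

def horizontal_kernel(row_blocks, n_features, k_bar, shared_seed=0):
    """row_blocks: [A_1, ..., A_q], each m_j x n, held by different entities."""
    rng = np.random.default_rng(shared_seed)         # stands in for agreeing on a common B
    B = rng.standard_normal((k_bar, n_features))     # k_bar < n_features, so A_j B' is not invertible
    shares = [A_j @ B.T for A_j in row_blocks]       # each entity publishes only K(A_j, B')
    return B, np.vstack(shares)                      # K(A, B') = the vertically stacked blocks
```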
18
Privacy Preservation: Infinite Number of Solutions for Ai Given Ai B'
Given the random matrix B, which has fewer rows than the number of features n, and the published block Pi = Ai B'
–Consider an attempt to solve for row r of Ai, 1 ≤ r ≤ mi, from the equation B Air' = Pir', with Air' ∈ R^n
–Every square submatrix of the random matrix B is nonsingular
–The system has fewer equations than unknowns, so there are many solutions Ai to the equation B Ai' = Pi'
If each entity has 20 points in R^30, there are at least 30^20 solutions
Furthermore, each of the infinite number of matrices in the affine hull of these matrices is a solution
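A quick numerical check of this argument (not from the slides): any matrix whose rows lie in the null space of B can be added to Ai without changing Ai B', so the published block cannot pin down Ai. The sizes below match the slide's example of 20 points in R^30; the 10 rows chosen for B are an assumption.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
m_i, n, k_bar = 20, 30, 10            # 20 points in R^30; B with 10 rows (assumed)
A_i = rng.standard_normal((m_i, n))
B = rng.standard_normal((k_bar, n))

Z = null_space(B)                                   # basis of {x : Bx = 0}, shape n x (n - k_bar)
N = rng.standard_normal((m_i, Z.shape[1])) @ Z.T    # rows of N lie in the null space of B
A_alt = A_i + N                                     # a different matrix with the same block

print(np.allclose(A_i @ B.T, A_alt @ B.T))  # True: identical published block
print(np.allclose(A_i, A_alt))              # False: different private data
```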
19
Results for PPSVM on Horizontally Partitioned Data
Compare classifiers which share examples with classifiers which do not share
–Seven datasets from the UCI repository
Simulate a situation in which each entity has only a subset of about 25 examples
20
Error Rate of Sharing Data is Better than Not Sharing: Linear Kernels
[Figure: error without sharing data vs. error sharing data.]
21
Error Rate of Sharing Data is Better than Not Sharing: Gaussian Kernels
[Figure: error without sharing data vs. error sharing data.]
22
Summary
Privacy-preserving SVM for vertically or horizontally partitioned data
–Based on the random kernel K(A, B')
–Learns a classifier using all the data, but without revealing privately held data
–Classification accuracy is better than an SVM without sharing, and comparable to an SVM where all the data is shared