Properties of Kernels
Presenter: Hongliang Fei
Date: June 11, 2009

Overview
Inner product and Hilbert space
Characteristics of kernels
The kernel matrix
Kernel construction

Hilbert spaces
Linear function: Given a vector space X over the reals, a function f: X → R is linear if f(ax) = a f(x) and f(x + z) = f(x) + f(z) for all x, z ∈ X and a ∈ R.
Inner product space: A vector space X over the reals R is an inner product space if there exists a real-valued symmetric bilinear (linear in each argument) map ⟨·,·⟩ that satisfies ⟨x, x⟩ ≥ 0 for all x ∈ X. The inner product is strict if ⟨x, x⟩ = 0 implies x = 0.

Hilbert spaces
A Hilbert space F is an inner product space with the additional properties that it is separable and complete. Completeness refers to the property that every Cauchy sequence {h_n}_{n≥1} of elements of F converges to an element h ∈ F. A space F is separable if and only if it admits a countable orthonormal basis.

Cauchy–Schwarz inequality
In an inner product space, ⟨x, z⟩² ≤ ⟨x, x⟩ ⟨z, z⟩, and in a strict inner product space the equality sign holds if and only if x and z are rescalings of the same vector.

Gram matrix
Given a set of vectors {x_1, …, x_n}, the Gram matrix is the n × n matrix G with entries G_ij = ⟨x_i, x_j⟩. When the inner products are evaluated in a feature space through a kernel κ with feature map φ, the resulting kernel matrix has entries K_ij = κ(x_i, x_j) = ⟨φ(x_i), φ(x_j)⟩.
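As a small illustration (with made-up data points), the Gram matrix of a sample, and the kernel matrix of the same sample under a Gaussian (RBF) kernel, can be computed like this:

```python
import numpy as np

# Hypothetical data: 4 points in R^2 (illustrative values only).
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [2.0, 1.0]])

# Gram matrix of plain inner products: G[i, j] = <x_i, x_j>.
G = X @ X.T

# Kernel matrix under a Gaussian kernel with bandwidth sigma:
# K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2)).
sigma = 1.0
sq_norms = np.sum(X ** 2, axis=1)
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * (X @ X.T)
K = np.exp(-sq_dists / (2 * sigma ** 2))
```

Both matrices are symmetric, and the Gaussian kernel matrix has ones on its diagonal since each point is at distance zero from itself.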

Positive semi-definite matrices
A symmetric matrix A is positive semi-definite if v^T A v ≥ 0 for all v, or equivalently, if its eigenvalues are all non-negative. It is positive definite if its eigenvalues are all positive. Gram and kernel matrices are positive semi-definite.
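A quick numerical check of this property: for any Gram matrix G = X X^T we have v^T G v = ||X^T v||^2 ≥ 0, so all of its eigenvalues are non-negative (up to round-off). A sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
G = X @ X.T                       # Gram matrix, symmetric by construction

# v^T G v = ||X^T v||^2 >= 0 for every v, so G is positive semi-definite;
# equivalently, all eigenvalues of the symmetric matrix G are >= 0.
eigvals = np.linalg.eigvalsh(G)
is_psd = bool(np.all(eigvals >= -1e-10))   # small tolerance for round-off
```

Since X here has only 3 columns, G has rank at most 3, so two of the five eigenvalues are (numerically) zero: semi-definite, not definite.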

Finitely positive semi-definite functions
A function satisfies the finitely positive semi-definite property if it is a symmetric function for which the matrices formed by restriction to any finite subset of the space X are positive semi-definite.

Mercer Kernel Theorem
A function which is either continuous or has a finite domain can be decomposed into a feature map φ into a Hilbert space F applied to both its arguments, followed by the evaluation of the inner product in F, if and only if it satisfies the finitely positive semi-definite property.

The kernel matrix
Implementation issues
Kernels and prior knowledge
Kernel selection
Kernel alignment

Kernel Selection
Ideally, select the optimal kernel based on prior knowledge of the problem domain. In practice, consider a family of kernels defined in a way that reflects our prior expectations. A simple approach requires only a limited amount of additional information from the training data; a more elaborate approach also combines label information.

Kernel Alignment
Alignment measures the similarity between two kernels. The alignment A(K1, K2) between two kernel matrices K1 and K2 is given by A(K1, K2) = ⟨K1, K2⟩_F / √(⟨K1, K1⟩_F ⟨K2, K2⟩_F), where ⟨K1, K2⟩_F = Σ_{i,j} (K1)_{ij} (K2)_{ij} is the Frobenius inner product.
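A minimal sketch of this definition (the matrix K below is a made-up example):

```python
import numpy as np

def alignment(K1, K2):
    """Alignment between two kernel matrices:
    A(K1, K2) = <K1, K2>_F / sqrt(<K1, K1>_F <K2, K2>_F)."""
    num = np.sum(K1 * K2)                              # Frobenius inner product
    den = np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))
    return num / den

K = np.array([[2.0, 1.0],
              [1.0, 2.0]])
# Alignment is scale-invariant: a kernel is perfectly aligned
# with any positive rescaling of itself.
a_self = alignment(K, 3.0 * K)
```

Because both numerator and denominator scale linearly with each argument, A(K, cK) = 1 for any c > 0, so alignment compares the "shape" of two kernels rather than their magnitude.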

Kernel Construction

Operations on kernel matrices
Simple transformations:
Centering the data
Subspace projection: chapter 6
Whitening: set all eigenvalues to 1 (spherically symmetric)
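Centering, for instance, can be performed directly on the kernel matrix without access to the feature vectors. A sketch, using the standard centering identity Kc = (I − 11ᵀ/n) K (I − 11ᵀ/n) and made-up data:

```python
import numpy as np

def center_kernel(K):
    """Center a kernel matrix so the mapped points have zero mean in
    feature space: Kc = (I - 11^T/n) K (I - 11^T/n)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return H @ K @ H

X = np.array([[1.0, 2.0],
              [3.0, 0.0],
              [0.0, 1.0]])
K = X @ X.T                               # linear kernel on illustrative data
Kc = center_kernel(K)

# For the linear kernel, centering K equals the Gram matrix of
# explicitly mean-centered data.
Xc = X - X.mean(axis=0)
```

The same formula applies to any kernel matrix, which is the point: the data need only exist implicitly through K.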

That's all. Any questions?