Ron Yanovich & Guy Peled 1

Contents Grayscale coloring background Luminance / Luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) Training (ii) Classification (iii) Color transfer (iv) Optimization 2

Grayscale coloring background Colorization definition: ‘The process of adding colors to a monochrome image.’ 3

Grayscale coloring background Colorization is a term introduced by Wilson Markle in 1970 to describe the computer-assisted process he invented for adding color to black and white movies or TV programs. 4

Grayscale coloring background Black magic (PC tool) – motion video and film colorization. “Color transfer between images” (Reinhard et al.) – transferring the color palette from one color image to another. “Transferring color to greyscale images” (Welsh et al.) – colorizes an image by matching small pixel neighborhoods in the image to those in the reference image. “Unsupervised colorization of black-and-white cartoons” (Sykora et al.) – colorization of (segmented) black-and-white cartoons using patch-based sampling and probabilistic reasoning. 5

6 Reinhard et al. Black magic (tool)

7 Welsh et al. Sykora et al.

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 8

Luminance / Luminance channel Luminance: the amount of light that passes through or is emitted from a particular area. Luminance channel: Y – a full-resolution plane that represents the mean luminance information only; U, V – full-resolution (or lower) planes that represent the chroma (color) information only. 9
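As a concrete illustration of the Y/U/V split described above, here is a minimal Python sketch (not from the original slides) that converts an RGB image to YUV using the standard BT.601 weights; the array name rgb and the use of NumPy are assumptions made for the example.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YUV.

    Uses the standard BT.601 weights: Y holds the luminance,
    U and V hold the chroma (color) information.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance channel
    u = 0.492 * (b - y)                      # chroma
    v = 0.877 * (r - y)                      # chroma
    return np.stack([y, u, v], axis=-1)
```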

Luminance / Luminance channel 10

11 Luminance / Luminance channel

12 Luminance / Luminance channel

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 13

Segmentation The process of partitioning a digital image into multiple segments (sets of pixels, also known as superpixels) 14

Segmentation Makes the image more meaningful and easier to analyze: objects and boundaries are located by assigning a label to every pixel in the image. 15

Segmentation ‘Superpixel’ - A polygonal part of a digital image, larger than a normal pixel, that is rendered in the same color and brightness 16

Segmentation A possible implementation is mean-shift segmentation. 17
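The slides do not show how the mean-shift segmentation is implemented; the following is a hedged Python sketch of one possible approach, clustering pixels in a joint spatial-color feature space with scikit-learn's MeanShift. The parameter names and values (spatial_weight, quantile, subsample) are illustrative assumptions, not settings from the paper.

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

def mean_shift_segmentation(image, spatial_weight=0.5, quantile=0.1, subsample=2000):
    """Cluster pixels in a joint (x, y, color) feature space with mean shift.

    Returns an H x W label image; each label corresponds to one segment
    ('superpixel'-like region of similar position and color).
    """
    h, w, c = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    features = np.column_stack([
        spatial_weight * xs.ravel(),     # spatial coordinates, down-weighted
        spatial_weight * ys.ravel(),
        image.reshape(-1, c),            # color values
    ])
    bandwidth = estimate_bandwidth(features, quantile=quantile, n_samples=subsample)
    ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
    labels = ms.fit_predict(features)
    return labels.reshape(h, w)
```

Mean shift is expensive on full-resolution images, so in practice this sketch is only meant for small images or a subsampled feature set.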

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 18

Discrete Cosine Transform A DCT expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. The DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but it uses only real numbers. 19
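For instance, a k x k luminance patch can be turned into k*k real-valued DCT coefficients with SciPy; a minimal sketch follows (the random patch is just a stand-in for real image data).

```python
import numpy as np
from scipy.fft import dctn, idctn

# 2-D DCT of a k x k image patch: the patch is expressed as a weighted sum of
# cosine patterns at different horizontal/vertical frequencies (real numbers only).
k = 8
patch = np.random.rand(k, k)                 # stand-in for a k x k luminance patch
coeffs = dctn(patch, type=2, norm="ortho")   # k*k DCT coefficients
reconstructed = idctn(coeffs, type=2, norm="ortho")
assert np.allclose(patch, reconstructed)     # the transform is invertible
```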

Discrete Cosine Transform 20

Discrete Cosine Transform Can be used for compression 21

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 22

K-nearest-neighbor (Knn) In pattern recognition, the k-nearest neighbor algorithm (k-NN) is a non-parametric method for classifying objects based on closest training examples in the feature space. 23

K-nearest-neighbor (Knn) All instances are points in an n-dimensional space; “closeness” between points is determined by some distance measure. Classification is made by a majority vote among the k nearest neighbors. 24

K-nearest-neighbor – 2D example 25 Given n labeled points of two classes (a and b) and a new point, the point's class is decided by its k nearest neighbors; the illustration shows the classification for k=2 and for k=5.
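A minimal Python sketch of the majority-vote rule on toy 2-D data like the example above (the coordinates and labels are made up for illustration):

```python
import numpy as np
from collections import Counter

def knn_classify(train_points, train_labels, query, k):
    """Classify `query` by a majority vote among its k nearest training points
    (Euclidean distance)."""
    dists = np.linalg.norm(train_points - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_labels[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D data with two classes, 'a' and 'b':
pts = np.array([[1, 1], [1, 2], [2, 1], [2, 2],
                [6, 6], [6, 7], [7, 6], [7, 7]], float)
lbl = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(knn_classify(pts, lbl, np.array([2.5, 2.0]), k=2))   # -> 'a'
print(knn_classify(pts, lbl, np.array([5.0, 5.0]), k=5))   # -> 'b'
```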

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 26

Linear discriminant analysis (LDA) In the field of machine learning, the goal of statistical classification is to use an object's characteristics to identify which class (or group) it belongs to 27 Background

Linear discriminant analysis (LDA) A linear classifier achieves this by making a classification decision based on the value of a linear combination of the characteristics. An object's characteristics are also known as feature values and are typically presented to the machine in a vector called a feature vector. 28 Background

Linear discriminant analysis (LDA) There are two broad classes of methods for determining the parameters of a linear classifier: generative models (based on conditional density functions), such as LDA (Fisher’s linear discriminant), and discriminative models, such as the support vector machine (SVM). 29 Background

Linear discriminant analysis (LDA) Discriminative training often yields higher accuracy than modeling the conditional density functions. However, handling missing data is often easier with conditional density models 30 Background

Linear discriminant analysis (LDA) LDA seeks to reduce dimensionality while preserving as much of the class-discriminatory information as possible. It finds a linear subspace that maximizes class separability among the feature-vector projections in this space. 31

LDA – two classes Having a set of D-dimensional samples {x1, …, xN}, divided into two groups: N1 samples belong to class w1 and N2 samples belong to class w2. We seek to obtain a scalar y by projecting the samples x onto a line: y = w^T x. 32

LDA – two classes Of all the possible lines we would like to select the one that maximizes the separability of the scalars 33

LDA – two classes Trying to separate the two classes by projecting them onto different lines: unsuccessful separation. 34

LDA – two classes Trying to separate the two classes by projecting them onto different lines: successful separation, reducing the problem dimensionality from two features (x1, x2) to a single scalar value y. 35

In order to find a good projection vector, we need to define a measure of separation. One candidate measure is the distance between the projected means; note, however, that the axis with the larger distance between means is not necessarily the axis that yields better class separability. 36

LDA – two classes - Fisher’s solution Fisher suggested maximizing the difference between the projected means, normalized by a measure of the within-class scatter. For each class we define the scatter of its projected samples, an equivalent of the variance; the sum of the per-class scatters is the within-class scatter. The Fisher linear discriminant is defined as the linear function that maximizes the criterion function. 37

LDA – two classes - Fisher’s solution Therefore, we are looking for a projection where samples from the same class are projected very close to each other and, at the same time, the projected means are as far apart as possible. 38
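The formulas on these slides appeared as images and were not captured in the transcript; for reference, the standard two-class Fisher criterion they describe can be written (in LaTeX) as:

```latex
\tilde{m}_i = \frac{1}{N_i}\sum_{y \in \omega_i} y,
\qquad
\tilde{s}_i^{\,2} = \sum_{y \in \omega_i} \bigl(y - \tilde{m}_i\bigr)^2,
\qquad
J(w) = \frac{\bigl(\tilde{m}_1 - \tilde{m}_2\bigr)^2}{\tilde{s}_1^{\,2} + \tilde{s}_2^{\,2}}
     = \frac{w^{T} S_B\, w}{w^{T} S_W\, w},
\qquad
w^{*} \propto S_W^{-1}\,(m_1 - m_2),
```

where y = w^T x are the projected samples, S_B is the between-class scatter matrix and S_W is the within-class scatter matrix.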

Two Classes - Example Two sample classes, X1 and X2. 39

Two Classes - Example μ1 and μ2 are the mean vectors of each class; S1 and S2 are the covariance matrices of X1 and X2 (the scatter). 40

41 Two Classes - Example Sb is the between-class scatter matrix; Sw is the within-class scatter matrix.

42 Two Classes - Example Finding the eigenvalues and eigenvectors of Sw^-1 Sb.

43 Two Classes - Example The LDA projection found by Fisher’s linear discriminant. As can be seen, the projection vector associated with the largest eigenvalue provides the highest discrimination power between the classes.
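The numeric values used in the slides' example were not captured in the transcript; the following NumPy sketch runs the same two-class computation (class means, scatter matrices, eigen-decomposition of Sw^-1 Sb) on illustrative data of my own.

```python
import numpy as np

# Two-class Fisher LDA on toy 2-D data (illustrative values, not the slides').
X1 = np.array([[4, 2], [2, 4], [2, 3], [3, 6], [4, 4]], float)    # class w1
X2 = np.array([[9, 10], [6, 8], [9, 5], [8, 7], [10, 8]], float)  # class w2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)          # class mean vectors
S1 = (X1 - m1).T @ (X1 - m1)                       # per-class scatter
S2 = (X2 - m2).T @ (X2 - m2)
Sw = S1 + S2                                       # within-class scatter matrix
Sb = np.outer(m1 - m2, m1 - m2)                    # between-class scatter matrix

# Projection vector: eigenvector of Sw^-1 Sb with the largest eigenvalue
# (for two classes this is proportional to Sw^-1 (m1 - m2)).
eigvals, eigvecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
w = eigvecs[:, np.argmax(eigvals.real)].real
print("projection vector w:", w / np.linalg.norm(w))

y1, y2 = X1 @ w, X2 @ w                            # projected (scalar) samples
```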

LDA Limitation LDA is a parametric method, since it assumes Gaussian conditional density models. Therefore, if the sample distributions are non-Gaussian, LDA will have difficulty classifying complex structures. 44

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 45

Colorization using optimization The user scribbles desired colors inside regions, and the colors are propagated to all pixels, working in the YUV space. Remember: neighboring pixels with similar intensities should have similar colors. 46 Levin et al.

Colorization using optimization Input: Y(x, y, t) – intensity volume. Output: U(x, y, t) and V(x, y, t) – color volumes. Each pixel’s color is constrained to be close to a weighted average of its neighbors’ colors, where w_rs is a weighting function that sums to one and is large when the intensities of r and s are similar; μ_r and σ_r are the mean and variance of the intensities in a window around the pixel. 47 Levin et al.
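A hedged Python sketch of this propagation for a single chroma channel is shown below. It is a simplified variant that forces each unscribbled pixel's chroma to equal the weighted average of its neighbors' chroma (rather than solving the paper's full least-squares problem), with weights based on intensity similarity; all function and variable names are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def colorize_channel(Y, scribble_mask, scribble_values, window=1, eps=1e-6):
    """Propagate one chroma channel from scribbled pixels over a luminance image.

    Y               : H x W luminance array (floats)
    scribble_mask   : H x W bool array, True where a color was supplied
    scribble_values : H x W array with the supplied chroma values at scribbles
    """
    H, W = Y.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    rows, cols, vals = [], [], []

    for yy in range(H):
        for xx in range(W):
            r = idx[yy, xx]
            rows.append(r); cols.append(r); vals.append(1.0)
            if scribble_mask[yy, xx]:
                continue                      # scribbled pixels keep their given value
            ys = slice(max(0, yy - window), min(H, yy + window + 1))
            xs = slice(max(0, xx - window), min(W, xx + window + 1))
            nbr_idx = idx[ys, xs].ravel()
            nbr_lum = Y[ys, xs].ravel()
            keep = nbr_idx != r
            nbr_idx, nbr_lum = nbr_idx[keep], nbr_lum[keep]
            sigma2 = max(nbr_lum.var(), eps)  # local intensity variance
            w = np.exp(-(nbr_lum - Y[yy, xx]) ** 2 / (2.0 * sigma2))
            w /= w.sum()                      # weights sum to one
            rows.extend([r] * len(nbr_idx))
            cols.extend(nbr_idx.tolist())
            vals.extend((-w).tolist())

    A = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))
    b = np.where(scribble_mask, scribble_values, 0.0).ravel()
    return spsolve(A, b).reshape(H, W)
```

Calling this once for U and once for V, with the same scribble mask, yields the two chroma channels of the colorized result.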

48 Colorization using optimization

49 Colorization using optimization

Contents Grayscale coloring background luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis (LDA) Colorization using optimization Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 50

Colorization by Example The main disadvantage of the Levin et al. process is the need to manually add colored scribbles. If we could place colored scribbles automatically, we could let the Levin et al. process do the rest. Given a reference color image and a grayscale one, the new process should output a colored image. 51 R. Irony, D. Cohen-Or and D. Lischinski

Colorization by Example 52 (Input grayscale image; automatically created scribbled image; input reference colored image; output colored image)

Training stage 53 Segment reference image

Training stage 54 Use the reference image in the luminance space (the Y dimension)

Training stage 55 Randomly extract the k x k pixel neighborhood surrounding a single pixel (and match it to its given label).

Training stage 56 Extract the DCT coefficients of each k x k neighborhood and add them to the training set as a feature vector.

Colorization by Example Colorization by example has four stages. I. Training The luminance channel of the reference image, along with the accompanying partial segmentation, is provided as a training set, from which we construct a feature space and a corresponding classifier. To classify pixels, the classifier must be able to distinguish between the different classes based mainly on texture. 57

Classification stage For each pixel of the grayscale image, extract the K x K pixel neighborhood surrounding it, apply the DCT transform to the K x K pixels to obtain its feature vector, and feed the vector to the classifier. The classifier examines the K nearest neighbors of the feature vector and chooses the label by a majority vote. 58

Classification stage 59 Sometimes Knn is not enough

Classification stage For better results, find a discriminating subspace using LDA. 60

Classification stage Applying Knn in a discriminating subspace 61

Classification stage The result of this process is a transformation T, which maps the vector of k^2 DCT coefficients to a point in the low-dimensional discriminating subspace. Let f(p) denote the feature vector (DCT coefficients) of pixel p; the distance between pixels p and q is then defined as dist(p, q) = ||T f(p) - T f(q)||. 62
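A hedged Python sketch of the training-plus-classification pipeline described so far (DCT patch features, an LDA-learned discriminating subspace, kNN voting in that subspace), using scikit-learn. The patch size, sample counts, uniform random sampling, and omitted border handling are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def patch_feature(luma, y, x, k=7):
    """DCT coefficients of the k x k neighborhood centered on (y, x)."""
    h = k // 2
    patch = luma[y - h:y + h + 1, x - h:x + h + 1]
    return dctn(patch, norm="ortho").ravel()

def train_classifier(ref_luma, ref_labels, n_samples=2000, k=7, n_neighbors=5):
    """Build DCT features from labeled reference pixels, project them with LDA,
    and fit a kNN classifier in the resulting discriminating subspace."""
    h = k // 2
    rng = np.random.default_rng(0)
    feats, labels = [], []
    for _ in range(n_samples):                       # uniform sampling, for illustration
        y = rng.integers(h, ref_luma.shape[0] - h)
        x = rng.integers(h, ref_luma.shape[1] - h)
        feats.append(patch_feature(ref_luma, y, x, k))
        labels.append(ref_labels[y, x])
    lda = LinearDiscriminantAnalysis().fit(feats, labels)          # transformation T
    knn = KNeighborsClassifier(n_neighbors=n_neighbors)
    knn.fit(lda.transform(feats), labels)                          # kNN in the subspace
    return lda, knn

def classify_pixel(gray_luma, y, x, lda, knn, k=7):
    # Assumes (y, x) is at least k//2 pixels away from the image border.
    f = patch_feature(gray_luma, y, x, k).reshape(1, -1)
    return knn.predict(lda.transform(f))[0]
```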

Classification stage 63 (Grayscale image; naive nearest neighbor; voting in feature space)

Classification stage Replace the label of p with the dominant label in its k×k neighborhood, where the dominant label is the label with the highest confidence conf(p, l). 64

Classification stage 65 p – middle pixel N(p) – pixels from K x K neighbors N(p, l ) - pixels from K x K neighbors with label l W x – (x – some pixel) weight function, depend on the distance between the pixel x and its best match M x - (x – some pixel) the nearest neighbor of q in the feature space, which has the same label as q Note: this improvement done in the image space

Classification stage 66

Classification stage 67

Colorization by Example II. Classification Attempt to robustly determine, for each grayscale image pixel, which region should be used as a color reference for this pixel, using the pixel’s nearest neighbors in the feature space for classification. 68

Color transfer stage Getting the color for each grayscale pixel. 69 Notation: C(p) – the chrominance coordinates of a pixel p; M_q(p) – the pixel in the colored reference image whose position with respect to M_q is the same as the position of p with respect to q.

Color transfer stage Each neighbor of p (e.g., q and r) has a matching neighborhood in the reference image (M_q and M_r, respectively), each of which “predicts” a different color for p. The color of p is a weighted average of these predictions. 70
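Written out from the description above (a reconstruction from the slide text, not a verbatim copy of the paper's equation), the transferred chrominance is the weighted average of the neighbors' predictions:

```latex
C(p) \;=\;
\frac{\sum_{q \in N(p)} W_q \; C\!\left(M_q(p)\right)}{\sum_{q \in N(p)} W_q}.
```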

Colorization by Example III. Color transfer The matches found for each pixel and its image-space neighbors also determine the color that should be assigned to each pixel, along with a measure of confidence in that choice. 71

Optimization stage Transferring color in this manner already produces a colorized result; however, since some areas might still be misclassified, the colorization will be wrong in those areas. 72

Optimization stage To improve the colorization, color is transferred only to pixels whose confidence in their label is sufficiently large, conf(p, l) > 0.5. Those pixels are treated as “micro-scribbles”. 73
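A tiny Python sketch of the micro-scribble selection rule; the confidence and chroma arrays are random stand-ins so the snippet runs, whereas in the real pipeline they come from the classification and color-transfer stages.

```python
import numpy as np

# Stand-ins for the outputs of the earlier stages (random, for illustration only):
rng = np.random.default_rng(0)
confidence = rng.random((64, 64))                 # conf(p, l) per pixel
transferred_U = rng.uniform(-0.3, 0.3, (64, 64))  # chroma predicted by color transfer

# Keep the transferred color only where the label confidence is large enough;
# those pixels become the "micro-scribbles" handed to the optimization stage.
scribble_mask = confidence > 0.5
scribble_U = np.where(scribble_mask, transferred_U, 0.0)
```

These arrays can then be passed, one chroma channel at a time, to an optimization step such as the propagation sketch shown earlier.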

Colorization by Example IV. Optimization Pixels with a high level of confidence are given as “micro-scribbles” to the optimization-based colorization algorithm of Levin et al. 74

Results 75

Let’s review Grayscale coloring background Luminance / luminance channel Segmentation Discrete Cosine Transform K-nearest-neighbor (Knn) Linear Discriminant Analysis Levin et al. Colorization by Example (i) training (ii) classification (iii) color transfer (iv) optimization 76

Reference R. Irony, D. Cohen-Or, D. Lischinski, “Colorization by Example”, Eurographics Symposium on Rendering, 2005.