Problem definition
Given a data set X = {x1, x2, …, xn} ⊂ R^m and a label set L = {1, 2, …, c}, some samples xi are labeled, yi ∈ L, while the others are unlabeled, yi = ∅.
The goal is to assign a label to these unlabeled samples.
Define a graph G = (V, E) with V = {v1, v2, …, vn}, where each node vi corresponds to a sample xi.
An affinity matrix W defines the weight of each edge in E.
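The weighting formula itself appears only in the slide image, so it is not reproduced here. As a minimal sketch, assuming the common choice of a Gaussian (RBF) kernel over pairwise distances with optional k-nearest-neighbor sparsification, W could be built as below; the function name, sigma, and k are illustrative, not taken from the slides.

```python
import numpy as np

def affinity_matrix(X, sigma=1.0, k=None):
    """Sketch of a Gaussian-kernel affinity matrix W for the graph G = (V, E).

    Assumptions (not from the slides): W_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    for i != j, W_ii = 0, optionally keeping only each node's k strongest neighbors.
    """
    # Pairwise squared Euclidean distances between all samples.
    sq_norms = np.sum(X ** 2, axis=1)
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    np.maximum(d2, 0.0, out=d2)  # guard against tiny negative values

    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)  # no self-loops

    if k is not None:
        # Keep only the k strongest connections per node, then symmetrize.
        keep = np.argsort(-W, axis=1)[:, :k]
        mask = np.zeros_like(W, dtype=bool)
        rows = np.arange(W.shape[0])[:, None]
        mask[rows, keep] = True
        W = np.where(mask | mask.T, W, 0.0)
    return W
```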
Model definition
A set of particles is created; each particle corresponds to a label in L.
Particle variables:
Node being visited by the particle
Particle potential
Target node of the particle
Node variables:
Owner particle
Level of ownership
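As a sketch of how the particle and node variables above might be represented in code; the class and field names are illustrative assumptions, not taken from the slides.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class Particle:
    label: int          # the class in L this particle represents
    current_node: int   # node currently being visited by the particle
    target_node: int    # node chosen by the particle as its target
    potential: float    # particle potential

@dataclass
class Node:
    sample_index: int               # index of the sample x_i this node represents
    owner: int | None = None        # particle that currently owns the node, if any
    ownership: np.ndarray = field(  # level of ownership per label in L
        default_factory=lambda: np.zeros(0))
```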
Variables Initialization
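The initialization formulas appear only as images on the slide. A minimal sketch, assuming a scheme common to particle-competition methods (not necessarily the slides' exact one): each particle starts at a pre-labeled node with maximum potential, labeled nodes begin fully owned by their own class, and unlabeled nodes begin with their ownership levels shared equally among the c classes. The names and the omega_max parameter below are assumptions.

```python
import numpy as np

def initialize(labels, c, omega_max=1.0):
    """Hypothetical initialization sketch (not the slides' exact formulas).

    labels: length-n array with the known class (0..c-1) for pre-labeled
            samples and -1 for unlabeled samples.
    Returns the initial ownership-level matrix and a list of particles,
    each stored as a dict of the variables listed under Model definition.
    """
    n = len(labels)
    ownership = np.full((n, c), 1.0 / c)      # unlabeled nodes: equal ownership levels
    particles = []
    for i, y in enumerate(labels):
        if y >= 0:
            ownership[i] = 0.0
            ownership[i, y] = 1.0             # labeled node fully owned by its class
            particles.append({"label": y, "node": i,
                              "target": i, "potential": omega_max})
    return ownership, particles
```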
Deterministic and Random Moving Probabilities
[Figure: example graph with nodes v1, v2, v3, v4 and edge weights such as 0.8 and 0.3, illustrating the deterministic and random moving probabilities]
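The probability formulas are shown only as images. One common formulation in particle-competition methods, given here as an assumption rather than the slides' exact rule: under the random rule the particle moves to a neighbor with probability proportional to the edge weight, while under the deterministic (greedy) rule that probability is additionally weighted by the target node's ownership level for the particle's own label.

```python
import numpy as np

def random_move_probs(W, q):
    """Assumed random rule: move from node q to a neighbor with probability
    proportional to the edge weight W[q, i]."""
    w = W[q]
    return w / w.sum()

def deterministic_move_probs(W, ownership, q, label):
    """Assumed deterministic (greedy) rule: edge weight additionally weighted
    by the target node's ownership level for the particle's own label."""
    w = W[q] * ownership[:, label]
    s = w.sum()
    return w / s if s > 0 else random_move_probs(W, q)  # fall back if all zero

def choose_target(W, ownership, q, label, p_det=0.5, rng=None):
    """Pick the particle's target node, mixing the two rules; p_det is an
    illustrative mixing parameter, not a value from the slides."""
    rng = rng or np.random.default_rng()
    if rng.random() < p_det:
        probs = deterministic_move_probs(W, ownership, q, label)
    else:
        probs = random_move_probs(W, q)
    return int(rng.choice(len(probs), p=probs))
```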
Particle and Node Dynamics
Node dynamics: for each node vi selected by a particle ρj as its target, the node's variables (owner particle and level of ownership) are updated.
Particle dynamics: the potential of the visiting particle is updated.
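The update equations themselves are shown only as images. A minimal sketch of a typical particle-competition update, given as an assumption rather than the slides' exact rule: when a particle visits its target node, the node's ownership level for the particle's label grows at the expense of the other labels, in proportion to the particle's potential, and the particle's potential is then set from the node's ownership level for its label, so particles strengthen in their own territory and weaken in rival territory. The delta parameter is illustrative.

```python
import numpy as np

def update_on_visit(ownership, potential, node, label, delta=0.1):
    """Hypothetical node/particle update when a particle of class `label`
    with the given potential visits `node` (not the slides' exact equations).

    ownership: (n, c) matrix of per-label ownership levels, rows summing to 1.
    Returns the particle's new potential.
    """
    levels = ownership[node]

    # Node dynamics: the other labels lose a fraction of their level,
    # proportional to the visiting particle's potential ...
    decrease = delta * potential * levels
    decrease[label] = 0.0
    levels -= decrease
    # ... and the particle's own label gains what the others lost.
    levels[label] += decrease.sum()

    # Particle dynamics: the potential follows the node's ownership
    # level for the particle's own label.
    return levels[label]
```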
Simulation Results: Banana-Shaped data set
[Figure: (a) Toy data (banana-shaped); (b) Classification results]
Classification of the banana-shaped patterns. (a) Toy data set with 2000 samples divided into two classes; 20 samples are pre-labeled (red circles and blue squares). (b) Classification achieved by the proposed method.
Simulation Results: Iris data set (UCI)
Classification accuracy in the Iris data set with different numbers of pre-labeled samples
Simulation Results: Wine data set (UCI)
Classification accuracy in the Wine data set with different numbers of pre-labeled samples
Simulation Results: execution time
Time elapsed to reach 95% correct classification with data sets of different sizes. * Banana-shaped classes, 10% pre-labeled samples