Christophe Genolini, Bernard Desgraupes, Bruno Falissard

• Parametric algorithms
• Non-parametric algorithms

• Parametric algorithms
  • Example: proc traj (SAS)
  • Based on the likelihood
• Non-parametric algorithms
  • k-means (KmL)
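
To make the non-parametric idea concrete, here is a minimal sketch using plain stats::kmeans rather than the kml package itself: trajectories are stored as rows of a matrix (one column per measurement time) and partitioned without any distributional assumption. All data below are simulated purely for illustration.

set.seed(1)
# 10 "low" and 10 "high" trajectories, 5 time points each
traj <- rbind(matrix(rnorm(50, mean = 0), ncol = 5),
              matrix(rnorm(50, mean = 4), ncol = 5))
fit <- kmeans(traj, centers = 2, nstart = 10)   # plain k-means, Euclidean distance
table(fit$cluster)   # cluster sizes
fit$centers          # mean trajectory of each cluster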

I ♥ Quebec…

Height = 1.84 m: small likelihood under one distribution, large likelihood under another.
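
A quick base-R illustration of the small-versus-large likelihood idea (the two normal distributions below, with their means and standard deviations, are made up for the example):

x <- 1.84                          # observed height, in metres
dnorm(x, mean = 1.60, sd = 0.07)   # small likelihood: 1.84 m is unusual here
dnorm(x, mean = 1.82, sd = 0.07)   # large likelihood: 1.84 m is typical here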

• Number of clusters
• Trajectory shape (linear, polynomial, …)
• Distribution of the variable (Poisson, normal, …)
Maximization of the likelihood
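
As a reminder of what maximizing a likelihood means in the simplest case, here is a hedged sketch in base R estimating the mean and standard deviation of a single normal variable with optim (real parametric trajectory models maximize a much richer mixture likelihood):

set.seed(2)
y <- rnorm(100, mean = 5, sd = 2)   # simulated data
negLogLik <- function(p) {          # p = (mean, log of sd)
  -sum(dnorm(y, mean = p[1], sd = exp(p[2]), log = TRUE))
}
fit <- optim(c(0, 0), negLogLik)    # minimize the negative log-likelihood
c(mean = fit$par[1], sd = exp(fit$par[2]))   # close to the usual estimates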

• Number of clusters
Maximization of some criterion
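
A toy sketch of "maximization of some criterion": fit plain k-means for several numbers of clusters and compare a simple quality measure (here the total within-cluster sum of squares, to be minimized); kml itself relies on quality criteria such as Calinski-Harabasz for this choice. Data are simulated with three true clusters.

set.seed(3)
traj <- rbind(matrix(rnorm(60, mean = 0), ncol = 6),
              matrix(rnorm(60, mean = 3), ncol = 6),
              matrix(rnorm(60, mean = 6), ncol = 6))
withinss <- sapply(2:6, function(k) kmeans(traj, centers = k, nstart = 20)$tot.withinss)
names(withinss) <- 2:6
withinss   # look for an "elbow" around the true number of clusters (3)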

# Run KmL on the cld3 object: 4 clusters, 1 redrawing, plotting trajectories as it runs
kml(cld3, 4, 1, print.traj = TRUE)

longData <- as.cld(gald())                 # gald() generates an artificial longitudinal dataset
kml(longData, 2:5, 10, print.traj = TRUE)  # try 2 to 5 clusters, 10 redrawings each
choice(longData)                           # interactively choose the best partition

• C1: partition for V1
• C2: partition for V2
• C1xC2: partition for the joint trajectories?
• C1 = {small, medium, big}
• C2 = {blue, red}
• C1xC2 = {small blue, small red, medium blue, medium red, big blue, big red}
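
A quick base-R illustration of the cross-partition idea, using the size/colour example above (the labels are made up to match the slide):

C1 <- factor(c("small", "medium", "big", "small", "big"))   # partition for V1
C2 <- factor(c("blue",  "red",    "blue", "red",  "red"))   # partition for V2
C1xC2 <- interaction(C1, C2, sep = " ")                     # joint partition
levels(C1xC2)   # the six joint clusters
table(C1xC2)    # how many individuals fall in each joint cluster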

# Left/right 2D views of the two variables of one joint trajectory
par(mfrow = c(1, 2))
a <- c(1, 2, 1, 3, 2, 3, 3, 4, 5, 3, 5)
b <- c(6, 6, 6, 5, 6, 6, 5, 5, 4, 3, 3)
plot(a, type = "l", ylim = c(0, 10), xlab = "First variable", ylab = "")
plot(b, type = "l", ylim = c(0, 10), xlab = "Second variable", ylab = "")

# The same trajectory as a single curve in 3D: time x first variable x second variable
library(rgl)   # needed for the 3D functions below
points3d(1:11, a, b)
axes3d(c("x", "y", "z"))
title3d(, , "Time", "First variable", "Second variable")
box3d()
aspect3d(c(2, 1, 1))
rgl.viewpoint(0, -90, zoom = 1.2)

# Simulate joint (bivariate) trajectories: three cluster-centre functions,
# each returning one value per variable, plus normal noise on both variables
cl <- gald(functionClusters = list(function(t){c(-4, -4)},
                                   function(t){c(5, 0)},
                                   function(t){c(0, 5)}),
           functionNoise = function(t){c(rnorm(1, 0, 2), rnorm(1, 0, 2))})
plot3d(cl)                                                    # joint trajectories in 3D
kml(cl, 3, 1, paramKml = parKml(startingCond = "randomAll"))  # 3 clusters, 1 redrawing, random start
plot3d(cl, paramTraj = parTraj(col = "clusters"))             # colour trajectories by cluster

• The nominees are:
  • Calinski & Harabasz
  • Ray & Turi
  • Davies & Bouldin
  • ...
• The winner is…

• The nominees are:
  • Calinski & Harabasz
  • Ray & Turi
  • Davies & Bouldin
  • ...
• The winner is…
  • Falissard & Genolini
  • (or G & F?)
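
For reference, the Calinski-Harabasz index can be computed by hand from any partition; here is a minimal sketch on a kmeans fit of simulated trajectories (larger values indicate a better partition):

set.seed(4)
traj <- rbind(matrix(rnorm(60, mean = 0), ncol = 6),
              matrix(rnorm(60, mean = 4), ncol = 6))
fit <- kmeans(traj, centers = 2, nstart = 20)
n <- nrow(traj)
k <- length(fit$size)
CH <- (fit$betweenss / (k - 1)) / (fit$tot.withinss / (n - k))
CH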

"Classic" distance vs. "shape" distance
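
One way to make the distinction concrete (a sketch, not necessarily the exact distances implemented in the package): a Euclidean distance compares trajectories point by point, so it is driven by their levels, while a correlation-based distance compares their shapes.

traj1 <- c(1, 2, 3, 4, 5)      # increasing trajectory, low level
traj2 <- traj1 + 10            # same shape, much higher level
traj3 <- rev(traj1)            # similar level to traj1, opposite shape

sqrt(sum((traj1 - traj2)^2))   # "classic" Euclidean distance: large (about 22.4)
sqrt(sum((traj1 - traj3)^2))   # smaller (about 6.3), although the shapes are opposite

1 - cor(traj1, traj2)          # "shape" (correlation-based) distance: 0, identical shapes
1 - cor(traj1, traj3)          # 2, completely opposite shapes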