Fall Risk Assessment

Introduction
In 2010 there were 21,700 fatal falls and 2.3 million nonfatal fall injuries, at a cost of 0.2 billion dollars for fatal and $19 billion for nonfatal falls. Several factors are assumed to contribute to fall risk:
- gait parameters
- fear of falling
- age
- diabetes
- visual measures
- location of the center of mass (COM)
- location of the center of pressure (COP)

COP factor
The location of the COP fluctuates even in healthy young people, so extracting useful information from it is difficult. The information is therefore expressed in terms of unpredictability. Sample entropy (SaEn) is such a measure for the COP-location time series: it quantifies how similar two vectors of m sequential data points are to each other. The similarity is defined by a tolerance r, and the sequence length m is called the embedding dimension.
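A minimal sketch of how sample entropy can be computed for a COP trace (not the authors' code; it assumes the usual conventions that r is a fraction of the signal's standard deviation and that the Chebyshev distance defines similarity):

```python
import numpy as np

def sample_entropy(x, m=2, r=0.25):
    """Sample entropy SaEn(m, r) of a 1-D time series x.

    r is interpreted as a fraction of the standard deviation of x,
    and m is the embedding dimension, as in the slides.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    tol = r * np.std(x)

    def count_matches(dim):
        # All overlapping templates of length `dim`.
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to every later template.
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= tol)
        return count

    b = count_matches(m)       # similar template pairs of length m
    a = count_matches(m + 1)   # similar template pairs of length m + 1
    return -np.log(a / b)      # SampEn = -ln(A / B)

# Example on a stand-in COP x-coordinate trace.
cop_x = np.cumsum(np.random.randn(1000))  # random-walk stand-in for a COP signal
print(sample_entropy(cop_x, m=2, r=0.25))
```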

Main goal
Eight parameters are used in this study:
- SaEn of the x-coordinate, eyes closed, m = 2, r = 0.25
- SaEn of the y-coordinate, eyes closed, m = 2, r = 0.25
- SaEn of the x-coordinate, eyes closed, m = 3, r = 0.25
- SaEn of the y-coordinate, eyes closed, m = 3, r = 0.25
- SaEn of the x-coordinate, eyes open, m = 2, r = 0.25
- SaEn of the y-coordinate, eyes open, m = 2, r = 0.25
- SaEn of the x-coordinate, eyes open, m = 3, r = 0.25
- SaEn of the y-coordinate, eyes open, m = 3, r = 0.25

Question: if these parameters contribute to fall risk assessment, how can we use them to predict the probability of a fall?
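As an illustration, the eight parameters for one subject could be assembled as below, reusing the sample_entropy sketch above; the COP traces here are random stand-ins for real force-plate recordings:

```python
import numpy as np

# Stand-in COP traces; in practice these come from force-plate recordings.
traces = {
    "eyes_closed": (np.cumsum(np.random.randn(2000)), np.cumsum(np.random.randn(2000))),  # (x, y)
    "eyes_open":   (np.cumsum(np.random.randn(2000)), np.cumsum(np.random.randn(2000))),
}

params = {}
for condition, (x, y) in traces.items():
    for m in (2, 3):
        params[f"SaEn_x_{condition}_m{m}"] = sample_entropy(x, m=m, r=0.25)
        params[f"SaEn_y_{condition}_m{m}"] = sample_entropy(y, m=m, r=0.25)

print(len(params))  # 8 parameters per subject
```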

Fuzzy clustering

C-means method
The fuzzy c-means method allows one piece of data to belong to two or more clusters. It is based on minimizing the following objective function. Using an iterative optimization algorithm, fuzzy partitioning is carried out by alternately updating the memberships and the cluster centers, and the iterations continue until the change in the membership matrix falls below a threshold, where k is the iteration step and ε is a termination criterion between 0 and 1 (see the equations and sketch below).
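The equations on this slide were not captured in the transcript; the standard fuzzy c-means formulation, which matches the description above, is (assuming Euclidean distance and a fuzzifier m > 1, not to be confused with the embedding dimension m used for SaEn):

```latex
% Standard fuzzy c-means (assumed; the slide's own equations were images)
J_m = \sum_{i=1}^{N} \sum_{j=1}^{C} u_{ij}^{\,m} \,\lVert x_i - c_j \rVert^2,
\qquad
u_{ij} = \frac{1}{\sum_{k=1}^{C} \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{2/(m-1)}},
\qquad
c_j = \frac{\sum_{i=1}^{N} u_{ij}^{\,m}\, x_i}{\sum_{i=1}^{N} u_{ij}^{\,m}},
\qquad
\text{stop when } \max_{ij} \bigl| u_{ij}^{(k+1)} - u_{ij}^{(k)} \bigr| < \varepsilon .
```

A minimal numpy sketch of this iteration (not the authors' code; function and variable names are illustrative):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, eps=1e-5, max_iter=300, seed=0):
    """Fuzzy c-means on data X of shape (n_samples, n_features).

    c: number of clusters, m: fuzzifier (> 1),
    eps: termination criterion on the change in the membership matrix.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix with rows summing to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)

    for _ in range(max_iter):
        Um = U ** m
        # Cluster centers as membership-weighted means.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Euclidean distance from every point to every center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # guard against division by zero
        # Membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)).
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.max(np.abs(U_new - U)) < eps:  # termination criterion epsilon
            U = U_new
            break
        U = U_new
    return centers, U

# Example: cluster 100 random 8-dimensional parameter vectors into 5 clusters.
X = np.random.rand(100, 8)
centers, U = fuzzy_cmeans(X, c=5)
print(U.sum(axis=1)[:3])  # memberships of each point sum to 1
```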

C-means method
Since the maximum number of falls for this group in the past 12 months was 5, the following equation was used to determine the membership value of each person. First, each of the effective parameters was considered independently, and then all possible combinations of these parameters (up to all 8) were examined; the total number of different parameter combinations is 255. 85% of the case-study data was used to train the fuzzy inference system and the remaining 15% was used as test data. To evaluate the effectiveness of each parameter combination, the root mean square error (RMSE) was used, as follows:
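The membership and RMSE equations on this slide were not captured; below is a minimal sketch of the evaluation setup, assuming the membership target is the number of falls divided by the group maximum of 5 (as the wording above suggests) and the usual RMSE definition:

```python
import numpy as np
from itertools import combinations

def rmse(predicted, observed):
    """Root mean square error between predicted and observed fall-risk values."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# Assumed membership target: falls in the past 12 months divided by the
# group maximum of 5, giving each subject a fall-risk grade in [0, 1].
falls = np.array([0, 1, 5, 2, 0, 3])        # stand-in data
fall_membership = falls / 5.0

# Every non-empty subset of the 8 SaEn parameters: 2**8 - 1 = 255 combinations.
n_params = 8
all_combos = [c for r in range(1, n_params + 1)
              for c in combinations(range(n_params), r)]
print(len(all_combos))  # 255

# For each combination, an 85%/15% train/test split would be used to fit the
# fuzzy inference system and compute the test RMSE (fitting code omitted).
```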

Results
To simplify the problem, a new notation is introduced:
- a number is assigned to each factor;
- each combination of factors is represented by a binary row matrix (e.g., [1 0 0 1]).

The eyes-open factors and their assigned numbers are listed below; the RMSE is then reported for each combination of effective factors.

Factor Name                                                    Assigned Number
sample entropy_force plate_center of pressure_eye open_X1      1
sample entropy_force plate_center of pressure_eye open_Y1      2
sample entropy_force plate_center of pressure_eye open_X2      3
sample entropy_force plate_center of pressure_eye open_Y2      4
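A small illustration of this notation, assuming a 1 in position i means factor number i is included in the combination:

```python
# Row-matrix notation for a combination of factors (hypothetical example).
combo = [1, 0, 0, 1]   # factors 1 (eyes open, X1) and 4 (eyes open, Y2)
included_factors = [i + 1 for i, flag in enumerate(combo) if flag]
print(included_factors)  # [1, 4]
```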

Results
The algorithm was then repeated with the number of clusters equal to 4, 6, 7, and 8.

Results
The closed-eyes factors and their assigned numbers are listed below; the RMSE is then reported for each combination of effective factors.

Factor Name                                                     Assigned Number
sample entropy_force plate_center of pressure_eye close_X1      1
sample entropy_force plate_center of pressure_eye close_Y1      2
sample entropy_force plate_center of pressure_eye close_X2      3
sample entropy_force plate_center of pressure_eye close_Y2      4

Results
The algorithm was again repeated with the number of clusters equal to 4, 6, 7, and 8.

Results
All eight factors and their assigned numbers are listed below; the RMSE is then reported for each combination of effective factors.

Factor Name                                                     Assigned Number
sample entropy_force plate_center of pressure_eye open_X1       1
sample entropy_force plate_center of pressure_eye open_Y1       2
sample entropy_force plate_center of pressure_eye open_X2       3
sample entropy_force plate_center of pressure_eye open_Y2       4
sample entropy_force plate_center of pressure_eye close_X1      5
sample entropy_force plate_center of pressure_eye close_Y1      6
sample entropy_force plate_center of pressure_eye close_X2      7
sample entropy_force plate_center of pressure_eye close_Y2      8

Results
For different numbers of clusters we obtain the following.

Results
Results for different numbers of clusters:

Conclusion
- As the number of effective parameters increases, the RMSE decreases.
- The maximum RMSE was observed when only one effective factor was considered.
- The number of clusters plays an important role in clustering problems; the optimum number of clusters was found to be 5 for 4 effective factors and 8 for 8 effective factors.
- The minimum RMSE is 20%, obtained when 7 of the 8 effective parameters are included in the combination.