BME452 Biomedical Signal Processing, Lecture 3: Signal Conditioning (2013, copyright Ali Işın)


Slide 2. Lecture 3 Outline
In this lecture we study the following signal conditioning methods (specifically for noise reduction):
- Ensemble averaging
- Median filtering
- Moving average filtering
- Principal component analysis
- Independent component analysis (in brief)
Before we study these, an introduction to some of the necessary mathematics is given.

Slide 3. Mean
The arithmetic mean is the "standard" average, often simply called the "mean":
mean = (1/N) * (x[1] + x[2] + ... + x[N])
where N denotes the data size (length). In MATLAB, n = 1, ..., N, but sometimes we use n = 0, 1, ..., N-1.
Example: an experiment yields the data 34, 27, 45, 55, 22, 34. To get the arithmetic mean:
- How many items? There are 6, so N = 6.
- What is the sum of all items? 34 + 27 + 45 + 55 + 22 + 34 = 217.
- Divide the sum by N: 217/6 = 36.17.
Expectation: what is the expected value of X, E[X]? Simply put, it refers to the sum divided by the count, i.e. the mean of the value in the square brackets. E.g. E[x^2] is the mean of the squared values.
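A minimal MATLAB check of the slide's numbers (the variable names are illustrative):
x = [34 27 45 55 22 34]; % the example data
N = length(x);           % N = 6
m = sum(x)/N             % arithmetic mean, 217/6 = 36.17
m2 = mean(x.^2)          % E[x^2]: the mean of the squared values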

Slide 4. Mean removal for signals
Very often we set the mean to zero before performing any signal analysis. This is to remove the dc (0 Hz) noise:
xm = x - mean(x)

Slide 5. Mean removal across channels/recordings
Sometimes a noise source corrupts all the signals in a multi-channel recording, or all the recordings of a single-channel signal.
Since the noise is common to all the channels/recordings, the simplest way of removing it is to remove the mean across channels/recordings, as sketched below.
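A minimal sketch of common-mode removal, assuming X holds one channel per row (this layout is an assumption, not from the slide):
% X is channels x samples; the common noise is estimated as the
% mean across channels at each sample point and then subtracted
common = mean(X, 1);                          % 1 x samples, average over channels
Xclean = X - repmat(common, size(X,1), 1);    % remove the common component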

Slide 6. Standard deviation (σ)
The standard deviation measures how spread out the values in a data set are. Suppose we are given a signal x1, ..., xN of real-valued numbers (all recorded signals are real-valued).
The arithmetic mean of this population is defined as
μ = (1/N) * Σ xi
The standard deviation of this population is defined as
σ = sqrt( (1/N) * Σ (xi - μ)^2 )
Given only a sample of values x1, ..., xN from some larger population, many authors define the sample (or estimated) standard deviation by
s = sqrt( (1/(N-1)) * Σ (xi - x̄)^2 )
This is known as an unbiased estimator for the actual standard deviation.

Slide 7. Standard deviation example
(The slide's numeric example is shown as a figure.)
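As a stand-in for the slide's figure, a small MATLAB example reusing one of the data sets from the next slide:
x = [0 6 8 14];                                % data with mean 7
mu = mean(x);
sigma_pop = sqrt(sum((x - mu).^2)/length(x))   % population std, = 5 here
s = std(x)                                     % MATLAB's std uses the N-1 (sample) definition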

Slide 8. Interpreting standard deviation
A large standard deviation indicates that the data points are far from the mean; a small standard deviation indicates that they are clustered closely around the mean.
For example, each of the three samples (0, 0, 14, 14), (0, 6, 8, 14) and (6, 6, 8, 8) has an average of 7. Their (population) standard deviations are 7, 5 and 1, respectively. The third set has a much smaller standard deviation than the other two because its values are all close to 7.

Slide 9. Normalisation
Sometimes we may wish to normalise a signal to mean 0 and standard deviation 1.
For example, if we record the same signal using different instruments with different amplification factors, it will be difficult to analyse the signals together.
In this case we normalise each signal using
xn = (x - μ) / σ
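In MATLAB, a one-line sketch consistent with the slide's definition:
xn = (x - mean(x)) / std(x);   % zero mean, unit standard deviation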

Slide 10. Variance
Variance is simply the square of the standard deviation.
Uncertainty measure:
- Variance may be thought of as a measure of uncertainty.
- When deciding whether measurements agree with a theoretical prediction, variance can be used.
- If the variance (computed using the predicted value as the mean) is high, then the measurements contradict the prediction.
- Example: say we have predicted that x[1] = 7, x[2] = 6, x[3] = 5, and x is measured 3 times.
- Compute the variance using the predicted value as the mean: var[1] = 12.76, var[2] = 0.610, var[3] = 0.605.
- So the x[1] measurements contradict the prediction, while the x[2] and x[3] measurements probably do not.
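A minimal sketch of this check; the measurement values below are made up for illustration (the slide's own readings are not reproduced in the transcript):
pred = [7 6 5];                                   % predicted values for x[1], x[2], x[3]
meas = [1 6.2 5.1; 12 5.9 4.8; 8 5.8 5.2];        % 3 measurements (rows) of each x (columns)
varPred = mean((meas - repmat(pred,3,1)).^2, 1)   % variance about the predicted mean
% a large entry (here varPred(1)) flags disagreement with the prediction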

Slide 11. Covariance
If we have multi-channel/multi-trial recorded signals, we can compute the cross variance, or simply covariance.
Covariance measures how two signals (from different channels/recordings) vary together.
The covariance between two signals X and Y, with respective means μ and ν, is
cov(X, Y) = E[(X - μ)(Y - ν)]
Covariance is sometimes used as a measure of "linear dependence" between two signals, but correlation is a better measure.

Slide 12. Correlation
The correlation between two signals X and Y is
ρ(X, Y) = cov(X, Y) / (σX * σY)
It is simply normalised covariance, and it measures the linear dependence between X and Y.
The correlation is 1 in the case of an increasing linear relationship, -1 in the case of a decreasing linear relationship, and some value in between in all other cases, indicating the degree of linear dependence between the variables.
The closer the coefficient is to either -1 or 1, the stronger the correlation between the variables.
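A quick MATLAB illustration of both definitions (the signals are illustrative):
x = randn(1,1000);                              % a random signal
y = 2*x + 0.5*randn(1,1000);                    % linearly related to x, plus noise
c = mean((x - mean(x)).*(y - mean(y)))          % covariance, E[(X - mu)(Y - nu)]
rho = c/(std(x,1)*std(y,1))                     % normalised covariance, close to +1 here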

Slide 13. Application of correlation (example)
The diagram on the slide shows how an unknown signal can be identified:
- A copy of a known reference signal is correlated with the unknown signal.
- The correlation will be high if the reference is similar to the unknown signal.
- The unknown signal is correlated with a number of known reference functions.
- A large correlation value shows the degree of similarity to the reference.
- The largest correlation value is the most likely match.

Slide 14. Application of correlation (another example)
Application to heart disease detection using ECG signals:
- Cross-correlation is one way in which different types of heart disease can be identified from ECG signals.
- Each heart disease has a characteristic ECG signal; some examples for different diseases are shown on the slide.
- The system has a library of pre-recorded ECG signals (known as templates).
- An unknown ECG signal is correlated with all the ECG templates in this library.
- The largest correlation gives the most likely heart disease match.
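A sketch of this template-matching idea, assuming templates is a cell array of pre-recorded ECG segments of the same length as the unknown recording (the names and data layout are assumptions):
scores = zeros(1, numel(templates));
for k = 1:numel(templates)
  r = corrcoef(unknownECG, templates{k});   % 2x2 correlation matrix
  scores(k) = r(1,2);                       % correlation between the two signals
end
[bestScore, bestMatch] = max(scores)        % index of the most likely disease template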

Slide 15. Signal-to-noise ratio (SNR)
Before we move on to the noise reduction methods, we need a measure of the noise in the signals. This is important for gauging the performance of the noise reduction techniques.
For this purpose we use SNR:
SNR = 10*log10[(signal energy)/(noise energy)]
The original noise is
x(noise) = x(original signal) - x(noisy signal)
After applying some noise reduction method,
x(noise) = x(original signal) - x(noise reduced signal)
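A small MATLAB helper consistent with the slide's definition (the function name is made up):
function snr_db = computeSNR(original, processed)
% energy-based SNR in dB; 'processed' is either the noisy
% or the noise-reduced signal
noise = original - processed;
snr_db = 10*log10(sum(original.^2)/sum(noise.^2));
end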

Slide 16. Ensemble averaging
If we have many recordings, we can use ensemble averaging to reduce noise that is not correlated between the recordings.
Example: ensemble averaging to reduce noise in Evoked Potential (EP) EEG.
- Repeated recordings are known as trials.
- The EP EEG signal is about the same from one trial to another (high correlation).
- But the noise differs from one trial to another (low correlation).
- Hence it is possible to use ensemble averaging to reduce the noise.
(Figure: the EP EEG, the noisy EP EEG for trials 1 to 20, and the EP EEG after ensemble averaging.)
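A minimal sketch, assuming trials is a trials x samples matrix of mean-removed recordings (this layout is an assumption):
% each row is one noisy trial of the same underlying EP EEG
ensembleAvg = mean(trials, 1);   % average across trials at each sample point
% uncorrelated noise shrinks roughly as 1/sqrt(number of trials)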

Slide 17. Worked example 1: ensemble averaging
Assume we have 3 signals corrupted with noise, and assume we also have the original (for SNR computation). The original signal is 3 5 5 7 at n = 0, 1, 2, 3.
1. Set the mean of each signal to zero first.
2. Compute the ensemble average (the average is done for each sample point n).
3. The noise in each signal is (original signal - noise corrupted signal), and likewise for the ensemble average.
4. SNR = 10*log10(e(signal)/e(noise)); the original (mean removed) signal energy is 8.
(The slide tabulates the three noisy signals, the ensemble average, the per-signal noise, the noise energies and the resulting SNRs.)

Slide 18. Median filtering
Similar to ensemble averaging: if we have many recordings, we can use median filtering to reduce noise that is not correlated between the recordings.
What is median filtering? If we have x[1] from 9 trials, we sort the 9 values from small to big and take the centre value (i.e. the 5th) as the median. In the slide's example, the sorted values give median x[1] = 3.
Median filtering is advantageous compared to ensemble averaging when one trial contains a lot of noise AND the number of trials/recordings is small.
This is because one heavily noise-corrupted signal will distort the ensemble average values, but is much less likely to affect the median values.
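A sketch across trials, assuming trials is a trials x samples matrix as before:
% at each sample point, take the median over the trials
medianFiltered = median(trials, 1);
% a single wild trial shifts the mean but usually not the median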

Slide 19. Worked example 2: median filtering
Assume we have 3 signals corrupted with noise, one of them heavily corrupted (assume the mean has already been set to zero).
1. Compute the ensemble average and the median filtered signals.
2. The noise in each signal is (original signal - noise corrupted signal).
3. Compute the noise energies.
4. SNR = 10*log10(e(signal)/e(noise)).
(The slide tabulates the noisy signals, the ensemble average, the median filtered signal, the noise in each, the energies and the SNRs.)
Which technique gave better noise reduction according to the SNR: ensemble averaging or median filtering? Why?

Slide 20. Moving average filtering
How do we reduce noise if we have only one signal from one recording/trial? We can't use ensemble averaging or median filtering.
Normally, in any signal, the few points before and after a certain point n are correlated (i.e. related), but generally the noise is not correlated. So we can use moving average (MA) filtering.
It is defined as
y[n] = (x[n] + x[n+1] + ... + x[n+S-1]) / S
where S is the filter order. For example, for S = 3, y[5] = (x[5] + x[6] + x[7])/3.
For the signals x and y to remain of the same sample length, we have to pad (S-1) zeros at the end of x to get the last (S-1) points of y.

Slide 21. Moving average filtering: zero padding
(Figures: the MA filter output when zero padding is not allowed versus when zero padding is allowed.)
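A loop sketch matching slide 20's zero padding description (the lecture's code on the next slide instead shrinks the divisor at the tail; the names here are illustrative):
S = 3;                         % filter order
xp = [x zeros(1, S-1)];        % pad (S-1) zeros so y has the same length as x
y = zeros(size(x));
for n = 1:length(x)
  y(n) = mean(xp(n:n+S-1));    % average of the current and next S-1 points
end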

Slide 22. Example: moving average filtering
Assume we have an EEG signal corrupted with noise.
- Set the mean to zero.
- Apply the moving average filter to the noisy signal (use filter order 3 and 5).
- A higher filter order removes more noise, but it also distorts the signal more (i.e. removes signal components too).
- So a compromise has to be found for the value of S (normally by trial and error).
load eeg; N=length(eeg); % here N=256
for i=1:N-2, eegMA1(i)=(eeg(i)+eeg(i+1)+eeg(i+2))/3; end % order 3
eegMA1(255)=(eeg(255)+eeg(256))/2; % tail: average the remaining points
eegMA1(256)=eeg(256)/1;
for i=1:N-4, eegMA2(i)=(eeg(i)+eeg(i+1)+eeg(i+2)+eeg(i+3)+eeg(i+4))/5; end % order 5
eegMA2(253)=(eeg(253)+eeg(254)+eeg(255)+eeg(256))/4;
eegMA2(254)=(eeg(254)+eeg(255)+eeg(256))/3;
eegMA2(255)=(eeg(255)+eeg(256))/2;
eegMA2(256)=eeg(256)/1;
subplot(3,1,1), plot(eeg,'g');
subplot(3,1,2), plot(eegMA1,'r');
subplot(3,1,3), plot(eegMA2,'b');

Slide 23. Median filter for noisy images
Consider applying median filtering to noisy images.
In a computer, these grayscale images are stored as 2D arrays x(i,j), where i and j are the pixel coordinates and x is the grayscale value (in general from 0 (black) to 255 (white)).
(Figures: the noisy images before and after applying the median filter.)
A mean (averaging) filter could be applied in a similar manner, though for images the median filter normally gives better results.
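A 2D sketch, assuming the Image Processing Toolbox is available; the file name is made up:
img = imread('noisy.png');          % hypothetical grayscale image file
imgFiltered = medfilt2(img, [3 3]); % replace each pixel by the median of its 3x3 neighbourhood
imshow(imgFiltered);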

Slide 24. Principal component analysis
PCA can be used to reduce noise in signals, provided we have repeated recordings: signals from a number of trials, or multi-channel signals.
PCA produces principal components (PCs), which are orthogonal signals, i.e. signals that are uncorrelated with each other.
Since noise is less correlated between the trials than the signals are, the first few PCs account for the signals while the last few PCs account for the noise.
By discarding the last few PCs before reconstruction, we get the signals without noise (or with less noise).

Slide 25. Principal component analysis: algorithm
PCA algorithm (a MATLAB sketch follows this list):
- Organise the data X in an M x N matrix.
- Set the mean to zero.
- Compute CX, the covariance matrix of X.
- Compute the eigenvalues and eigenvectors of CX.
- Sort the eigenvectors (i.e. principal components) in descending order of eigenvalue.
- Compute the Zscores.
- Decide how many PCs to keep using some criterion.
- Reconstruct the noise reduced signals using the first few PCs and their Zscores.
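A minimal end-to-end sketch of these steps, assuming X is an M x N matrix with one recording per row (the layout, the 99% threshold and the variable names are assumptions):
Xm = X - repmat(mean(X,2), 1, size(X,2)); % set each signal's mean to zero
CX = cov(Xm');                            % covariance between the signals
[V, D] = eig(CX);                         % eigenvectors (columns of V) and eigenvalues
[d, idx] = sort(diag(D), 'descend');      % sort PCs by descending eigenvalue
Vsort = V(:, idx);
Zscores = Vsort' * Xm;                    % project the data onto the PCs
q = find(cumsum(d)/sum(d) >= 0.99, 1);    % keep enough PCs for 99% of the variance
Xnonoise = Vsort(:,1:q) * Zscores(1:q,:); % reconstruct with the first q PCs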

Slide 26. Eigenvector, eigenvalue: a brief review
The steps of setting the mean to zero and computing the covariance were covered earlier, so let us move to the step of computing eigenvectors and eigenvalues.
Assume A = cov(X), where X is the mean-zero data.
In MATLAB, [V,D] = eig(A) produces the eigenvalues (on the diagonal of D) and the eigenvectors (the columns of V) of the matrix A. They satisfy A*V = V*D. Note: A has to be a square matrix.
In the slide's example, a vector v satisfies A*v = 4*v, so v is an eigenvector and 4 is the eigenvalue. The eigenvector can be thought of as a direction, and the eigenvalue 4 as the weight (scaling) of this direction.
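A quick MATLAB check of the relation (the matrix is illustrative):
A = [2 1; 1 2];        % a small symmetric matrix, as covariance matrices are
[V, D] = eig(A);
norm(A*V - V*D)        % essentially zero: each column of V is an eigenvector
A*V(:,2) ./ V(:,2)     % each element equals the eigenvalue D(2,2), here 3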

Slide 27. Eigenvector, eigenvalue (cont.)
Finding the eigenvalues and eigenvectors of matrices bigger than 3 x 3 by hand is extremely tedious, so we will skip the algorithms and just use the MATLAB function eig.
Example: for the square matrix shown on the slide, decide which, if any, of the given vectors are eigenvectors of that matrix, and give the corresponding eigenvalue.
Answer: the eigenvector is the vector that the matrix maps to itself (the multiplication leaves it unchanged), so the eigenvalue is 1.

Slide 28. Sort the eigenvectors
Sort the eigenvectors from big to small using the eigenvalues.
Let's use the example we saw earlier for ensemble averaging and median filtering: X holds the 3 noisy signals, and Xm is X with the mean of each signal removed (the numeric values are shown on the slide).
A = cov(Xm')
The eigenvectors are obtained with [V,D] = eig(A).
In this example the eigenvalues are 8.4578, 0.0271 and 0.0018 (as used on slide 30), so we sort the eigenvectors in descending order of these eigenvalues to obtain Vsort.

Slide 29. Zscores
Zscores = Vsort' * Xm, where Vsort is the matrix of sorted eigenvectors and Xm is the mean-zero data matrix.
In the previous example A is 3 x 3, so we will have 3 Zscores.
Zscores has the same dimensions as Xm.
(Figure: plots of Zscore 1 and Zscore 2.)

Slide 30. How to select the number of PCs to keep
The PCs with higher eigenvalues represent the signals, while the PCs with lower eigenvalues represent the noise. So we keep the first few PCs and discard the rest. But how many PCs do we keep?
Use a certain percentage of variance to retain, normally 95% or 99%. The eigenvalues represent the weights of the PCs, i.e. some sort of variance (power) measure of the PCs.
So we can use sum(D1:Dq)/sum(D1:Dlast) > 0.99, where D1, D2, ..., Dlast are the sorted eigenvalues.
In our example, say we wish to retain 99% variance. The eigenvalues are 8.4578, 0.0271 and 0.0018:
- sum(D1:Dlast) = 8.4867
- sum(D1:D1) = 8.4578; sum(D1:D1)/sum(D1:Dlast) = 0.9966
- sum(D1:D2) = 8.4849; sum(D1:D2)/sum(D1:Dlast) = 0.9998
- sum(D1:Dlast)/sum(D1:Dlast) = 1.0
Since the first eigenvalue accounts for 99.66% of the variance (which is more than 99%), we can discard the second and third PCs.
If we wish to retain 99.97% of the variance, how many PCs do we keep? Answer: 2.

Slide 31. Reconstruct using the selected PCs
To get back the original signals without noise, we reconstruct using the selected PCs:
Xnonoise = Vselected * Zscores_selected
In our example only 1 PC was selected, so the first eigenvector and the first Zscore are used to get back the 3 noise reduced signals:
Xnonoise = Vsort(:,1) * Zscores(1,:)
noise = Xm - Xnonoise
Then compute the noise energy. The original mean-removed signal (from the earlier slide) has energy 8, so SNR = 10*log10(8/energy(noise)).
The SNR using PCA is generally higher than with ensemble averaging or median filtering, and we get 3 signal outputs, unlike the single output from ensemble averaging or median filtering.

Slide 32. Principal component analysis: an example of application
Consider the 3 noise-corrupted EP signals shown on the slide.
- Obtain the principal components (in descending order of eigenvalue magnitude).
- Obtain the Zscores.
- Decide how many PCs to retain; assume we retain only the first PC.
- Reconstruct using only this one PC.
By retaining only the first PC for reconstruction, we obtain 3 noise-reduced EP signals.
(Figures: the EP signal and the noisy EP signal for trials 1, 2 and 3.)

Slide 33. Independent component analysis: a brief study
ICA is a newer method that can be used to separate noise from signal; it is sometimes known as blind source separation. Like PCA, it requires more than one signal recording.
ICA separates the recordings into independent signals (signals and noises; we keep the signals and discard the noises).
Example: assume we have 3 observed (i.e. recorded) signals x1[n], x2[n], x3[n], generated from 3 original source signals s1[n], s2[n], s3[n]:
x1[n] = a11*s1[n] + a12*s2[n] + a13*s3[n]
x2[n] = a21*s1[n] + a22*s2[n] + a23*s3[n]
x3[n] = a31*s1[n] + a32*s2[n] + a33*s3[n]
The matrix A = [a11 a12 a13; a21 a22 a23; a31 a32 a33] is known as the mixing matrix.
ICA can be used to obtain the original signals by estimating the unmixing matrix W = A^-1.
The original signals are then recovered by
s1[n] = w11*x1[n] + w12*x2[n] + w13*x3[n]
s2[n] = w21*x1[n] + w22*x2[n] + w23*x3[n]
s3[n] = w31*x1[n] + w32*x2[n] + w33*x3[n]
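A toy MATLAB demonstration of mixing and unmixing when the mixing matrix is known (in real ICA the unmixing matrix must be estimated blindly; the sources and A below are made up):
s = [sin(0.1*(1:500)); sign(sin(0.3*(1:500))); randn(1,500)]; % 3 source signals
A = [0.6 0.3 0.1; 0.2 0.7 0.1; 0.3 0.3 0.4];                  % assumed mixing matrix
x = A*s;       % the 3 observed (recorded) signals
W = inv(A);    % unmixing matrix
sHat = W*x;    % recovers the sources exactly in this idealised case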

Slide 34. Independent component analysis: a pictorial example
(Figures from Independent Component Analysis, Hyvarinen, Karhunen and Oja.)

Slide 35. Maximising non-gaussianity using kurtosis
How does ICA work? The central limit theorem says that sums of non-gaussian random variables are closer to gaussian than the original ones. Therefore the independent source signals are less gaussian than the combined (mixed) signals.
So, by maximising non-gaussian behaviour, we get closer to the original signals.
Kurtosis can be used to measure gaussian behaviour. But what is gaussian? See the next slide.
(Figure: the source (original) signals are less gaussian; the mixed (combined) signals are more gaussian.)

Slide 36. Gaussian and probability distributions
The gaussian (or normal) probability distribution is
p(x) = (1/(σ*sqrt(2π))) * exp(-(x - μ)^2 / (2σ^2))
But what is a probability distribution? For discrete-time signals it is simply the number of occurrences of each value.
E.g. if x takes values from 1 to 10:
count(1:10)=0;
for i=1:10, y=find(x==i); count(i)=length(y); end
plot(count);
- Gaussian distribution: the familiar bell shape.
- Super-gaussian distribution: the data close to the mean have higher occurrence counts (a sharper peak).
- Sub-gaussian distribution: most of the data have similar occurrence counts (a flatter shape).
(Figures: the example data x and the probability distribution of x.)

Slide 37. Kurtosis
Non-gaussianity can be measured using kurtosis:
- Gaussian signals have kurtosis = 3.
- Sub-gaussian signals have a lower kurtosis value.
- Super-gaussian signals have a higher kurtosis value.
Examples (the first data set is shown on the slide):
y = kurtosis(x,0); % unbiased kurtosis using MATLAB
Gaussian distribution signal:
x = randn(1,100000); % gaussian signal with mean=0, std=1
plot(x);
y = kurtosis(x,0) % unbiased kurtosis using MATLAB
y = 3.00

Slide 38. Example: kurtosis for EP and noise
(Figures. Original signals: EP signal, kurtosis = 3.32; noise, kurtosis = 2.81. Recorded signals: X1 = EP + noise, kurtosis = 2.79; X2 = EP + noise, kurtosis = 2.61.)
Can you see that the kurtosis is lower for the combined signals? The actual independent signals (i.e. the sources) have higher kurtosis.

Slide 39. Simple ICA algorithm: an example using EP and noise
ICA tries to obtain the EP and the noise by estimating the unmixing matrix W; the solution is s = W*x. In the beginning we don't know the unmixing matrix!
A simple ICA method is to randomly generate values in [0,1] for the unmixing matrix. Then
EP[n] = w11*X1[n] + w12*X2[n] and noise[n] = w21*X1[n] + w22*X2[n]
Kurtosis values are computed for the estimated EP and noise. Repeat with other random values for the unmixing matrix (say a thousand times). The unmixing matrix that gave the highest kurtosis values denotes the actual EP and noise (a sketch follows below).
Actual ICA algorithms use complicated neural network learning algorithms, so we will skip them. It suffices to know that by using certain measures like kurtosis (representing non-gaussianity), we can separate the signals into independent components.
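A minimal sketch of this random-search idea, assuming X1 and X2 are the two mean-removed recordings (the loop count and the use of the larger kurtosis as the score are assumptions):
bestKurt = -inf;
for trial = 1:1000
  W = rand(2,2);                              % random candidate unmixing matrix in [0,1]
  e1 = W(1,1)*X1 + W(1,2)*X2;                 % estimated EP
  e2 = W(2,1)*X1 + W(2,2)*X2;                 % estimated noise
  k = max(kurtosis(e1,0), kurtosis(e2,0));    % non-gaussianity score
  if k > bestKurt
    bestKurt = k; bestW = W;                  % keep the best candidate so far
  end
end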

Slide 40. Study guide (Lecture 3)
From this week's lecture, you should know:
- Basic mathematics: mean, standard deviation, variance, covariance, correlation, autocorrelation, SNR, etc.
- The uses of these basic mathematics in signal analysis.
- Noise reduction methods: ensemble averaging, median filtering, moving average filtering, principal component analysis, and the basics of independent component analysis.
End of lecture 3.