1
4 - Data Analysis and Presentation: Statistics
2
CHAPTER 04: Opener
3
What is in this chapter?
1) Uncertainty in measurements: large-N distributions and related parameters and concepts (Gaussian or normal distribution)
2) Approximations for smaller N (Student's t and related concepts)
3) Other methods: G, Q (FYI: F)
4) Excel examples (spreadsheet)
4
What do we want from measurements in chemical analysis? We want enough precision and accuracy to give us certain (not uncertain) answers to the specific questions we formulated at the beginning of each chemical analysis. We want small error and small uncertainty. Here we answer the question of how to measure it! READ: the red blood cell count example again!
5
Distribution of results from:
- Measurements of the same sample from different aliquots
- Measurements of similar samples (expected to be similar or the same because they come from the same process of generation)
- Measurements of samples on different instruments
- Etc.
6
FYI: Formalism and mathematical description of distributions
- Counting black and white objects in a sample (how many times will n black balls show up in a sample): binomial distribution
- For larger numbers of objects with low frequency (probability) in the sample: Poisson distribution
- And if the number of samples goes to infinity: normal or Gaussian distribution
7
Normal or Gaussian distribution:
- Unlimited, infinite number of measurements
- Large number of measurements
- Approximation: small number of measurements
8
CHAPTER 04: Figure 4.1
9
Data from many measurements of the same object, or many measurements of similar objects, show this type of distribution. The figure shows the frequency of light-bulb lifetimes for a particular brand: over four hundred bulbs were tested (sampled), and the mean bulb life is 845.2 hours. This is similar to, but not the same as, measuring one bulb many times under similar conditions! See also Fig. 4.2. 4-1 IMPORTANT: Normal or Gaussian distribution. Find "sigma" (σ) and "mu" (µ) on the Gaussian distribution figure!
10
CHAPTER 04: Equation 4.3
11
IMPORTANT
12
CHAPTER 04: Figure 4.3
13
Here is a normal or Gaussian distribution determined by two parameters, µ (here µ = 0) and σ (here a) σ = 5, b) σ = 10, c) σ = 20). Wide distributions such as (c) are the result of poor or low precision. Distribution (a) has a narrow spread of values, so it is very precise. Q: How do we quantify the width as a measure of precision? A: "sigma" (σ) and "s", the standard deviation.
14
Another example with data
15
Another way to get close to a Gaussian distribution is to measure a lot of data.
16
Properties of the Gaussian or Normal Distribution (Normal Error Curve):
1. Maximum frequency (number of measurements with the same value) at zero error
2. Positive and negative errors occur with the same frequency (the curve is symmetric)
3. Exponential decrease in frequency as the magnitude of the error increases
17
The interpretation of the normal distribution, standard deviation, and probability: the area under the curve is proportional to the probability that you will find that value in your measurement. Clearly, we can see from our examples that the probability of measuring a value x within a certain range is proportional to the area under the Gaussian curve for that range.

Range     Fraction of area (Gaussian distribution)
µ ± 1σ    68.3%
µ ± 2σ    95.5%
µ ± 3σ    99.7%

The more times you measure a quantity, the more confident you can be that the average value of your measurements is close to the true population mean, µ. The standard deviation here is a parameter of the Gaussian curve. The uncertainty of the mean decreases in proportion to 1/√n, where n is the number of measurements.
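As an illustration (my addition, not part of the original slides), a minimal Python sketch that reproduces the 68/95/99.7% figures from the area under the Gaussian curve, using only the standard library:

```python
import math

def gaussian_area_within(k_sigma):
    """Fraction of a normal distribution's area lying within mu +/- k_sigma."""
    # For a normal distribution, P(|x - mu| <= k*sigma) = erf(k / sqrt(2)).
    return math.erf(k_sigma / math.sqrt(2))

for k in (1, 2, 3):
    print(f"mu +/- {k} sigma: {gaussian_area_within(k):.1%}")
# Prints approximately 68.3%, 95.4%, 99.7% (the slide rounds the middle value to 95.5%)
```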
18
We can now say, with a certain confidence, that the value we are measuring will lie inside a certain range with some well-defined probability. This is what can help us in quantitative analysis! BUT, can we afford measurements of a large, almost infinite number of samples? Or repeat the measurement of one sample an almost infinite number of times?
19
As n gets smaller (n ≤ 5), µ is approximated by the mean x̄, and σ by s. This is the world we are in, not an infinite number of measurements! All our chemical analysis calculations start from these "approximations" of the Gaussian or normal distribution: the mean and the standard deviation. We will introduce quantities that can be obtained from a small number of samples, x̄ and s, instead.
20
Mean value and standard deviation. Examples: spreadsheet. Also of interest are the median (the same number of points above and below) and the range (or span, from the maximum to the minimum); see the worked pipette example and the sketch after it.
21
CHAPTER 04: Equation 4.1
22
CHAPTER 04: Equation 4.2
23
Example: For the following data set, calculate the mean and standard deviation. Replicate measurements from the calibration of a 10-mL pipette.

Trial   Volume delivered (mL)
1       9.990
2       9.993
3       9.973
4       9.980
5       9.982
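A minimal Python sketch (my addition, not from the slides) that computes the mean, standard deviation, median, and range for the pipette data above, using the standard-library statistics module:

```python
import statistics

volumes = [9.990, 9.993, 9.973, 9.980, 9.982]  # mL, replicate pipette deliveries

mean = statistics.mean(volumes)           # x-bar
s = statistics.stdev(volumes)             # sample standard deviation (n - 1 in the denominator)
median = statistics.median(volumes)
data_range = max(volumes) - min(volumes)  # span from maximum to minimum

print(f"mean   = {mean:.4f} mL")        # 9.9836 mL
print(f"s      = {s:.4f} mL")           # 0.0080 mL
print(f"median = {median:.4f} mL")      # 9.9820 mL
print(f"range  = {data_range:.4f} mL")  # 0.0200 mL
```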
24
CHAPTER 04: Unnumbered Table 4.1
25
THE TRICK: Student's t (an adjustment for a small number of measurements, obtained by fitting). Shown above are the curves for the t distribution and a normal distribution. Student's t table: degrees of freedom = n - 1.
26
Student's t table (see Table 4-2 in the book and handouts). Values of t at selected confidence levels:

Degrees of freedom   90%     95%
1                    6.314   12.706
2                    2.920   4.303
3                    2.353   3.182
4                    2.132   2.776
5                    2.015   2.571
6                    1.943   2.447
7                    1.895   2.365
8                    1.860   2.306
9                    1.833   2.262
10                   1.812   2.228
15                   1.753   2.131
20                   1.725   2.086
25                   1.708   2.068
30                   1.697   2.042
40                   1.684   2.021
60                   1.671   2.000
120                  1.658   1.980
∞                    1.645   1.960
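If SciPy is available (an assumption on my part; the slides only use the printed table), the same critical values can be generated rather than looked up. A sketch:

```python
from scipy import stats  # assumes SciPy is installed

def t_critical(confidence_level, degrees_of_freedom):
    """Two-sided Student's t critical value, e.g. 95% CL with 3 dof -> 3.182."""
    tail = (1 - confidence_level) / 2
    return stats.t.ppf(1 - tail, degrees_of_freedom)

for dof in (3, 8, 60):
    print(dof, round(t_critical(0.95, dof), 3))  # 3.182, 2.306, 2.000
```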
27
CHAPTER 04: Equation 4.4
28
CHAPTER 04: Figure 4.2
29
The square of the standard deviation is called the variance (s² or σ²). For the distributions shown earlier: σ² = 25, σ² = 100, σ² = 400 (σ = 5, 10, 20). Link: can we also use parameters similar to those of the normal distribution to characterize the certainties and uncertainties of our measurements?
30
The standard deviation, s, measures how closely the data are clustered about the mean. The smaller the standard deviation, the more closely the data are clustered about the mean. The degrees of freedom of the system are given by the quantity n - 1. We typically use a small number of trials, so we never measure µ or σ directly.
31
THE TRICK: Student's t (an adjustment for a small number of measurements, obtained by fitting). The confidence interval is an expression stating that the true mean, µ, is likely to lie within a certain distance of the measured mean, x̄: µ = x̄ ± t·s/√n, where s is the measured standard deviation, n is the number of observations, and t is taken from the Student's t table with degrees of freedom = n - 1. Shown above are the curves for the t distribution and a normal distribution.
32
4.2 Confidence interval. Calculating the CI: a CI for a range of values gives the probability, at a certain level (say 90%), that the true value lies in that range. Note: µ is the true value.
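A minimal sketch (my addition, not from the slides) applying the confidence-interval formula µ = x̄ ± t·s/√n to the pipette data from the earlier example, with the 95% t value for 4 degrees of freedom taken from Table 4-2:

```python
import math
import statistics

volumes = [9.990, 9.993, 9.973, 9.980, 9.982]  # mL

n = len(volumes)
x_bar = statistics.mean(volumes)
s = statistics.stdev(volumes)
t_95 = 2.776  # Student's t, 95% CL, n - 1 = 4 degrees of freedom (Table 4-2)

half_width = t_95 * s / math.sqrt(n)
print(f"95% CI: {x_bar:.4f} +/- {half_width:.4f} mL")
# ~9.9836 +/- 0.0100 mL: we are 95% confident the true mean lies in this range
```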
33
Student's t table (see Table 4-2 in the book and handouts): the same table of t values shown on slide 26, repeated here for reference.
35
Representation and the meaning of the confidence interval: the error bars include the target mean (10,000) more often for the 90% CL than for the 50% CL. Important information for a real process!
36
A control chart is prepared to detect problems when something is out of specification. As can be seen, when a result lies more than 3σ away at the 95% CL, there is a problem and the process should be examined. Representation and the meaning of the confidence interval. Student's t values can aid us in the interpretation of results and help compare different analysis methods.
37
4-3 Comparison of Means (hypothesis testing): Case 1, Case 2, Case 3. The underlying question: are the mean values from two different measurements significantly different?
38
Hypothesis about the TRUE VALUES and/or ESTABLISHED VALUES. We will say that two results differ from each other only if there is a >95% chance that this conclusion is correct. In the language of statistics, this kind of comparison is stated as a "null hypothesis": the null hypothesis assumes that the two values being compared are, in fact, the same. Thus, we can use the t test (for example) to decide whether the null hypothesis holds. There are three specific cases in which we can use the t test to examine the null hypothesis. Student's t values can aid us in the interpretation of results and help compare different analysis methods.
39
Answers to analytical chemistry questions: are the results certain, and do they indicate significant differences that could lead to different answers?
40
How to establish quantitative criteria?
41
Case #1: Comparing a Measured Result to a "Known Value". Example: a new procedure for the rapid analysis of sulfur in kerosene was tested by analysis of a sample known, from its method of preparation, to contain 0.123% S. The results obtained were %S = 0.112, 0.118, 0.115, and 0.119. Is the new method a valid procedure for determining sulfur in kerosene? One way to answer this question is to test the new procedure on the known sulfur sample: if it produces a value that falls within the 95% confidence interval, the method should be acceptable.
x̄ = 0.116, s = 0.0033
95% confidence interval = 0.116 ± (3.182)(0.0033)/√4 = 0.116 ± 0.005, i.e. 0.111 to 0.121, which does not contain the known value of 0.123% S. Because the known value lies outside the 95% confidence interval, there is less than a 5% probability that the discrepancy is due to random error alone, so we conclude that this method is not a valid procedure for determining sulfur in kerosene. Looks good, but...
42
µ = x̄ ± t·s/√n, rearranged: t = |x̄ - µ|·√n / s. The calculated ("found") t value is compared to the tabulated t value. If t(found) > t(table), we conclude there is a difference at that CL (e.g. 50%, 95%, 99.9%). Is the method acceptable at the 95% CL? dof = n - 1 = 3, and at 95% the tabulated t = 3.182 (from the Student's t table). t(found) = |0.116 - 0.123|·√4 / 0.0033 = 4.24. Since t(found) > t(table), 4.24 > 3.18, there is a difference (thus the same conclusion as before)... but this is the correct method to avoid problems. **If you have the true value µ, then use it instead of the mean.
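A minimal Python sketch (my addition; names are illustrative) of the Case 1 calculation, computing t from the raw sulfur data and comparing it with the tabulated value. Because s is computed from the raw data rather than the rounded 0.0033 used on the slide, t comes out slightly larger (≈4.4 instead of 4.24), but the conclusion is the same:

```python
import math
import statistics

results = [0.112, 0.118, 0.115, 0.119]  # %S measured by the new method
known_value = 0.123                      # %S known from the sample preparation

n = len(results)
x_bar = statistics.mean(results)
s = statistics.stdev(results)

t_found = abs(x_bar - known_value) * math.sqrt(n) / s
t_table = 3.182  # 95% CL, n - 1 = 3 degrees of freedom (Table 4-2)

print(f"x_bar = {x_bar:.3f}, s = {s:.4f}, t_found = {t_found:.2f}")
if t_found > t_table:
    print("t_found > t_table: the result differs from the known value at the 95% CL")
else:
    print("No significant difference from the known value at the 95% CL")
```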
44
Case #2: Comparing Replicate Measurements, i.e. comparing two independently obtained sets of data using the t test. The question is: are the means of the two data sets significantly different? This could be used to decide whether two materials are the same, whether two independently performed analyses are essentially the same, or whether the precision of two analysts performing the analytical method is the same. For two sets of data consisting of n1 and n2 measurements with averages x̄1 and x̄2, we calculate t from
t = (|x̄1 - x̄2| / s(pooled)) · √(n1·n2 / (n1 + n2)),
where s(pooled) is the pooled standard deviation, s(pooled) = √[(s1²(n1 - 1) + s2²(n2 - 1)) / (n1 + n2 - 2)].
45
Cont.: the calculated value of t is compared with the value of t in Table 4-2 for (n1 + n2 - 2) degrees of freedom. If the calculated value of t is greater than the t value at the 95% confidence level in Table 4-2, the two results are considered different. THE CRITERION: if t(found) > t(table), there is a difference!
46
The Ti content (wt%) of two different ore samples was measured several times by the same method. Are the mean values significantly different at the 95% confidence level?

Sample      n    x̄        s
Sample 1    5    0.0134   4 × 10⁻⁴
Sample 2    5    0.0140   3.4 × 10⁻⁴
47
t from Table 4-2 at the 95% confidence level and 8 degrees of freedom is 2.306. Since our calculated value (2.564) is larger than the tabulated value (2.306), we can say that the mean values for the two samples are significantly different. If t(found) > t(table), a difference exists.
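A sketch (my addition) of the pooled-t calculation for the Ti example above; it reproduces the slide's value of roughly 2.56:

```python
import math

# Summary statistics for the two ore samples (Ti wt%)
n1, x1, s1 = 5, 0.0134, 4.0e-4
n2, x2, s2 = 5, 0.0140, 3.4e-4

# Pooled standard deviation over n1 + n2 - 2 degrees of freedom
s_pooled = math.sqrt((s1**2 * (n1 - 1) + s2**2 * (n2 - 1)) / (n1 + n2 - 2))

t_found = abs(x1 - x2) / s_pooled * math.sqrt(n1 * n2 / (n1 + n2))
t_table = 2.306  # 95% CL, 8 degrees of freedom (Table 4-2)

print(f"s_pooled = {s_pooled:.2e}, t_found = {t_found:.2f}")  # t_found around 2.56
print("Significantly different" if t_found > t_table else "Not significantly different")
```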
48
Case #3: Comparing Individual Differences. Here we use the t test when two different methods are applied, one measurement each, to several different samples, without duplicate measurements of any sample. For example, a reference method vs. a new method; this monitors the versatility of the method over a range of concentrations. The test statistic is
t = (|d̄| / s(d)) · √n,
where d is the difference between the results of the two methods for each sample, d̄ is the mean difference, and s(d) = √[Σ(d(i) - d̄)² / (n - 1)].
49
Sample   Composition by method 1 (old)   Composition by method 2 (new)   Difference d
A        0.0134                          0.0135                          +0.0001
B        0.0144                          0.0156                          +0.0012
C        0.0126                          0.0137                          +0.0011
D        0.0125                          0.0137                          +0.0012
E        0.0137                          0.0136                          -0.0001
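A sketch (my addition) of the paired-difference t test for the table above, taking d = method 2 - method 1 for each sample (note that the sign of sample E's difference is inferred from the two composition columns):

```python
import math
import statistics

method1 = [0.0134, 0.0144, 0.0126, 0.0125, 0.0137]  # old method
method2 = [0.0135, 0.0156, 0.0137, 0.0137, 0.0136]  # new method

d = [b - a for a, b in zip(method1, method2)]  # per-sample differences
n = len(d)

d_bar = statistics.mean(d)
s_d = statistics.stdev(d)

t_found = abs(d_bar) / s_d * math.sqrt(n)
t_table = 2.776  # 95% CL, n - 1 = 4 degrees of freedom (Table 4-2)

print(f"d_bar = {d_bar:.4f}, s_d = {s_d:.2e}, t_found = {t_found:.2f}")  # t_found around 2.4
print("Methods differ" if t_found > t_table else "No significant difference at the 95% CL")
```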
51
Summary figure: comparison of a known true value with the confidence interval; the tabulated t values define the regions of rejection and acceptance of the hypothesis.
52
What to do with outliers (points far from the rest)? Keep them or not? The G method (FYI: also the Q method; both should give similar estimates).
53
CHAPTER 04: Unnumbered Figure 4.4
54
CHAPTER 04: Equation 4.13
55
CHAPTER 04: Table 4.5
56
4-6) Test for Bad Data (Q test and Dixon's outliers). Sometimes one datum appears to be inconsistent with the remaining data. When this happens, you are faced with the decision of whether to retain or discard the questionable data point. The Q test allows you to make that decision:
Q = gap / range
gap = difference between the questionable point and the nearest point; range = spread of the data.
If Q(observed, or calculated) > Q(tabulated), the questionable data point should be discarded.

Q table for the rejection of data values:

Number of observations   90%     95%     99%
3                        0.941   0.970   0.994
4                        0.765   0.829   0.926
5                        0.642   0.710   0.821
6                        0.560   0.625   0.740
7                        0.507   0.568   0.680
8                        0.468   0.526   0.634
9                        0.437   0.493   0.598
10                       0.412   0.466   0.568
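A minimal sketch (my addition; the data set is a hypothetical example, not from the slides) of the Q test applied to a suspected outlier:

```python
def q_statistic(data, suspect):
    """Dixon Q = gap / range for one questionable point in a data set."""
    ordered = sorted(data)
    data_range = ordered[-1] - ordered[0]
    # gap: distance from the suspect value to its nearest neighbour
    gap = min(abs(suspect - x) for x in data if x != suspect)
    return gap / data_range

# Hypothetical example: five replicate results with one suspiciously high value
values = [12.53, 12.56, 12.47, 12.67, 12.48]
suspect = 12.67
q_calc = q_statistic(values, suspect)
q_table = 0.642  # 90% confidence, 5 observations

print(f"Q = {q_calc:.3f}")
print("Discard the point" if q_calc > q_table else "Retain the point")
```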
58
Note: you must act on the result of the Q test. If the test says the point should be discarded, the simplest course is to discard it; it is unethical to keep a datum that the test has rejected!
59
4-4 FYI: F Test. The F test provides a simple method for comparing the precision of two sets of measurements:
F = s1² / s2²,
where s1 is the standard deviation of method 1 and s2 is the standard deviation of method 2. The F test may be used to answer either of two questions: (1) is method 1 more precise than method 2, or (2) is there a significant difference in the precisions of the two methods? For the first case, the variance of the supposedly more precise procedure is placed in the denominator (s2²). For the second case, the larger variance is always placed in the numerator (s1²).
60
Critical values for F at the five percent level (degrees of freedom for the numerator across the top, for the denominator down the side):

Denominator dof   2       3       4       5       6       12      20      ∞
2                 19.00   19.16   19.25   19.30   19.33   19.41   19.45   19.50
3                 9.55    9.28    9.12    9.01    8.94    8.74    8.66    8.53
4                 6.94    6.59    6.39    6.26    6.16    5.91    5.80    5.63
5                 5.79    5.41    5.19    5.05    4.95    4.68    4.56    4.36
6                 5.14    4.76    4.53    4.39    4.28    4.00    3.87    3.67
12                3.89    3.49    3.26    3.11    3.00    2.69    2.54    2.30
20                3.49    3.10    2.87    2.71    2.60    2.28    2.12    1.84
∞                 3.00    2.60    2.37    2.21    2.10    1.75    1.57    1.00
61
Example: the standard deviation of six data points obtained in one experiment was 0.12%, while the standard deviation of five data points obtained in a second experiment was 0.06%. Are the standard deviations statistically the same for these two experiments? This example deals with question #2, so the larger standard deviation is placed in the numerator: F = (0.12)² / (0.06)² = 4.0. Note: if F(calculated) > F(table), a difference exists. F(tabulated) = 6.26 (5 numerator and 4 denominator degrees of freedom), so the difference between the standard deviations is statistically insignificant, i.e. there is no significant difference.
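A sketch (my addition) of the F comparison for this example:

```python
s1 = 0.12  # %, standard deviation of experiment 1 (6 data points -> 5 dof)
s2 = 0.06  # %, standard deviation of experiment 2 (5 data points -> 4 dof)

# Question #2: is there a significant difference in precision?
# The larger standard deviation goes in the numerator.
f_calc = max(s1, s2) ** 2 / min(s1, s2) ** 2
f_table = 6.26  # critical F at the 5% level for 5 (numerator) and 4 (denominator) dof

print(f"F = {f_calc:.2f}")  # 4.00
print("Precisions differ" if f_calc > f_table else "No significant difference in precision")
```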
62
Additional material (cont.)
63
Systematic error: what if all measurements are offset from the true value?
64
FYI: SYSTEMATIC ERROR. Back to the types and origin(s) of uncertainties and errors. We must address errors when designing and evaluating any analytical method or performing an analysis or determination.
Systematic errors (determinate errors): when they are detected, we must remove them, reduce them, or bring them under control. The signature of a determinate error is that all results fall on one side of the true value. Examples of systematic error:
- Instrument errors: a thermometer constantly reads two degrees too high (we can use a correction factor); a balance is out of calibration, so we must have it calibrated.
- Method errors: species or reagents that are not stable, or possible contamination; an invalid relationship between analyte and signal (standard curve not linear); limitations of equipment (tolerances, measurement errors of glassware, etc.); failing to calibrate your glassware or instrumentation; a lamp light source not properly aligned.
- Personal errors: color blindness, prejudice (you think the measurement is OK, or is bad); we make these a lot of the time! Not reading instruments correctly, not following instructions.
Suggested ways of eliminating systematic error: equipment calibration, self-discipline, care, analysis of known reference standards, variation of sample size, blank determinations, independent lab testing of the method or sample.
65
Random error is always present and always "symmetrical"! Random errors (indeterminate errors) cannot be reduced unless you change the instrument or method; they are always present and are distributed around some mean (true) value. Thus the data readings will fluctuate between low and high values and have a mean value. We often use statistics to characterize these errors.
66
Examples in a measurement: parallax error when reading a buret, or instrumental noise such as electrical voltage noise of the recorder, detector, etc.