Professor Abdul Muttaleb Jaber
Faculty of Pharmacy
Office: Room # 512 (Faculty of Pharmacy)
Tel:
E-mail: ajaber@philadelphia.edu.jo
Office hours: M & W 10-12 am
Chapter 1: Control of the Quality of Analytical Methods
Uncertainty in measurements
Significant figures and calculations
Control of errors in analysis
Fundamental statistical terms
Validation of analytical procedures
Standard operating procedure (SOP)
Reporting of results
Some terms used in the control of analytical procedures
Basic calculations in pharmaceutical analysis
Electronic Analytical Balance
Uncertainty in Measurement (Copyright McGraw-Hill 2009)
Exact: numbers with defined values. Examples: counting numbers, conversion factors based on definitions.
Inexact: numbers obtained by any method other than counting. Examples: measured values in the laboratory.
A measurement always has some degree of uncertainty, and that uncertainty has to be indicated in any measurement. Any measurement has certain digits and one uncertain digit; a digit that must be estimated is called uncertain.
Uncertainty in Measurements
Significant figures are used to express the uncertainty of inexact numbers obtained by measurement. The last digit in a measured value is an uncertain (estimated) digit. The number of certain digits plus the one uncertain digit is called the number of significant figures.
Rules for Counting Significant Figures
1. Nonzero integers
2. Zeros: leading zeros, captive zeros, trailing zeros
3. Exact numbers
Rules for Counting Significant Figures Nonzero integers always count as significant figures. 3456 has 4 sig figs.
Zeros
Leading zeros (zeros before the first nonzero digit) do not count as significant figures: 0.0486 has 3 sig figs and 0.0003 has 1 sig fig.
Captive (sandwiched) zeros always count as significant figures: 16.07 has 4 sig figs.
Zeros Trailing zeros are significant only if the number contains a decimal point. 9.300 has 4 sig figs.
Exact numbers have an infinite number of significant figures: 1 inch = 2.54 cm, exactly.
Zeros at the end of a number without a decimal point are ambiguous: 10,300 g has 3, 4 or 5 sig figs.
Practice: determine the number of significant figures in each of the following.
345.5 cm: 4 significant figures
0.0058 g: 2 significant figures
1205 m: 4 significant figures
250 mL: 2 significant figures
250.00 mL: 5 significant figures
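The counting rules above can be captured in a short helper; a minimal sketch in Python (the function name and string-based approach are illustrative, not from the slides):

```python
def sig_figs(number: str) -> int:
    """Count significant figures in a decimal numeral written as a string."""
    number = number.lstrip("+-")
    digits = number.replace(".", "")
    # Leading zeros never count.
    digits = digits.lstrip("0")
    # Trailing zeros count only when a decimal point is written.
    if "." not in number:
        digits = digits.rstrip("0")
    return len(digits)
```

This reproduces the practice answers, e.g. sig_figs("250") gives 2 while sig_figs("250.00") gives 5.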
Significant Figures in Calculations
A calculated answer cannot be more precise than the measuring tool; it must match the least precise measurement. Significant-figure rules are needed for final answers from 1) adding or subtracting and 2) multiplying or dividing.
Rules for Significant Figures in Mathematical Operations
Multiplication and Division: the number of sig figs in the result equals the number of sig figs in the least precise measurement used in the calculation.
6.38 x 2.0 = 12.76, which rounds to 13 (2 sig figs).
Addition and Subtraction: the number of decimal places in the result equals the number of decimal places in the least precise measurement; the answer cannot have more digits to the right of the decimal point than any of the original numbers.
6.8 + 11.934 = 18.734, which rounds to 18.7 (one decimal place).
Scientific Notation: Addition and Subtraction
(6.6 x 10^-8) + (4.0 x 10^-9) = 7.0 x 10^-8
(3.42 x 10^-5) - (2.5 x 10^-6) = 3.17 x 10^-5
Multiple computations
(2.54 x 0.0028) / (0.0105 x 0.060) = ?
1) 11.3   2) 11   3) 0.041
Continuous calculator operation (no intermediate rounding) gives 11.29, reported as 11 (2 sig figs).
Here the mathematical operation requires that we apply the addition/subtraction rule first, then the multiplication/division rule; the result of the worked example is 12.
Exact numbers do not limit the answer, because exact numbers have an infinite number of significant figures.
Example: a tablet of a drug has a mass of 2.5 g. If we have three such tablets, the total mass is 3 x 2.5 g = 7.5 g. In this case 3 is an exact number and does not limit the number of significant figures in the result.
To get the correct number of significant digits you need to round numbers.
Rounding rules: if the first digit to be dropped is less than 5, round down; if it is 5 or greater, round up.
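A sketch of this rounding rule in Python. Note that Python's built-in round() uses round-half-to-even, not the "5 or greater rounds up" rule stated above, so the decimal module is used here (the function name is ours):

```python
from decimal import Decimal, ROUND_HALF_UP

def round_half_up(value: float, places: int) -> float:
    """Round using the '5 or greater rounds up' rule.
    (Python's built-in round() uses round-half-to-even instead.)"""
    quantum = Decimal(1).scaleb(-places)
    return float(Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP))
```

For example, round_half_up(2.5, 0) gives 3, whereas the built-in round(2.5) gives 2.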
Fundamental statistical terms
Central value = arithmetic mean = average.
Median: the middle numerical value of an ordered data set; for 20.6, 20.1, 20.7, 20.0, 20.4 the median is 20.4.
Accuracy: nearness of the experimental value to the true value.
Absolute error = Xi - Xt (may be +ve or -ve; Xt is the true value)
Relative mean error = absolute error / true value
Percent relative error = relative error x 100
Accuracy and precision –Two ways to judge the quality of a set of measured numbers –Accuracy: how close a measurement is to the true or accepted value –Precision: how closely measurements of the same thing are to one another
Precision and Accuracy
Relationship between accuracy and precision (figure: accurate and precise)
Ways to express the accuracy of data
Absolute error = Xi - Xt (may be +ve or -ve; Xt is the true value)
Relative mean error = absolute error / true value
Percent relative error = relative error x 100
Ways to express the precision of data
Average deviation: d = (sum of |Xi - mean|) / n
Standard deviation: s = sqrt( (sum of (Xi - mean)^2) / (n - 1) )
Relative average deviation = average deviation / mean
Relative standard deviation: RSD = s / mean (RSD may also be called the coefficient of variation)
Percent relative standard deviation: %RSD = (s / mean) x 100
Range = absolute difference between the largest and the smallest values: w = Xhighest - Xlowest
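Applied to the five replicate results quoted above (20.6, 20.1, 20.7, 20.0, 20.4), these definitions can be checked with Python's statistics module (variable names are illustrative):

```python
import statistics

data = [20.6, 20.1, 20.7, 20.0, 20.4]  # replicate results from the slide

mean = statistics.mean(data)      # arithmetic mean
s = statistics.stdev(data)        # sample standard deviation (n - 1 denominator)
rsd_percent = 100 * s / mean      # %RSD (coefficient of variation)
w = max(data) - min(data)         # range
```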
Significance of the standard deviation
The standard deviation, s, measures how closely the data are clustered about the mean: the smaller the standard deviation, the more closely the data are clustered. A set of light bulbs having a small standard deviation in lifetime must be more uniformly manufactured than a set with a large standard deviation.
Example
Calculate the average and the standard deviation for the following four measurements: 821, 783, 834, and 855 hours.
The average is (821 + 783 + 834 + 855) / 4 = 823.2 hours; the standard deviation is s = 30.3 hours.
To avoid round-off errors, we generally retain one more significant figure for the average and the standard deviation than was present in the original data. The average and the standard deviation should both end at the same decimal place: for an average of 823.2, the value s = 30.3 is reasonable, but s = 30.34 is not.
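The worked example can be verified in a couple of lines; statistics.stdev uses the n - 1 denominator, matching the formula for s:

```python
import statistics

lifetimes = [821, 783, 834, 855]  # hours, from the worked example

mean = statistics.mean(lifetimes)  # 823.25, reported as 823.2
s = statistics.stdev(lifetimes)    # sample standard deviation, about 30.3
```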
Types of Errors
Determinate (systematic) errors:
They can be determined and eliminated.
They originate from a fixed cause.
They are either high or low every time the analysis is run.
Sources: methods, equipment and materials, personal judgment, mistakes.
Random (indeterminate) errors:
They originate from indeterminate processes.
They produce a value that is sometimes high and sometimes low.
Example: flipping a coin.
Characteristics of indeterminate errors
They cannot be controlled.
They can be evaluated statistically to supply information about the reliability of the data.
They vary in a nonreproducible way and are never the same except by chance.
In the mathematical description of the Gaussian distribution, the scatter (precision) is represented by the standard deviation sigma for an indefinitely large data set (the term s is used for a small data set, i.e. a finite number of measurements). Sigma varies with the data, but:
68.3% of the curve area always lies within 1 sigma of the mean
95.4% of the curve area always lies within 2 sigma of the mean
99.7% of the curve area always lies within 3 sigma of the mean
A large sigma means a broad error curve; a small sigma means a narrow curve. The number of degrees of freedom equals the number of measurements minus one, (n - 1).
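The 68.3/95.4/99.7% areas follow directly from the Gaussian error function; a quick check (the helper name is ours):

```python
import math

def fraction_within(k: float) -> float:
    """Fraction of a Gaussian distribution lying within +/- k sigma of the mean."""
    return math.erf(k / math.sqrt(2))
```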
Linearity (Fitting the Least-Squares Line)
Statistics provides a mathematical relationship for calculating the slope and intercept of the best straight line. The equation for a straight line is y = a + bx, where a = intercept and b = slope. It is assumed that the values of x are almost free of error; the failure of the data points to fall exactly on the line is assumed to be caused entirely by the indeterminate errors in the instrument readings, y.
The sum of the squares of the deviations, Q, of the actual instrument readings from the values predicted by the line is minimized by adjusting the slope, b, and the intercept, a. If a linear relationship between x and y does exist, this puts the line through the best estimates of the true mean values.
b = (n*sum(xy) - sum(x)*sum(y)) / (n*sum(x^2) - (sum(x))^2)
a = ymean - b*xmean, where xmean and ymean are the means of the x's and y's.
Once the best slope and intercept have been determined, a line with those values can be drawn on the graph along with the original data points to complete the plot.
Example
Many instrumental methods of analysis use calibration plots of measured signal versus concentration. Below are the data collected for such a plot. Assume that a linear relationship exists and that the concentration is the independent variable, known with a high degree of certainty.
Concentration (ppm)   Signal
1.00                  0.116
2.00                  0.281
5.00                  0.567
7.00                  0.880
10.00                 1.074
Calculate, using the least-squares method, the slope and intercept of the "best-fit" line, along with the confidence interval of each at the 95% level.
Let the independent variable (concentration) equal x and the dependent variable (signal) equal y. The quantities needed for the equations are:
n = 5, sum(x) = 25.00, sum(y) = 2.918, sum(xy) = 20.413, sum(x^2) = 179.00
From the equations given above:
Slope: b = (5 x 20.413 - 25.00 x 2.918) / (5 x 179.00 - 25.00^2) = 29.115 / 270 = 0.1078
Intercept: a = ymean - b*xmean = 0.5836 - (0.1078)(5.00) = 0.044
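The slope and intercept can be reproduced directly from the summed quantities; a sketch of the least-squares arithmetic for the calibration data:

```python
x = [1.00, 2.00, 5.00, 7.00, 10.00]      # concentration, ppm
y = [0.116, 0.281, 0.567, 0.880, 1.074]  # measured signal

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_xx = sum(xi * xi for xi in x)

b = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)  # slope
a = sum_y / n - b * sum_x / n                                  # intercept
```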
Terms used in the control of analytical procedures ICH: The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use ICH brings together the regulatory authorities and pharmaceutical industry of Europe, Japan and the US to discuss scientific and technical aspects of drug registration.
Method validation: what is it?
Method validation is the evaluation of a method to ensure that its performance is suitable for the analysis being carried out.
ISO definition: "Confirmation by examination and provision of objective evidence that the particular requirements for a specified intended use are fulfilled" [ISO 8402:1994]
Validation of analytical methods
Introduction: why analytical monitoring?
Definition of method validation
Why method development and method validation?
How is method validation done?
Requirements for validation of analytical procedures
Validation protocol for an analytical method
Tests required for validation of the analytical procedure:
Specificity and selectivity
Accuracy
Precision
Repeatability
Reproducibility
Sensitivity
Linearity and range
Limit of detection
Limit of quantitation
System stability
Ruggedness
Robustness
System suitability
Extent of validation
Protocols
Classification of analytical tests
Chemical laboratory validation requirements
Revalidation
Standard Reference Materials (SRMs) (or primary standards)
Accreditation of laboratory methods
Why method development and method validation?
Validation of an analytical method ensures that the results of an analysis are reliable and consistent and, perhaps more importantly, that there is a degree of confidence in the results. Whether as a customer of an analytical laboratory or as the performing laboratory, it must be demonstrated that the parameter you determine is the right one and that the results have demonstrated "fitness for purpose". Method validation provides the necessary proof that a method is "fit for purpose".
Comments on method development and method validation
Official analytical methods described in recognized publications for a particular active constituent or formulation are regarded as validated and do not require revalidation (regulatory methods). However, the suitability of these methods must be verified under actual conditions of use, i.e. the specificity and precision of the published method should be demonstrated when it is applied to the relevant sample matrix and laboratory conditions.
Tests Required for Validation of the Analytical Procedure
Data submitted should, as appropriate, address the following parameters:
Specificity of the procedure
Accuracy and precision of the procedure
Linearity of response for the analyte (and internal standard, if appropriate)
Limit of detection
Limit of quantitation
Sensitivity
Ruggedness/robustness
The USP Eight Steps of Method Validation
1. Specificity and selectivity
Specificity is the ability to assess the analyte unequivocally in the presence of components which may be expected to be present; typically these include impurities, degradants, the matrix, etc. The specificity of the analytical method must be demonstrated by providing data to show that:
Known degradation products and synthetic impurities do not interfere with the determination of the active constituent in bulk actives.
Known degradation products, synthetic impurities and the sample matrix present in the commercial product do not interfere with the determination of the active constituent.
2. Accuracy and Precision
Accuracy
Precision (may be expressed as standard deviation, variance, or coefficient of variation):
Repeatability (intra-assay precision): same operating conditions over a short interval of time, same person using a single instrument
Intermediate precision
Reproducibility
Accuracy
Accuracy is the closeness of agreement between the value found and the value accepted either as a conventional true value or as an accepted reference value. When measuring accuracy, it is important to spike the sample matrix with varying amounts of active ingredient(s); if a matrix cannot be obtained, a sample should be spiked at varying levels. In both cases, acceptable recovery must be demonstrated.
Accuracy is often described as "trueness".
Trueness: the closeness of agreement between the arithmetic mean of a large number of test results and the true or accepted reference value. [BS ISO 5725-1:1994]
For some methods the true value cannot be determined exactly, and it may be possible to use an accepted reference value instead, for example if suitable reference materials are available or if the reference value can be determined by comparison with another method.
Measurements of Accuracy
Accuracy may be measured in different ways, and the method should be appropriate to the matrix. The accuracy of an analytical method may be determined in any of the following ways:
a. Analyzing a sample of known concentration and comparing the measured value to the 'true' value. A well-characterized sample (e.g., a reference standard or a Certified Reference Material, CRM) must be used.
b. Spiked product-matrix recovery method (most widely used)
A known amount of pure active constituent is added to the blank matrix [a sample that contains all other ingredients except the active(s)]. Spiked samples are generally prepared at 3 levels in the range 50 - 150% of the target concentration, and the matrix is constructed to mimic representative samples in all respects where possible. For impurities, spiked samples are prepared over a range that covers the impurity content, for example 0.1 - 2.5% w/w. The resulting mixture is assayed, and the results obtained are compared with the expected result.
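Recovery from a spiked sample is simply the amount found relative to the amount added; a minimal helper (the function name is ours):

```python
def percent_recovery(found: float, added: float) -> float:
    """Percent recovery for a spiked-placebo sample: amount found vs amount added."""
    return 100.0 * found / added
```

The result is then compared against acceptance ranges such as 98 - 102% for components present above 10%.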
c. Standard addition method
This method is used if a blank sample cannot be prepared without the analyte being present. A sample is assayed, a known amount of pure active constituent is added, and the sample is assayed again. The difference between the results of the two assays is compared with the expected answer.
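Under the assumptions of a linear response through the origin and negligible dilution by the spike (assumptions ours, not stated on the slide), single-point standard addition reduces to one line of arithmetic:

```python
def standard_addition_conc(s_sample: float, s_spiked: float, added_conc: float) -> float:
    """Analyte concentration by single-point standard addition, assuming a linear
    response through the origin and negligible dilution by the spike."""
    return added_conc * s_sample / (s_spiked - s_sample)
```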
The accuracy of a method may vary across the range of possible assay values and therefore must be determined at several different levels. The accuracy study should cover at least 3 concentrations (e.g. 80, 100 and 120%) in the expected range. Accuracy may also be determined by comparing test results with those obtained using another validated test method.
Acceptance criteria: the expected recovery depends on the sample matrix, the sample processing procedure and the analyte concentration. The mean % recovery should be within the following ranges:
%Active/impurity content   Acceptable mean recovery (%)
> 10                       98 - 102
> 1                        90 - 110
0.1 - 1                    80 - 120
< 0.1                      75 - 125
Precision
Precision is the closeness of agreement (degree of scatter) between a series of measurements obtained from multiple sampling of a homogeneous sample under the prescribed conditions. Precision may be considered at three levels: repeatability, intermediate precision and reproducibility.
a. Repeatability
Repeatability expresses the precision under the same operating conditions over a short interval of time; it is also termed intra-assay precision. Factors such as the operator, equipment, calibration and environmental conditions remain constant and have little or no contribution to the final results. Repeatability is an intra-laboratory assay: repeated analysis of an independently prepared sample on the same day by the same operator in the same laboratory (a minimum of 5 replicate determinations must be made, and the mean, % relative standard deviation (RSD) and number of determinations reported).
b. Intermediate Precision
Intermediate precision expresses within-laboratory variations: different days, different analysts, different equipment, etc. It is assessed by repeated analysis of an independently prepared sample by different operators on different days in the same laboratory (a minimum of 5 replicate determinations must be made, and the mean, %RSD and number of determinations reported).
c. Reproducibility
Reproducibility expresses the precision between laboratories (collaborative studies, usually applied to standardization of methodology). It is a measure of a method's ability to perform a routine analysis and deliver the same results irrespective of laboratory, equipment and operator changes. Confirmation of reproducibility is important if the method is to be used in different laboratories for routine analysis. Reproducibility is expressed in terms of relative standard deviation, and the Horwitz equation is used as a criterion of acceptability for the measured reproducibility.
The modified Horwitz equation suggests
RSD < 2^(1 - 0.5 log C) x 0.67
where C is the concentration of the analyte expressed as a decimal fraction (e.g. 0.1, 1 x 10^-6, etc.). Collaborative methods do not require validation of reproducibility since, by their nature, they are validated in this way (provided that the analysis falls within the validated range of the method).
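The modified Horwitz limit is easy to evaluate; a sketch (the function name is ours):

```python
import math

def horwitz_limit(c: float) -> float:
    """Modified Horwitz acceptance limit for reproducibility RSD (%).
    c is the analyte concentration as a decimal fraction (1% -> 0.01)."""
    return 0.67 * 2 ** (1 - 0.5 * math.log10(c))
```

For C = 0.01 (a 1% analyte) the limit works out to 2.68% RSD.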
General considerations for precision
Precision should be investigated using homogeneous, authentic (full-scale, true) samples. If it is not possible to obtain a full-scale sample, it may be investigated using a pilot-scale or bench-top-scale sample or sample solution. The precision of an analytical procedure is usually expressed as the variance, standard deviation or coefficient of variation of a series of measurements. A minimum of 5 replicate sample determinations should be made, together with a simple statistical assessment of the results including the percent relative standard deviation.
Mebendazole is used as an anthelmintic.
Precision
RSD (%)   %Component measured in sample
2         10.0
5         1.0 to 10.0
10        0.1 up to 1.0
20        < 0.1
The following figure illustrates how precision may change as a function of analyte level. The %RSD values for ethanol quantitation by GC increased significantly as the concentration decreased from 1000 ppm to 10 ppm. Higher variability is expected as analyte levels approach the detection limit of the method; the developer must judge at what concentration the imprecision becomes too great for the intended use of the method.
Figure 2. %RSD versus concentration for a GC headspace analysis of ethanol.
3. Sensitivity, linearity, limit of detection, limit of quantitation
Sensitivity
Linearity and range
Limit of detection
Limit of quantitation
Sensitivity
Sensitivity is the gradient of the response curve, i.e. the change in instrument response that corresponds to a change in analyte concentration within the linear range.
Linearity
A linearity study verifies that the sample solutions are in a concentration range where analyte response is linearly proportional to concentration. For assay methods, this study is generally performed by preparing standard solutions at five concentration levels, from 50 to 150% of the target analyte concentration; five levels are required to allow detection of curvature in the plotted data.
Standards should be prepared and analyzed a minimum of three times. For impurity (low-concentration) methods, linearity is determined by preparing standard solutions at five concentration levels over a range such as 0.05 - 2.5 wt%. When linear regression analyses are performed, it is important not to force the line through the origin (0,0) in the calculation; this practice may significantly skew the actual best-fit slope through the physical range of use.
The linearity of the method over an appropriate range must be determined and reported. The range selected must cover the nominal concentration of the analyte in the product +/- 25%. Duplicate determinations must be made at 3 or more concentrations (nominal concentration +/- 25%). Reports submitted must include the slope of the line, the intercept and the correlation coefficient.
Linearity
Table of values (x, y)
#   Reference material (mg/ml)   Calculated (mg/ml)
1   0.0100                       0.0101
2   0.0150                       0.0145
3   0.0200                       0.0210
4   0.0250                       0.0260
5   0.0300                       0.0294
6   0.0400                       0.0410
Linearity statistics
Slope: 1.0237
Intercept: -0.0002
Correlation coefficient: 0.9978
Relative standard deviation: 3.4%
Limit of linearity and range: 0.005 - 0.040 mg/mL
Acceptability of linearity data is often judged by examining the correlation coefficient and y-intercept of the linear regression line for the response-versus-concentration plot. A correlation coefficient of > 0.999 is generally considered evidence of an acceptable fit of the data to the regression line, and the y-intercept should be less than a few percent of the response obtained for the analyte at the target level.
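The correlation coefficient used in this acceptance check can be computed from the same sums as the least-squares fit; a sketch (for the earlier calibration example this gives r of about 0.991, which would fail the > 0.999 criterion):

```python
import math

def pearson_r(x, y):
    """Correlation coefficient for a response-versus-concentration plot."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    return (n * sxy - sx * sy) / math.sqrt((n * sxx - sx ** 2) * (n * syy - sy ** 2))
```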
Calibration and Standards
In almost all chemical analyses, chemical concentrations are found by indirect measurements, which are based on direct measurements together with calibration. Calibration ascertains the relationship between the content of the sample and the response of the assay method and is essential for quantitative analysis. Chemical standards are pure substances, mixtures, solutions, gases, or materials such as alloys or biological substances that are used to calibrate and validate all or part of the methodology of a chemical analysis. Experimental optimization of an assay normally requires the use of chemical standards.
Range
The range is the interval between the upper and lower concentrations of analyte in the sample (including these concentrations) for which it has been demonstrated that the analytical procedure has a suitable level of precision, accuracy and linearity.
Determination of the range
In practice, the range is determined using data from the linearity and accuracy studies, assuming that acceptable linearity and accuracy (recovery) results were obtained.
Acceptance criteria for the range
An example of range criteria for an assay method: the acceptable range is defined as the concentration interval over which linearity and accuracy are obtained per the previously discussed criteria and that, in addition, yields a precision of 3% RSD. For an impurity method, the acceptable range is defined as the concentration interval over which linearity and accuracy are obtained per the above criteria and that, in addition, yields a precision of 10% RSD.
Limit of Detection
The limit of detection (LOD) is the lowest amount of analyte in a sample which can be detected but not necessarily quantitated as an exact value; it is the point at which a measured value is larger than the uncertainty associated with it. In chromatography, for example, the limit of detection is the amount that produces a peak with a height at least 3 times the baseline noise level. For validation it is usually sufficient to indicate the level at which detection becomes problematic.
LOQ, LOD and signal-to-noise ratio
(Figure: chromatogram showing the baseline noise level, peak A at the LOD and peak B at the LOQ.)
The noise level plays an important role in the LOD and LOQ.
The LOD may be determined by injecting the lowest calibration standard 10 (or up to 20) times and calculating the standard deviation; three times the standard deviation is taken as the LOD.
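That procedure amounts to one call to a standard-deviation routine; a minimal sketch (the function name is ours):

```python
import statistics

def lod_estimate(replicate_responses):
    """LOD as 3x the standard deviation of replicate injections
    of the lowest calibration standard (10-20 injections)."""
    return 3 * statistics.stdev(replicate_responses)
```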
Limit of quantitation (LOQ)
The limit of quantitation is a parameter of quantitative assays for low levels of compounds in sample matrices; it is used particularly for the determination of impurities and/or degradation products, or of low levels of active constituent in a product. The LOQ is the lowest amount of the analyte in the sample that can be quantitatively determined with defined precision under the stated experimental conditions.
The LOQ may be determined by preparing standard solutions at the estimated LOQ concentration (based on preliminary studies). The solution should be injected and analyzed n (normally 6-10) times, and the average response and relative standard deviation (RSD) of the n results calculated; the RSD should be less than 20%. If the RSD exceeds 20%, a new standard solution of higher concentration should be prepared and the procedure repeated.
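The 20% RSD acceptance check described above can be sketched as follows (the function name and default limit parameter are illustrative):

```python
import statistics

def loq_rsd_acceptable(responses, limit_percent=20.0):
    """Check the LOQ criterion: the %RSD of replicate analyses at the
    estimated LOQ concentration must not exceed the limit (20% here)."""
    rsd = 100 * statistics.stdev(responses) / statistics.mean(responses)
    return rsd <= limit_percent
```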
The EURACHEM approach is to inject 6 samples of decreasing analyte concentration. The calculated RSD is plotted against concentration, and the amount that corresponds to a predetermined RSD is defined as the limit of quantitation.
4. System Stability
Many solutes readily decompose prior to analytical investigation, for example during the preparation of the sample solutions, during extraction, clean-up and phase transfer, and during storage of prepared vials (in refrigerators or in an automatic sampler). Under these circumstances, method development should investigate the stability of the analytes.
Determination of the stability of the samples being analyzed in a sample solution
System stability is a measure of the bias in assay results generated during a pre-selected time interval, for example every hour up to 46 h, using a single solution; it should be determined by replicate analysis of the sample solution. System stability is considered appropriate if the relative standard deviation calculated on the assay results obtained at the different time intervals does not exceed 20% of the corresponding value of the system precision. If the value is higher, the maximum duration of usability of the sample solution can be calculated by plotting the assay results as a function of time.
Example of system stability
The effect of long-term storage and freeze-thaw cycles can be investigated by analyzing a spiked sample immediately upon preparation and on subsequent days of the anticipated storage period. A minimum of two cycles at two concentrations should be studied in duplicate. If the integrity of the drug is affected by freezing and thawing, spiked samples should be stored in individual containers and appropriate caution employed for study samples.
5. Ruggedness and robustness of the method
Ruggedness and robustness concern variability caused by:
Day-to-day variations
Analyst-to-analyst variations
Laboratory-to-laboratory variations
Instrument-to-instrument variations
Chromatographic column-to-column variations
Reagent kit-to-kit variations
Instability of analytical reagents
Robustness
The robustness of an analytical procedure is a measure of its capacity to remain unaffected by small but deliberate variations in method parameters, and provides an indication of its reliability during normal usage. Ideally, robustness should be explored during development of the assay method. The most efficient way to do this is through a designed experiment: such experimental designs might include a Plackett-Burman matrix approach to investigate first-order effects, or a 2^k factorial design that provides information on both first-order (main) and higher-order (interaction) effects.
In carrying out such a design, one must first identify variables in the method that may be expected to influence the result. For instance, for an HPLC assay which uses an ion-pairing reagent, one might investigate: sample sonication or mixing time; mobile-phase organic solvent content; mobile-phase pH; column temperature; injection volume; flow rate; modifier concentration; concentration of the ion-pairing reagent; etc. Through this sort of development study, the variables with the greatest effects on the results may be identified in a minimal number of experiments. The actual method validation will then ensure that the final, chosen ranges are robust.