
1 Electron probe microanalysis
Accuracy and Precision in EPMA: Understanding Errors and the Role of Standards

2 What’s the point? How much can I trust the compositions that the probe computer spits out? Are two analyses equivalent? Can I compare my numbers with those published by other researchers using EPMA?

3 Goal and Issues
Goal: achieve high accuracy and precision in quantitative analyses by recognizing the sources of error and minimizing them. Issues involved with achieving this goal:
- Standards
- Instrumental stability
- Sample and standard physical condition
- Beam impact on the sample
- Spectral issues
- Counting statistics
- Matrix correction

4 Standards: how "good" are they? Well characterized? Homogeneous?
Instrumental conditions: beam stability; spectrometer reproducibility; thermal stability; detector pulse height stability/adjustment; reflected light optics (stage Z).
Matrix correction: any issues (e.g. MACs for light elements)? Wide range in Z for a binary (e.g. PbO)?
Sample and standard conditions: rough surface? Polish? Etched? Tilt? Beam sensitive? C-coat thickness, if used?
Counting statistics: enough counting time?
Spectral issues: peak and background overlaps?
Sample size vs. interaction volume: homogeneous? Small particles? Secondary fluorescence?

5 These can be categorized into "random" and "systematic" errors.

6 Random Errors
Random errors include:
- the random nature of X-ray generation and emission
- instrumental (random) instability
- operator inconsistency (e.g. little attention to correct optical focus)
- sample surface roughness
- the interaction volume intersecting two phases
- secondary fluorescence from hidden (below-surface) phases
- stray cosmic rays

7 Systematic Errors
Systematic errors include:
- instrumental instability (temperature effects on crystal 2d and on gas pressure; stage Z drifting as the stage heats up)
- inappropriate matrix correction
- poor electrical ground of either standard or unknown
- beam change/damage to the unknown (e.g. Na in glass)
- differences in peak shape/position (standard vs unknown)
- peak or background interference
- pulse height depression on the standard
- fluorescence across observed phase boundaries (e.g. in a diffusion couple)

8 Precision and Accuracy in Error Analysis
Precision refers to the reproducibility of the counts, and thus the ability to compare compositions, whether within a sample, between samples, or between analytical sessions. It is directly tied to counting statistics. It is a relative description. Accuracy refers to the "truth" of the analysis, and is directly tied to the standards used and the matrix correction applied to the raw data, as well as to most of the other variables listed previously that can affect the X-ray intensities (background and peak interferences, beam damage, etc). It is an absolute description. EPMA quantitative error analysis is a combination of both, the first being very easy to define, the latter more difficult. Precision for major elements can easily be <1%, but when accuracy is folded in, the total EPMA error is probably 1-2% in the best cases (for major elements).

9 Precision and Accuracy
(Figure: the classic target diagrams contrasting the four combinations of low/high precision and low/high accuracy.)

10 Instrumental Errors-1 Beam current stability: with Faraday cup measurements made for each analysis, long-term drift should not be a problem, as the counts for each analysis are normalized to a common reference current value (could be 1 or 20 nA). For the long count times (minutes) of trace element work, it is recommended that the peak and background counting be cycled continuously, so that any longer-period drift is spread out over the whole counting period. Spectrometer reproducibility: with modern microprobes this should not be a serious problem, although problems do crop up with age. Where crystals are flipped, a small fraction of cases show an error; generally it is not recommended to flip crystals within an analysis. When spectrometer reproducibility is a problem, it appears as backlash in the gears; to minimize errors, the peaks should always be approached from the same direction. This is set up within the software.
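A minimal sketch of that current normalization (the function and variable names are mine, not PfW's):

```python
def normalize_cps(cps_measured: float, current_measured_nA: float,
                  reference_nA: float = 20.0) -> float:
    """Scale a measured count rate to a common reference beam current.

    To first order the X-ray count rate is proportional to beam current,
    so dividing out the Faraday-cup reading removes long-term beam drift
    between analyses."""
    return cps_measured * (reference_nA / current_measured_nA)

# e.g. 4479 cps measured while the Faraday cup read 19.8 nA:
print(normalize_cps(4479.0, 19.8))  # ~4524 cps at the 20 nA reference
```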

11 Instrumental Errors-2 Thermal stability: Spectrometers can drift if the room temperature changes, though this would presumably be noticeable to the operator (e.g. the air conditioning failing in a hot spell). I have not seen problems with PET or LiF, though I have with TAP, which could be thermal. P10 gas pressure is sensitive to temperature changes. We attempt to keep the room at 68-70°F, and the circulating water temperature in the machine is very close to this. Stage height (Z) drifts due to motor heating during long (overnight) runs. Detector pulse height adjustment/stability: The bias (voltage) of the gold wire in the detector must be set to the proper value; this is a function of the X-ray energy and the gas pressure. The operator must verify that the bias, gain and baseline are set properly (the last particularly where the Ar-escape peak is partially resolved).

12 Instrumental Errors-3 Dead time: In WDS, counts are dead-time corrected. If the dead time is not accurately determined, there can be a systematic error here. Cameca probes operate somewhat differently from JEOL and others, in that Cameca introduces a "hard" constant time delay (e.g. 3 µs) automatically into the counting circuitry and then uses that value to correct the counts. Probe labs should verify (at least once) that the manufacturer's "official" or "default" dead time factors are correct. This is done by counting on a metal standard (e.g. Si or Ge) at varying Faraday currents, with the dead time correction turned off. These data can then be plugged into a spreadsheet that Paul Carpenter (NASA) has developed to calculate the most accurate dead time actually present on a particular probe. Also, in our Probe for Windows software, there is an option for an alternate, more complex dead time correction equation for high count rates (>50K cps).
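For reference, the standard first-order (non-paralyzable) correction looks like this sketch; the 3 µs value is illustrative:

```python
def deadtime_correct(observed_cps: float, tau_s: float) -> float:
    """First-order dead time correction: during each detected pulse the
    counter is blind for tau seconds, so true = observed / (1 - observed*tau)."""
    return observed_cps / (1.0 - observed_cps * tau_s)

# At 50,000 observed cps with a 3 microsecond dead time the correction
# is already ~18%:
print(deadtime_correct(50_000.0, 3e-6))  # ~58,800 true cps
```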

13 Instrumental Errors-4 Specimen focus (stage height): Samples and standards must be positioned at the same stage height, so that they are all at the same position vis-a-vis the Rowland circle (= in X-ray focus for Bragg diffraction). Sometimes it is difficult to decide within 1-3 µm which is the "best" height; this small Z difference is not critical. It becomes critical when it reaches the 5 or 10 µm "out of focus" realm, which can occur during unattended overnight runs as the sample and stage heat up (heat from the stage motors); this can be addressed by using the stage Z "autofocus" automation (but test it out first, as it must be calibrated).

14 Sample/standard Error: Physical issues - 1
Surface irregularities: the matrix correction relies upon the correct take-off angle to calculate the path length for the absorption correction; irregular surfaces have variable path lengths, and thus the measured X-ray intensities will not be consistent between analytical spots. Moreover, with different spectrometers mounted in different directions, the path length will vary between spectrometers for a single analytical spot. Etched samples: etching generally introduces some irregularity and should be avoided, although I have seen slightly etched samples analyzed without apparent problem. Polishing: samples should be polished with the final stage using <1 µm diamond, alumina or silica.

15 Sample/standard Error: Surface Irregularities
These Monte Carlo simulations show the effect on the K and L line X-rays of Ni and Al of one-directional V-grooves of height (h) varying from 0.1 to 1 µm. The smallest (0.1 µm) grooves have no noticeable effect, but the deeper grooves clearly have major impacts on Al Kα and Ni Lα (due to more or less absorption), with the greatest impact on the lowest-energy line. (Lifshin and Gauvin, 2001, Fig. 4, p. 171.)

16 Sample/standard Error: Physical issues - 2
Specimen homogeneity: a key assumption of quantitative EPMA is that the interaction volume is one phase (is homogeneous). If more than one phase is overlapped by the beam, the matrix correction usually overcompensates and produces an erroneous composition, with totals >100 wt%. This is common for small eutectic (groundmass) phases. If trace elements are being considered, then the adjacent surrounding volume (which secondary fluorescence can reach, well beyond the primary interaction volume) must also not contain phases with higher concentrations of the elements of interest. Diffusion couples have similar constraints, in that secondary fluorescence across the boundary can contribute apparent concentrations of up to a couple of percent (which can also give high totals). Users need to verify, either empirically or theoretically, that this is NOT happening.

17 Sample/standard Error: Physical issues - 3
Incorrect geometry (non-orthogonal surface): this occurs too often with 1" diameter plugs that have been automatically polished. For whatever reason, the sample surface ends up at a slant to the wall, and when the set screw is tightened in the holder, the surface sits at an angle to the horizontal. This introduces an error in the take-off angle. Also, the area of interest may end up too low to reach stage Z focus. This Monte Carlo simulation shows that a 5% tilt of the sample will alter the K-ratio of Al Kα by 0.01, which equals an 8% relative error before matrix correction. An Al ZAF of 1.5 would thus increase the error to 12%. (Lifshin and Gauvin, 2001, Fig. 3, p. 170.)

18 Sample/standard Error: Physical issues - 4
Incorrect geometry - edge effects: materials mounted in epoxy and then polished with loose polishing compound commonly suffer differential erosion at the epoxy-material interface, producing a moat or channel in the epoxy and a rounding of the material at the edge. Efforts to do quantitative EPMA at the edge (rim) will be in error, as the absorption path length will be non-uniform and different from the nominal length. Special polishing techniques will minimize or eliminate this problem. (Diagrams: an epoxy-mounted specimen with the common erosion problem, a rounded specimen edge, versus the desired geometry with no rounding.)

19 Sample/standard Error: Physical issues - 5
Oxide coating/film: this can be a significant problem for metals that oxidize (e.g., Al, Mn, Mg, Ti, etc.), particularly for standards. These films can reach fractions of a µm in depth and significantly alter the X-ray intensity of the line being acquired on the standard, resulting in an overestimate of the element in the unknown. This plot shows the effect of a thin oxide skin (TiO2) in reducing the characteristic X-rays from a pure metal standard (Ti); the effect is most severe at lower E0. (Modeled with the GMRFilm software.)

20 Sample/standard Error: Physical issues - 6
Smear coat: soft materials may smear and cross-contaminate other materials being polished, either in the same holder or in a subsequent sample, producing a thin "smear coat". I have seen one reference in the literature to Pb or Sn smearing. It is not normally considered a major problem, at least for major element analysis. Polishing artifacts: diamond and alumina polishing particles can get caught in pores in the material being polished. I have seen µm-scale fragments of brass from a brass sample holder become lodged in feldspar and biotite. Charging: this reduces the effective E0 in a random manner. Conductive samples in epoxy must be grounded with conductive tape (preferred over paint). Semiconductors conduct adequately. Non-conductive samples need to be coated (C, Al, Ag, Be...). Porosity: there can be 2 errors in porous material: the electron range will be greater (absorption path longer) than in non-porous material of the same composition; and in non-conductive material there can be charging problems as the electrons travel between pores (~vacuum) and material.

21 Sample/standard Error: Physical issues - 7
Carbon coat: the conductive coating on the samples should be of the same thickness as on the standards being used. This can be evaluated experimentally or with the GMRFilm modeling program. Kerrick et al. (1973) measured the effect and showed that it affects the light elements most strongly and is worst at lower E0: a difference of 200 Å between sample and standard translated to a 4% difference in F Kα intensity. There is some anecdotal evidence that old (many years to a decade?) carbon coats may "go bad" (oxidize? delaminate?) and lose conductivity. (Kerrick et al., 1973, American Mineralogist, 58.)

22 Sample/standard Error: Physical issues - 8
Beam sensitive samples: these require care, such as a lower current (e.g. 1-6 nA) and a defocused beam (10-25 µm), or a correction for count drop (the "volatile element" correction in our Probe for Windows software):
- Glasses with alkalis (esp. Na), particularly hydrous glasses: Na drops precipitously, K somewhat, with Al and Si counts increasing.
- Alkali feldspars, particularly albite: Na counts drop.
- Carbonates and anhydrite: easily decompose at 10 nA.
- Apatite: not as fragile, but some grains will crater at moderate currents (60 nA) after a matter of seconds.

23 Sample/standard Error: Physical issues - 9
Above, the dramatic drop in Na Kα counts versus time is demonstrated; it is worst at 20 nA and much less at 2 nA. The related phenomenon of "grow-in" is shown to the right, with Al Kα showing a greater increase in counts with time than Si Kα. (Morgan and London, 1996, American Mineralogist, 81.)

24 Sample/standard Error: Physical issues - 10
Oxidation of iron in basaltic glasses: Fialin et al. (2001) reported that a high electron dose (130 nA, beam less than 30 µm in diameter) led to oxidation. This was in reference to a study of Fe Lα/Lβ as an indicator of Fe oxidation state. Sample orientation: Stormer et al. showed that F and Cl Kα intensities in apatite can vary with time if the electron beam is perpendicular to the c axis. (Stormer et al., 1993, American Mineralogist, 78.)

25 Sample/standard Error: Physical issues - 11
Beam deflection: magnetic specimens (e.g. some Ni-Mn compounds) apparently deflect the electron beam, as seen by contamination spots offset from the "normal" incident position (which would affect the Rowland circle orientation). Limited experience suggests that carbon coating, along with rigorous use of a constant magnification for all standards and unknowns, may help. Not much has been published on this.

26 Sample/standard Error: Procedural issues - 1
Peak interferences: if a measured peak is overlapped by a peak of another element, obvious errors will result. Such interferences can exist in both standards and unknowns, and in unknowns they can yield high totals. Unavoidable peak interferences must be addressed by using interference standards to subtract the correct fraction of counts attributable to the interfering element (see the sketch below). Background position interferences: incorrect placement of the background counting positions can lead to errors, as the background estimate at the peak position is then usually inflated, yielding lower-than-true net counts for the element. Wavescans should be done on typical phases, and/or Virtual WDS used to evaluate the situation.
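A sketch of that interference subtraction; the calibration factor and the V/Ti example values are illustrative, not from any particular session:

```python
def interference_corrected_cps(measured_cps: float,
                               interferer_wt_pct: float,
                               cps_per_wt_pct: float) -> float:
    """Subtract the counts contributed by an overlapping line of another
    element. cps_per_wt_pct is calibrated on an interference standard:
    a phase containing the interfering element but none of the element
    being measured."""
    return measured_cps - interferer_wt_pct * cps_per_wt_pct

# e.g. Ti Kb overlapping V Ka: 850 cps measured at the V Ka position in
# an unknown with 12.0 wt% Ti, where a V-free Ti standard gives 6.0 cps
# (at the V Ka position) per wt% Ti:
print(interference_corrected_cps(850.0, 12.0, 6.0))  # 778 cps net V signal
```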

27 Sample/standard Error: Procedural issues - 2
Peak shift/shape differences: we have discussed the issue of peak shifts for S Kα. Al Kα is another line with well-documented differences between the metal, oxide, and alumino-silicate phases. F and other light elements, and the L lines of Co and Ni, have similar issues. Peak shifts can yield small to significant errors. PHA settings: bias, gain, and baselines should be checked; gross errors in them can produce significant errors in the analytical results. Pulse height depression occurs mainly where there is a large discrepancy in count rate between standard and unknown (e.g. a very high count rate on a B standard vs 500 cps on a Mo-Si-B phase); moderate count rates should be OK. Dropping the current on the B standard from 30 to 1 nA solved the problem in that case.

28 Counting Statistics - 1 We desire to count the X-ray intensities of peaks and backgrounds, for both standards and unknowns, with high precision and accuracy. X-ray production is a random process (Poisson statistics), where each repeated measurement represents a sample of the same specimen volume. The expected distribution can be described by Poisson statistics, which for large numbers of counts is closely approximated by the "normal" (Gaussian) distribution. For Poisson distributions, 1 sigma = the square root of the counts, and 68.3% of the sampled counts should fall within ±1 sigma, 95.4% within ±2 sigma, and 99.7% within ±3 sigma. (Lifshin and Gauvin, 2001, Fig. 6, p. 172.)
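A minimal sketch of these relations (the count total is illustrative):

```python
import math

N = 120_000                 # total accumulated counts (illustrative)
sigma = math.sqrt(N)        # 1 sigma for a Poisson-distributed count
rel = sigma / N             # relative 1-sigma precision

print(f"1 sigma = {sigma:.0f} counts = {100 * rel:.2f}% relative")
# ~68.3% of repeated measurements fall within +/-1 sigma,
# ~95.4% within +/-2 sigma, ~99.7% within +/-3 sigma.
```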

29 Counting Statistics-2 The precision of the composition ultimately is a combination of the counting statistics of both standard and unknown, and Ziebold (1967) developed an equation for it. Recall that the K-ratio is

K = \frac{P - B}{P' - B'}

where P and B refer to peak and background counts, and the primes denote the standard. The corresponding precision in the K-ratio is given by

\left(\frac{\sigma_K}{K}\right)^2 = \frac{P + B}{n\,(P - B)^2} + \frac{P' + B'}{n'\,(P' - B')^2}

where n and n' are the number of repetitions of counts on the unknown and standard, respectively. (The rearranged σ_K/K term, with square roots taken, was sometimes referred to as the "sigma upon K" value.)
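A small sketch of that error propagation, with invented count totals:

```python
import math

def k_ratio_precision(P, B, n, Ps, Bs, ns):
    """Relative 1-sigma precision of the K-ratio (Ziebold-style),
    combining Poisson errors on the unknown (P peak counts, B background
    counts, n repetitions) and the standard (Ps, Bs, ns)."""
    var_unk = (P + B) / (n * (P - B) ** 2)
    var_std = (Ps + Bs) / (ns * (Ps - Bs) ** 2)
    return math.sqrt(var_unk + var_std)

# 50,000/2,000 peak/background counts on the unknown (3 repetitions),
# 100,000/3,000 on the standard (3 repetitions):
rel = k_ratio_precision(50_000, 2_000, 3, 100_000, 3_000, 3)
print(f"sigma_K / K = {100 * rel:.2f}%")  # ~0.33%
```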

30 Counting Statistics-3 Another format for considering the cumulative precision of the unknown is the above graph (from the MAC shortcourse volume). A maximum error at the 99% confidence interval can be calculated based upon the total counts acquired on both the standard and the unknown: e.g. to hold the maximum counting error to 1% you must have at least ~120,000 counts on the unknown and on the standard; you could get 2% with ~30,000 counts on each.
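Those numbers can be roughly cross-checked; the z-value for 99% confidence and the assumption of equal counts on standard and unknown are my choices here, so this is a ballpark reproduction of the graph rather than an exact one:

```python
import math

def counts_needed(max_rel_error: float, z: float = 2.576) -> int:
    """Counts needed on BOTH the unknown and the standard so that the
    combined Poisson error stays below max_rel_error at ~99% confidence.
    With equal counts N on each, the combined relative sigma is sqrt(2/N)."""
    return math.ceil(2 * (z / max_rel_error) ** 2)

print(counts_needed(0.01))  # ~130,000 counts each for a 1% maximum error
print(counts_needed(0.02))  # ~33,000 counts each for a 2% maximum error
```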

31 Probe for Windows Statistics - 1
PfW provides several statistics in the normal default "log window" printout for background-subtracted peak counts: average, standard deviation, 1 sigma, std dev/1 sigma (SIGR), standard error, and relative std dev. For Si: the average is 4479 cps, and the sample standard deviation (SDEV) over the 3 measurements is 15 cps. The counting error (1 sigma) is somewhat larger (21 cps), and the ratio of std dev to sigma is <1, indicating good homogeneity in Si. For homogeneous samples, we can define a standard error for the average: here, 8 cps. Finally, the printout shows the relative standard deviation as a percentage (0.3%, excellent). NB: These measurements speak only to precision, both counting variation and sample variation.
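A sketch of how those log-window statistics come about; the three cps values and the 10 s count time are invented to roughly reproduce the slide's 4479/15/21/8 cps figures:

```python
import math
import statistics as st

rates = [4464.0, 4479.0, 4494.0]   # cps, background-subtracted (invented)
count_time_s = 10.0                # assumed counting time per point

avg   = st.mean(rates)                    # AVER: 4479 cps
sdev  = st.stdev(rates)                   # SDEV: scatter of the points (15)
sigma = math.sqrt(avg / count_time_s)     # 1SIG: Poisson error in cps (~21)
sigr  = sdev / sigma                      # SIGR: <1 suggests homogeneity
serr  = sdev / math.sqrt(len(rates))      # SERR: std error of the mean (~9)
rsd   = 100 * sdev / avg                  # %RSD: relative std dev (~0.3%)

print(avg, sdev, sigma, sigr, serr, rsd)
```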

32 Probe for Windows Statistics - 2
After the raw counts, the elemental weight percents are printed with some of the same statistics, followed by the specific standard (number) used. Following that are the standard K-ratio and the standard peak (P-B) count rate. Below that are the unknown K-ratio, the unknown peak count rate, and the unknown background count rate. Below that are the ZAF correction ("ZCOR") for the element, the raw K-ratio of the unknown, the peak-to-background ratio of the unknown, and any interference correction applied ("INT%", as a percentage of the measured counts). NB: The number of digits after a decimal point in a printout composition needs to be used with common sense!

33 Probe for Windows Statistics - 3
PfW provides additional optional statistics. One set relates to detection limits, i.e. the lowest level you can be confident in reporting; we will deal with those later, when we discuss trace elements in a few weeks. The other set relates to the homogeneity of the unknowns, as well as the calculation of analytical error. We will now discuss these statistics.

34 Analytical error - single line
This calculation is for the analytical sensitivity of each line (= one measurement), considering both peak and background count rates. It is a similar type of statistic to the 1 sigma counting precision figure, but it is presented as a percentage. (Love and Scott, 1974.)
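As a hedged reconstruction of this statistic (the usual counting-statistics form, with $N_P$ and $N_B$ the accumulated peak and background counts):

$$\varepsilon\,(\%) \;=\; \frac{100\,\sqrt{N_P + N_B}}{N_P - N_B}$$

since the variance of the net signal $N_P - N_B$ is $N_P + N_B$ for Poisson counts.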

35 Additional analytical statistics
Probe for Windows provides a more advanced set of calculations for analytical statistics. The calculations are based on the number of data points acquired on the sample and the measured standard deviation for each element. This matters because, although X-ray counts theoretically have a standard deviation equal to the square root of the mean, the actual standard deviation is usually larger due to instrument drift, X-ray focusing errors, and variability in X-ray production. A common question is whether a phase being analyzed by EPMA is homogeneous, and whether it is the same as or distinct from another, separate sample. A simple check is to look at the average composition and see if all analyses fall within some range of sigmas (2 for 95%, 3 for 99% normal probability).

36 Homogeneity: confidence intervals
A more exacting criterion is to calculate a precise range (in wt%) and level (in %) of homogeneity. These calculations utilize the standard deviation of the measured values and the degree of statistical confidence in the determination of the average. The degree of confidence means that we wish to limit the risk α of rejecting a good result. "Student's t distribution" gives the confidence levels for evaluating data, i.e. whether a particular value can be said to be within the expected range of a population -- or, more usefully, whether two compositions can confidently be said to be the same. The degree of confidence is given as 1 - α, usually 0.95 or 0.99. This means we can define a range of homogeneity, in wt%, such that on average only 5% or 1% of repeated random points would fall outside this range.
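A sketch of such a confidence range using Student's t (scipy's t distribution; the five wt% values are invented for illustration):

```python
import math
from statistics import mean, stdev
from scipy.stats import t

def homogeneity_halfwidth(values, confidence=0.95):
    """Half-width (wt%) of the confidence range about the average,
    using Student's t with n-1 degrees of freedom."""
    n = len(values)
    t_crit = t.ppf(1 - (1 - confidence) / 2, df=n - 1)
    return t_crit * stdev(values) / math.sqrt(n)

si = [18.85, 18.92, 18.88, 18.95, 18.90]   # wt% Si, invented values
print(f"{mean(si):.2f} +/- {homogeneity_halfwidth(si):.2f} wt% (95%)")
```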

37 Student’s t distribution
The general problem, where the sample size is small and the population variance is unknown, was first treated in 1905 by W.S. Gosset, who published his analysis under the pseudonym "Student". His employer, the Guinness Breweries of Ireland, had a policy of keeping all their research as proprietary secrets. The importance of his work argued for its being published, but it was felt that anonymity would protect the company. (S.L. Meyer, Data Analysis for Scientists and Engineers, 1975, p. 274; Goldstein et al., p. 497.)

38 Test for homogeneity

39 Olivine analysis: Example of homogeneity tests
Recall the original analysis. What this means: for Si, at the highest confidence level (95%), there is only a 5% chance that a random repeated point will be more than 0.14 wt% greater or lesser than the average (or, as a percentage, 0.7% relative). PfW also provides a handy table showing whether the sample is homogeneous at the 1% precision level, and if so, at what confidence level.

40 Counting Statistics Analytical sensitivity is the ability to distinguish, for an element, between two measurements that are nearly equal. So here, at the 95% confidence level, two samples would have to differ in Si by > 0.20 wt% to be considered reliably different in Si.
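A sketch of that kind of two-sample comparison (invented measurement sets; the 0.20 wt% threshold above comes from the PfW printout, not from this code):

```python
from scipy.stats import ttest_ind

# Si wt% measurements from two samples (invented values):
sample_a = [18.85, 18.92, 18.88]
sample_b = [19.10, 19.05, 19.12]

t_stat, p_value = ttest_ind(sample_a, sample_b)
if p_value < 0.05:
    print(f"Different at the 95% level (p = {p_value:.4f})")
else:
    print(f"Not reliably different at the 95% level (p = {p_value:.4f})")
```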

41 Numbers of significant figures - 1
There have been cases where people have taken reported compositions (i.e. wt% elements or oxides) from probe printouts and then faithfully reproduced them exactly as they got them. Once, someone took figures that were reported to 3 decimal places and argued that a difference in the 3rd decimal place had some geochemical significance. The number of significant figures reported in a printout is a "mere" programming format issue, and has nothing to do with scientific precision! (However, a recently added feature of PfW is an option to output only the actual significant number of digits. This is not normally enabled.) Having said that, it is "tradition" to report to 2 decimal places. However, that should not be taken to represent precision without a statistical test, such as the one given before.

42 Numbers of significant figures - 2
In the example of the olivine analysis above, where the Si value printed out with several decimal places, tradition would have it reported to two; but looking at the limited number of analyses and the homogeneity tests, I would feel uncomfortable telling someone that another analysis somewhere between 18.6 and 19.2 wt% was not the same material. Nor would I be uncomfortable with someone reporting the Si as 18.9 wt% (though I stick to tradition). Considering silicate mineral or glass compositions, Si is traditionally reported with 4 significant figures. If we were to be rigorous regarding significant figures, we would follow the rule that we are bound by the least number of significant figures in a calculation where we multiply our measurement (the K-ratio: thousands of counts divided by thousands of counts) by the ZAF factor. As you can appreciate, there are many calculations within each part of the ZAF, and it would be stretching it to argue that the ZAF itself can have more than 3 significant figures. Ergo, we should not strictly report Si with more than 3 significant figures.

43 Numbers of significant figures - 3
When we enable the PfW Analytical Option "Display only statistically significant number of numerical digits" for the olivine analysis, here's the result, shown alongside the original (over-precise) printout for comparison.

44 Errors in Matrix Correction
The K-ratio is multiplied by a matrix correction factor. There are various models -- alpha factors, ZAF, φ(ρz) -- and versions of each. Assuming that you are using an appropriate correction type, there may still be issues with specific parameters, e.g. the mass absorption coefficients or the φ(ρz) profile. There is a possibility of error in certain situations, particularly for "light elements", as well as for compounds of drastically different-Z elements where pure element standards are used. The figure above shows that a small (2%) error in the mass absorption coefficient for Al in NiAl will yield an error of 1.5% in the matrix correction (Lifshin and Gauvin, 2001, p. 176). This is a strong incentive to use standards similar to the unknown, and/or to use secondary standards to verify the correctness of the EPMA analysis.

45 Standards In practice, we hope we can start out using the "best" standard we have.* There have been 2 schools of thought as to what the "best" standard is:
- a pure element, oxide, or simple compound that is pure and whose composition is well defined (examples: Si, MgO, ThF4). The emphasis is upon the accuracy of the reference composition.
- a material that is very close in composition to the unknown being analyzed, e.g. a silicate mineral or glass; it should be homogeneous and characterized chemically by some suitable technique (which could be EPMA using other trusted standards). The emphasis here is upon having a matrix similar to the unknown, so that (1) any potential problem with the matrix correction is minimized, and (2) any specimen-specific issues (element diffusion, volatilization, sub-surface charging) are similar in both standard and unknown and largely cancel out.
* This is based upon experience, be it from prior probe usage, from a more experienced user, from a book or article, or from trial and error (experience comes from making mistakes!). It is commonly an iteration, hopefully of not more than 2-3 rounds.

46 Standards - Optimally; "Round Robins"
Ideally, the standard would be stable under the beam and not alterable (e.g. oxidizable or hygroscopic) by exposure to the atmosphere. It should be large enough to be easily mounted, and able to be easily polished. If it is to be distributed widely, there must be a sufficient quantity and it must be homogeneous to some acceptable level. However, in the real world, these conditions don't always hold.
"Round Robins": on occasion, probe labs will cooperate in "round robin" exchanges of probe standards, where one physical block of materials is examined by several labs independently, each using its own standards (usually with some common set of operating conditions specified). The goal is to see whether there is agreement as to the compositions of the materials.

47 Sources for standards:
- Purchased as ready-to-go mounts from microscopy supply houses as well as some probe labs ($ )
Alternately, most probe labs develop their own suite of standards based upon their needs, acquiring standards from:
- Minerals and glasses from the Smithsonian (Dept of Mineral Sciences: Ed Vicenzi; free)
- Alloys and glasses from NIST (~$100 ea)
- Metals and compounds from chemical supply houses (~$20-60 ea)
- Specialized materials from researchers (synthesized for experiments, or starting material for experiments), both at the home institution and globally (some $, most free)
- Swaps with other probe labs
- Materials from your Department's collections, local researchers/experimentalists, the local rock/mineral shop (e.g., Burnie's), or national suppliers (e.g., Wards)

48 USNM Standards 1980: Gene Jarosewich, Joe Nelen and Julie Norberg at the Smithsonian Dept of Mineral Sciences (US National Museum) published the results of an effort to develop EPMA standards for minerals and glasses. Candidate materials were crushed and separated, then examined for homogeneity; once a homogeneous subset was found, it was analyzed by classical methods (wet chemistry) and then made available for distribution. This list included 26 minerals and 5 glasses. In 1983, Jarosewich and MacIntyre published data on 3 carbonate standards (calcite, dolomite and siderite), and in 1987, Jarosewich and White published data on a strontianite (SrCO3) standard. These are all available at no cost to probe labs, and they are excellent standards. Users must of course be aware that the "official value" represents a bulk analysis, and individual splits may be slightly different. One problem is the small size of many grains (~ mm).

49 Other Mineral Standards
In the 1960s, Bernard Evans developed a suite of silicate and oxide mineral standards (at UC Berkeley?) that were available for EPMA work. Some of these are still around. In 1992, McGuire, Francis and Dyar published a report evaluating 13 silicate and oxide minerals as oxygen standards; they included data for all elements. These are available from the Harvard Mineralogical Museum for a small cost (~$ ). Here in Madison, I have evaluated several minerals from the Mineralogy collection for standards and found some to be very good: cassiterite (SnO2), wollastonite (CaSiO3), Mg-rich olivine, and enstatite. Other minerals from Wards have been found to be useful (biotite and F-topaz). On the other hand, other efforts have been unsuccessful (e.g., ilmenite from Wards: zoned, with exsolution lamellae).

50 Synthesized Standards
1971: Art Chodos and Arden Albee of Caltech contracted Corning Glass to produce 3 Ca-Mg-Al borosilicate glasses (95IRV, W and X) containing a number of (normally) trace elements at the 0.8 wt% level, to be used as EPMA trace element standards. They are available now from the Smithsonian. 1971: Gerry Czamanske (USGS) synthesized 73 sulfides and 3 selenides/tellurides (for phase equilibria studies); some of these were made available to EPMA labs, and we have them here. 1972: Drake and Weill (U. Oregon) synthesized 4 Ca-Al silicate glasses, each with 3-4 REE elements. 1991: Jarosewich and Boatner published data on a set of 14 rare-earth (plus Sc and Y) orthophosphates (synthesized by Boatner); these are also available at no charge from the Smithsonian. (A recent study by Donovan et al. showed that many have some unreported Pb impurities.) Recently, John Hanchar (George Washington U) has been working on synthesizing zircon, hafnon, thorite and huttonite; some are now available as standards. There are other synthetic standards available, usually in limited quantities; one discovers these sources by "asking around". Skilled users who have experimental equipment can also make up compounds for difficult analyses (e.g. Al, Mg, Ti, Mn, where pure metal standards oxidize).

51 Evaluation of synthetic glasses
Recently, Paul Carpenter et al. did a rigorous evaluation of the 95IRV, W and X glasses. Shown here are the results for one of the glasses, 95IRW. This is a very valuable study, unusual in its thoroughness, as demonstrated in the X-ray maps, a few of which are shown here. The glasses have the trace oxides at ~0.8 wt%, with good homogeneity (ranges at the ppm level) for all but Cs, which has a much wider (1000 ppm) range. (From the Carpenter et al. NIST-MAS presentation, 2002.)

52 NIST Standards The National Institute of Standards and Technology (previously National Bureau of Standards) began to develop EPMA standards over 30 years ago. SRM = Standard Reference Material

53 NIST Standard SRM 482: Example... and problem
To the right is the documentation, as well as examples of the materials supplied when one purchases a NIST standard: here, a set of 6 wires in the Cu-Au binary. At the recent (April 2002) NIST-MAS workshop on accuracy in EPMA and the role of standards, Eric Windsor of NIST presented the results of a study of these Cu-Au standards. For some time, there had been reports of small levels of impurities in them. It turns out that micron-size Cu-oxides are present, and their abundance is a function of the type of surface preparation/polish. (From Eric Windsor, NIST-MAS presentation, 2002.)

54 Supply House Standards
Some pure elements and compounds purchased from chemical suppliers may be good EPMA standards. However, it pays to be careful and test them thoroughly. Many materials are processed such that two phases are present, even though they are "certified" as one phase. Suppliers get away with this "error" because one of the phases is an oxide of the other, and the compositions are stated to be pure to some level (e.g. 99+% on a metals basis). This can in fact be a benefit, providing 2 standards in one, so long as the second phase is easily distinguished.
- Cr2O3 (99.7%) turned out to have small Cr blebs
- CuO (99.98%) grains turned out to have cores of Cu2O
- Cr fragments and Re and Ir rods seem to be pure
- MgAl2O4, FeTiO3 and MnTiO3 (99.9%) were not homogeneous

55 How do you evaluate your Standards?
The traditional answer is that you decide your standards are "good" by testing whether they give you the answers you think you should be getting, i.e. you run other standards as secondary standards and see if you get the correct compositions for them (optimally, ones that were not used in the calibration). This is done one by one, comparing one pair of primary and secondary standards at a time. However, we now have a powerful, rapid technique that compares the functioning of several standards against each other at the same time: e.g., you acquire Si counts on your forsterite, fayalite, plagioclase, pyroxene, garnet, and sillimanite standards. You can then plot the "official" compositions against the count rates, adjusted for the matrix effects in each standard. If they all plot on a straight line*, then they are all good. If one falls off the line, there is a good chance something is amiss with it (it could have a slightly different composition from the "official" value). I suggested to Donovan that this would be a useful addition to the Probe for Windows software 2 summers ago, and he soon developed the "Evaluate" program.
* The line is pinned at the high end by the standard with the highest concentration of the element in question (which could be the pure element or oxide), and should go through the (0,0) origin at the low end.
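A sketch of that consistency check, not the Evaluate program itself; the standard names, wt% values, and count rates are all invented for illustration:

```python
import numpy as np

# "Official" Si wt% and matrix-adjusted count rates (cps) for several
# silicate standards -- all numbers invented for illustration:
standards = {
    "forsterite":   (19.6, 4400.0),
    "fayalite":     (13.8, 3050.0),
    "plagioclase":  (25.2, 5660.0),
    "wollastonite": (24.2, 5420.0),
}

wt  = np.array([v[0] for v in standards.values()])
cps = np.array([v[1] for v in standards.values()])

# Least-squares slope of a line forced through the (0,0) origin:
slope = (cps @ wt) / (cps @ cps)

for name, (w, c) in standards.items():
    residual_pct = 100 * (w - slope * c) / w
    flag = "  <-- check this standard" if abs(residual_pct) > 1.0 else ""
    print(f"{name:12s} off the line by {residual_pct:+.2f}%{flag}")
```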

56 "Evaluate" Standards (Plot annotations: best fit; max count forced through (0,0).) Here 2 standards (Al-Fe-Si alloys) synthesized by Fanyou Xie (MSAE) are plotted, with Si defined by #1100, topaz (Al2SiO4F2). Note that #614 is above the line, suggesting its real composition may be higher (a shift to the right). Al can be a problem (oxide layer). Here I was testing std #9979, an Al-Mg alloy (98 wt% Al), and #9978, an Al-Si alloy (99 wt% Al), against other standards, including #13 (Al2O3) and an Mg-Al alloy (#8903). Fanyou's standards are better for unknowns with ~60 wt% Al.

57 “Evaluating” Silicate Standards
(Plot: Si standards SiO2, CaSiO3, NaAlSi3O8, and topaz evaluated against each other.)

58 Virtual Standards Occasions arise when there is no standard available, for one reason or another. Above is a case where a low total in a specimen led to a search for the missing elements; after some legwork, it was learned that the specimen had been produced by sputtering in Ar, and a wavescan showed an Ar Kα peak. However, I had no Ar standard. This led to discussions with John Donovan, and he subsequently developed the "Virtual Standard" routine now in PfW.

59 Summary: How to know if the EPMA results are “good”?
There are only 2 tests to prove your results are "good" -- actually, it is more correct to say that if your results pass the test(s), then you know they are not necessarily bad analyses:
(1) 100 wt% totals (NOT 100 atomic % totals). Typically, totals in a narrow band just below and up to slightly above 100 wt% are considered "good" for silicates, glasses and other compounds. The band extends a little on the low side to accommodate the small amount of trace elements realistically present in most natural (earth) materials. These analyses typically "do oxygen by stoichiometry", which can introduce some undercounting where the Fe:O ratio has been set to a default of 1:1 and some of the iron is actually ferric (Fe:O = 2:3). So for spinels (e.g. Fe3O4), a perfectly good total could be 93 wt%.
(2) Stoichiometry, where such a test is valid (e.g. the material is a line compound, or a mineral of set stoichiometry).
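A quick arithmetic check of that magnetite example, treating all the Fe in Fe3O4 as FeO and seeing what total results:

```python
FE, O = 55.845, 15.999      # atomic weights (g/mol)

# Magnetite, Fe3O4: 3 Fe are actually bonded to 4 O per formula unit.
true_mass = 3 * FE + 4 * O            # ~231.5 g/mol

# Oxygen by stoichiometry with a default Fe:O of 1:1 assigns only
# 3 O to those 3 Fe, i.e. it reports all the iron as FeO:
reported_mass = 3 * (FE + O)          # ~215.5 g/mol

print(f"total = {100 * reported_mass / true_mass:.1f} wt%")  # ~93.1 wt%
```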

60 Checking our olivine analysis
The total is excellent (very close to 100 wt%). The stoichiometry is pretty good (not excellent): on the basis of 4 oxygens there should be 1.00 Si atom, and ours is slightly off. The total cations Mg+Fe+Ca+Ni should be 2.00, and we have 2.03. The analysis is OK and could be published. If this were seen at the time of analysis, it might be useful to recheck the Si and Mg peak positions and reacquire the standard counts for Si and Mg. If this were only seen after the fact, you could re-examine the standard counts and see whether there are any obvious outliers that were included and could be legitimately discarded.
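A sketch of that cations-per-4-oxygens check; the oxide wt% values are invented to resemble a forsteritic olivine, and the normalization scheme is the standard one rather than PfW's exact code:

```python
# (molecular weight, cations, oxygens) per oxide formula unit:
OXIDES = {
    "SiO2": (60.08, 1, 2),
    "MgO":  (40.30, 1, 1),
    "FeO":  (71.84, 1, 1),
    "CaO":  (56.08, 1, 1),
    "NiO":  (74.69, 1, 1),
}

analysis = {"SiO2": 40.5, "MgO": 49.5, "FeO": 9.0, "CaO": 0.2, "NiO": 0.4}

mol_cat = {ox: wt / OXIDES[ox][0] * OXIDES[ox][1] for ox, wt in analysis.items()}
mol_oxy = sum(wt / OXIDES[ox][0] * OXIDES[ox][2] for ox, wt in analysis.items())

scale = 4.0 / mol_oxy                  # normalize to 4 oxygens (olivine)
si = mol_cat["SiO2"] * scale
m_site = sum(v * scale for ox, v in mol_cat.items() if ox != "SiO2")

print(f"Si = {si:.2f} (ideal 1.00), Mg+Fe+Ca+Ni = {m_site:.2f} (ideal 2.00)")
```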

