Foundations for the critical discussion of analytical methods


Spectrochimica Acta, Vol. 33B, pp. 551 to 576. Pergamon Press Ltd 1978. Printed in Great Britain.

HEINRICH KAISER*

Abstract: In the following chapter the principles pertaining to chemical analysis are critically observed, partially from a new point of view. Therefore, some of the concepts, relationships, and words used may appear unfamiliar to the reader accustomed to the normal chemical literature. However, the adequate presentation of the ideas has required the use of special terms. Their choice was not made haphazardly from a stock of appropriate synonyms, but by careful selection from mathematics, physics, and from the colloquial language. To assist the reader to grasp the rather abstract ideas and to stimulate his imagination, words with somewhat "plastic" meanings were chosen. However, they have been given a precise meaning in this text. It is best to take them literally in their original sense and to transfer their understanding as far as possible into the field of the new relationships.

1. BASIC CONCEPTS

1.1. Introduction

Chemical analyses are information processes, planned (in most cases fairly well) with the aim of obtaining knowledge about the composition of the substance under investigation. For the assessment and evaluation of such processes and of the information provided by them, two essentially different aspects must be considered: content and form. A critical discussion of the content is possible only in the context of a given individual case, rather than as a generality: for instance, the content of a formally simple analysis may be of utmost importance in a trial in a court of law, for commercial negotiations, or for the solution of a scientific problem. However, information processes and the information arising from them always have a formal structure, upon which the type, the scope and the reliability of the obtainable information depend; these questions can be treated in a general way within the framework of a theory. When such a theory is applied to an individual case, it may produce the basis for assessment of the content and the importance of the reported results. The formal structure of an information process also determines the structure of the information, in the sense of knowledge, which is produced by this process. For chemical analyses this means: the questions of importance regarding the type, scope, and reliability of analytical information are the questions of accuracy, precision, power of detection, sensitivity, specificity and selectivity, all of which relate to the analytical procedure as such. All these concepts are used here in their colloquial sense; exactly defined, they lead to figures of merit for analytical procedures. Only from these is it possible to arrive at a critical conclusion regarding analytical results which have been produced by a definite analytical procedure (cf. Section 2.6).

1.2. Concept of a "complete analytical procedure"

Figures of merit in information theory must be capable of being stated in an objective way. They can, therefore, be given only in relation to concrete analytical procedures, not to general analytical principles, such as titrimetry, spectrometry, polarography, neutron activation, etc. Figures of merit for an analysis always relate only to a definite complete analytical procedure, which is specified, in every detail, by fixed working directions (order of analysis) and which is used for a particular analytical task.

* Reprinted from Methodicum Chimicum (Edited by FRIEDHELM KORTE), Vol. 1, Analytical Methods, Part A: Purification, Wet Processes, Determination of Structure. Academic Press, New York, San Francisco, London / Georg Thieme Verlag, Stuttgart (1974), with permission of the Publishers.


For a complete analytical procedure, everything must be predetermined: the analytical task, the apparatus, the external conditions, the experimental procedure, the evaluation and the calibration. If any feature is altered, a different analytical procedure results. If, for instance, quartz vessels are used instead of glass vessels, or if normal distilled water is replaced by doubly-distilled water, or if one moves to another laboratory with air conditioning and clean air, then one may have a new (and probably better) analytical procedure, in spite of the fact that the general mode of operation has remained unchanged. One of the most effective means of achieving higher precision in chemical analysis is to take the average of the values from a number of repeated analyses. However, even this alteration of the evaluation process constitutes a change in the analytical procedure. The same would apply if the measurement period were extended (equivalent to better time averaging), for example in X-ray fluorescence analysis.
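The gain from averaging can be made concrete. The following is a minimal sketch in Python (the numbers are invented for illustration and do not come from the text): for independent random errors with single-determination scatter s, the average of n repeated analyses has a scatter of s/sqrt(n), so quadrupling the number of repeats halves the random error of the result.

```python
import statistics

def mean_with_standard_error(values):
    """Average of repeated analyses and the standard error of that average.

    For independent random errors the standard error shrinks as 1/sqrt(n):
    averaging four repeats halves the random scatter of the reported value.
    """
    n = len(values)
    mean = statistics.fmean(values)
    s = statistics.stdev(values)   # scatter of a single determination
    return mean, s / n ** 0.5      # scatter of the averaged result

# Invented repeated determinations of the same content (arbitrary units):
repeats = [10.2, 9.8, 10.1, 9.9, 10.3, 9.7, 10.0, 10.0]
mean, se = mean_with_standard_error(repeats)
print(f"mean = {mean:.2f}, standard error = {se:.2f}")
```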

1.3. Structure of analytical procedures, classes of measurable quantities

Chemical analytical procedures are a special group of technical and scientific measurement procedures. Very little attention has hitherto been paid to their logical structure, because interest has been concentrated on the chemical reactions and the operations to be carried out. Any measurement procedure is embodied in the experimental setup, the instruments, substances, etc., and also in the relevant directions, which must specify the procedure in every detail. If a measurement procedure is employed in a particular case, it produces a measured value x (sometimes called a measure for short) for the measurable quantity in question. The aim of the measurement is to find the true value of the measurable quantity. However, the "measure" may deviate to some extent from the unknown true value. The difference (measure minus true value) is the error of this particular experimental result. It would be possible to determine how large or small the error is in a particular case only if, in addition to the measured value, the true value were also known in some way. Such a case is, however, only of theoretical interest, since a measurement is normally made because one wishes to know the so far unknown true value of the measurable quantity.

Chemical analyses are carried out in order to determine the concentrations (symbol: c) of the substances under study, or their absolute quantities (q), and often the ratios (c_r or q_r) of such quantities. Various units for these quantities are used, depending on circumstances and practice; it is, therefore, essential always to indicate which units are used, in order to avoid misunderstandings. (Symbols in formulae stand for the measurable quantities, and always denote the product of a numerical value and the appropriate unit.) This class of measurable quantities is the class of the target quantities of the analysis. To make the following considerations more concrete, we shall apply the term content to all target quantities, and will use the symbol c for them.

Chemical contents cannot be measured directly; their numerical values, the "results of an analysis", are derived from measurements of other quantities such as weight, volume, current strength, refractive index, absorbance (extinction in German), intensity of a spectral line, pulse rate, photographic blackening, etc. For these measurable quantities, which relate to the very core of a procedure, the term measure (symbol: x) should be reserved in its specific sense. What may be considered as the "core" of an analytical procedure is often indicated by the type designation; for instance, one speaks of gravimetric, polarographic, chromatographic, spectrochemical, and neutron activation analysis.

The measurable quantities which are of decisive importance in analysis are nowadays almost exclusively physical in character. However, in many cases these measures are not observed directly, but are derived from the "reading" of an indicating instrument. Very often this is a geometrical quantity, for instance the position of a needle, or a recorder trace, or the print-out from a digital voltmeter, etc. This is so obvious that one often overlooks the fact that the "indicated quantities" are necessary intermediate steps, mainly because most instruments are calibrated to give numerical values of the important physical quantities.


For this reason, the intermediate steps, such as reading the position of a needle, will not be considered separately in the following text. It must not, however, be forgotten that indicating instruments may limit the power of a measurement procedure by having too low a precision, too restricted a range of measurement, or too high a "reading threshold".

Besides the two quantitative classes of decisive importance considered thus far, the contents c and the measured quantities x (including the readings, etc.), there is a third class of quantities, the parameters (symbol: r) of the analytical procedure. The parameters (e.g. temperature, pressure, concentration of chemical reagents, pH value, wavelength in a spectrum, time) determine the conditions under which the measurements are made during the analysis. The values of the parameters can be measured; they are often given by the external conditions (e.g. room temperature, atmospheric pressure), and in many cases they can be preset (e.g. pH value, reaction temperature, wavelength). Parameters may be adjusted to fixed values, or they may be varied over wide ranges during an analysis. Unnoticed changes of experimental parameters are often the cause of analytical errors which may appear to be accidental, but which in fact are systematic in nature and can, therefore, be corrected for if the appropriate parameter is measured (e.g. fluctuations in temperature, varying background in a spectrum). This is often disregarded; consequently, all directions relating to procedure should state specifically which parameters are important, and to which values they should be adjusted. The various quantities, contents c, measures x, and parameters r, are interdependent through a system of empirical correlations, theoretically derived functions, systems of equations, diagrams, etc.
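How a measured parameter turns an apparently accidental error into a correctable systematic one can be shown in a few lines. A hedged sketch (Python; the temperature coefficient and all numbers are hypothetical, chosen only to illustrate the idea of a parameter correction):

```python
def correct_for_parameter(x_measured, r_measured, r_reference, coeff):
    """Correct a measure x for the drift caused by a parameter r.

    Assumes the systematic influence of the parameter is known and linear,
    with coeff = dx/dr (a hypothetical value in this sketch).
    """
    return x_measured - coeff * (r_measured - r_reference)

# A reading taken at 23.4 degrees C with the procedure calibrated at 20.0,
# assuming a sensitivity of +0.015 measure units per degree:
print(correct_for_parameter(12.47, 23.4, 20.0, 0.015))  # 12.419
```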

2. ASPECTS OF THE ASSESSMENT OF ANALYTICAL PROCEDURES

2.1. Calibration

The relationship between measures x (weight, volume, current strength, intensity, etc.) and the desired values for the contents c is described by the "analytical function", which must be regarded as an essential component of an analytical procedure. If this function is very simple, for instance if it is given by a stoichiometric conversion factor, its involvement may go unnoticed. However, many problems of chemical analysis cannot be understood correctly if one forgets that the link between the measure and the content is always the analytical function. If one wishes to rely upon stoichiometric relationships or upon theoretical derivations, it is essential to have tested whether the reactions which have been presupposed as the basis of the calculations proceed completely and undisturbed. This is why, in publications on new analytical procedures, tables are very often given comparing the "given" chemical contents with those "found" by analysis. When there are systematic deviations between the "given" and the "found" values, these can be corrected by establishing a so-called "empirical factor". This procedure is none other than a calibration of the analytical procedure with standard samples of known composition, even if not so named.

Calibration is a necessary operation when building up a quantitative analytical procedure. There are no analytical procedures which are "absolute" in the true sense of this word, even if some are so called. To analyse absolutely, i.e. without any presuppositions, it would be necessary to identify the atoms and the molecules of the substances to be determined, to sort them out, and to count them individually and completely. During any analysis, the sample under investigation is compared with the standard samples which were used to calibrate the analytical procedure. This is done by using the relevant "analytical function" (which sometimes may be given only by a conversion factor). The calibration of many of the common, well-proved procedures may have taken place a long time ago; the author and his paper may be forgotten; his results are, however, part of the anonymous treasure of experience in Analytical Chemistry.

Every analysis, therefore, contains comparison as a basic operation in its structure. However, comparison of different objects is possible only if they are in some way "alike"; this general concept of likeness must be specified for each particular case. The samples submitted for analysis, and the standard samples used for calibration of the analytical procedure, must be of the "same kind" with respect to all operations, reactions and measurements to which they are submitted during the course of the "particular complete analytical procedure": they must belong to the same "family" of analytical samples.
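The pair of functions involved in any calibration, x = g(c) determined from standard samples and the analytical function c = f(x) used on unknowns, can be sketched as follows (Python with NumPy; the standard samples and the assumed linear form of g are invented for illustration):

```python
import numpy as np

# Contents c of synthetic standard samples and their measured values x,
# i.e. sampled points of the calibration function x = g(c) (invented data):
c_std = np.array([0.0, 1.0, 2.0, 4.0, 8.0])
x_std = np.array([0.02, 0.21, 0.39, 0.80, 1.58])

# Assume g is linear and fit x = a*c + b by least squares; averaging over
# many calibration measurements suppresses their accidental errors.
a, b = np.polyfit(c_std, x_std, 1)

def analytical_function(x):
    """The analytical function c = f(x), the inverse of the fitted g."""
    return (x - b) / a

print(analytical_function(0.50))  # content of an unknown with measure 0.50
```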

2.2. Classification of calibration methods

The method by which an analytical procedure is calibrated is an important characteristic for its assessment. The accuracy (see Section 2.6) of the analytical results, the range of applicability of the procedure, and the instrumental and running costs are partly dependent on the method of calibration which was used. The various methods for calibration of analytical procedures may be classified in order of decreasing effectiveness; in the following text, they are indicated by small Greek letters.

2.2.1. Complete calibration methods. 2.2.1.1. σ-Calibration (with synthetic standard samples). Standard samples* which have been synthesized from pure substances, together with the calibration functions based on them, are in their entirety the supporting structure of analytical chemistry. During the course of a long historical development, these samples have been made more and more reliable by many cross-checks, controls and corrections. Today, we are able to prepare many substances in a pure state as starting materials for the calibration of analytical procedures; in particular, we have learned to determine quantitatively how pure these substances are. When it is possible to prepare standard samples in a reliable way, for a particular analytical procedure, from pure substances, then an analytical function can be established which has no bias; subsequent analytical results will, consequently, be free from systematic errors. Since the true contents of the standard samples are known from their composition, they can be directly correlated with the measured quantities; this allows the inevitable accidental errors of the individual calibration measurements to be eliminated by taking the average of a large number of such measurements.

σ-calibrations are found everywhere in the practice of chemical analysis; many of the stoichiometric conversion factors in the literature have been determined in this way, mostly, however, for ideal analytical conditions. Standard samples for σ-calibrations can be prepared with reasonable expenditure only for relatively simple analytical problems, for which the type of the sample and of the components to be determined is known. Furthermore, only a few components should be involved, or, if many components are involved, lower analytical precision should be regarded as adequate. Examples are: calibration solutions for flame spectrometric methods, mixtures of gases, and simple metallic alloys with known and clear phase diagrams.

In analytical organic chemistry, and particularly in biochemistry, it may be difficult to obtain the starting materials. If traces of contaminants are to be determined in substances of high purity, it may be impossible to obtain these substances free from these contaminations; they cannot, therefore, be used to synthesise σ-standard samples. Even when synthetic samples have the correct composition, it may not be feasible to transfer them into the same physical or chemical state as the samples for analysis; this means that the standard samples and the analytical samples are not of the same kind with respect to the analytical procedure, and cannot, therefore, be directly compared.

* The term "standard sample" follows the present official nomenclature. However, there is a strong movement to restrict the use of the term "standard" to samples which have been officially analyzed and issued by some authoritative organization. It has been proposed to use the term "reference sample" or "calibration sample" for material coming from other sources without an official certificate of composition.


In practice, direct σ-calibration of a complex analytical procedure is the exception rather than the rule. "Accurate" analytical results are, however, to be expected only if they relate directly or indirectly to σ-calibrations. This is achieved with the next class, α-calibration.

2.2.1.2. α-Calibration (with analysed standard samples). This type of calibration is especially important for analytical procedures used for complex analyses of large series of similar samples, for instance for production control in steel mills. α-calibration proceeds in a way converse to synthetic calibration; it starts with the selection of a set of homogeneous samples which cover the whole range of the compositions in question. Tests must be carried out to determine whether the desired range is covered, and whether these selected samples are of the "same kind" as the other samples, i.e. belong to the same "family". Such tests can often be carried out by using the measured values given by the uncalibrated procedure, or by performing auxiliary experiments (e.g. investigation of the crystal structure). In other cases a thorough knowledge of the task may be helpful.

The second step of an α-calibration is the analysis of the selected samples by another analytical procedure which has been σ-calibrated. There are two different ways of achieving this goal: either all samples involved must be treated in such a way that ultimately they are of the same kind with respect to the analytical procedure used for the σ-calibration, or the total analytical procedure must be split up into a number of different parts. The first way is feasible only with relatively simple analytical procedures. For instance, metallic samples which are to be used as standards for rapid spectrochemical analysis of solid cast samples may be dissolved and then compared with σ-standard solutions made from pure salts of the elements to be determined. The second way involves more work, but is more practicable. The sample for analysis is selectively subdivided into portions of different composition by a series of separation operations, in such a way that each of the final preparations contains only one or a few of the components. Their contents can then be determined by relatively simple reactions. At the end of such an analytical procedure, the σ-calibrations for the different components occur independently of each other. (A prototype of such a procedure is the classical inorganic analysis by a sequence of precipitations.) The topological scheme for such a multi-branched method of analysis is the "tree" (see 2.3.1, Fig. 1.A).

It may be very difficult to get reliable results from a highly branched method of analysis. Incomplete separations may lead to systematic errors; and the random errors of the result will, in the end, become systematic analytical errors of the subsidiary α-calibrated procedure. This possible transfer of errors to all subsidiary analyses requires the utmost care. Checks by parallel analyses, using other procedures and in other laboratories, must be made. Analysed standard samples of good reliability are generally expensive, and are commercially available only for the most important technical analytical purposes. If standard samples are required for the determination of chemical compounds rather than elements, it may be very difficult or even impossible to design separation processes for the analysis of these standard samples such that the analytical information is not partly lost during the course of the operations.
This is especially true for sensitive organic compounds and natural products, which are easily destroyed. In order to draw conclusions about what the original substance may have been, one must try to obtain as many relationships as possible between the observed fragments and the measured quantities, in the same way as when elucidating a chemical structure. The topological scheme for such an analytical procedure is the "network" (see 2.3.2, Fig. 1.N).

2.2.1.3. δ-Calibration (by differential additions). In many cases, an analytical procedure can be calibrated by adding small but known amounts of the component to be determined to the sample undergoing analysis. This is like scanning the analytical function in small differential steps. In order to obtain a smooth and simple function, it is necessary to make a suitable choice for the measurable quantity, and to eliminate the influence of interfering parameters by applying corrections (e.g. corrections for blank values in chemical reactions, background in a spectrum, distortions of recorded traces). If the result is a calibration function with an easily apparent form, then it may be possible to extrapolate this function beyond the range which was covered by the additions, and thus to determine the unknown content which was originally present in the analytical sample. This δ-calibration procedure, sometimes called the addition method of calibration, is the only one which allows quantitative determination of very small trace amounts when the basic material of the analytical sample cannot be obtained completely free from impurities [1, 2].

The δ-calibration procedure presupposes that the added amount of the component to be determined behaves analytically in the same way as that part of the component which was originally present in the sample (the standard sample and the analytical sample must be of the same kind). An example of how difficult the task can be is the analysis of hard refractory ceramics, such as Al2O3, for traces of alkalis. The alkali atoms which are added do not enter the crystal lattice; the atoms in the lattice do not come out [3]. If the sample is dissolved or melted, contamination or losses are probable, and, if the sample is diluted during the course of such operations, the detection limit for traces may become undesirably high.
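The extrapolation that underlies δ-calibration can be sketched briefly (Python with NumPy; a linear response and all numbers are assumed for illustration): the measure is recorded for the untreated sample and after each known addition, a straight line is fitted, and extrapolating that line back to zero measure yields the content originally present.

```python
import numpy as np

# Known added amounts of the component to be determined, and the measures
# observed; the first point is the sample without addition (invented data):
added  = np.array([0.0, 1.0, 2.0, 3.0])
x_meas = np.array([0.42, 0.61, 0.79, 0.99])

# Assume x = a * (c0 + added): fit a straight line in the added amount.
a, b = np.polyfit(added, x_meas, 1)

# The line crosses x = 0 at added = -c0, so the content originally
# present in the analytical sample is:
c0 = b / a
print(f"original content = {c0:.2f}")  # about 2.2, in the units of 'added'
```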
2.2.1.4. θ-Calibration (on a theoretical basis). With some reservations, it is also possible to include in this group of complete calibration methods those which derive the contents to be determined from the measured quantities by using quite general principles and known data. Such laws include, for instance, the law of mass action, the Lambert-Beer law, the Boltzmann distribution, etc. General laws are, of course, idealizations; they are valid only if definite assumptions are correct. In most cases they have been found and verified only under special, very pure experimental conditions. "Pure" conditions are, under the practical conditions of chemical analysis, nearly a contradictio in adjecto. Interfering factors of all kinds must be taken into consideration. Factors of yield or activities demonstrate clearly that pure theory is often not sufficient. Even in cases where a theoretically derived relationship has been established and has long been proved, its validity has in most cases been verified by the use of synthesized standard samples (for instance for many analytical reactions which follow the theoretically predicted stoichiometric relationships). However, θ-calibrations in the two meanings of this term are of importance for new analytical procedures. They provide a first approximation, until they are either verified or replaced by σ-calibrations. They often lead to analytical results which are at least almost correct; in this sense, θ-calibrations represent a transition to the next group.

2.2.2. Abridged calibration methods. 2.2.2.1. κ-Calibration (by convention). Very often the results of chemical analyses provide a basis for decisions in industry, commerce, or in legal affairs. In such cases, the chemical composition of a substance may be of interest only in so far as it determines the required qualities. When technical requirements as to quality and the conditions of delivery are fulfilled, and the prescribed tolerances are observed, then one may be able to manage without knowing the "true contents". On the other hand, it may be necessary to state differences or ratios exactly, in order to ensure that quality specifications are fulfilled, or to decide a dispute. In such cases, the partners must agree, by convention, on a procedure which leads to directly comparable results. There are two possibilities for reaching such a procedure: either one agrees upon an "umpire assay", including fixed calibration factors, or one calibrates the analyses by using "official standard samples". The convention is then to accept the results of such umpire analyses as correct within the limits set by the inevitable random errors. This assumption may be true in many cases; an effort is made to select optimal procedures for arbitration and optimal standard samples; however, it is essential that one need not check the validity of the assumption for each single case, but rather may appeal to the convention.

[1] G. EHRLICH and R. GERBATSCH, Z. Anal. Chem. 209, 35 (1965).
[2] F. ROSENDAHL, Spectrochim. Acta 10, 201 (1957).
[3] H. WAECHTER in Reinststoffprobleme (Edited by E. REXER), Bd. III, Reinststoffanalytik, p. 245. Akademie, Berlin (1966).


2.2.2.2. β-Calibration (broad-band calibration). This term is used for analytical procedures for which it is known in advance that the calibration will not give a high analytical precision. Instead of a calibration curve (with relatively small scatter), the relationship between measurable quantity x and content c (see 1.3) is represented by a broader band. One may be compelled to forego analytical precision because this is no longer attainable; there are, however, many analytical problems whose solution does not require very high precision. Examples include many procedures for survey analysis, where classification of the contents according to their orders of magnitude may be sufficient. Such analytical procedures are sometimes termed "semi-quantitative". The costs of instruments and time are generally much smaller than with highly refined precision procedures. Furthermore, the problem of ensuring that the analytical samples and the standard samples are of the same kind is considerably simplified. In the broad band which represents the calibration function, the systematic errors (see 2.6) which are due to the different composition of the samples disappear at least partially; they can now be regarded as random errors caused by accidental differences in the composition of the samples. This opens the way to "universal analytical procedures" [4]. The smaller the requirements for precision and limits of detection are, the "more universal" such procedures may be as regards the nature of the samples.

2.2.3. Calibration with auxiliary scales. For all the classes of calibration methods mentioned above, it is, in principle, possible to give the "target quantities" of the analysis (contents, concentrations, absolute quantities) in SI units, i.e. in kg, m and mol. However, this presupposes that the nature of the substances to be determined is known, i.e. that their density or their molecular weight can be determined. If this is not the case, units or suitable scales must be introduced in an ad hoc manner.

2.2.3.1. ω-Calibration (with agreed units). ω-calibrations are often found in biochemical or pharmacological analyses. The interest may, for example, be in a substance such as a hormone, an enzyme or a poison, principally as regards its biological effect, which is usually dependent on the quantity of substance present. In such cases, one attempts to find an analytical measure, for instance the color or turbidity of a solution, which can be attributed to the active component of the substance. In order to be able to derive from such an observation a useful value of the quantity present, the procedure must be calibrated with some "standard sample", whose content has been determined in some arbitrarily fixed unit derived from the observed biological effect. Of this type are, for instance, the international units of antibiotics, or the LD50 values (lethal dose) used in the case of poisons. The logical structure of the ω-calibration, therefore, corresponds to an α-calibration, with the difference that the basic unit is specific and valid only for the substance in question.

2.2.3.2. τ-Calibration (with technical scales). In calibration with technical scales, no attempt is made to give any value for contents; instead, the analytical measures are directly correlated with characteristics or properties of technical or practical importance. This is possible when raw materials, chemicals, metallic alloys, etc. are to be classified. In general, only a very rough subdivision into different classes is necessary, i.e. a broad-band calibration is involved.

2.2.4. Stability of calibration. The question of how long and under what circumstances the calibration values derived for a definite analytical procedure remain valid has always been of importance for the critical assessment of a procedure and of the analytical results produced by it. Recently this question has re-emerged in a hidden form, with the catch-phrase "inter-laboratory reproducibility" which is used in the statistical treatment of series of analyses.

[4] CH. E. HARVEY, Semiquantitative Spectrochemistry. Applied Research Laboratories, Glendale, California (1964).


Obviously it has not hitherto been realized that this is not so much a question of analytical precision, but rather a question of successful calibration. Stability of calibration involves two questions. How "robust" is the structure of an analytical procedure towards variations of the experimental parameters? To what extent has one control over an analytical procedure, as regards theory and technique, in order to keep the experimental conditions constant, or to take into account the influence of changing parameters by suitable measurements and corrections?

Let us consider a system of separated analytical functions x_i (i = 1, ..., n), which may also be dependent on various parameters r_h, h = 1, ..., j; then a procedure may be termed "robust" if the partial differential quotients

$$\frac{\partial x_i}{\partial r_h}, \qquad i = 1, \ldots, n; \quad h = 1, \ldots, j$$

are small throughout the whole range of application; at best, they would be zero. This may be achieved most nearly with analytical procedures whose structure is simple, and in which only a few variables, a few parameters, and relatively simple operations are involved. Complex analytical procedures are in most cases not "robust", because not all sensitivities towards parameter variations, $\partial x_i / \partial r_h$, can be made small simultaneously and throughout the whole range. For assessment of analytical procedures as regards stability, it is useful to distinguish three degrees of calibration stability.

2.2.4.1. Perfect calibration (p). The highest degree of calibration stability exists if it is possible to describe the analytical procedure sufficiently precisely, i.e. to prescribe every detail, so that it can be reproduced anywhere and at any time in such a way that the calibration function, once established, can be accepted. Analytical procedures with such a generally transferable calibration should be called perfectly or permanently calibrated (symbol: p). Perfectly calibrated in this sense are many of the classical precipitation reactions, as well as analytical procedures using titrimetry, coulometry and spectrophotometry. Most of these are relatively simple procedures as regards task and operation. A complex analytical procedure which is composed of perfectly calibrated subsidiary procedures may not as a whole be perfectly calibrated, because the transitions between the component parts of the total procedure (for instance preliminary separations) may not be totally reproducible. On the other hand, it should be mentioned that very complex analytical procedures do exist which can be perfectly calibrated.

2.2.4.2. Fixed calibration (f). The second degree of calibration stability exists when the calibration values, once derived, cannot be generally transferred, but remain valid for a definite experimental arrangement, a type of instrument or even an individual instrument. A necessary condition for this degree of stability is that attention is paid to the "environmental parameters" (pressure, temperature, humidity and cleanness of the air), and also to the identity of chemical reagents originating from different sources. Analytical procedures for which the calibration validity is restricted to definite conditions and experimental arrangements should be termed firmly calibrated (symbol: f) or "fixed" in calibration. Most analytical procedures used in routine production control are firmly calibrated. In production control laboratories where many analyses are made with firmly calibrated procedures, it is the custom to check the calibration from time to time, for instance every day, with the aid of analysis control samples. This must be considered as a summary inspection of the procedure and the apparatus, but not as a "control calibration". A new calibration requires much more work, and should not be necessary for a procedure with sufficient calibration stability.
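The robustness criterion of Section 2.2.4 can be probed numerically. A sketch (Python; the model function and its parameters are hypothetical): the partial differential quotients are approximated by finite differences around the working point, and small values over the whole working range indicate a robust procedure.

```python
def sensitivity(g, r, h, step=1e-6):
    """Finite-difference estimate of the partial quotient dx/dr_h of a
    measure x = g(r) with respect to the h-th parameter."""
    r_up = list(r); r_up[h] += step
    r_dn = list(r); r_dn[h] -= step
    return (g(r_up) - g(r_dn)) / (2 * step)

# Hypothetical model: the measure depends on temperature T and pH.
def g(r):
    T, pH = r
    return 0.95 + 0.002 * (T - 20.0) - 0.010 * (pH - 7.0) ** 2

working_point = [20.0, 7.0]
for h, name in enumerate(["T", "pH"]):
    print(name, sensitivity(g, working_point, h))  # small at this point
```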


2.2.4.3. Calibration linked to "leader standard samples" (l). For many types of analysis, it is not possible to keep the experimental conditions constant over a relatively long time. To calibrate such procedures of limited stability, it is necessary to include measurements on the standard samples in each particular analysis; these standard samples lead the analysis samples through the whole procedure, and are called leader standard samples, the analytical procedures being designated "calibrated with leader samples" (symbol: l).

2.2.4.4. Uncertain calibration (u?). Characteristic difficulties may be observed when new types of analytical procedures are being developed; at this stage, the new procedures are not yet "complete". The cause of large variations in the calibration function is generally some kind of factor concerned with yield, which depends greatly on the experimental conditions. What is missing, then, is a reference quantity which runs through the whole procedure and which is influenced in the same way as the quantity to be measured. A classical example of a procedure without a definite calibration function is the "ultimate line" technique of DE GRAMONT, used in spectrochemistry. It has been superseded by the "homologous line pair" technique of GERLACH, in which the analysis line is compared with one or more lines of a reference element [5].

2.3. Topology of analytical procedures

The logical interrelation of operations and decisions which, in the course of an analysis, lead from the analytical sample to the analytical result, can be represented in topological diagrams (Fig. 1). These diagrams are called "topological" because only the general schemes of the interrelations are given, without any metrical aspect.

2.3.1. Simple and branched structures. The topological tree (A) gives the interrelation of classical analytical procedures, characterized by their sequence of separations. Starting from the stem, the analytical sample, the procedures branch increasingly until finally, at the end of the last twigs, the measured values for the particular components or elements to be determined are found. The structure of the "tree" demonstrates clearly how, in this case, the procedure as a whole is composed of individual, mostly simple, analytical operations (see 2.2.4.1).

The topological structure of a bundle (B) is characteristic of all analytical procedures in which the different measured values for the various components to be determined are obtained in parallel, practically simultaneously, and mostly by using the same principle. This group includes particularly all spectroscopic procedures with several parallel measuring channels. Large numbers of measurement channels require expensive instrumentation, but the advantage is the saving in time during the course of the analyses themselves.

The topological structure of a chain (C) corresponds "dually" (in the sense of mathematics) to the structure of a bundle (B); the axis of the chain may be considered as representing the sequence in time, and the individual members of the chain may represent the analytical decisions, which are mostly based on one analytical principle, as for the bundle. Analytical procedures with this structure are "single channel procedures", usually requiring low instrument expense, but time-consuming. This group includes procedures involving electrochemical separation, fractional distillation, fractional extraction, chromatography, or the recording of spectra in a time sequence, etc.
The topological structure of a point (P) can be regarded as a degenerate form of either B or C; it applies to analytical procedures in which the decision is made on the basis of one simple measurement, without analytical (A), spatial (B) or sequential (C) pre-separation. Examples are: determination of the concentration of a particular substance in solution by measuring the density, the vapor pressure, the optical rotation, the melting point or the refractive index, etc. Simply weighing a grain of gold in order to determine the quantity of gold also belongs to this group. (In contrast, the determination of gold by a docimastic process, during which the less noble metals in the sample are first removed, obviously has the structure C.)

[5] W. GERLACH, Z. Anorg. Chem. 142, 383 (1925).

Fig. 1. Topological structure of analytical procedures. A: tree (arbor); B: bundle (local); C: chain (temporal); P: point; N: net; Inf = information; W = loop; Res = total result. (In the diagrams, "?" marks an operation yielding no result and a filled dot marks a result.)

2.3.2. Complex analytical procedures (compound procedures with the topological structure of a network).

The topological structure of a network (N) especially describes procedures by means of which complicated problems in organic analysis, for instance the elucidation of chemical structures [6], are solved. It is a characteristic feature of such procedures that the strategy of the analytical process cannot be fixed in advance, but is developed during the course of the work. At the knots of the net, the different partial results which have been gained so far are combined, and together determine the subsequent procedure. It may happen that the same knot of a network is passed several times by going along a loop.

[6] W. SIMON, Z. Anal. Chem. 221, 368 (1966).


For instance, the first attack on the problem by means of a number of measurements started in parallel may have demonstrated that other types of measurements must be made before sufficient information can be produced to decide how to proceed further with the analysis. It may happen that additional information must be brought in from an external source and must be fed into the network. It is very useful for the analyst himself to write down the course and the interrelations of a complicated analytical procedure in the form of a network; it may then be possible to recognize a dead-end at an early stage, and also to detect a shorter way.

A typical analytical problem requiring a compound procedure of this type exists, for example, when one has to determine the nature and structure of a new organic compound. Methods of purification, such as distillation and crystallization, combined with analytical control measurements, provide a homogeneous and well-defined substance for any further work. An elemental analysis or a high-resolution mass spectrum provides the empirical formula. By spectrochemical emission analysis, metals which are present may be detected. Infrared spectrometry indicates the functional groups and some features of the molecular skeleton. Raman spectrometry particularly reveals highly symmetrical arrangements of atoms, whose vibrations do not appear in the infrared spectrum. The UV spectrum indicates the electronic configuration; the NMR spectrum gives the numbers and the positions of the protons in the molecule. In a mass spectrum the stable fragments from the molecule can be observed. X-ray diffraction provides information about the crystal structure and the texture of the substance. The classical reactions of organic chemistry, preparative or analytical, provide further indications or confirmation. This whole network of measurement procedures, operations, and decisions constitutes a complex analytical procedure with very high "informing power".

2.4. Optimization

2.4.1. Aims of optimization. The task of discussing a particular analytical procedure critically very often includes the question of what is the best procedure for solving a definite analytical problem. According to the particular needs, the optimum may mean low costs, short times for analyses, high analytical precision, low limits of detection, etc. Furthermore, one must describe the limitations, or the requirements to be fulfilled, for instance which instruments are available or can be obtained, the maximum admissible cost, the number of required analyses per unit time, etc. The range of possibilities may thus be limited to such an extent that the problem of optimization can be tackled. The solution of such problems has already been well treated in the mathematical theories of operations research. In order to make such mathematical investigations feasible, a formalized mathematical model for chemical analysis must be developed.

2.4.2. Optimization with respect to topology. 2.4.2.1. Tree structure (A). For example, let us suppose that the several single possible operations from which such an analytical procedure is built up are equivalent as regards cost and time. The task of optimization is then reduced to the problem of isolating the individual components by the smallest possible number of separations necessary to determine their contents. To achieve this, the separation processes must be so chosen that the subgroups of separated components produced are nearly equally occupied, thus permitting simultaneous further treatment of as many of the components as possible. Isolation and determination of single components must take place at the end of the whole procedure.
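The rule of nearly equally occupied subgroups is the halving strategy of a balanced tree. A small sketch under the text's simplifying assumption that all separation operations are equivalent in cost and time: splitting the components into halves at every stage needs about ceil(log2 n) successive separation stages, against n - 1 stages when one component is split off at a time.

```python
import math

def stages_balanced(n):
    """Successive separation stages when each step splits a group into two
    nearly equally occupied subgroups that are then treated in parallel."""
    return math.ceil(math.log2(n)) if n > 1 else 0

def stages_one_by_one(n):
    """Stages when each separation splits off a single component."""
    return max(n - 1, 0)

for n in (2, 8, 20):
    print(n, stages_balanced(n), stages_one_by_one(n))
# 20 components: 5 balanced stages instead of 19 sequential ones.
```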
A problem of optimization is also hidden in the question: which analytical tasks can be solved by the use of analytical procedures of a definite topological and calibration structure? The answer to such a question can provide only general indications. Procedures following the scheme of classical separation analysis, with the topological structure of a tree (A), are composed of many relatively simple and independent operations. It is, therefore, possible to calibrate them perfectly (p) with synthesized samples (σ); the calibration values can generally be transferred, and may very often be found in the literature as stoichiometric factors. This type is, therefore, designated by the symbol A(σp).


A(σp) procedures are highly versatile, because they are designed according to a building-box principle. The bricks of this analytical building box are available ready-made, in large numbers. The number of possible combinations is unimaginably high. When an analytical method must be found in an ad hoc manner for a new and hitherto untreated analytical task, one should first try to compose this new procedure as one of type A. The instrumental costs will in most cases be relatively small, whereas the required time may be considerable. In spite of their versatility, the A(σp) procedures are not very suitable for survey analyses and for universal analytical procedures; they are too tedious and too expensive. Here, other procedures are required whose details are all established, and which, therefore, need no check measurements or alterations for particular cases.

Complex analytical procedures of type A are also generally not very suitable for automation. Apparatus used for the automation of A-procedures (automatic balances; transporting, filling, dividing and tapping devices; optical and electrical sensors) are very often robots that replace the human hand, eye and sense of touch, which formerly had to play an active part in the procedure. In this article, the concepts "automated" or "partly automated" are applied to all analytical procedures which operate wholly or in part without human action. This corresponds to the use of the word "automated" in colloquial usage. By stating whether the analytical procedures are governed by a program or controlled by feedback, or whether they are partly governed and partly controlled, one has a clear distinction between the various possibilities, particularly if the topological structure, with A, B, C and P, is also given.

2.4.2.2. Bundle structure (B). Procedures of this type were in most cases originally determined by modern techniques rather than by "craftsmanship", and can be automated without fundamental difficulties; however, the technical expenditure may be very great. The main point is that, in the core of such procedures, a single analytical principle operates, not a variety of branched operations and reactions as in the A-procedures. The "analytical work" of separation is taken over by a variable parameter, such as potential, energy, wavelength, mass number or (sometimes) time. (Regarding the concept of "resolving power", see Section 2.5.2.) Analytical procedures with the bundle structure (B), i.e. mostly with spatial separation, are suitable for rapid analyses. Depending on the analytical task, they must have many parallel measurement channels, and very often the evaluation of the measured quantities is done by an electronic computer. Examples include the direct-reading optical emission spectrometer, and the X-ray fluorescence spectrometer, which are nowadays used in the laboratories of steel works, with very extensive but fixed analytical programs. These procedures are firmly calibrated with analysed standard samples, and, therefore, are of type B(αf).

2.4.2.3. Chain structure (C). These procedures are in general slower, but in most cases cheaper, because one measurement channel is sufficient. The separating variable parameter is either time itself (e.g. time consumed during chromatography or electrophoresis) or a time-dependent parameter (e.g. the potential applied during a polarographic analysis). These procedures can be automated very easily without great expense. If many samples are to be analysed, it is possible to have a large number of installations running in parallel.
The number of components to be determined with a C-procedure is generally smaller than the number of components with a B-procedure, because constant conditions must be maintained throughout the longer duration of the analyses, and this may be difficult to achieve. However, when time is available, it is possible to use not only the rapid α-calibration but also the more time-consuming σ-calibration; the most frequent types are C(αf) and C(σf).

2.4.2.4. Network structure (N). Compound procedures of this structure must be considered in the same way as procedures with the topological tree structure (A). They may also be composed of subsidiary procedures, as in a building-box system; they can be adapted not only to individual problems, but also to whole fields of problems in chemical analysis, for instance to the determination of structures of organic compounds.


The urgent necessity of finding any suitable solution for a difficult analytical task may predominate over the question of costs and of the time required for analysis. Furthermore, the question as to whether the procedure as a whole can be automated has little meaning, whereas the subsidiary procedures co-operating in a compound analytical procedure may themselves proceed automatically, either partially or completely. These subsidiary procedures have mostly the structure B or C. It is interesting to consider what happens at the knots of the network, where the intermediate results are evaluated together. Very often, the experience and the imagination of the analytical chemist are indispensable for deciding how the work must be continued. However, if the decisions are made according to formal rules, then logical tools may be employed, for instance correlation tables, punched cards, or a computer.

2.5. Informing power

2.5.1. Definitions. Analytical procedures are information processes (cf. [7, 8]). Information of all kinds* is transferred and retained by signals. Signals are either configurations in space (e.g. printed letters, pictures, punched tapes, or magnetic tapes) or they are processes which occur in space and time (e.g. sound, light, electric currents). All these signals are finite in space and time; in consequence, only a finite number of structural details can be distinguished in each signal. These can be represented by a finite number of numerical values. In order to understand the meaning of this representation, one must know the classification system or code. Because the binary system of numbers is widely used in electronic data processing, this system has been generally adopted for the theoretical treatment of information processes as well; it will, therefore, be applied in the following considerations. A binary position (bit) is considered as the unit, with dimension 1, by which an "amount of information" can be measured as a metric quantity. (As all units, it is to be written in the singular form. If, in a context, this term is used as an abbreviation of "binary digit" in this concrete sense, the plural should be used.) A single position in a binary system, with its two digits 0 and 1, makes it possible to represent symbolically the two cases of a yes/no decision.

Let us now consider how many binary digits would be required to assign binary numbers to all the different results which might possibly be produced by an analytical procedure, so that one could then look up the corresponding text (for instance values for the concentrations). This number of required binary digits is a metric quantity which indicates how much an analytical procedure can do as an information process, considered from a purely formal point of view. This quantity may be called the "informing power" P_inf of the analytical procedure [9]. The informing power of an analytical procedure is determined by the number n of the different measurable quantities, and by the number S of distinguishable steps of values for each of these quantities. The formula is

$$P_{\mathrm{inf}} = \sum_{i=1}^{n} \log_2 S_i \qquad (1)$$

or

$$P_{\mathrm{inf}} = n \log_2 \bar{S} \qquad (1a)$$

if the number of the distinguishable steps for all n measures is of nearly the same order of magnitude, so that the calculation can be made with an average value S̄. It is obvious that the relatively coarse measurement (S small) of a second quantity may give much more additional informing power than a considerable increase in the precision of measurement (S large), which can very often be achieved only at high cost and with much effort. If the number of measurable quantities is doubled, P_inf is increased by a factor of 2; if the number of distinguishable steps is doubled, P_inf rises by one bit only.

* The nomenclature in this field is still ambiguous. The word "information" is used in at least four different meanings, often in the same text. In this article the following nomenclature will be used. In the current sense: information = process of instruction; information(s) = (new) knowledge with respect to content (preferably plural). In the sense of metric quantities: informing power, P_inf (of a method); amount of information, M_inf (required technically to communicate some definite desired knowledge); information capacity (capability of a technical system to transmit or store an amount of information).

[7] L. M. IVANCOV, P. G. KUZNECOV, JU. I. STACHEEV in Reinststoffprobleme (Edited by E. REXER), Bd. II, Reinststoffanalytik, p. 31. Akademie, Berlin (1966).
[8] General literature about information theory: see [47] (with many references); also articles in [48] Vol. II and [50].
[9] H. KAISER [27].
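Formula (1) translates directly into a few lines of code. A minimal sketch (Python; the step counts are invented) which also illustrates the remark above that a coarse additional channel contributes more than refining an existing one:

```python
import math

def informing_power(steps):
    """Informing power P_inf = sum over i of log2(S_i), in bit (formula 1);
    steps[i] is the number S_i of distinguishable value steps of the i-th
    measurable quantity."""
    return sum(math.log2(s) for s in steps)

# Two channels with 1000 and 100 distinguishable steps, then a third,
# very coarse yes/no channel (invented values):
print(informing_power([1000, 100]))     # about 16.6 bit
print(informing_power([1000, 100, 2]))  # about 17.6 bit: one full bit more
```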

2.5.2. Parameters and resolution. The n different measures may belong to different values of a variable parameter (frequency, wavelength, mass number, time, location in space, electrical potential, etc.). If only a few distinct values for the parameter (measurement positions or "measurement channels") are allowed, formulae (1) and (1a) are valid. However, if the experimental parameter is varied over a wide range in a continuous way, then the number of distinguishable measurement positions will be very large; in this case a reformulation of formula (1) is most enlightening. Let the symbol for the parameter be ν. The concept of resolution R is defined by R(ν) = ν/δν, where δν is the smallest distinguishable difference in ν which can be recognized for practical purposes. Let the smallest distinguishable difference from basic principles be called δ₀ν; the corresponding quantity R₀ = ν/δ₀ν is called the (theoretical) "resolving power". Within a small range Δν, there are Δν/δν different measurement positions; then Δν/δν = R(ν) Δν/ν. If this is inserted into formula (1), and if summation is replaced by integration, the informing power of an analytical procedure with (spectral) decomposition through a parameter ν in the range from ν_a to ν_b is given as

$$P_{\mathrm{inf}} = \int_{\nu_a}^{\nu_b} R(\nu)\,\log_2 S(\nu)\,\frac{\mathrm{d}\nu}{\nu} \qquad (2)$$

A hint for a formula for the "channel capacity" of a spectrograph was given by WOLTER, Marburg [10]. When R and S are practically constant throughout the range of application, their average values may be taken, and the formula is then simplified to

$$P_{\mathrm{inf}} = \bar{R}\,\log_2 \bar{S}\,\ln\frac{\nu_b}{\nu_a} \qquad (2a)$$

It is obvious that the resolution R is decisive for a high informing power. High spectral resolution is, therefore, much more important than a wide spectral range or a large number of steps S (precision) in the measurement.

2.5.3. Useful resolution. From formula (1), the informing power of an analytical procedure by which only one component is to be determined, using only one measurable quantity, must be of the order of 10 bit (one-dimensional, non-dispersive procedure). Analytical procedures with several measures (multichannel or polychromatic procedures) give about 100 to 500 bit. In contrast, spectroscopic analytical procedures may have informing powers of 10⁴ to 10⁶ bit, assuming that the spectral resolving power is used to its full extent. One restriction must be considered in particular: in formula (2), R(ν) stands for the practical resolution taken for the analytical procedure as a whole, and not the theoretical resolving power R₀(ν) of the spectroscopic instrument in the core of the procedure.

[10] H. WOLTER, private communication.
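A quick numerical reading of formula (2a), with all values assumed for illustration (they are not from the text): an average resolution of 5000, ten distinguishable intensity steps, and one octave of spectral range already give an informing power of the order of 10⁴ bit, in line with the figures quoted in Section 2.5.3.

```python
import math

def informing_power_dispersive(R_mean, S_mean, nu_a, nu_b):
    """P_inf = R_mean * log2(S_mean) * ln(nu_b / nu_a), formula (2a)."""
    return R_mean * math.log2(S_mean) * math.log(nu_b / nu_a)

# Assumed values: R = 5000, S = 10 steps, nu_b = 2 * nu_a (one octave).
print(round(informing_power_dispersive(5000, 10, 1.0, 2.0)))  # ~11500 bit
```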


The informing power offered by the instrument very often cannot be fully used, because the observed physical phenomenon does not permit this. For instance, in molecular spectroscopy the absorption bands of liquids and of solids are relatively broad; the practical resolution R which has to be used in the formula is, therefore, entirely determined by the width of the bands.

During discussion of the solution of an analytical problem, the question may arise as to the minimum number of bits required to communicate the desired information. This quantity is called the required amount of information M_inf. It can be calculated by a logical analysis; one must ask how many yes/no decisions are necessary to represent completely the desired information. Should the informing power P_inf offered by the analytical procedure be smaller than the required amount of information M_inf, three possibilities remain: (a) reduction of the task; (b) use of "pre-information" or "joint information" from other sources; (c) combination of several analytical procedures into one compound procedure (usually of topological structure N), which as a whole may yield the required amount of information.

Example: it is required to analyse a sample of unknown composition quantitatively for all 100 elements of the periodic system; the concentrations of the different elements are to be given in a concentration scale having 1000 steps. If such a scale is logarithmic and equally subdivided, then the concentration from step to step would grow by a factor of 1.023; such a scale would cover the range from 100% down to 10⁻⁸%. From equation (1a), the amount of information required for the determination of the 100 elements would be M_inf = 100 · log₂ 1000 ≈ 1000 bit.

2.5.4. Principles with high resolving power. Analytical principles (not methods) which provide such a high informing power are optical emission, X-ray and mass spectroscopy. The universal analytical procedures which are based on these principles offer informing powers from 10,000 to 200,000 bit (see formula 2a). The surplus of informing power (redundancy) may be used to increase the reliability and the precision of the analysis. Comparison of the amount of information M_inf which is required to solve an analytical problem with the informing power P_inf of an analytical procedure is very useful, but one should not exaggerate such formal considerations. For example, no practical conclusions can be drawn from the statement that about 10⁸ bit are necessary merely to represent the numerical values of all possible organic analyses.
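The worked example in Section 2.5.3 can be checked mechanically; a short sketch reproducing both the step factor of the 1000-step logarithmic scale and the required amount of information:

```python
import math

steps = 1000
top, bottom = 100.0, 1e-8            # scale from 100 % down to 10^-8 %
factor = (top / bottom) ** (1 / (steps - 1))
print(round(factor, 3))              # about 1.023 from step to step

M_inf = 100 * math.log2(steps)       # 100 elements, 1000 steps each
print(round(M_inf))                  # about 1000 bit
```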

2.6. Figures of merit

There are two groups of figures of merit in use for analytical procedures: a functional group and a statistical group. The functional group comprises sensitivity, selectivity, specificity, and accuracy. The statistical group comprises precision, detection power, and limits of precision.

2.6.1. Functional figures of merit for simple analytical procedures. In a simple analysis, the content c of the component to be determined is derived from the measure x of the appropriate measurable quantity, with the use of the analytical function c = f(x). This function is the inverse of another one, x = g(c), which is experimentally determined during the process of calibration of the analytical procedure, performed with standard samples of known composition. There is, therefore, a pair of functions which are inverse to each other. The inversion is possible within a limited range, when the differential quotient of the function g(c) exists in the whole range c_a … c_b and when it is nowhere zero. This must be expressly stated, because calibration functions are known which cannot be inverted everywhere, and above all not unambiguously. Where the analytical curve is retrogressive (Fig. 2), the differential quotient is zero.


Such functions often occur: for instance, the electrical conductivity of a solution sometimes decreases at increasing concentrations; the main spectral lines used in emission spectroscopy sometimes show so strong a self-reversal at increasing concentrations that their intensity decreases with rising concentrations. Such functions can be used only in the restricted range of c in which they can be inverted unambiguously.

The sensitivity of a measurement procedure is, in general, defined as the differential quotient of the characteristic function of the procedure [11]. An analytical procedure should, therefore, be termed sensitive when a small variation in c causes a large variation in the measured quantity x. The "sensitivity" is constant only in those cases where the calibration function is linear; in general, the sensitivity is a function of c. The "sensitivity" of an analytical procedure has nothing at all to do with the power of detection of the procedure. The term "sensitivity of detection", which used to be employed, is misleading and its use should be abandoned.

Accuracy: If an analytical procedure were "completely accurate", then its calibration function (and also the inverse analytical function) would have to be free from systematic errors. If the law which such systematic errors follow is known, the calibration function can be corrected. When systematic errors are suspected to be present, but when their size and their functional dependence are not known, then it is necessary to investigate the analytical procedure critically and to compare analytical results which have been obtained for identical samples by different procedures. Sometimes it is possible by such observations to derive an upper and a lower limit for the systematic errors which may be occurring, and to indicate an interval within which the correct calibration curve should lie. If Δc is the width of this interval, then the smallest value of the ratio c/Δc which occurs in the whole range of application may be regarded as a useful measure of the overall accuracy A of the analytical procedure:

A = min (c/Δc).   (3)

A procedure will have a higher figure of merit, the more “accurate” it is, and the smaller the uncertainty interval which must be taken into account because of the possible, but unknown, systematic errors. For instance, when it is known that the systematic errors in the whole range are smaller than 0.01 c, the accuracy can be numerically expressed as A = 100. The situation is much simpler when it is known that the systematic errors of the calibration function are only the “frozen in” random errors of the calibration measurements. In this case, over the interval in which the calibration function may be found, it is possible to establish a probability function for its course. Paradoxically, it is then possible to state the probability with which analytical results might be systematically wrong by a certain amount, if they were derived from a particular calibration function.
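To make the pair of inverse functions concrete, here is a minimal Python sketch of a calibration and its inversion; the standard-sample data and the linear form of g are invented for the illustration, not taken from the chapter:

import numpy as np

# Hypothetical calibration data: standard samples of known content c.
c_std = np.array([0.5, 1.0, 2.0, 4.0, 8.0])       # contents, arbitrary units
x_std = np.array([0.12, 0.23, 0.45, 0.86, 1.60])  # measures obtained in calibration

# x = g(c): fit the simplest calibration function, a straight line.
slope, intercept = np.polyfit(c_std, x_std, 1)

def g(c):            # calibration function x = g(c)
    return slope * c + intercept

def f(x):            # analytical function c = f(x), the inverse of g
    if slope == 0:   # dg/dc must be nonzero everywhere for the inversion to exist
        raise ValueError("calibration function is not invertible")
    return (x - intercept) / slope

# The sensitivity dg/dc is the slope; it is constant only because g is linear here.
print(f"sensitivity = {slope:.3f}")
print(f"content for measure x = 0.50: {f(0.50):.2f}")

# Overall accuracy (3): if systematic errors are known to stay below 1 % of c,
# the uncertainty interval is delta_c = 0.01*c everywhere, so A = min(c/delta_c) = 100.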

Fig. 2. Calibration curve (left) with ambiguous inverse analytical curve (right).

[11] DIN 1319, Abschnitt 6, Grundbegriffe der Meßtechnik.


These considerations can be applied in a similar way to multicomponent analyses; for calculation of the regions of uncertainty, the new methods of "interval arithmetics" may be useful.

2.6.2. Functional figures of merit for complex analytical procedures. When n components whose contents are independent of each other are to be determined in one sample, at least n independent measurable quantities are required (cf. Ref. [12]). When there are more measures available, the superfluous ones may be dropped, or several of the original measures may be combined into new ones in such a way that n independent measures remain. The one measurable quantity x used for a simple analysis is now replaced by the set (x_1, …, x_n). These n figures may be considered as the coordinates of a point in a space of n dimensions. Correspondingly, the contents (c_1, …, c_n) can be regarded as the coordinates of a point in another space, that of the chemical constitution. Calibration of an analytical procedure experimentally establishes a correlation between the points (c_1, …, c_n) and the points (x_1, …, x_n). This leads to a system of "calibration functions", n in number:

x_1 = g_1(c_1, …, c_n)
x_2 = g_2(c_1, …, c_n)
   ⋮
x_n = g_n(c_1, …, c_n)   (4)

The inverse system of these functions is used for the analysis. This leads to a system of analytical functions:

c_1 = f_1(x_1, …, x_n)
c_2 = f_2(x_1, …, x_n)
   ⋮
c_n = f_n(x_1, …, x_n)   (5)

If the functions in (4) are throughout continuously differentiable, then they can be approximated in the environment of each point by the first linear terms of a Taylor series. The system of calibration functions can, therefore, be locally represented by a system of n linear equations. The coefficients of this system of equations are the partial differential quotients of the measures with respect to the contents,

γ_ik = ∂x_i/∂c_k.

Obviously, these γ_ik are none other than the (partial) sensitivities of the individual measures towards variations of the contents of the different components. The relationship between contents and measures in its totality is, in a sufficiently small region about the point (c_1, …, c_n) under consideration, represented by the appropriate matrix of the partial sensitivities:

γ_11 γ_12 … γ_1n
γ_21 γ_22 … γ_2n
  ⋮    ⋮        ⋮
γ_n1 γ_n2 … γ_nn   (6)

The functional figures of merit must be derived from this matrix, which may be termed the "calibration matrix" (of the analytical procedure). From the system of calibration functions (4), the system of analytical functions can be derived as its inversion only when the determinant of the matrix (6) is not zero. When this requirement is fulfilled throughout the range of application, then the system of the n analytical functions can always be calculated in linear approximation; however, it must be noted that the solutions of the calibration functions for the contents c_1, …, c_n are valid only locally, and not for the whole range of application. Only when


the partial sensitivities γ_ik have constant values throughout the whole range, i.e. when the calibration functions are really linear and not merely in local approximation, does the inversion give the corresponding system of n (likewise linear) analytical functions which are valid for the whole range. (The matrix of the analytical functions is the inverted calibration matrix.) In this case the work expended in calibration measurements and their evaluation is relatively small, because the n² coefficients γ_ik then have to be determined only once, and are valid for the whole range. It is, therefore, an important and positive verdict on an analytical procedure when it can be stated that the calibration and analytical functions are (practically) linear throughout the whole range of application.

Often the system of calibration functions can be linearized if the variables x_i and c_i are suitably transformed mathematically. An example is Beer's law in photometric analysis; it can be written as a linear equation if, instead of the original measured quantity (the transmittance of the sample), its negative logarithm, the absorbance, is used. In many cases one must find the best way by trial and error: power functions, simple rational functions, and logarithms offer many possibilities.

The value of the determinant of the calibration matrix, det(γ_ik), is the transfer factor by means of which a volume element in the n-dimensional space of the constitution (c_1, …, c_n) is represented in the space of the measurable quantities (x_1, …, x_n), and, to this extent, it is the generalization of the concept "sensitivity" from the simple to the complex analytical procedures.

2.6.2.1. Selectivity. A generally applicable quantitative definition for the concept "selectivity" can also be derived from the mathematical properties of the calibration matrix. Obviously, an analytical procedure would be called "fully selective" (in the colloquial language of analytical chemists) when only the elements of the principal diagonal of its calibration matrix, γ_ii (i = 1, …, n), are non-zero. The procedure would then break down into n independent subprocedures, at least as regards calibration and evaluation. Each of the components (i) to be determined would then be measured by its own measurable quantity x_i, which depends only on the content c_i of this one component. Analytical procedures which are fully selective in this sense occur for instance in emission spectroscopic analysis and in mass spectrometry.

For a procedure of moderate selectivity, there should be at least some correspondence between measures and components, such that the corresponding x and c can be given identical indices without ambiguity. It must, therefore, be possible to choose the index numbers in such a way that the matrix elements of the principal diagonal are the greatest in their row (regardless of sign). This is presupposed in the following, but is still not sufficient for a definition of "selectivity". One gets further if one requires that the system of calibration functions shall be soluble, for the contents c_1, …, c_n to be determined, by the use of an "iteration process". In such a process, the system of equations is solved in several sequential steps by successively inserting approximate solutions until the numerical values for the solutions no longer change.
The first approximation is obtained by taking into consideration only the matrix elements in the principal diagonal; in this first step, one proceeds as if the analytical procedure were fully selective. This iteration process converges only when, in each row of the calibration matrix, the element in the diagonal is larger than all other elements of the same row taken together, i.e. if the inequality (7) is valid for all i ([12], p. 159):

|γ_ii| / ( Σ_{k=1..n} |γ_ik| − |γ_ii| ) > 1.   (7)
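In modern numerical terms this successive-substitution scheme is essentially a Jacobi iteration, and condition (7) is row diagonal dominance, which guarantees its convergence. A minimal Python sketch with an invented 3-component calibration matrix (all numbers illustrative):

import numpy as np

gamma = np.array([[10.0, 1.0, 0.5],    # hypothetical calibration matrix (6);
                  [ 0.8, 8.0, 1.0],    # rows: measures, columns: components
                  [ 0.3, 0.6, 5.0]])
x = np.array([21.3, 26.0, 17.1])       # measured values x_1 .. x_3

# Condition (7): each diagonal element must outweigh the rest of its row.
off_diag = np.abs(gamma).sum(axis=1) - np.abs(np.diag(gamma))
assert np.all(np.abs(np.diag(gamma)) / off_diag > 1), "iteration may diverge"

c = x / np.diag(gamma)                 # first approximation: fully selective case
for _ in range(50):                    # successive substitution (Jacobi steps)
    c_new = (x - (gamma - np.diag(np.diag(gamma))) @ c) / np.diag(gamma)
    if np.allclose(c_new, c, rtol=1e-10):
        break
    c = c_new
print(c)                               # agrees with np.linalg.solve(gamma, x)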

The larger these quotients are, the better the iteration procedure converges. If, therefore, we take the smallest value of these quotients which occurs in the matrix and in the range of application of the procedure, we have a quantitative measure ξ of selectivity in the following expression:

ξ = min_{i=1..n} [ |γ_ii| / ( Σ_{k=1..n} |γ_ik| − |γ_ii| ) ].   (8)

[12] R. ZURMÜHL, Praktische Mathematik für Ingenieure und Physiker, 5. Aufl. Springer, Berlin, Heidelberg, New York (1965).

For a "fully selective" procedure, ξ becomes very great (formally infinite); when the value of ξ is only a little above zero, one can hardly speak of selectivity. Even analytical procedures which are not selective in this sense may be very useful for the determination of several components. The system of calibration functions can always be solved for the contents when the determinant of the calibration matrix is different from zero. It is in no way necessary that measurable quantities and components should be connected in pairs. However, selective procedures have many practical advantages: they are clear, and relatively simple to calibrate and to evaluate.

To avoid misunderstandings, a practical difficulty which may occur during the course of numerical solution of such systems of linear equations must be mentioned: when the unavoidable measurement errors for the main components have too strong an effect in the equations for the minor components, or when the γ_ik are not known sufficiently exactly to allow calculation of the determinant, then the calculated results may be nonsensical. For instance, one should not try to determine trace elements in pure iron indirectly, on the basis of the measures obtained for the iron content. However, these are problems of numerical calculation, and have nothing to do with the functional relationships which lead to a definition of selectivity.

2.6.2.2. Specificity. An analytical procedure is generally called specific for a substance if an analytical signal is generated only by that particular component (i) in a multicomponent sample. A specific procedure, therefore, is always also selective. If a procedure is only approximately specific, a formal analogy to "selectivity" can lead to a quantitative definition of the degree of "specificity" for component i in terms of the following formula:

Ψ_i = |γ_ii| / ( Σ_{h,k=1..n} |γ_hk| − |γ_ii| ).   (9)
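Both ξ and Ψ_i can be read directly off the calibration matrix. A short Python sketch, reusing the invented matrix from the iteration example above:

import numpy as np

gamma = np.array([[10.0, 1.0, 0.5],
                  [ 0.8, 8.0, 1.0],
                  [ 0.3, 0.6, 5.0]])

diag = np.abs(np.diag(gamma))
row_rest = np.abs(gamma).sum(axis=1) - diag   # sum_k |g_ik| - |g_ii|, per row
xi = (diag / row_rest).min()                  # selectivity xi, formula (8)

total_rest = np.abs(gamma).sum() - diag       # sum_{h,k} |g_hk| - |g_ii|, per component i
psi = diag / total_rest                       # specificity psi_i, formula (9)

print(f"selectivity xi = {xi:.2f}")           # the minimum over the three rows
print("specificity psi_i =", np.round(psi, 3))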

The most notable feature of these definitions is their freedom from arbitrariness. The calibration matrix alone contains all the information about the sensitivity, selectivity, and specificity of a complete analytical procedure, for each point in the chemical constitution space.

2.6.3. Statistical figures of merit. Methods of mathematical statistics are tools which can be used for three very different groups of tasks: (a) to describe and to classify large masses of data; (b) to compress numerous data, in a descriptive way, by the use of statistical characteristic figures; (c) to derive predictions and to establish decisions with indications of the risks. In the following, only the most important statistical figures of merit will be treated, briefly and with reference to the very extensive literature.

2.6.3.1. Descriptive statistics. Extensive observation material can be ordered, subdivided and presented in a clear fashion by "classification" and by stating the observed frequency distribution [13, 39]. Two statistical quantities in particular are used for data compression: the "mean" and the "standard deviation". The calculation is purely formal. Suppose N measurements (observations, analyses) have been made, and that the measured values are x_1, x_2, …, x_N (these need not all be different). The "mean" x̄ (often also termed the "average") is defined as

x̄ = (x_1 + x_2 + … + x_N)/N = (1/N) Σ_{i=1..N} x_i.   (10)

The standard deviation s is defined as

s = +√[ (1/(N−1)) Σ_{i=1..N} (x_i − x̄)² ].   (11)

If the standard deviation s is divided by the mean x̄, the result is called the relative standard deviation, s_r:

s_r = s/x̄.   (12)

[13] J. PFANZAGL [44], Bd. I, Sammlung Göschen, Bd. 746, 746(a) (1967).
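Equations (10) to (12) in executable form (a minimal sketch; the five replicate measures are invented):

import math

def mean_std_rel(values):
    """Mean (10), standard deviation (11, with N-1 in the denominator)
    and relative standard deviation (12) of a series of measures."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in values) / (n - 1))
    return xbar, s, s / xbar

# Five replicate analyses of the same sample (hypothetical numbers):
xbar, s, s_r = mean_std_rel([2.68, 2.73, 2.70, 2.66, 2.74])
print(f"mean = {xbar:.3f}, s = {s:.3f}, s_r = {s_r:.4f}")  # s_r as a decimal fraction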

In chemical analysis, the relative standard deviation should always be given as a decimal fraction, not as a percentage, in order to avoid confusion with concentration values, which also are often given as percentages [14]. In many cases, there may not be enough material for a large number of analyses on the same sample. Alternatively, the time required for an analysis may be so long that for practical reasons it is impossible to make numerous analyses merely for the purpose of checking the procedure. However, it is possible to determine the standard deviation not only from a large number of analyses made on one particular sample, but also from a large number of analyses on slightly different samples, each of which has been analyzed several times. Analyses which are made routinely in many laboratories, very often involving double or multiple determinations, can be used to give the numerical values necessary for investigation of the precision of the procedure in question [15]. In practically all cases, the standard deviation is the best value to characterize the "scatter" of an analytical procedure. Under no circumstances should one instead use the range between the highest and the lowest measured values which may by chance have occurred in a particular series of experiments, since these are in fact the two most uncertain values obtained.

2.6.3.2. Prognostic statistics. The real problems begin when the frequency of occurrence of future events must be predicted on the basis of past observations. Such considerations are necessary for the estimation of risks connected with decisions to be taken. The bridge between the past and the future is formed by mathematical probability theory, which is nowadays being developed in an axiomatic way following the model established by KOLMOGOROFF [16]. Mathematical models can be established for frequency distributions of the different types observed in practice [9]. However, it is possible to find out whether such a model is suitable for treatment of a particular case only by critical comparison of the model with empirical findings. The same is true also for critical assessment of analytical procedures. The widespread opinion that the distribution of errors in chemical analyses can always be adequately described in terms of a Gaussian normal distribution is plainly wrong. The essential problem of mathematical statistics is to find the appropriate probability function which best "fits" a more or less numerous set of observations. This must be done in such a way that conclusions for successful practical action can be drawn from this probability function.

At present, there is a tendency to give the so-called confidence limits, generally for 95% or 99% confidence (or statistical certainty), instead of the observed standard deviations. This tendency is dangerous for two reasons. Firstly, "confidence", "certainty", and "risk" are concepts of practical ethics; "how much" confidence one has, i.e. how much certainty is required to make a decision, cannot be fixed once and for all by convention; in every individual case, this must be decided after taking into account the facts and problems of life.

[14] R. W. FENNELL and T. S. WEST, Recommendations for the Presentation of the Results of Chemical Analyses, IUPAC Commission on Analytical Nomenclature, Pure Appl. Chem. 18, 439 (1969).
[15] H. KAISER and H. SPECKER [28].
[16] A. N. KOLMOGOROFF, Grundbegriffe der Wahrscheinlichkeitsrechnung, Springer, Berlin (1933).


Secondly, the conventional figures for the relationship between statistical confidence and standard deviation (for instance, 95% confidence corresponds to the range ±2σ) are all based on the assumption that the population in question has a "normal" Gaussian distribution. The same is true for cases where the confidence range was calculated using Student's t-distribution, since this too presupposes a Gaussian distribution for the whole population from which a relatively small statistical series was taken.

As GAUSS [17] himself first demonstrated, many observations in science can be described, to a good approximation, by assuming a normal distribution. This is especially true for the distribution of random measurement errors. In chemical analysis, however, for practical reasons it is often not possible to make very long series of experiments in order to verify that the distribution of errors about the mean does actually correspond to a normal distribution. For such an investigation, very many measurements, at least 100, would have to be made in order to be sure that the dangerous (but rather rare) large deviations from the mean do not occur too often. The most important reason for accepting a Gaussian normal distribution is the "central limit theorem" of probability theory.

In order to understand its importance, one must realize that the so often investigated distribution of the measured values about the mean of many measures is by no means the distribution which is of importance for discussion of chemical analyses. A different distribution function, which indicates how the possible true values are distributed round a given measured value, is needed. This at first sight seems surprising, since it is natural to adhere to the correct view that there is just one true content in the sample for the element being determined. The meaning of the above can best be clarified by an example. Suppose that in an analytical sample one finds 2.70 µg Fe per ml. The "true content" is unknown; it might in fact be, say, 2.62 or 2.75 or 2.71 or 2.67 µg. One might select, from a very large series of such analyses, all the observations in which one found exactly 2.70 µg Fe. Now imagine that the true values are subsequently found by a better analytical process, or by looking them up in a list. Then, for the many different samples which gave precisely the same measured analytical value, quite different true Fe contents would have been established, distributed for example in the range 2.55–2.93 µg, probably with most of them near to 2.70 µg. The conclusion is that one particular measured value can arise from different true values. Obviously, this distribution of the possible true values round a measured value must be dependent on the reserve of true values which are admitted. The type will be different according to whether a continuous sequence of values is possible with equal probability, or whether the measurable quantity can assume only one or two discrete values. It is fortunate that in many cases of chemical analyses it can be assumed that the possible true values occur in a continuous distribution with equal frequency. The following statements are valid for this case:

Distribution function for the possible true value leading to a particular measure:

If (a) the possible true values in the range of application under consideration are distributed continuously and with equal relative frequency, and if (b) the random errors are produced by the combined action of many independent sources of error of approximately equal magnitude, then a Gaussian normal distribution (bell-shaped curve) is valid for the inference from the measured value to the true value. The position of the maximum of the Gaussian curve is at the measured value.

Distribution function for the possible true values leading to a particular mean of a large series of measures: If the final result is the mean from a very large number of single measured values, then the distribution function for the inference from the mean to the true value is always given by a normal Gaussian distribution function, with its maximum at that mean.

[17] H. KAISER, Spectrochim. Acta 3, 40 (1947).

This statement is independent of the assumption that the total error of a measure is the sum of many individual errors, and is also independent of the distribution of the possible true values. In addition, for this case the special form of the Gaussian function is determined by the "standard deviation" which can be calculated from the many individual measured values. Experience indicates that these assumptions, in any combination, correspond to most cases of practical analysis. It is therefore reasonable to base discussion of the size and distribution of random errors first on a Gaussian function. One should not, however, forget the risk connected with such an assumption.

2.6.3.3. Limit of detection. For a correct appraisal of an analytical procedure by which very small contents, "traces", are to be determined, it is necessary to know the smallest value the procedure can give for a content. This is a problem both of measurement and of statistics. At low concentrations, there is uncertainty about whether (and to what degree) an observed "measure" is really due to the amount of the desired substance in the sample, or whether it is caused by uncontrolled chance disturbing influences. This uncertainty of assessment, although limited by the statistical definition of the limit of detection, is not completely removed. Nevertheless, a decision with calculable risk can be made by using a criterion agreed upon by convention [17, 18].

Chance disturbing influences are already operative when the analytical procedure is applied to a blank sample, and lead to measures x_bl which exhibit chance fluctuations whose origin cannot be investigated in detail, nor indeed would one wish to do so. It is important that the cause of the uncertainty in the analytical value is not the size itself of the blank measure, but the size of its fluctuations. A constant blank measure of any size can always be compensated for. The same is also true for the constant component of the fluctuating blank measures, which is given by the statistical mean x̄_bl. The fact that chance fluctuations set a limit to the measurement of low values has been known since Brownian motion was observed. Recently it has gained widespread popularity under the term "signal to noise ratio", which was coined by electrical engineers. In chemistry it is appropriate to speak of an "analytical signal"; however, it is hard to speak of (analytical) "noise", even if the final measurement is made electronically.

The disturbing influences which give rise to the chance fluctuations of the blank measurements can be of many different kinds. Examples are: impurities in reagents, losses through adsorption on the walls of the vessels, errors of weighing or titration, secondary reactions, temperature fluctuations of the light source in spectrochemical analysis, etc. The size of the fluctuations due to such causes is usually not predictable from theory. In practice, however, the magnitude of the fluctuations for each analytical procedure can be found numerically by carrying out a sufficiently large number of blank analyses, and then statistically evaluating the measures x_bl found in the course of them. The mean x̄_bl and the standard deviation s_bl are calculated. For the definite detection of a substance, a requirement is that the difference between the analytical measure x and the mean blank value x̄_bl must be greater than a definite multiple k of the standard deviation s_bl of the blanks. Smaller measures are discarded as not sound.
Experience has shown that it is appropriate to choose k = 3. This compensates for the uncertainty which is due to the fact that only "estimates" for the mean blank value and the standard deviation can be obtained, and above all that the type of the distribution function is not known. (One should not without further investigation assume a normal distribution.) If at least 20 blank analyses are carried out, and k is taken as 3, then the risk of erroneously eliminating a measure as not sound is at most a few percent. The measure x at the limit of detection is, therefore, defined by the equation

x = x̄_bl + 3 s_bl.   (13)

The content c at the limit of detection also follows from the analytical function, as

c = f(x).   (14)

[18] H. KAISER [21].
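The convention (13) and (14), sketched in code; the twenty blank readings and the linear analytical function are invented for the illustration:

import statistics

# Twenty blank analyses (hypothetical readings of the measure x):
blanks = [0.051, 0.048, 0.053, 0.047, 0.050, 0.052, 0.049, 0.046, 0.054, 0.050,
          0.051, 0.049, 0.048, 0.052, 0.050, 0.053, 0.047, 0.051, 0.049, 0.050]

x_bl = statistics.mean(blanks)          # mean blank value
s_bl = statistics.stdev(blanks)         # standard deviation of the blanks (N-1 form)

k = 3                                   # conventional factor in equation (13)
x_lim = x_bl + k * s_bl                 # measure at the limit of detection

def f(x, slope=40.0, blank=0.050):      # analytical function, assumed linear here;
    return (x - blank) / slope          # calibration parameters are invented

c_lim = f(x_lim)                        # content at the limit of detection, (14)
print(f"x_lim = {x_lim:.4f}, c_lim = {c_lim:.5f}")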


This is the smallest value for the content which the analytical procedure in question can ever yield. It is, therefore, a characteristic feature of the procedure itself. However, the concept of "limit of detection" has been formulated with regard to the analytical problem; if one were speaking of the procedure as such, a better colloquial term would be "power of detection". Near the limit of detection, all quantitative determinations of the content are rather imprecise; it follows from the definition that, depending on the slope of the analytical function, the relative standard deviation for c must be about 0.3.

For critical assessment of the efficiency of an analytical procedure at small concentrations, not only is there the question of the possibility of detection of a desired substance, but also that of the "precision" of the analysis in the region of small concentrations in general, i.e. the question of the extent to which the concentration may be reduced before the noise level peculiar to the particular analytical procedure pushes the standard deviation of the analytical measure over a preselected threshold value. This could be described by a "limit of precision (for …)", in which term the required standard deviation must be inserted in the brackets. The particular analytical task will determine what is admissible.

The limit of detection c is a figure of merit for the analytical procedure as such, not for the individual analysis. All analytical results which can be obtained by this procedure must be ≥ c. If the desired substance has not been found, it must be asserted that the content is below the limit of detection c. The "limit of guarantee for purity", which one would wish to indicate if the wanted substance could not be detected, is always higher than the limit of detection of the procedure. The limit of guarantee for purity c_G is not a characteristic property of the analytical procedure as such, but it provides an interpretation for an "empty" result which was obtained by analyzing a quite definite concrete sample. The value which can be given for c_G depends not only on the analytical procedure, but also especially on the homogeneity of the sample as observed under the given experimental conditions (cf. [21, 25]).

2.7. Correlation between the analytical problem and the analytical procedure

Finally, the question again arises of how one can find an optimal procedure for solution of a given analytical problem. At present, no general systematic way for this correlation is available. We depend upon the knowledge and experience of the analytical chemist, and on the many experiences which are recorded in the literature. In addition, we are strongly dependent on habits and even fashions.

In a mathematical picture, one must represent the different possible analytical problems as points in a multi-dimensional space, and in such a systematic order that related problems appear as neighboring points in this space. A corresponding order must be given to the various analytical procedures, which can be represented as points in the space of procedures. The question then involves the correlations between points or regions in the two spaces. Certainly, this cannot be an unambiguous correlation, for a particular analytical problem can be solved by various procedures and, inversely, many different problems can be treated with a definite procedure. Many such correlations are already known, but they have not so far been set out systematically. In fact, they are, buried under much more or less incidental material, an essential part of the content of the literature concerning analytical chemistry. Correlations between the "space of problems" and the "space of procedures" collectively will certainly not lead to any transformation which can be mathematically expressed. However, correlations which have already been found empirically can be transferred into a computer memory in such a way that they can be retrieved at a later time. This conception can be realized in practice by characterizing the analytical problems and, likewise, the procedures by numerical codes. The principal concepts of such a classification system correspond to the coordinates in a multi-dimensional space. The code numbers of the minor concepts (descriptors) can then be regarded as the coordinate


values of the particular problems or procedures. Parameters which may be used as coordinates must be independent of each other. For analytical problems, they might, for example, correspond to the following groups:

(a) the elements or compounds which are to be detected and determined, and their ranges of concentration;
(b) the type, composition, homogeneity, and variety of the analytical samples;
(c) the quantity of sample which is available for analysis;
(d) general requirements (precision, accuracy, selectivity, limit of detection for the required analyses);
(e) particular and practical requirements (local analysis, micro-analysis, production control, forensic analysis, etc.);
(f) restrictions as regards costs, laboratory space, time available, ability of the analyst, etc.

Some of these parameters will also recur in the space of the procedures. In addition, others are necessary to describe the analytical operations, reagents, instruments and the evaluation. Certainly, such a system cannot be developed simply from a subject index with cross references, not even from a very extensive one. Decisive, on the contrary, is the systematic hierarchical order of the two related classifications. This alone reduces the number of descriptors and relations which are needed for an effective classification system. It seems possible to describe the analytical problems and the procedures collectively and adequately, and to establish the necessary correlations, with about 350 carefully selected descriptors. This may appear surprising, but it must be realized that the power of a systematic code resides in the many possibilities of forming combinations. For instance, if 10 descriptors were taken together as "words" to be formed from the alphabet of the code, then with 350 descriptors there would be more than 10^25 different words. Many of these combinations will be meaningless, but the remaining meaningful vocabulary will be large enough to express everything necessary. Such conceptions can already be realized at an acceptable cost. (As a relatively limited example, the punched card index used for the Documentation of Molecular Spectroscopy, DMS, Verlag Chemie, Weinheim, Bergstrasse, and Butterworth Scientific Publications, London, can be mentioned.)

2.8. Bibliography

The critical assessment of analytical procedures, which is naturally part of the scientific task of the analytical chemist, is only in the early stages of development. Only since 1945 have statistical tests been made use of to a greater extent (frequency distribution, standard deviation, limits of detection, calculation of regression coefficients for the determination of calibration functions, etc.). However, the terminology employed is not yet uniform, and the concepts themselves are not yet adhered to rigidly and precisely enough. This is the reason why numerical values which are given in the literature, for instance for detection limits, analytical precision, sensitivity, etc., cannot be directly compared. In every case, one must examine critically what is actually meant, whether numerical values are merely "estimated", or which formulae and rules were used for their calculation. Other important concepts, for instance selectivity, specificity, and stability of calibration of analytical procedures, are only conceived "qualitatively", i.e. by instinct; they are still floating about in the outer periphery of scientific concept formation, and thus belong to the "colloquial language" of the specific technical field. Definitions have been tried, in most cases, only for a particular problem. This situation is characteristic of a scientific field which is in a state of rapid development.
Characteristic also is the inclination to introduce the new intellectual tools (particularly mathematical statistics) in an "unconsidered" way, i.e. without the definitions and limitations being given the


necessary precision. Alternatively, these may be laden with too much unnecessary erudition. In such an open situation, the reader himself must form his judgment by comparison; sometimes, on the same page, he may find both wheat and chaff.

These considerations have compelled the author to present the bibliography differently from the other chapters of this book. The bibliography contains mainly new books and papers, arranged according to general subject. Many of these contain extensive references to the older literature, and indicate literature for more extensive studies. Besides the books describing the basis of mathematical statistics, it seemed appropriate also to add some literature dealing with the theory of information and mathematics in general. This was not merely for the purpose of giving the sources of various terms used in this chapter but rather because, as Analytical Chemistry develops into an all-embracing field of science, it will need the intellectual approach of modern mathematics to develop its own systematic concepts.

2.8.1. Trace determination, detection limits, general.
[19] H. KAISER, Zum Problem der Nachweisgrenze, Z. Anal. Chem. 209, 1 (1965).
[20] Analytiker-Tagung Lindau 1966, Teil 1: Analytik kleinster Substanzmengen, Z. Anal. Chem. 221, 1 (1966).
[21] H. KAISER, Zur Definition der Nachweisgrenze, der Garantiegrenze und der dabei benutzten Begriffe, Z. Anal. Chem. 216, 80 (1966).
[22] Trace Characterization, Chemical and Physical, U.S. Department of Commerce, National Bureau of Standards Monograph 100, Washington (1967).
[23] Optical and X-Ray Spectroscopy (Contributed Papers and Discussion), in [22], p. 149.
[24] V. SVOBODA and R. GERBATSCH, Zur Definition von Grenzwerten für das Nachweisvermögen, Z. Anal. Chem. 242, 1 (1968).
[25] H. KAISER and A. C. MENZIES, The Limit of Detection of a Complete Analytical Procedure, Adam Hilger, London (1968).
[26] G. EHRLICH, H. SCHOLZE and R. GERBATSCH, Zur objektiven Bewertung des Nachweisvermögens in der Emissionsspektroskopie IV, Spectrochim. Acta 24B, 641 (1969).
[27] H. KAISER, Quantitation in Elemental Analysis, Anal. Chem. 42, Nr. 2, 24A; Nr. 4, 26A (1970).

2.8.2. Statistics in chemistry.
[28] H. KAISER and H. SPECKER, Bewertung und Vergleich von Analysenverfahren, Z. Anal. Chem. 149, 46 (1956).
[29] W. J. YOUDEN, Statistical Methods for Chemists, 5th ed. John Wiley, New York (1961).
[30] G. GOTTSCHALK, Statistik in der quantitativen chemischen Analyse, Die chemische Analyse, Bd. 49, Ferdinand Enke, Stuttgart (1962).
[31] V. V. NALIMOV, The Application of Mathematical Statistics to Chemical Analysis, Pergamon Press, Oxford (1963).
[32] K. DOERFFEL, Beurteilung von Analysenverfahren und -ergebnissen, 2. Aufl., Springer, Berlin, Heidelberg, New York, J. F. Bergmann, München (1965).
[33] BROOKES, BETTELEY and LOXTON, Mathematics and Statistics for Chemists, John Wiley, New York (1966).
[34] G. GOTTSCHALK, Einführung in die Grundlagen der chemischen Materialprüfung, S. Hirzel Verlag, Stuttgart (1966).

2.8.3. Theory and statistics of experiments.
[35] O. L. DAVIES, Design and Analysis of Industrial Experiments, Oliver & Boyd, London (1960).
[36] N. L. JOHNSON and F. C. LEONE, Statistics and Experimental Design, Vol. I, II, John Wiley, New York, London, Sydney (1964).
[37] M. G. NATRELLA, Experimental Statistics, NBS Handbook 91, Washington (1966).
[38] V. V. NALIMOV, Theory of the Experiment (russ.), Publ. of the Academy, Moscow (1971).

2.8.4. Mathematical statistics, tables, standards.
[39] Deutsche Normen, DIN 55 302, Blatt 1 und Blatt 2.
[40] E. KREYSZIG, Statistische Methoden und ihre Anwendungen, Vandenhoeck & Ruprecht, Göttingen (1965).
[41] K. STANGE and H.-J. HENNING, Formeln und Tabellen der mathematischen Statistik, 2. Aufl. Springer, Berlin, Heidelberg, New York (1966).
[42] G. W. SNEDECOR and W. G. COCHRAN, Statistical Methods, Iowa State Univ. Press, Ames, Iowa (1967).
[43] I. M. CHAKRAVARTI, R. G. LAHA and J. ROY, Handbook of Methods of Applied Statistics, Vol. I and II, John Wiley, New York (1967).
[44] J. PFANZAGL, Allgemeine Methodenlehre der Statistik II, 3. Aufl. Sammlung Göschen, Bd. 747/747a, Walter de Gruyter, Berlin (1968).
[45] S. KOLLER, Neue graphische Tafeln zur Beurteilung statistischer Zahlen, 4. Aufl. Dietrich Steinkopff, Darmstadt (1969).
[46] L. SACHS, Statistische Auswertungsmethoden, 2. Aufl. Springer, Berlin, Heidelberg, New York (1969).

2.8.5. Information theory, mathematics in general.
[47] W. MEYER-EPPLER, Grundlagen und Anwendungen der Informationstheorie, 2. Aufl. Springer, Heidelberg, Berlin, New York (1969).
[48] H. MARGENAU and G. M. MURPHY, The Mathematics of Physics and Chemistry, Vol. I (1964), Vol. II, D. Van Nostrand Company, London (1966).
[49] H. BEHNKE, R. REMMERT, H. G. STEINER and H. TIETZ, Mathematik 1, Fischer Lexikon 29/1, Fischer Bücherei, Frankfurt/Main, Hamburg (1964).
[50] H. BEHNKE and H. TIETZ, Mathematik 2, Fischer Lexikon 9/2, Fischer Bücherei, Frankfurt/Main, Hamburg (1966).