Physica B 306 (2001) 1–9
Micromagnetics: past, present and future

Amikam Aharoni
Department of Electronics, Weizmann Institute of Science, 76100 Rehovoth, Israel
Abstract

The theory of micromagnetics started with great hopes of describing rigorously all magnetization processes in ferromagnets. This goal has never been achieved, or even approached, for several reasons. Some of the reasons are misinterpretations and mistakes, introduced by those who misunderstood the whole theoretical approach. These mistakes must be removed and corrected. Others may be inherent in the physical limitations of the theory, but they can be removed by some modification and generalization of the basic assumptions. The ways to proceed, and the directions that micromagnetics should take in the future, are listed here, based on extrapolating from its history. © 2001 Elsevier Science B.V. All rights reserved.

PACS: 75.60.-d; 76.50.+g

Keywords: Micromagnetics
E-mail address: [email protected] (A. Aharoni).

1. Introduction

One of Brown's early reports [1] on his new theory was entitled "Micromagnetics: Successor to Domain Theory?". In spite of the cautious question mark, this title expressed the naive belief of those days that a complete and rigorous theory of all the magnetization processes in ferromagnets was just around the corner. Domain theory was not satisfactory, because it assumed, not derived, the existence of domains and walls. In micromagnetics, Brown [1] claimed, "domains and walls are not postulated; when they are valid concepts, they must emerge automatically from the theory". It was realized, of course, that the theory lacked something before it could replace domain theory, because its simplest results were orders of magnitude away from the experimental data. Actually, Brown [2] proved already in 1945 that the theory, as it then was, could not fit experiment. It was somehow believed, however, that since this theory treated rigorously almost all energy terms, only a slight modification was still needed to make it complete. We all accepted then Brown's conclusion [1] that "Clearly micromagnetics is not yet ready to eject domain theory from the position that it now holds by default. But micromagnetics can at least formulate explicitly, and attack honestly, problems that domain theory evades. With the new digital calculators to help us, there is little excuse for resorting any longer to such evasion." The "digital calculators" mentioned by Brown did not help. Even modern computers, whose computing power is orders of magnitude larger than that of those machines, cannot resolve all the difficulties of micromagnetics, as is seen in the
following. The main reason is that surface and body imperfections, that are ignored in the theory, play a very important role in determining the real magnetic properties. Qualitatively, this role was quite obvious almost from the beginning, and the discrepancy between the theory and most experimental data, that became known as the Brown paradox, could be explained away in terms of such deficiencies of the theory, as I discussed in two reviews [3,4]. It was thus clear in principle how to improve the theory, and resolve the paradox. It took, however, many years to realize that the problems are more difficult than we used to believe, requiring a serious study of the imperfections, so that the correction is not just a matter of a minor change.

In the meantime, the outstanding fact was that micromagnetics led to wrong results, that could not be fitted to experiment, and the explanations of why there was a misfit did not change this fact. After all, there is always a big difference between doing something right and explaining why it is not right. For this reason, very few were attracted to the new theory. In particular, engineers who designed magnetic circuits and devices kept using the old domain theory, leaving the study of the less useful micromagnetics to those eccentrics who worked on things out of this world. This lack of sympathy, and lack of researchers who were ready to take part in this effort, slowed progress.

This situation changed when some results of micromagnetic calculations began to fit quite well the experimental values for particles used as the magnetic media for recording information on tapes or disks. To improve the recording performance, particles used in this industry were made smaller and smaller from one year to the next, till they reached the size at which some of the neglected effects in the micromagnetics theory happen to be negligible.
The theoretical results could now be used to analyze some experimental data for those particles, which encouraged researchers to start working in micromagnetics. Unfortunately, some of the new researchers were used to different kinds of theories, and failed to understand that when there is more than one nucleation mode, only the one with the smallest eigenvalue has a physical meaning. Instead, when
they compared theoretical nucleation fields with experimental values of the reversal field, and the theoretical value for nucleation by buckling, for example, was closer to the experimental value, but larger than that of the nucleation by curling, they concluded [5] "that particles of the type investigated here do not reverse by curling." Of course, physics prefers theories that agree with experiment, but this criterion is not always [6] the only one. A theory that does not agree with experiment needs to be corrected. This correction, however, cannot be done by a wrong and irrelevant substitution.

The rigorous nature of the reversal modes obtained by solving Brown's differential equations was also misunderstood. People were only interested in the names of eigenmodes, and assumed that by inventing new names they could have more mechanisms, that were on an equal footing with the [7] "existing models". The energy of these ill-defined models was calculated by using very poor and unjustified approximations, but the desired result could be obtained by choosing values for some poorly defined adjustable parameters. I tried to point out [8] that such quasi-theories are no more than an exercise in futility, but at least Knowles was sufficiently unconvinced to publish [9] a "reply", in which he claimed that the irregular shape of real particles invalidated the theoretical results for ellipsoids, allowing him to legislate curling out of existence, and choose models at will. The fallacy of this argument is obvious. Valid results for non-ellipsoidal bodies call for a theory that takes into account the exact shapes, and not for bad approximations in a theory that maintains the ellipsoidal shapes. Yet, this kind of logic is still used to justify approximations in the calculations of ellipsoids while ignoring the rigorous results for ellipsoids. See also Section 7 here, and Section 9.4 of Ref. [10].
Some of the mistakes in this new phase of micromagnetics are more subtle and less obvious than in that example. Many of them are inaccuracies in the calculation of the magnetostatic energy, that do not seem serious at first glance, but can lead to completely wrong final results. Therefore, the next section outlines the special nature of this energy term, that requires particular care in its
computation. Other aspects of micromagnetics, and an outlook on its future, are then listed by subject in the following sections.
2. Magnetostatic energy

One of the most outstanding features of this energy term is that it has a very long range. Because of this range, it is defined by a six-fold integral in three dimensions, in contradistinction to the anisotropy and exchange energy terms, that are defined by three-fold integrals. In a numerical computation of n unit cells, the long range means that the magnetostatic energy term includes an interaction of every cell with all the other cells, thus involving n² terms, whereas only n terms are required for computing the other energy terms. Therefore, computing the magnetostatic energy takes almost all the computer time in a typical micromagnetic computation. It is also the energy term with the heaviest demand on the computer memory, which means that it determines the limit of the size of the body that a computer can handle. This feature is a nuisance, but it must be borne in mind that it is an inevitable nuisance, without any way around it. A nice attempt [11] to replace the long-range force by an "equivalent" short-range one only led [12] to wrong results.

The easiest problems to solve are those in which the magnetostatic term is negligible, and can be left out, but such problems are very rare. It was neglected anyway, unjustifiably, in several cases. One example is leaving out the magnetostatic energy in a reversal model [13] that claimed to replace "the curling models", hoping it would turn out to be negligible, at least for some materials. It did not, and an exact calculation [14] showed that this approximation could not be valid for any material, and that in the region of any physical interest, the neglected energy term was much larger than the terms that were taken into account. Braun [15] then came up with the interesting idea that the magnetostatic energy was not neglected, but was included in the renormalized values of the anisotropy constants.
For comparing nucleation fields, however, he [15] did not use the same normalizing value for the different cases. It means
[16] using a scale for the nucleation field by his model which is different from the one used for the nucleation field by the curling mode, making their comparison meaningless. If such a normalization were possible, it would have solved the age-old problem (Mark 10:25 and Luke 18:25) of how a camel can go through the eye of a needle. The camel could be normalized to half the size of the eye of the needle, and then it could walk easily through, with some space to spare.

A long range also means that magnetostatic energies must be computed to a high accuracy, because errors accumulate when many terms are added together, and a small error can grow into an intolerable one. It is thus strange that many workers just ignore this requirement, and use rough approximations. The first to compute the magnetostatic energy efficiently was LaBonte (then a student of Brown), in one [17] and then in two [18] dimensions. A two-dimensional magnetization structure is assumed to be independent of the coordinate z, and the computation is carried out in a given part of the xy-plane, divided into Nx × Ny square prisms. The basic assumption is that the magnetization M does not vary within each of the prisms. Then the magnetostatic energy per unit length along z can be written [18] as a sum over I, J, I′ and J′ of all the products of Mx and My at the prisms (I, J) and (I′, J′). These products are weighted by certain coefficients, Am and Cm, expressed by some integrals that have been evaluated [18] analytically. The analytic evaluation is not essential, in this case or in its extension [19] to three dimensions. I could never understand why, but some [20,21] prefer to obtain these coefficients by numerical integration. This personal preference should not make any difference, provided the numerical evaluation of Am and Cm is done to an adequate accuracy. It must actually be a higher accuracy than that used in the rest of the computations. Otherwise, errors grow.
The main advantage of this method is that Am and Cm need not be evaluated over and over again with every iteration of the minimization process, that typically calls for computing the energy thousands of times. They are only computed once,
and stored to be used in the main computation. Therefore, the time it takes to compute Am and Cm is always negligibly small, so that they may as well be computed rigorously. This point, however, was somehow missed by many workers, who introduced approximations into the computation of Am and Cm (or their equivalent). Thus the integral over the faces of a cube was replaced by a dipole at its center, or by the field at the cube center of the charge on its surfaces, or similar approximations, as reviewed in Ref. [22]. Such approximations, that introduce errors without even saving any computation time, must be eliminated before computations can be taken as significant. Some use other methods, also reviewed in Ref. [22], that are more suitable for paramagnets than for ferromagnets. They either need an impractically long computation time, or use rough approximations, or both.

Another, and a very serious, problem with many of the published results is the use of too rough a subdivision, either in the LaBonte method, or in the recently more popular FFT, or in any other method. A rough subdivision saves computer time and resources, but yields meaningless results, that are not easy to distinguish from valid results. In my mind, such computations are the main reason for the lack of any real advance in micromagnetics in recent years, and a way must be found to eliminate them; see also Section 5.
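The cost structure described above, an n² pairwise sum for the magnetostatic term against n local terms for the others, with the interaction coefficients precomputed once and reused over thousands of iterations, can be sketched as follows. This is a minimal illustration of the bookkeeping only: the 1/r³ kernel is a crude point-dipole stand-in with invented values, not the rigorously integrated coefficients Am and Cm of Ref. [18] that an actual computation requires.

```python
import numpy as np

def precompute_kernel(positions):
    """Pairwise interaction coefficients: computed ONCE, n^2 storage.
    (Placeholder 1/r^3 weights, standing in for the exact integrals.)"""
    d = positions[:, None, :] - positions[None, :, :]   # (n, n, 3) separations
    r = np.linalg.norm(d, axis=-1)
    np.fill_diagonal(r, np.inf)                         # no self-interaction
    return 1.0 / r**3                                   # diagonal becomes 0

def magnetostatic_energy(m, kernel):
    """n^2 terms: every cell interacts with every other cell."""
    s = m @ m.T                                         # (n, n) dot products
    return 0.5 * np.sum(kernel * s)

def anisotropy_energy(m, easy_axis, K1=1.0):
    """Only n terms: purely local."""
    return -K1 * np.sum((m @ easy_axis) ** 2)

n = 64
rng = np.random.default_rng(0)
pos = rng.random((n, 3))
m = rng.normal(size=(n, 3))
m /= np.linalg.norm(m, axis=1, keepdims=True)           # unit magnetization

kernel = precompute_kernel(pos)      # once, before the minimization loop
for _ in range(5):                   # stands in for thousands of iterations
    e_ms = magnetostatic_energy(m, kernel)    # reuses the stored kernel
    e_an = anisotropy_energy(m, np.array([0.0, 0.0, 1.0]))
```

Since the kernel is fixed by the geometry alone, there is indeed no time saved by approximating it, exactly as argued above.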
3. Bulk hard materials

Brown's paradox was first found in hard magnetic materials, namely materials for which |K1| ≫ πMs², where K1 is the first-order anisotropy constant and Ms is the saturation magnetization. It started with the proof [2] that the coercivity Hc of any ferromagnet must be at least 2|K1|/Ms − NMs, where N is its demagnetizing factor; the second term is negligible for hard materials. Experimentally, Hc is usually smaller than 2|K1|/Ms, often smaller by several orders of magnitude. There are indications that this paradox is caused in hard materials by crystalline imperfections, that are not taken into account in the theory. Qualitatively, detailed observations, such as [23], show that reversed domains nucleate at well-defined sites of
a given crystal, presumably at the points at which there are some crystalline imperfections, and then propagate to more perfect parts of the crystal. The nature of these imperfections has not been established, and they may be different in different cases, but impurity atoms and dislocations seem to be good candidates. At least a model assuming dislocations had [3] a semi-quantitative success. Some such "nucleation centers" were produced [24] by pricking a crystal with a needle. Other experiments and theories are discussed in Section 9.5.1 of Ref. [10], but they are all inadequate to count as a valid theoretical picture of the magnetization processes even in almost perfect crystals, let alone polycrystalline materials.

The situation is similar to saying that Ge or Si do not behave according to the theory of ideal crystals because there are impurity atoms in the lattice. It took a detailed theory of the role that such impurities play to achieve a transistor, but there is no such theory for imperfections in hard magnetic materials. The current theoretical studies are just the old domain theory that Brown had tried to replace. Moreover, even the domain theory is complicated by the tendency of the domain wall to be so narrow that the angle between neighboring spins in the wall is too large for the usual micromagnetics continuum approximation to apply. Walls in hard materials must be studied [25] using the exchange interaction between discrete atoms. Another complication in developing a micromagnetics theory is that hard materials are very difficult to saturate, so that many experimental data are for minor loops. Often they are not even presented as minor loops, see Section 9.5.1 of Ref. [10], which complicates the problem further. Of course, a good theory should also deal eventually with minor loops, but it is easier to start with a simpler case.
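Brown's lower bound on the coercivity, 2|K1|/Ms − NMs, is easy to evaluate. The sketch below uses nominal CGS values for a hard barium-ferrite-like material; the numbers are assumed for illustration and are not taken from the text.

```python
import math

# Brown's bound Hc >= 2|K1|/Ms - N*Ms, with nominal CGS values for a
# hard ferrite (assumed illustrative constants, not from the article):
K1 = 3.3e6     # erg/cm^3, first-order anisotropy constant
Ms = 380.0     # emu/cm^3, saturation magnetization
N = 4.0 * math.pi / 3.0        # demagnetizing factor of a sphere (CGS)

hc_bound = 2.0 * abs(K1) / Ms - N * Ms
print(f"Brown's bound: Hc >= {hc_bound:.0f} Oe "
      f"(2|K1|/Ms = {2 * abs(K1) / Ms:.0f} Oe, N*Ms = {N * Ms:.0f} Oe)")
```

With these values the bound is well above 10 kOe, while measured coercivities of such materials are typically far smaller, which is exactly the paradox discussed above.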
It may also be interesting to note that Brown [26,27] found upper and lower bounds to the ‘‘single-domain’’ size of a ferromagnetic sphere. For soft materials, these bounds are nearly the same, so that it is not necessary to calculate the exact value. For hard materials, however, the upper and lower bounds differ by orders of magnitude, so that their evaluation does not
supply any information, and the problem needs a completely different approach.

Also, most hard materials are made of two different magnetic phases, in contact with each other. A theory that applies to such composite materials should specify the boundary conditions at the interface, but this part is not known. An attempt at such a theory considered [28] the different exchange on the surface between two materials, expressed as a certain surface integral. For the case of no other surface anisotropy, it led to the postulate

$$\frac{C_1}{2}\,\frac{\partial \mathbf{M}_1}{\partial n_1}\times\mathbf{M}_1 - K_{12}\,\frac{\mathbf{M}_1\times\mathbf{M}_2}{M_1} = 0, \tag{1}$$

$$\frac{C_2}{2}\,\frac{\partial \mathbf{M}_2}{\partial n_2}\times\mathbf{M}_1 - K_{12}\,\frac{\mathbf{M}_2\times\mathbf{M}_1}{M_2} = 0, \tag{2}$$

on the surface between M1 (with exchange constant C1) and M2 (with exchange constant C2). Here n1 and n2 are the normals from either side of the interface, and K12 is a parameter of the theory. This postulate, however, is [29] impossible. A boundary condition must pass continuously to the one for any other surface when M1 → M2, and in this limit, Eqs. (1) and (2) lead to M × ∂M/∂n = 0, which must apply to every arbitrary surface inside the ferromagnet. Such a requirement is much too strong to adopt, making these boundary conditions unacceptable. Others are badly needed.
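The continuity argument can be checked numerically. The sketch below evaluates the left-hand side of Eq. (1) with arbitrary invented values and lets M2 slide toward M1, showing that the K12 coupling term vanishes in the limit, so the condition collapses to M × ∂M/∂n = 0 on any internal surface.

```python
import numpy as np

# Arbitrary illustrative values (not physical constants from the text):
C1, K12 = 2.0, 0.7
M1 = np.array([0.0, 0.0, 480.0])        # magnetization on the near side
dM1dn = np.array([3.0, -1.0, 0.5])      # some normal derivative of M1

def eq1_residual(M2):
    """Left-hand side of Eq. (1) for a given M2 on the far side."""
    exchange = 0.5 * C1 * np.cross(dM1dn, M1)
    coupling = K12 * np.cross(M1, M2) / np.linalg.norm(M1)
    return exchange - coupling

M2 = np.array([100.0, 50.0, 460.0])
# M2 slides toward M1; at t = 1 the coupling term M1 x M2 is exactly zero,
# so the residual is just (C1/2) dM1/dn x M1, i.e. Eq. (1) then demands
# M x dM/dn = 0 -- the requirement criticized in Ref. [29].
residuals = [eq1_residual((1 - t) * M2 + t * M1)
             for t in (0.0, 0.9, 0.99, 1.0)]
```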
4. Bulk soft materials

Already in 1959, De Blois [30] found that the micromagnetics nucleation theory, which does not agree with measurements on a whole iron whisker, does agree with experimental values in the perfect parts of the whisker. Later refinements of the experiment [31] showed that the discrepancy in the imperfect parts was due to nucleation of reversed domains at surface pits or bulges. This conclusion was supported by a statistical correlation [32] between the probability of finding a defect and the measured nucleation field. It was made quantitative by measuring [33] the local nucleation field as a function of the volume of the surface roughness (or internal voids) there. Other extensions of this
experiment include similar measurements [34] of local nucleation in thin films, and a study [35] of whiskers under tensile stress.

An attempt [36,37] to explain the effect of surface roughness on the nucleation field failed to reproduce the quantitative results of [33]. Since the primitive computers of that time were inadequate for this task, the problem was tackled by an analytic model of a pit with cylindrical symmetry, namely a groove machined all around the whisker. Obviously, such a model is too crude to represent real surface roughness, so that the failure is hardly surprising. With present-day computers it should be easy to solve this problem of nucleation in a nearly perfect, soft whisker rigorously. All it takes is to divide a prism into a sufficient number of cubes, and remove some of these cubes at different places on the surface. The modeled whisker need not even be very elongated. It is sufficient to apply a large enough bias field for saturating the part that is not under investigation, as is indeed done [30,31,33] in the De Blois experiment. Such a separation of the edges also removes the effect of the whisker tips, which is a problem by itself, and should be studied separately. It is connected to the problem of the effect of the sharp edge of a cube or a prism, that is further discussed in Section 7.

Such a detailed modeling of the De Blois experiment is long overdue. If reliable computations fail to reproduce the experimental data, something must be wrong with the whole micromagnetics approach, as some people claim. If they do reproduce those results, it will be a big step towards understanding the whole field, and an important start for advancing to a solution of the more complicated problems. I do hope it will be done soon, and I can only add that I believe the whisker geometry is essential for a clear-cut result.
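The cube-removal procedure proposed above can be set up in a few lines. The sketch below only constructs the discretized geometry; all dimensions and defect sites are invented for illustration, and the actual energy minimization over this geometry is left to a micromagnetic solver.

```python
import numpy as np

# A prism divided into cubes, with individual cubes knocked out of the
# surface to model pits (the De Blois-type numerical experiment).
# Sizes and defect positions are purely illustrative.
nx, ny, nz = 12, 12, 60                 # cubes along each axis (assumed)
occupied = np.ones((nx, ny, nz), dtype=bool)

def remove_surface_cube(mask, x, y, z):
    """Remove one cube; only meaningful if it sits on the surface."""
    on_surface = (x in (0, mask.shape[0] - 1) or
                  y in (0, mask.shape[1] - 1) or
                  z in (0, mask.shape[2] - 1))
    if not on_surface:
        raise ValueError("defect must be on the surface")
    mask[x, y, z] = False

# a few pits at different places on the surface, as suggested in the text
for site in [(0, 5, 20), (11, 3, 41), (4, 0, 10)]:
    remove_surface_cube(occupied, *site)

print(occupied.sum(), "of", occupied.size, "cubes remain")
```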
I cannot prove it, but I guess that there may be too many difficulties if this removal of cubes from the surface is tried for the thin-film geometry, such as that of NIST problem #1, which I prefer to discuss separately in the next section. At some stage, computations must also address the possible effect of crystalline imperfections, by giving some of the subdivisions, for example, a different anisotropy than the others. Surface
roughness alone is most probably sufficient to account for the nucleation, as in the De Blois experiment. It is, however, quite likely that crystalline defects, such as dislocations and impurity atoms, affect the rest of the magnetization curve, after nucleation. They seem very likely to have a large effect on the domain wall motion, but at this stage their role is an open question, that should be investigated.

Another problem with micromagnetic computations is that only rather small samples can be studied numerically with a sufficient accuracy, because rough subdivisions lead to big errors, and computer resources are limited. This effect has been encountered in the computation of a domain wall in soft materials, discussed in Section 6, and probably also in NIST problem #1, discussed in Section 5. A question which has been completely ignored so far is whether this limited sample size is sufficient to reveal the long-range nature of the magnetostatic energy, discussed in Section 2. I cannot answer this question, but the following may provide a possible hint on how to study it.

The theoretical turn-over of an isolated sphere into a "single domain" particle is [26,27] when its radius R is larger than

$$R_{c0} = \frac{q_2}{M_s}\sqrt{\frac{3C}{4\pi}} \approx 1.017\,\frac{\sqrt{C}}{M_s},\qquad q_2 = 2.0816, \tag{3}$$

and smaller than the smaller of the two entities

$$R_{c1} = \frac{2.2646\sqrt{C/(\pi M_s^2)}}{\sqrt{1 - 1.4038\,|K_1|/(\pi M_s^2)}} \tag{4}$$

and

$$R_{c2} = \frac{9\sqrt{C(|K_1| + 8\pi s M_s^2)}}{8(3s - 2)M_s^2},\qquad s = 0.785398, \tag{5}$$

where C (= 2A) is the exchange constant. For a soft material, these bounds of Brown are close together. For example, for Fe the bounds imply a critical radius between 8.46 and 11.0 nm, which is almost as good as evaluating the critical size itself. This critical size was not measured for isolated particles. Experimental values exist only for bulk ferromagnets, made of highly interacting grains, for which there is no good theory. It was noted [38], however, that the experimental critical size in some ferrites fits between these upper and lower bounds, when they are modified to approximate the interactions among particles. The modification is to replace Ms² in Brown's bounds by λMs²/μ, where μ is the initial permeability of the material, and λ is a numerical factor, of the order of 1. The justification for this λ comes from estimating [39] the effect of neighboring grains on the magnetostatic energy term of "moderately hard" polycrystalline ferrites in several simplified geometries. For most of his models, Knowles [39] obtained λ of around 1/4 to 2/3, although he also found some other values. Similar calculations [40] led to λ = 1.5. I tried a slightly more elaborate model, in which a detailed magnetization structure, belonging to a true upper bound of an isolated sphere, interacted with eight saturated spheres surrounding it. The result [41] was similar to that of the cruder models, with λ ≈ 0.5.

None of these calculations leads to the initial permeability, μ, which is [42] about 100 for NiZn-ferrite, and even considerably larger [38] for MgMnZn-ferrite. This factor was superimposed [39] on the magnetostatic energy calculation not as part of micromagnetics, but as a conclusion from the old domain theory. Since the theory does not agree with experiment without this large factor, that cannot be obtained from micromagnetics with only short-range interactions included, it must be caused by the long-range interactions of very many particles, which are necessary to support a subdivision into domains. It may thus indicate that all computations with present-day computers will turn out to be inadequate to include the real long-range nature of ferromagnetism.
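Brown's bounds of Eqs. (3)-(5) can be evaluated directly. The sketch below uses assumed nominal CGS constants for iron (C = 2×10⁻⁶ erg/cm, Ms = 1714 G, K1 = 4.7×10⁵ erg/cm³); with these round numbers the bounds come out close to, though not exactly at, the 8.46-11.0 nm quoted in the text, which corresponds to slightly different material constants.

```python
import math

# Nominal CGS values for Fe (assumed for illustration):
C, Ms, K1 = 2.0e-6, 1714.0, 4.7e5
q2 = 2.0816

# Eq. (3): lower bound of the single-domain radius
Rc0 = (q2 / Ms) * math.sqrt(3.0 * C / (4.0 * math.pi))

# Eq. (4): one candidate upper bound
Rc1 = (2.2646 * math.sqrt(C / (math.pi * Ms**2))
       / math.sqrt(1.0 - 1.4038 * abs(K1) / (math.pi * Ms**2)))

# Eq. (5): the other candidate upper bound
s = 0.785398
Rc2 = (9.0 * math.sqrt(C * (abs(K1) + 8.0 * math.pi * s * Ms**2))
       / (8.0 * (3.0 * s - 2.0) * Ms**2))

nm = 1.0e-7  # cm per nm
print(f"lower bound Rc0 = {Rc0 / nm:.2f} nm")
print(f"upper bound min(Rc1, Rc2) = {min(Rc1, Rc2) / nm:.2f} nm")
```

For a soft material such as Fe, Rc2 comes out much larger than Rc1, so the effective upper bound is Rc1, and the two bounds bracket the critical radius tightly, as stated in the text.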
5. NIST problem #1

For many years there were doubts concerning the validity and reliability of many of the published micromagnetic computations, especially since publications often omitted [10] crucial parameters, such as the discretization size or the convergence criterion. In 1997, NIST set out to check this point by comparing the hysteresis curves computed by different groups for the same body shape and size,
with the same physical parameters. It was advertised on the web as problem #1. The results [43] had a very wide distribution. Thus, for example, the coercivity with the field applied along the longest axis of the given prism varied from 2.4 mT in one result to 32.9 mT in another. For the field applied along the second-longest axis, coercivities varied by two orders of magnitude, from 0.1 to 9.8 mT. NIST regarded this spread as an indication of "serious issues to address in computational micromagnetics" (R.M. McMichael, in a circular to the Discussion Group), but was happy to treat them as information to be used in preparing the next problems. I fail to understand why they would not try to find out the reason for these large differences, by asking each participant to go back and try to change something, such as the convergence criterion.

In my mind, these results are not just a lesson to be learned, but a catastrophe, that casts doubt on all published computations. Thus, e.g., Rave explained all the precautions they took, but then found it necessary to add [44] "of course we cannot exclude errors for the present calculations". After all, the people who submitted results to problem #1 cannot be different from the people who publish results in journals, and must be making the same mistakes. The success of other NIST problems is not relevant before we know why computations by one group lead to a value which is of a different order of magnitude than that of another. And as long as there is no satisfactory answer to this failure, including a recipe for avoiding it, no published computational result can be trusted, including the ones in this symposium. The discrepancy here is not even similar to comparing theory with experimental results that may depend on effects that are neglected in the theory, such as defects. There are no defects here, and the results of two computations of the same problem must be identical, to within the computational accuracy.
I believe that correct computations are possible, and that the main difficulty with problem #1 is the choice of too large a body, which forces people to use too crude subdivisions, or crude approximations for the magnetostatic energy, or both. My
belief, however, is not a substitute for a valid scientific check. There is no way to be sure except by restarting this (or a very similar) problem, asking the participants to change, and specify, the crucial parameters, such as the discretization size, and seeing if the different results converge towards a common answer. The method of computing the magnetostatic energy must also be specified, because it is likely that one of the important results will be that one or more of these methods is just wrong. Perhaps it was difficult to do this in the original problem because of the requirement of anonymity, but this requirement is not necessary, and should be eliminated in restarting the problem.
6. Domain walls

Computations of the structure and energy of a single, isolated domain wall start by imposing its existence on the boundary conditions. As such, they are not really part of micromagnetics as Brown saw it in the beginning. Nevertheless, they have been taken as a part of it ever since [17], and they are now actually the most complete, and most reliable, computations. This problem is generally considered to be solved for medium-thickness films of soft materials, such as iron or permalloy around 10²-10³ nm film thickness. In this range, the energy minimization is rigorous, and results are in good agreement with measured details [45] of the surface part of the wall. For thinner films, all computations use approximations for the magnetostatic energy, and are unreliable, see Section 11.3.1 of Ref. [10]. This situation will hopefully soon be improved by avoiding [46] the approximations. It should then be possible to compare the theoretical wall width and wall energy with the experimental values, that are mostly [47] in the under-10² nm film thickness range. Rigorous computations for Fe films can be carried [48] up to 3 to 4 μm, at which thickness computer resources are exhausted. It is not known what structure the wall assumes, or what its energy is, for thicker films, and ways should be found to extend this region. The estimations of the wall
energy in the bulk are based on the Landau and Lifshitz theory for an infinite material, and are most probably wrong.
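For orientation, the bulk estimates in question are the standard Landau-Lifshitz expressions for a 180° wall, width ≈ π√(A/K1) and energy ≈ 4√(A·K1). A quick evaluation with assumed nominal constants for iron (A = 10⁻⁶ erg/cm and K1 = 4.7×10⁵ erg/cm³ are illustrative values, not taken from the text):

```python
import math

# Classic Landau-Lifshitz estimates for a 180-degree Bloch wall in an
# infinite material, with assumed nominal CGS constants for iron:
A, K1 = 1.0e-6, 4.7e5    # erg/cm, erg/cm^3 (illustrative values)

width_nm = math.pi * math.sqrt(A / K1) / 1.0e-7   # cm -> nm
energy = 4.0 * math.sqrt(A * K1)                  # erg/cm^2
print(f"wall width ~ {width_nm:.0f} nm, wall energy ~ {energy:.2f} erg/cm^2")
```

These are precisely the infinite-material estimates whose validity for thick films is questioned above, since they ignore the surface structure of the wall.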
7. Recording particles

Iron-oxide particles used as recording media are ideal for micromagnetic computations, because their size is compatible with today's computer resources. There are several experimental studies of individual particles, with which it should be easy to compare the theoretical results. Moreover, their behavior roughly agrees with the rough micromagnetic estimations, which makes it possible to study the corrections that the theory needs in order to become more realistic than it is. Computations are thus expected to bring the theoretical values even closer to the experimental ones, by using reasonable values (preferably taken from direct measurements) for effects that are not considered in the simplified micromagnetics calculations, such as magnetostriction, surface roughness, surface anisotropy, impurities, and crystalline defects. Some of them were left out of the old studies only because [10] there were no reliable experimental values to use, not because they are negligible.

Instead of doing that, those who compute properties of recording particles mostly set their minds on "proving" that they can do better than the old results. They stick to the old approximations, justified or otherwise, and try to fit experiment by producing new modes that have already been proved to be physically impossible. For comparison they use [49,50] the analytic formula for an infinite cylinder instead of putting in the correct dimensions, assume a single crystal when polycrystals are used in the experiment, use arbitrarily chosen values for the unknown exchange constant, etc. As long as this approach prevails, the subject cannot be advanced.

For a long time the sharp corner of the cube used in these computations was claimed to make it special, and not to allow a comparison with the previous analytic results. This point is settled now by the systematic studies [51-53] that have proved that there is nothing special in a cube, provided the discretization is sufficiently fine. Of course, nucleation has to be redefined for a cube, but the essential and most important part is to use a fine discretization, in spite of the claim [54] that a rough discretization is sufficient for studying recording particles. See also [49,50].
References

[1] W.F. Brown Jr., J. Phys. Radium 20 (1959) 101.
[2] W.F. Brown Jr., Rev. Mod. Phys. 17 (1945) 15.
[3] A. Aharoni, Rev. Mod. Phys. 34 (1962) 227.
[4] A. Aharoni, Phys. Stat. Sol. 16 (1966) 3.
[5] J.E. Knowles, IEEE Trans. Magn. 16 (1980) 62.
[6] A. Aharoni, Phys. Today (June 1995) 33.
[7] J.E. Knowles, J. Magn. Magn. Mater. 61 (1986) 121.
[8] A. Aharoni, IEEE Trans. Magn. 22 (1986) 478.
[9] J.E. Knowles, IEEE Trans. Magn. 24 (1988) 2263.
[10] A. Aharoni, Introduction to the Theory of Ferromagnetism, Oxford University Press, Oxford, 1996 (2nd Edition, 2001).
[11] H. Hoffmann, IEEE Trans. Magn. 4 (1968) 32.
[12] W.F. Brown Jr., IEEE Trans. Magn. 6 (1970) 121.
[13] H.-B. Braun, H.N. Bertram, J. Appl. Phys. 75 (1994) 4609.
[14] A. Aharoni, J. Magn. Magn. Mater. 140-144 (1995) 1819.
[15] H.-B. Braun, J. Appl. Phys. 76 (1994) 6310.
[16] A. Aharoni, J. Appl. Phys. 80 (1996) 3133.
[17] W.F. Brown Jr., A.E. LaBonte, J. Appl. Phys. 36 (1965) 1380.
[18] A.E. LaBonte, J. Appl. Phys. 40 (1969) 2453.
[19] M.E. Schabes, A. Aharoni, IEEE Trans. Magn. 23 (1987) 3882.
[20] Y.D. Yan, E. Della Torre, IEEE Trans. Magn. 25 (1989) 2919.
[21] W. Chen, D.R. Fredkin, T.R. Koehler, IEEE Trans. Magn. 29 (1993) 2124.
[22] A. Aharoni, IEEE Trans. Magn. 27 (1991) 3539.
[23] C. Kooy, U. Enz, Philips Res. Rep. 15 (1960) 7.
[24] T. Kusunda, S. Honda, Appl. Phys. Lett. 24 (1974) 516.
[25] H.R. Hilzinger, H. Kronmüller, Phys. Stat. Sol. (B) 54 (1972) 593.
[26] W.F. Brown Jr., J. Appl. Phys. 39 (1968) 993.
[27] W.F. Brown Jr., Ann. NY Acad. Sci. 147 (1969) 461.
[28] F. Goedsche, Acta Phys. Pol. A 37 (1970) 515.
[29] A. Aharoni, CRC Crit. Rev. Solid State Sci. 1 (1971) 121.
[30] R.W. De Blois, C.P. Bean, J. Appl. Phys. 30 (1959) 225S.
[31] R.W. De Blois, J. Appl. Phys. 32 (1961) 1561.
[32] W.F. Brown Jr., J. Appl. Phys. 33 (1962) 3022.
[33] A. Aharoni, E. Neeman, Phys. Lett. 6 (1963) 241.
[34] F. Schuler, J. Appl. Phys. 33 (1962) 1845.
[35] W.B. Self, P.L. Edwards, J. Appl. Phys. 43 (1972) 199.
[36] A. Aharoni, J. Appl. Phys. 39 (1968) 5846, 5850.
[37] A. Aharoni, J. Appl. Phys. 41 (1970) 2484.
[38] P.J. van der Zaag, M. Kolenbrander, M.Th. Rekveldt, J. Appl. Phys. 83 (1998) 6870.
[39] J.E. Knowles, Br. J. Appl. Phys. 1 (1968) 987.
[40] P.J. van der Zaag, J.J.M. Ruigrok, A. Noordermeer, M.H.W.M. van Delden, P.T. Por, M.Th. Rekveldt, D.M. Donnet, J.N. Chapman, J. Appl. Phys. 74 (1993) 4085.
[41] A. Aharoni, J. Appl. Phys., in press.
[42] P.J. van der Zaag, P.J. van der Valk, M.Th. Rekveldt, Appl. Phys. Lett. 69 (1996) 2927.
[43] www.ctcms.nist.gov/~rdm/std1/prob1report.html.
[44] W. Rave, A. Hubert, IEEE Trans. Magn. 36 (2000) 3886.
[45] M.R. Scheinfein, J. Unguris, J.L. Blue, K.J. Coakley, D.T. Pierce, R.J. Celotta, P.J. Ryan, Phys. Rev. B 43 (1991) 3395.
[46] A. Aharoni, J. Phys.: Condens. Matter 10 (1998) 9495.
[47] A. Aharoni, J. Phys. (Paris) Colloq. C1 32 (1971) 966.
[48] A. Aharoni, J.P. Jakubovics, Phys. Rev. B 43 (1991) 1290.
[49] A. Aharoni, J. Magn. Magn. Mater. 146-147 (1999) 786.
[50] A. Aharoni, J. Magn. Magn. Mater. 203 (1999) 33.
[51] W. Rave, K. Ramstöck, A. Hubert, J. Magn. Magn. Mater. 183 (1998) 329.
[52] W. Rave, K. Fabian, A. Hubert, J. Magn. Magn. Mater. 190 (1998) 332.
[53] A. Thiaville, D. Tomáš, J. Miltat, Phys. Stat. Sol. (A) 170 (1998) 125.
[54] C. Seberino, H.N. Bertram, IEEE Trans. Magn. 33 (1997) 3055.