
NUMERICAL ANALYSIS

the general study of methods for solving complicated problems using the basic operations of arithmetic (addition, subtraction, multiplication, and division). The development of digital computers, which perform only the basic arithmetic operations, has made numerical analysis one of the basic areas of applied mathematics, and it is now central to much of modern science and engineering; to use computers in solving problems of science and engineering, numerical analysis must be applied at some stage. Numerical analysis includes the development of methods for approximating the processes of ordinary mathematical analysis (i.e., calculus) by arithmetic operations.

The plan of how the arithmetic is to be organized to solve a specific class of problems is called an algorithm. The detailed organization of procedures that a machine may take to implement a given algorithm is called a (machine) program. The list of detailed instructions or commands a specific machine employs to carry out a program is called a code.

The historical roots of numerical analysis are identical to those of mathematics. The problems considered at the dawn of mathematics concerned what are now simple problems of arithmetic and geometry, but the Greeks were already treating sophisticated problems of calculus by methods related to current techniques of numerical analysis. For example, the circumference, C, of a circle of radius r is known to be C = 2πr, where the Greek letter π represents the irrational number 3.1415926… . By constructing the regular N-sided polygons (N-gons) that are inscribed in and circumscribed about this circle, one easily obtains, respectively, lower and upper bounds on C. Specifically, if lN = NhN is the sum of the lengths, hN, of the sides of the inscribed regular N-gon and LN = NHN is the sum of the lengths, HN, of the sides of the circumscribed regular N-gon, then lN < C < LN.

When a system of N simultaneous linear equations is reduced by Gaussian elimination, a triangular set of equations, in which the ith equation involves only the unknowns xj with j ≥ i, is obtained. The Nth such equation specifies xN directly.
The value of xN can then be substituted into the (N − 1)st equation to get xN−1, and so on. This is an entirely mechanical process that can be programmed easily for a computer. It is called an algorithm: a specification of a computational process that will determine the answer to a problem in a finite number of steps. It should be noted that if exact arithmetic could be used in this algorithm, and if the division in equation (12) and subsequent divisions were possible (that is, if A11 and the subsequent Aii were not zero), the solution of the linear equations could be computed exactly.

Figure 6: Graphic solution of a pair of linear equations (see text).

In a practical computation, arithmetic cannot be performed exactly, so round-off errors will affect the answers. Their effect can be seen by considering a graphic interpretation of an equation: an equation of the form (7) represents a straight line on a graph relating x1 and x2, as in Figure 6. Any point on the line l1 shown has coordinates x1 and x2 that satisfy equation (7). Similarly, equation (8) represents another straight line, shown as l2 in Figure 6, and any point on that line satisfies equation (8). Hence, the values of x1 and x2 that satisfy both (7) and (8) must lie on both lines, so these values are the coordinates of the intersection of the two lines. If the lines are not parallel, they have a unique intersection, and the equations have a unique solution.

Figure 7: Effect of round-off errors on the graphic representation of a linear equation.
Figure 8: Graphic solution of two linear equations, showing the effect of round-off error.

Round-off errors in the computation have an effect similar to that of changing the coefficients of the equations. Hence, in the presence of round-off errors, a computer works not with an ideal line but with a narrow region that can be thought of as a thick line, as shown in Figure 7.
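The elimination-and-back-substitution algorithm described above can be sketched in a few lines of Python. The function name and the two-equation example are illustrative, not from the original article, and no pivoting is done, so a zero (or tiny) diagonal divisor will cause trouble, exactly as the text warns.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination followed by back substitution.

    A is a list of N rows of N coefficients; b is the right-hand side.
    No pivoting is done, so a zero (or very small) A[i][i] is fatal.
    """
    N = len(A)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    # Forward elimination: reduce to a triangular system.
    for i in range(N):
        for k in range(i + 1, N):
            m = A[k][i] / A[i][i]      # the multiplier (fails if A[i][i] == 0)
            for j in range(i, N):
                A[k][j] -= m * A[i][j]
            b[k] -= m * b[i]
    # Back substitution: the Nth equation gives x_N directly, and so on upward.
    x = [0.0] * N
    for i in range(N - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, N))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 3*x1 + 2*x2 = 7 and x1 - x2 = -1 have the solution x1 = 1, x2 = 2.
print(gauss_solve([[3.0, 2.0], [1.0, -1.0]], [7.0, -1.0]))
```

The nested loops make the N-cubed growth of the work visible: the elimination stage is three loops deep in N.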
The coordinates of any point inside the shaded region satisfy the equation of the line to within the accuracy of the errors introduced into the coefficients. A pair of equations thus represents two thick lines, within the accuracy of their coefficients, and the solution determined by the pair lies somewhere within the intersection of the two thick lines, as shown in Figure 8.

Figure 9: Graphic solution of two linear equations for which the slopes of the lines are nearly equal.

If the equations represent a pair of lines that are nearly parallel, the numerical problem represented by the pair of thick lines in Figure 9 does not have an accurately determined answer, because the region of intersection is large. The problem is said to be ill-conditioned: any slight changes in the coefficients can cause large changes in the answer, so any round-off errors in the numerical procedure may cause large changes in the answer.

Figure 10: Graphic representation of the Gaussian elimination and back-substitution procedure.

In the Gaussian elimination and back-substitution procedure described above, it was noted that the process can always be completed unless one of the Aii divisors is zero. If one of those divisors is very small, intermediate results may be very large, and the final answer may be obtained as the difference of two large numbers. Cancellation may then occur, producing a large round-off effect. This effect can be seen in a graphic view of the process of Gaussian elimination. As described above, the equations represent a pair of lines. Dividing an equation by a number, as in dividing (7) by 3 to get (9), does not change the line the equation represents. Substituting one equation into another, as in substituting (9) into (8) to get (10), gives another equation that also represents a straight line, l4.
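A hypothetical two-equation example (an illustration, not from the original text) makes the small-divisor danger concrete. With a pivot far below machine precision, elimination without row interchange loses the answer entirely to cancellation, while swapping the rows first recovers it:

```python
# True solution of the system
#     1e-17*x1 + x2 = 1
#          x1 + x2 = 2
# is x1 = x2 = 1 to within 1e-17.

eps = 1e-17

# Elimination without pivoting: divide by the tiny pivot eps.
m = 1.0 / eps                 # huge multiplier, 1e17
a22 = 1.0 - m * 1.0           # 1 - 1e17 rounds to -1e17: cancellation
b2 = 2.0 - m * 1.0            # 2 - 1e17 also rounds to -1e17
x2 = b2 / a22                 # comes out as exactly 1.0
x1_bad = (1.0 - x2) / eps     # 0.0 instead of 1.0: completely wrong
print(x1_bad)

# Partial pivoting: swap the rows so the larger coefficient is the pivot.
m = eps / 1.0                 # tiny multiplier
a22 = 1.0 - m * 1.0           # about 1
b2 = 1.0 - m * 2.0            # about 1
x2 = b2 / a22
x1_good = 2.0 - x2            # about 1: correct
print(x1_good)
```

The coefficient change introduced by rounding is tiny in both cases; only the ordering of the equations decides whether it is amplified.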
Since the solution of the original equations lies on the new line, this new line must pass through the intersection of the two original lines. It should be noted that equation (10) does not involve x1, so line l4 is horizontal, as shown in Figure 10. The final step, the back substitution, evaluates x2 from (10) and substitutes it into (9), which represents the same line as (7), to get x1. Thus, the solution is determined from lines l1 and l4. In Figure 6 these intersect nearly at right angles, as do l1 and l2. Hence, both the original problem of finding the intersection of l1 and l2 and the intermediate problem of finding the intersection of l1 and l4 are well-conditioned.

If the line l1 had been nearly horizontal and the line l2 nearly vertical, however, the result of the Gaussian elimination would have been a nearly horizontal line, l4, to be used with the nearly horizontal line l1 to determine the solution. As described earlier, this is an ill-conditioned problem, subject to large numerical changes in the answer. In this case, although the initial problem is well-conditioned (lines l1 and l2 intersect nearly at right angles), the intermediate problem given by lines l1 and l4 is ill-conditioned because of the poor choice of the ordering of the equations. The algorithmic remedy is to reorder the equations during the elimination process so that the Aii divisors are large. This is called partial pivoting.

Even with partial pivoting, solution errors for ill-conditioned problems can be large. A formal error analysis provides a pessimistic bound on the errors that can occur even for a well-conditioned problem, especially as the number of equations increases. Before the introduction of electronic computers this situation presented no difficulty, because large problems could not be tackled. Around 1950 it was thought unlikely that computers could produce accurate solutions of large systems of linear equations. The English mathematician J.H.
Wilkinson and other researchers, who had been solving such problems by hand methods, showed that problems could frequently be solved much more accurately than the conventional analysis predicted. Wilkinson's backward error analysis then showed that the Gaussian elimination process solves a system of equations with a residual, or backward error, that does not increase too rapidly as the size of the problem increases. Experience has confirmed this theoretical result: the error does not grow very much with the size of the problem, and today it is common to solve sets of hundreds of thousands of linear equations on computers.

There is major interest in ways in which such large problems can be solved rapidly. If the system is dense, meaning that most of the coefficients, Aij, are not zero, the Gaussian elimination process takes an amount of time equal to aN^3 + bN^2 + cN + d, where a, b, c, and d are constants that depend on the computer. As N gets large, the dominant term is aN^3, and each subsequent doubling of the problem size results in an eightfold increase in the solution time; the method is said to have a running time asymptotic to N^3. The asymptotic behaviour of a method provides a measure of its speed for very large problems: a method with an asymptotic speed of N^2.5 will execute more rapidly than one with asymptotic speed N^3 for very large N, although for small N it may not be faster. The asymptotically fastest method for linear equations is not known; the question is of considerable theoretical interest. It is known that such systems can be solved at asymptotic speeds close to N^2.5. For practical problems that will fit into computers, however, the Gaussian elimination technique remains the best method known for dense problems.
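The N^3 growth can be checked by counting operations. The sketch below (an illustration, not from the article; the function name is made up) counts the multiplications and divisions performed by forward elimination on a dense N-by-N system; doubling N multiplies the count by very nearly eight:

```python
def elimination_multiplications(N):
    """Multiplications/divisions in forward elimination of a dense N x N system.

    Eliminating below row i requires, for each of the N - i - 1 rows
    beneath it, one division for the multiplier and N - i products to
    update the row and its right-hand side: (N - i - 1) * (N - i + 1) in all.
    """
    return sum((N - i - 1) * (N - i + 1) for i in range(N))

for N in (100, 200, 400):
    print(N, elimination_multiplications(N))

# Doubling N from 200 to 400 increases the work by a factor close to 8.
print(elimination_multiplications(400) / elimination_multiplications(200))
```

The sum works out to roughly N^3/3, which is the a-coefficient term in the running-time formula above; the lower-order bN^2 + cN + d terms come from bookkeeping and the back substitution.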
Many large problems are sparse; that is, most of the coefficients Aij are zero, because each equation describes a small piece of the physical system being modeled, and each piece is influenced by only a few other pieces. In such cases the numerical analyst is interested in methods that do not change many coefficients from zero to non-zero (a change called fill-in). Gaussian elimination can still be used, with the equations re-ordered to minimize fill-in, but in many cases of large sparse problems iterative methods (discussed below) are preferable.

Numerical analysis comprises three related activities: the study of a problem by the computation of numerical values that solve the mathematical equations (called mathematical models) describing the behaviour of some system; the development of methods (called numerical algorithms) for finding those values; and the analysis of the properties of those methods. The mathematical models are developed by professionals in the various subject areas of application, that is, by scientists, engineers, or economists. A numerical analyst is a person who develops or analyzes methods for computing the numerical values. Mathematical models are the starting point for the numerical analyst.
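To make the sparse case concrete, here is a hedged sketch of one classical iterative method, Jacobi iteration (the function name and the tridiagonal test system are illustrative assumptions, not from the article). Each row is stored as a dictionary of its non-zero coefficients only, so no fill-in can ever occur:

```python
def jacobi(rows, b, sweeps=50):
    """Approximate the solution of A x = b by Jacobi iteration.

    rows[i] is a dict {column: coefficient} holding only the non-zero
    entries of row i of A; zero entries are never stored or created.
    Convergence is assumed (it holds, e.g., for diagonally dominant A).
    """
    N = len(rows)
    x = [0.0] * N
    for _ in range(sweeps):
        # Each new x_i uses only the handful of non-zeros in row i.
        x = [(b[i] - sum(a * x[j] for j, a in rows[i].items() if j != i))
             / rows[i][i]
             for i in range(N)]
    return x

# A small tridiagonal (hence sparse) system: 4 on the diagonal, -1 beside it.
N = 5
rows = [{i: 4.0} for i in range(N)]
for i in range(N - 1):
    rows[i][i + 1] = -1.0
    rows[i + 1][i] = -1.0
b = [1.0] * N
print(jacobi(rows, b))
```

Each sweep costs work proportional to the number of non-zeros rather than to N^2, which is why iterative methods pay off on large sparse systems.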
Examples of models include an astronomer's mathematical description of the motion of a planet or a satellite (called its equations of motion); the equations used by a designer to describe the radioactivity in a nuclear reactor and the operation of its control system; the equations hypothesized by a research chemist to describe a chemical reaction; the equations used by government planners to express the relationships between economic and social indicators such as gross national product (GNP), prime rate, and unemployment rate; and the equations used in computer-aided tomography (CAT scans) to construct a view of the interior of a human body from an X-ray scan.

Among the uses of models are the following: prediction of future behaviour, as in determining the future locations of the heavenly bodies (a matter of continuing interest in scientific calculation, and of great concern in previous centuries because of its importance to navigation before the development of modern instruments); selection of the best design, as in choosing the lowest-cost nuclear reactor that meets the requirements of power output and safety; comparison of computed values with experimental observations, either to test the validity of the theory represented by the model or to estimate the values of unknown constants that appear in it, as in the research chemist's effort to determine the rate at which materials react by comparing changes in chemical concentrations predicted from the model equations with the values measured in the laboratory; and determination of values that are not directly observable from other values that are, as in computer-aided tomography, where X-ray observations provide the average density of a large set of narrow, pencil-shaped sections of the body, which are then used to construct a map of the density of the interior of the body.
Some problems are described by models so simple that they can be manipulated without numerical computation. For example, if a metal rod is subject to a stretching force (the stress, F) and it lengthens a certain amount (the strain, L), the stress-strain relationship might be modeled by the equation L = λF, in which λ is a numerical constant. This says that the stretching is linearly proportional to the applied force, so that if the force is doubled, the stretching doubles. A person can understand this relationship and use it to solve problems such as "What force is needed to stretch the rod L millimetres?" in the symbolic, or explicit, form F = L/λ, so that it is unnecessary to compute actual values until they are needed in a design task. Many models, however, are so complex that it is not possible to solve problems explicitly or to understand the relationships expressed by the model. In that case the model user turns to numerical methods, either to compute specific solutions or to study the behaviour of the model as it is changed.

The goal of numerical analysis is the efficient computation of accurate approximations to the values that satisfy mathematical equations. Two major problems confront the numerical analyst: the round-off errors that unavoidably arise during computation, and the representation of problems involving infinite amounts of information by the finite number of values that a person (or computer) can handle.

Additional reading

General works
Herman H. Goldstine, A History of Numerical Analysis from the 16th Through the 19th Century (1977); Kendall Atkinson, Elementary Numerical Analysis (1985); S.D. Conte and Carl De Boor, Elementary Numerical Analysis: An Algorithmic Approach, 3rd ed. (1980); F.B. Hildebrand, Introduction to Numerical Analysis, 2nd ed. (1973); Germund Dahlquist and Åke Björck, Numerical Methods, trans. from the Swedish (1974); Robert W. Hornbeck, Numerical Methods (1975); Peter A. Stark, Introduction to Numerical Methods (1970).
Numerical methods
Robert F. Churchhouse (ed.), Numerical Methods (1981); Eugene Isaacson and Herbert Bishop Keller, Analysis of Numerical Methods (1966); George Boole, Calculus of Finite Differences, 5th ed., edited by J.F. Moulton (1970); Gilbert Strang and George J. Fix, An Analysis of the Finite Element Method (1973); J.H. Wilkinson, Rounding Errors in Algebraic Processes (1964); J.F. Steffensen, Interpolation, 2nd ed. (1950); John R. Rice, The Approximation of Functions, 2 vol. (1964–69); L. Collatz, Functional Analysis and Numerical Mathematics (1966; originally published in German, 1964); and E.W. Cheney, Introduction to Approximation Theory, 2nd ed. (1982).

Solution of equations
J.H. Wilkinson and C. Reinsch, Linear Algebra (1971); Ben Noble and James W. Daniel, Applied Linear Algebra, 2nd ed. (1977); A.S. Householder, The Numerical Treatment of a Single Nonlinear Equation (1970); J.E. Dennis, Jr., and Robert B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations (1983); George E. Forsythe, "What Is a Satisfactory Quadratic Equation Solver?", pp. 53–61 in Bruno Dejon and Peter Henrici (eds.), Constructive Aspects of the Fundamental Theorem of Algebra (1969); J.M. Ortega and W.C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables (1970); J.F. Traub, Iterative Methods for the Solution of Equations, 2nd ed. (1982); A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, 3rd ed. (1973); L.M. Milne-Thomson, The Calculus of Finite Differences, 2nd ed. (1981); Peter Henrici, Discrete Variable Methods in Ordinary Differential Equations (1962); G.D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed. (1985); C. William Gear, Numerical Initial Value Problems in Ordinary Differential Equations (1971); L.G. Chambers, Integral Equations: A Short Course (1976); and Gene H. Golub and Charles F. Van Loan, Matrix Computations (1983).
Applications and implementations
G.M. Phillips and P.J. Taylor, Theory and Applications of Numerical Analysis (1973); R.W. Hamming, Numerical Methods for Scientists and Engineers, 2nd ed. (1973, reprinted 1987); Curtis F. Gerald and Patrick O. Wheatley, Applied Numerical Analysis, 3rd ed. (1984); Carl-Eric Fröberg, Numerical Mathematics: Theory and Computer Applications, rev. ed. (1985; originally published in Swedish, 1962); George E. Forsythe, Michael A. Malcolm, and Cleve B. Moler, Computer Methods for Mathematical Computations (1977); Bruce W. Arden (ed.), What Can Be Automated?: The Computer Science and Engineering Research Study (1980); and C. William Gear, "Numerical Software: Science or Alchemy?", Advances in Computers, 19:229–248 (1980).

Charles William Gear
