
OPTIMIZATION

field of applied mathematics consisting of a collection of principles and methods used for the solution of quantitative problems in many disciplines, including physics, biology, engineering, economics, and business. This mathematical area grew from the recognition that problems under consideration in manifestly different fields could be posed theoretically in such a way that a central store of ideas and methods could be used in obtaining solutions for all of them. Optimization is a technique for improving or increasing the value of some numerical quantity that in practice may take the form of temperature, air flow, speed, pay-off in a game, political appeal, destructive power, information, monetary profit, and the like. Techniques of optimization assume such varied forms that no one general description is possible. With the advent of modern technology, more and more emphasis has been placed on optimization of various types, and special thinking has developed to the extent that it is meaningful to speak of a mathematical theory of optimization. Computer technology has been critically important in practical applications. Further advances in the optimization and control of complex systems will probably depend more on mathematical theory than on technological invention. Optimization includes linear and nonlinear programming, cybernetics, and control theory, the various aspects of which are treated in this article. Game theory is often included as well, since early work on optimization was extended by the development of that branch of mathematics.
A typical optimization problem may be described in the following way. There is a system, such as a physical machine, a set of biological organisms, or a business organization, whose behaviour is determined by several specified factors. The operator of the system has as a goal the optimization of the performance of this system. The latter is determined at least in part by the levels of the factors over which the operator has control; the performance may also be affected, however, by other factors over which there is no control. The operator seeks the right levels for the controllable factors that will optimize, as far as possible, the performance of the system. For example, in the case of a banking system, the operator is the governing body of the central bank; the inputs over which there is control are interest rates and money supply; and the performance of the system is described by economic indicators of the economic and political unit in which the banking system operates.

The first step in the application of optimization theory to a practical problem is the identification of the relevant theoretical components. This is often the most difficult part of the analysis, requiring a thorough understanding of the operation of the system and the ability to describe that operation in precise mathematical terms. The main theoretical components are the system, its inputs and outputs, and its rules of operation. The system has a set of possible states; at each moment in its life it is in one of these states, and it changes from state to state according to certain rules determined by inputs and outputs. There is a numerical quantity called the performance measure, which the operator seeks to maximize or minimize.
It is a mathematical function whose value is determined by the history of the system. The operator is able to influence the value of the performance measure through a schedule of inputs. Finally, the constraints of the system must be identified; these are the restrictions on the inputs that are beyond the control of the operator.

The simplest type of optimization problem may be analyzed using elementary differential calculus. Suppose that the system has a single input, represented by a numerical variable x, and suppose that the performance measure can be expressed as a function y = f(x). The constraints are expressed as restrictions on the values assumed by the input x; for example, the nature of the problem under consideration may require that x be positive. The optimization problem then takes the following precise mathematical form: for which value of x, satisfying the indicated constraints, is the function y = f(x) at its maximum (or minimum) value? From calculus, the extreme values of a function y = f(x) with a sufficiently smooth graph can occur only at points of two kinds: (1) points where the tangent to the curve is horizontal (critical points), or (2) endpoints of an interval, if x is restricted by the constraints to such an interval. Thus the problem of finding the largest or smallest value of the function over the indicated interval is reduced to the simpler problem of finding the largest or smallest value among a finite set of candidates, which can be done by direct computation of f(x) at those points.

The theory of linear programming (q.v.) was developed for the purpose of solving optimization problems involving two or more input variables. This theory uses only the elementary ideas of linear algebra; it can be applied only to problems for which the performance measure is a linear function of the inputs. Nevertheless, this is sufficiently general to include applications to important problems in economics and engineering.
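The single-variable procedure described above (evaluate f at the critical points and at the interval endpoints, then compare the finitely many candidates) can be sketched in Python. The particular function, derivative, and interval below are illustrative assumptions, not taken from the article:

```python
def maximize_on_interval(f, df, a, b, tol=1e-9):
    # Candidate extrema: the interval endpoints plus any critical
    # points, i.e. roots of the derivative df, located here by
    # scanning a grid for sign changes and bisecting each bracket.
    candidates = [a, b]
    n = 1000
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if df(x0) == 0:
            candidates.append(x0)
        elif df(x0) * df(x1) < 0:
            lo, hi = x0, x1
            while hi - lo > tol:
                mid = (lo + hi) / 2
                if df(lo) * df(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            candidates.append((lo + hi) / 2)
    # The maximum over the interval is the best of the finitely
    # many candidates, found by direct evaluation of f.
    best = max(candidates, key=f)
    return best, f(best)

# Illustrative example: y = -(x - 2)**2 + 3 on the interval [0, 5];
# the derivative -2(x - 2) vanishes at x = 2, where y = 3.
x_star, y_star = maximize_on_interval(
    lambda x: -(x - 2) ** 2 + 3,
    lambda x: -2 * (x - 2),
    0.0, 5.0,
)
```

The reduction the article describes is visible in the final step: once the critical points are known, the continuous problem collapses to a comparison over a finite list.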
Additional reading

Linear and nonlinear programming

The history, theory, and applications of linear programming may be found in George B. Dantzig, Linear Programming and Extensions (1963, reissued 1974). G. Hadley, Linear Programming (1962), is also informative. L.R. Ford, Jr., and D.R. Fulkerson, Flows in Networks (1962), stands as the classic work on the subject. An alternate source is T.C. Hu, Integer Programming and Network Flows (1969). James K. Strayer, Linear Programming and Its Applications (1989), addresses problems and applications. G.V. Shenoy, Linear Programming (1989), focuses on the field from the point of view of management and is accessible to readers without a background in mathematics. Howard Karloff, Linear Programming (1991), provides an introduction from the perspective of theoretical computer science. Another useful work is Ami Arbel, Exploring Interior-Point Linear Programming: Algorithms and Software (1993). One of the pathbreaking books on linear and nonlinear programming is G. Zoutendijk, Methods of Feasible Directions (1960). Leon S. Lasdon, Optimization Theory for Large Systems (1970), deals with linear and nonlinear programming problems. F.H. Clarke, Optimization and Nonsmooth Analysis (1990), serves as a good exposition of optimization in terms of a general theory. G.B. Dantzig and B.C. Eaves (eds.), Studies in Optimization (1974), is a volume of survey papers by leading experts. Olvi L. Mangasarian, Nonlinear Programming (1969, reprinted 1982), deals exclusively with theory, while Willard I. Zangwill, Nonlinear Programming (1969), primarily discusses algorithms. Mokhtar S. Bazaraa, Hanif D. Sherali, and C.M. Shetty, Nonlinear Programming: Theory and Algorithms, 2nd ed. (1993), is an introductory textbook. Stephen A. Vavasis, Nonlinear Optimization (1991), focuses on complexity issues. H.Th. Jongen, P. Jonker, and F. Twilt, Nonlinear Optimization in ℝn, 2 vol.
(1983–86), offers an extensive and advanced approach to the subject from the standpoint of differential geometry. From various conferences have come J. Abadie (ed.), Nonlinear Programming (1967), and Integer and Nonlinear Programming (1970); George B. Dantzig and Arthur F. Veinott, Jr. (eds.), Mathematics of the Decision Sciences, 2 vol. (1968); and Jean-Paul Penot (ed.), New Methods in Optimization and Their Industrial Uses (1989); all feature papers on a wide range of subjects.

Cybernetics

Thomas M. Cover and Joy A. Thomas, Elements of Information Theory (1991), provides an introduction to information theory. Norbert Wiener, Cybernetics, 2nd ed. (1961), presents a very general discussion. Viktor M. Glushkov, Introduction to Cybernetics, trans. from Russian (1966), is also of interest. Additional sources include Stafford Beer, Cybernetics and Management, 2nd ed. (1967); Jiří Klír and Miroslav Valach, Cybernetic Modelling, trans. from Czech (1967); and Constantin Virgil Negoita, Cybernetics and Applied Systems (1992).

Control theory

A good book on modern control and system theory is Richard E. Bellman, Dynamic Programming (1957, reissued 1972). More mathematical treatments include L.S. Pontryagin (L.S. Pontriagin) et al., The Mathematical Theory of Optimal Processes (1962, reprinted 1986); and E.B. Lee and L. Markus, Foundations of Optimal Control Theory (1967, reprinted 1986). Control theory in the wider context of system theory is treated in R.E. Kalman, P.L. Falb, and M.A. Arbib, Topics in Mathematical System Theory (1969), especially chapter 2. Roger W. Brockett, Finite Dimensional Linear Systems (1970), surveys the fundamental problems of description and optimization of linear constant-coefficient systems. A historical overview of feedback devices may be found in Otto Mayr, The Origins of Feedback Control (1970; originally published in German, 1969). D. Stanley-Jones and K. Stanley-Jones, The Kybernetics of Natural Systems (1960), covers information on biocontrol.
Leslie M. Hocking, Optimal Control (1991), gives an accessible introduction, including various applications. Atle Seierstad and Knut Sydsæter, Optimal Control Theory with Economic Applications (1987), presents a mathematically rigorous development of optimal control theory from an economic perspective. E.J. McShane, "The Calculus of Variations from the Beginning Through Optimal Control Theory," SIAM Journal on Control and Optimization, 27(5):916–939 (1989), surveys the field with emphasis on recent developments, especially the work of Pontryagin. H.T. Banks (ed.), Control and Estimation in Distributed Parameter Systems (1992), is another informative work.

George B. Dantzig
Richard W. Cottle
Rudolf E. Kalman
The Editors of the Encyclopædia Britannica
