PROBABILITY THEORY



a branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs, but it may be any one of several possible outcomes. The actual outcome is considered to be determined by chance.

The word probability has several meanings in ordinary conversation. Two of these are particularly important for the development and applications of the mathematical theory of probability. One is the interpretation of probabilities as relative frequencies, for which simple games involving coins, cards, dice, and roulette wheels provide examples. The distinctive feature of games of chance is that the outcome of a given trial cannot be predicted with certainty, although the collective results of a large number of trials display some regularity. For example, the statement that the probability of heads in tossing a coin equals one-half, according to the relative frequency interpretation, implies that in a large number of tosses the relative frequency with which heads actually occurs will be approximately one-half, although it contains no implication concerning the outcome of any given toss. There are many similar examples involving collections of people, molecules of a gas, genes, and so on. Actuarial statements about the life expectancy for persons of a certain age describe the collective experience of a large number of individuals but do not purport to say what will happen to any particular person. Similarly, predictions about the chance of a genetic disease occurring in a child of parents having a known genetic makeup are statements about relative frequencies of occurrence in a large number of cases but are not predictions about a given individual. A second interpretation of probability, as a personal measure of uncertainty, is discussed below. For further discussion of the applications of probability theory, see statistics.

The entire set of possible outcomes of a random event is called the sample space, and each outcome in this space is assigned a probability, a number indicating the likelihood that the particular outcome will arise in a single instance. The probabilities are nonnegative and their sum is 1. An example of a random experiment is the tossing of a coin. The sample space consists of the two outcomes, head and tail, which usually are considered to be equally likely, so that each is assigned the same probability, 1/2.

Games of chance were the first random experiments to be analyzed. The 17th-century French mathematicians Blaise Pascal and Pierre de Fermat, in response to the requests of prominent gamblers, initiated the mathematical study of particular games. A typical problem was that of the gambler's ruin. Two players, Peter and Paul, toss a coin. For each head, Paul pays Peter $1; and for each tail, Peter pays Paul $1. If Peter initially has a dollars and Paul b dollars, what is the probability that Peter will be ruined? The probability is equal to b / (a + b), the proportion of the total capital initially in Paul's possession. Other questions associated with this game are: How long can one expect the game to last before one player is ruined? What is changed if the coin shows a bias in favour of head or tail?
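For a fair coin, the ruin probability b / (a + b) can be checked by simulation. The following short Python sketch (an illustrative addition; the stakes a = 3 and b = 7 and the number of trials are arbitrary choices, not part of the original problem) plays the game repeatedly and compares the observed frequency of Peter's ruin with the theoretical value.

    import random

    def peter_is_ruined(a, b):
        """Play one coin-tossing game; return True if Peter loses his stake."""
        fortune = a                      # Peter's current capital
        while 0 < fortune < a + b:       # game ends when one player is ruined
            if random.random() < 0.5:    # head: Paul pays Peter $1
                fortune += 1
            else:                        # tail: Peter pays Paul $1
                fortune -= 1
        return fortune == 0

    a, b = 3, 7                          # illustrative initial stakes
    trials = 100_000
    ruins = sum(peter_is_ruined(a, b) for _ in range(trials))
    print("observed ruin frequency:", ruins / trials)
    print("theoretical value b/(a+b):", b / (a + b))

With these stakes the observed frequency settles near 0.7, in agreement with b / (a + b).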
As science grew in the later centuries, analogies appeared between certain biological, physical, and social phenomena and games of chance. For example, the sexes of newborn infants follow sequences similar to those of coin tosses. As a result, probability became a fundamental tool in modern genetics.

Molecules, particles, and quanta of heat and light behave in a random manner and can be treated mathematically as outcomes of games of chance. For example, a smokestack emits many small particles; as they emerge from the stack, they are carried parallel to the ground in the direction of the wind. They also move up and down in a manner analogous to that of the fortune of a gambler in a coin-tossing game. Thus the height of a particle above the ground after a specified time is governed by the laws of the game. The physicist is less interested in the motion of a single particle than in the behaviour of the collection of particles. The proportion of particles falling below a specified height at a given time can be estimated by reference to the mathematical solution of the corresponding problem for the game of chance.

Probability also forms the rational basis of the institution of insurance. An insurance company may be compared to a bettor who places a series of bets on the health, life, or welfare of specified individuals or properties. Using past records, the insurer employs probability theory to derive the requirements for staying ahead in the game.

Two of the primary results of the mathematical theory of probability are the law of large numbers and the central limit theorem. If a random experiment is repeated many times under identical conditions and the outcomes are recorded, the law of large numbers implies that the proportion of performances in which some specified outcome occurs is roughly equal to the underlying probability of that outcome. The important consequence is that probabilities can be estimated by observing the relative frequencies of outcomes in a long series of performances. The central limit theorem gives information about the probable deviation of the observed relative frequency of a particular outcome from its underlying probability. It states that this deviation is governed by a universal probability law that is described mathematically in terms of the so-called normal curve.
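Both results can be illustrated with coin tossing. The Python sketch below (an illustrative addition; the numbers of tosses and repetitions are arbitrary) estimates the probability of heads by a relative frequency, and checks that the scaled deviation of that frequency from 1/2 behaves as the normal curve predicts; for a fair coin the scaled deviation sqrt(n) * (frequency - 1/2) has mean 0 and standard deviation 1/2.

    import random
    import statistics

    def relative_frequency_of_heads(n):
        """Toss a fair coin n times and return the fraction that come up heads."""
        heads = sum(random.random() < 0.5 for _ in range(n))
        return heads / n

    # Law of large numbers: for a large number of tosses the relative
    # frequency of heads is close to the underlying probability 1/2.
    print("relative frequency in 10,000 tosses:", relative_frequency_of_heads(10_000))

    # Central limit theorem: the scaled deviation sqrt(n) * (frequency - 1/2)
    # is approximately normal with mean 0 and standard deviation 1/2.
    n, repeats = 1_000, 2_000
    deviations = [(relative_frequency_of_heads(n) - 0.5) * n ** 0.5 for _ in range(repeats)]
    print("mean of scaled deviations (theory: 0):", statistics.mean(deviations))
    print("standard deviation of scaled deviations (theory: 0.5):", statistics.stdev(deviations))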
David O. Siegmund
