Algebra
branch of mathematics in which the procedures of arithmetic are generalized and applied to variable quantities, as well as to specific numbers. To the layperson, algebra means elementary algebra, in which one learns to calculate with variables instead of just the numbers of arithmetic and to solve polynomial equations. To the professional mathematician, however, as well as to increasing numbers of scientists in other fields, algebra means rather what is called modern, higher, or abstract algebra: the study of abstract mathematical structures in which there are operations that have the properties of addition and multiplication. Essential to both elementary and higher algebra is the fact that the calculations should always involve only a finite number of quantities and end after a finite number of steps; in other words, processes in which the answers are obtained in the limit generally do not belong to algebra. Thus the summation of finitely many terms is algebra, but the passage to the limit of an infinite series is not. A second characteristic of algebra is its abstractness. Even in elementary algebra, calculations are made not with numbers but with letters that represent numbers. In higher algebra the letters may represent much more general objects, and the system of calculation is itself an abstraction of systems having similar properties. The demands placed on algebra by other branches of mathematics have been the richest and most significant source of new results in algebra, while, at the same time, the axiomatic, abstract algebraic viewpoint has simplified and clarified work in other fields, providing techniques leading both to new results and to unexpected connections between work in widely separated fields. This algebraization of mathematics has been one of the most characteristic features of 20th-century mathematics. In this way the influence of algebra has actually been far greater than its results. 
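The finite-versus-limiting distinction drawn above can be illustrated numerically; the following Python sketch (illustrative, not from the original article) contrasts a finite sum, which is an algebraic computation, with the partial sums of an infinite geometric series, which only approach their limit:

```python
# A sum of finitely many terms is exact and algebraic.
finite_sum = sum(2**k for k in range(1, 11))   # 2 + 4 + ... + 2**10

# The infinite series 1 + 1/2 + 1/4 + ... has limit 2, but every
# finite partial sum falls short of it -- the limit is not reached
# in finitely many steps.
partial = sum(1 / 2**k for k in range(20))

print(finite_sum)    # 2046
print(partial < 2)   # True
```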
At the same time, modern algebra has brought a clearer understanding of the processes of elementary algebra and has enabled mathematicians to gain an understanding of the principles that underlie the calculations. The basic task of elementary algebra is the solution of polynomial (i.e., algebraic) equations and the stepwise introduction of new types of numbers (negative, real, and complex) to use in solving them. Determinants and matrices are devices for facilitating the solution of simultaneous equations and thus have their place. The quadratic formula is the explicit solution of the quadratic equation, and so also merits attention. Permutations and combinations (typical problem: in how many ways can six men choose their wives from among six women, assuming no bigamy?) are really elementary probability theory, but they use only algebraic arguments, and the formulas are occasionally useful (the binomial coefficients, for instance). Exponentials and logarithms, on the other hand, are certainly not algebra, since they involve passage to the limit in their definition. Of course, the emphasis in elementary algebra is not on the precise definition of these functions but rather on the formal rules for operating with them, such calculation with formal rules being universally interpreted as algebra. Modern algebra evolved from elementary algebra in a series of jumps beginning essentially with the work of Évariste Galois in 1830. It includes all of elementary algebra and a great deal more, but in an extensively generalized form; each topic of elementary algebra is thus viewable as the starting point of one of the theories of modern algebra. The principal abstract objects of study are groups, rings, fields, vector spaces, and algebras. 
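The counting problem quoted above is easy to check directly; a small illustrative Python sketch (the variable names are hypothetical):

```python
import math

# Six men each choosing a wife from six women, no bigamy: the first
# man has 6 choices, the next 5, and so on -- 6! pairings in all.
pairings = math.factorial(6)

# Binomial coefficients count unordered selections, e.g. the number
# of ways to choose 2 people from a group of 6.
choose_2_of_6 = math.comb(6, 2)

print(pairings)       # 720
print(choose_2_of_6)  # 15
```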
Groups have one operation (called addition or multiplication); rings have two; fields are commutative rings in which one can divide; vector spaces are additive groups in which there is a scalar multiplication by elements of some field; algebras (sometimes called linear algebras) are vector spaces with a multiplication law defined. The theory of rings came from study of the formal properties of integers and polynomials. Fields arose from the attempts to provide solutions for algebraic equations. The proof that the equation of fifth degree could not be solved by a formula analogous to the quadratic formula gave rise to group theory. The solution of simultaneous linear equations in elementary algebra led directly to the theory of vector spaces and matrices; the square n × n matrices themselves give one of the most important examples of an algebra. Finally, a word should be said about some of the applications of algebra to sciences other than mathematics. In theoretical physics, the theory of groups and their representations has played an important part in the development of quantum theory, particularly in connection with solid-state physics. The theory of Boolean algebras has been widely used in the design of computing machines. In the social sciences, psychology and economics are finding use for matrices and linear algebra in what is called linear programming. The introduction of algebra into other disciplines has stimulated the further development of algebra itself.

Algebra may also be described as the branch of mathematics in which the operations and procedures of addition and multiplication are applied to variables rather than to specific numbers; it is thus a generalization and extension of arithmetic. Elementary arithmetic is concerned primarily with the effect of certain operations, such as addition or multiplication, on specified numbers (hence, for instance, the multiplication tables); elementary algebra is concerned with properties of arbitrary numbers. 
For instance, the fact that 2 added to 3 gives the same result as 3 added to 2 is one of arithmetic; the formula a + b = b + a for all numbers a, b is one of algebra. The particular operations of arithmetic that came to be extended and generalized to provide the materials of algebra emerged only slowly. The earliest writings on the subject dealt with many topics that are not now regarded as part of algebra. (For a treatment of the evolution of algebra as a well-defined branch of mathematics, see mathematics, history of.) The earliest extant work with any claim to be regarded as a treatise on algebra is by the Greek philosopher Diophantus of Alexandria (c. AD 250). This work is devoted mainly to problems in the solution of equations. For this purpose a suitable notation had to be invented, and Diophantus gave rules for generating powers of a number and for the multiplication and division of simple quantities. Of great significance is his statement of the laws governing the use of the minus sign, which did not, however, imply any idea of negative quantities. During the 6th century the ideas of Diophantus were improved on by Hindu mathematicians, and many deficiencies in the Greek symbolism were remedied. The development of symbolic algebra by the use of general symbols to denote numbers is due to a 16th-century French mathematician, François Viète, a usage that led to the idea of algebra as generalized arithmetic. Sir Isaac Newton gave it the name Universal Arithmetic in 1707. The main step in the modern development of algebra was the evolution of a correct understanding of negative quantities, contributed in 1629 by a French mathematician, Albert Girard, whose work was later overshadowed by that of his contemporary, René Descartes. While it is convenient to view Descartes's work as the starting point of modern algebra, for the sake of clarity notations and terminology will be used that were developed later. 
Algebra is concerned with certain operations on numbers, and it is necessary to be precise about what a number is. The numbers dealt with are either the natural numbers, 0, 1, 2, 3, 4, … (some authors exclude 0); the rational numbers, which have the form p/q, in which p and q are integers (natural numbers and their negatives), with q ≠ 0; the real numbers, which correspond to all the points on a line; or the complex numbers, which are constructed from the real numbers together with a number i, the square of which is -1. The essential property that these numbers have is that they can be added and multiplied by well-established rules. Arithmetic becomes algebra when general rules are stated regarding these operations, as, for example, the commutative law of addition (see below). First the basic algebraic properties of such numbers are considered.

Linear algebra is the branch of algebra that deals primarily with linear problems, that is to say, problems that depend for the most part on the solution of linear equations. An equation in two or more variables, or unknowns, is linear if it contains no terms of the second degree or greater; that is, if it contains no products or powers of the variables. The term linear derives from the fact that the graph of a linear equation in x and y is a straight line in the Cartesian xy-plane. Thus a linear equation represents a linear relationship between the variables x and y in a geometric sense. Similarly, a linear equation in three variables x, y, z represents a plane in three-dimensional space, and two such equations considered simultaneously represent the line of intersection of the two planes, provided they are not parallel. When the number of variables is greater than three, there is no longer a simple geometric interpretation because physical space is limited to three dimensions. 
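Simultaneous linear equations of the kind described above can be solved by elimination; the following Python sketch (illustrative, using exact arithmetic from the standard library's fractions module) works through one small system:

```python
from fractions import Fraction

# Solve the simultaneous linear equations
#   1x + 1y = 3
#   1x - 1y = 1
# keeping the coefficients as rows of an (augmented) array.
M = [[Fraction(1), Fraction(1), Fraction(3)],
     [Fraction(1), Fraction(-1), Fraction(1)]]

# Subtract row 0 from row 1 to eliminate x from the second equation.
M[1] = [M[1][j] - M[0][j] for j in range(3)]   # now 0x - 2y = -2

y = M[1][2] / M[1][1]                 # y = 1
x = (M[0][2] - M[0][1] * y) / M[0][0] # back-substitute: x = 2

print(x, y)  # 2 1
```

The two equations describe two lines in the xy-plane; the solution (2, 1) is their point of intersection.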
Nevertheless, it is customary to continue the geometric analogy and to think of the solutions of a linear equation in four variables as constituting a hyperplane in a four-dimensional space, and similarly for any finite higher dimension. The theoretical investigation and solution of general systems of linear equations are facilitated by the introduction of entities called vectors and matrices. Vectors were originally introduced in order to interpret mathematically a physical quantity such as a velocity or force that has both a magnitude and an associated direction. A matrix is a rectangular array of numbers in a definite order, such as the array of coefficients of the unknowns in a system of linear equations. Rules of computation with vectors, which stem from their original physical interpretation, lead to closed systems of vectors called vector spaces, and matrices can be identified with special functions on these vector spaces called linear transformations. The theory of linear transformations of finite-dimensional vector spaces, which embraces the theory of matrices and that of systems of linear equations, constitutes the subject matter of linear algebra. Vectors are more fully treated in the article analysis: Vector and tensor analysis. Out of the developments of elementary algebra evolved the abstract algebra used today and the idea of an algebraic structure. Elementary algebra was originally concerned with a set of elements, the numbers used in arithmetic. Together with these elements there were two operations, addition and multiplication (subtraction and division being the inverse of these). It became recognized that there was also a collection of basic rules that constitute an axiomatic structure. The axiomatic structure describes even today the assumptions made in elementary algebra and arithmetic. Certain entities, however, do not follow these rules. 
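The identification of matrices with linear transformations described earlier in this section can be sketched in a few lines of Python (an illustrative sketch; the helper names are hypothetical):

```python
# A matrix acts on a vector by the usual row-by-column rule, and the
# resulting function of vectors is linear.
def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v)))
            for i in range(len(M))]

def add(x, y): return [a + b for a, b in zip(x, y)]
def scale(c, x): return [c * a for a in x]

M = [[2, 0], [1, 3]]      # a 2 x 2 matrix
u, v = [1, 2], [4, -1]

# Linearity: M(u + v) = Mu + Mv and M(cu) = c(Mu)
assert apply(M, add(u, v)) == add(apply(M, u), apply(M, v))
assert apply(M, scale(5, u)) == scale(5, apply(M, u))
```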
Abstract algebra is concerned with the formulation and properties of quite general axiomatic abstract systems of this type. These systems are sets of elements with general operations and with a number of axioms. Just as new self-consistent geometries can be based on axioms other than those of Euclid, new algebras can be based on axioms that differ from those of elementary algebra. These new algebras may describe mathematical objects other than numbers.

Additional reading

General works
B.L. van der Waerden, A History of Algebra: From al-Khwarizmi to Emmy Noether (1985), provides a useful account. Among many textbooks on elementary algebra are Marshall D. Hestenes and Richard O. Hill, Jr., College Algebra, 2nd ed. (1986); Raymond W. Brink, College Algebra, 2nd ed. (1961); and Christopher Schaufele and Nancy Zumoff, Earth Algebra: College Algebra with Applications to Environmental Issues (1993). Somewhat more advanced is David Dobbs and Robert Hanks, A Modern Course on the Theory of Equations, 2nd ed. (1992). Introductory texts on modern algebra include Garrett Birkhoff and Saunders MacLane, A Survey of Modern Algebra, 4th ed. (1977); John B. Fraleigh, A First Course in Abstract Algebra, 5th ed. (1994); Lindsay Childs, A Concrete Introduction to Higher Algebra (1979, reissued with corrections, 1988); and I.N. Herstein, Topics in Algebra, 2nd ed. (1975). An accessible overview with an emphasis on representation theory is Michael Artin, Algebra (1991). General works on abstract algebra include B.L. van der Waerden, Algebra, 2 vol. (1970, reissued 1991; originally published in German, 7th and 5th eds., 1966, 1967); Thomas W. Hungerford, Algebra (1974, reissued with corrections, 1989); and Nathan Jacobson, Basic Algebra, 2nd ed., 2 vol. (1985–89).

Linear and multilinear algebra
Elementary-level books include Howard Anton, Elementary Linear Algebra, 7th ed. (1994); and Marvin Marcus and Henryk Minc, Introduction to Linear Algebra (1965, reprinted 1988). 
Intermediate-level textbooks include Bill Jacob, Linear Algebra (1990); Kenneth Hoffman and Ray Kunze, Linear Algebra, 2nd ed. (1971); and Seymour Lipschutz, Schaum's Outline of Theory and Problems of Linear Algebra, 2nd ed. (1991). More advanced treatments can be found in W.H. Greub, Linear Algebra, 4th ed., rev. (1981; originally published in German, 1958); Georgi E. Shilov, An Introduction to the Theory of Linear Spaces (1961, reissued 1974; originally published in Russian, 1954); and Paul R. Halmos, Finite-Dimensional Vector Spaces, 2nd ed. (1958, reprinted 1974). D.G. Northcott, Multilinear Algebra (1984), gives a concise but thorough account and assumes some background in linear algebra and module theory. The Editors of the Encyclopædia Britannica

Lattice theory
J. Eldon Whitesitt, Boolean Algebra and Its Applications (1961), gives a very elementary introduction to Boolean algebra, with some applications to logic and switching circuits. James C. Abbott, Sets, Lattices, and Boolean Algebras (1969), is a very readable introductory text on sets and lattices from a purely mathematical standpoint. Garrett Birkhoff, Lattice Theory, 3rd ed. (1967), is the standard treatise on the subject. B.A. Davey and H.A. Priestley, Introduction to Lattices and Order (1990), emphasizes modern applications. Paul R. Halmos, Algebraic Logic (1962), discusses in depth some of the Boolean algebras arising in logic. J. Barkley Rosser, Simplified Independence Proofs (1969), shows how these have been applied in set theory and logic. A relevant later work is Peter T. Johnstone, Stone Spaces (1982). Garrett Birkhoff The Editors of the Encyclopædia Britannica

Groups
M.A. Armstrong, Groups and Symmetry (1988), provides an elementary introduction. Camille Jordan, Traité des substitutions et des équations algébriques (1870, reissued 1957); and William Burnside, Theory of Groups of Finite Order, 2nd ed. (1911, reissued 1955), are the great classical works on group theory. 
Standard references include Hans Zassenhaus, The Theory of Groups, 2nd ed. (1958; originally published in German, 1937); B.L. van der Waerden, Gruppen von linearen Transformationen (1935, reissued 1948); Joseph J. Rotman, The Theory of Groups, 2nd ed. (1973); and A.G. Kurosch, The Theory of Groups, 2nd ed., 2 vol. (1960; originally published in Russian, 1953), which developed the theory of free groups and free products extensively. Irving Kaplansky, Infinite Abelian Groups, rev. ed. (1969), treats a theory that is now a major subject in its own right. The importance of group theory in physics is recognized by Volker Heine, Group Theory in Quantum Mechanics (1960). A more modern approach and revival of representation theory is shown in Marshall Hall, Jr., The Theory of Groups, 2nd ed. (1976); William R. Scott, Group Theory (1964, reprinted 1987); and M.J. Collins, Representations and Characters of Finite Groups (1990). Wilhelm Magnus, Abraham Karrass, and Donald Solitar, Combinatorial Group Theory, 2nd rev. ed. (1976), deals mostly with generators and relations. The book-length paper by Walter Feit and John G. Thompson, Solvability of Groups of Odd Order (1963), brought activity to a high pitch; and the development and extension of this theory is the subject of the advanced treatises by Daniel Gorenstein, Finite Groups, 2nd ed. (1980); and Michael Aschbacher, Finite Group Theory (1986). B. Huppert and N. Blackburn, Finite Groups, 3 vol. (1967–82; vol. 1 is in German); and J.H. Conway, Atlas of Finite Groups (1985), are comprehensive references. Daniel Gorenstein, Finite Simple Groups: An Introduction to Their Classification (1982), and The Classification of Finite Simple Groups, vol. 1, Groups of Noncharacteristic 2 Type (1983), present detailed outlines of the problem as it stood in the early 1980s. Marshall Hall, Jr. 
The Editors of the Encyclopædia Britannica

Fields
Many works on algebra contain material on fields, such as Solomon Feferman, The Number Systems: Foundations of Algebra and Analysis, 2nd ed. (1989). More advanced works include Gregory Karpilovsky, Field Theory (1988); and Irving Kaplansky, Fields and Rings, 2nd ed. (1972). A classic, short exposition is Emil Artin, Galois Theory, ed. by Arthur N. Milgram, 2nd ed., with additions and revisions (1944, reissued 1971). Garrett Birkhoff The Editors of the Encyclopædia Britannica

Rings
Most treatises on algebra have chapters on rings; e.g., Serge Lang, Algebra, 3rd ed. (1993). David Sharpe, Rings and Factorization (1987), is an elementary introduction requiring only naive set theory. A general introduction to rings is Neal H. McCoy, The Theory of Rings (1964, reissued 1973). Deeper studies of commutative rings may be found in Oscar Zariski and Pierre Samuel, Commutative Algebra, 2 vol. (1958–60, reprinted 1975–76); Irving Kaplansky, Commutative Rings, rev. ed. (1974); and Michael F. Atiyah and I.G. MacDonald, Introduction to Commutative Algebra (1969). Noncommutative rings are dealt with in I.N. Herstein, Noncommutative Rings (1968); and T.Y. Lam, A First Course in Noncommutative Rings (1991). Pierre Samuel The Editors of the Encyclopædia Britannica

Categories
The first mention of the notions of category and functor in their explicit form is contained in the paper by Samuel Eilenberg and Saunders MacLane, General Theory of Natural Equivalences, Transactions of the American Mathematical Society, 58:231–294 (1945). Saunders MacLane, Categories for the Working Mathematician (1971, reissued with corrections, 1989), is a classic work in which the author, a distinguished categorist, sets out his view of the material of category theory basic to the professional mathematician. Peter J. Freyd, Abelian Categories (1964), is devoted to a very important class of categories. 
Recent works are Michael Barr and Charles Wells, Toposes, Triples, and Theories (1985); and Michael Barr and Charles Wells, Category Theory for Computing Science (1990).

Homological algebra
The classic text in homological algebra is Henri Cartan and Samuel Eilenberg, Homological Algebra (1956). Charles A. Weibel, An Introduction to Homological Algebra (1994), is a versatile treatment of modern homological algebra. The role played by homological algebra in algebraic topology may be inferred from Peter J. Hilton and S. Wylie, Homology Theory (1960); and Czes Kosniowski, A First Course in Algebraic Topology (1980), a work suitable for independent study. Particular aspects of homological algebra are to be found in John W. Milnor and John C. Moore, On the Structure of Hopf Algebras, Annals of Mathematics 81:211–264 (1965); and Edwin Weiss, Cohomology of Groups (1969). Cyclic homology is discussed by Jean-Louis Loday, Cyclic Homology: A Survey, in Henryk Torunczyk, Stefan Jackowski, and Stanislaw Spiez, Geometric and Algebraic Topology (1986), pp. 281–303; and Peter Seibt, Cyclic Homology of Algebras (1987). George Daniel Mostow Peter John Hilton The Editors of the Encyclopædia Britannica

Universal algebra
Some works on universal algebra are Alfred North Whitehead, A Treatise on Universal Algebra, with Applications, vol. 1 (1898, reissued 1960), a description of the algebras of Boole, Hamilton, and Grassmann, with emphasis on geometrical applications; Abraham Robinson, Introduction to Model Theory and to the Metamathematics of Algebra (1963, reissued 1974), a treatment of model theory with many applications to algebra; P.M. Cohn, Universal Algebra, rev. ed. (1981), a general introduction to the subject, stressing the connections with logic and giving applications to algebra; George Grätzer, Universal Algebra, 2nd ed. 
(1979), a comprehensive treatment of the subject, including much contemporary research in the field; and Wolfgang Wechler, Universal Algebra for Computer Scientists (1992), an introduction from a model-theoretic viewpoint. Relevant information can also be found in J. Conrad Crown and Marvin L. Bittinger, Finite Mathematics, 3rd ed. (1989), and Mathematics: A Modeling Approach (1982); Ágnes Szendrei, Clones in Universal Algebra (1986); and A.G. Pinus, Boolean Constructions in Universal Algebras (1993). Paul M. Cohn The Editors of the Encyclopædia Britannica

Categories
Since the mid-1940s mathematicians have found it valuable to formalize the notions of areas of mathematical discourse and interrelations between such areas; the formalization used is the language of categories and functors. A rapidly developing mathematical theory accompanied this formalization, so that the theory of categories and functors has become an autonomous part of mathematics. This section deals with the role of this formalization in providing an appropriate language for the expression of mathematical ideas. The study of mathematics involves certain domains of mathematical discourse, their structural properties, and their interrelations. For example, there is the idea of a set (see set theory) and of functions that are transformations of sets. There is arithmetic, the study of the domain of natural numbers, rational numbers, and integers. Geometry involves subsets of Euclidean space of three dimensions together with the appropriate transformations of such subsets: for example, translations, rotations, and reflections. Algebra is concerned with rational numbers, real numbers, and complex numbers; and abstract algebra involves further mathematical domains such as groups, rings, fields, and lattices. 
The calculus is again concerned in the first instance with subsets of the real numbers, or of Euclidean space of higher dimension, and with particular transformations of such subsets; e.g., differential operators. Algebraic topology relates domains of interest in geometry to domains of interest in algebra. Algebraic geometry, on the other hand, goes in the opposite direction, associating, for example, with each commutative ring its spectrum of prime ideals. Set theory is concerned with a class of objects A, B, C, …, called sets, and a class of transformations f, g, h, …, called functions. With each function f is associated a domain, which is a set A, and a range or codomain, which is a set B. The notation f: A → B indicates that f is a function from the domain A to the codomain B. There is a strict distinction between the codomain of the function f and the image of the function f, which is simply the set of values taken by the function f. Thus, in particular, two functions can only be identical if not only their domains but also their codomains coincide. They are then, of course, identical if, and only if, they each take the same value at each element of the domain. Further, functions may be composed under certain conditions. To be precise, the functions f: A → B and g: B′ → C may be composed to yield a function, written gf or g ∘ f (see 339), if, and only if, B = B′. Further, the law of composition is associative (see 340), provided, of course, that the relevant compositions are defined. With each set A may be associated its identity function 1A: A → A. This function 1A has basic properties (see 341) provided the compositions are defined. Thus, 1A behaves very much like the integer 1 in the ring of integers, a fact that leads to the habit of dropping the subscript A and simply writing 1: A → A. It is an easy exercise to see that the identity function is entirely characterized by the properties of 1A (see 341). 
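The composition and identity laws just described can be checked concretely; a Python sketch with illustrative functions (not drawn from the original text):

```python
# Composition of functions: (g . f)(x) = g(f(x)).
def compose(g, f):
    return lambda x: g(f(x))

f = lambda x: x + 1        # f: A -> B
g = lambda x: 2 * x        # g: B -> C
h = lambda x: x - 3        # h: C -> D

identity = lambda x: x     # the identity function 1

# Associativity: h(gf) = (hg)f at every argument tested.
lhs = compose(h, compose(g, f))
rhs = compose(compose(h, g), f)
assert all(lhs(x) == rhs(x) for x in range(10))

# The identity behaves like a unit: f . 1 = f and 1 . f = f.
assert all(compose(f, identity)(x) == f(x) == compose(identity, f)(x)
           for x in range(10))
```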
That is to say, if the function u: A → A also satisfies the conditions fu = f, ug = g for all appropriate f, g, then indeed u = 1A. Of course set theory has more structure than this, but if the ideas described above are abstracted from set theory, a model for the notion of a category is obtained.

The notion of a category
A formal definition can be given in the following way: A category 𝒞 consists of three sets of data:
C1. There is a class of objects A, B, C, ….
C2. To each ordered pair of objects A, B in 𝒞, there is associated a set 𝒞(A, B) (see 342) called the set of morphisms, or maps or transformations, from A to B in 𝒞.
C3. To each ordered triple of objects A, B, C in 𝒞, there is associated a law of composition, or composition function (see 343), the image of (f, g), f ∈ 𝒞(A, B), g ∈ 𝒞(B, C), under this law of composition being written gf or g ∘ f, so that gf is a morphism from A to C.
These data satisfy the following three axioms, of which the first is in the nature of a convention, while the remaining two are more substantial:
C4. 𝒞(A1, B1) and 𝒞(A2, B2) are disjoint unless A1 = A2, B1 = B2.
C5. (Associative law) h(gf) = (hg)f, provided the compositions are defined.
C6. (Existence of identities) To each object A of 𝒞 there is associated a morphism 1A ∈ 𝒞(A, A) such that two equations hold (see 344) provided the compositions are defined.
As a first example, the category 𝒮 is considered, called the category of sets and functions, or simply the category of sets. Precisely, the objects of 𝒮 are sets A, B, C, …. A morphism in 𝒮 from the set A to the set B is merely a function with domain A and codomain (range) B. Thus either of two notations (see 345) can be adopted in an arbitrary category as a convenient representation of the statement f ∈ 𝒞(A, B). The law of composition in 𝒮 is simply the familiar composition of functions. The axioms are then certainly satisfied. Moreover, this example serves to explain the notation gf for the composition of f: A → B with g: B → C. 
It is usual to write functions on the left of their arguments, so that the image of a ∈ A under the composite of f and g appears as g(f(a)). A less conservative attitude, defying this tradition and leading to a happier notation, would be to write fg for the composite of f and g, especially in view of the natural notational convention (see 346). In this discussion, however, tradition is not defied. Just as in the special case of the category 𝒮, the equations relating to the identities (344) entirely determine the morphism 1A in 𝒞(A, A); 1 is often written for 1A. Some further examples of categories follow: finite sets and functions; groups and homomorphisms; Abelian groups and homomorphisms; rings and homomorphisms; subsets of Euclidean space of 3 dimensions and Euclidean movements; subsets of Euclidean space of n dimensions and continuous functions; topological spaces and continuous functions. The law of composition is not specified explicitly in describing these categories. This is the custom when the objects have underlying set-structure, the morphisms are functions of the underlying sets (transporting the additional structure), and the law of composition is merely ordinary function-composition. Indeed, sometimes even the specification of the morphisms is suppressed if no confusion would arise; thus one speaks of the category of groups. The examples given suggest a conceptual framework. For example, the concept of group may be regarded as constituting a first-order abstraction or generalization from various concrete, familiar realizations such as the additive group of integers, the multiplicative group of nonzero rationals, groups of permutations, symmetry groups, groups of Euclidean motions, and so on. Then, again, the notion of a category constitutes a second-order abstraction, the concrete realizations of which consist of such first-order abstractions as the category of groups, the category of rings, the category of topological spaces, and so on. 
It should not be supposed that, in every category, the objects are sets (probably with additional structure) and that the morphisms are certain preferred functions. One example serves to dispel this misconception. Let X be a set with a pre-ordering relation ≤. Thus the relation a ≤ b holds for certain elements a, b of X, and the following axioms are satisfied: a ≤ a; if a ≤ b and b ≤ c, then a ≤ c. For example, the integers may be ordered by size, or they may be pre-ordered by the divisibility condition: a|b if a is a factor of b. Then if (X, ≤) is a set with a pre-order, a category X is formed, the objects of which are the elements of X and such that X(a, b) is the single element a ≤ b if a ≤ b and is empty otherwise. There is evidently a unique law of composition, and the category axioms obviously hold. A morphism f: A → B in the category 𝒞 is said to be invertible, or a unit, or an equivalence, if there is a morphism g: B → A in 𝒞 with gf = 1A, fg = 1B. It is easy to prove that g is then invertible and is determined by f (g is the inverse of f, written g = f⁻¹), and that if A is said to be isomorphic to B when there exists an invertible f: A → B, then the relation of isomorphism is an equivalence relation on the objects of 𝒞. Thus every category carries automatically with it a notion of the isomorphism of objects, proper to that category. The concept gives a great gain in universality compared with traditional procedures, which involve the definition, as distinct concepts, of one-one correspondence between sets, isomorphism between groups, isomorphism between rings, homeomorphism between topological spaces, bi-continuous isomorphism between topological groups, order type of ordered sets, and so on. It is particularly offensive, from the categorical point of view, to define an isomorphism of groups f: G → H to be a homomorphism that is one-one and onto H; for an isomorphism of groups should be just an invertible morphism of the category of groups. 
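The divisibility pre-order mentioned above gives a concrete feel for those axioms; a minimal Python sketch (illustrative):

```python
# The divisibility pre-order on the positive integers: a "morphism"
# a -> b exists exactly when a divides b.
def divides(a, b):
    return b % a == 0

# Reflexivity: a <= a for every object a.
assert all(divides(a, a) for a in range(1, 20))

# Transitivity: a <= b and b <= c imply a <= c, which is what makes
# composition of morphisms possible.
assert divides(3, 6) and divides(6, 24) and divides(3, 24)

# Between any two objects there is at most one morphism, so the
# law of composition is forced and the category axioms hold.
```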
It is a theorem that, in this category, a morphism that is one-one and onto its codomain is an isomorphism. This theorem is false in the category of topological spaces, so that the definition masks the universality of the concept of isomorphism.

Functors

Elementary and multivariate algebra

Basic theory

Elementary operations
Addition, the operation which produces the sum of two numbers, has the following properties: The order of addition does not affect the result (see Box, equation 1); that is, addition is commutative. If three numbers are being added, parentheses can be placed around either the first two or the last two, the terms in the parentheses being added first and their sum added to the remaining term. The two results are, however, always equal (see 2). This second property is expressed by saying that addition is associative, and as a result no ambiguity is introduced by use of the expression a + b + c. There is a number 0, called zero, such that the sum of any number a and zero is a (see 3). Corresponding to each number a there is a number (-a), called the opposite of a, such that the sum of a and (-a) is zero (see 4). It follows that for any two numbers a, b there is a unique number x such that a + x = b, namely, x = b + (-a), more commonly written x = b - a. A set with an addition law satisfying the above properties, and such that all sums lie in the set, is called an additive group. The result of multiplication of two numbers a, b is usually denoted by ab, but sometimes, to avoid confusion, a · b is written. Multiplication of numbers has the following properties: The order of multiplication does not affect the result (commutativity; see 5), nor does the way the numbers are grouped in parentheses (associativity; see 6). The product of a number and the sum of two numbers equals the sum of the two appropriate products. This last is known as the distributive law (see 7). 
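The additive-group axioms listed above hold not only for ordinary numbers; a Python sketch checking them exhaustively for the integers modulo 12 (an illustrative finite example, not from the original text):

```python
# The integers modulo 12 under addition form an additive group.
n = 12
elems = range(n)

add = lambda a, b: (a + b) % n
neg = lambda a: (-a) % n        # the "opposite" of a

# Commutativity, associativity, zero, and opposites, each checked
# over every element of the finite system.
assert all(add(a, b) == add(b, a) for a in elems for b in elems)
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elems for b in elems for c in elems)
assert all(add(a, 0) == a for a in elems)
assert all(add(a, neg(a)) == 0 for a in elems)
```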
There exists a unique element 1 (unity) such that the product of any number a and 1 is a (see 8). Any set of elements in mathematics that has two laws of composition, addition and multiplication, such that (1) with addition as the law of combination it forms an additive group and (2) multiplication is commutative and associative and also distributive over addition, is called a (commutative) ring. It is because so much of modern algebra is concerned with groups and rings that it is important to realize that ordinary numbers have these properties. Because the numbers have a unity, the ring is a ring with unity. One other property of numbers that should be noted is that the product of 0 and a is 0 for all a, and that if ab = 0, either a or b is 0 (see 9). With this last property the ring with unity becomes what is called an integral domain. All accepted systems of numbers (integers, rational numbers, and real and complex numbers) have the above properties. If the system has the property that when a is different from zero there is a number x such that ax = 1 (see 10), then the nonzero numbers of the system form a group in which multiplication is the group operation, and the integral domain is a field. The systems consisting of the rational, real, or complex numbers form fields, but the system of integers does not.

Complex numbers and root extraction

The operations so far considered (namely, addition, subtraction, multiplication, and division) are known as the elementary operations of algebra. In another section the extension of these operations to more complex systems, such as polynomials (see below Polynomials and general functions), will be considered. One other operation on real or complex numbers must be considered here. A theorem states that if a is any positive real number and n is any positive integer, there exists a unique positive real number x such that x^n = a, in which x^n is the product of n factors each equal to x. 
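The distinctions drawn above among rings, integral domains, and fields can also be tested exhaustively in finite systems. In the sketch below (an illustration, not from the source), the integers modulo 6 form a commutative ring with unity but not an integral domain, since products of nonzero elements can be zero, whereas modulo 5 every nonzero element has a multiplicative inverse, so the system is a field.

```python
# Illustrative contrast: Z/6Z is a commutative ring with unity but
# not an integral domain (it has zero divisors), while in Z/5Z every
# nonzero element a has an x with a*x == 1, making Z/5Z a field.

def zero_divisors(n):
    """Pairs of nonzero elements (a, b) with a*b == 0 modulo n."""
    return [(a, b) for a in range(1, n) for b in range(1, n)
            if (a * b) % n == 0]

def has_all_inverses(n):
    """True if every nonzero a has some x with a*x == 1 modulo n."""
    return all(any((a * x) % n == 1 for x in range(1, n))
               for a in range(1, n))

print(zero_divisors(6))      # [(2, 3), (3, 2), (3, 4), (4, 3)]
print(has_all_inverses(6))   # False
print(has_all_inverses(5))   # True
```

The zero divisors 2 · 3 = 0 (mod 6) show exactly the failure of the property in equation 9, and their absence modulo any prime is what makes those systems fields.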
It is important to note that if a is an integer (rational number, negative real number) there is, in general, no integer (rational number, real number) x such that x^n = a. Thus this operation, called extracting the nth root, is not a satisfactory algebraic operation as it stands. To obtain a satisfactory operation, it is necessary to use complex numbers. A complex number, symbolized here by the Greek letter alpha, α, is any formal combination α = a + ib of two real numbers a, b, with addition and multiplication defined by certain appropriate rules (see 11). With addition and multiplication so defined, the complex numbers form a field. It should be noted that i · i = -1. Now α = a + ib may be considered. Because a^2 + b^2 is a positive real number, there exists a unique positive real number r such that r^2 = a^2 + b^2, r being called the amplitude of α. If a, b are not both zero, there is an angle, symbolized by the Greek letter theta, θ, uniquely defined to within a multiple of 2π (π, the Greek letter pi, being the ratio of the circumference of a circle to its diameter), such that a/r = cos θ, b/r = sin θ, and hence α can be written in terms of r and θ (see 12). Let the Greek letter beta, β, stand for any other complex number (see 12). Following the rule for multiplying complex numbers, and again using elementary trigonometry, the product of two complex numbers is given in terms of the product of the amplitudes and the sum of the two angles (see 13). It is now possible to consider the problem of finding a complex number z such that z^n = α. From the formula for the product, the amplitude s of z must satisfy s^n = r, and the angle ψ (psi) must satisfy nψ = θ to within a multiple of 2π; that is, nψ - θ = 2tπ for some integer t. Because r is a positive real number, there is a unique positive real number s satisfying s^n = r; ψ must have the form ψ = (θ/n) + (2tπ/n), and there are n possible values of ψ (to within a multiple of 2π) satisfying this condition, corresponding, for example, to t = 0, 1, 2, ..., n - 1. 
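The recipe just derived, taking the unique positive real s with s^n = r and the n angles ψ = θ/n + 2tπ/n, can be carried out numerically. The sketch below (an illustration, not from the source) uses Python's cmath module to extract the three cube roots of -8.

```python
# Sketch of the root-extraction recipe above: to solve z**n = alpha,
# write alpha in polar form with amplitude r and angle theta, take
# s = r**(1/n), and use the n angles psi = theta/n + 2*t*pi/n.
import cmath
import math

def nth_roots(alpha, n):
    r, theta = cmath.polar(alpha)      # alpha = r*(cos theta + i sin theta)
    s = r ** (1.0 / n)                 # unique positive real with s**n == r
    return [cmath.rect(s, theta / n + 2 * t * math.pi / n)
            for t in range(n)]

roots = nth_roots(-8, 3)               # the three cube roots of -8
for z in roots:
    assert abs(z ** 3 - (-8)) < 1e-9   # each root satisfies z**3 = -8

# Any two of the roots differ by a cube root of unity:
w = roots[1] / roots[0]
assert abs(w ** 3 - 1) < 1e-9
```

Note that -8 has no real cube root other than -2; the two remaining roots, 1 ± i√3, exist only once the field has been extended to the complex numbers.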
Hence there are n possible numbers z such that z^n = α. If z1 is any one of these n numbers, the others can all be written z1ω, in which ω (omega) is an nth root of unity; that is, ω^n = 1. Thus the problem of finding the nth roots of unity is basic in the theory of complex numbers. It is clear that for any complex number α there are n solutions of z^n = α, and this shows one of the advantages of complex numbers over the other kinds of numbers considered. In order to extract roots of integers, rational numbers, or real numbers, one notes that each of these number systems is a subsystem of the succeeding one and that they are all subsystems of the complex numbers. To find z such that z^n = a, when a is, say, a rational number, is to consider a as a complex number that just happens to belong to the subsystem of real numbers and then to solve the problem for complex numbers. This process is usually referred to as extending the field of rational numbers to the complex field.

Fields

Broadly speaking, a field is an algebraic system consisting of elements, commonly called numbers, in which the four familiar operations of addition, subtraction, multiplication, and division are universally defined (except for division by zero) and have all their usual properties. Much of the general theory of vectors and matrices can be developed over an arbitrary field. In particular, this is true of the general theory of simultaneous linear equations and of their solution by a method known as Gaussian elimination (see below). The study of various special fields also explains which geometric constructions, such as angle bisection and angle trisection, can be made with ruler and compass and which cannot. Moreover, finite fields of 2^n elements have been used to construct the best-known error-correcting codes. 
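As a sketch of Gaussian elimination over a field (an illustration, not from the source), the fragment below solves a pair of simultaneous linear equations over the rational field, using Python's fractions.Fraction so that every intermediate value stays inside the field and the answer is exact.

```python
# Sketch: Gaussian elimination over the rational field, with exact
# arithmetic via fractions.Fraction. Solves A x = b for a small system.
from fractions import Fraction

def gauss_solve(A, b):
    """Solve A x = b by forward elimination and back-substitution.
    Every operation used (add, subtract, multiply, divide) is a
    field operation, so the method works over any field."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(bi)]
         for row, bi in zip(A, b)]
    for col in range(n):
        # find a row with a nonzero pivot and swap it into place
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    x = [Fraction(0)] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# 2x + y = 5 and x + 3y = 10 give x = 1, y = 3
print(gauss_solve([[2, 1], [1, 3]], [5, 10]))
```

Because only field operations appear, the same routine would work verbatim with elements of any other field in place of Fraction.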
Although many particular fields (including especially the rational, real, and complex fields, finite fields, and algebraic number fields) were intensively studied in the 17th, 18th, and early 19th centuries, the idea of investigating all possible fields seems not to have been conceived until 1910, when the German mathematician Ernst Steinitz proposed a systematic scheme for classifying them.

Three familiar fields

To illustrate the meaning of the first statement of this section, the three most familiar fields are discussed, namely, the rational field, the real field, and the complex field.

Groups

Groups, in mathematics, are systems of elements with a composition satisfying certain laws (these are given below). The elements may be operations, for example, the rotations of a sphere. The symmetries of a geometrical figure are best described as a group. The ornamental wall designs of the ancient Egyptians exhibit all possible combinations of symmetries. Euclid studied the properties of the regular polygons and the five regular solids. Not until the late 18th and 19th centuries, however, were groups recognized as mathematical systems. The French mathematician Joseph-Louis Lagrange was one of the first to consider them. Another French mathematician, Augustin-Louis Cauchy, began a study of permutation groups. In studying the solution of polynomial equations, the Norwegian mathematician Niels Henrik Abel showed that in general the equation of fifth degree cannot be solved by radicals. Then the French mathematician Évariste Galois, using groups systematically, showed that the solution of an equation by radicals is possible only if a group associated with the equation has certain specific properties; such groups are now called solvable groups. The group concept is now recognized as one of the most fundamental in all of mathematics and in many of its applications. 
The German mathematician Felix Klein considered geometry to be the study of those properties of a space left unchanged by a certain specific group of transformations. In topology, geometric entities are considered equivalent if one can be transformed into another by an element of a continuous group.

Definition

A group, denoted G, is a nonempty set of elements with a composition defined for every ordered pair x, y of its elements. If the composition is written as the product xy, then the following laws hold:

G1. The associative law: (xy)z = x(yz) for all x, y, z of G.

G2. Existence of an identity: there is an identity element 1 such that 1x = x1 = x for every x of G.

G3. Existence of an inverse: for every x of G there is an element x^(-1) such that x^(-1)x = xx^(-1) = 1, in which 1 is the identity element.
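The laws G1 through G3 can be checked mechanically for any finite group. The sketch below (an illustration, not from the source) verifies them for the six permutations of three objects under composition, the symmetric group S3, which is also the smallest group in which the order of composition matters.

```python
# Mechanical check of G1-G3 for a finite group: the six permutations
# of (0, 1, 2) under composition (the symmetric group S3).
from itertools import permutations

G = list(permutations(range(3)))

def compose(p, q):
    """The product pq: apply q first, then p."""
    return tuple(p[q[i]] for i in range(3))

identity = (0, 1, 2)

# G1: the associative law (xy)z == x(yz)
assert all(compose(compose(x, y), z) == compose(x, compose(y, z))
           for x in G for y in G for z in G)

# G2: existence of an identity with 1x == x1 == x
assert all(compose(identity, x) == x and compose(x, identity) == x
           for x in G)

# G3: existence of an inverse with x^(-1) x == x x^(-1) == 1
def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

assert all(compose(inverse(x), x) == identity and
           compose(x, inverse(x)) == identity for x in G)

print("S3 satisfies G1-G3")
```

Note that no commutative law appears among G1 through G3; indeed compose(p, q) and compose(q, p) differ for some permutations p, q, so a group need not be commutative.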
Britannica English vocabulary (Britannica English Dictionary). 2012