COMPUTER SCIENCE

the study of computers, including their design (architecture) and their uses for computation, data processing, and systems control. The field of computer science includes engineering activities, such as the design of computers and of the hardware and software that make up computer systems, and theoretical, mathematical activities, such as the design and analysis of algorithms, performance studies of systems and their components by means of techniques like queueing theory, and the estimation of the reliability and availability of systems by probabilistic techniques. Since computer systems are often too large and complicated for a designer to predict failure or success without testing, experimentation is incorporated into the development cycle. Computer science was established as a discipline in the early 1960s; its roots lie mainly in the fields of mathematics (e.g., Boolean algebra) and electrical engineering (e.g., circuit design). It is generally considered a discipline separate from computer engineering, although the two overlap extensively in the area of computer architecture, which is the design and study of computer systems.

The major subdisciplines of computer science have traditionally been (1) architecture (including all levels of hardware design, as well as the integration of hardware and software components to form computer systems), an area that overlaps extensively with computer engineering; (2) software (the programs, or sets of instructions, that tell a computer how to carry out tasks), here subdivided into software engineering, programming languages, operating systems, information systems and databases, artificial intelligence, and computer graphics; and (3) theory, which includes computational methods and numerical analysis on the one hand and data structures and algorithms on the other.

Additional reading
Anthony Ralston and Edwin D. Reilly (eds.), Encyclopedia of Computer Science, 4th ed. (1997), is a comprehensive reference work. D.A. Patterson and J.L. Hennessy, Computer Organization and Design, 2nd ed. (1998), is a readable book on computer architecture, covering everything from the basics through large-scale parallel computers. Andrew S. Tanenbaum, Computer Networks, 3rd ed. (1996), contains a thorough discussion of computer networks and protocols. George F. Coulouris and Jean Dollimore, Distributed Systems: Concepts and Design, 2nd ed. (1994), provides an introduction to networks and their protocols in addition to discussing the architecture of distributed systems and such issues as protection and security. Roger S. Pressman, Software Engineering: A Practitioner's Approach, 4th ed. (1997), provides a guide to the software engineering process, from the management of large software development projects through the various stages of development, including up-to-date information on CASE tools. Robert W. Sebesta, Concepts of Programming Languages, 4th ed.
(1999), contains a good discussion of the principles of programming languages, some history, and a survey of the types of languages with examples of each. Abraham Silberschatz, James L. Peterson, and Peter B. Galvin, Operating System Concepts, 5th ed. (1994), is an updated classic text. Ramez Elmasri and Shamkant B. Navathe, Fundamentals of Database Systems, 3rd ed. (1999), is a good reference on databases. M. Tamer Özsu and Patrick Valduriez, Principles of Distributed Database Systems, 2nd ed. (1999), covers the extension of database issues to the distributed case. D. Hearn and P. Baker, Computer Graphics, 2nd ed. (1994), is a good starting point for further reading on computer graphics. Michael T. Heath, Scientific Computing: An Introductory Survey (1997), is a good source for those interested in numerical methods and analysis, but it presupposes some mathematical background. Harry R. Lewis and Larry Denenberg, Data Structures & Their Algorithms (1991), is a good reference for these topics. Geneva G. Belford

Software

Software engineering
Computer programs, the software that is becoming an ever-larger part of the computer system, are growing more and more complicated, requiring teams of programmers and years of effort to develop. As a consequence, a new subdiscipline, software engineering, has arisen. The development of a large piece of software is perceived as an engineering task, to be approached with the same care as the construction of a skyscraper, for example, and with the same attention to the cost, reliability, and maintainability of the final product. The software-engineering process is usually described as consisting of several phases, variously defined but in general including: (1) identification and analysis of user requirements, (2) development of system specifications (both hardware and software), (3) software design (perhaps at several successively more detailed levels), (4) implementation (actual coding), (5) testing, and (6) maintenance.
Even with such an engineering discipline in place, the software-development process is expensive and time-consuming. Since the early 1980s, increasingly sophisticated tools have been built to aid the software developer and to automate the development process as much as possible. Such computer-aided software engineering (CASE) tools span a wide range of types, from those that carry out routine coding when given an appropriately detailed design in some specification language to those that incorporate an expert system to enforce design rules and eliminate software defects prior to the coding phase. As the size and complexity of software have grown, the concept of reuse has become increasingly important in software engineering, since it is clear that extensive new software cannot be created cheaply and rapidly without incorporating existing program modules (subroutines, or pieces of computer code). One of the attractive aspects of object-oriented programming (see below Programming languages) is that code written in terms of objects is readily reused. As with other aspects of computer systems, reliability (usually rather vaguely defined as the likelihood that a system will operate correctly over a reasonably long period of time) is a key goal of the finished software product. Sophisticated techniques for testing software have therefore been designed. For example, a large software product might be deliberately seeded with artificial faults, or "bugs"; if they are all discovered through testing, there is a high probability that most actual faults likely to cause computational errors have been discovered as well. The need for better-trained software engineers has led to the development of educational programs in which software engineering is either a specialization within computer science or a separate program.
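The fault-seeding idea described above can be turned into a simple estimate of how many real bugs remain: if testing uncovered a known fraction of the seeded faults, assume it uncovered roughly the same fraction of the real ones. The following sketch is illustrative only; the function name and the capture-recapture style of estimate are assumptions, not something specified in the text.

```python
def estimate_total_faults(seeded, seeded_found, real_found):
    """Estimate the total number of real faults from a fault-seeding
    experiment: `seeded` artificial bugs were planted, testing found
    `seeded_found` of them plus `real_found` genuine bugs.  The same
    detection rate is assumed to apply to both populations."""
    if seeded_found == 0:
        raise ValueError("no seeded faults found; detection rate unknown")
    detection_rate = seeded_found / seeded
    return real_found / detection_rate

# Example: 100 faults seeded, 80 of them found, along with 40 real bugs.
# Estimated real-bug total: 40 / 0.8 = 50, so about 10 remain undiscovered.
print(estimate_total_faults(100, 80, 40))  # 50.0
```

The estimate is only as good as its core assumption: that seeded faults are as hard to find as real ones.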
The recommendation that software engineers, like other engineers, be licensed or certified is gaining increasing support, as is the momentum toward the accreditation of software-engineering degree programs.

Programming languages

Early languages
Programming languages are the languages in which a programmer writes the instructions that the computer will ultimately execute. The earliest programming languages were assembly languages, not far removed from the binary-encoded instructions directly executed by the machine hardware. Users soon (beginning in the mid-1950s) invented more convenient languages.

Theory

Computational methods and numerical analysis
The mathematical methods needed for computations in engineering and the sciences must be transformed from the continuous to the discrete in order to be carried out on a computer. For example, the computer integration of a function over an interval is accomplished not by applying integral calculus to the function expressed as a formula but rather by approximating the area under the function's graph by a sum of geometric areas obtained from evaluating the function at discrete points. Similarly, the solution of a differential equation is obtained as a sequence of discrete points determined, in simplistic terms, by approximating the true solution curve by a sequence of tangential line segments. When discretized in this way, many problems can be recast in the form of an equation involving a matrix (a rectangular array of numbers) that is solvable with techniques from linear algebra. Numerical analysis is the study of such computational methods.
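The two discretizations just described can be sketched in a few lines of code. This is a minimal illustration, not a production numerical library: the trapezoidal rule stands in for "summing geometric areas at discrete points," and Euler's method stands in for "following tangential line segments"; the function names and step counts are assumptions.

```python
def integrate(f, a, b, n=1000):
    """Approximate the integral of f over [a, b] by summing the areas
    of n trapezoids obtained by evaluating f at discrete points."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

def euler(f, y0, t0, t1, n=1000):
    """Approximate the solution of y' = f(t, y), y(t0) = y0, as a sequence
    of discrete points obtained by following tangent-line segments."""
    h = (t1 - t0) / n
    t, y = t0, y0
    points = [(t, y)]
    for _ in range(n):
        y += h * f(t, y)       # step along the tangent line at (t, y)
        t += h
        points.append((t, y))
    return points

# Integrate x^2 over [0, 1]; the exact value is 1/3.
print(integrate(lambda x: x * x, 0.0, 1.0))
# Solve y' = y with y(0) = 1 up to t = 1; the exact value is e ≈ 2.71828.
print(euler(lambda t, y: y, 1.0, 0.0, 1.0)[-1][1])
```

Shrinking the step size improves accuracy at the cost of more function evaluations, which is exactly the accuracy-versus-cost trade-off numerical analysis studies.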
Several factors must be considered when applying numerical methods: (1) the conditions under which the method yields a solution, (2) the accuracy of the solution, (3) whether, since many methods are iterative, the iteration is stable (in the sense of not exhibiting eventual error growth), and (4) how long (in terms of the number of steps) it will generally take to obtain a solution of the desired accuracy. The need to study ever-larger systems of equations, combined with the development of large and powerful multiprocessors (supercomputers) that allow many operations to proceed in parallel by assigning them to separate processing elements, has sparked much interest in the design and analysis of parallel computational methods that may be carried out on such machines.

Data structures and algorithms
A major area of study in computer science has been the storage of data for efficient search and retrieval. The main memory of a computer is linear, consisting of a sequence of memory cells that are numbered 0, 1, 2, and so on, in order. Similarly, the simplest data structure is the one-dimensional, or linear, array, in which array elements are numbered with consecutive integers and array contents may be accessed by the element numbers. Data items (a list of names, for example) are often stored in arrays, and efficient methods are sought to handle the array data. Search techniques must address, for example, how a particular name is to be found. One possibility is to examine the contents of each element in turn. If the list is long, it is important to sort the data first; in the case of names, to alphabetize them. Just as the alphabetizing of names in a telephone book greatly facilitates their retrieval by a user, the sorting of list elements significantly reduces the search time required by a computer algorithm as compared with a search on an unsorted list. Many algorithms have been developed for sorting data efficiently.
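The payoff of sorting described above can be made concrete: on an unsorted list a search must examine elements one by one, while on a sorted list it can repeatedly halve the remaining range, just as a reader opens a telephone book near the right page. A minimal sketch (the function names are illustrative):

```python
def linear_search(items, target):
    """Examine each element in turn: up to len(items) comparisons."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """On sorted data, halve the search range each step:
    about log2(len(sorted_items)) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

names = ["Dana", "Alice", "Carol", "Bob"]
names.sort()                            # alphabetize first, like a phone book
print(names)                            # ['Alice', 'Bob', 'Carol', 'Dana']
print(binary_search(names, "Carol"))    # 2
```

For a million names, the sorted search needs about 20 comparisons where the unsorted one may need a million, which is why the one-time cost of sorting is usually worth paying.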
These algorithms have application not only to data structures residing in main memory but, even more importantly, to the files that constitute information systems and databases. Although data items are stored consecutively in memory, they may be linked together by pointers (essentially, memory addresses stored with an item to indicate where the next item or items in the structure are found) so that the items appear to be stored differently than they actually are. An example of such a structure is the linked list, in which noncontiguously stored items may be accessed in a prespecified order by following the pointers from one item in the list to the next. The list may be circular, with the last item pointing to the first, or each item may have pointers in both directions to form a doubly linked list. Algorithms have been developed for efficiently manipulating such lists: searching for, inserting, and removing items. Pointers also provide the ability to link data in other ways. Graphs, for example, consist of a set of nodes (items) and linkages between them (known as edges). Such a graph might represent a set of cities and the highways joining them or the layout of circuit elements and connecting wires on a VLSI chip. Typical graph algorithms include solutions to traversal problems, such as how to follow the links from node to node (perhaps searching for a node with a particular property) in such a way that each node is visited only once. A related problem is the determination of the shortest path between two given nodes. (For background on the mathematical theory of networks, see the article graph theory.) A problem of practical interest in designing any network is to determine how many broken links can be tolerated before communications begin to fail. Similarly, in VLSI chip design it is important to know whether the graph representing a circuit is planar, that is, whether it can be drawn in two dimensions without any links crossing each other.
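The traversal and shortest-path problems above can be illustrated with breadth-first search, one standard traversal that visits each node at most once and, in a graph whose edges are all of equal length, finds a path with the fewest edges. The sketch below is illustrative; the city graph and function name are assumptions, not from the text.

```python
from collections import deque

def bfs_shortest_path(graph, start, goal):
    """Breadth-first traversal: visit each node at most once and return
    a shortest path (fewest edges) from start to goal, or None if the
    goal is unreachable (e.g., after too many links are broken)."""
    visited = {start}
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:   # each node is enqueued only once
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None

# Cities (nodes) joined by highways (edges), as an adjacency list:
highways = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
print(bfs_shortest_path(highways, "A", "E"))  # ['A', 'B', 'D', 'E']
```

Deleting an edge from the adjacency list and rerunning the search is a simple way to probe the broken-link question: the network fails between two cities when the function starts returning None.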