a computer program that operates in a manner analogous to the natural neural network in the brain. The theoretical basis of neural networks was developed in 1943 by the neurophysiologist Warren McCulloch of the University of Illinois and the mathematician Walter Pitts of the University of Chicago. In 1954 Belmont Farley and Wesley Clark of the Massachusetts Institute of Technology succeeded in running the first simple neural network.

The primary appeal of neural networks is their ability to emulate the brain's pattern-recognition skills. Among commercial applications of this ability, neural networks have been used to make investment decisions, recognize handwriting, and even detect bombs.

A distinguishing feature of neural networks is that knowledge is distributed throughout the network itself rather than being explicitly written into the program. The network then learns through exposure to various situations. Neural networks are able to accomplish this because they are built of processing elements (artificial neurons) grouped into layers. The input layer of artificial neurons receives information from the environment, and the output layer communicates the response; between these layers may be one or more hidden layers (with no direct contact with the environment), where most of the information processing takes place.

The output of a neural network depends on the weights of the connections between neurons in different layers. Each weight indicates the relative importance of a particular connection. If the total of all the weighted inputs received by a particular neuron surpasses a certain threshold value, the neuron will send a signal to each neuron to which it is connected in the next layer. Neural networks may be used, for example, to process loan applications, in which the inputs may represent loan application data and the output whether or not to grant a loan.
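The weighted-sum-and-threshold behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not any production model: the weights, inputs, and threshold are invented for the loan-application example, and real networks learn their weights rather than having them set by hand.

```python
# One artificial neuron with a hard threshold: it "fires" (outputs 1)
# only when the weighted sum of its inputs surpasses the threshold.

def neuron_fires(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Hypothetical loan-application inputs, each scaled to the range 0..1:
# applicant income, credit score, and existing debt ratio.  Each weight
# encodes the relative importance of that connection; debt counts against.
inputs = [0.8, 0.9, 0.3]
weights = [0.5, 0.4, -0.6]
print(neuron_fires(inputs, weights, threshold=0.5))  # weighted sum 0.58 > 0.5, prints 1
```

A layer is simply many such neurons reading the same inputs, each with its own weight vector; the outputs of one layer become the inputs of the next.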
Two modifications of this simple feedforward neural network account for the growth of commercial applications. First, a network can be equipped with a feedback mechanism, known as a back-propagation algorithm, that enables it to adjust the connection weights back through the network, training it in response to representative examples. Second, recurrent neural networks can be developed, involving signals that proceed in both directions as well as within and between layers, and these networks are capable of vastly more complicated patterns of association. (In fact, for large networks it can be extremely difficult to follow exactly how an output was determined.)

Training neural networks typically involves supervised learning, where each training example contains the values of both the input data and the desired output. As soon as the network is able to perform sufficiently well on additional test cases, it can be used to classify new cases. For example, researchers at the University of British Columbia have trained a feedforward neural network with temperature and pressure data from the tropical Pacific Ocean and from North America to predict future global weather patterns. In contrast, certain neural networks are trained through unsupervised learning, in which a network is presented with a collection of input data and given the goal of discovering patterns without being told what specifically to look for. Such a neural network might be used, for example, to discover clusters of customers in a marketing database during a process known as data mining.

Vladimir Zwass
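The supervised training loop described above can be sketched as follows. This is a minimal pure-Python illustration, assuming sigmoid units and squared-error gradient descent; the network is a tiny two-input, two-hidden, one-output layout, and the OR truth table stands in for a set of representative training examples — none of this reflects any particular commercial system.

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def sigmoid(x):
    # smooth stand-in for the hard threshold, so weights can be adjusted gradually
    return 1.0 / (1.0 + math.exp(-x))

# Connection weights: input->hidden (2x2) and hidden->output (2), plus biases.
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    """Feedforward pass: input layer -> hidden layer -> output neuron."""
    h = [sigmoid(sum(w_ih[j][i] * x[i] for i in range(2)) + b_h[j])
         for j in range(2)]
    o = sigmoid(sum(w_ho[j] * h[j] for j in range(2)) + b_o)
    return h, o

# Supervised training set: each example pairs inputs with the desired output.
# Here the logical-OR truth table plays the role of representative examples.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

lr = 0.5  # learning rate: how far each weight is nudged per example
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Back-propagation: compute the error signal at the output, then
        # propagate it backward to assign blame to the hidden connections.
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w_ho[j] -= lr * d_o * h[j]
            b_h[j] -= lr * d_h[j]
            for i in range(2):
                w_ih[j][i] -= lr * d_h[j] * x[i]
        b_o -= lr * d_o

for x, t in data:
    _, o = forward(x)
    print(x, round(o))  # after training, rounding typically recovers the table
```

The same loop scales to larger layers and more examples; what changes in real applications is the size of the weight matrices and the training data, not the shape of the forward/backward passes.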
NEURAL NETWORK
Britannica English vocabulary, 2012