October 6, 2017
The most important property of a neural network is its ability to learn from environmental data and to improve its performance as a result of training. Performance improves over time according to certain rules. A neural network is trained through an iterative process of adjusting its synaptic weights and thresholds. Ideally, the network gains knowledge of its environment at each iteration of the learning process.
So many different activities are associated with the concept of learning that it is difficult to define the process unambiguously. How learning is understood also depends on one's point of view, which is why no single precise definition has emerged: learning as seen by a psychologist, for example, is fundamentally different from learning as seen by a schoolteacher. From the perspective of a neural network, the following definition can be used:

Learning is a process in which the free parameters of a neural network are adapted through stimulation by the environment in which the network is embedded. The type of learning is determined by the manner in which those parameter changes take place.
This definition of the neural network learning process implies the following sequence of events:
• The neural network receives stimuli from the external environment.
• As a result of this stimulation, the free parameters of the neural network are changed.
• Because of the change in its internal structure, the neural network responds to stimuli in a new way.
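The three-step cycle above can be sketched in code. This is a hypothetical toy example (a single threshold neuron trained with a perceptron-style update); the stimulus, target response, and learning rate are all assumed values, not anything prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=3)                  # free parameters: synaptic weights
b = 0.0                                 # free parameter: threshold (bias)
lr = 0.1                                # learning rate (an assumed constant)

x = np.array([-1.0, 0.5, -0.25])        # a stimulus from the environment
target = 1.0                            # desired response to this stimulus

def respond(w, b, x):
    """Step 1: the network's response to a stimulus."""
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

steps = 0
while respond(w, b, x) != target and steps < 100:
    error = target - respond(w, b, x)
    w = w + lr * error * x              # step 2: the free parameters change
    b = b + lr * error
    steps += 1

# Step 3: the network now responds to the same stimulus in a new way.
```

Each pass through the loop is one iteration of the learning process: a stimulus arrives, the parameters are adjusted, and the response to that stimulus changes.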
A well-defined set of rules for solving the learning problem is called a learning algorithm. As one might guess, there is no universal learning algorithm suitable for every neural network architecture; instead there is a toolkit of many learning algorithms, each with its own merits. Learning algorithms differ from one another in how they adjust the synaptic weights of the neurons. Another distinguishing characteristic is the way the network being trained relates to the outside world; in this context one speaks of a learning paradigm, tied to the model of the environment in which the network operates.
There are two conceptual approaches to training neural networks: supervised learning and unsupervised learning.
Supervised neural network training assumes that for each input vector in the training set there is a required value of the output vector, called the target; together they form a training pair. The network's weights are adjusted until, for every input vector, the deviation of the output vector from the target falls to an acceptable level.
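A minimal sketch of this idea, assuming a single linear neuron trained with the delta (LMS) rule — one common weight-adjustment rule, not the only possibility. The training pairs, learning rate, and tolerance are all toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training pairs: each input vector has a required target value
# (here the target is simply the sum of the inputs, a toy task).
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
T = np.array([0.0, 1.0, 1.0, 2.0])

W = rng.normal(scale=0.1, size=2)     # synaptic weights
b = 0.0                               # threshold (bias)
lr = 0.1                              # learning rate (assumed)
tol = 0.01                            # acceptable deviation (assumed)

for epoch in range(10_000):
    worst = 0.0
    for x, t in zip(X, T):
        y = np.dot(W, x) + b          # network output for this input vector
        W += lr * (t - y) * x         # delta rule: reduce the deviation
        b += lr * (t - y)
        worst = max(worst, abs(t - y))
    if worst < tol:                   # every pair within tolerance: stop
        break
```

Training ends exactly as the text describes: the weights keep changing until the deviation of each output from its target is acceptably small.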
Unsupervised neural network training is a far more plausible learning model in terms of the biological roots of artificial neural networks. The training set consists only of input vectors. The training algorithm adjusts the network's weights so that consistent output vectors are obtained, i.e. so that sufficiently similar input vectors produce the same output.
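This can be sketched with simple competitive learning, one common unsupervised algorithm (the text does not prescribe a specific one). The clustered toy data, number of output units, and learning rate are all assumptions made for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training set: input vectors only, no targets — two loose clusters.
X = np.vstack([rng.normal(loc=0.0, scale=0.1, size=(20, 2)),
               rng.normal(loc=1.0, scale=0.1, size=(20, 2))])

# One weight vector per output unit, initialized from two random inputs
# (a common trick that avoids "dead" units that never win).
W = X[rng.choice(len(X), size=2, replace=False)].copy()
lr = 0.2                                                # learning rate (assumed)

for epoch in range(50):
    for x in rng.permutation(X):
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest unit responds
        W[winner] += lr * (x - W[winner])                  # pull it toward the input

# After training, sufficiently similar input vectors select the same
# output unit — the consistent outputs described in the text.
```

No targets were ever supplied; the weights self-organized purely from the structure of the input vectors.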