Artificial neural networks (ANNs) are an attempt to simulate the human brain within a computer system.
Both biological and artificial neural networks are built from atomic modules – the ‘neurons’. The human brain contains up to 150 billion neurons with hundreds of classified subtypes. Each neuron usually has one axon, which extends from the cell body and conducts electrical signals to other cells. Thousands of interconnected neurons form separate groups called networks.
Biological neurons can be classified into ‘sensory neurons’, which provide the information for perception and motor coordination; ‘motor neurons’, which transmit signals to muscles and glands; and ‘interneurons’, which relay information, protect other neurons, or connect different parts of the brain. These neurons use both linear and non-linear relationships to acquire and transfer information and to build memory. The balance of inhibitory and excitatory (stimulating) signals exchanged across synapses between the neurons determines the activity of the system as a whole.
A typical ANN, by contrast, rarely has more than 1,000 artificial neurons forming a single network. In artificial intelligence engineering, ANNs are designed to analyze large volumes of non-linear, non-stationary, or chaotic data that cannot easily be modeled or resolved by linear programs. In ANN models based on biological systems, the synapses that interconnect the neural network are modeled as ‘weights’ giving the strength of each connection: positive weights represent excitatory connections, negative weights inhibitory ones. Each neuron computes a linear combination of its weighted inputs and passes the result through an activation function that controls the amplitude of the output.
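This weighted-sum-plus-activation computation can be sketched in a few lines of Python. The function name, input values, and the choice of a sigmoid activation are illustrative assumptions, not a prescribed design:

```python
import math

def neuron_output(inputs, weights, bias):
    """Compute a single artificial neuron's output.

    Excitatory connections carry positive weights, inhibitory ones
    negative weights. The weighted sum of the inputs is passed through
    a sigmoid activation function that bounds the output's amplitude.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Example: two excitatory connections and one inhibitory connection
print(neuron_output([1.0, 0.5, 1.0], [0.8, 0.4, -0.6], 0.0))
```

Whatever the raw weighted sum, the activation function keeps the output within a fixed range, which is what "controls the amplitude of the output" means in practice.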
In general, ANNs have three key capabilities: recognizing patterns in data (similarities and differences), filtering non-linear information (generalizing from data), and adapting to changing conditions – their non-linear nature makes neural network processing elements very flexible.
ANNs acquire knowledge by training on sample data according to a ‘learning rule’, and store that knowledge in their inter-neuronal connections, i.e. the synaptic weights of the system.
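As a minimal illustration of such a learning rule, the classic perceptron rule adjusts each weight in proportion to the prediction error on the sample data; after training, everything the network has "learned" lives in the weights and bias. The function name, learning rate, and the AND task below are illustrative assumptions:

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Train a single artificial neuron with the perceptron learning
    rule; the acquired knowledge is stored entirely in the returned
    weights and bias (the 'synaptic weights' of the system)."""
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            z = sum(x * w for x, w in zip(inputs, weights)) + bias
            pred = 1 if z > 0 else 0
            err = target - pred                      # prediction error
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    return weights, bias

# Learn the logical AND function from sample data
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```

Nothing about the AND function is programmed explicitly; the behavior emerges solely from repeated weight adjustments driven by the sample data.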
Because of their parallel architecture and redundant information coding, ANNs have a high error tolerance: the system's capabilities are largely retained even after major network damage.
Building ANNs, however, remains challenging because their architecture differs markedly from traditional computing technologies, which makes processing large neural networks time-consuming and expensive.