The network of the human nervous system is composed of nodes (for example, neurons) that are connected by links (for example, synapses).
(2002),[1] Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011),[2] Gerstner, W., & Kistler, W. (2002),[3] and Rumelhart, D. E., McClelland, J. L., and the PDP Research Group (1986),[4] among others.
Once a perspective and connectivity approach is chosen, the models are developed at the microscopic (ion and neuron), mesoscopic (functional or population), or macroscopic (system) level.
The nervous system consists of networks of neurons and synapses that connect to and control tissues, and that also shape human thought and behavior.
The brain and the neural network can be considered an integrated, self-contained firmware system that includes hardware (organs), software (programs), memory (short-term and long-term), databases (centralized and distributed), and a complex network of active elements (such as neurons, synapses, and tissues) and passive elements (such as parts of the visual and auditory systems) that carry information within the body and into and out of it.
To accomplish this, one needs to model its components and functions and validate its performance against real-life behavior.
The basic structural unit of the neural network is the connectivity of one neuron to another via an active junction called a synapse.
The variables of the equation are some or all of the neurobiological properties of the entity being modeled, such as the dimensions of the dendrite or the rate of action potentials along the axon of a neuron.
The mathematical equations are solved using computational techniques and the results are validated with either simulation or experimental processes.
The neuronal signal comprises a stream of short electrical pulses of about 100 millivolt amplitude and about 1 to 2 millisecond duration (Gerstner, W., & Kistler, W. (2002)[3] Chapter 1).
The ionic composition of the fluid inside and outside the cell maintains the cell membrane at a resting potential of about −65 millivolts.
Depending on the stimulus received by the dendrites, the soma may generate one or more well-separated action potentials, or a spike train.
If the stimulus remains above the threshold level and the output is a spike train, the model is called an integrate-and-fire (IF) neuron model.
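The integrate-and-fire idea can be sketched in a few lines: the membrane potential integrates the input current and leaks back toward rest, and whenever it crosses a threshold the model emits a spike and resets. All parameter values below are illustrative defaults, not taken from the text.

```python
def simulate_lif(i_input, t_max=0.1, dt=1e-4, tau=0.02,
                 r_m=1e7, v_rest=-0.065, v_thresh=-0.050):
    """Leaky integrate-and-fire sketch: integrate the input current,
    emit a spike and reset to rest whenever the membrane potential
    crosses threshold. Units: seconds, volts, amperes, ohms."""
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        # Membrane equation: dV/dt = (-(V - V_rest) + R*I) / tau
        dv = (-(v - v_rest) + r_m * i_input) / tau
        v += dv * dt
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_rest  # reset after each spike
    return spike_times
```

With a suprathreshold current (here, any input with R·I above the 15 mV gap between rest and threshold) the model produces a regular spike train; a subthreshold current produces no spikes at all, which is the behavior the text describes.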
The concept of the artificial neural network (ANN) was introduced by McCulloch, W. S. & Pitts, W. (1943)[16] for models based on the behavior of biological neurons.
Norbert Wiener (1961)[17] gave this new field the popular name of cybernetics, whose principle is the interdisciplinary relationship among engineering, biology, control systems, brain functions, and computer science.
However, this approach was not suitable for symbolic processing, nondeterministic computation, dynamic execution, parallel distributed processing, or the management of extensive knowledge bases, all of which are needed for biological neural network applications; the direction of mind-like machine development therefore shifted toward learning machines.
Research and development are progressing in both artificial and biological neural networks including efforts to merge the two.
The second phase is the limbic, or paleo-mammalian, brain, which performs the four functions needed for animal survival: fighting, feeding, fleeing, and fornicating.
"Modular models of the brain aid the understanding of a complex system by decomposing it into structural modules (e.g., brain regions, layers, columns) or functional modules (schemas) and exploring the patterns of competition and cooperation that yield the overall function."
While the model may indeed be analyzed at this top level of modular decomposition, we need to further decompose basal ganglia, BG, as shown in Figure 3(c) if we are to tease apart the role of dopamine in differentially modulating (the 2 arrows shown arising from SNc) the direct and indirect pathways within the basal ganglia (Crowley, M. (1997)[20]).
Models developed using NSL are documented in Brain Operation Database (BODB) as hierarchically organized modules that can be decomposed into lower levels.
The four main features of an ANN are topology, data flow, types of input values, and forms of activation (Meireles, M. R. G. (2003),[21] Munakata, T. (1998)[22]).
The multilayer perceptron (MLP) is the most popular of these types; it is generally trained with the back-propagation-of-error algorithm.
We can equate the routers at the nodes of a telecommunication network to the neurons in MLP technology, and the links to synapses.
Models for more complex neurons containing other types of ions can be derived by adding to the equivalent circuit additional battery and resistance pairs for each ionic channel.
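The equivalent-circuit construction described here amounts to summing one battery–conductance pair per ionic channel in the membrane equation C·dV/dt = −Σᵢ gᵢ(V − Eᵢ) + I_ext. A minimal sketch, with illustrative (not measured) conductance and reversal-potential values:

```python
def membrane_potential(channels, i_ext=0.0, c_m=1e-6,
                       v0=-0.065, t_max=0.5, dt=1e-5):
    """Integrate C*dV/dt = -sum_i g_i*(V - E_i) + I_ext, the equivalent
    circuit of a membrane patch with one battery (E_i, reversal potential)
    and conductance (g_i = 1/R_i) pair per ionic channel.

    channels: list of (g_i in siemens, E_i in volts) pairs.
    Returns the membrane potential after t_max seconds."""
    v = v0
    for _ in range(int(t_max / dt)):
        i_ion = sum(g * (v - e) for g, e in channels)
        v += dt * (-i_ion + i_ext) / c_m
    return v

# A potassium-dominated leak alone settles near its battery voltage;
# adding a small sodium battery/conductance pair pulls the resting
# potential toward the sodium reversal potential.
leak_only = membrane_potential([(1e-5, -0.075)])
with_na = membrane_potential([(1e-5, -0.075), (1e-6, 0.055)])
```

Adding a channel is just appending another `(g, E)` pair to the list, which mirrors adding another battery–resistance branch to the circuit.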
Although some neurons are directly connected to each other, information is transmitted at most synapses by a chemical process across a synaptic cleft.
The next level of complexity is the stream of action potentials that is generated, whose pattern carries the coded information of the signal being transmitted.
Another phenomenon of spike-train generation occurs in Type II neurons, in which firing begins at the threshold with a discontinuous jump to a non-zero frequency.
What is important for understanding the functions of the nervous system is how the message is coded and transported by the action potential in the neuron.
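One simple way to frame this coding question is the distinction between rate coding (the mean spike count per unit time) and temporal coding (the pattern of interspike intervals). The helper names below are hypothetical, a sketch of the two readouts rather than any standard API:

```python
def firing_rate(spike_times, window):
    """Mean firing rate in spikes/s over an observation window (seconds):
    the quantity a pure rate code would carry."""
    return len(spike_times) / window

def interspike_intervals(spike_times):
    """Intervals between successive spikes; their pattern can carry
    information beyond the mean rate (temporal coding)."""
    return [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])]
```

A regular and an irregular spike train can have the same firing rate while their interspike-interval patterns differ, which is why both readouts matter for decoding.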
Shankle, W. R., Hara, J., Fallon, J. H., and Landing, B. H. (2002)[27] describe the application of neuroanatomical data of the developing human cerebral cortex to computational models.