Synthetic nervous system

The functional subnetwork approach (FSA) enables the direct analytical tuning of dynamical networks that perform specific operations within the nervous system, without the need for global optimization methods like genetic algorithms and reinforcement learning.[1]

More conventional artificial neural networks rely on training phases where they use large data sets to form correlations and thus “learn” to identify a given object or pattern.

This makes it difficult to alter a trained network's function without simply starting over, or to extract biological meaning from it except in specialized cases.

In contrast, the FSA makes it possible to directly assemble networks that perform basic operations, like addition or subtraction, as well as dynamical operations like differentiation and integration.
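As a hedged illustration of this kind of direct assembly, the sketch below tunes an addition subnetwork analytically: two presynaptic neurons excite one postsynaptic leaky integrator, and the synaptic conductance is chosen so that a fully active input produces a matching output. The neuron model, parameters (R, dE, k), and tuning rule here are illustrative assumptions, not values taken from the cited work.

```python
# Illustrative parameters: operating voltage range R, synaptic reversal
# potential dE (relative to rest), and desired per-input gain k = 1.
R, dE, k = 20.0, 200.0, 1.0

# Analytical tuning: choose g so that a presynaptic neuron at V_pre = R
# holds the postsynaptic neuron at k * R in steady state.
g = k * R / (dE - k * R)

def steady_state(v_a, v_b):
    """Steady-state potential of a leaky integrator (leak conductance 1)
    receiving two excitatory synapses with linear activation V_pre / R."""
    s = g * v_a / R + g * v_b / R      # total synaptic activation
    return s * dE / (1.0 + s)          # solves 0 = -V + s * (dE - V)

v_sum = steady_state(10.0, 10.0)       # approximates 10 + 10
```

The sum is only approximate: because conductances interact divisively, the error grows as the total drive approaches the top of the operating range, which is why a large dE is used here.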

Recent advances in neuroscience tools and techniques have clarified the cellular and biophysical mechanisms of these networks, as well as their operation during behavior in complex environments.

To this end, SNSs primarily model neurons as leaky integrators, which are reasonable approximations of sub-threshold passive membrane dynamics.

The leaky integrator also models non-spiking interneurons which contribute to motor control in some invertebrates (locust,[19] stick insect,[20] C. elegans [21]).
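A minimal sketch of the leaky-integrator dynamics described above, integrated with forward Euler: the membrane potential relaxes exponentially toward a steady state set by the applied current and leak conductance. All parameter values and units are illustrative assumptions.

```python
# Leaky-integrator neuron: C_m * dV/dt = -G_m * V + I_app,
# with V measured in mV relative to rest. Illustrative parameters.
C_m, G_m = 5.0, 1.0          # membrane capacitance and leak conductance
I_app = 10.0                 # constant applied current
dt, t_end = 0.01, 50.0       # Euler step and duration (ms)

V = 0.0                      # start at rest
t = 0.0
while t < t_end:
    dV = (-G_m * V + I_app) / C_m    # leak pulls V to rest; input drives it up
    V += dt * dV
    t += dt

# After ~10 membrane time constants (tau = C_m / G_m = 5 ms), V has
# settled near its steady state I_app / G_m = 10 mV.
```

This sub-threshold, non-spiking behavior is the baseline on which the SNS method layers extra mechanisms only where they are needed.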

Consequently, the SNS method can accommodate demand-driven complexity, adding features only where they are specifically needed.

While these additions may increase computational cost, they grant the system the ability to perform a wider array of interesting behaviors.

Roboticist Cynthia Breazeal used the term SNS to refer to her biologically-inspired hierarchical model of cognition, which included systems for low-level sensory feature extraction, attention, perception, motivation, behavior, and motor output.[23]

Using this framework, Kismet could respond to people by abstracting its sensory information into motivation for responsive behaviors and the corresponding motor output.

In 2008, Thomas R. Insel, MD, then the director of the National Institute of Mental Health, was quoted in an American Academy of Neurology interview calling for a “clear moon shot…[to motivate] a decade of new discovery [and] basic research on brain anatomy”.[25]

As part of that interview, Dr. Insel suggested building a “synthetic nervous system” as one such motivational moon shot to drive ongoing and future research.

A dissertation from Prof. Joseph Ayers' lab at Northeastern University uses a similar term in its title but never explicitly defines it.

A 2017 research article by Prof. Alexander Hunt, Dr. Nicholas Szczecinski, and Prof. Roger Quinn uses the term SNS and implicitly defines it as “neural [or] neuro-mechanical models…composed of non-spiking leaky integrator neuron models”.[5]

Similar to the work by Ayers et al., Hunt et al. apply the term SNS to a simplified dynamical simulation of neurons and synapses used in the closed-loop control of robotic hardware.

ANNs and CNNs are only loosely related to SNSs, in that they share the same general building blocks of neurons and synapses, though the methods used to model each component vary between the network types.

While predicting the responses of a complicated network can be difficult, the dynamics of each node are relatively simple: each is a system of first-order ordinary differential equations (as opposed to fractional-order derivatives).

This allows for the modelling of modulatory neural pathways since the synapses can alter the net membrane conductance of a postsynaptic neuron without injecting current.
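The divisive effect of such a conductance-only synapse can be seen in steady state: a modulatory synapse whose reversal potential equals the postsynaptic rest potential injects no net current at rest, yet adds conductance and so scales down the response to other inputs. The parameter values below are illustrative assumptions.

```python
# Steady state of a leaky integrator with an excitatory synapse and an
# optional modulatory synapse whose reversal potential sits at rest (0 mV).
# Illustrative parameters.
G_m = 1.0                      # leak conductance
g_exc, dE_exc = 0.2, 100.0     # excitatory synapse, fully active
g_mod, dE_mod = 1.0, 0.0       # modulatory synapse, reversal at rest

def steady_state(modulated):
    g2 = g_mod if modulated else 0.0
    # Solves 0 = -G_m*V + g_exc*(dE_exc - V) + g2*(dE_mod - V) for V.
    return (g_exc * dE_exc + g2 * dE_mod) / (G_m + g_exc + g2)

v_plain = steady_state(False)      # excitation alone
v_modulated = steady_state(True)   # same excitation, gain divided down
```

The modulatory term contributes nothing to the numerator (its driving force is zero at rest) but enlarges the denominator, so it acts as a pure gain control rather than a current source.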

It was previously mentioned that additional ion channels could be incorporated to elicit more interesting behaviors from non-spiking neuron models.

In the context of neuroscientific models, this is useful for applications such as pattern generators, where a neuron’s potential should rise rapidly and remain elevated until inhibited by another neural signal or an applied current.

Spiking neurons can also be modeled in a computationally efficient manner without sacrificing the rich behaviors exhibited in biological neural activity.[39]

The Izhikevich model can produce spiking behaviors approximately as plausible as those of the Hodgkin-Huxley model, but with computational efficiency comparable to the integrate-and-fire method.

To accomplish this, Izhikevich reduces the Hodgkin-Huxley model to a two-dimensional set of ordinary differential equations via bifurcation methods.

This enables chattering, bursting, and continuous spiking with frequency adaptation which constitute a richer array of behaviors than the basic integrate-and-fire method can produce.
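The two-dimensional system is compact enough to state directly. The equations and the regular-spiking parameter set below follow Izhikevich's published simple model; the integration step and input current are illustrative choices.

```python
# Izhikevich simple model: two ODEs plus an after-spike reset.
# Regular-spiking parameters (a, b, c, d) from the original model.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0        # membrane potential (mV), recovery variable
I = 10.0                       # constant input current (illustrative)
dt = 0.25                      # Euler step (ms, illustrative)

spike_times = []
for k in range(int(1000.0 / dt)):              # simulate 1 s
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                              # spike cutoff and reset
        spike_times.append(k * dt)
        v, u = c, u + d
```

Changing only the four parameters (e.g. c = -50, d = 2 for chattering) reproduces the other firing patterns mentioned above, which is the source of the model's efficiency: no ionic conductances need to be simulated.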

This equation can be used to tune synapse conductances for specific points in the network’s operation where the neurons are at a steady state or have a known or designed membrane potential.

In this way it is possible to intentionally and directly set the state of the network during key moments in its operation sequence so that it produces a desired action or behavior.

The denominator cannot be greater than or equal to zero here, as division by zero is undefined and dividing by a positive number gives a negative conductance, which is impossible.
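As an illustration of this kind of steady-state tuning, a conductance for a single excitatory transmission synapse can be derived analytically and checked by simulation. The neuron model, parameters, and tuning rule below are illustrative assumptions: the postsynaptic steady state 0 = -V + g·(V_pre/R)·(dE - V) is solved for g at the design point V_pre = R, V = k·R.

```python
# Illustrative parameters: operating range R, synaptic reversal potential
# dE (relative to rest), desired gain k.
R, dE, k = 20.0, 100.0, 0.5

# Solving k*R = g * (dE - k*R) for g. The denominator must stay
# positive (dE > k*R), otherwise g is undefined or negative.
g = k * R / (dE - k * R)

# Verify by simulating the postsynaptic leaky integrator to steady state
# with the presynaptic neuron fully active (V_pre / R = 1).
v, dt = 0.0, 0.01
for _ in range(int(100.0 / dt)):
    v += dt * (-v + g * 1.0 * (dE - v))
v_post = v                       # should settle near k * R = 10 mV
```

Because the design point pins down the steady state at one operating condition, the network's state can be set deliberately at key moments without simulating a training process.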

A system of two neurons, however, is capable of integration when linked via mutually inhibitory transmission synapses with a marginally stable equilibrium curve.

The mutual inhibition means that the activation levels are maintained instead of leaking away, and the system state changes continuously for the duration of an applied stimulus (integration).
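A sketch of this two-neuron integrator under the simple leaky-integrator model used above. The parameters are illustrative assumptions; the tuning g = -G_m·R/dE is one way, in this model, to make the inhibitory coupling exactly cancel the leak along one mode, producing a continuum of equilibria on which the voltage difference stores the integral of the input.

```python
# Two mutually inhibitory leaky integrators tuned so the voltage
# difference neither grows nor decays on its own. Illustrative parameters.
G_m, C_m, R = 1.0, 5.0, 20.0
dE = -100.0                    # inhibitory synaptic reversal (rel. to rest)
g = -G_m * R / dE              # makes the difference mode marginally stable
I_tonic = 20.0                 # tonic drive holding both neurons mid-range

def act(v):
    """Piecewise-linear synaptic activation."""
    return min(max(v / R, 0.0), 1.0)

def run(v1, v2, i1, duration, dt=0.01):
    for _ in range(int(duration / dt)):
        dv1 = -G_m * v1 + g * act(v2) * (dE - v1) + I_tonic + i1
        dv2 = -G_m * v2 + g * act(v1) * (dE - v2) + I_tonic
        v1 += dt * dv1 / C_m
        v2 += dt * dv2 / C_m
    return v1, v2

v1, v2 = run(9.5, 9.5, 0.0, 100.0)     # settle onto the equilibrium curve
v1, v2 = run(v1, v2, 0.5, 40.0)        # stimulus: the difference ramps up
d_after_pulse = v1 - v2                # ~ (0.5 / C_m) * 40 ms = 4 mV
v1, v2 = run(v1, v2, 0.0, 60.0)        # stimulus removed: value is held
d_held = v1 - v2
```

While the stimulus is applied, the difference grows at a rate proportional to the input; when it is removed, the state parks on the nearest equilibrium and the accumulated value persists, which is the integrating behavior described above.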

An example of a synthetic nervous system composed of functional subnetworks. This network controls one joint of a praying mantis-inspired robot (Figure 7 of Szczecinski et al.[1]).
Visual representation of a single synaptic connection between two neurons and the corresponding synaptic conductance dynamics (Figure 1A of Szczecinski et al.[1]).
Arithmetic subnetworks for addition (A), subtraction (B), division (C), and multiplication (D), and their corresponding contour plots for a visual representation of their behavior (Figure 2 of Szczecinski et al.[1]).
Differentiator subnetwork and corresponding example plot of its behavior in the time domain (Figure 3A,B of Szczecinski et al.[1]).
Integrator subnetwork and corresponding example plot of its behavior in the time domain (Figure 5A,B of Szczecinski et al.[1]).