The goal of artificial intelligence (AI) is to teach a computer how to think. So it is unsurprising that many of the strides made in AI, and specifically in machine learning — a subset of AI that uses large collections of data to teach a computer rules by discovering patterns — have been inspired by biological knowledge. Artificial neural networks (ANNs), a type of machine learning model, are designed around how neurons communicate. However, they are only loosely based on the brain and stray from biological truth. Another type of neural network, the spiking neural network (SNN), follows the architecture of the brain much more closely, and this fidelity gives it distinct advantages.
In the brain, information is conveyed through neuron-to-neuron communication in the form of discrete action potentials, or electrical spikes (brief, sharp changes in membrane voltage). These signals pass through synapses — tiny junctions between the axons and dendrites of adjacent neurons. The spikes encode information from external and internal stimuli, and the type of stimulus and the area of the brain determine the temporal pattern of spiking, which can be regular, irregular, or even intricate.
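The spiking behavior described above is commonly abstracted as a leaky integrate-and-fire neuron: the membrane potential integrates incoming current, decays over time, and emits a discrete spike when it crosses a threshold. Below is a toy sketch of that idea; the parameter values (`threshold`, `leak`) are illustrative choices, not values from any particular model.

```python
import numpy as np

def lif_simulate(input_current, threshold=1.0, leak=0.95, v_reset=0.0):
    """Simulate a single leaky integrate-and-fire neuron.

    At each time step the membrane potential leaks, integrates the
    incoming current, and emits a spike (1) when it crosses the
    threshold, after which it resets. Parameter values are illustrative.
    """
    v = v_reset
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration of input current
        if v >= threshold:          # threshold crossing -> discrete spike
            spikes.append(1)
            v = v_reset             # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# A constant drive yields a regular spike train; a fluctuating drive
# would produce the irregular patterns mentioned above.
regular = lif_simulate(np.full(20, 0.3))
```

With the constant drive shown, the potential crosses threshold every fourth step, so the output is a perfectly periodic spike train.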
In an ANN, the modeled neurons do not pass signals as discrete events, and connectivity between neurons is not usually determined by proximity. SNNs differ on both counts: information travels between modeled neurons as discrete spikes, and typically only nearby neurons are connected.
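The contrast can be made concrete with two minimal neuron sketches: an ANN neuron emits one continuous activation per forward pass, while an SNN neuron integrates binary spike inputs over time steps and fires only at threshold crossings. The weights, threshold, and input sizes below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=4)              # hypothetical weights shared by both neurons

def ann_neuron(x, w):
    """ANN neuron: a single continuous activation (weighted sum + ReLU)."""
    return max(0.0, float(w @ x))

def snn_neuron(spike_train, w, threshold=0.5):
    """SNN neuron: integrates 0/1 spike vectors over time and emits a
    discrete spike whenever the accumulated potential crosses threshold."""
    v, out = 0.0, []
    for x_t in spike_train:         # x_t is a binary spike vector at time t
        v += float(w @ x_t)
        if v >= threshold:
            out.append(1)
            v = 0.0                 # reset after firing
        else:
            out.append(0)
    return out

x = np.array([1.0, 0.0, 1.0, 0.0])
a = ann_neuron(x, w)                            # one real-valued output
spikes_in = rng.integers(0, 2, size=(10, 4)).astype(float)
s = snn_neuron(spikes_in, w)                    # a 0/1 sequence over time
```

The ANN neuron's output is a single number with no notion of time; the SNN neuron's output is itself a spike sequence, which is what lets timing information propagate through the network.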
SNNs offer two main advantages: increased efficiency and a greater ability to use spatio-temporal data — information about both space and time. Because of their structure, SNNs process separate chunks of input independently, yet the order of those chunks and their timing relative to one another is not lost, because the processing itself unfolds over time. The network can therefore capture the temporal patterns of spikes. ANNs cannot: information is fed in all at once, so the timing of when information appears is discarded. Recurrent neural networks do account for time dependency, but they must save information from previous inputs and reuse it repeatedly, increasing complexity. In an SNN, time is accounted for automatically, saving considerable complexity and processing power. Additionally, SNNs sharply reduce the amount of redundant information processed: discretized spike sequences and sparser connectivity prevent redundant updates, since new information is passed to a modeled neuron only when there is a distinct change rather than continuously.
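The efficiency claim — updating only on distinct changes — can be illustrated with a simple send-on-delta rule: an event is emitted only when the input has moved more than some threshold since the last transmitted value. This is a generic sketch, not any specific SNN's update rule, and the threshold is an arbitrary choice.

```python
import numpy as np

def event_driven_updates(signal, delta=0.1):
    """Count updates performed by an event-driven (spike-based) scheme
    versus a dense scheme that reprocesses every sample.

    An event fires only when the signal moves more than `delta` from the
    last transmitted value (a simple send-on-delta rule; `delta` is an
    illustrative threshold).
    """
    last_sent = signal[0]
    events = 0
    for x in signal[1:]:
        if abs(x - last_sent) > delta:
            events += 1
            last_sent = x
    return events, len(signal)      # event-driven vs dense update counts

# A mostly flat signal with one step change: one event versus 100 samples.
sig = np.concatenate([np.zeros(50), np.ones(50)])
events, dense = event_driven_updates(sig)
```

For the step signal above, the dense scheme performs 100 updates while the event-driven scheme performs exactly one — the moment the signal actually changes.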
The advanced use of spatio-temporal data extends to using SNNs as accurate models of brain activity. In a study led by Kumarasinghe, a researcher at the University of Moratuwa, a brain-inspired SNN (BI-SNN) was shown to successfully predict continuous muscle activity and upper-limb kinematics. The team recorded muscle activity of the right arm with sensors that captured x, y, and z positions; azimuth; elevation; and roll angles. They also used electroencephalography (EEG) to detect electrical activity in the brain while each participant picked up and put down a small object.
Both the muscle activity and the EEG recordings were then passed through threshold-based spike-encoding algorithms. The EEG data served as the input, since it captures the electrical activity of neuronal communication, and the muscle activity served as the expected output, since movement is the result of that communication. The BI-SNN clustered the spike activity by anatomical location and learned spike-timing rules to build spatio-temporal associations with distinct brain regions. This was achieved by building the BI-SNN from Spike Pattern Association Neurons (SPAN), neuron models that learn to emit spikes at desired times. The goal of the BI-SNN was to predict the onset and trajectory of movements given EEG signals. Its results were evaluated on the similarity between predicted and actual values of 29 different motor signals, such as elevation of the thumb and wrist and roll of the index finger. The majority of these signals had similarity scores of around 0.6, meaning the predicted signals resembled the true motor signals.
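A threshold-based spike encoder of the general kind the study describes can be sketched as delta modulation: emit a +1 spike when the signal has risen past a threshold since the last spike, and a -1 spike when it has fallen past it. The exact algorithm in the paper differs; this generic scheme and its threshold are assumptions for illustration only.

```python
import numpy as np

def delta_encode(signal, threshold=0.05):
    """Generic threshold-based spike encoding (delta modulation).

    Emits +1 when the signal rises by more than `threshold` since the
    last emitted spike, -1 when it falls by more than `threshold`,
    and 0 otherwise. The threshold value is illustrative.
    """
    baseline = signal[0]
    spikes = np.zeros(len(signal), dtype=int)
    for t in range(1, len(signal)):
        if signal[t] - baseline > threshold:
            spikes[t] = 1           # upward crossing -> ON spike
            baseline = signal[t]
        elif baseline - signal[t] > threshold:
            spikes[t] = -1          # downward crossing -> OFF spike
            baseline = signal[t]
    return spikes

# Encode a slow sine wave (a stand-in for one continuous EEG channel).
t = np.linspace(0, 2 * np.pi, 200)
spikes = delta_encode(np.sin(t))
```

The rising half of the wave produces a burst of ON spikes and the falling half a burst of OFF spikes, turning a continuous channel into the discrete spike sequences the network consumes.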
The ability to more closely recreate the true architecture of the brain gives SNNs an efficiency that could vastly expand the abilities of artificial intelligence — a substantial advantage over ANNs and other widely used networks. Although SNNs still have drawbacks, mostly in training complexity, they open the door to solving much more complex problems.
Sci Rep (2021). DOI: 10.1038/s41598-021-81805-4
Front Neurosci (2021). DOI: 10.3389/fnins.2021.651141
Sci Rep (2021). DOI: 10.1038/s41598-021-98448-0
Front Comput Neurosci (2018). DOI: 10.3389/fncom.2018.00048