Large-scale Brain-inspired Systems
By now, it must be clear that the programming paradigm of a brain-inspired computational system would be unlike any that typical processors support. However, it might be possible to run some of these programming paradigms as software on conventional hardware. The best-known example is Artificial Neural Networks (ANNs), which have received a great deal of attention in recent years, thanks in large part to researchers such as Geoffrey Hinton, whose new approach to training these models revolutionized the field.
ANNs only loosely resemble the brain: they do not communicate with spikes, nor does their training algorithm (referred to as back-propagation) have any direct root in biology. Furthermore, ANNs were not originally conceived as programming paradigms for brain-inspired hardware. Nevertheless, their remarkable pattern-recognition capability has encouraged a number of researchers to adopt these models as the programming paradigm for their brain-inspired hardware.
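As a concrete illustration, the sketch below trains a tiny fully connected network on the XOR problem using plain gradient-descent back-propagation. It is a minimal NumPy toy under arbitrary assumptions (layer sizes, learning rate, mean-squared-error loss), not the network or training setup of any of the hardware systems discussed in this section.
\begin{verbatim}
# Minimal ANN with back-propagation (illustrative toy, NumPy only).
import numpy as np

rng = np.random.default_rng(0)

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of four units; sizes are arbitrary choices.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # Forward pass: activations are continuous values, not spikes.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input
    # layer (mean-squared-error loss, sigmoid derivative a*(1-a)).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
\end{verbatim}
Note that every quantity in this loop is a continuous value updated by exact gradients; nothing corresponds to a spike or to a biological learning rule, which is precisely the sense in which ANNs only loosely resemble the brain.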
In the past decade, there have been several studies on brain-inspired hardware with promising results. The Stanford Neurogrid \cite{silver2007neurotech} is one of the earliest brain-inspired computing devices designed to carry out brain simulations. It uses sub-threshold analog circuits, operating in parallel, to emulate ion-channel activity, and digital communication, operating serially, to softwire synaptic connections. Neurogrid can simulate a million neurons connected by six billion synapses while using a hundred thousand times less energy than a supercomputer. This interdisciplinary study laid the groundwork, and set several standards, for its successors.
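For readers unfamiliar with what emulating a neuron entails, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron in software. This is a deliberately crude caricature: Neurogrid emulates ion-channel dynamics in analog circuitry, whereas the LIF model is a far simpler abstraction, and every constant below is an illustrative choice rather than a Neurogrid parameter.
\begin{verbatim}
# Leaky integrate-and-fire neuron: a software caricature of the
# spiking dynamics that neuromorphic hardware emulates in circuitry.
dt = 1e-3          # simulation time step: 1 ms
tau = 20e-3        # membrane time constant: 20 ms
v_rest = -70e-3    # resting potential: -70 mV
v_thresh = -50e-3  # spike threshold: -50 mV
v_reset = -70e-3   # post-spike reset potential

v = v_rest
spikes = []
for step in range(1000):                        # simulate 1 s
    # Injected drive (arbitrary units) between t = 0.2 s and 0.8 s.
    i_in = 25e-3 if 200 <= step < 800 else 0.0
    # Leaky integration: v decays toward rest while summing input.
    v += dt / tau * (v_rest - v + i_in)
    if v >= v_thresh:                           # threshold crossing
        spikes.append(step * dt)                # emit a spike ...
        v = v_reset                             # ... and reset
print(len(spikes), "spikes, e.g. at t =",
      [round(t, 3) for t in spikes[:3]], "s")
\end{verbatim}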
The next breakthrough came with the Heidelberg HiCANN system \cite{Schemmel_2010}, which, in an attempt to model neural tissue, introduced an integrated software/hardware framework in which a model is described once and executed transparently on a neuromorphic hardware system. HiCANN employs wafer-scale, above-threshold analog circuits, which enable it to run 10,000 times faster than the same number of biological neurons. The system supports complex neuron models and can realize up to ten thousand synapses per neuron.
Not long after, a follow-up study at Cambridge \cite{Moore_2012} returned to a fully digital circuit running on an FPGA, capable of modeling 64 thousand neurons with 64 million synapses per board. The design targeted scientific simulations with high-bandwidth, low-latency demands. Its significant contribution was the idea of extensibility, realized through a reconfigurable communication topology that makes it possible to scale the simulated neural network with ease.
It was not until 2014 that the next breakthrough in brain-inspired hardware arrived: researchers at IBM published a paper introducing TrueNorth, a brain-inspired computational system with fascinating properties and capabilities.
TrueNorth
Design and Architecture
The IBM TrueNorth chip \cite{Merolla_2014} is a silicon implementation of an ANN. Its creators argue that the path we have pursued in processor design over the past decades stands in complete contrast to how the brain operates. To speed up processors, we have kept making transistors smaller and clock rates higher; the neurons of the human brain, however, fire at rates orders of magnitude slower than the clock rate of a typical PC processor.
The rate at which a transistor's power consumption falls as it shrinks has not kept up with the rate at which the number of transistors per unit area grows. This imbalance has made modern processors more power-hungry than ever. Brain tissue, by contrast, has a minimal power density and requires far less energy to handle its tasks. This contrast can be better observed in Figure \ref{527925}a. The researchers also point to a fundamental problem of the von Neumann architecture, on which all modern computers are based: the cost of moving data from the memory (the rough equivalent of synapses in the brain) to the arithmetic core (the rough equivalent of neurons in the brain). This issue is depicted in Figure \ref{527925}b.
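A back-of-envelope calculation makes the data-movement argument concrete. The energy figures below are rough, commonly quoted orders of magnitude for a modern CMOS process (not measurements of TrueNorth or any particular chip), but the conclusion is robust to the exact values.
\begin{verbatim}
# Rough, order-of-magnitude energy costs per operation.
E_FLOP = 1e-12     # ~1 pJ for a 32-bit floating-point operation
E_DRAM = 640e-12   # ~640 pJ to fetch a 32-bit word from off-chip DRAM

# One multiply-accumulate per synaptic event, with both operands
# fetched from off-chip memory, as a von Neumann machine might do:
energy_compute = 2 * E_FLOP    # one multiply + one add
energy_movement = 2 * E_DRAM   # fetch weight + fetch activation
print(f"movement / compute ~ {energy_movement / energy_compute:.0f}x")
# -> moving the data costs hundreds of times more energy than using it.
\end{verbatim}
Co-locating memory and computation, as the brain does with synapses and neurons, attacks exactly this ratio.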