Introduction
In 1965, Gordon Moore published a paper in which he described a doubling every year in the number of transistors on a dense integrated circuit, and almost a decade later he revised this estimate to a doubling every two years \cite{moore1975progress}. This forecast proved accurate for several decades and became the guideline for the semiconductor industry's long-term planning, which in turn made Moore's observation "a self-fulfilling prophecy" \cite{furber2016brain}. The smaller transistors get, the cheaper, faster, and more energy-efficient they become. This trend is responsible for most of the progress in computer technology \cite{moore1965cramming} and has allowed computing to become prevalent in every aspect of our lives.
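For concreteness, the revised observation can be expressed as a simple exponential growth law; the symbols $N_0$ and $t$ are our own illustrative notation rather than Moore's:
\begin{equation}
N(t) \approx N_0 \cdot 2^{\,t/2},
\end{equation}
where $N_0$ is the transistor count of a reference chip and $t$ is the number of years elapsed, so the count doubles roughly every two years.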
As with any exponential growth, this pattern could not continue forever. Transistors have become as small as molecules, and it will not be long before they approach the fundamental limit set by the size of an atom. Even before transistors reach this physical limit, there are other concerns about shrinking them further. Smaller transistors are less reliable and less predictable in their performance, and quantum-mechanical phenomena such as quantum tunneling might begin to affect them adversely \cite{seabaugh2013tunneling}. Furthermore, the power density of integrated circuits is approaching infeasible levels, a problem referred to as the "heat wall" \cite{pop2006heat}. These factors suggest that the progress of Moore's Law is slowing down, and we need to look for new methods and approaches if we want to sustain this rate of growth.
Because of these power constraints, the industry switched from the ever-faster clock rates delivered by Moore's Law to multicore processors \cite{furber2016brain}. This transition allowed computers to keep up with Moore's Law over the past decade, but some researchers believe that it is not enough: if we want faster processors, we need to rethink the way we process information and let go of conventional computing methods entirely. The brain is undoubtedly the nearest biological analog to the computer and, therefore, the best source of inspiration for a computational system if we are to mimic nature. However, replicating what the brain does would first require a deep understanding of its inner workings.
Research into the brain is not new. Through the centuries, humanity has associated the brain with the most advanced technology of its era. René Descartes, a French philosopher and scientist who lived in the late sixteenth and early seventeenth centuries, believed that the nervous system of living beings relies on a hydraulic fluid which he called "animal spirits." Descartes believed this fluid to be responsible for carrying motor and sensory information throughout the body. After the invention of the steam engine in the eighteenth century, Sigmund Freud, the founder of psychoanalysis, suggested that the brain is similar to a steam engine and attributed illnesses such as headaches to pressure building up in the brain \cite{marcus}. Our generation has grown accustomed to analogizing the brain to an information-processing unit that takes in the vast sensory information the body records and outputs the commands needed to handle all of the body's motor and cognitive tasks.
For over a century, neuroscientists have been trying to understand the brain in a bottom-up fashion, while psychologists have pursued a top-down approach for even longer. However, the brain spans many orders of magnitude in scale, and there remains a significant gap between the scales manageable with the bottom-up approach and those that can be resolved by top-down imaging techniques. Somewhere within this gap lie the essential scales for understanding the information-processing principles of the brain. This problem has not stopped scientists from taking inspiration from what is known about the brain and using it to design new computational systems, in hopes of overcoming the limitations of conventional computing methods. In turn, these computational models have enabled us to hypothesize about and examine possible underlying mechanisms at work in the brain that we cannot observe directly with any tool at hand.
In this paper, we discuss the functionality of the brain from a computer engineer's perspective and provide a comparison between the brain and typical computer processors to better understand the challenges at hand. We then introduce a few of the most promising large-scale computational models designed in recent years with inspiration from the human brain.