The journey towards artificial intelligence

K.B. Jinesh

The mobile phone I had 20 years ago looked like a walkie-talkie with an antenna projecting from the top. It had a memory of 128 KB, so I could save only a few important contact numbers and their barest details. It had no camera and no way to play music or watch videos; it was meant only for phone calls. Today, the mobile phone is much more than an instrument for making phone calls. Just as computers were initially meant to compute but today do a lot more, mobile phones have become communication, entertainment and workstation devices all rolled into one. We do not know what the role of mobile phones will be in another 10 years, because technology is entering the era of Artificial Intelligence.

The main reason behind this fast pace of information technology is that we have learned to create and manipulate electronic intelligence to a certain extent. Any electronic instrument that has memory and logic can be called an intelligent system, because if we give it instructions, it obeys and carries them out. Be it biological or electronic, intelligence is the coordination between memory and logic, and the efficiency of an intelligent system depends on how fast the coordination is between memory and logical processing. For instance, there may be occasions when you meet someone and feel you have seen them somewhere, but do not remember exactly when and where. You have the memory, and you know that you know them; but the coordination between space and time (the context in which you had seen them) is missing. When memory, logic and their coordination work together, there is intelligence.

In 1945, Professor John von Neumann, then working with the Moore School of Electrical Engineering in Philadelphia, published a report on a modern digital computer architecture based on this same idea of intelligence. According to this design, a computer has four components – an input to give instructions, memory to store information and programs, a processing unit and an output. Using a mouse or a keypad (and now even our voice), we give input in the form of instructions to perform certain tasks. Based on these instructions, the central processing unit (CPU) fetches the appropriate programs and information from the memory, processes the information, and provides the results as output. This design is still used in most computers, even though 77 years have passed since it was proposed.
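For readers who like to see things concretely, here is a minimal Python sketch of this four-part design. The tiny instruction set (LOAD, ADD, STORE, PRINT) and the little program are invented purely for illustration, not taken from any real machine:

```python
# A toy von Neumann machine: input (a program), memory, a processing unit, output.
memory = {"a": 5, "b": 7, "result": None}   # memory holds both data and results
program = [                                  # input: a sequence of instructions
    ("LOAD", "a"),
    ("ADD", "b"),
    ("STORE", "result"),
    ("PRINT", "result"),
]

accumulator = 0                              # the CPU's single working register
for op, arg in program:                      # the CPU fetches one instruction at a time
    if op == "LOAD":
        accumulator = memory[arg]            # fetch data from memory
    elif op == "ADD":
        accumulator += memory[arg]           # process it in the CPU
    elif op == "STORE":
        memory[arg] = accumulator            # write the result back to memory
    elif op == "PRINT":
        print(memory[arg])                   # output: 12
```

Notice that every single step must shuttle data between the memory and the processing unit; that shuttling is exactly where the trouble described next comes from.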

Though it was a successful model that helped create electronic intelligence, this architecture has a fundamental problem: the memory and the processing unit are physically separated. Every cycle of fetching data from memory, processing it in the CPU and storing the result back in memory therefore incurs a time delay. This is called latency. More recent computers have processors that work very fast, and memory capacity has increased dramatically, but the problem remains, because instructions and data travel between the processing units and the memory through buses, or bus lines – the electronic routes that connect them. Information exchange through the buses is slow because data can only move sequentially, one item at a time. Therefore, regardless of how fast the processor is and how big the memory is, the entire process is crippled by the limited rate at which the bus lines can transfer data. Meanwhile, we live in a world where the density of information we must process increases daily. Look at the information you handle every hour of the day in the form of phone calls, emails, Google searches, teaching materials, social media and so on. Our demands are increasing exponentially, while the capability to move the data remains the same. This problem is called the “von Neumann bottleneck”, and it is a major limitation when we deal with very large quantities of data every day.
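A back-of-the-envelope calculation makes the bottleneck vivid. The numbers below are assumed, representative figures chosen only for illustration, not measurements of any particular machine:

```python
# Why a faster CPU alone does not help: the bus dominates the total time.
data_bits = 8e9            # 1 GB of data to process, expressed in bits
cpu_rate = 1e12            # bits the CPU could process per second (assumed)
bus_rate = 1e10            # bits the bus can transfer per second (assumed)

compute_time = data_bits / cpu_rate          # 0.008 s of actual processing
transfer_time = 2 * data_bits / bus_rate     # 1.6 s: fetch from memory, store back
print(f"compute: {compute_time:.3f} s, transfer: {transfer_time:.3f} s")
# Doubling the CPU speed saves 0.004 s; the 1.6 s on the bus stays untouched.
```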

Prof. John von Neumann
(Photo courtesy: www.economist.com)

Replacing the von Neumann architecture is the need of the hour. But what could the alternative be? In answering this question, we turn to Mother Nature. Our brain is an amazing computer that handles multiple tasks simultaneously. We eat, think, breathe, write and laugh at the same time, while hundreds of other processes go on inside the body, like the beating of the heart and the release of numerous hormones that keep the metabolism balanced. Even the brain of an ant is so complex that it processes an enormous amount of information every second. The information passing every minute through the neurons in our brain is comparable to the total information passing through the internet across the entire world in a day! While we say that our internet has a speed of 100 Mbps (megabits per second), the information passing through the neurons in our brain amounts to hundreds of trillions of bits per second. The energy consumed in processing one bit in our brain is only about 10 femtojoules, while the best supercomputers spend about 10 picojoules per bit – a thousand times more than our brain. That is why our brain is the best supercomputer that ever existed: the human brain is ceaselessly busy with information exchange and processing, and yet it consumes just 20 watts (while a mixer or an iron consumes around 800 watts). If a computer had to do this job, it would require a gigawatt of power – the power delivered by a whole nuclear power plant! If we knew how our brain does such complex multi-tasking with such minimal energy, we might have a solution to the von Neumann bottleneck.
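The arithmetic behind these energy figures is easy to check; the snippet below simply works out the ratios quoted above:

```python
# Energy-per-bit comparison quoted in the text.
brain_energy_per_bit = 10e-15     # 10 femtojoules
computer_energy_per_bit = 10e-12  # 10 picojoules
print(computer_energy_per_bit / brain_energy_per_bit)   # 1000.0, i.e. 1000x

# At the brain's ~20 W power budget and 10 fJ per bit:
bits_per_second = 20 / brain_energy_per_bit
print(f"{bits_per_second:.0e} bits/s")   # ~2e+15, trillions of bits every second
```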

But that is not an easy task. Our brain is a complex network of billions of interconnected neurons, and every neuron is connected to more than ten thousand others through synapses, or synaptic terminals. Some years ago, IBM tried to construct a tiny electronic brain by imitating this large, complex network of neurons. To simulate even a small section of the visual cortex of a cat’s brain, they had to deploy a Blue Gene supercomputer with 147,456 processors and 144 terabytes of memory! So the challenge of constructing a bio-inspired computer network is enormous. Replacing the von Neumann architecture of computers with a brain-like architecture means that the current integrated circuits (ICs) must give way to neuron-like ICs.

An artistic impression of brain-inspired computing in action. Photo courtesy: www.analyticsinsight.net

In the biological brain, information from the different sensory organs – ears, eyes, tongue, nose and skin – passes through separate channels of neurons in the form of voltage pulses. These voltage spikes are called action potentials. The neurons in our brain are bathed in a nutrient-rich fluid containing several kinds of ions, such as calcium, sodium, potassium and chloride, and there are such ions inside the neurons as well. The imbalance between the ions inside and outside a neuron maintains a potential of about -70 mV across its membrane. When incoming action potentials push this potential above about -55 mV, the neuron ‘wakes up’, or gets activated, and passes the information on to the next neuron. This sequence of activation happens in a fraction of a second. Sometimes just one action potential is not sufficient; a series of action potentials is needed to trigger the neuron. This is called the “integrate and fire” model of the neuron. And when such triggering happens repeatedly, the ‘synaptic weight’ of that particular neural circuit strengthens, and it becomes a memory in the brain. The more often a neural channel is activated, the easier the passage of information through it becomes. That is why we have to repeat a poem several times to learn it by heart.
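A toy version of this “integrate and fire” behaviour can be written in a few lines of Python. The resting potential and threshold follow the text; the pulse size and leak rate are assumptions chosen only for illustration:

```python
# A toy "integrate and fire" neuron: incoming pulses accumulate until a threshold.
resting = -70.0      # resting membrane potential, in mV (from the text)
threshold = -55.0    # firing threshold, in mV (from the text)
pulse = 4.0          # depolarization per incoming action potential (assumed)
leak = 0.5           # how strongly the membrane relaxes back to rest (assumed)

v = resting
for step in range(10):
    v += pulse                         # an action potential arrives
    v -= leak * (v - resting) * 0.1    # the membrane slowly leaks back toward rest
    if v >= threshold:
        print(f"step {step}: neuron fires!")
        v = resting                    # after firing, the neuron resets
    else:
        print(f"step {step}: v = {v:.1f} mV (integrating)")
```

Running this, the neuron stays silent for the first few pulses and fires only after several have accumulated – exactly the point made above that a single action potential is often not enough.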

Neurons act as both memory and logic units in our brain, which is why latency and processing energy are minimized. By mimicking how information passes through biological neurons, an entire model of electronic circuitry has been developed, called Artificial Neural Networks (ANNs). But although ANNs are now a well-developed technique for efficient computing, we still have to run them on von Neumann hardware, which, as said before, takes an enormous amount of memory, computational power and time. The way out is to build the hardware itself like an electronic brain, out of transistors and capacitors that behave like neurons. The search for such semiconductor devices has given rise to a new research field called neuromorphic engineering, in which scientists investigate various materials and their combinations to construct components that exhibit the behaviour of neurons.
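To see why a single unit can be both memory and logic at once, consider this sketch of one artificial neuron; the weights and bias are made-up values that happen to implement a simple AND gate:

```python
# One artificial neuron: the stored weights ARE its memory,
# and the weighted-sum-plus-threshold IS its logic, in the same place.
def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0     # fire, or stay silent

# Illustrative weights that make this neuron behave as an AND gate.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))   # only 1,1 fires
```

There is no separate memory to fetch from: changing the weights changes what the neuron “knows”, and the computation happens right where the weights live.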

Loihi, the neuromorphic chip developed by Intel. It is built from about 2 billion transistors that implement its electronic neurons.
Photo courtesy: Intel

How can we replicate biological neurons? If the conductivity of a material increases when we apply consecutive voltage pulses, it can mimic a neuron. There are certain materials whose resistance decreases when a voltage pulse is applied: the more pulses we apply, the lower the resistance becomes. Such materials retain a memory of the previous voltage pulses and are called resistive memory materials. Every voltage pulse we apply resembles an action potential reaching a neuron, and it ‘triggers’ a capacitor or transistor made of such a material, just as an action potential triggers a neuron. In this way, we can create interconnected capacitor structures that act as a miniature neural network, and that is the essence of neuromorphic engineering. Resistors that carry a memory of the previous voltage pulses that passed through them are called memory-resistors, or memristors. Using an interconnected network of memristors as neurons, a large network of e-synapses can be developed. Intel’s Loihi chip, pictured above, implements some 130,000 such artificial neurons and around 130 million synapses on a single chip.
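A toy model of such a memristive synapse might look like the following; the starting resistance, the lower limit and the update rule are all assumptions for illustration:

```python
# A toy memristor: each voltage pulse lowers its resistance a little,
# so the device "remembers" how often it has been stimulated.
resistance = 10_000.0    # initial resistance in ohms (assumed)
r_min = 100.0            # lowest resistance the device can reach (assumed)
decay = 0.8              # fraction of the remaining range kept per pulse (assumed)

for pulse in range(1, 6):
    resistance = r_min + (resistance - r_min) * decay   # potentiation step
    print(f"after pulse {pulse}: {resistance:.0f} ohms")
# The more pulses, the lower the resistance: the electronic analogue of a
# synapse whose 'weight' strengthens with repeated activation.
```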

The question is how these technological developments will affect our lives in the future. We already use Artificial Intelligence (AI) every day in several ways: when we search for information on Google, look for a nearby restaurant on our mobile phones, or talk to Alexa, we are using AI. Together with neural networks, neuromorphic engineering will create a very powerful base for future AI applications. As technology experts observe, future computers based on neuromorphic engineering will have the ability to learn from experience and to make cognitive decisions accordingly. Future intelligent systems will thus be capable of adapting themselves like the human brain, at a much lower energy budget than current computers. This will have a tremendous impact on future technologies: in managing very large quantities of data, in driverless cars, in satellite operations, in medical diagnostics, in improving our standard of living, in videos, games, animation, music and other entertainment, and of course in communication and social media. Just as Gordon Moore predicted 57 years ago that integrated circuits would enable “personal computers in the future”, the realization of neuromorphic engineering will enable us to have personal intelligent systems with decision-making power – more lifelike than Sophia, the first humanoid robot to receive citizenship!

The author is associate professor at the Department of Physics, Indian Institute of Space Science and Technology, Trivandrum, Kerala. He can be reached at kbjinesh@iist.ac.in.
