Neuromorphic computing

At a recent AI forum event in Christchurch, one of the presenters was Simon Brown, a physics professor from the University of Canterbury. Simon specialises in nanotechnology and has created a chip that may be capable of fast, low-power AI computation. I caught up with Simon after the event to find out more.

The chip is created using a machine consisting of a series of vacuum chambers. The process starts with a metal (in this case tin) in vapour form in the first chamber. As the vapour moves through the various chambers, the particles are filtered mechanically and electrically until they are just the right size (averaging 8.5 nanometres in diameter), and they are then sprayed onto a blank chip. This continues until about 65% of the surface of the chip is covered with these tiny droplets.

This is just enough coverage to leave the film on the verge of becoming conductive. The metal droplets on the chip are close enough to each other that an electrical charge in one will induce charges in nearby droplets. Simon describes these junctions as analogous to the synapses that connect neurons in the brain. The strength of the connection between two droplets is a function of the distance between them. The first chips had just two external connections into this nanoscale circuit. Interestingly, when a voltage was applied to one of the connections, the resulting waveform on the other had properties similar to those seen in biological neurons.
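To make that distance dependence concrete: coupling across a nanoscale gap like this is typically dominated by quantum tunnelling, whose conductance falls off exponentially with the width of the gap. The snippet below is just an illustrative model of that relationship; the constants g0 and beta_per_nm are made-up values for illustration, not measurements from Simon's chip.

```python
import numpy as np

def tunnel_conductance(gap_nm, g0=1.0, beta_per_nm=10.0):
    """Illustrative tunnelling model: conductance decays exponentially
    with the gap between two droplets. g0 and beta_per_nm are
    hypothetical constants, not measured values from the chip."""
    return g0 * np.exp(-beta_per_nm * np.asarray(gap_nm))

# a droplet pair 0.5 nm apart couples far more strongly than one 1.5 nm apart
print(tunnel_conductance([0.5, 1.5]))
```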

An important piece of research was showing that the chip is stable, i.e. that its behaviour doesn't change over time. That was demonstrated, so what Simon has effectively created is a tiny neural network with many connections, in a random configuration, on a chip. One feature that distinguishes it from the artificial neural networks used for deep learning is that the strength of the connections between the neurons (the weights) cannot be set by external controls. Instead, the weights are updated by the atomic-scale physical processes that take place on the chip. So while the chips will never be as flexible as artificial neural networks implemented in software, it turns out that these "unsupervised learning" processes have been studied by computer scientists for a long time and have been shown to be very efficient at some kinds of pattern recognition. The question is whether there are applications that could leverage the "unsupervised" processing that this chip does very quickly and at low power.
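For a flavour of what "unsupervised learning" means in the software world, here is one classic rule: Oja's variant of Hebbian learning, which nudges a weight vector towards the dominant pattern (the first principal component) of its inputs with no labels and no external control of the weights. This is a textbook point of comparison, not a model of the chip's atomic-scale physics.

```python
import numpy as np

rng = np.random.default_rng(42)
# toy input stream: 2-D points stretched along the first axis
data = rng.normal(size=(5000, 2)) * np.array([2.0, 0.5])

w = rng.normal(size=2)           # weight vector, updated without any labels
eta = 0.005                      # learning rate
for x in data:
    y = w @ x                    # neuron output
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian growth plus decay
print(w / np.linalg.norm(w))     # settles near [±1, 0], the dominant direction
```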

The main candidate application is called reservoir computing. Reservoir computing uses a fixed, random network of neurons, just like the one created by Professor Brown, to transform a signal. A single, trainable layer of neurons (implemented in software) on top of this is then used to classify the signal. A Chicago-based team has already achieved this using a chip made of memristors.
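To show the shape of the idea in software, here is a minimal reservoir computing (echo state network) sketch: a fixed random network transforms an input signal, and only a linear readout is trained on top. The reservoir is simulated with NumPy and the memory task is made up for illustration; it stands in for the physical network on the chip and is not the team's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                      # reservoir size
W = rng.normal(0, 1, (N, N))                 # fixed random recurrent weights
W *= 0.9 / max(abs(np.linalg.eigvals(W)))    # scale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, N)             # fixed random input weights

def run_reservoir(u):
    """Drive the fixed random network with input sequence u, collect states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in * u_t)
        states.append(x.copy())
    return np.array(states)

# toy memory task: recover u(t-2) from the reservoir state at time t
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
y = np.roll(u, 2)

# only the linear readout is trained, here with ridge regression
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
pred = X @ W_out
print("train correlation:", np.corrcoef(pred, y)[0, 1])
```

The key design point is that W and W_in are never trained; all the learning happens in the single solve for W_out, which is why random physical hardware can in principle play the role of the reservoir.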

A standard implementation of reservoir computing would have access to each of the neurons in the random network. With just two connections into the network, this chip does not have that access. When we met, the team had just created a chip with 10 connections into the network.
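Continuing the sketch above (and reusing its variables), the limited-access situation can be mimicked by letting the readout see only a handful of the internal state variables, as a chip with 10 external connections would. The choice of taps here is hypothetical.

```python
# hypothetical constraint: the readout sees only k of the N internal states,
# mimicking a chip with k external connections rather than full access
k = 10
taps = rng.choice(N, size=k, replace=False)
X_k = X[:, taps]
W_out_k = np.linalg.solve(X_k.T @ X_k + ridge * np.eye(k), X_k.T @ y)
print("10-tap correlation:", np.corrcoef(X_k @ W_out_k, y)[0, 1])
```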

Their focus now is on proving that they can implement reservoir computing, or some variant of it, on this chip. If they can, there is real potential to commercialise the technology. The larger opportunity would be finding a way to use it to implement deep learning.
