Auckland University Robotics Lab

I recently had the chance to catch up with Professor Bruce MacDonald, who chairs the Auckland University Robotics Research Group. Although we had never met before, Bruce and I have a connection: we share the same PhD supervisor, John Andreae.

Bruce took me through some of the robotics projects that he and his team have been working on. The most high-profile project is a kiwifruit picking robot that has been a joint venture with Robotics Plus, Plant and Food Research and Waikato University. This multi-armed robot sits atop an autonomous vehicle that can navigate beneath the vines. Machine vision systems identify the fruit and obstacles and calculate where they are relative to the robot arm, which is then guided to the fruit. A hand gently grasps the fruit and removes it from the vine using a downward twisting movement. The fruit then rolls down a tube.

Kiwifruit picking robot

The work has been split between the groups with the Auckland University team focused on the machine vision and vehicle navigation, Waikato on the control electronics and software, and Robotics Plus on the hardware. The team estimates that the fruit picking robot will be ready to be used in production in a couple of years. The current plan is to use it to provide a fruit picking service for growers. This way their customers don’t need to worry about robot repairs and maintenance and the venture can build a recurring revenue base. They are already talking to growers in New Zealand and the USA.

Along with Plant and Food Research, the group is also researching whether the same platform can be used to pollinate kiwifruit flowers. Bee populations are declining and are expensive to maintain, so this may provide a cost-effective alternative.

The group has just received funding of $17m to improve worker performance in orchards and vineyards. The idea is to use machine vision to understand what expert pruners do and translate that into a training tool for people learning to prune, and eventually into an automated pruning robot.

Bruce’s earlier work included the use of robotics in healthcare. This included investigating whether robots could help people take their medication correctly and whether robots could provide companionship to those with dementia who are unable to keep a pet.

Therapeutic robot

I asked Bruce whether Auckland University taught deep learning at an undergraduate level. He said that they don’t, but it is widely used by postgraduate students, who just pick it up.

Bruce is excited by the potential of reinforcement learning. We discussed whether there is the possibility of using our supervisor’s goal seeking PURR-PUSS system with modern reinforcement learning. I think there is a lot of opportunity to leverage some of this type of early AI work.

At the end of the meeting Bruce showed me around the robotics lab at the new engineering school. It was an engineer’s dream – with various robot arms, heads, bodies, hands and rigs all over the place. I think Bruce enjoys what he does.

Robotics lab

Neuromorphic computing

At a recent AI forum event in Christchurch, one of the presenters was Simon Brown, a physics professor from the University of Canterbury. Simon specialises in nanotechnology and has created a chip that may be capable of low power, fast AI computations. I caught up with Simon after the event to find out more.

The chip is created using a machine consisting of a number of vacuum chambers. It starts with a metal (in this case tin) in vapour form in one vacuum chamber. As it moves through the various chambers, the vapour particles are filtered, mechanically and electrically, until they are just the right size (averaging 8.5 nanometres in diameter) and are sprayed onto a blank chip. This is done until about 65% of the surface of the chip is covered with these tiny droplets.

This is just enough coverage to be almost conductive. The metal droplets on the chip are close enough to each other that an electrical charge in one will induce charges in nearby droplets. Simon describes these as being analogous to the synapses in the brain which connect neurons. The strength of the connection between two droplets is a function of the distance between them. The first chips that were created had two external connections into this nano-scale circuit. Interestingly, when a voltage was applied to one of the connections, the resulting waveform on the other connection had properties similar to those seen in biological neurons.

An important piece of research was showing that this chip is stable, i.e. the performance doesn’t change over time. That was proven, and so what Simon has been able to create is effectively a tiny neural network with many connections on a chip that has a random configuration. One feature that is unlike the artificial neural networks used for deep learning is that the strength of the connections between the neurons (the weights) cannot be changed using external controls. Instead the weights are updated through the atomic-scale physical processes that take place on the chip. So while the chips will never be as flexible as artificial neural networks implemented in software, it turns out that these “unsupervised learning” processes have been studied by computer scientists for a long time and have been shown to be very efficient at some kinds of pattern recognition. The question is whether there are applications that could leverage the “unsupervised” processing that this chip does very quickly and at low power.

The main candidate application is reservoir computing. Reservoir computing uses a fixed, random network of neurons, just like the one created by Professor Brown, to transform a signal. A single, trainable layer of neurons (implemented in software) on top of this is then used to classify the signal. A Chicago-based team has achieved this using a chip made of memristors.
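To make the idea concrete, here is a minimal reservoir computing (echo state network) sketch in Python. It is purely illustrative: the random weight matrices stand in for the chip’s fixed nanoparticle network, the sine-wave prediction task is made up, and only the linear readout is trained.

```python
# Minimal echo state network sketch: a fixed random reservoir plus a trained linear readout.
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 200

# Fixed, random (never trained) weights -- analogous to the chip's random connections.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # keep the spectral radius below 1 for stability

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect its states."""
    x = np.zeros(n_reservoir)
    states = []
    for u_t in u:
        x = np.tanh(W_in @ np.atleast_1d(u_t) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: predict the next sample of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))
target = np.roll(u, -1)
X = run_reservoir(u)

# Train only the readout, here with ridge regression (the single trainable layer).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ target)
prediction = X @ W_out
```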

A standard implementation of reservoir computing would have access to each of the neurons in the random network. With just two connections into the network, this chip does not have that access. When we met, the team had just created a chip with 10 connections into the network.

Their focus now is trying to prove that they can implement reservoir computing or some variant on this chip. If they can do this then there is real potential to commercialise this technology. The larger opportunity is if they could find a way to use this technology to implement deep learning.

NZ Merino using artificial intelligence to monitor sheep wellbeing

At the last Christchurch AI meetup, I met up with Ian Harris, who told me about the work he had done with neXtgen Agri for The New Zealand Merino Company (NZM) as part of Sensing Wellbeing, a collaborative Sustainable Farming Fund project. This work involved analysing data collected from accelerometers attached to the jaws of sheep to try to identify their behaviour.

A sheep with an activity monitor. NZM were very particular about ensuring the sheep were treated ethically during this data collection.

The data

Like any machine learning project, the critical part is the data. In this case the raw data came from the tri-axial accelerometers, sampled at 30 Hz. This meant that for each of the three channels there were 300 samples over a 10 second period. This data was collected from 15 sheep over a period of 6 days in conjunction with Massey University.

An example of the data from the 3 channels over one 10-second period

Six of the sheep were filmed during that time and their behaviour was categorised into 12 different activities. An initial review of the data showed that there was only good data for 5 of the 12 activities, and so the focus was on those:

  1. sitting
  2. standing
  3. ruminating while sitting
  4. ruminating while standing
  5. grazing

For those of you (like me) who are not intricately familiar with the lives of sheep, ruminating is the process of regurgitating, re-chewing and re-swallowing food.

Random forest approach

31 different metrics, such as energy and maximum, were calculated from the raw data. The initial approach Ian took was to use a random forest algorithm with these metrics, or features, as inputs. With this approach the model correctly classified 81% of the activities. This replicated an approach taken by a South African team who got similar results, which helped validate the overall set-up.

Stratified Sheep RandomForestClassifier Confusion Matrix
This confusion matrix shows that it was difficult to separate sitting and standing with a high degree of accuracy.
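For readers who want a feel for this kind of pipeline, here is a hedged sketch in Python using scikit-learn. Ian’s actual 31 metrics aren’t listed in this post, so only a few illustrative features are computed, and the random arrays below stand in for the real 10-second, 3-channel, 30 Hz accelerometer windows and their activity labels.

```python
# Feature-extraction plus random forest sketch (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
windows = rng.normal(size=(500, 300, 3))   # placeholder for real 10 s x 3-channel windows
labels = rng.integers(0, 5, size=500)      # placeholder for the 5 activity classes

def window_features(windows):
    feats = []
    for w in windows:                      # w has shape (300, 3)
        row = []
        for ch in w.T:                     # one accelerometer axis at a time
            row += [ch.mean(), ch.max(), ch.min(), np.sum(ch ** 2)]   # "energy" and basic stats
        feats.append(row)
    return np.array(feats)

X_train, X_test, y_train, y_test = train_test_split(
    window_features(windows), labels, stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```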

Deep learning approach

Ian is an experienced Java developer and had taught himself Python and deep learning. For this problem he was using Tensorflow and set up a relatively simple 3-layer network that used the raw data as input, rather than the calculated features used with the random forest approach. His most successful model had a binary output to detect whether (or should that be wether?) the sheep was grazing or not. This model had 93% accuracy.
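The exact architecture isn’t described beyond being a relatively simple 3-layer network, so the following Keras sketch only illustrates the general shape of such a model: raw 300 x 3 windows in, a binary grazing/not-grazing output. The layer sizes are assumptions.

```python
# A small dense network on raw accelerometer windows (illustrative sketch).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(300, 3)),    # 10 s of 3-channel data at 30 Hz
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary output: grazing or not grazing
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_windows, train_labels, epochs=20, validation_split=0.2)  # hypothetical data
```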

Clustering

Only six of the fifteen sheep had their behaviour recorded, so Ian had a lot of unlabelled data. In an effort to leverage this, he used an unsupervised clustering algorithm to try to generate more data from which to learn. This did improve the detection rates, but only for the models that had relatively low accuracy to begin with.
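The post doesn’t say which clustering algorithm was used, but one common pattern for this kind of semi-supervised step looks roughly like the sketch below, where clusters formed over all the data inherit the majority label of their labelled members. Everything here, including the placeholder data, is illustrative.

```python
# Pseudo-labelling via clustering (one possible approach, not necessarily Ian's).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X_all = rng.normal(size=(1000, 12))        # placeholder feature vectors for all 15 sheep
y_all = rng.integers(0, 5, size=1000)      # placeholder activity labels
has_label = rng.random(1000) < 0.4         # mask: windows from the six filmed sheep

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_all)

pseudo = np.full(len(X_all), -1)           # -1 means "no pseudo-label assigned"
for c in range(5):
    members = clusters == c
    labelled = members & has_label
    if labelled.any():
        vals, counts = np.unique(y_all[labelled], return_counts=True)
        pseudo[members & ~has_label] = vals[np.argmax(counts)]   # propagate the majority label

# Training data becomes the truly labelled windows plus the pseudo-labelled ones.
X_train = np.vstack([X_all[has_label], X_all[pseudo >= 0]])
y_train = np.concatenate([y_all[has_label], pseudo[pseudo >= 0]])
```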

Sheep wellbeing

You could imagine that a connected device which accurately detects whether a sheep is grazing may be useful if it could alert a farmer when a sheep has stopped grazing for a period of time, so he or she could check on it. It’s been quite a few decades since I lived on a sheep farm, so I’m not able to make any sort of informed commentary on that. Even if this were useful, there are some engineering challenges in designing a device that is cheap enough to make it economic.

That said, I would like to congratulate NZM, and neXtgen Agri for undertaking this project. Whether or not it leads to a commercial product, they will have learned a lot about the capabilities of the AI technologies used along with the data requirements and the time and investment needed. And in Ian, New Zealand has another engineer with AI experience which I’m sure will be put to good use.

The impact and opportunity of AI in NZ

I’ve just read the AI forum report analysing the impact and opportunity of artificial intelligence within New Zealand. This was released last week. At 108 pages it’s a substantial read. You can see the full report here.

AI forum report

The timing of this report is very good. There is a lot of news about AI and a growing awareness of it. But at the same time, I believe there is a lack of understanding of what AI is capable of and how organisations can take advantage of the recent advances.

I think the first level of misunderstanding is that people overestimate what the technology can do. This is driven by science fiction and a misinformed media, and fuelled by marketers who want their company and products to be seen to be using AI. AI is nowhere near human-level intelligence and doesn’t understand concepts like a human does (see my post on the limits of deep learning). That may change, but major breakthroughs are needed and it’s not clear when or if those will occur (see predictions from AI pioneer Rodney Brooks for more on this).

Although AI does not have human level intelligence, there are a host of applications for the technology. I think the second level of misunderstanding is around how difficult and expensive it is to take advantage of this AI technology. The assumption is that it’s expensive and you need a team of “rocket scientists”. From what I’ve seen studying deep learning and talking to NZ companies that are using AI, the technology is very accessible and the investment required is relatively small.

The report is level-headed: it’s not predicting massive job losses. I’m not going to comment further on the predictions on the economic impact. They’ll be wrong – because, to quote Niels Bohr – predicting is very difficult, especially about the future.

In my opinion the report did not place enough emphasis on the importance of deep learning. The recent rise of this technology has driven the resurgence of AI in recent years. Their history of AI missed the single most important event: the AlexNet neural network winning the ImageNet competition. This brought deep learning to the attention of the world’s AI researchers and triggered a tsunami of research and development. I would go so far as to suggest that the majority of the focus on AI should be on deep learning.


The key recommendation of the report is that NZ needs to adopt an AI strategy. I agree. Of the 6 themes they suggested, I think the key ones are:

  1. Increasing the understanding of AI capability. This should involve educating the decision makers at the board and executive level about the opportunities to leverage AI technology and the investment required. The outcome of this should be more organisations deciding to invest in AI.
  2. Growing the capability. NZ needs more AI practitioners. While we can attract immigrants with these skills, we also need to educate more people. I was encouraged to see the report advocating the use of online learning. I agree that NZQA should find a way to recognise these courses but think we should go further. Organisations should be incentivised to train existing staff using these courses (particularly if they have a project identified) and young people should be subsidised to study AI either online or undergrad/postgrad at universities.

I am less worried about the risks. I think it would be good to have AI that was biased, opaque, unethical and broke copyright law. At least then we would be using the technology and we could address those concerns as they came up. I am also not worried about the existential threat of AI. First, I think human-level intelligence may be a long time away. Second, I’m somewhat fatalistic – I can’t see how you could stop those breakthroughs from happening. We need to make sure that humans come along for the ride.

From my perspective the authors have done a very good job with this report. I encourage you to take the time to read it. I encourage the government to adopt its recommendations.

Lincoln Agritech: using machine vision for estimating crop yield

I recently had the opportunity to visit Lincoln Agritech, where I met with CEO Peter Barrowclough, Chief Scientist Ian Woodhead and their machine vision team of Jaco Fourie, Chris Bateman and Jeffrey Hsiao. Lincoln Agritech is an independent R&D provider to the private sector and government, employing 50 scientists, engineers and software developers. It is 100% owned by Lincoln University, but distinct from the university’s research and commercialisation office.

Lincoln Agritech have taken a different approach to developing AI capability. Rather than hiring deep learning experts, they have invested in upskilling existing staff by allowing Jaco, Chris and Jeffrey to take a Udacity course in machine vision using deep learning. The investment is in their staff’s time, and having three of them take the course together means that they can learn from each other.

The core projects they are working on involve estimating the yield of grape and apple crops based on photos and microwave images. The business proposition is to provide information for better planning for the owners of the crops, both in-field and in-market. Operators of these vineyards and orchards can get a pretty good overall crop estimate based on historical performance and weather information. However, they can’t easily get plant-by-plant estimates. To do this they need an experienced person to walk the fields and make a judgement call. A machine vision based approach can be more efficient with better accuracy.

The team elected to tackle the problem initially using photos. They had to take the images carefully at a prescribed distance from the crop, using HDR (this is where you combine light, medium and dark images to bring out the detail in the shadowy canopy). Like most machine learning tasks the biggest problem was getting a tagged data set. The tagging involved drawing polygons around the fruit in the images, including fruit partially occluded by leaves. There was a lot of work trying to train people to do this properly. Inevitably at some stage there were guys with PhDs drawing shapes, such is the glamour of data science. This problem is similar to that faced by Orbica who built a model to draw polygons around buildings and rivers from aerial photography.

In this image labels of a fixed size are added to an image to tell the model where the grape bunches are.
This image shows the result of a trained network automatically finding the areas in the image where the grape bunches are.

They used a convolutional neural network to tackle this problem. Rather than train a network from scratch and come up with their own architecture, they used the ImageNet-winning Inception architecture and adapted that. This network was already trained to extract the features from an image that are required to classify 1,000 different classes of images. This technique is called transfer learning. The model works well, with 90% accuracy (on their validation data set).
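As a rough illustration of transfer learning with an ImageNet-trained Inception backbone in Keras: the details of Lincoln Agritech’s actual classification head, input size and training regime aren’t given, so those parts of the sketch below are assumptions (including the patch-level fruit-vs-background task).

```python
# Transfer learning sketch: reuse an ImageNet-trained Inception backbone, train a new head.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                         input_shape=(299, 299, 3))
base.trainable = False                                # keep the ImageNet feature extractor fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # assumed task: grape bunch vs background patch
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```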

However, part of the challenge here is that the images do not show all of the fruit that is on the plant. The only way to get the “ground truth” is to have someone go under the canopy and count all the fruit by hand. This is where the microwave technology comes into play.

The company is recognised internationally for its microwave technology in other projects. The way it works is that a microwave transmitter emits microwaves and then detects the reflections. The microwaves will travel through leaves, but will be reflected by the water content in reasonably mature fruit.

The machine vision team is working to create a model that can use the microwave image and the photo together to get superior performance. This is a harder problem because this type of sensor fusion is less common than regular image processing.
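One common way to fuse two image modalities is to give each its own convolutional branch and concatenate the features before a shared head. The Keras sketch below is an assumed architecture for illustration only, not the team’s actual model; the input sizes and the regression target are placeholders.

```python
# Two-branch sensor fusion sketch: separate conv branches for photo and microwave images.
import tensorflow as tf

def branch(shape, name):
    # One small convolutional feature extractor per modality.
    inp = tf.keras.Input(shape=shape, name=name)
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inp)
    x = tf.keras.layers.MaxPooling2D()(x)
    x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    return inp, x

photo_in, photo_feat = branch((256, 256, 3), "photo")      # RGB patch (assumed size)
micro_in, micro_feat = branch((256, 256, 1), "microwave")  # microwave image patch (assumed)

merged = tf.keras.layers.Concatenate()([photo_feat, micro_feat])
merged = tf.keras.layers.Dense(64, activation="relu")(merged)
output = tf.keras.layers.Dense(1)(merged)                  # e.g. a yield/count regression target
model = tf.keras.Model(inputs=[photo_in, micro_in], outputs=output)
model.compile(optimizer="adam", loss="mse")
```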

The team is using the Tensorflow and Keras platforms on high-end machines with Nvidia Titan GPUs. There were a few raised eyebrows from the company when the team asked for what essentially looked like high-end gaming machines.

I applaud Lincoln Agritech for investing in their deep learning capability. The experience gained from these first projects will make it easier to apply the technology in each subsequent project. The fact that they have three people working on this provides redundancy and the ability to learn from each other. This is a model that other New Zealand organisations should consider, particularly if they’re having problems finding data scientists. Applying the latest AI technologies to agriculture seems like a real opportunity for New Zealand.

Cacophony: Using deep learning to identify pests

This is the first of a series of posts I intend to write on organisations that are using artificial intelligence in New Zealand. I am closer to this organisation than most because it was started by my brother, Grant.

Cacophony is a non-profit organisation started by Grant when he observed that the volume of bird song increased when he did some trapping around his section in Akaroa. His original idea was simply to build a device to measure the volume of bird song in order to measure the impact of trapping. Upon examining trapping technology, he came to the conclusion that there was an opportunity to significantly improve the effectiveness of trapping by applying modern technology. So he set up Cacophony to develop this technology and make it available via open source. This happened a little before the New Zealand government established its goal of being predator free by 2050. He managed to get some funding and has a small team of engineers working to create what I refer to as an autonomous killing machine. What could possibly go wrong?

Because most of the predators are nocturnal, the team have chosen to use thermal cameras. At the time of writing they have about 12 cameras set up in various locations that record when motion is detected. Grant has been reviewing the video and tagging the predators he can identify. This has created the data set that has been used to train the model to automatically detect predators.

They hired an intern, Matthew Aitchison, to build a classifier over the summer and he’s made great progress. I’ve spent a bit of time with Matthew, discussing what he is doing. Matthew completed Stanford’s CS231n computer vision course, which I’m also working my way through.

He does a reasonable amount of pre-processing: removing the background, splitting the video into 3-second segments and detecting the movement of the pixels, so the model can use this information. One of his initial models was a 3-layer convolutional neural network with long short-term memory (LSTM). This is still a work in progress and I expect Matthew will shortly be writing a description of his final model, along with releasing his code and data.
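As a hedged illustration of that kind of architecture, the Keras sketch below applies a small convolutional network to each frame of a clip and feeds the per-frame features to an LSTM. The frame size, frame rate and class count are assumptions; Matthew’s actual model may well differ.

```python
# Per-frame CNN features fed to an LSTM over a short thermal clip (illustrative sketch).
import tensorflow as tf

frames, height, width = 27, 64, 64            # assumed: ~9 fps over a 3-second clip, small crops

cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
])

model = tf.keras.Sequential([
    tf.keras.layers.TimeDistributed(cnn, input_shape=(frames, height, width, 1)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(5, activation="softmax"),   # assumed set of predator/non-predator classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```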

However, after just a few weeks he had made great progress. You can see an example of the model correctly classifying predators below, with the thermal image on the left and, on the right, the image with the background removed, a bounding box around the animal, the instantaneous classification at the bottom and the cumulative classification at the top.

 

A version of this model is now being used to help Grant with his tagging function, making his job easier and providing more data, faster.

The next thing is to work out how to kill these predators. They’re developing a tracking system; you can see a prototype working below.

From my perspective it feels like they are making fantastic progress and it won’t be too long before they can have a prototype that can start killing predators. If you ask Grant he thinks we can be predator free well before the government’s goal of 2050.

One final point on this, from a New Zealand AI point of view, is how accessible the technologies driving the artificial intelligence renaissance are. Technologies such as deep learning can be learnt from free and low-cost courses such as CS231n. Those doing so not only have a lot of fun, but open up a world of opportunity.

My PhD thesis

I did my PhD in the 1990s in artificial intelligence. I focused on artificial neural networks, in particular the Sparse Distributed Memory (SDM) and the Cerebellar Model Articulation Controller (CMAC), investigating their capabilities and capacities. This included their ability to store and recognise sequences. I also looked at how to combine the CMAC with a robot learning system called PURR-PUSS. In a simple control problem I was able to show how CMAC could learn from PURR-PUSS, providing smoother control.
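For readers unfamiliar with the CMAC, it is essentially a tile-coding function approximator: several overlapping, offset grids of weights whose active cells are summed to produce an output and nudged towards a target to learn. The minimal one-dimensional sketch below is only an illustration of the idea, not the implementation from the thesis; all the parameters are arbitrary.

```python
# Minimal 1-D CMAC (tile coding) sketch, for illustration only.
import numpy as np

class CMAC:
    def __init__(self, n_tilings=8, n_tiles=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.n_tiles, self.lo, self.hi, self.lr = n_tilings, n_tiles, lo, hi, lr
        self.w = np.zeros((n_tilings, n_tiles + 1))     # one weight table per tiling

    def _active_tiles(self, x):
        # Each tiling covers the input range with a slightly offset grid of tiles.
        scaled = (x - self.lo) / (self.hi - self.lo) * self.n_tiles
        return [(t, int(scaled + t / self.n_tilings)) for t in range(self.n_tilings)]

    def predict(self, x):
        return sum(self.w[t, i] for t, i in self._active_tiles(x))

    def update(self, x, target):
        error = target - self.predict(x)
        for t, i in self._active_tiles(x):
            self.w[t, i] += self.lr * error / self.n_tilings

# Toy usage: learn a smooth function from random samples.
cmac = CMAC()
for x in np.random.rand(5000):
    cmac.update(x, np.sin(2 * np.pi * x))
```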

My thesis has sat on my bookshelf for the past ~30 years and I thought that, even though it’s old, I should make it a little more accessible. I had a backup on floppy disks and although I was able to get the data off the disks, I failed to find a copy of Norton Backup that could read the 30-year-old files. So I resorted to scanning the book and using OCR to convert it into text.

You can download a PDF of my thesis titled Investigations into the capabilities of the SDM and combining CMAC with PURR-PUSS. The diagrams are the scans from the dot matrix printout I did all those years ago.

I completed my PhD at the University of Canterbury, supervised by John Andreae, who continues to contribute to the field in his 90s. My examiners were Ian Witten and an American academic whose name I can’t remember.

After creating a digital copy of my thesis I discovered that the university also has a copy here. This goes to show that just because you’ve got a PhD, it doesn’t mean you’re that bright.