Auckland University Robotics Lab

I recently had the chance to catch up with Professor Bruce MacDonald, who chairs the Auckland University Robotics Research Group. Although we had never met before, Bruce and I have a connection: we had the same PhD supervisor, John Andreae.

Bruce took me through some of the robotics projects that he and his team have been working on. The most high profile project is a kiwifruit picking robot that has been a joint venture with Robotics Plus, Plant and Food Research, and Waikato University. This multi-armed robot sits atop an autonomous vehicle that can navigate beneath the vines. Machine vision systems identify the fruit and obstacles and calculate where they are relative to the robot arm, which is then guided to the fruit. A hand gently grasps the fruit and removes it from the vine using a downward twisting movement. The fruit then rolls down a tube.

Kiwifruit picking robot
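
To make the sequence concrete, here's how I picture the picking loop in code. This is purely my own illustrative sketch; every name in it is hypothetical and it has nothing to do with the team's actual software.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]  # x, y, z relative to the robot arm

@dataclass
class Scene:
    fruit: List[Point]
    obstacles: List[Point]

def detect(frame) -> Scene:
    """Stub for the machine-vision step: locate fruit and obstacles in a camera frame."""
    return Scene(fruit=[(0.4, -0.1, 1.2)], obstacles=[(0.6, 0.3, 1.0)])

def pick_cycle(frame) -> None:
    scene = detect(frame)
    for target in scene.fruit:
        # Guide the arm to the fruit, steering around known obstacles.
        print(f"move to {target}, avoiding {len(scene.obstacles)} obstacles")
        # Grasp gently, twist downward to free the fruit, then let it
        # roll away down the collection tube.
        print("grasp -> twist down -> release into tube")

pick_cycle(frame=None)
```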

The work has been split among the groups, with the Auckland University team focused on the machine vision and vehicle navigation, Waikato on the control electronics and software, and Robotics Plus on the hardware. The team estimates that the fruit picking robot will be ready for production use in a couple of years. The current plan is to use it to provide a fruit picking service for growers. This way their customers don't need to worry about robot repairs and maintenance, and the venture can build a recurring revenue base. They are already talking to growers in New Zealand and the USA.

Along with Plant and Food Research, the group is also researching whether the same platform can be used to pollinate the kiwifruit flowers. Bee populations are declining and hives are expensive to maintain, so this may provide a cost-effective alternative.

The group has just received $17m in funding to improve worker performance in orchards and vineyards. The idea is to use machine vision to understand what expert pruners do, then translate that into a training tool for people learning to prune and, eventually, into an automated pruning robot.

Bruce’s earlier work included the use of robotics in healthcare, such as investigating whether robots could help people take their medication correctly, and whether robots could provide companionship to those with dementia who are unable to keep a pet.

Therapeutic robot

I asked Bruce whether Auckland University teaches deep learning at an undergraduate level. He said they don’t, but it is widely used by postgrad students, who just pick it up.

Bruce is excited by the potential of reinforcement learning. We discussed the possibility of combining our supervisor’s goal-seeking PURR-PUSS system with modern reinforcement learning. I think there is a lot of opportunity to leverage this type of early AI work.

At the end of the meeting Bruce showed me around the robotics lab at the new engineering school. It was an engineer’s dream – with various robot arms, heads, bodies, hands and rigs all over the place. I think Bruce enjoys what he does.

Robotics lab

Autonomous weapons, running robots, OpenAI and more

Here are some highlights of the AI reading, watching and listening I’ve been doing over the past few weeks.

A couple of videos from the MIT AGI series. First, Richard Moyes, co-founder and managing director of Article 36, on autonomous weapons and the efforts he and others are making to reduce their impact.

The second is Ilya Sutskever, co-founder of OpenAI, on neural networks, deep reinforcement learning, meta-learning and self-play. He seems pretty convinced that it is a matter of when, not if, we will get machines with human-level intelligence.

Judea Pearl, who proposed that Bayesian networks could be used to reason probabilistically, laments that AI still can’t reason about cause and effect, and dismisses deep learning as simply curve fitting.

After reading a post about Mask R-CNN being used on the footage of Ronaldo’s recent bicycle kick goal, I took a look at the description of the code here.

Mask R-CNN: street scene and football footage
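
I haven’t dug into which implementation the post used, but if it’s the popular Matterport Mask_RCNN repo (an assumption on my part), running it over a frame looks roughly like this:

```python
import skimage.io
import mrcnn.model as modellib
from mrcnn.config import Config

# Inference settings for the pre-trained COCO weights (80 classes + background).
class InferenceConfig(Config):
    NAME = "coco_inference"
    NUM_CLASSES = 1 + 80
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(), model_dir="logs")
model.load_weights("mask_rcnn_coco.h5", by_name=True)  # COCO weights, downloaded separately

frame = skimage.io.imread("frame.jpg")  # hypothetical still from the footage
r = model.detect([frame], verbose=0)[0]
# r["rois"], r["masks"], r["class_ids"] and r["scores"] hold the per-player
# bounding boxes, pixel masks, class labels and confidences.
```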

An interview with Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team Google Brain, on core machine learning innovations from Google and future directions. This guy is a legend.

An interview with José Hernández-Orallo on the kinds of intelligence.

And of course the video of the running, jumping robot from Boston Dynamics. If you want to find out a little bit more about the company, I recommend the lecture from their CEO Marc Raibert below.

Cognitive modelling, self-aware robots, TensorFlow & adversarial attacks

This week I’ve been learning about cognitive modelling, self-aware robots and adversarial attacks in reinforcement learning, and starting to play with TensorFlow.

Cognitive Modelling

The latest MIT AGI video was released a few days ago. In it, Nate Derbinsky gives an overview of different types of cognitive architectures, including SPAUN, ACT-R, Sigma and Soar (his baby). This reminds me of old-school AI: symbolic processing. My supervisor’s PURR-PUSS would belong in this category. These are a lot less sexy than deep learning, but in many ways they are complementary, with applications in robotics, game playing and natural language processing.
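
For a feel of what symbolic processing means in practice, here’s a toy production system in Python. It’s a deliberately tiny illustration of my own of the match-fire loop that these architectures elaborate on, not a sketch of any particular one of them.

```python
# Working memory holds simple (attribute, value) facts.
working_memory = {("goal", "greet"), ("sees", "person")}

# Each rule: (name, set of conditions, fact to add when they all hold).
rules = [
    ("greet-when-person-seen",
     {("goal", "greet"), ("sees", "person")},
     ("action", "say-hello")),
    ("wave-after-hello",
     {("action", "say-hello")},
     ("action", "wave")),
]

# The match-fire loop: keep firing rules until nothing new is added.
fired = True
while fired:
    fired = False
    for name, conditions, result in rules:
        if conditions <= working_memory and result not in working_memory:
            working_memory.add(result)
            print(f"fired {name} -> added {result}")
            fired = True
```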

 

TWiML podcasts

SUM cognitive architecture

This week I listened to an interesting podcast with Raja Chatila on robot perception and discovery. In it, Raja talked about the necessity of robot self-awareness for true intelligence, and the ethics of intelligent autonomous systems. It’s interesting to see that the sort of architectures used for exploring artificial consciousness in robotics have a lot of overlap with the cognitive models described by Nate Derbinsky in the MIT AGI series.

I also had the chance to listen to Google Brainers Ian Goodfellow & Sandy Huang discussing adversarial attacks used against reinforcement learning. Adversarial attacks highlight some of the weaknesses of deep learning. When used for image classification, the image is just a series of numbers that has a set of mathematical operations performed on it to produce a classification. By subtly changing some of the numbers you can fool the classifier, even though to a human the image looks exactly the same. The example of a panda below was taken from a 2015 Google paper.

panda
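
The attack from that paper, the fast gradient sign method, is strikingly simple. Here’s a minimal sketch in TF 1.x-style code; the toy softmax classifier is just a stand-in of my own, not the network from the paper.

```python
import tensorflow as tf  # TF 1.x style graph code

# Stand-in classifier: a single softmax layer over flattened 28x28 images.
x = tf.placeholder(tf.float32, shape=[None, 784])
y = tf.placeholder(tf.float32, shape=[None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b

loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits)

# Fast gradient sign method: nudge every pixel by a tiny epsilon in
# whichever direction increases the loss for the true label.
epsilon = 0.007  # the value used for the panda example in the paper
grad = tf.gradients(loss, x)[0]
x_adv = x + epsilon * tf.sign(grad)
```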

In the podcast Ian and Sandy discuss how this can be used against a reinforcement learning agent that has been trained to play computer games. Even changing one pixel can significantly degrade the performance.

TensorFlow

I’m up to the part in my CS231n course where you start to train CNNs using TensorFlow or PyTorch. Despite reading a compelling argument for using PyTorch over TensorFlow on Quora, the people I’ve spoken to locally are using TensorFlow, so I’m going to go with that. I found this introduction useful.

I managed to get the software installed and run Hello World. Apparently there is more you can do with it…

TensorFlow hello world
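
For the record, the whole “program” is just a few lines of TF 1.x-style code:

```python
import tensorflow as tf

# Build a one-node graph, then run it in a session (the TF 1.x workflow).
hello = tf.constant("Hello, TensorFlow!")
with tf.Session() as sess:
    print(sess.run(hello))  # prints b'Hello, TensorFlow!'
```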

 

Robots opening doors, driverless cars and finding exoplanets

Here are some things I’ve been watching and listening to lately…

The latest video from Boston Dynamics is cool. They seem to be having a lot of fun.

I’m continuing to watch the MIT series on Artificial General Intelligence. They’re currently releasing one video a week. The latest is from Emilio Frazzoli on self-driving cars. I’ve been enjoying this series.

I’m also listening to the TWiML interview with Chris Shallue about using deep learning to hunt for exoplanets. Also pretty cool. I thought Chris’s accent might have been Kiwi – but nah, he’s an Aussie.

exoplanets