Autonomous weapons, running robots, OpenAI and more

Here are some highlights of the AI reading, watching and listening I’ve been doing over the past few weeks.

A couple of videos from the MIT AGI series. First, Richard Moyes, co-founder and managing director of Article 36, on autonomous weapons and the efforts he and others are making to reduce their impact.

The second is Ilya Sutskever, co-founder of OpenAI, on neural networks, deep reinforcement learning, meta-learning and self-play. He seems pretty convinced that it is a matter of when, not if, we will get machines with human-level intelligence.

Judea Pearl, who proposed that Bayesian networks could be used to reason probabilistically, laments that AI can’t compute cause and effect, and summarises deep learning as simply curve fitting.

After reading a post about Mask R-CNN being used on the footage of Ronaldo’s recent bicycle-kick goal, I took a look at the description of the code here.


An interview with Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team, Google Brain, on core machine learning innovations from Google and future directions. This guy is a legend.


An interview with Jose Hernandez-Orallo on the kinds of intelligence.

And of course the video of the running, jumping robot from Boston Dynamics. If you want to find out a little more about the company, I recommend the lecture below from their CEO, Marc Raibert.


The impact and opportunity of AI in NZ

I’ve just read the AI Forum report analysing the impact and opportunity of artificial intelligence within New Zealand, released last week. At 108 pages it’s a substantial read. You can see the full report here.


The timing of this report is very good. There is a lot of news about AI and a growing awareness of it. But at the same time, I believe there is a lack of understanding of what AI is capable of and how organisations can take advantage of the recent advances.

I think the first level of misunderstanding is that people overestimate what the technology can do. This is driven by science fiction and a misinformed media, and fuelled by marketers who want their company and products to be seen to be using AI. AI is nowhere near human-level intelligence and doesn’t understand concepts the way a human does (see my post on the limits of deep learning). That may change, but major breakthroughs are needed and it’s not clear when, or if, those will occur (see predictions from AI pioneer Rodney Brooks for more on this).

Although AI does not have human-level intelligence, there are a host of applications for the technology. I think the second level of misunderstanding is around how difficult and expensive it is to take advantage of it. The assumption is that it’s costly and you need a team of “rocket scientists”. From what I’ve seen studying deep learning and talking to NZ companies that are using AI, the technology is very accessible and the investment required is relatively small.

The report is level-headed: it’s not predicting massive job losses. I’m not going to comment further on the predictions of economic impact. They’ll be wrong, because, to quote Niels Bohr, prediction is very difficult, especially about the future.

In my opinion the report did not place enough emphasis on the importance of deep learning, the technology whose recent rise has driven the resurgence of AI. Its history of AI missed the single most important event: the AlexNet neural network winning the 2012 ImageNet competition. This brought deep learning to the attention of the world’s AI researchers and triggered a tsunami of research and development. I would go so far as to suggest that the majority of the focus on AI should be on deep learning.


The key recommendation of the report is that NZ needs to adopt an AI strategy. I agree. Of the six themes they suggested, I think the key ones are:

  1. Increasing the understanding of AI capability. This should involve educating the decision makers at the board and executive level about the opportunities to leverage AI technology and the investment required. The outcome of this should be more organisations deciding to invest in AI.
  2. Growing the capability. NZ needs more AI practitioners. While we can attract immigrants with these skills, we also need to educate more people. I was encouraged to see the report advocating the use of online learning. I agree that NZQA should find a way to recognise these courses but think we should go further. Organisations should be incentivised to train existing staff using these courses (particularly if they have a project identified) and young people should be subsidised to study AI either online or undergrad/postgrad at universities.

I am less worried about the risks. I would rather have AI that is biased, opaque, unethical and breaking copyright law than no AI at all: at least then we would be using the technology, and we could address those concerns as they came up. I am also not worried about the existential threat of AI. First, I think human-level intelligence may be a long time away. Second, I’m somewhat fatalistic: I can’t see how you could stop those breakthroughs from happening. We need to make sure that humans come along for the ride.

From my perspective the authors have done a very good job with this report. I encourage you to take the time to read it, and I encourage the government to adopt its recommendations.

Ohmio automation: self-driving buses

Last month I had lunch with Yaniv Gal, the artificial intelligence manager at Ohmio. Yaniv is an interesting character who grew up in Israel and has focused his career on computer vision and machine learning, in both academia and industry. Although a lot of his experience was in medical imaging, in New Zealand he had been working in the fresh produce industry as research programme manager at Compac Sorting Equipment, which uses machine vision to automatically sort fruit. At Ohmio he’s built one of the largest AI teams in New Zealand.

Yaniv explained that Ohmio emerged from electronic sign provider HMI Technologies. HMI has been around since 2002 and has thousands of signs operating throughout NZ. To me it seemed unusual that an electronic sign company would spawn a self-driving vehicle company. There were a couple of core reasons:

  1. They had some experience using machine vision with traffic: cameras attached to their signs could count traffic in a much more cost-effective and reliable way than digging up the road to install inductive-loop sensors.
  2. They had experience installing infrastructure alongside roads. This type of infrastructure could be used to aid a self-driving vehicle along a fixed path.

This is a crucial differentiator for Ohmio. They are not trying to compete with the myriad of large companies developing level 5 autonomous vehicles: those that can drive without human input in all conditions. This is a difficult problem. Sacha Arnoud, the director of engineering at Waymo (owned by Alphabet, Google’s parent), recently said they are about 90% of the way there, but they still have 90% to go. Instead, Ohmio are going for the more tractable problem of building a vehicle platform that can navigate along a fixed path. They call this level 4+ autonomy. While this doesn’t have the same broad opportunity as level 5, they believe it is something they can build, and that there is still a large market opportunity.

Ohmio LIFT

Their first customer is Christchurch Airport. This will allow them to prove the concept and refine the technologies. The economics are obvious: with no driver, it simply ends up cheaper. It’s not just about the money, though; Yaniv is confident it will be safer and, with electric vehicles, greener. Since our meeting, Ohmio have announced the sale of 150 shuttles to Korean company Southwest Coast Enterprise City Development Co Ltd.

For fixed-path navigation the path can be learnt, and if necessary additional infrastructure can be added along the path to aid the vehicle in localising and navigating. Most of this is done on the vehicle using a variety of sensors. To establish exactly where it is, readings from odometry, GPS and LIDAR are combined to get a more accurate location, with more redundancy than is possible with a single sensor. Combining data from multiple sensors like this is called sensor fusion. Company R&D coordinator Mahmood Hikmet described this in his recent AI Day presentation.
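
To give a flavour of what sensor fusion means in practice, here is a minimal sketch of a one-dimensional Kalman filter that blends noisy wheel odometry with noisy GPS fixes. This is purely illustrative: Ohmio’s actual stack (which also fuses LIDAR) has not been published, and every number below is made up.

```python
import numpy as np

def kalman_step(x, p, odo_delta, q, gps_pos, r):
    """One predict/update cycle of a 1D Kalman filter.

    x, p      -- current position estimate and its variance
    odo_delta -- distance moved according to wheel odometry
    q         -- variance (noise) of the odometry reading
    gps_pos   -- position reported by GPS
    r         -- variance (noise) of the GPS reading
    """
    # Predict: move the estimate by the odometry delta; uncertainty grows.
    x_pred = x + odo_delta
    p_pred = p + q

    # Update: blend in the GPS fix, weighted by relative confidence.
    k = p_pred / (p_pred + r)            # Kalman gain
    x_new = x_pred + k * (gps_pos - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

# Simulate a shuttle moving 0.5 m per step with noisy sensors.
true_pos, x, p = 0.0, 0.0, 1.0
for _ in range(20):
    true_pos += 0.5
    odo = 0.5 + np.random.normal(0, 0.05)       # noisy odometry
    gps = true_pos + np.random.normal(0, 0.3)   # noisy GPS fix
    x, p = kalman_step(x, p, odo, q=0.05**2, gps_pos=gps, r=0.3**2)
print(f"true {true_pos:.2f} m, fused estimate {x:.2f} m (variance {p:.4f})")
```

The fused estimate ends up more accurate than either sensor alone, which is the whole point of sensor fusion: each sensor covers for the others’ weaknesses.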

Machine vision is used primarily for vehicle navigation and collision avoidance. Here collision avoidance means detecting whether there is an object in the vehicle’s path, or whether an object may come into its path. Ohmio use a variety of machine vision techniques, including deep learning. Yaniv’s experience predates the recent rise in popularity of neural networks; he is aware of the disadvantages deep learning suffers compared to more “traditional” machine vision techniques, so he doesn’t feel that every machine vision problem needs the hammer that is a deep neural network.
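
As an illustration of that “right tool for the job” point, here is a classical, non-neural technique (background subtraction) flagging anything moving into a camera’s view. This is my example, not Ohmio’s pipeline, and the file name is hypothetical.

```python
import cv2

# Classical background subtraction: no neural network required.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

cap = cv2.VideoCapture("forward_camera.mp4")   # hypothetical footage
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)             # foreground = moving pixels
    # Treat any sufficiently large foreground blob as a potential obstacle.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if any(cv2.contourArea(c) > 5000 for c in contours):
        print("potential obstacle in path")
cap.release()
```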

Yaniv confessed that it will be a nervous moment the first time the system drives passengers with no safety driver. However, he is confident it is going to be safer than a vehicle with a driver. We talked about how both of us would be even more nervous using the self-flying taxi that Kitty Hawk is testing in Christchurch, even though it should be safer than a ground-based vehicle because there are fewer objects to crash into. We compared this to the fear people felt when elevators first became self-operating, without an elevator operator. It seems like a silly fear now. Maybe the next generation will laugh at our anxiety about self-driving vehicles.

After lunch I told my Uber driver about our conversation. He expressed concern about whether the tech should be developed and the loss of jobs that will come with it. This is an understandable concern given his career (although his master’s in aeronautical engineering should see him right). There are too many people working on this type of technology now to stop it. If the genie is not out of the bottle, he has his head and shoulders out. The economics are too strong, and the world should be a better place with this technology. It’s nice to see a NZ company contributing.

Lincoln Agritech: using machine vision for estimating crop yield

I recently had the opportunity to visit Lincoln Agritech, where I met with CEO Peter Barrowclough, Chief Scientist Ian Woodhead and their machine vision team of Jaco Fourie, Chris Bateman and Jeffrey Hsiao. Lincoln Agritech is an independent R&D provider to the private sector and government, employing 50 scientists, engineers and software developers. It is 100% owned by Lincoln University, but is distinct from the university’s research and commercialisation office.

Lincoln Agritech have taken a different approach to developing AI capability. Rather than hiring deep learning experts, they have invested in upskilling existing staff by having Jaco, Chris and Jeffrey take a Udacity course in machine vision using deep learning. The investment is in the time of their staff, and having three of them take the course together means they can learn from each other.

The core projects they are working on involve estimating the yield of grape and apple crops from photos and microwave images. The business proposition is to provide information for better planning for the owners of the crops, both in-field and in-market. Operators of these vineyards and orchards can get a pretty good overall crop estimate from historical performance and weather information. However, they can’t easily get plant-by-plant estimates; for that they need an experienced person to walk the fields and make a judgement call. A machine vision based approach can be more efficient, with better accuracy.

The team elected to tackle the problem initially using photos. They had to take the images carefully, at a prescribed distance from the crop, using HDR (combining light, medium and dark exposures to bring out the detail in the shadowy canopy). Like most machine learning tasks, the biggest problem was getting a tagged data set. The tagging involved drawing polygons around the fruit in the images, including fruit partially occluded by leaves, and there was a lot of work in training people to do this properly. Inevitably at some stage there were guys with PhDs drawing shapes; such is the glamour of data science. This problem is similar to that faced by Orbica, who built a model to draw polygons around buildings and rivers from aerial photography.
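
For a sense of what the HDR step involves, here is a sketch of exposure fusion using OpenCV’s Mertens algorithm: combining dark, medium and light exposures of the same canopy so detail survives in both shadows and highlights. The file names are hypothetical, and this is not necessarily how Lincoln Agritech’s pipeline does it.

```python
import cv2
import numpy as np

# Load three bracketed exposures of the same scene (hypothetical files).
exposures = [cv2.imread(f) for f in ("vine_dark.jpg", "vine_mid.jpg", "vine_light.jpg")]

# Mertens exposure fusion: blends exposures without needing camera response curves.
merge = cv2.createMergeMertens()
fused = merge.process(exposures)   # float image in [0, 1]

# Convert back to 8-bit for saving and labelling.
cv2.imwrite("vine_fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```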

In the first image, fixed-size labels are added to tell the model where the grape bunches are.
The second image shows the result of a trained network automatically finding the areas where the grape bunches are.

They used a convolutional neural network to tackle this problem. Rather than training a network from scratch with their own architecture, they adapted the ImageNet-winning Inception architecture. This network was already trained to extract the features required to classify images into 1,000 different classes; reusing it this way is called transfer learning. The model works well, with 90% accuracy on their validation data set.
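
Here is a minimal sketch of that transfer-learning recipe in Keras. The post names Inception, so InceptionV3 stands in here; the class count, head layers and input size are illustrative assumptions, not the team’s actual configuration.

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

# Load Inception pretrained on ImageNet, minus its 1000-class top layer.
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False   # freeze the pretrained feature extractor

# Add a small new head for the task (e.g. grape bunch vs background patch).
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # train only the new head
```

Because the pretrained layers are frozen, only the small new head is trained, which is why transfer learning needs far less labelled data and compute than training from scratch.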

However, part of the challenge here is that the images do not show all of the fruit that is on the plant. The only way to get the “ground truth” is to have someone go under the canopy and count all the fruit by hand. This is where the microwave technology comes into play.

The company is recognised internationally for its microwave technology in other projects. The way it works is that a transmitter emits microwaves and then detects the reflections. The microwaves travel through leaves, but are reflected by the water content of reasonably mature fruit.

The machine vision team is working to create a model that can use the microwave image and the photo together to get superior performance. This is a harder problem, because this type of sensor fusion is less common than regular image processing.
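
One common way to fuse two imaging modalities is a two-branch network whose feature maps are concatenated before the final prediction. The sketch below is purely illustrative (the Lincoln Agritech architecture hasn’t been published), and the input shapes, an RGB photo plus a single-channel microwave image, are assumptions.

```python
from tensorflow.keras import layers, models

def branch(input_shape, name):
    """A small convolutional feature extractor for one modality."""
    inp = layers.Input(shape=input_shape, name=name)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    return inp, x

photo_in, photo_feat = branch((128, 128, 3), "photo")       # RGB photo
mw_in, mw_feat = branch((128, 128, 1), "microwave")         # microwave image

# Fuse the two feature vectors and predict (e.g. fruit present / absent).
fused = layers.Concatenate()([photo_feat, mw_feat])
hidden = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid")(hidden)

model = models.Model(inputs=[photo_in, mw_in], outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```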

The team is using the TensorFlow and Keras platforms on high-end machines with Nvidia Titan GPUs. There were a few raised eyebrows around the company when the team asked for what essentially looked like high-end gaming machines.

I applaud Lincoln Agritech for investing in their deep learning capability. The experience gained from these first projects will make it easier to apply the technology in each subsequent one. Having three people working on this provides redundancy and the ability to learn from each other. This is a model that other New Zealand organisations should consider, particularly if they’re having problems finding data scientists. Applying the latest AI technologies to agriculture seems like a real opportunity for New Zealand.