Autonomous weapons, running robots, OpenAI and more

Here are some highlights of the AI reading, watching and listening I’ve been doing over the past few weeks.

A couple of videos from the MIT AGI series. First, Richard Moyes, co-founder and Managing Director of Article 36, on autonomous weapons and the efforts he and others are making to reduce their impact.

The second is Ilya Sutskever, co-founder of OpenAI, on neural networks, deep reinforcement learning, meta-learning and self-play. He seems pretty convinced that it is a matter of when, not if, we will get machines with human-level intelligence.

Judea Pearl, who proposed that Bayesian networks could be used to reason probabilistically, laments that AI can’t compute cause and effect and summarises deep learning as simply curve fitting.

After reading a post about Mask R-CNN being used on the footage of Ronaldo’s recent bicycle-kick goal, I took a look at the description of the code here.

Mask R-CNN applied to a street scene and to football footage

An interview with Jeff Dean, Google Senior Fellow and head of the company’s deep learning research team, Google Brain, on core machine learning innovations from Google and future directions. This guy is a legend.

An interview with José Hernández-Orallo on the kinds of intelligence.

And of course there’s the video of the running, jumping robot from Boston Dynamics. If you want to find out a little more about the company, I recommend the lecture below from their CEO, Marc Raibert.

The impact and opportunity of AI in NZ

I’ve just read the AI Forum report, released last week, analysing the impact and opportunity of artificial intelligence within New Zealand. At 108 pages it’s a substantial read. You can see the full report here.

The AI Forum report

The timing of this report is very good. There is a lot of news about AI and a growing awareness of it. But at the same time, I believe there is a lack of understanding of what AI is capable of and of how organisations can take advantage of the recent advances.

I think the first level of misunderstanding is that people overestimate what the technology can do. This is driven by science fiction and a misinformed media, and fuelled by marketers who want their company and products to be seen to be using AI. AI is nowhere near human-level intelligence and doesn’t understand concepts the way a human does (see my post on the limits of deep learning). That may change, but major breakthroughs are needed and it’s not clear when, or if, those will occur (see predictions from AI pioneer Rodney Brooks for more on this).

Although AI does not have human-level intelligence, there are a host of applications for the technology. I think the second level of misunderstanding is around how difficult and expensive it is to take advantage of it. The assumption is that it’s expensive and you need a team of “rocket scientists”. From what I’ve seen studying deep learning and talking to NZ companies that are using AI, the technology is very accessible and the investment required is relatively small.

The report is level-headed: it’s not predicting massive job losses. I’m not going to comment further on its predictions of economic impact. They’ll be wrong because, to quote Niels Bohr, prediction is very difficult, especially about the future.

In my opinion the report did not place enough emphasis on the importance of deep learning. The rise of this technology has driven the resurgence of AI in recent years. The report’s history of AI missed the single most important event: the AlexNet neural network winning the 2012 ImageNet competition. This brought deep learning to the attention of the world’s AI researchers and triggered a tsunami of research and development. I would go so far as to suggest that the majority of the focus on AI should be on deep learning.


The key recommendation of the report is that NZ needs to adopt an AI strategy. I agree. Of the six themes they suggested, I think the key ones are:

  1. Increasing the understanding of AI capability. This should involve educating the decision makers at the board and executive level about the opportunities to leverage AI technology and the investment required. The outcome of this should be more organisations deciding to invest in AI.
  2. Growing the capability. NZ needs more AI practitioners. While we can attract immigrants with these skills, we also need to educate more people. I was encouraged to see the report advocating the use of online learning. I agree that NZQA should find a way to recognise these courses but think we should go further. Organisations should be incentivised to train existing staff using these courses (particularly if they have a project identified) and young people should be subsidised to study AI either online or undergrad/postgrad at universities.

I am less worried about the risks. I would rather have AI that was biased, opaque, unethical or breaking copyright law than no AI at all: at least then we would be using the technology and could address those concerns as they came up. I am also not worried about the existential threat of AI. First, I think human-level intelligence may be a long time away. Second, I’m somewhat fatalistic: I can’t see how you could stop those breakthroughs from happening. We need to make sure that humans come along for the ride.

From my perspective the authors have done a very good job with this report. I encourage you to take the time to read it, and I encourage the government to adopt its recommendations.

AI Day 2018: My take

I noticed the AI Day videos were released a few days ago and I’d like to share my thoughts on the day. First I’d like to congratulate the organisers, Ben Reid and Justin Flitter, for putting this event together. Michelle Dickinson did a great job as master of ceremonies, keeping the day flowing. This type of event is just what NZ needs to help people understand how different organisations are using AI, so they can make more informed decisions about how they could use this ever-evolving set of technologies.

The AI Day 2018 videos

I’d characterise the event as having presentations from small and large organisations, a couple of panels, a politician and a good dose of networking. The highlights for me were the presentations from the small companies, because they were the ones who had taken various AI technologies and applied them in ways that gave them an advantage. In my mind these are the stories most likely to inspire other NZ companies. They included:

  • R&D Coordinator for Ohmio, Mahmood Hikmet, describing the self-driving shuttle they are building and how their AI team is building a sensor fusion model. This combines data from GPS, lidar and odometry sensors to estimate the position of the shuttle, which is then used for navigation (a toy sketch of this kind of fusion appears after this list).
  • Kurt Janssen, the founder of Orbica described how they’re using machine vision with aerial and drone footage to automate various GIS tasks.
  • Grant Ryan (or Bro as I call him) describing how Cacophony are using machine vision with thermal cameras to automatically identify pests, and how they might then kill them.
  • Sean Lyons had the most entertaining presentation, describing how Netsafe are using bots to waste scammers’ time in a project they call Re:scam. They’re using IBM Watson for sentiment analysis. It’s been hugely successful, wasting over five years of scammers’ time across a million emails.
  • Mark Sagar and team are doing some of the most interesting AI work globally at Soul Machines. Unfortunately, his presentation had a few technical glitches, but it was nice to see the latest version of BabyX, complete with arms. Mark talked a little bit about how they are using neural networks for perception and control. I’d love to find out more details.
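
To make the sensor fusion idea a little more concrete, here’s a minimal sketch of the classic approach: a one-dimensional Kalman filter that blends dead-reckoned position from odometry with noisy GPS fixes. To be clear, this is my own illustration, not Ohmio’s code; all the noise values and variable names are made up, and a real shuttle would fuse lidar too and track a full 2D or 3D state.

```python
import numpy as np

# Minimal 1-D Kalman filter: fuse odometry (motion prediction) with GPS fixes.
# All noise values below are invented for illustration.
Q = 0.01   # process noise: how much we distrust dead reckoning per step
R = 4.0    # measurement noise: variance of a GPS fix (metres^2)

def predict(x, P, velocity, dt):
    """Dead-reckon forward using an odometry-derived velocity."""
    x = x + velocity * dt
    P = P + Q              # uncertainty grows while we dead-reckon
    return x, P

def update(x, P, gps_position):
    """Blend in a GPS fix, weighted by relative uncertainty."""
    K = P / (P + R)        # Kalman gain: how much to trust the GPS
    x = x + K * (gps_position - x)
    P = (1 - K) * P        # fusing a measurement shrinks the uncertainty
    return x, P

# Simulate a shuttle moving at 1 m/s with a noisy GPS fix every second.
rng = np.random.default_rng(42)
x, P, true_position = 0.0, 1.0, 0.0
for t in range(10):
    true_position += 1.0
    x, P = predict(x, P, velocity=1.0, dt=1.0)
    x, P = update(x, P, true_position + rng.normal(0, 2.0))
    print(f"t={t+1}s  true={true_position:5.1f}  estimate={x:5.2f}  var={P:.3f}")
```

The fused estimate ends up better than either sensor alone: odometry smooths out the GPS noise, while GPS stops the odometry drift from accumulating.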

The other small company that presented was Centrality.ai. Founder Aaron McDonald spent most of the presentation explaining blockchain and how it can be used for contracts. I didn’t come away with any understanding of whether the company is using AI, or with any real comprehension of what it actually does.

The panels had a selection of interesting entrepreneurs and academics. However, I personally find the panel format a little too unstructured to get much useful information from. I may be an outlier here, though: Justin told me they got very good feedback about the panels in their post-conference surveys.

The other highlight of the conference for me was the networking during the breaks. Everyone you spoke to had some involvement in AI: entrepreneurs, practitioners, academics and investors. This was an added benefit to an already very stimulating day. I wasn’t able to attend the second day of workshops.

To Justin and Ben: Well done! I look forward to attending next year and hearing how a host of other NZ companies are using AI in interesting ways. For those that didn’t make it, check out the videos.

Cognitive modelling, self-aware robots, TensorFlow & adversarial attacks

This week I’ve been learning about cognitive modelling, self-aware robots and adversarial attacks in reinforcement learning, and starting to play with TensorFlow.

Cognitive Modelling

The latest MIT AGI video was released a few days ago. In it Nate Derbinsky gives an overview of different types of cognitive architectures, including SPAUN, ACT-R, Sigma and Soar (his baby). This reminds me of old-school AI: symbolic processing. My supervisor’s PURR-PUSS would belong in this category. These are a lot less sexy than deep learning, but in many ways they are complementary, with applications in robotics, game playing and natural language processing.

TWiML podcasts

This week I listened to an interesting podcast with Raja Chatila on robot perception and discovery. In it Raja talked about the necessity of robot self-awareness for true intelligence and the ethics of intelligent autonomous systems. It’s interesting to see that the sorts of architectures used for exploring artificial consciousness in robotics have a lot of overlap with the cognitive models described by Nate Derbinsky in the MIT AGI series.

I also had the chance to listen to Google Brainers Ian Goodfellow & Sandy Huang discussing adversarial attacks against reinforcement learning. Adversarial attacks highlight some of the weaknesses of deep learning. To an image classifier, an image is just a series of numbers that has a set of mathematical operations performed on it to produce a classification. By subtly changing some of those numbers you can fool the classifier, even though to a human the image looks exactly the same. The panda example below was taken from a 2015 Google paper.

The adversarial panda example from Goodfellow et al., 2015

In the podcast Ian and Sandy discuss how this can be used against a reinforcement learning agent that has been trained to play computer games. Even changing one pixel can significantly degrade the performance.
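
To make that concrete, here’s a minimal sketch of the fast gradient sign method (FGSM) from that 2015 paper, applied to a toy linear classifier rather than a real deep network. The weights and the “image” are random stand-ins, purely for illustration:

```python
import numpy as np

# Toy stand-in for a trained classifier: to the model, an image is just
# a vector of numbers, and classification is arithmetic on those numbers.
rng = np.random.default_rng(0)
w = rng.normal(size=784)           # hypothetical learned weights (28x28 image)
x = rng.uniform(0, 1, size=784)    # hypothetical input image, flattened

def predict(image):
    return int(image @ w > 0)      # class 1 if the score is positive

# Fast gradient sign method: push every pixel a small amount in the
# direction that most changes the score. For a linear model the gradient
# of the score with respect to the input is simply w.
epsilon = 0.1                      # small per-pixel perturbation
direction = -np.sign(w) if predict(x) == 1 else np.sign(w)
x_adv = np.clip(x + epsilon * direction, 0, 1)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adv))   # usually flips
```

Even though no single pixel moves much, the tiny changes all push the score the same way, so they add up and flip the classification.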

TensorFlow

I’m up to the part of my CS231n course where you start to train CNNs using TensorFlow or PyTorch. Despite reading a compelling argument on Quora for using PyTorch over TensorFlow, the people I’ve spoken to locally are using TensorFlow, so I’m going to go with that. I found this introduction useful.

I managed to get the software installed and run Hello World. Apparently there is more you can do with it…

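For anyone who hasn’t seen it, here’s roughly what that Hello World looks like, assuming the TensorFlow 1.x API that was current at the time:

```python
import tensorflow as tf   # TensorFlow 1.x, current when this was written

# Build a node in the computation graph, then run it in a session.
hello = tf.constant('Hello, TensorFlow!')
with tf.Session() as sess:
    print(sess.run(hello))   # b'Hello, TensorFlow!'
```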

Robots opening doors, driverless cars and finding exoplanets

Here are some things I’ve been watching and listening to lately…

The latest video from Boston Dynamics is cool. They seem to be having a lot of fun.

I’m continuing to watch the MIT series on Artificial General Intelligence. They’re currently releasing one video a week. The latest is from Emilio Frazzoli on self-driving cars. I’ve been enjoying this series.

I’m also listening to the TWiML interview with Chris Shallue about using deep learning to hunt for exoplanets. Also pretty cool. I thought Chris’s accent might have been Kiwi, but nah, he’s an Aussie.


The limits of deep learning

There are a couple of articles I’ve read recently that have gelled with my own thinking about the limits of deep learning. Deep learning simply refers to multi-layered neural networks, typically trained using back-propagation. These networks are very good at pattern recognition, and are behind most recent advances in artificial intelligence. However, despite the amazing things they are capable of, I think it’s important to realise that these networks don’t have any understanding of what they’re looking at or listening to.
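
For a concrete picture of what “multi-layered network trained with back-propagation” means, here’s a minimal sketch using the Keras API bundled with TensorFlow; the layer sizes and loss are arbitrary choices for illustration:

```python
import tensorflow as tf

# A small multi-layered ("deep") network: stacked layers of weighted sums
# followed by non-linearities. The sizes here are arbitrary.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation='relu', input_shape=(784,)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),   # e.g. 10 classes
])
model.compile(optimizer='sgd', loss='sparse_categorical_crossentropy')

# model.fit(images, labels) would then run back-propagation: prediction
# errors flow backwards through the layers, nudging the weights. That is
# pattern recognition, not understanding.
```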

The first article was by Douglas Hofstadter, a professor of cognitive science and author of Gödel, Escher, Bach. I read that book many years ago and remember getting a little lost. However, his recent article titled The Shallowness of Google Translate clearly demonstrates how the deep-learning-powered Google Translate successfully translates the words but often fails to translate the meaning. Douglas believes that one day machines may be able to do this, but they’ll need to be filled with ideas, images, memories and experiences, rather than sophisticated word-clustering algorithms.

The second article by Jason Pontin at Wired discusses the Downsides of Deep Learning:

  • They require lots of data to learn
  • They work poorly when confronted with examples outside of their training set
  • It’s difficult to explain why they do what they do
  • They don’t gain any innate knowledge or common sense.

Jason argues that for artificial intelligence to progress we need something beyond deep learning. Many others are saying the same sorts of things. I recommend watching MIT’s recent lectures on Artificial General Intelligence, which cover this as well.

Rodney Brooks’ AI predictions

There is a lot of hype around artificial intelligence, what the technology will bring and its impact on humanity. I thought I’d start my blogging by highlighting some more grounded predictions from someone who has a lot of experience with the practicalities of AI implementation: Rodney Brooks. Rodney is a robotics pioneer who co-founded iRobot, which brought us the Roomba robot vacuum cleaner. I had the pleasure of meeting Rodney when I did the global entrepreneurship program at MIT Sloan School of Management. I was a little star-struck…

Earlier this month Rodney made some dated predictions about self-driving cars (10 years before driverless taxis are in most big US cities), AI (30 years before we reach dog-level intelligence) and space travel (humans on Mars in 2036). Rodney calls himself a techno-realist. His experience has shown that turning ideas into reality at scale takes a lot longer than people think. Undoubtedly his predictions will be wrong, because that is the nature of predicting the future, but they are a useful perspective given the pace at which the field is advancing. The recent posts from the Google Brain team reviewing 2017 (part 1 and part 2) give a great view of how much progress was made in just the last year. Rodney’s assertion is that turning this progress into products is hard and will take longer than most people think.