Cognitive modelling, self-aware robots, TensorFlow & adversarial attacks


This week I’ve been learning about cognitive modelling, self-aware robots and adversarial attacks in reinforcement learning, and starting to play with TensorFlow.

Cognitive Modelling

The latest MIT AGI video was released a few days ago. In it, Nate Derbinsky gives an overview of different types of cognitive architectures, including SPAUN, ACT-R, Sigma and Soar (his baby). This reminds me of old-school AI: symbolic processing. My supervisor’s PURR-PUSS would belong in this category. These are a lot less sexy than deep learning, but in many ways they are complementary, with applications in robotics, game playing and natural language processing.


TWiML podcasts

SUM cognitive architecture

This week I listened to an interesting podcast with Raja Chatila on robot perception and discovery. In it, Raja talked about the necessity of robot self-awareness for true intelligence, and about the ethics of intelligent autonomous systems. It’s interesting to see that the sort of architectures used for exploring artificial consciousness in robotics have a lot of overlap with the cognitive models described by Nate Derbinsky in the MIT AGI series.

I also had the chance to listen to Google Brainers Ian Goodfellow and Sandy Huang discussing adversarial attacks against reinforcement learning. Adversarial attacks highlight some of the weaknesses of deep learning. When a network is used for image classification, the image is just a series of numbers that has a set of mathematical operations performed on it to produce a classification. By subtly changing some of the numbers you can fool the classifier, even though to a human the image looks exactly the same. The example of a panda below was taken from a 2015 Google paper.

panda

In the podcast Ian and Sandy discuss how this can be used against a reinforcement learning agent that has been trained to play computer games. Even changing one pixel can significantly degrade the performance.
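The core trick is easy to sketch. Below is a minimal, illustrative version of the fast-gradient-sign idea on a made-up linear classifier – the 100-dimensional "image", the numbers and the class label are all hypothetical, not the setup from the paper:

```python
import numpy as np

# Toy linear "classifier": score = w . x; positive score => class "panda".
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = w * 0.05              # a clean input the model classifies confidently

clean_score = w @ x       # positive, so "panda"

# FGSM-style perturbation: nudge every input dimension by epsilon in the
# direction that increases the loss. For a linear score the gradient of the
# score w.r.t. x is just w, so we step against sign(w) to push the score down.
epsilon = 0.1
x_adv = x - epsilon * np.sign(w)

adv_score = w @ x_adv

print(clean_score > 0)    # True  – clean input classified "panda"
print(adv_score > 0)      # False – tiny per-pixel change flips the label
```

The point is that the perturbation is bounded by epsilon in every dimension – far too small for a human to notice on a real image – yet its effect on the score accumulates across all the dimensions.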

Tensorflow

I’m up to the part in my CS231n course where you start to train CNNs using TensorFlow or PyTorch. Despite reading a compelling argument for using PyTorch over TensorFlow on Quora, the people I’ve spoken to locally are using TensorFlow – so I’m going to go with that. I found this introduction useful.

I managed to get the software installed and run Hello World. Apparently there is more you can do with it…
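For reference, "Hello World" is only a few lines. The snippet below is a sketch written to work under both the graph-style TF 1.x API of the time (build a graph, then run it in a session) and today’s eager TF 2.x:

```python
import tensorflow as tf

hello = tf.constant("Hello, TensorFlow!")

# TF 1.x builds a computation graph and needs a Session to evaluate it;
# in TF 2.x eager mode the constant's value is available directly.
if hasattr(tf, "Session"):            # TF 1.x
    with tf.Session() as sess:
        print(sess.run(hello))
else:                                 # TF 2.x
    print(hello.numpy())
```

The graph-then-session split feels odd at first, but it is what lets TensorFlow optimise and distribute the computation before anything runs.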

TensorFlow hello world


Orbica: Using machine vision in GIS

Last week I had the opportunity to sit down with Orbica CEO Kurt Janssen and data scientist Sagar Soni.

Kurt has worked in the Geographic Information Systems (GIS) industry for more than 14 years. Last year he started his own company, Orbica, which does GIS consulting for organisations in the public and private sectors. Orbica invests some of its consulting revenue into developing its own product. A major – and rewarding – investment has been hiring data scientist Sagar.

Sagar was taught machine learning during his master’s degree and had the opportunity to put it into practice: developing an earth-rock image classification system at Dharmsinh Desai University, and using deep learning algorithms like recurrent neural networks to solve medical entity detection problems at US healthcare solutions provider ezDI. Last year he immigrated to NZ, and he had just the skills and experience Orbica was looking for.

Orbica’s first product automatically identifies buildings and waterways from aerial photos. This labour-intensive job is traditionally done by geographers and cartographers, who draw polygons on maps identifying these features using digitising techniques. The first product identifies buildings in urban areas. The 15-million-pixel (4800×3200) photos have each pixel covering a 7.5×7.5cm square. Sagar has built a convolutional neural network that takes these photos and outputs the vectors representing the polygons where it believes the buildings are.

They have a good amount of training, test and validation data from Land Information New Zealand, consisting of the images and hand-drawn polygons. Because of the size of the images, Sagar has tiled them into 512×512 images. He built the model over a couple of months, with a little trial and error testing the various hyperparameters. The existing model has nine layers, with the standard 3×3 convolutions. He’s currently getting 90 per cent accuracy on the validation set.
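The tiling step is straightforward to sketch in NumPy. Note the zero-padding at the edges is my assumption – the conversation didn’t cover how Orbica handles tiles that overhang the image (overlapping tiles would be another common choice):

```python
import numpy as np

def tile_image(img, tile=512):
    """Split an image array into tile x tile patches, zero-padding the
    bottom and right edges so every patch is full size."""
    h, w = img.shape[:2]
    pad_h = (-h) % tile   # padding needed to reach a multiple of tile
    pad_w = (-w) % tile
    pad = ((0, pad_h), (0, pad_w)) + ((0, 0),) * (img.ndim - 2)
    padded = np.pad(img, pad)
    patches = []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            patches.append(padded[y:y + tile, x:x + tile])
    return patches

# A 4800x3200 aerial photo gives a 10x7 grid of 512x512 tiles, since
# 4800 = 9*512 + 192 and 3200 = 6*512 + 128 (the edge tiles are padded).
patches = tile_image(np.zeros((3200, 4800, 3), dtype=np.uint8))
print(len(patches))            # 70
print(patches[0].shape)        # (512, 512, 3)
```

Each 512×512 tile at 7.5cm per pixel covers about 38×38 metres of ground.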

Building outlines

RiverDetection_AI

The water classification is very similar, working with 96-million-pixel (12000×8000) images, but with coarser-resolution 30×30cm pixels. The output is the set of polygons representing the water in the aerial images, but the model also classifies the type of water body, e.g. a lake, lagoon, river, canal, etc.

The commercial benefits of these models are self-evident: Orbica can significantly improve the efficiency of producing this data, whether it does this for a client or sells it as a service to city and regional councils. These surveys are done regularly – to identify buildings that have been added or removed, or to track how waterways have changed.

WaterBodiesClassification

Another opportunity has come from the Beyond Conventions pitch competition in Essen, Germany, where Orbica won the Thyssenkrupp Drone Analytics Challenge and the People’s Choice Award. Orbica’s pitch was to use machine vision to analyse drone footage of construction sites and automatically generate a progress update on the construction project. This is a more complex problem given its three-dimensional nature. Thyssenkrupp has now resourced Orbica to put together a proof of concept, which Sagar is busy working on. Should this go well, Orbica will probably hire at least one other data scientist.

DroneImage_Output

Because the technology is developing quickly, Sagar keeps up to date with the latest developments in deep learning through Coursera and Udacity courses. He’s a fan of anything Andrew Ng produces.

To me, Orbica’s use of machine vision technology is an excellent case study in how New Zealand companies can use the latest advances in artificial intelligence. They have deep knowledge of their own vertical – in this case, GIS. They develop an awareness of what AI technologies are capable of in general and have a vision for how those technologies could be used in their own industry. Finally, they make an investment to develop that vision. In Orbica’s case, the investment was reasonably modest: hiring Sagar. A recurring theme I’m seeing here is hiring skilled immigrants. New Zealand’s image as a desirable place to live – coupled with interesting work – will hopefully make this a win-win for all involved.

For those that would like to hear more, Kurt is speaking at AI Day in Auckland next week.


Robots opening doors, driverless cars and finding exoplanets

Here are some things I’ve been watching and listening to lately…

The latest video from Boston Dynamics is cool. They seem to be having a lot of fun.

I’m continuing to watch the MIT series on Artificial General Intelligence. They’re currently releasing one video a week. The latest is from Emilio Frazzoli on self-driving cars. I’ve been enjoying this series.

I’m also listening to the TWiML interview with Chris Shallue about using deep learning to hunt for exoplanets. Also pretty cool. I thought Chris’s accent might have been Kiwi – but nah, he’s an Aussie.

exoplanets


Jade: developing AI capability for chatbots and predictive modelling

Jade logo

A couple of weeks ago I sat down with Eduard Liebenberger, head of digital at Jade, to find out a little about their AI capabilities and plans. Eduard is passionate about AI and the possibilities it brings to transform the way we communicate with businesses.

In Eduard’s words, Jade’s core focus is on freeing people from mundane, repetitive tasks so they can instead apply their creativity and expertise to more challenging ones – alongside the JADE development, database and integration technologies. Eduard and the team at Jade have been watching recent developments in AI and identifying which of these they can use to help their customers. Their first foray has been into conversational interfaces (chatbots). They’ve developed a number of showcases, including an insurance chatbot called TOBi, which shows how the technology can be used to make a claim, change contact details and so on. From there, they have started rolling out this technology to existing customers.

The chatbot uses natural language processing and sentiment analysis. It aims to make businesses’ interactions with their customers more efficient, by allowing them to communicate via conversations that don’t have to be in real time (unlike a phone call) and are more intuitive than a web form. Jade’s main advantage with their existing customers is that they have already done the tricky integration work with the back-end systems, and so can fairly quickly add a chatbot as an alternative to an existing interface. Jade’s focus on the digital experience means they invest heavily in making this a natural, human-like interaction. For non-Jade customers, the attraction is Jade’s ability to deliver a whole solution and not just the chatbot.


Another advantage Jade has is that through their existing customers they have access to a lot of data that can be used to power machine learning applications. One example Eduard talked about was a summer intern project with a NZ university to try to identify students at risk of dropping out. This was done using the data in the student record database, which is powered by Jade and contains several years of records. In just a few weeks the interns built a predictive model that was able to predict which students were likely to drop out with 90%+ accuracy. Ed is a big fan of rapid development for these types of proof-of-concept projects and doesn’t believe it should cost a fortune to get value from AI.
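To give a feel for the kind of model such a project might build, here is a toy logistic regression on entirely made-up student data – the features (attendance rate, grade average) and numbers are my own illustration, not anything from Jade’s actual system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up student records: columns = [attendance rate, grade average], in [0, 1].
n = 400
X = rng.uniform(0, 1, size=(n, 2))
# Synthetic ground truth: low attendance and low grades -> likely to drop out.
y = (3.5 - X @ np.array([4.0, 3.0]) + rng.normal(0, 0.4, n) > 0).astype(float)

# Plain logistic regression trained by batch gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predicted drop-out probability
    w -= 1.0 * (X.T @ (p - y) / n)         # gradient of the log loss
    b -= 1.0 * np.mean(p - y)

pred = (1 / (1 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(round(accuracy, 2))   # typically around 0.9 on this synthetic data
```

On clean tabular data like this, a simple model trained in minutes can plausibly hit the sort of accuracy quoted above – which is rather the point Ed makes about rapid, low-cost proofs of concept.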

Overall, I think it’s fair to say that Jade’s AI capability is nascent. However, it’s positive to see that they are looking to build capability, understandably with a focus on the business benefits to their customers. I’m keen to see how it develops.

For those that want to find out more, Eduard is delivering the keynote at Digital Disruption X 2018 in Sydney, and presenting at DX 2018 and the AI Day in Auckland, all later this month. He’s a busy man.