Thought Experiment logo

A quick word on the Thought Experiment logo. I had this logo designed using Freelancer, a great site for getting all sorts of things done. I've seen these guys present at investor meetings in Sydney and Auckland, and I previously used the service to get a design for my gates at home: for just $20 I got 18 designs, selected one, and my father-in-law built it with me as manual labour. I'm still pretty proud of those gates.

PURR-PUSS

Again for $20, I ran a competition, this time for a logo. My PhD supervisor, John Andreae, used a stylised cat to illustrate his PURR-PUSS AI system. I suggested in the competition notes that the designers combine this with the thought experiment of Schrödinger's cat. I received about 20 designs and ended up choosing the one on the top right. It's a lot better than anything I could have come up with myself.

Cacophony: Using deep learning to identify pests

This is the first of a series of posts I intend to write on organisations that are using artificial intelligence in New Zealand. I am closer to this organisation than most because it was started by my brother, Grant.

Cacophony is a non-profit organisation started by Grant after he observed that the volume of bird song increased when he did some trapping around his section in Akaroa. His original idea was simply to build a device to measure the volume of bird song, in order to measure the impact of trapping. On examining existing trapping technology, he concluded there was an opportunity to significantly improve its effectiveness by applying modern technology. So he set up Cacophony to develop that technology and make it available as open source. This happened a little before the New Zealand government established its goal of being predator free by 2050. He has secured some funding and has a small team of engineers working to create what I refer to as an autonomous killing machine. What could possibly go wrong?

Because most of the predators are nocturnal, the team have chosen to use thermal cameras. At the time of writing they have about 12 cameras set up in various locations, recording whenever motion is detected. Grant has been reviewing the video and tagging the predators he can identify. This has created the data set used to train a model to automatically detect predators.

They hired an intern, Matthew Aitchison, to build a classifier over the summer, and he's made great progress. I've spent a bit of time with Matthew discussing what he is doing. Matthew completed Stanford's CS231n computer vision course, which I'm also working my way through.

He does a reasonable amount of pre-processing: removing the background, splitting the video into 3-second segments, and detecting the movement of pixels so the model can use this information. One of his initial models was a three-layer convolutional neural network combined with long short-term memory (LSTM). This is still a work in progress, and I expect Matthew will shortly write a description of his final model, along with releasing his code and data.
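As a rough illustration of those pre-processing steps, here is a minimal sketch in NumPy on a synthetic stack of thermal frames. The frame rate, segment length, thresholds, and all function names are my own assumptions for the sake of the example, not Cacophony's actual pipeline.

```python
import numpy as np

FPS = 9               # assumed thermal-camera frame rate (not from the post)
SEGMENT_SECONDS = 3   # the post mentions 3-second segments

def remove_background(frames):
    """Estimate a static background as the per-pixel median and subtract it."""
    background = np.median(frames, axis=0)
    return frames - background

def split_into_segments(frames, fps=FPS, seconds=SEGMENT_SECONDS):
    """Split a (T, H, W) stack into fixed-length segments, dropping the tail."""
    seg_len = fps * seconds
    n_segments = frames.shape[0] // seg_len
    return frames[: n_segments * seg_len].reshape(
        n_segments, seg_len, *frames.shape[1:]
    )

def frame_motion(frames):
    """Per-frame motion: mean absolute difference between consecutive frames."""
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Synthetic clip: 60 frames of 8x8 "thermal" noise with a warm spot moving through.
rng = np.random.default_rng(0)
clip = rng.normal(20.0, 0.1, size=(60, 8, 8))
for t in range(60):
    clip[t, t % 8, (t // 8) % 8] += 5.0  # moving hot spot

foreground = remove_background(clip)
segments = split_into_segments(clip)
motion = frame_motion(clip)
print(segments.shape)  # → (2, 27, 8, 8) with the assumed 9 fps
```

With background removed, the moving hot spot dominates each frame, and the per-frame motion signal gives the model the pixel-movement information the post describes.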

However, after just a few weeks he had made great progress. You can see an example of the model correctly classifying predators below: the thermal image is on the left; on the right is the image with the background removed and a bounding box around the animal, with the instantaneous classification at the bottom and the cumulative classification at the top.


A version of this model is now being used to help Grant with his tagging, making his job easier and providing more data, faster.

The next thing is to work out how to kill these predators. They're developing a tracking system; you can see a prototype working below.

From my perspective it feels like they are making fantastic progress, and it won't be long before they have a prototype that can start killing predators. If you ask Grant, he thinks we can be predator free well before the government's goal of 2050.

One final point, from a New Zealand AI perspective, is how accessible the technologies driving the artificial-intelligence renaissance are. Technologies such as deep learning can be learnt from free and low-cost courses such as CS231n. Those doing so not only have a lot of fun, but open up a world of opportunity.

Rodney Brooks’ AI predictions

There is a lot of hype around artificial intelligence: what the technology will bring and its impact on humanity. I thought I'd start my blogging by highlighting some more grounded predictions from someone who has a lot of experience with the practicalities of AI implementation: Rodney Brooks. Rodney is a robotics pioneer who co-founded iRobot, which brought us the Roomba robot vacuum cleaner. I had the pleasure of meeting Rodney when I did a global entrepreneurship program at the MIT Sloan School of Management. I was a little star struck…

Earlier this month Rodney made some predictions with dates attached, covering self-driving cars (10 years before driverless taxis are in most big US cities), AI (30 years before we reach dog-level intelligence), and space travel (humans on Mars in 2036). Rodney calls himself a techno-realist: his experience has shown that turning ideas into reality at scale takes a lot longer than people think. Undoubtedly his predictions will be wrong, because that is the nature of predicting the future. However, they are a useful perspective given the pace at which the field is advancing. The recent posts from the Google Brain team reviewing 2017 (part 1 and part 2) give a great view of how much progress was made in just the last year. Rodney's assertion is that turning this progress into products is hard and will take longer than most people think.


My PhD thesis

I did my PhD in the 1990s in artificial intelligence. I focused on artificial neural networks, in particular the Sparse Distributed Memory (SDM) and the Cerebellar Model Articulation Controller (CMAC), investigating their capabilities and capacities, including their ability to store and recognise sequences. I also looked at how to combine the CMAC with a robot learning system called PURR-PUSS. In a simple control problem I was able to show how the CMAC could learn from PURR-PUSS, providing smoother control.
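For readers unfamiliar with the CMAC, here is a minimal one-dimensional sketch in NumPy (my own illustration, not code from the thesis). The input is quantised by several overlapping tilings, each tiling activates one weight, the output is the mean of the active weights, and training uses a simple delta rule; the tiling counts and learning rate here are arbitrary choices.

```python
import numpy as np

class CMAC:
    """Minimal 1-D CMAC over inputs in [0, 1): overlapping tilings, each
    contributing one active weight; output is the mean of active weights."""

    def __init__(self, n_tilings=8, n_tiles=16, lr=0.2):
        self.n_tilings = n_tilings
        self.n_tiles = n_tiles
        self.lr = lr
        # +1 column because tiling offsets can push an index past the last tile.
        self.weights = np.zeros((n_tilings, n_tiles + 1))

    def _active(self, x):
        # Each tiling is shifted by a fraction of a tile width, so nearby
        # inputs share most (but not all) active weights -- this gives the
        # CMAC its local generalisation.
        offsets = np.arange(self.n_tilings) / self.n_tilings
        return np.floor(x * self.n_tiles + offsets).astype(int)

    def predict(self, x):
        idx = self._active(x)
        return self.weights[np.arange(self.n_tilings), idx].mean()

    def train(self, x, target):
        # Delta rule: spread the prediction error over the active weights.
        idx = self._active(x)
        error = target - self.predict(x)
        self.weights[np.arange(self.n_tilings), idx] += self.lr * error

# Train the CMAC to approximate one period of a sine wave.
cmac = CMAC()
xs = np.linspace(0.0, 0.99, 20)
for _ in range(500):
    for x in xs:
        cmac.train(x, np.sin(2 * np.pi * x))

err = max(abs(cmac.predict(x) - np.sin(2 * np.pi * x)) for x in xs)
print(err)  # small after training
```

The appeal of the CMAC for control is that each update touches only a handful of weights, so learning is fast and local, at the cost of a table whose size grows with input resolution.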

My thesis has sat on my bookshelf for the past ~30 years, and I thought that even though it's old, I should make it a little more accessible. I had a backup on floppy disks, and although I was able to get the data off the disks, I failed to find a copy of Norton Backup that could read the 30-year-old files. So I resorted to scanning the book and using OCR to convert it into text.

You can download a PDF of my thesis titled Investigations into the capabilities of the SDM and combining CMAC with PURR-PUSS. The diagrams are the scans from the dot matrix printout I did all those years ago.

I completed my PhD at the University of Canterbury, supervised by John Andreae, who continues to contribute to the field in his 90s. My examiners were Ian Witten and an American academic whose name I can't remember.

After creating a digital copy of my thesis, I discovered that the university also has a copy here. This goes to show that just because you've got a PhD, it doesn't mean you're that bright.