Auckland University Robotics Lab

I recently had the chance to catch up with Professor Bruce MacDonald who chairs the Auckland University Robotics Research Group. Although we had never met before, Bruce and I have a connection, having the same PhD supervisor, John Andreae.

Bruce took me through some of the robotics projects that he and his team have been working on. The most high-profile project is a kiwifruit-picking robot that has been a joint venture with Robotics Plus, Plant and Food Research and Waikato University. This multi-armed robot sits atop an autonomous vehicle that can navigate beneath the vines. Machine vision systems identify the fruit and obstacles and calculate where they are relative to the robot arm, which is then guided to the fruit. A hand gently grasps the fruit and removes it from the vine using a downward twisting movement. The fruit then rolls down a tube.

Kiwifruit picking robot

The work has been split between the groups with the Auckland University team focused on the machine vision and vehicle navigation, Waikato on the control electronics and software, and Robotics Plus on the hardware. The team estimates that the fruit picking robot will be ready to be used in production in a couple of years. The current plan is to use it to provide a fruit picking service for growers. This way their customers don’t need to worry about robot repairs and maintenance and the venture can build a recurring revenue base. They are already talking to growers in New Zealand and the USA.

Along with Plant and Food Research, the group is also researching whether the same platform can be used to pollinate kiwifruit flowers. Bee populations are declining and hives are expensive to maintain, so this may provide a cost-effective alternative.

The group has just received funding of $17m to improve worker performance in orchards and vineyards. The idea is to use machine vision to understand what expert pruners do and translate that into a training tool for people learning to prune and for an automated robot.

Bruce’s earlier work included the use of robotics in healthcare. This included investigating if robots could help people take their medication correctly and the possibility of robots providing companionship to those with dementia who are unable to keep a pet.

Therapeutic robot

I asked Bruce whether Auckland University taught deep learning at an undergraduate level. He said that they don't, but it is widely used by postgraduate students. They just pick it up.

Bruce is excited by the potential of reinforcement learning. We discussed whether there is the possibility of using our supervisor’s goal seeking PURR-PUSS system with modern reinforcement learning. I think there is a lot of opportunity to leverage some of this type of early AI work.

At the end of the meeting Bruce showed me around the robotics lab at the new engineering school. It was an engineer’s dream – with various robot arms, heads, bodies, hands and rigs all over the place. I think Bruce enjoys what he does.

Robotics lab

Aware Group: Artificial Intelligence consulting

Over breakfast recently I had the opportunity to talk to Brandon Hutcheson, CEO of the Aware Group. Brandon told me their 17-person company is the fastest growing artificial intelligence consulting group in New Zealand. He shared a little of their history and plans.

Brandon is a serial entrepreneur, having been involved in two successful exits with Cheaphost and Dvelop IT. The Aware Group got their start in 2016 after winning the Microsoft Excellence Award in Technology Delivery for a tertiary business intelligence implementation. This includes predicting things like student drop-out rates (a task Jade had told me they have been working on too) and measuring tutor performance. This platform is now being used by various New Zealand tertiary institutions.

They've developed their machine vision capability across various projects, including counting and classifying traffic (cars, trucks, vans, bikes, etc.) for local and national government. They've also put this technology to use counting students going into tertiary classes. The next stage for their product is the use of facial recognition to identify who is in the class, but they don't feel that we are socially ready for this level of implementation.

Aware Group people counting

Their growing data science team is not reinventing the wheel. Where possible they’re using existing models and tweaking them to meet the needs of their project. They’re using Microsoft technologies and are a strong Microsoft partner.

The newest AI capability they've developed has been around natural language processing. This has been supported by their recent entry into the Vodafone Xone incubator. They're applying this technology to understand customer support calls and to surface relevant articles from an internal knowledge base. These articles could help answer the support call more efficiently and also help up-sell the customer on relevant services. This product is currently under development and is being tested in a call centre.
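
To make that concrete, here's a minimal sketch of the retrieval step: score each knowledge-base article against the caller's utterance and surface the closest matches. The articles and the TF-IDF-plus-cosine-similarity approach are my own illustration, not Aware's actual implementation.

```python
# Minimal sketch: surface knowledge-base articles relevant to a support utterance.
# Purely illustrative; the articles and the approach are assumptions, not Aware's code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

articles = [
    "How to reset your broadband router",
    "Upgrading to an unlimited data plan",
    "Troubleshooting slow internet speeds",
]

vectorizer = TfidfVectorizer()
article_vectors = vectorizer.fit_transform(articles)

def suggest_articles(utterance, top_k=2):
    """Return the knowledge-base articles most similar to the caller's utterance."""
    query = vectorizer.transform([utterance])
    scores = cosine_similarity(query, article_vectors).ravel()
    ranked = scores.argsort()[::-1][:top_k]
    return [(articles[i], float(scores[i])) for i in ranked]

print(suggest_articles("my internet is really slow tonight"))
```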

Probably some of their highest profile work has been in demonstrating AI concepts, often in partnership with Microsoft. Some of these are quite quirky, including:

  • demonstrating how patrons with empty glasses can be identified so bar staff can come and give them a top-up
  • allowing conference attendees to have a coffee of their choice ordered as soon as they are identified on a camera
  • using a recurrent neural network to generate Simpsons lines, which won the Just for Fun category at the Microsoft Data Insights Summit

The Aware Group Simpsons line generator

On the business side, the Aware Group are getting most of their revenue from service contracts, with product-based revenue growing. They have bootstrapped to date, but are preparing for a capital raise.

One of the challenges Brandon is facing is finding good quality AI practitioners. As well as their team of 12 in Hamilton, they have 5 people based in Seattle and are planning to expand in the US market. They are also eyeing opportunities in Australia, Japan, Israel, and Korea. With customers in sectors as varied as education, government, large corporates and agriculture, my main advice to Brandon was to focus their small company geographically and within a single sector so they can benefit from getting known within a niche.

The Aware Group was recently acknowledged as a rising star by Deloitte as part of their Fast 50 awards. I’ll be following to see how far and fast their star does rise and wish them all the best.

 

Neuromorphic computing

At a recent AI forum event in Christchurch, one of the presenters was Simon Brown, a physics professor from the University of Canterbury. Simon specialises in nanotechnology and has created a chip that may be capable of low power, fast AI computations. I caught up with Simon after the event to find out more.

The chip is created using a machine consisting of a number of vacuum chambers. It starts with a metal (in this case tin) in vapour form in one vacuum chamber. As the vapour moves through the various chambers, the particles are filtered mechanically and electrically until they are just the right size (averaging 8.5 nanometres in diameter) and are then sprayed onto a blank chip. This continues until about 65% of the surface of the chip is covered with these tiny droplets.

This is just enough coverage to be almost conductive. The metal droplets on the chips are close enough to each other that an electrical charge in one will induce charges in nearby droplets. Simon describes these as being analogous to synapses in the brain which connect neurons. The strength of the connection between the two droplets is a function of the distance between them. The first chips that were created had two external connections into this nano scale circuit. Interestingly when a voltage was applied to one of the connections the resulting waveform on the other connection had properties similar to those seen in biological neurons.

An important piece of research was showing that this chip is stable, i.e. its performance doesn't change over time. That was proven, so what Simon has effectively created is a tiny neural network with many connections on a chip, in a random configuration. One feature that is unlike the artificial neural networks used for deep learning is that the strength of the connections between the neurons (the weights) cannot be changed using external controls. Instead the weights are updated through the atomic-scale physical processes that take place on the chip. So while the chips will never be as flexible as artificial neural networks implemented in software, it turns out that these "unsupervised learning" processes have been studied by computer scientists for a long time and have been shown to be very efficient at some kinds of pattern recognition. The question is whether there are applications that could leverage the "unsupervised" processing that this chip does very quickly and at low power.

The main candidate application is reservoir computing. Reservoir computing uses a fixed, random network of neurons, just like the one created by Professor Brown, to transform a signal. A single, trainable layer of neurons (implemented in software) on top of this is then used to classify the signal. A Chicago-based team has achieved this using a chip made of memristors.
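
To make the reservoir computing idea concrete, here is a toy simulation: a fixed, random recurrent network (standing in for the chip) transforms an input signal, and only a simple linear read-out is trained. Everything here, including the delayed-signal task, is my own illustrative assumption rather than anything from Simon's work.

```python
# Toy reservoir computing: a fixed random recurrent network transforms the signal,
# and only a linear read-out is trained. In the hardware case the reservoir
# dynamics would come from the chip, not from this simulation.
import numpy as np

rng = np.random.default_rng(0)
n_reservoir, n_steps = 100, 500

# Fixed, random reservoir weights (never trained)
W_in = rng.normal(scale=0.5, size=(n_reservoir, 1))
W_res = rng.normal(scale=1.0, size=(n_reservoir, n_reservoir))
W_res *= 0.9 / max(abs(np.linalg.eigvals(W_res)))   # keep the spectral radius below 1

# Example task: reproduce a delayed copy of a random input signal
u = rng.uniform(-1, 1, size=n_steps)
y = np.roll(u, 3)                                    # target: input delayed by 3 steps

# Run the reservoir and collect its states
states = np.zeros((n_steps, n_reservoir))
x = np.zeros(n_reservoir)
for t in range(n_steps):
    x = np.tanh(W_in[:, 0] * u[t] + W_res @ x)
    states[t] = x

# Train only the linear read-out (least squares), ignoring the warm-up period
W_out, *_ = np.linalg.lstsq(states[50:], y[50:], rcond=None)
pred = states[50:] @ W_out
print("read-out correlation:", np.corrcoef(pred, y[50:])[0, 1])
```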

A standard implementation of reservoir computing would have access to each of the neurons in the random network. With just two connections into the network, this chip does not have that access. When we met, the team had just created a chip with 10 connections into the network.

Their focus now is trying to prove that they can implement reservoir computing or some variant on this chip. If they can do this then there is real potential to commercialise this technology. The larger opportunity is if they could find a way to use this technology to implement deep learning.

NZ Merino using artificial intelligence to monitor sheep wellbeing

At the last Christchurch AI meetup, I met up with Ian Harris, who told me about the work he had done with neXtgen Agri for The New Zealand Merino Company (NZM) as part of Sensing Wellbeing, a collaborative Sustainable Farming Fund project. This work involved analysing data collected from accelerometers attached to the jaws of sheep to try to identify their behaviour.

A sheep with an activity monitor. NZM were very particular about ensuring the sheep were treated ethically during this data collection.

The data

Like any machine learning project, the critical part is the data. In this case the raw data came from tri-axial accelerometers, sampled at 30 Hz. This meant that for each of the three channels there were 300 samples over a 10-second period. This data was collected from 15 sheep over a period of 6 days in conjunction with Massey University.
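
For readers who want to picture the data, here is a small sketch of how a raw tri-axial stream could be cut into those 10-second, 300-sample windows. The array shapes follow the description above; the function name and the synthetic data are just my own illustration, not the project's pipeline.

```python
# Sketch of cutting the raw accelerometer stream into 10-second windows
# (3 channels x 300 samples at 30 Hz). Illustrative only; the real pipeline
# and file formats were not shared with me.
import numpy as np

SAMPLE_RATE = 30                         # Hz
WINDOW = SAMPLE_RATE * 10                # 300 samples per channel per window

def window_stream(stream):
    """stream: array of shape (n_samples, 3) -> array of shape (n_windows, 300, 3)."""
    n_windows = stream.shape[0] // WINDOW
    return stream[: n_windows * WINDOW].reshape(n_windows, WINDOW, 3)

# e.g. one hour of synthetic tri-axial data
fake_stream = np.random.randn(SAMPLE_RATE * 3600, 3)
print(window_stream(fake_stream).shape)   # (360, 300, 3)
```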

An example of the data from the 3 channels over one 10-second period

Six of the sheep were filmed during that time and their behaviour was categorised into 12 different activities. An initial review of the data showed that there was only good data for 5 of the 12 activities, so the focus was on those:

  1. sitting
  2. standing
  3. ruminating while sitting
  4. ruminating while standing
  5. grazing

For those of you (like me) who are not intricately familiar with the lives of sheep, ruminating is the process of regurgitating, re-chewing and re-swallowing food.

Random forest approach

Thirty-one different metrics, such as energy and maximum, were calculated from the raw data. The initial approach Ian took was to use a random forest algorithm with these metrics, or features, as the input. With this approach the model correctly classified 81% of the activities. This replicated an approach taken by a South African team who got similar results, which helped validate the overall setup.
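
Here's a rough sketch of what that feature-plus-random-forest approach looks like in scikit-learn. The project used 31 metrics; I've only shown a handful of example features, and the data below is synthetic, so treat this as an outline of the approach rather than Ian's actual code.

```python
# Sketch of the feature-based approach: summary statistics per window feed a
# random forest. Only a few example features are shown and the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def extract_features(window):
    """window: (300, 3) accelerometer samples -> a small feature vector."""
    feats = []
    for ch in range(3):
        x = window[:, ch]
        feats += [x.mean(), x.std(), x.max(), x.min(), np.sum(x ** 2)]  # incl. "energy"
    return feats

# Synthetic stand-in for the labelled windows (5 activity classes)
windows = np.random.randn(1000, 300, 3)
labels = np.random.randint(0, 5, size=1000)

X = np.array([extract_features(w) for w in windows])
X_train, X_test, y_train, y_test = train_test_split(X, labels, stratify=labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```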

Confusion matrix for the stratified random forest classifier, showing that it was difficult to separate sitting and standing with a high degree of accuracy.

Deep learning approach

Ian is an experienced Java developer and had taught himself Python and deep learning. For this problem he used TensorFlow and set up a relatively simple 3-layer network that took the raw data as input rather than the calculated features used with the random forest approach. His most successful model had a binary output to detect whether (or should that be wether?) the sheep was grazing or not. This model had 93% accuracy.
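
Ian's exact architecture wasn't shared, so the following is only a minimal Keras sketch of the kind of small network described: raw 300 x 3 windows in, a grazing probability out.

```python
# Minimal sketch of a small network on the raw windows with a binary
# grazing / not-grazing output. The actual architecture, layer sizes and
# training details are my assumptions; this just shows the shape of the approach.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(300, 3)),          # one 10 s window: 300 samples x 3 channels
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"), # probability the sheep is grazing
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_windows, train_labels, epochs=20, validation_split=0.2)
```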

Clustering

Only six of the fifteen sheep had their behaviour recorded, so Ian had a lot of unlabelled data. In an effort to leverage this, he used an unsupervised clustering algorithm to try to generate more data from which to learn. This did improve the detection rates, but only for models that had relatively low accuracy.
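
Ian didn't go into the details of the clustering step, but a common pattern it suggests is pseudo-labelling: cluster all the windows, give each cluster the majority label of the labelled windows that land in it, and treat those as extra training data. The sketch below shows that pattern; it is my assumption, not necessarily his method.

```python
# Pseudo-labelling via clustering: my own sketch of the general pattern,
# not Ian's actual method.
import numpy as np
from sklearn.cluster import KMeans

def pseudo_label(X_labelled, y_labelled, X_unlabelled, n_clusters=10):
    """Assign pseudo-labels to unlabelled feature vectors via shared clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    km.fit(np.vstack([X_labelled, X_unlabelled]))
    labelled_clusters = km.predict(X_labelled)
    unlabelled_clusters = km.predict(X_unlabelled)

    pseudo = np.full(len(X_unlabelled), -1)
    for c in range(n_clusters):
        members = y_labelled[labelled_clusters == c]
        if len(members) > 0:
            # majority vote of the labelled windows that fell into this cluster
            pseudo[unlabelled_clusters == c] = np.bincount(members).argmax()
    return pseudo   # -1 marks windows whose cluster contained no labelled data

# e.g. with synthetic features: 100 labelled windows (5 classes), 900 unlabelled
X_lab, y_lab = np.random.randn(100, 15), np.random.randint(0, 5, 100)
X_unlab = np.random.randn(900, 15)
print(np.unique(pseudo_label(X_lab, y_lab, X_unlab)))
```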

Sheep wellbeing

You could imagine that a connected device that could accurately predict whether a sheep is grazing may be useful if it could alert a farmer when a sheep has stopped grazing for a period of time, so he or she could check on it. It's been quite a few decades since I lived on a sheep farm, so I'm not able to make any sort of informed commentary on that. Even if this were useful, there are some engineering challenges in designing a device that is cheap enough to make it economic.

That said, I would like to congratulate NZM, and neXtgen Agri for undertaking this project. Whether or not it leads to a commercial product, they will have learned a lot about the capabilities of the AI technologies used along with the data requirements and the time and investment needed. And in Ian, New Zealand has another engineer with AI experience which I’m sure will be put to good use.

Ambit: Build your own chatbots


I am a chatbot sceptic. I think this is because they over-promise and under-deliver. They try to appear intelligent, but they're not capable of true understanding. They remind me of bad search, where I can't quite work out the right combination of words to get the information I want. I also find their attempts to be personable as annoying as Clippy. I'm not alone with this point of view. Just today I read Justin Lee's article Chatbots were the next big thing: what happened?

With that context, I ran into Tim Warren, COO and co-founder of Ambit, at the Hi-Tech Awards in Christchurch. We connected after the event and I found out more about Ambit's approach to chatbots. Tim has a diverse background: he started in software and then moved into finance, running Goldman Sachs/JB Were as COO. He and his co-founders spent quite a bit of time researching what sort of start-up they wanted to do before settling on chatbots aimed at the enterprise (for now).

They came up with their first proof of concept in 2016 and by mid last year they had a product they could sell. They now have a reasonable amount of recurring revenue and their 14-person company is close to break even and growing quickly. There’s nothing like a little revenue to counter a dose of scepticism.

They like to describe their platform as WordPress for conversational AI. Customers can create their own chatbots, but at the moment this is done by conversational designers at Ambit; their aim is to have a completely self-service product. They initially started with the Microsoft LUIS platform, but found that it had limitations, so they wrote their own.

Their core technology is trying to distil intent from utterances: that is, matching what the user types into the chat window to one of their core tasks. The system learns from examples. Typically, they need about 5 example utterances to reasonably interpret a new utterance and connect it to an intent. Their system returns a confidence score for each of the intents it supports. For example, the following questions might all be around starting an application for a mortgage:

  • I’d like to borrow some money for a house
  • Can I apply for a mortgage?
  • How do I get a loan for a property?
  • What are your mortgage rates?
  • What are your interest rates for a house loan?
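
As a toy illustration of this kind of intent matching (Ambit built their own engine, so this is emphatically not their code), you can get surprisingly far by comparing an incoming utterance against a few examples per intent and returning the best match with a similarity score:

```python
# Illustrative only: match an utterance to the closest intent by similarity
# against a handful of examples, returning a confidence score. The intent
# names are made up; Ambit's engine is their own.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

intents = {
    "start_mortgage_application": [
        "I'd like to borrow some money for a house",
        "Can I apply for a mortgage?",
        "How do I get a loan for a property?",
    ],
    "mortgage_rates": [
        "What are your mortgage rates?",
        "What are your interest rates for a house loan?",
    ],
}

examples = [(intent, text) for intent, texts in intents.items() for text in texts]
vectorizer = TfidfVectorizer().fit([t for _, t in examples])
example_vectors = vectorizer.transform([t for _, t in examples])

def classify(utterance):
    """Return (best intent, confidence score) for an incoming utterance."""
    scores = cosine_similarity(vectorizer.transform([utterance]), example_vectors).ravel()
    best = scores.argmax()
    return examples[best][0], float(scores[best])

print(classify("how much can I borrow for a home?"))
```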

A conversation is not represented linearly, but is a web of nodes. They have a demo where they can show how a web of nodes representing a conversation can be created quickly using their platform.

synapse
The Ambit chatbot builder

The other key differentiator is their hierarchical language model: while an individual chat design belongs to a particular customer, the language and synonyms belong to the industry, so all customers in that industry benefit as Ambit learns more.

Another important part of the platform is the analytics, where they can see how many of the different types of conversations are happening. They can also see where the drop-offs are in conversation funnels, which shows where they might need to change the design to improve performance. As part of this they can inspect conversations to help guide them and to learn new utterances.

Their client applications fall into three categories:

  • Navigate: helping people find information on the website
  • Acquire: using the bot to help generate sales leads
  • Support: using the bot to automate repetitive tasks, leaving more difficult support tasks to humans

They charge their customers a fixed monthly fee. For that they get a certain number of conversational nodes and a generous number of conversations that can happen on the platform. Integration with the website is simple, with the customer being able to tailor the appearance via CSS.

The Ambit chatbot at Squirrel Mortgages

They have the ability to integrate voice-to-text, but have not seen the demand for it yet. At this stage they support English only; however, they could build multilingual functionality using partner tools such as IBM's Watson as a stop-gap before building their own.

They are too new to have really solid customer case studies and hard ROI metrics. They are working on that now. However, the customer value is self-evident: lower cost to serve and happier customer facing people because they don’t have to deal with repetitive, easy questions.

While Ambit haven’t converted this sceptic to a chatbot fan, I can see there is a business here. As they get more customers and build out the product the chatbots will only improve. I’ll be interested to see how the business develops.

NEC NZ: AI for smart cities

While I was in Wellington as part of the excellent AI panel put together by WREDA, I caught up with Tim Rastall, who recently left NEC to start his own consulting firm. Tim gave me some insight into the interesting AI work NEC has been doing in New Zealand. NEC is the Japanese tech giant best known for its IT services and products. One of the core focuses for NEC's New Zealand arm is smart city technologies.

Car counting was the first project Tim discussed. This was a very early project at a time when there were limited tools available. The project used machine vision to count vehicles in a stream of traffic. One of the problems they had with these early models is that they were trained in cities with more smog than Wellington. The crisp NZ light created shadows that confused the model. This, combined with viewing angles different from those the model was trained on, meant that out of the box the accuracy was not as high as hoped. There are now a few third-party solutions that do a reasonable job of car counting. What they really learned from the project was that the market wanted vehicle categorisation solutions, but that using surveillance cameras and analytics presented a raft of technical and practical issues. This led them to develop a road-based sensor that provides much higher accuracy and reliability and is about to be deployed for field trials in Wellington.

One of NEC's higher profile projects was the Safe City Living Lab. This is a research project that placed microphones and cameras around the city to detect events such as breaking glass, screaming, begging, or fighting. The idea is that if an event is detected, an agency could be alerted. There were concerns around privacy, but these were addressed to the satisfaction of the Office of the Privacy Commissioner. Tim also shared that they worked hard to remove bias from the models.

Safe City Beggar

NEC also worked with Victoria University to build a model to identify birds from audio recordings. Researcher Victor Anton collected tens of thousands of hours of recordings, and the idea was to use deep learning to automatically identify which birds had been recorded. Like a lot of deep learning tasks, the biggest problem was getting a good data set to train from. This involved having people listen to audio recordings and identify which birds, if any, were in each clip. This is difficult because a lot of the recordings contain other sounds: trees rustling in the wind, other animals, or urban sounds like doorbells. So the first tool they built was a model to identify whether a particular clip contained birdsong or not, which would then make the tagging task easier. From the sounds of it (no pun intended), this is not yet solved for sensors with novel environmental noise, but for sensors where they had good training data they could categorise individual species very well. Listen to a Radio NZ interview with Victor and Tim about the birdsong AI project for more information.
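
The NEC/Victoria model isn't public, but a typical way to build that "does this clip contain birdsong?" filter is a small convolutional network over audio spectrograms. The sketch below assumes that approach, with made-up input dimensions.

```python
# Sketch of a birdsong-presence filter: a small CNN over log-mel spectrograms.
# The architecture and input size are my assumptions, not the actual NEC/Victoria model.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),        # log-mel spectrogram of one clip
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # P(clip contains birdsong)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```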

This research is related to the Cacophony Project, which was inspired by the idea that measuring the volume of birdsong would be a way of measuring the impact of predator control. Cacophony has since pivoted and is now creating an AI-powered predator killing machine.

The acoustic sensor technology (image: Yadana Saw)

NEC's current focus is aggregating city data from various sources and making it easily accessible. This includes powerful visualisation tools that allow the data to be overlaid on a map and viewed in 3D, either on a screen or in VR. This makes the data easier to understand and helps people make decisions with it more efficiently. While this isn't directly related to AI, having the data accessible makes it much easier to use in AI applications.

3D capture of Wellington

In summary, NEC New Zealand has worked on many interesting smart city projects using deep learning and data visualisation. They have a sizeable team that is growing in experience. I’m interested to see what they come up with next.

Meanwhile Tim has left NEC to work on a gaming start up and do some consulting around artificial intelligence, augmented and virtual reality, and Internet of Things. I’m sure he’ll do well.

 

Autonomous weapons, running robots, OpenAI and more

Here are some highlights of the AI reading, watching and listening I've been doing over the past few weeks.

A couple of videos from the MIT AGI series. First, Richard Moyes, co-founder and managing director of Article 36, on autonomous weapons and the efforts he and others are making to reduce their impact.

The second is Ilya Sutskever, co-founder of OpenAI, on neural networks, deep reinforcement learning, meta-learning and self-play. He seems pretty convinced that it is a matter of when, not if, we will get machines with human-level intelligence.

Judea Pearl, who proposed that Bayesian networks could be used to reason probabilistically, laments that AI can't compute cause and effect and summarises deep learning as simply curve fitting.

After reading a post about Mask R-CNN being used on footage of Ronaldo's recent bicycle kick goal, I took a look at the description of the code here.

Mask R-CNN applied to a street scene and to football footage

An interview with Jeff Dean, Google Senior Fellow and head of the company's deep learning research team, Google Brain, on core machine learning innovations from Google and future directions. This guy is a legend.

An interview with Jose Hernandez-Orallo on the kinds of intelligence.


And of course the video of the running, jumping robot from Boston Dynamics. If you want to find out a little bit more about the company I recommend the lecture from their CEO Marc Raibert below.

 

The impact and opportunity of AI in NZ

I've just read the AI Forum report analysing the impact and opportunity of artificial intelligence within New Zealand, which was released last week. At 108 pages it's a substantial read. You can see the full report here.

AI forum report

The timing of this report is very good. There is a lot of news about AI and a growing awareness of it. But at the same time, I believe there is a lack of understanding of what AI is capable of and how organisations can take advantage of the recent advances.

I think the first level of misunderstanding is that people overestimate what the technology can do. This is driven by science fiction and a misinformed media, and fuelled by marketers who want their company and products to be seen to be using AI. AI is nowhere near human-level intelligence and doesn't understand concepts like a human does (see my post on the limits of deep learning). That may change, but major breakthroughs are needed and it's not clear when or if those will occur (see predictions from AI pioneer Rodney Brooks for more on this).

Although AI does not have human level intelligence, there are a host of applications for the technology. I think the second level of misunderstanding is around how difficult and expensive it is to take advantage of this AI technology. The assumption is that it’s expensive and you need a team of “rocket scientists”. From what I’ve seen studying deep learning and talking to NZ companies that are using AI, the technology is very accessible and the investment required is relatively small.

The report is level-headed: it's not predicting massive job losses. I'm not going to comment further on the predictions of economic impact. They'll be wrong because, to quote Niels Bohr, prediction is very difficult, especially about the future.

In my opinion the report did not place enough emphasis on the importance of deep learning. The rise of this technology has driven the resurgence of AI in recent years. The report's history of AI missed the single most important event: the AlexNet neural network winning the ImageNet competition. This brought deep learning to the attention of the world's AI researchers and triggered a tsunami of research and development. I would go so far as to suggest that the majority of the focus on AI should be on deep learning.


The key recommendation of the report is that NZ needs to adopt an AI strategy. I agree. Of the 6 themes they suggested, I think the key ones are:

  1. Increasing the understanding of AI capability. This should involve educating the decision makers at the board and executive level about the opportunities to leverage AI technology and the investment required. The outcome of this should be more organisations deciding to invest in AI.
  2. Growing the capability. NZ needs more AI practitioners. While we can attract immigrants with these skills, we also need to educate more people. I was encouraged to see the report advocating the use of online learning. I agree that NZQA should find a way to recognise these courses but think we should go further. Organisations should be incentivised to train existing staff using these courses (particularly if they have a project identified) and young people should be subsidised to study AI either online or undergrad/postgrad at universities.

I am less worried about the risks. I would rather we deployed AI that turned out to be biased, opaque, unethical or in breach of copyright than not use the technology at all; at least then we would be using it and could address those concerns as they came up. I am also not worried about the existential threat of AI. First, I think human-level intelligence may be a long time away. Second, I'm somewhat fatalistic: I can't see how you could stop those breakthroughs from happening. We need to make sure that humans come along for the ride.

From my perspective the authors have done a very good job with this report. I encourage you to take the time to read it.  I encourage the government to adopt its recommendations.

Ohmio Automation: self-driving buses

Last month I had lunch with Yaniv Gal, the artificial intelligence manager at Ohmio. Yaniv is an interesting character who grew up in Israel and has focused his career on computer vision and machine learning, both in academia and industry. Although a lot of his experience was in medical imaging, in New Zealand he had been working in the fresh produce industry as research program manager at Compac Sorting Equipment which uses machine vision to automatically sort fruit. At Ohmio he’s built one of the largest AI teams in New Zealand.

Yaniv explained that Ohmio emerged from electronic sign provider HMI Technologies. HMI has been around since 2002 and has thousands of signs operating throughout NZ. To me it seemed unusual that an electronic sign company would spawn a self-driving vehicle company, but there were a couple of core reasons:

  1. They had some experience using machine vision with traffic: cameras attached to their signs could be used to count traffic in a much more cost-effective and reliable way than digging up the road to install induction loop sensors.
  2. They had experience installing infrastructure alongside roads. This type of infrastructure could be used to aid a self-driving vehicle along a fixed path.

This is a crucial differentiator for Ohmio. They are not trying to compete with the myriad large companies that are trying to develop level 5 autonomous vehicles: ones that can drive without human input in all conditions. This is a difficult problem. Sacha Arnold, the director of engineering at Waymo (owned by Alphabet, Google's parent), recently said they are about 90% of the way there, but they still have 90% to go. Instead, Ohmio are going for the more tractable problem of building a vehicle platform that can navigate along a fixed path. They call this level 4+ autonomy. While this doesn't have the same broad opportunity as level 5, they believe it is something they can build, and that there is still a large market opportunity.

Ohmio LIFT

Their first customer is Christchurch Airport. This will allow them to prove the concept and refine the technologies. The economics are obvious: with no driver, it just ends up cheaper. It's not just about the money, though; Yaniv is confident it will be safer and, with electric vehicles, greener. Since our meeting, Ohmio have announced a sale of 150 shuttles to Korean company Southwest Coast Enterprise City Development Co Ltd.

For fixed-path navigation, the path can be learnt and, if necessary, additional infrastructure can be added along the path to aid the vehicle in localising and navigating. Most of this is done on the vehicle using a variety of sensors. To establish exactly where it is, odometry, GPS and LIDAR are combined to get a more accurate location, with more redundancy than is possible with a single sensor. Combining data from multiple sensors like this is called sensor fusion. Company R&D coordinator Mahmood Hikmet described this in his recent AI Day presentation.
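
As a toy example of sensor fusion, here is the simplest version of the idea: combine noisy position estimates from independent sensors, weighting each by how much you trust it (inverse-variance weighting, the one-dimensional special case of a Kalman update). Ohmio's actual pipeline will be far more sophisticated; the numbers below are made up.

```python
# Minimal illustration of sensor fusion: combine noisy position estimates from
# independent sensors using inverse-variance weighting. Not Ohmio's pipeline.
def fuse(estimates):
    """estimates: list of (position, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * pos for (pos, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# e.g. odometry, GPS and LIDAR localisation each give a position along the path (metres)
print(fuse([(102.4, 0.5), (101.8, 2.0), (102.1, 0.2)]))
```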

Machine vision is used primarily for vehicle navigation and collision avoidance. Here collision avoidance is detecting whether there is an object in the vehicle’s path, or if an object may come into its path. Ohmio use a variety of machine vision techniques, including deep learning. Yaniv’s experience predates the recent rise in popularity of neural networks. He is aware of the few disadvantages deep learning suffers compared to more “traditional” machine learning techniques and so he doesn’t feel like every machine vision problem needs the hammer that is a deep neural network.

Yaniv confessed that it will be a nervous moment the first time the system carries passengers with no safety driver. However, he is confident it is going to be safer than a vehicle with a driver. We talked about how both of us would be even more nervous using the self-flying taxi that Kitty Hawk is testing in Christchurch, even though it should be safer than a ground-based vehicle because there are fewer objects to crash into. We compared this to the fear people felt when elevators first became self-operating, without an elevator operator. It seems like a silly fear now. Maybe the next generation will laugh at our anxiety about self-driving vehicles.

After lunch I told my Uber driver about our conversation. He expressed concern about whether the tech should be developed and the loss of jobs that will come with it. This is an understandable concern given his career (although his master's in aeronautical engineering should see him right). There are too many people working on this type of technology now to stop it: if the genie is not out of the bottle, he has his head and shoulders out. The economics are too strong, and the world should be a better place with this technology. It's nice to see a NZ company contributing.

Lincoln Agritech: using machine vision for estimating crop yield

I recently had the opportunity to visit Lincoln Agritech, where I met with CEO Peter Barrowclough, Chief Scientist Ian Woodhead and their machine vision team of Jaco Fourie, Chris Bateman and Jeffrey Hsiao. Lincoln Agritech is an independent R&D provider to the private sector and government, employing 50 scientists, engineers and software developers. It is 100% owned by Lincoln University, but is distinct from the university's research and commercialisation office.

Lincoln Agritech have taken a different approach to developing AI capability. Rather than hiring deep learning experts, they have invested in upskilling existing staff, allowing Jaco, Chris and Jeffrey to take a Udacity course in machine vision using deep learning. The investment is in their staff's time, and having three of them take the course together means they can learn from each other.

The core projects they are working on involve estimating the yield of grape and apple crops based on photos and microwave images. The business proposition is to provide information for better planning for the owners of the crops, both in-field and in-market. Operators of these vineyards and orchards can get a pretty good overall crop estimate based on historical performance and weather information. However, they can't easily get plant-by-plant estimates. To do this they need an experienced person to walk the fields and make a judgement call. A machine vision based approach can be more efficient, with better accuracy.

The team elected to tackle the problem initially using photos. They had to take the images carefully at a prescribed distance from the crop, using HDR (where you combine light, medium and dark exposures to bring out the detail in the shadowy canopy). Like most machine learning tasks, the biggest problem was getting a tagged data set. The tagging involved drawing polygons around the fruit in the images, including fruit partially occluded by leaves. There was a lot of work in training people to do this properly. Inevitably at some stage there were guys with PhDs drawing shapes; such is the glamour of data science. This problem is similar to that faced by Orbica, who built a model to draw polygons around buildings and rivers in aerial photography.

In this image labels of a fixed size are added to an image to tell the model where the grape bunches are.
This image shows the result of a trained network automatically finding the areas in the image where the grape bunches are.

They used a convolutional neural network to tackle this problem. Rather than training a network from scratch and coming up with their own architecture, they adapted the ImageNet-winning Inception architecture. This network was already trained to extract the features from an image that are required to classify 1000 different classes of images. This technique is called transfer learning. The model works well, with 90% accuracy on their validation data set.
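
Here's roughly what that transfer learning setup looks like in Keras: load Inception pre-trained on ImageNet, freeze the feature extractor, and train a new head for the fruit task. The head, input size and binary output are my assumptions for illustration, not the team's actual configuration.

```python
# Sketch of transfer learning: reuse an ImageNet-trained Inception network and
# train only a new head. The head and task framing are illustrative assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(weights="imagenet", include_top=False,
                                          input_shape=(299, 299, 3))
base.trainable = False                       # keep the ImageNet-learned features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # e.g. grape bunch present in a patch
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(labelled_image_patches, patch_labels, epochs=10)
```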

However, part of the challenge here is that the images do not show all of the fruit that is on the plant. The only way to get the “ground truth” is to have someone go under the canopy and count all the fruit by hand. This is where the microwave technology comes into play.

The company is recognised internationally for its microwave technology in other projects. The way it works is that a microwave transmitter emits microwaves and then detects the reflections. The microwaves travel through leaves, but are reflected by the water content in reasonably mature fruit.

The machine vision team is working to create a model that can use the microwave image and the photo together to get superior performance. This is a harder problem because this type of sensor fusion is less common than regular image processing.

The team is using the Tensorflow and Keras platforms on high end machines with Nvidia Titan GPUs. There were a few raised eyebrows from the company when the team were asking for what essentially looked like high end gaming machines.

I applaud Lincoln Agritech for investing in their deep learning capability. The experience gained from their first projects will make it easier to apply the technology in each subsequent one. Having three people working on this provides redundancy and the ability to learn from each other. This is a model that other New Zealand organisations should consider, particularly if they're having problems finding data scientists. Applying the latest AI technologies to agriculture seems like a real opportunity for New Zealand.