Auckland University Robotics Lab

I recently had the chance to catch up with Professor Bruce MacDonald who chairs the Auckland University Robotics Research Group. Although we had never met before, Bruce and I have a connection, having the same PhD supervisor, John Andreae.

Bruce took me through some of the robotics projects that he and his team have been working on. The most high-profile project is a kiwifruit picking robot that has been a joint venture with Robotics Plus, Plant and Food Research and Waikato University. This multi-armed robot sits atop an autonomous vehicle that can navigate beneath the vines. Machine vision systems identify the fruit and obstacles and calculate where they are relative to the robot arm, which is then guided to the fruit. A hand gently grasps the fruit and removes it from the vine using a downward twisting movement. The fruit then rolls down a tube.

Kiwifruit picking robot

The work has been split between the groups with the Auckland University team focused on the machine vision and vehicle navigation, Waikato on the control electronics and software, and Robotics Plus on the hardware. The team estimates that the fruit picking robot will be ready to be used in production in a couple of years. The current plan is to use it to provide a fruit picking service for growers. This way their customers don’t need to worry about robot repairs and maintenance and the venture can build a recurring revenue base. They are already talking to growers in New Zealand and the USA.

Along with Plant and Food Research, the group is also researching whether the same platform can be used to pollinate the kiwifruit flowers. With bee populations declining, pollination is becoming expensive, and this may provide a cost-effective alternative.

The group has just received funding of $17m to improve worker performance in orchards and vineyards. The idea is to use machine vision to understand what expert pruners do and translate that into a training tool for people learning to prune and for an automated robot.

Bruce’s earlier work included the use of robotics in healthcare: investigating whether robots could help people take their medication correctly, and whether robots could provide companionship to those with dementia who are unable to keep a pet.

Therapeutic robot

I asked Bruce whether Auckland University taught deep learning at an undergraduate level. He said that they don’t, but it is widely used by postgrad students. They just pick it up.

Bruce is excited by the potential of reinforcement learning. We discussed whether there is the possibility of using our supervisor’s goal seeking PURR-PUSS system with modern reinforcement learning. I think there is a lot of opportunity to leverage some of this type of early AI work.

At the end of the meeting Bruce showed me around the robotics lab at the new engineering school. It was an engineer’s dream – with various robot arms, heads, bodies, hands and rigs all over the place. I think Bruce enjoys what he does.

Robotics lab

Neuromorphic computing

At a recent AI forum event in Christchurch, one of the presenters was Simon Brown, a physics professor from the University of Canterbury. Simon specialises in nanotechnology and has created a chip that may be capable of low power, fast AI computations. I caught up with Simon after the event to find out more.

The chip is created using a machine consisting of a number of vacuum chambers. It starts with a metal (in this case tin) in vapour form in one vacuum chamber. As it moves through the various chambers the vapour particles are filtered, mechanically and electrically, until they are just the right size (averaging 8.5 nanometers in diameter) and they are sprayed onto a blank chip. This is done until about 65% of the surface of the chip is covered with these tiny droplets.

This is just enough coverage to be almost conductive. The metal droplets on the chip are close enough to each other that an electrical charge in one will induce charges in nearby droplets. Simon describes these as being analogous to the synapses in the brain which connect neurons. The strength of the connection between two droplets is a function of the distance between them. The first chips that were created had two external connections into this nanoscale circuit. Interestingly, when a voltage was applied to one of the connections, the resulting waveform on the other connection had properties similar to those seen in biological neurons.

An important piece of research was showing that this chip was stable, i.e. the performance didn’t change over time. That was proven, and so what Simon has been able to create is effectively a tiny neural network with many connections on a chip that has a random configuration. One feature that is unlike the artificial neural networks used for deep learning is that the strength of the connections between the neurons (the weights) cannot be changed using external controls. Instead the weights are updated through the atomic-scale physical processes that take place on the chip. So while the chips will never be as flexible as artificial neural networks implemented in software, it turns out that these “unsupervised learning” processes have been studied by computer scientists for a long time and have been shown to be very efficient at some kinds of pattern recognition. The question is whether there are applications that could leverage the “unsupervised” processing that this chip does very quickly and at low power.

The main candidate application is reservoir computing. Reservoir computing uses a fixed, random network of neurons, just like the one created by Professor Brown, to transform a signal. A single, trainable layer of neurons (implemented in software) on top of this is then used to classify the signal. A Chicago-based team has achieved this using a chip made of memristors.
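To make the idea concrete, here is a minimal echo-state-network-style sketch of reservoir computing in Python. Everything in it is an assumption for illustration – the reservoir size, the scaling factors and the toy sine-wave task are mine, not properties of Simon’s chip or the memristor device – but it shows the essential point: the random network stays fixed and only the linear readout layer is trained.

    import numpy as np

    rng = np.random.default_rng(0)
    n_inputs, n_reservoir = 1, 100

    # Fixed, random input and reservoir weights (never trained).
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the dynamics stable

    def run_reservoir(inputs):
        # Drive the fixed random network with a signal and collect its states.
        state = np.zeros(n_reservoir)
        states = []
        for u in inputs:
            state = np.tanh(W_in @ np.atleast_1d(u) + W @ state)
            states.append(state.copy())
        return np.array(states)

    # Toy task: predict the next value of a sine wave.
    signal = np.sin(np.linspace(0, 20 * np.pi, 1000))
    X, y = run_reservoir(signal[:-1]), signal[1:]

    # Only this linear readout is trained (ridge regression); the reservoir is untouched.
    ridge = 1e-6
    W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ y)
    prediction = X @ W_out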

A standard implementation of reservoir computing would have access to each of the neurons in the random network. With just two connections into the network, this chip does not have that access. When we met, the team had just created a chip with 10 connections into the network.

Their focus now is trying to prove that they can implement reservoir computing or some variant on this chip. If they can do this then there is real potential to commercialise this technology. The larger opportunity is if they could find a way to use this technology to implement deep learning.

NZ Merino using artificial intelligence to monitor sheep wellbeing

At the last Christchurch AI meetup, I met up with Ian Harris, who told me about the work he had done with neXtgen Agri for The New Zealand Merino Company (NZM) as part of Sensing Wellbeing, a collaborative Sustainable Farming Fund project. This work involved analysing data collected from accelerometers attached to the jaws of sheep to try to identify their behaviour.

SheepWithActivityMonitor
A sheep with an activity monitor. NZM were very particular about ensuring the sheep were treated ethically during this data collection.

The data

Like any machine learning project, the critical part is the data. In this case the raw data came from the tri-axial accelerometers, sampled at 30 Hz. This meant that for each of the three channels there were 300 samples over a 10-second period. This data was collected from 15 sheep over a period of 6 days in conjunction with Massey University.
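As a small illustration of how that windowing works (the array names and the use of non-overlapping windows are my assumptions, not details from the project):

    import numpy as np

    SAMPLE_RATE_HZ = 30
    WINDOW_SECONDS = 10
    SAMPLES_PER_WINDOW = SAMPLE_RATE_HZ * WINDOW_SECONDS  # 300 samples per channel

    def make_windows(raw):
        # Split a continuous (n_samples, 3) accelerometer stream into
        # non-overlapping windows of shape (3, 300).
        n_windows = raw.shape[0] // SAMPLES_PER_WINDOW
        trimmed = raw[:n_windows * SAMPLES_PER_WINDOW]
        return trimmed.reshape(n_windows, SAMPLES_PER_WINDOW, 3).transpose(0, 2, 1)

    # One hour of recording from one sheep becomes 360 windows of shape (3, 300).
    one_hour = np.random.randn(SAMPLE_RATE_HZ * 3600, 3)
    windows = make_windows(one_hour)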

AccelGrazingObsId29684Sheep1
An example of the data from the 3 channels over one 10-second period

Six of the sheep were filmed during that time and their behaviour was categorised into 12 different activities. An initial review of the data showed that there was only good data for 5 of the 12 activities, so the focus was on those activities:

  1. sitting
  2. standing
  3. ruminating while sitting
  4. ruminating while standing
  5. grazing

For those of you (like me) who are not intricately familiar with the lives of sheep, ruminating is the process of regurgitating, re-chewing and re-swallowing food.

Random forest approach

Thirty-one different metrics, such as energy and maximum, were calculated from the raw data. The initial approach Ian took was to use a random forest algorithm with these metrics, or features, as an input. With this approach the model correctly classified 81% of the activities. This replicated an approach taken by a South African team who got similar results, which helped validate the overall set-up.
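In code, this approach looks roughly like the sketch below (scikit-learn; the handful of features shown are illustrative stand-ins for the 31 metrics, and the windows and labels arrays are hypothetical):

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    def window_features(window):
        # A few simple per-channel summary metrics for one (3, 300) window.
        feats = []
        for channel in window:
            feats += [channel.mean(), channel.std(), channel.max(),
                      channel.min(), np.sum(channel ** 2)]  # last one is signal energy
        return feats

    # Hypothetical data: 500 windows of accelerometer data and their observed activities.
    windows = np.random.randn(500, 3, 300)
    labels = np.random.randint(0, 5, 500)  # the five activity classes

    X = np.array([window_features(w) for w in windows])
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, stratify=labels, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", clf.score(X_test, y_test))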

Stratified Sheep RandomForestClassifier Confusion Matrix
This confusion matrix shows that it was difficult to separate sitting and standing with a high degree of accuracy.

Deep learning approach

Ian is an experienced Java developer and had taught himself Python and deep learning. For this problem he used TensorFlow and set up a relatively simple 3-layer network that took the raw data as input, rather than the calculated features used with the random forest approach. His most successful model had a binary output to detect whether (or should that be wether?) the sheep was grazing or not. This model had 93% accuracy.
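A minimal sketch of such a network in TensorFlow’s Keras API is below. This is not Ian’s actual architecture – the layer sizes and the use of raw (3, 300) windows as input are my assumptions.

    import tensorflow as tf

    # A simple 3-layer network: raw accelerometer window in, grazing probability out.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(3, 300)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # grazing vs not grazing
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

    # `windows` and `is_grazing` are hypothetical training arrays.
    # model.fit(windows, is_grazing, epochs=20, validation_split=0.2)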

Clustering

Only six of the fifteen sheep had their behaviour recorded, so Ian had a lot of unlabelled data. In an effort to leverage this data he used an unsupervised clustering algorithm to try to generate more data from which to learn. This did improve the detection rates, but only for the models that had relatively low accuracy.
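One common way to do this is to cluster all of the windows, give each cluster the majority label of the labelled windows that fall into it, and use those pseudo-labels as extra training data. The sketch below (scikit-learn KMeans on hypothetical feature arrays) shows the general idea; the post doesn’t say exactly which clustering algorithm Ian used.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical feature arrays: labelled windows from the six filmed sheep,
    # unlabelled windows from the remaining nine.
    X_labelled = np.random.randn(200, 15)
    y_labelled = np.random.randint(0, 5, 200)
    X_unlabelled = np.random.randn(800, 15)

    kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(
        np.vstack([X_labelled, X_unlabelled]))

    labelled_clusters = kmeans.labels_[:len(X_labelled)]
    unlabelled_clusters = kmeans.labels_[len(X_labelled):]

    # Propagate each cluster's majority label to its unlabelled members.
    pseudo_labels = np.zeros(len(X_unlabelled), dtype=int)
    for c in range(kmeans.n_clusters):
        members = y_labelled[labelled_clusters == c]
        if len(members):
            pseudo_labels[unlabelled_clusters == c] = np.bincount(members).argmax()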

Sheep wellbeing

You could imagine that a connected device that could accurately predict whether a sheep is grazing might be useful if it could alert a farmer when a sheep has stopped grazing for a period of time, so he or she could check on the sheep. It’s been quite a few decades since I lived on a sheep farm – so I’m not able to make any sort of informed commentary on that. Even if this was useful, there are some engineering challenges in designing a device that is cheap enough to make it economic.

That said, I would like to congratulate NZM, and neXtgen Agri for undertaking this project. Whether or not it leads to a commercial product, they will have learned a lot about the capabilities of the AI technologies used along with the data requirements and the time and investment needed. And in Ian, New Zealand has another engineer with AI experience which I’m sure will be put to good use.

NEC NZ: AI for smart cities

While I was in Wellington as part of the excellent AI panel put together by WREDA, I caught up with Tim Rastall who recently left NEC to start his own consulting firm. Tim gave me some insight into the interesting AI work NEC have been doing in New Zealand. NEC is the Japanese tech giant best known for their IT services and products. One of the core focuses for NEC’s New Zealand arm is on smart cities technologies.

Car counting was the first project Tim discussed. This was a very early project at a time when there were limited tools available. The project used machine vision to count vehicles in a stream of traffic. One of the problems they had with these early models is that they were trained in cities with more smog than Wellington. The crisp NZ light would create shadows that would confuse the model. This, combined with viewing angles different from those the model was trained on, meant that out of the box the accuracy was not as high as hoped for. There are now a few third-party solutions that do a reasonable job of car counting. What they really learned from the project was that the market wanted vehicle categorisation solutions, but using surveillance cameras and analytics presented a raft of technical and practical issues. This led to them developing a road-based sensor that provides much higher accuracy and reliability and is about to be deployed for some field trials in Wellington.

One of NEC’s higher profile projects was the Safe City Living Lab. This is a research project that includes microphones and cameras placed around the city that can detect events such as breaking glass, screaming, begging, or fighting. The idea is that if an event is detected, an agency can be alerted. There were concerns around privacy, but these were addressed to the satisfaction of the Office of the Privacy Commissioner. Tim also shared that they worked hard to remove bias from the models.

Safe City Beggar

NEC also worked with Victoria University to build a model to identify birds from audio recordings. Researcher Victor Anton collected tens of thousands of hours of recordings. The idea was to use deep learning to automatically identify which birds had been recorded. Like a lot of deep learning tasks, the biggest problem was getting a good data set to train from. This involved having people listen to audio recordings and identify which birds, if any, were in the clip. This is difficult because a lot of the recordings contain other sounds: trees rustling in the wind, other animals, or urban sounds like doorbells. So the first tool they built was a model to identify whether a particular clip contained birdsong or not, which would then make the tagging task easier. From the sounds of it (no pun intended), this is not yet solved for sensors that picked up novel environmental noise. But for the sensors where they had good training data, they could categorise individual species very well. Listen to a Radio NZ interview with Victor and Tim about the birdsong AI project for more information.
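A bird / no-bird detector like that first tool is typically built by turning each clip into a spectrogram and training a small binary classifier on it. The sketch below (librosa plus Keras, with made-up clip lengths and layer sizes) shows the general shape of such a model; it is not NEC’s actual implementation.

    import numpy as np
    import librosa
    import tensorflow as tf

    def clip_to_spectrogram(path, sr=22050, duration=5.0):
        # Load a short audio clip and convert it to a log-mel spectrogram.
        audio, _ = librosa.load(path, sr=sr, duration=duration)
        mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=64)
        return librosa.power_to_db(mel)[..., np.newaxis]  # shape (64, frames, 1)

    # Small CNN that answers "does this clip contain birdsong?"
    # Input shape assumes 5-second clips at 22.05 kHz (about 216 spectrogram frames).
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 216, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])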

This research is related to the Cacophony Project, which was inspired by the idea that measuring the volume of birdsong would be a way of measuring the impact of predator control. Cacophony has since pivoted and is now creating an AI-powered predator killing machine.

The acoustic sensor technology (Yadana Saw)

NEC’s current focus is aggregating city data from various sources and making it easily accessible. This includes powerful visualisation tools that allow the data to be overlaid on a map and viewed in 3D, either on a screen or in VR. This makes the data easier to understand and helps people make decisions with it more efficiently. While this isn’t directly related to AI, having the data accessible makes it much easier for people to use in AI applications.

3D capture of Wellington

In summary, NEC New Zealand has worked on many interesting smart city projects using deep learning and data visualisation. They have a sizeable team that is growing in experience. I’m interested to see what they come up with next.

Meanwhile Tim has left NEC to work on a gaming start-up and do some consulting around artificial intelligence, augmented and virtual reality, and the Internet of Things. I’m sure he’ll do well.

 

The impact and opportunity of AI in NZ

I’ve just read the AI Forum report analysing the impact and opportunity of artificial intelligence within New Zealand, which was released last week. At 108 pages it’s a substantial read. You can see the full report here.

AI forum report

The timing of this report is very good. There is a lot of news about AI and a growing awareness of it. But at the same time, I believe there is a lack of understanding of what AI is capable of and how organisations can take advantage of the recent advances.

I think the first level of misunderstanding is that people overestimate what the technology can do. This is driven by science fiction and a misinformed media, and fuelled by marketers who want their company and products to be seen to be using AI. AI is nowhere near human-level intelligence and doesn’t understand concepts like a human (see my post on the limits of deep learning). That may change, but major breakthroughs are needed and it’s not clear when or if those will occur (see predictions from AI pioneer Rodney Brooks for more on this).

Although AI does not have human level intelligence, there are a host of applications for the technology. I think the second level of misunderstanding is around how difficult and expensive it is to take advantage of this AI technology. The assumption is that it’s expensive and you need a team of “rocket scientists”. From what I’ve seen studying deep learning and talking to NZ companies that are using AI, the technology is very accessible and the investment required is relatively small.

The report is level-headed: it’s not predicting massive job losses. I’m not going to comment further on the predictions on the economic impact. They’ll be wrong – because, to quote Niels Bohr – predicting is very difficult, especially about the future.

In my opinion the report did not place enough emphasis on the importance of deep learning. The rise of this technology has driven the resurgence of AI in recent years. The report’s history of AI missed the single most important event: the AlexNet neural network winning the ImageNet competition. This brought deep learning to the attention of the world’s AI researchers and triggered a tsunami of research and development. I would go so far as to suggest that the majority of the focus on AI should be on deep learning.

image_classification_006

The key recommendation of the report is that NZ needs to adopt an AI strategy. I agree. Of the six themes they suggested, I think the key ones are:

  1. Increasing the understanding of AI capability. This should involve educating the decision makers at the board and executive level about the opportunities to leverage AI technology and the investment required. The outcome of this should be more organisations deciding to invest in AI.
  2. Growing the capability. NZ needs more AI practitioners. While we can attract immigrants with these skills, we also need to educate more people. I was encouraged to see the report advocating the use of online learning. I agree that NZQA should find a way to recognise these courses but think we should go further. Organisations should be incentivised to train existing staff using these courses (particularly if they have a project identified) and young people should be subsidised to study AI either online or undergrad/postgrad at universities.

I am less worried about the risks. I would rather have AI that was biased, opaque, unethical and breaking copyright law than not use the technology at all. At least then we would be using it and could address those concerns as they came up. I am also not worried about the existential threat of AI. First, I think human-level intelligence may be a long time away. Second, I’m somewhat fatalistic – I can’t see how you could stop those breakthroughs from happening. We need to make sure that humans come along for the ride.

From my perspective the authors have done a very good job with this report. I encourage you to take the time to read it, and I encourage the government to adopt its recommendations.

Lincoln Agritech: using machine vision for estimating crop yield

I recently had the opportunity to visit Lincoln Agritech, where I met with CEO Peter Barrowclough, Chief Scientist Ian Woodhead, and their machine vision team of Jaco Fourie, Chris Bateman and Jeffrey Hsiao. Lincoln Agritech is an independent R&D provider to the private sector and government, employing 50 scientists, engineers and software developers. It is 100% owned by Lincoln University, but distinct from the university’s research and commercialisation office.

Lincoln Agritech have taken a different approach to developing AI capability. Rather than hiring deep learning experts, they have invested in upskilling existing staff by allowing Jaco, Chris and Jeffrey to take a Udacity course in machine vision using deep learning. The investment is in the time of their staff. Having three of them take the course together means that they can learn off each other.

The core projects they are working on involve estimating the yield of grape and apple crops based on photos and microwave images. The business proposition is to provide information for better planning by the owners of the crops, both in-field and in-market. Operators of these vineyards and orchards can get a pretty good overall crop estimate based on historical performance and weather information. However, they can’t easily get plant-by-plant estimates. To do this they need an experienced person to walk the fields and make a judgement call. A machine vision based approach can be more efficient with better accuracy.

The team elected to tackle the problem initially using photos. They had to take the images carefully at a prescribed distance from the crop, using HDR (this is where you combine light, medium and dark images to bring out the detail in the shadowy canopy). Like most machine learning tasks, the biggest problem was getting a tagged data set. The tagging involved drawing polygons around the fruit in the images, including fruit partially occluded by leaves. There was a lot of work trying to train people to do this properly. Inevitably at some stage there were guys with PhDs drawing shapes, such is the glamour of data science. This problem is similar to that faced by Orbica, who built a model to draw polygons around buildings and rivers from aerial photography.
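For the HDR step, one common way to combine the light, medium and dark shots is exposure fusion. A small OpenCV sketch is below; the file names are made up and this is not necessarily the pipeline the team used.

    import cv2

    # Three exposures of the same canopy scene (hypothetical file names).
    exposures = [cv2.imread(p) for p in ("vine_dark.jpg", "vine_medium.jpg", "vine_light.jpg")]

    # Mertens exposure fusion keeps detail in both the bright sky and the shadowy canopy.
    fused = cv2.createMergeMertens().process(exposures)  # float image, roughly in [0, 1]

    cv2.imwrite("vine_hdr.jpg", (fused * 255).clip(0, 255).astype("uint8"))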

In this image labels of a fixed size are added to an image to tell the model where the grape bunches are.
This image shows the result of a trained network automatically finding the areas in the image where the grape bunches are.

They used a convolutional neural network to tackle this problem. Rather than training a network from scratch and coming up with their own architecture, they used the ImageNet-winning Inception architecture and adapted it. This network was already trained to extract the features from an image that are required to classify 1,000 different classes of images. This technique is called transfer learning. The model works well, with 90% accuracy (on their validation data set).
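In Keras, transfer learning from an ImageNet-trained Inception backbone looks roughly like the sketch below. The binary grape-bunch output and the new head’s layer sizes are my assumptions; the team’s adapted architecture isn’t described in detail.

    import tensorflow as tf

    # Reuse the ImageNet-trained Inception feature extractor and train a new head.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
    base.trainable = False  # keep the pretrained features fixed at first

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # grape bunch present or not
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                  loss="binary_crossentropy", metrics=["accuracy"])

    # model.fit(train_patches, train_labels, validation_data=(val_patches, val_labels))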

However, part of the challenge here is that the images do not show all of the fruit that is on the plant. The only way to get the “ground truth” is to have someone go under the canopy and count all the fruit by hand. This is where the microwave technology comes into play.

The company is recognised internationally for their microwave technology in other projects. The way it works is a microwave transmitter emits microwaves and then detects the reflections. The microwaves will travel through leaves, but will be reflected by the water content in reasonably mature fruit.

The machine vision team is working to create a model that can use the microwave image and the photo together to get superior performance. This is a harder problem because this type of sensor fusion is less common than regular image processing.

The team is using the TensorFlow and Keras platforms on high-end machines with Nvidia Titan GPUs. There were a few raised eyebrows from the company when the team asked for what essentially looked like high-end gaming machines.

I applaud Lincoln Agritech for investing in their deep learning capability. The experience they have gained from their first projects will make it easier to apply the technology in each subsequent project. The fact that they have three people working on this provides redundancy and the ability to learn off each other. This is a model that other New Zealand organisations should consider, particularly if they’re having problems finding data scientists. Applying the latest AI technologies to agriculture seems like a real opportunity for New Zealand.

Cognitive modelling, self-aware robots, TensorFlow & adversarial attacks

 

This week I’ve been learning about cognitive modelling, self-aware robots, and adversarial attacks in reinforcement learning, and starting to play with TensorFlow.

Cognitive Modelling

The latest MIT AGI video was released a few days ago. In it Nate Derbinsky gives an overview of different types of cognitive architectures, including SPAUN, ACT-R, Sigma and Soar (his baby). This reminds me of old school AI: symbolic processing. My supervisor’s PURR-PUSS would belong in this category. These are a lot less sexy than deep learning, but in many ways they are complementary, with applications in robotics, game playing and natural language processing.

 

TWiML podcasts

SUM cognitive architecture

This week I listened to an interesting podcast with Raja Chatila on robot perception and discovery. In this Raja talked about the necessity of robot self-awareness for true intelligence and the ethics of intelligent autonomous systems. It’s interesting to see that the sort of architectures used for exploring artificial consciousness in robotics have a lot of overlap with the cognitive models described by Nate Derbinsky in the MIT AGI series.

I also had the chance to listen to Google Brainers Ian Goodfellow and Sandy Huang discussing adversarial attacks used against reinforcement learning. Adversarial attacks highlight some of the weaknesses of deep learning. When used for image classification, the image is just a series of numbers that has a set of mathematical operations performed on it to produce a classification. By subtly changing some of the numbers you can fool the classifier, even though to a human the image looks exactly the same. The example of a panda below was taken from a 2015 Google paper.
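The panda example comes from the fast gradient sign method (FGSM) described in that paper: take the gradient of the loss with respect to the input image and nudge every pixel a tiny step in the direction that increases the loss. A sketch using TensorFlow’s GradientTape is below; the epsilon value is arbitrary and the model is a placeholder for any image classifier.

    import tensorflow as tf

    def fgsm_perturb(model, images, labels, epsilon=0.007):
        # Nudge every pixel slightly in the direction that increases the loss,
        # producing images that look unchanged to a human but can fool the classifier.
        images = tf.convert_to_tensor(images)
        with tf.GradientTape() as tape:
            tape.watch(images)
            predictions = model(images)
            loss = tf.keras.losses.sparse_categorical_crossentropy(labels, predictions)
        gradients = tape.gradient(loss, images)
        return tf.clip_by_value(images + epsilon * tf.sign(gradients), 0.0, 1.0)

    # Usage, given a trained classifier and a batch of [0, 1] images:
    # adversarial = fgsm_perturb(model, panda_batch, panda_labels)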

panda

In the podcast Ian and Sandy discuss how this can be used against a reinforcement learning agent that has been trained to play computer games. Even changing one pixel can significantly degrade the performance.

Tensorflow

I’m up to the part in my CS231n course where you start to train CNNs using TensorFlow or PyTorch. Despite reading a compelling argument for using PyTorch over TensorFlow on Quora, the people I’ve spoken to locally are using TensorFlow – so I’m going to go with that. I found this introduction useful.

I managed to get the software installed and run Hello World. Apparently there is more you can do with it…
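For anyone curious, the canonical hello world at the time (TensorFlow 1.x) looked roughly like this: build a tiny graph, then run it in a session.

    import tensorflow as tf

    hello = tf.constant("Hello, TensorFlow!")
    a, b = tf.constant(3.0), tf.constant(4.0)

    with tf.Session() as sess:
        print(sess.run(hello))   # b'Hello, TensorFlow!'
        print(sess.run(a * b))   # 12.0 - the same session can evaluate real computations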

Tensorflow hello world

 

Orbica: Using machine vision in GIS

Last week I had the opportunity to sit down with Orbica CEO Kurt Janssen and data scientist Sagar Soni.

Kurt has worked in the Geographic Information Systems (GIS) industry for more than 14 years. Last year he started his own company, Orbica, which does GIS consulting for organisations in the public and private sectors. Orbica invests some of its consulting revenue into developing its own product. A major – and rewarding – investment has been hiring data scientist Sagar.

Sagar was taught machine learning during his master’s degree and had the opportunity to put it into practice, developing an earth-rock image classification system at Dharmsinh Desai University and using deep learning algorithms like recurrent neural networks to solve medical entity detection problems at US healthcare solutions provider ezDI. Last year he immigrated to NZ, and he had just the skills and experience Orbica was looking for.

Orbica’s first product automatically identifies buildings and waterways from aerial photos. This manually intensive job is traditionally done by geographers and cartographers who draw polygons on maps identifying these features using digitising techniques. The first product identifies buildings in urban areas. The 15-million-pixel (4800×3200) photos have each pixel covering a 7.5×7.5 cm square. Sagar has built a convolutional neural network that takes these photos and outputs the vectors representing the polygons where it believes the buildings are.

They have a good amount of training, test and validation data from Land Information New Zealand, consisting of the images and polygons that have been hand drawn. Because of the size of the images, Sagar has tiled them into 512×512 images. He built the model over a couple of months, with a little trial and error testing the various hyperparameters. The existing model has nine layers, with the standard 3×3 convolutions. He’s currently getting 90 per cent accuracy on the validation set.
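Tiling photos that large is straightforward but worth seeing; here is a small NumPy sketch (the zero-padding of edge tiles is my assumption – the post doesn’t say how the edges are handled):

    import numpy as np

    TILE = 512

    def tile_image(image, tile=TILE):
        # Cut a large aerial photo (H, W, 3) into non-overlapping tile x tile patches,
        # zero-padding the edges so every patch is full size.
        h, w = image.shape[:2]
        padded = np.pad(image, ((0, (-h) % tile), (0, (-w) % tile), (0, 0)), mode="constant")
        return np.array([padded[y:y + tile, x:x + tile]
                         for y in range(0, padded.shape[0], tile)
                         for x in range(0, padded.shape[1], tile)])

    # A 4800 x 3200 photo becomes a (70, 512, 512, 3) stack: 10 columns x 7 rows of tiles.
    photo = np.zeros((3200, 4800, 3), dtype=np.uint8)
    print(tile_image(photo).shape)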

Building outlines

RiverDetection_AI

The water classification is very similar, working with 96-million-pixel (12000×8000) images, but with lower-resolution 30×30 cm pixels. The output is the set of polygons representing the water in the aerial images, but the model also classifies the type of water body, e.g. a lake, lagoon, river, canal, etc.

The commercial benefits of these models are self-evident: Orbica can significantly improve the efficiency of producing this data, whether it does this for a client or sells it as a service to city and regional councils. These updates are done regularly – to identify buildings that have been added or removed, or to track how waterways have changed.

WaterBodiesClassification

Another opportunity has come from the Beyond Conventions pitch competition in Essen, Germany, where Orbica won the Thyssenkrupp Drone Analytics Challenge and the People’s Choice Award. Orbica’s pitch was to use machine vision to analyse drone footage of construction sites to automatically generate a progress update on the construction project. This is a more complex problem given its 3-dimensional nature. Thyssenkrupp has now resourced Orbica to put together a proof of concept, which Sagar is busy working on. Should this go well, Orbica will probably hire at least one other data scientist.

DroneImage_Output

Because the technology is developing quickly, Sagar keeps up to date with the latest developments in deep learning through Coursera and Udacity courses. He’s a fan of anything Andrew Ng produces.

To me, Orbica’s use of machine vision technology is an excellent case study for how New Zealand companies can use the latest advances in artificial intelligence. They have deep knowledge of their own vertical, in this case GIS. They develop an awareness of what AI technologies are capable of in general and have a vision for how those technologies could be used in their own industry. Finally, they make an investment to develop that vision. In Orbica’s case, the investment was reasonably modest: hiring Sagar. A recurring theme I’m seeing here is hiring skilled immigrants. New Zealand’s image as a desirable place to live – coupled with interesting work – will hopefully make this a win-win for all involved.

For those who would like to hear more, Kurt is speaking at AI Day in Auckland next week.


Robots opening doors, driverless cars and finding exoplanets

Here are some things I’ve been watching and listening to lately…

The latest video from Boston Dynamics is cool. They seem to be having a lot of fun.

I’m continuing to watch the MIT series on Artificial General Intelligence. They’re currently releasing one video a week. The latest is from Emilio Frazzoli on self driving cars. I’ve been enjoying this series.

I’m also listening to the TWiML interview with Chris Shallue about using Deep Learning to hunt for exoplanets. Also pretty cool. I thought Chris’s accent might have been kiwi – but nah he’s an Aussie.

exoplanets


Imagr: NZ’s version of Amazon Go

Last Friday during a visit to Auckland I took the opportunity to catch up with Will Chomley, CEO of Imagr. Will is an ambitious entrepreneur with a finance background. He has surrounded himself with a group of engineers skilled in deep learning. This start-up is focused on using machine vision to identify products as they are added to a shopping cart: NZ’s version of Amazon Go, but aimed at the cart rather than the whole store. Their product is called SMARTCART and integrates with your phone.

When I met Will they had just announced their first trial with Foodstuffs New Zealand in their Four Square Ellerslie store. They were busy building their training dataset, which at the time of writing contains over 2 million product photos for their machine vision models.

The 12-person company has a handful of machine vision engineers. A portion of these are immigrants because the skills are hard to find in New Zealand. Will is very enthusiastic about running the company from New Zealand because it is such a desirable place to live and it’s easy and quick to move here for people with the right qualifications.

The capabilities of their machine vision look impressive. They’re able to identify very similar looking packages with only subtle differences. I saw an example of two ELF mascara products that on first inspection looked identical, but on closer inspection one had a small brush. They’re also able to identify, with high accuracy, occluded products partially covered by a hand, and even high-speed objects being thrown into a basket that I couldn’t recognise myself; the blurred images are still recognised by their 200+ layer convolutional neural network.

They have designed the cart so the inference engine, which makes the decision about what is going into the basket, can either run on a computer on the basket, or the information can be sent to a server. To get the speed from this model they have developed their own software in C, rather than relying on easier-to-use but slower frameworks such as TensorFlow. This gives them the capability to identify and engineer around bottlenecks.

In parallel they’re working on the hardware, having recently decided to use smaller, cheaper, lower-resolution cameras with some added lighting. These can provide the same high accuracy rate as higher-resolution, expensive cameras. They have their challenges: designing and building hardware that can be manufactured at scale and operate reliably and easily is no mean feat. However, they have some engineering chops: their CTO Jimmy Young was Director of Engineering for PowerByProxi, the wireless charging company bought by Apple last year.

They have some backers with deep pockets to help resource these challenges. Last year they received an undisclosed investment from Sage Technologies Ltd, the technology venture of billionaire Harald McPike, founder of private investment company QuantRes.

There’s a large opportunity in front of them and they’re moving quickly. Adding smart carts has to be a lot cheaper than fitting out a store Amazon Go style. They may be able to get a piece of the global grocery market, grabbing a cut of the value of the cart, saving costs for the grocer, improving the experience for the shopper and opening up a world of possibility for the company once they are collecting the shopping data.

One of their challenges is to stay focused. There are so many applications of machine vision technology, even if they stick to retail. They’ve experimented with smart fridges that can identify the gender and age of people taking products, as well as knowing which products they take. They’re talking to others in the industry about applications for their technology.

If they can keep their focus and execute, they have a bright future ahead of them. Their product has global appeal. Retailers can see the threat of technology giants such as Amazon and Alibaba, who are moving quickly into bricks and mortar retail. This accelerated last year with Amazon’s purchase of Whole Foods. The convenience of online shopping is coming to a store near you, and Imagr may be the ones powering it.