Camera updates: new lens, AI on camera, and more.

I was recently asked what was new with the camera. Looking back at the work we've done, I thought it was worth sharing.

New lens design

We've redesigned the lens to make it more waterproof and easier to manufacture and maintain. In the first iteration of the cameras, the lens was simply a piece of thermally transparent material glued to the box. We found that in some of our older cameras this glue was deteriorating and compromising the waterproofing.

The new design uses rubber rings to seal against water, and the lens can easily be replaced. If you are concerned about your lens, please contact us and we will replace it at no cost.

AI on the camera

We have successfully run the machine recognition algorithms on the camera itself. This will be vital when the camera is controlling a trap, as it can then decide in real time whether a detected animal is a target pest. It will also be useful when the camera does not have a high-bandwidth connection and can't upload videos: it could instead use a low-bandwidth connection to notify us that a predator has been detected, and what type of predator it is.
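To make the decision flow concrete, here is a minimal sketch of how on-camera classification could gate a trap and fall back to a low-bandwidth alert. The class names, confidence threshold, and the trigger/upload/alert functions are all illustrative assumptions, not the project's actual code.

```python
# Hypothetical decision logic for a camera that classifies animals on-device.
# TARGET_PESTS, the 0.8 threshold, and the callbacks are assumed for illustration.
TARGET_PESTS = {"possum", "rat", "stoat"}

def handle_detection(label, confidence, has_broadband,
                     trigger_trap, upload_video, send_alert,
                     threshold=0.8):
    """Decide in real time what to do with a classified detection."""
    is_pest = label in TARGET_PESTS and confidence >= threshold
    if is_pest:
        trigger_trap()        # act immediately, without waiting for the server
    if has_broadband:
        upload_video()        # full video can go up for later review
    elif is_pest:
        send_alert(label)     # tiny message over a low-bandwidth link:
                              # just the predator type
    return is_pest
```

For example, a possum detected with 95% confidence on a camera with no broadband would fire the trap and send only a short "possum" alert, while a bird on a connected camera would simply have its video uploaded.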


Solar power

We've been experimenting with using solar panels to power the camera. Technically this is straightforward; we just need a large enough solar panel. We have also been seeking funding to help us design a lower-power version of the camera, which would reduce the cost and size of the solar panel required.


Power tagging

We've got a group of people tagging videos to grow the dataset we use for our machine learning models. To help with this we have created a power tagging user interface that makes it easier to tag multiple videos. It can be accessed by clicking the "Power Tagger" link in the top menu.
Power tagging interface.

Updated machine vision algorithms

We've got funding for a machine learning engineer to update and expand the model we use for machine vision. This will include training on all the latest data and adding cats, and possibly some other categories. Hopefully this will mean more accurate classification of the animals that are captured, along with a broader range of categories. We are also hoping that once this work is complete it will be easy for us to update the model more frequently.


Sidekick app

We've done a significant amount of work on the user interface of the Sidekick app, the companion Android app used to set up the camera and to pull videos from it when it doesn't have a connection. It has currently been released only as a beta and is being tested before it is made more widely available.

Counting visits

We have developed an algorithm to count visits rather than sightings: if an animal appears in two or more consecutive videos, they count as one visit for that animal. We found we were doing this manually when analysing the data from a camera, and the algorithm makes the process much easier. The visit reports can be accessed from the Visits link at the top of the Cacophony Portal.

The visit analysis is particularly useful when analysing a large set of data. For example, when we were testing the effectiveness of a trap over three months, there were about 4000 videos in total. Of these, about 380 were tagged as possum, but the visit analysis showed that there were actually only 60 distinct visits by possums (each visit had, on average, about 5 videos). The artificial intelligence that identifies the possums already makes the analysis much faster, and the visit analysis makes it faster again. There is also a summary table that shows how many visits there were from each predator over any particular time period.
