Open Access · Journal Article · DOI

ClassifyMe: A Field-Scouting Software for the Identification of Wildlife in Camera Trap Images

TLDR
The ClassifyMe software tool is designed to fill the gap in software capable of filtering out false triggers, automatically identifying detected animals and sorting imagery; it gives users the opportunity to utilise state-of-the-art image recognition algorithms without the need for specialised computer programming skills.
Abstract
We present ClassifyMe, a software tool for the automated identification of animal species from camera trap images. ClassifyMe is intended to be used by ecologists both in the field and in the office. Users can download a pre-trained model specific to their location of interest and then upload the images from a camera trap to a laptop or workstation. ClassifyMe will identify animals and other objects (e.g., vehicles) in images, provide a report file with the most likely species detections, and automatically sort the images into sub-folders corresponding to these species categories. False triggers (no visible object present) will also be filtered and sorted. Importantly, the ClassifyMe software operates on the user’s local machine (own laptop or workstation), not via an internet connection. This gives users access to state-of-the-art camera trap computer vision software in situ, rather than only in the office. The software also incurs minimal cost for the end-user, as there is no need for expensive data uploads to cloud services. Furthermore, processing the images locally on the user’s end-device gives users control of their data and resolves privacy issues surrounding transfer and third-party access to their datasets.
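The workflow described in the abstract (a local model classifies each image, images are copied into per-species sub-folders, and a report of detections is produced) can be sketched in a few lines of Python. This is a hedged illustration only: `sort_camera_trap_images`, the `classify` callback, and labels such as `"fox"` and `"false_trigger"` are hypothetical stand-ins, not ClassifyMe's actual API or model output.

```python
import shutil
from pathlib import Path

def sort_camera_trap_images(image_dir, dest_dir, classify):
    """Sort camera trap images into per-species sub-folders, in the spirit
    of ClassifyMe's local workflow. `classify` stands in for the downloaded
    pre-trained model: it takes an image path and returns a label such as
    'fox' or 'false_trigger' (both labels are hypothetical examples)."""
    report = []
    for image in sorted(Path(image_dir).glob("*.jpg")):
        label = classify(image)
        target = Path(dest_dir) / label          # one sub-folder per label
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy2(image, target / image.name)  # copy, don't move originals
        report.append((image.name, label))        # rows for the report file
    return report
```

Because everything runs through the local filesystem, no image ever leaves the machine, which mirrors the privacy and cost argument made in the abstract.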


Citations
Journal ArticleDOI

“How many images do I need?” Understanding how sample size per class affects deep learning model performance metrics for balanced designs in autonomous wildlife monitoring

TL;DR: Explores how deep learning model performance changes as the per-class (species) sample size increases, and shows that generalized additive models (GAMs) effectively model DL performance metrics as a function of the number of training images per class, the tuning scheme, and the dataset.
Journal ArticleDOI

Innovations in Camera Trapping Technology and Approaches: The Integration of Citizen Science and Artificial Intelligence.

TL;DR: More efforts to combine citizen science with AI are proposed to improve classification accuracy and efficiency while maintaining public involvement in camera trap research.
Journal ArticleDOI

Technological advances in biodiversity monitoring: applicability, opportunities and challenges

TL;DR: Technological solutions will still need to be complemented with traditional observer-based methods for the foreseeable future until the tools become cheap enough and easy enough for widespread use (especially in biodiversity-rich countries).
Journal ArticleDOI

Next-Generation Camera Trapping: Systematic Review of Historic Trends Suggests Keys to Expanded Research Applications in Ecology and Conservation

TL;DR: In this article, the authors reviewed 2,167 camera trap (CT) articles from 1994 to 2020 and assessed trends in: (1) CT adoption measured by published research output, (2) topic, taxonomic, and geographic diversification and composition of CT applications, and (3) sampling effort, spatial extent, and temporal duration of CT studies.
Journal ArticleDOI

Identification of animals and recognition of their actions in wildlife videos using deep learning techniques

TL;DR: In this paper, a proof-of-concept for an end-to-end pipeline to detect and classify animals and their behaviour in video clips is presented, showing an average precision of 63.8% for animal detection and identification.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
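The core idea summarised in this TL;DR, a shortcut connection so each block outputs F(x) + x and only has to learn the residual F, can be shown in a framework-free sketch. The toy "layer" below is purely illustrative; a real residual block would use convolutions and nonlinearities.

```python
def residual_block(x, transform):
    """Apply a learned transform F and add the identity shortcut:
    output = F(x) + x, so the block only learns the residual."""
    return [fx + xi for fx, xi in zip(transform(x), x)]

# Toy stand-in for a learned layer: scale each element by 2.
double = lambda v: [2.0 * e for e in v]
print(residual_block([1.0, 2.0], double))  # [3.0, 6.0]
```

The shortcut means that when F is near zero the block approximates the identity, which is what makes very deep networks trainable in practice.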
Journal ArticleDOI

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TL;DR: This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals, and further merges the RPN and Fast R-CNN into a single network by sharing their convolutional features.
Posted Content

Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

TL;DR: Faster R-CNN, as discussed by the authors, proposes a Region Proposal Network (RPN) to generate high-quality region proposals, which are then used by Fast R-CNN for detection.

Automatic differentiation in PyTorch

TL;DR: Describes the automatic differentiation module of PyTorch, a library designed to enable rapid research on machine learning models; it performs differentiation of purely imperative programs, with an emphasis on extensibility and low overhead.
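The define-by-run differentiation of imperative programs described here can be illustrated with a minimal reverse-mode sketch in plain Python. This mirrors the idea only; the `Var` class and its methods are hypothetical and are not PyTorch's actual API.

```python
class Var:
    """A scalar value that records how it was computed, so gradients can
    be propagated backwards through ordinary imperative code."""
    def __init__(self, value, parents=()):
        self.value = value
        self.grad = 0.0
        self.parents = parents  # pairs of (parent Var, local derivative)

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, g=1.0):
        # Accumulate this node's gradient, then push it to each parent
        # via the chain rule (local derivative times incoming gradient).
        self.grad += g
        for parent, local in self.parents:
            parent.backward(g * local)

x = Var(3.0)
y = Var(4.0)
z = x * y + x        # dz/dx = y + 1 = 5, dz/dy = x = 3
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Because the graph is built as the program runs, control flow (loops, branches) differentiates for free, which is the property the paper emphasises.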
Book ChapterDOI

SSD: Single Shot MultiBox Detector

TL;DR: SSD as mentioned in this paper discretizes the output space of bounding boxes into a set of default boxes over different aspect ratios and scales per feature map location, and combines predictions from multiple feature maps with different resolutions to naturally handle objects of various sizes.