David Vazquez

Researcher at James Cook University

Publications: 21
Citations: 407

David Vazquez is an academic researcher at James Cook University. He has contributed to research in the areas of segmentation and deep learning, has an h-index of 8, and has co-authored 21 publications receiving 171 citations.

Papers
Proceedings Article

A Weakly Supervised Consistency-based Learning Method for COVID-19 Segmentation in CT Images

TL;DR: Laradji et al. propose a consistency-based loss function that encourages output predictions to be consistent under spatial transformations of the input images, in order to segment COVID-19 infections in chest CT images.
Journal Article

A realistic fish-habitat dataset to evaluate algorithms for underwater visual analysis.

TL;DR: DeepFish is a large-scale dataset for underwater computer vision tasks, consisting of approximately 40,000 images collected underwater from 20 habitats in the marine environments of tropical Australia.
Proceedings Article

Where are the Masks: Instance Segmentation with Image-level Supervision

TL;DR: Proposes a novel framework for instance segmentation that trains effectively with image-level labels, which are significantly cheaper to acquire, and achieves new state-of-the-art results for this problem setup.
Journal Article

A Realistic Fish-Habitat Dataset to Evaluate Algorithms for Underwater Visual Analysis

TL;DR: This work presents DeepFish as a benchmark suite with a large-scale dataset for training and testing methods on several computer vision tasks; point-level and segmentation labels are collected to provide a more comprehensive fish-analysis benchmark.
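
Point-level labels of the kind mentioned above are typically consumed by evaluating a per-pixel classification loss only at the annotated locations. The snippet below is a minimal illustrative sketch, not code from the DeepFish benchmark: it assumes a segmentation network that outputs per-pixel class logits and point annotations encoded as a sparse label map, and the function name and encoding are assumptions made for illustration.

import torch.nn.functional as F

def point_level_loss(logits, point_labels, ignore_index=-1):
    # logits: (B, K, H, W) per-pixel class scores from the network.
    # point_labels: (B, H, W) with a class id at annotated point locations
    # and ignore_index everywhere else, so unlabeled pixels contribute
    # nothing to the loss.
    return F.cross_entropy(logits, point_labels, ignore_index=ignore_index)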
Posted Content

A Weakly Supervised Consistency-based Learning Method for COVID-19 Segmentation in CT Images

TL;DR: Proposes a consistency-based (CB) loss function that encourages output predictions to be consistent under spatial transformations of the input images; it yields significant improvements over conventional point-level loss functions and nearly matches the performance of fully supervised models with much less human effort.
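
The consistency idea described above can be sketched in a few lines. The following is an illustrative PyTorch sketch, not the authors' implementation: it assumes a segmentation model returning per-pixel class logits, uses a horizontal flip as the spatial transformation, and the function name and the mean-squared agreement term are assumptions for illustration. In the weakly supervised setting, such an unsupervised term would be combined with a loss on the sparse (e.g. point-level) annotations.

import torch
import torch.nn.functional as F

def consistency_loss(model, images):
    # images: (B, C, H, W). Apply a spatial transform (horizontal flip),
    # predict on both versions, undo the transform on the second
    # prediction, and penalize disagreement between the two outputs.
    flipped = torch.flip(images, dims=[-1])
    pred = model(images)                      # (B, K, H, W) logits
    pred_flipped = model(flipped)
    pred_aligned = torch.flip(pred_flipped, dims=[-1])
    p = F.softmax(pred, dim=1)
    q = F.softmax(pred_aligned, dim=1)
    return F.mse_loss(p, q)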