Journal ISSN: 2313-433X

Journal of Imaging 

Multidisciplinary Digital Publishing Institute
About: Journal of Imaging is an academic, open-access journal published by the Multidisciplinary Digital Publishing Institute. The journal publishes mainly in the areas of Medicine and Computer Science, and its ISSN is 2313-433X. Over its lifetime, the journal has published 1198 papers, which have received 11194 citations.

Papers
Journal Article DOI
TL;DR: A review of the current applications of explainable deep learning for different medical imaging tasks is presented in this paper, where various approaches, challenges for clinical deployment, and the areas requiring further research are discussed from a practical standpoint of a deep learning researcher designing a system for the clinical end-users.
Abstract: Deep learning methods have been very effective for a variety of medical diagnostic tasks and have even outperformed human experts on some of them. However, the black-box nature of the algorithms has restricted their clinical use. Recent explainability studies aim to show the features that influence the decision of a model the most. The majority of literature reviews of this area have focused on taxonomy, ethics, and the need for explanations. A review of the current applications of explainable deep learning for different medical imaging tasks is presented here. The various approaches, challenges for clinical deployment, and the areas requiring further research are discussed from the practical standpoint of a deep learning researcher designing a system for clinical end-users.

298 citations
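As a concrete illustration of the saliency-style explanations such reviews cover, the sketch below computes a Grad-CAM heatmap with PyTorch; the ResNet-18 backbone, the hooked layer, and the random input are illustrative assumptions, not the setup used in the paper.

```python
# Minimal Grad-CAM sketch (a saliency method commonly covered in such reviews).
# Assumptions: a torchvision ResNet-18 backbone with its last conv block as the
# explained layer; a real medical-imaging model would substitute its own network.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

feats = {}
model.layer4.register_forward_hook(lambda m, i, o: feats.update(value=o))

def grad_cam(image):
    """image: normalized tensor of shape (1, 3, H, W); returns a (1, 1, H, W) heatmap."""
    logits = model(image)
    score = logits[0, logits[0].argmax()]                  # explain the top predicted class
    grads = torch.autograd.grad(score, feats["value"])[0]  # d(score)/d(feature map)
    weights = grads.mean(dim=(2, 3), keepdim=True)         # global-average-pool the gradients
    cam = F.relu((weights * feats["value"]).sum(dim=1))    # weighted sum of channels
    cam = cam / (cam.max() + 1e-8)                         # normalize to [0, 1]
    return F.interpolate(cam.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False)

heatmap = grad_cam(torch.rand(1, 3, 224, 224))             # stand-in for a preprocessed image
```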

Journal Article DOI
TL;DR: This paper reviews the state-of-the-art deep learning based methods for video anomaly detection, categorizes them based on the type of model and criteria of detection, and provides the criteria of evaluation for spatio-temporal anomaly detection.
Abstract: Videos represent the primary source of information for surveillance applications. Video material is often available in large quantities but in most cases it contains little or no annotation for supervised learning. This article reviews the state-of-the-art deep learning based methods for video anomaly detection and categorizes them based on the type of model and criteria of detection. We also perform simple studies to understand the different approaches and provide the criteria of evaluation for spatio-temporal anomaly detection.

287 citations
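To make one of the surveyed categories concrete, the sketch below scores frames by the reconstruction error of a small convolutional autoencoder, the idea behind reconstruction-based detectors: the model is trained on normal footage only, so poorly reconstructed frames score as anomalous. The architecture, frame size, and thresholding strategy are illustrative assumptions, not taken from the article.

```python
# Sketch of reconstruction-based video anomaly detection: an autoencoder trained
# on normal frames assigns high reconstruction error to unfamiliar (anomalous) ones.
# Layer sizes and the 64x64 grayscale frames are illustrative assumptions.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model, frames):
    """Per-frame mean squared reconstruction error; higher means more anomalous."""
    with torch.no_grad():
        recon = model(frames)
    return ((frames - recon) ** 2).mean(dim=(1, 2, 3))

# Usage: after training on normal footage, flag frames whose score exceeds a
# threshold chosen on a validation set.
scores = anomaly_scores(FrameAutoencoder(), torch.rand(8, 1, 64, 64))
```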

Journal Article DOI
TL;DR: This paper reviews the literature on hand gesture techniques, introduces their merits and limitations under different circumstances, and tabulates the performance of these methods, focusing on computer vision techniques and their points of similarity and difference.
Abstract: Hand gestures are a form of nonverbal communication that can be used in several fields such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research papers based on hand gestures have adopted many different techniques, including those based on instrumented sensor technology and computer vision. In other words, the hand sign can be classified under many headings, such as posture and gesture, as well as dynamic and static, or a hybrid of the two. This paper focuses on a review of the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques that deal with the similarity and difference points, technique of hand segmentation used, classification algorithms and drawbacks, number and types of gestures, dataset used, detection range (distance) and type of camera used. This paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.

232 citations
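For the vision-based branch the survey tabulates, the sketch below shows a classic hand segmentation step: skin-colour thresholding followed by contour and convex-hull extraction in OpenCV. The HSV thresholds and the largest-contour heuristic are rough illustrative assumptions, not values from the paper.

```python
# Classic vision-based hand segmentation sketch: HSV skin-colour thresholding,
# morphological cleanup, then contour/convex-hull extraction for a downstream
# static-gesture classifier. Thresholds are rough illustrative values.
import cv2
import numpy as np

def segment_hand(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    lower, upper = np.array([0, 30, 60]), np.array([20, 150, 255])
    mask = cv2.inRange(hsv, lower, upper)                          # crude skin mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)                      # assume the largest blob is the hand
    return mask, cv2.convexHull(hand)                              # hull feeds a gesture classifier
```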

Journal Article DOI
TL;DR: The present review is aimed both at domain professionals who want an updated overview of how hyperspectral acquisition techniques can be combined with deep learning architectures to solve specific tasks in different application fields, and at machine learning and computer vision experts, giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective.
Abstract: Modern hyperspectral imaging systems produce huge datasets potentially conveying a great abundance of information; such a resource, however, poses many challenges in the analysis and interpretation of these data. Deep learning approaches certainly offer a great variety of opportunities for solving classical imaging tasks and also for approaching new stimulating problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and evolving application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want to have an updated overview on how hyperspectral acquisition techniques can combine with deep learning architectures to solve specific tasks in different application fields. On the other hand, we want to target the machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.

184 citations
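As an example of the spectral-spatial architectures such reviews survey, the sketch below defines a small 3D-CNN patch classifier in PyTorch; the band count, patch size, and number of classes are illustrative assumptions rather than a setup taken from the paper.

```python
# Sketch of a spectral-spatial 3D-CNN for hyperspectral patch classification.
# Each sample is a small spatial patch carrying its full spectral depth; the
# 100 bands, 9x9 patch, and 10 classes below are illustrative assumptions.
import torch
import torch.nn as nn

class HSI3DCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3), padding=(2, 1, 1)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # pool over spectral and spatial dims
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                                # x: (batch, 1, bands, height, width)
        return self.classifier(self.features(x).flatten(1))

model = HSI3DCNN()
logits = model(torch.rand(4, 1, 100, 9, 9))              # -> (4, 10) class scores
```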

Journal Article DOI
TL;DR: Using a convolutional neural network implemented in the “YOLO” (“You Only Look Once”) platform, objects can be tracked, detected, and classified from video feeds supplied by UAVs in real-time.
Abstract: There are numerous applications of unmanned aerial vehicles (UAVs) in the management of civil infrastructure assets. A few examples include routine bridge inspections, disaster management, power line surveillance and traffic surveying. As UAV applications become widespread, increased levels of autonomy and independent decision-making are necessary to improve the safety, efficiency, and accuracy of the devices. This paper details the procedure and parameters used for the training of convolutional neural networks (CNNs) on a set of aerial images for efficient and automated object recognition. Potential application areas in the transportation field are also highlighted. The accuracy and reliability of CNNs depend on the network’s training and the selection of operational parameters. This paper details the CNN training procedure and parameter selection. The object recognition results show that by selecting a proper set of parameters, a CNN can detect and classify objects with a high level of accuracy (97.5%) and computational efficiency. Furthermore, using a convolutional neural network implemented in the “YOLO” (“You Only Look Once”) platform, objects can be tracked, detected (“seen”), and classified (“comprehended”) from video feeds supplied by UAVs in real-time.

147 citations
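The abstract's real-time detection claim maps onto the kind of loop sketched below, which runs a pretrained YOLO detector over a video file frame by frame. The ultralytics package, the yolov8n weights, and the file name are assumptions for illustration; they are not the YOLO version, training data, or parameters reported by the authors.

```python
# Sketch of YOLO-style detection on a UAV video feed (illustrative setup, not
# the authors' trained network). Assumes the ultralytics package and a generic
# pretrained model; a hypothetical file name stands in for the UAV footage.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                  # small pretrained detector

# stream=True yields results one frame at a time instead of loading the whole video.
for result in model.predict(source="uav_flight.mp4", stream=True):
    for box in result.boxes:
        label = model.names[int(box.cls)]
        confidence = float(box.conf)
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(f"{label} {confidence:.2f} at ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```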

Performance Metrics

No. of papers from the Journal in previous years

Year    Papers
2023    136
2022    340
2021    245
2020    145
2019    82
2018    145