Journal ISSN: 2624-8212

Frontiers in Artificial Intelligence

Frontiers Media
About: Frontiers in Artificial Intelligence is an open-access academic journal published by Frontiers Media. It publishes mainly in the areas of computer science and medicine, and its ISSN is 2624-8212. Over its lifetime, the journal has published 388 papers, which have received 724 citations.

Papers published on a yearly basis

Papers
Journal Article (DOI)
TL;DR: An overview of COVID-19 detection using deep learning methods and their cost-effectiveness and financial implications from the perspective of insurance claim settlement is presented.
Abstract: Image-based diagnostic techniques play an essential screening role during pandemics and compare favorably with the principal radiological methods for recognizing and diagnosing COVID-19 cases. The deep learning paradigm has been applied extensively to investigate radiographic images such as chest X-rays (CXR) and CT scans. These radiographic images are rich in information, such as patterns and cluster-like structures, that supports the confirmation and detection of COVID-19 and similar pandemic diseases. This paper comprehensively studies and analyzes deep-learning-based detection methodology for COVID-19 diagnosis. Deep learning is a practical and affordable modality that can be deemed a reliable technique for adequately diagnosing the COVID-19 virus. Furthermore, the research examines the potential of artificial intelligence to enhance image quality and identifies the least expensive and most trustworthy imaging method for anticipating such dreadful viruses. The paper further discusses the cost-effectiveness of the surveyed COVID-19 detection methods in contrast with other methods, together with several finance-related aspects of their effectiveness. Overall, this study presents an overview of COVID-19 detection using deep learning methods and their cost-effectiveness and financial implications from the perspective of insurance claim settlement.
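The pipelines surveyed typically feed a CXR or CT image to a convolutional network that outputs a COVID-19 / non-COVID-19 decision. As a rough illustration only, and not any specific surveyed model, a minimal PyTorch sketch of such a classifier is shown below; the architecture, the grayscale 224x224 input size, and the two-class setup are assumptions made for demonstration.

```python
# Minimal sketch (not the surveyed authors' method): a small convolutional
# classifier for COVID-19 vs. normal chest X-rays. Architecture, input size,
# and class count are illustrative assumptions.
import torch
import torch.nn as nn

class CXRClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),            # global average pooling
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)                    # (N, 64, 1, 1)
        return self.classifier(x.flatten(1))    # (N, num_classes) logits

model = CXRClassifier()
dummy_batch = torch.randn(4, 1, 224, 224)       # 4 grayscale 224x224 X-rays
print(model(dummy_batch).shape)                 # torch.Size([4, 2])
```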

18 citations

Journal Article (DOI)
TL;DR: A conceptual map of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones is drawn, highlighting similarities and differences between the approaches.
Abstract: Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities as a way to improve model transparency and allow users to form a mental model of a trained ML model. However, explanations can go beyond this one-way communication and act as a mechanism to elicit user control, because once users understand, they can provide feedback. The goal of this paper is to present an overview of research where explanations are combined with interactive capabilities as a means to learn new models from scratch and to edit and debug existing ones. To this end, we draw a conceptual map of the state of the art, grouping relevant approaches based on their intended purpose and on how they structure the interaction, and highlighting similarities and differences between them. We also discuss open research issues and outline possible directions forward, with the hope of spurring further research on this blooming topic.
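One interaction pattern in this space is explanation-based debugging: the model's explanation is shown to a user, who flags a spurious feature, and the model is re-fit accordingly. The scikit-learn sketch below is a hedged toy illustration of that loop, not a method from the surveyed paper; the feature names and synthetic data are invented.

```python
# Toy sketch of explanation-driven model debugging: coefficients act as the
# explanation, simulated user feedback removes a spurious feature, and the
# model is retrained. Data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # columns: [age, dose, patient_id]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # label depends only on age, dose
X[:, 2] = y + rng.normal(scale=0.1, size=200)    # patient_id leaks the label (spurious)

model = LogisticRegression().fit(X, y)
print("explanation (coefficients):",
      dict(zip(["age", "dose", "patient_id"], model.coef_[0].round(2))))

# Simulated user feedback: "patient_id should not matter" -> drop it and retrain.
kept = [0, 1]
debugged = LogisticRegression().fit(X[:, kept], y)
print("after feedback:", dict(zip(["age", "dose"], debugged.coef_[0].round(2))))
```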

18 citations

Journal Article (DOI)
TL;DR: In almost all studies, transfer learning contributed to better performance in the diagnosis, classification, and segmentation of different neuroimaging diseases and problems than methods without transfer learning.
Abstract: Deep learning algorithms have been moderately successful in diagnosing diseases by analyzing medical images, especially through neuroimaging, which is rich in annotated data. Transfer learning methods have demonstrated strong performance when annotated data are limited: they utilize and transfer knowledge learned from a source domain to a target domain even when the target dataset is small. There are multiple approaches to transfer learning, which result in a range of performance estimates in the diagnosis, detection, and classification of clinical problems. Therefore, in this paper, we review transfer learning approaches, their design attributes, and their applications to neuroimaging problems. We reviewed two main literature databases and included the most relevant studies using predefined inclusion criteria. Among the 50 reviewed studies, more than half are on transfer learning for Alzheimer's disease; brain mapping and brain tumor detection were the second and third most discussed research problems, respectively. The most common source dataset for transfer learning was ImageNet, which is not a neuroimaging dataset, suggesting that the majority of studies preferred pre-trained models over training their own model on a neuroimaging dataset. Although about one-third of the studies designed their own architecture, most used existing convolutional neural network architectures. Magnetic resonance imaging was the most common imaging modality. In almost all studies, transfer learning contributed to better performance in the diagnosis, classification, and segmentation of different neuroimaging diseases and problems than methods without transfer learning. Among the different transfer learning approaches, fine-tuning all convolutional and fully-connected layers, and freezing the convolutional layers while fine-tuning the fully-connected layers, demonstrated superior performance in terms of accuracy. These recent transfer learning approaches not only show great performance but also require fewer computational resources and less time.
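The two strategies the review highlights, freezing the convolutional backbone versus fine-tuning everything, can be sketched with a torchvision ResNet-18 pre-trained on ImageNet, the most common source dataset noted above. This is a minimal illustration under assumptions (torchvision >= 0.13 for the weights argument, and a hypothetical 3-class target task), not the exact setups of the reviewed studies.

```python
# Hedged sketch of two transfer learning strategies from the review:
#  1) freeze convolutional layers, fine-tune only a new classification head
#  2) fine-tune all convolutional and fully-connected layers
# The 3-class target task is a placeholder; weights enum needs torchvision >= 0.13.
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int = 3, freeze_backbone: bool = True):
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    if freeze_backbone:
        # Strategy 1: freeze the pre-trained convolutional layers.
        for param in model.parameters():
            param.requires_grad = False
    # Replace the ImageNet classifier with a head for the target task;
    # the new head is trainable under both strategies.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

frozen = build_model(freeze_backbone=True)     # tune only the new head
full   = build_model(freeze_backbone=False)    # Strategy 2: tune all layers
```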

15 citations

Journal Article (DOI)
TL;DR: A new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese is presented, and details about data selection and annotation procedures are provided, to enable compilation of comparable datasets in other languages and domains.
Abstract: Even in highly developed countries, as many as 15–30% of the population can only understand texts written using a basic vocabulary. Their understanding of everyday texts is limited, which prevents them from taking an active role in society and making informed decisions regarding healthcare, legal representation, or democratic choice. Lexical simplification is a natural language processing task that aims to make text understandable to everyone by replacing complex vocabulary and expressions with simpler ones, while preserving the original meaning. It has attracted considerable attention in the last 20 years, and fully automatic lexical simplification systems have been proposed for various languages. The main obstacle to the progress of the field is the absence of high-quality datasets for building and evaluating lexical simplification systems. In this study, we present a new benchmark dataset for lexical simplification in English, Spanish, and (Brazilian) Portuguese, and provide details about the data selection and annotation procedures, to enable the compilation of comparable datasets in other languages and domains. As the first multilingual lexical simplification dataset in which instances in all three languages were selected and annotated using comparable procedures, it is the first to offer a direct comparison of lexical simplification systems across three languages. To showcase the usability of the dataset, we adapt two state-of-the-art lexical simplification systems with differing architectures (neural vs. non-neural) to all three languages and evaluate their performance on the new dataset. For a fairer comparison, we use several evaluation measures that capture varied aspects of the systems' efficacy, and we discuss their strengths and weaknesses. We find that the state-of-the-art neural lexical simplification system outperforms the state-of-the-art non-neural system in all three languages, according to all evaluation measures. More importantly, we find that the state-of-the-art neural lexical simplification systems perform significantly better for English than for Spanish and Portuguese, thus raising the question of whether such an architecture can be used for successful lexical simplification in other languages, especially low-resourced ones.
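To make the task concrete, a non-neural lexical simplification pipeline boils down to identifying complex words and replacing them with simpler synonyms. The Python sketch below is a deliberately naive toy, not either system evaluated in the paper: the word lists and substitution lexicon are hand-made assumptions, whereas real systems use large frequency lists, embeddings, and context-aware ranking.

```python
# Toy sketch of a non-neural lexical simplification pipeline: flag words not
# on a "simple" list and substitute them from a hand-made lexicon. Both word
# lists are invented for illustration only.
SIMPLE_WORDS = {"use", "help", "start", "a", "the", "to", "we", "will", "software", "learning"}
SUBSTITUTIONS = {"utilize": "use", "facilitate": "help", "commence": "start"}

def simplify(sentence: str) -> str:
    out = []
    for token in sentence.lower().split():
        word = token.strip(".,;")
        if word not in SIMPLE_WORDS and word in SUBSTITUTIONS:
            token = token.replace(word, SUBSTITUTIONS[word])
        out.append(token)
    return " ".join(out)

print(simplify("We will utilize the software to facilitate learning."))
# -> "we will use the software to help learning."
# Naive by design: no part-of-speech checks, inflection handling, or context ranking.
```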

15 citations

Journal Article (DOI)
TL;DR: An inference-optimized AI ensemble that identifies all known binary black hole mergers previously identified in this advanced LIGO dataset and reports no misclassifications, while also providing a 3X inference speedup compared to traditional artificial intelligence models is introduced.
Abstract: We introduce an ensemble of artificial intelligence models for gravitational wave detection that we trained on the Summit supercomputer using 32 nodes, equivalent to 192 NVIDIA V100 GPUs, within 2 h. Once fully trained, we optimized these models for accelerated inference using NVIDIA TensorRT. We deployed our inference-optimized AI ensemble on the ThetaGPU supercomputer at the Argonne Leadership Computing Facility to conduct distributed inference. Using the entire ThetaGPU supercomputer, consisting of 20 nodes, each with 8 NVIDIA A100 Tensor Core GPUs and 2 AMD Rome CPUs, our NVIDIA TensorRT-optimized AI ensemble processed an entire month of advanced LIGO data (including Hanford and Livingston data streams) within 50 s. Our inference-optimized AI ensemble retains the same sensitivity as traditional AI models: it identifies all known binary black hole mergers previously identified in this advanced LIGO dataset and reports no misclassifications, while also providing a 3X inference speedup compared to traditional artificial intelligence models. We used time slides to quantify the performance of our AI ensemble on up to 5 years' worth of advanced LIGO data. In this synthetically enhanced dataset, our AI ensemble reports an average of one misclassification for every month of searched advanced LIGO data. We also present the receiver operating characteristic curve of our AI ensemble using this 5-year-long advanced LIGO dataset. This approach provides the required tools to conduct accelerated, AI-driven gravitational wave detection at scale.
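The time-slide technique mentioned in the abstract estimates the background (false-alarm) rate by shifting one detector's data relative to the other by non-physical offsets, so that any surviving coincidences must be noise. The numpy sketch below illustrates the idea only; the toy detection scores, threshold, and number of slides are invented and unrelated to the paper's actual pipeline.

```python
# Rough illustration of time slides for background estimation in a two-detector
# coincidence search. All numbers here are invented toy values.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000                                    # per-segment detection scores
hanford = rng.random(n)
livingston = rng.random(n)
threshold = 0.9995

def coincidences(a, b):
    # Count segments where both detectors exceed the detection threshold.
    return int(np.sum((a > threshold) & (b > threshold)))

zero_lag = coincidences(hanford, livingston)   # candidate events (signal + noise)
background = [coincidences(hanford, np.roll(livingston, shift))
              for shift in range(1000, 51000, 1000)]   # 50 non-physical time slides

print("zero-lag coincidences:", zero_lag)
print("mean background coincidences per slide:", np.mean(background))
```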

13 citations

Performance
Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    123
2022    286