Author

Nazim Shaikh

Bio: Nazim Shaikh is an academic researcher from the University of Southern California. The author has contributed to research in the topics of statistical classification and artificial intelligence. The author has an h-index of 1 and has co-authored 3 publications receiving 28 citations.

Papers
Journal ArticleDOI
TL;DR: A novel loss function is proposed that gives rise to Outlier Exposure with Confidence Control (OECC), a method that achieves superior results in out-of-distribution detection with Outlier Exposure (OE) on both image and text classification tasks without requiring access to OOD samples.

67 citations
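The OECC entry above centers on a loss function built on Outlier Exposure (OE). The exact OECC terms are not reproduced in this listing, so the following is a minimal sketch of the generic OE-style objective it extends: standard cross-entropy on in-distribution data plus a term pushing predictions on auxiliary outliers toward the uniform distribution. The tensor names and the weight lambda_oe are illustrative assumptions, not the paper's implementation.

```python
import torch.nn.functional as F

def oe_style_loss(logits_in, labels_in, logits_out, lambda_oe=0.5):
    """Generic Outlier Exposure-style objective (illustrative sketch).

    logits_in  : logits for in-distribution training samples
    labels_in  : ground-truth labels for those samples
    logits_out : logits for auxiliary outlier samples
    lambda_oe  : weight of the outlier term (hypothetical value)
    """
    # Standard classification loss on in-distribution data.
    ce_loss = F.cross_entropy(logits_in, labels_in)

    # Push predictions on outliers toward the uniform distribution:
    # the mean negative log-softmax over batch and classes equals the
    # per-sample cross-entropy to a uniform target, averaged over the batch.
    uniform_ce = -F.log_softmax(logits_out, dim=1).mean()

    return ce_loss + lambda_oe * uniform_ce
```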

Journal ArticleDOI
TL;DR: This study examines how Artificial Intelligence can be used to diagnose skin cancer using the support vector machine, the most prevalently used classification technique.
Abstract: Skin cancer is caused by factors such as an unhealthy lifestyle, air pollution, and UV radiation. In recent years, various automated machine learning solutions have been developed to detect such cancers before any major aggravation has taken place. This paper reviews ways to detect the disease and raise an alert before it becomes serious. The goal of this study is to examine how Artificial Intelligence can be used to diagnose skin cancer. With the use of Artificial Intelligence, people can learn what skin illness they have and what safeguards and steps they should take at an early stage, allowing them to treat the disease successfully. Machine learning is used to determine the ailment and assist in detecting the outcome, with the support vector machine being the most prevalently used classification technique. The findings of this study will aid doctors in treating the disease at its onset, preventing further deterioration.

1 citation
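As a rough illustration of the SVM-based classification pipeline this abstract describes, the sketch below fits a support vector machine on flattened lesion feature vectors with scikit-learn. The load_lesion_dataset helper, the placeholder features, and the RBF-kernel settings are hypothetical stand-ins, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

def load_lesion_dataset():
    """Hypothetical loader: returns (n_samples, n_features) image features
    and binary labels (0 = benign, 1 = malignant)."""
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))      # placeholder feature vectors
    y = rng.integers(0, 2, size=200)    # placeholder labels
    return X, y

X, y = load_lesion_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Scale features, then fit an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```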

Posted Content
TL;DR: This paper reviews some of the most seminal recent algorithms in the OOD detection field, divides them into training and post-training methods, and experimentally shows how combining the former with the latter can achieve state-of-the-art results in the OOD detection task.
Abstract: Deep neural networks are known to achieve superior results in classification tasks. However, it has recently been shown that they are incapable of detecting examples generated by a distribution different from the one they were trained on, since they make overconfident predictions for Out-of-Distribution (OOD) examples. OOD detection has attracted a lot of attention recently. In this paper, we review some of the most seminal recent algorithms in the OOD detection field, divide those methods into training and post-training approaches, and experimentally show how the combination of the former with the latter can achieve state-of-the-art results in the OOD detection task.
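This review distinguishes training-time from post-training methods. As a concrete instance of the post-training side, the sketch below computes the maximum softmax probability (MSP) of an already-trained classifier and thresholds it, a standard baseline in this literature; the model interface and the threshold value are assumptions for illustration, not the specific combination evaluated in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_score(model, x):
    """Maximum softmax probability: higher means 'more in-distribution'."""
    logits = model(x)
    return F.softmax(logits, dim=1).max(dim=1).values

def flag_ood(model, x, threshold=0.9):
    """Flag inputs whose MSP falls below an (illustrative) threshold."""
    return msp_score(model, x) < threshold
```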

Cited by
Posted Content
TL;DR: This paper proposes a group-based OOD detection framework, along with a novel OOD scoring function termed MOS, to decompose the large semantic space into smaller groups with similar concepts, which simplifies the decision boundaries between in- and out-of-distribution data.
Abstract: Detecting out-of-distribution (OOD) inputs is a central challenge for safely deploying machine learning models in the real world. Existing solutions are mainly driven by small datasets, with low resolution and very few class labels (e.g., CIFAR). As a result, OOD detection for large-scale image classification tasks remains largely unexplored. In this paper, we bridge this critical gap by proposing a group-based OOD detection framework, along with a novel OOD scoring function termed MOS. Our key idea is to decompose the large semantic space into smaller groups with similar concepts, which allows simplifying the decision boundaries between in- vs. out-of-distribution data for effective OOD detection. Our method scales substantially better for high-dimensional class space than previous approaches. We evaluate models trained on ImageNet against four carefully curated OOD datasets, spanning diverse semantics. MOS establishes state-of-the-art performance, reducing the average FPR95 by 14.33% while achieving 6x speedup in inference compared to the previous best method.

81 citations
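Based on this abstract's description, MOS splits the label space into concept groups and scores an input by how strongly every group rejects it. The sketch below mimics that idea with a per-group softmax that includes an extra "others" category, scoring an input by the smallest "others" probability across groups; the group layout and exact scoring rule are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def mos_style_score(group_logits):
    """group_logits: list of tensors, one per group, each of shape
    (batch, n_classes_in_group + 1); the last column is the 'others' class.

    Returns a score where higher means 'more likely OOD': even the most
    confident group still assigns its input to 'others'."""
    others_probs = []
    for logits in group_logits:
        probs = F.softmax(logits, dim=1)
        others_probs.append(probs[:, -1])          # P('others' | group)
    others = torch.stack(others_probs, dim=1)      # (batch, n_groups)
    # MOS-style score: the smallest 'others' probability across groups.
    return others.min(dim=1).values
```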

Book ChapterDOI
Jiefeng Chen, Yixuan Li, Xi Wu, Yingyu Liang, Somesh Jha
13 Sep 2021
TL;DR: ATOM (Adversarial Training with informative Outlier Mining) improves the robustness of OOD detection by mining informative auxiliary OOD data and, somewhat surprisingly, generalizes to unseen adversarial attacks, achieving state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks.
Abstract: Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in an open-world setting. However, existing OOD detection solutions can be brittle in the open world, facing various types of adversarial OOD inputs. While methods leveraging auxiliary OOD data have emerged, our analysis on illuminative examples reveals a key insight that the majority of auxiliary OOD examples may not meaningfully improve or even hurt the decision boundary of the OOD detector, which is also observed in empirical results on real data. In this paper, we provide a theoretically motivated method, Adversarial Training with informative Outlier Mining (ATOM), which improves the robustness of OOD detection. We show that, by mining informative auxiliary OOD data, one can significantly improve OOD detection performance, and somewhat surprisingly, generalize to unseen adversarial attacks. ATOM achieves state-of-the-art performance under a broad family of classic and adversarial OOD evaluation tasks. For example, on the CIFAR-10 in-distribution dataset, ATOM reduces the FPR (at TPR 95%) by up to 57.99% under adversarial OOD inputs, surpassing the previous best baseline by a large margin.

53 citations
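As a sketch of the "informative outlier mining" idea in the ATOM entry, the code below scores a large auxiliary outlier pool with the current detector and keeps only the hardest examples, those the model is least sure are outliers, for the next training round. The scoring convention and the selection fraction are illustrative assumptions rather than the paper's exact procedure.

```python
import torch

@torch.no_grad()
def mine_informative_outliers(ood_scores, keep_fraction=0.1):
    """ood_scores: per-sample OOD scores on the auxiliary pool, where a
    LOWER score means the detector currently thinks the sample looks
    in-distribution (i.e., the sample is 'hard' / informative).

    Returns indices of the hardest keep_fraction of the pool."""
    n_keep = max(1, int(keep_fraction * ood_scores.numel()))
    # Hardest outliers = lowest OOD scores (the detector is fooled by them).
    return torch.topk(ood_scores, n_keep, largest=False).indices
```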

Posted Content
TL;DR: This paper presents a method for determining whether inputs are OOD that requires no OOD data during training and does not increase the computational cost of inference, a property that is especially important in automotive applications with limited computational resources and real-time constraints.
Abstract: Neural networks (NNs) are widely used for object classification in autonomous driving. However, NNs can fail on input data not well represented by the training dataset, known as out-of-distribution (OOD) data. A mechanism to detect OOD samples is important for safety-critical applications, such as automotive perception, to trigger a safe fallback mode. NNs often rely on softmax normalization for confidence estimation, which can lead to high confidences being assigned to OOD samples, thus hindering the detection of failures. This paper presents a method for determining whether inputs are OOD, which does not require OOD data during training and does not increase the computational cost of inference. The latter property is especially important in automotive applications with limited computational resources and real-time constraints. Our proposed approach outperforms state-of-the-art methods on real-world automotive datasets.

25 citations
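One point this abstract makes is that softmax normalization can assign high confidence to OOD inputs. The tiny example below shows why: softmax only sees relative differences between logits, so a vector with strong evidence and one with uniformly weak evidence can produce the same maximum softmax probability. The numbers are made up purely to illustrate the effect.

```python
import torch
import torch.nn.functional as F

# Logits for a confidently recognized in-distribution input (large magnitudes).
in_dist_logits = torch.tensor([10.0, 5.0, 5.0])
# Logits for an input the network has little evidence about (small magnitudes).
ood_like_logits = torch.tensor([1.0, -4.0, -4.0])

# The logit gaps are identical, so both yield ~0.99 max probability,
# which is why softmax confidence alone can hide OOD failures.
print(F.softmax(in_dist_logits, dim=0).max())   # ~0.987
print(F.softmax(ood_like_logits, dim=0).max())  # ~0.987
```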

Proceedings ArticleDOI
01 Jun 2021
TL;DR: The authors decompose the large semantic space into smaller groups with similar concepts, which allows simplifying the decision boundaries between in- vs. out-of-distribution data for effective OOD detection.
Abstract: Detecting out-of-distribution (OOD) inputs is a central challenge for safely deploying machine learning models in the real world. Existing solutions are mainly driven by small datasets, with low resolution and very few class labels (e.g., CIFAR). As a result, OOD detection for large-scale image classification tasks remains largely unexplored. In this paper, we bridge this critical gap by proposing a group-based OOD detection framework, along with a novel OOD scoring function termed MOS. Our key idea is to decompose the large semantic space into smaller groups with similar concepts, which allows simplifying the decision boundaries between in- vs. out-of-distribution data for effective OOD detection. Our method scales substantially better for high-dimensional class space than previous approaches. We evaluate models trained on ImageNet against four carefully curated OOD datasets, spanning diverse semantics. MOS establishes state-of-the-art performance, reducing the average FPR95 by 14.33% while achieving 6x speedup in inference compared to the previous best method.

23 citations

Journal ArticleDOI
TL;DR: Zhang et al. propose a deep adversarial anomaly detection (DAAD) method in which an auxiliary task with self-supervised learning is first designed to learn task-specific features.

22 citations
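The DAAD entry mentions an auxiliary self-supervised task used to learn task-specific features, but the paper's actual task is not given in this listing. The sketch below uses rotation prediction, a common self-supervised auxiliary objective, purely as an illustration of how such a head can be attached to a feature extractor; the backbone and input shape are hypothetical.

```python
import torch
import torch.nn as nn

class BackboneWithAuxHead(nn.Module):
    """Feature extractor with an auxiliary rotation-prediction head
    (illustrative choice of self-supervised task, not DAAD's actual one)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32 * 3, feat_dim), nn.ReLU()
        )
        self.rot_head = nn.Linear(feat_dim, 4)   # predict 0/90/180/270 degrees

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.rot_head(feats)

def rotation_batch(x):
    """Create rotated copies of x with rotation labels 0..3."""
    rots = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))
    return torch.cat(rots, dim=0), labels

# Usage sketch: the auxiliary loss shapes the features alongside the main task.
model = BackboneWithAuxHead()
x = torch.randn(8, 3, 32, 32)
x_rot, y_rot = rotation_batch(x)
_, rot_logits = model(x_rot)
aux_loss = nn.functional.cross_entropy(rot_logits, y_rot)
```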