Open Access · Posted Content

Ablate, Variate, and Contemplate: Visual Analytics for Discovering Neural Architectures.

TLDR
In this article, the authors present Rapid Exploration of Model Architectures and Parameters (REMAP), a visual analytics tool that lets a model builder quickly discover a deep learning model through exploration and rapid experimentation with neural network architectures.
Abstract
Deep learning models require the configuration of many layers and parameters in order to get good results. However, there are currently few systematic guidelines for how to configure a successful model. This means model builders often have to experiment with different configurations by manually programming different architectures (which is tedious and time-consuming) or rely on purely automated approaches to generate and train the architectures (which is expensive). In this paper, we present Rapid Exploration of Model Architectures and Parameters, or REMAP, a visual analytics tool that allows a model builder to discover a deep learning model quickly via exploration and rapid experimentation of neural network architectures. In REMAP, the user explores the large and complex parameter space for neural network architectures using a combination of global inspection and local experimentation. Through a visual overview of a set of models, the user identifies interesting clusters of architectures. Based on their findings, the user can run ablation and variation experiments to identify the effects of adding, removing, or replacing layers in a given architecture and generate new models accordingly. They can also handcraft new models using a simple graphical interface. As a result, a model builder can build deep learning models quickly, efficiently, and without manual programming. We inform the design of REMAP through a design study with four deep learning model builders. Through a use case, we demonstrate that REMAP allows users to discover performant neural network architectures efficiently using visual exploration and user-defined semi-automated searches through the model space.
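The ablation and variation workflow described in the abstract can be pictured with a small code sketch. The paper's implementation of REMAP is not included on this page, so the following is only a minimal illustration, assuming a Keras-style list of layer factories as the architecture representation; the base architecture and the helper names (build, ablations, variations) are hypothetical.

```python
# Minimal sketch of architecture ablation/variation, assuming a Keras-style
# representation; this is illustrative, not REMAP's actual implementation.
import tensorflow as tf
from tensorflow.keras import layers

# Base architecture as a list of layer factories, so every variant is rebuilt
# with freshly initialized layers (layer choices are illustrative).
BASE = [
    lambda: layers.Conv2D(32, 3, activation="relu"),
    lambda: layers.MaxPooling2D(),
    lambda: layers.Conv2D(64, 3, activation="relu"),
    lambda: layers.Flatten(),
    lambda: layers.Dense(10, activation="softmax"),
]

def build(factories, input_shape=(28, 28, 1)):
    """Assemble and compile a model from a list of layer factories."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for make_layer in factories:
        x = make_layer()(x)
    model = tf.keras.Model(inputs, x)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def ablations(factories):
    """Ablation: drop one layer at a time (the final classifier is kept)."""
    for i in range(len(factories) - 1):
        yield factories[:i] + factories[i + 1:]

def variations(factories, index, replacement_factory):
    """Variation: replace the layer at `index` with an alternative layer."""
    return factories[:index] + [replacement_factory] + factories[index + 1:]

# Enumerate ablation variants; shape-incompatible architectures are skipped.
for variant in ablations(BASE):
    try:
        model = build(variant)
        print(f"{len(variant)} layers, {model.count_params():,} parameters")
    except Exception:
        pass  # removing a layer can make the remaining stack invalid

# Variation example: swap the second convolution for a wider one.
wider = variations(BASE, 2, lambda: layers.Conv2D(128, 3, activation="relu"))
build(wider).summary()
```

Each generated variant would then be trained and compared against the original, mirroring the "add, remove, or replace a layer and observe the effect" loop that the abstract describes.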


Citations
Journal Article

A survey of visual analytics techniques for machine learning

TL;DR: A taxonomy of visual analytics techniques for machine learning is built, comprising three first-level categories: techniques before model building, techniques during model building, and techniques after model building.
Journal Article

The State of the Art in Enhancing Trust in Machine Learning Models with the Use of Visualizations

TL;DR: This survey is intended for visualization researchers interested in making ML models more trustworthy, as well as for researchers and practitioners from other disciplines seeking effective visualization techniques to solve their tasks with confidence and convey meaning from their data.
Journal Article

Multi-view deep learning for zero-day Android malware detection

TL;DR: This work presents a novel multi-view deep learning Android malware detector that requires no specialist malware domain insight to select, rank, or hand-craft input features; instead, knowledge of malicious characteristics is learned and encapsulated within the deep neural network itself.
Journal Article

PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines

TL;DR: PipelineProfiler is an interactive visualization tool for exploring and comparing the solution space of machine learning pipelines produced by AutoML systems, giving users a better understanding of the algorithms that generated the pipelines as well as insights into how they can be improved.
Proceedings Article

Symphony: Composing Interactive Interfaces for Machine Learning

TL;DR: Symphony is a framework for composing interactive ML interfaces from task-specific, data-driven components that can be used across platforms such as computational notebooks and web dashboards.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the resulting residual networks won first place in the ILSVRC 2015 classification task.
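As a rough illustration of the residual idea summarized in this entry, a basic identity-shortcut block might look like the sketch below. This is an approximation only: batch normalization and projection shortcuts from the original paper are omitted, and the block assumes the input already has the target number of channels.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    """Identity-shortcut residual block: the stacked convolutions learn a
    residual F(x) and the block outputs F(x) + x, which eases optimization
    of very deep stacks. Batch normalization and projection shortcuts are
    omitted; `x` must already have `filters` channels for the addition."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

# Example: stack a few blocks on a 64-channel feature map.
inputs = tf.keras.Input(shape=(32, 32, 3))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
for _ in range(3):
    x = residual_block(x, 64)
model = tf.keras.Model(inputs, x)
```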
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: The authors achieve state-of-the-art ImageNet classification performance with a deep convolutional neural network consisting of five convolutional layers, some followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
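The layer layout summarized in this entry can be sketched roughly as follows. This is an approximation of the AlexNet-style design: the 227x227 input, filter counts, and strides follow the original paper, while dropout and local response normalization are omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Approximate AlexNet-style stack: five convolutional layers, interleaved
# max-pooling, and three fully-connected layers ending in a 1000-way softmax.
alexnet_like = models.Sequential([
    tf.keras.Input(shape=(227, 227, 3)),
    layers.Conv2D(96, 11, strides=4, activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Conv2D(256, 5, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(384, 3, padding="same", activation="relu"),
    layers.Conv2D(256, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=3, strides=2),
    layers.Flatten(),
    layers.Dense(4096, activation="relu"),
    layers.Dense(4096, activation="relu"),
    layers.Dense(1000, activation="softmax"),
])
alexnet_like.summary()
```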
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: The authors investigate the effect of convolutional network depth on accuracy in the large-scale image recognition setting and show that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 layers.
Proceedings Article

Going deeper with convolutions

TL;DR: This paper proposes Inception, a deep convolutional neural network architecture that achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).