Open Access Proceedings Article

Multimodal Deep Learning

TLDR
This work presents a series of tasks for multimodal learning and shows how to train deep networks that learn features to address them; it also demonstrates cross-modality feature learning, in which better features for one modality can be learned when multiple modalities are present at feature-learning time.
Abstract
Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images, or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross-modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice versa. Our models are validated on the CUAVE and AVLetters audio-visual speech classification datasets, achieving the best published visual speech classification results on AVLetters and effective shared representation learning.
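The shared-representation idea in the abstract can be sketched as a tiny bimodal autoencoder: both modalities feed a single hidden layer, and that shared code must reconstruct both inputs. The following is a minimal NumPy sketch with hypothetical toy dimensions, not the paper's actual deep architecture or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy dimensions (hypothetical; real inputs would be audio/video features)
d_audio, d_video, d_shared = 8, 6, 4

# Shared-representation encoder: both modalities feed one hidden layer
Wa = rng.normal(0, 0.1, (d_shared, d_audio))
Wv = rng.normal(0, 0.1, (d_shared, d_video))
b  = np.zeros(d_shared)
# Linear decoders reconstruct each modality from the shared code
Ua = rng.normal(0, 0.1, (d_audio, d_shared))
Uv = rng.normal(0, 0.1, (d_video, d_shared))
ba = np.zeros(d_audio)
bv = np.zeros(d_video)

def encode(a, v):
    return sigmoid(Wa @ a + Wv @ v + b)

def step(a, v, lr=0.1):
    """One gradient step on the squared reconstruction error of BOTH modalities."""
    global Wa, Wv, b, Ua, Uv, ba, bv
    h = encode(a, v)
    a_hat = Ua @ h + ba
    v_hat = Uv @ h + bv
    ea, ev = a_hat - a, v_hat - v                 # per-modality reconstruction errors
    # Backprop through the linear decoders and the sigmoid encoder
    dh = (Ua.T @ ea + Uv.T @ ev) * h * (1 - h)
    Ua -= lr * np.outer(ea, h); ba -= lr * ea
    Uv -= lr * np.outer(ev, h); bv -= lr * ev
    Wa -= lr * np.outer(dh, a)
    Wv -= lr * np.outer(dh, v)
    b  -= lr * dh
    return 0.5 * (ea @ ea + ev @ ev)

# Fit one audio/video pair; the joint reconstruction error should fall
a = rng.random(d_audio)
v = rng.random(d_video)
losses = [step(a, v) for _ in range(500)]
```

Because the hidden code is shared, `encode` produces a representation informed by both modalities; the paper's cross-modality evaluation then probes how well such a code transfers when only one modality is available at test time.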



Citations
Proceedings Article

A Multimodal Approach to Predict Social Media Popularity

TL;DR: A multimodal approach that exploits visual, textual, and social features to predict the popularity of social media photos in terms of view counts, achieving performance comparable to the state of the art.
Book Chapter

Concatenated Frame Image Based CNN for Visual Speech Recognition

TL;DR: Proposes a novel sequence image representation method called concatenated frame image (CFI), two types of data augmentation for CFI, and a CFI-based convolutional neural network (CNN) framework for the visual speech recognition (VSR) task.

An Overview of Deep-Structured Learning for Information Processing

TL;DR: This paper develops a classificatory scheme to analyze and summarize major work reported in the deep learning literature, provides a taxonomy-oriented survey of existing deep architectures, and categorizes them into three types: generative, discriminative, and hybrid.
Proceedings Article

Deep learning of tissue fate features in acute ischemic stroke

TL;DR: Constructs a deep learning model of tissue fate based on randomly sampled local patches from the hypoperfusion feature observed in MRI immediately after symptom onset; results show the superiority of the proposed regional learning framework over a single-voxel-based regression model.
Book Chapter

Robust deep learning for improved classification of AD/MCI patients

TL;DR: Presents a robust deep learning system to identify different progression stages of AD patients based on MRI and PET scans; the dropout technique is used to improve on classical deep learning by preventing weight co-adaptation, a typical cause of over-fitting.
References
Proceedings Article

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
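The core of the HOG descriptor summarized above is a per-cell histogram of gradient orientations weighted by gradient magnitude. A simplified NumPy sketch of that step follows; it omits the block normalization and interpolation used in the full descriptor, and the cell size and bin count are the paper's typical defaults rather than requirements:

```python
import numpy as np

def hog_cells(img, cell=8, nbins=9):
    """Magnitude-weighted orientation histograms over non-overlapping cells."""
    # Centered finite-difference gradients (borders left at zero)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    H, W = img.shape
    hist = np.zeros((H // cell, W // cell, nbins))
    binw = 180.0 / nbins
    for i in range(H // cell):
        for j in range(W // cell):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            idx = (a // binw).astype(int) % nbins
            for k in range(nbins):                  # magnitude-weighted voting
                hist[i, j, k] = m[idx == k].sum()
    return hist

# A vertical edge produces purely horizontal gradients (orientation bin 0)
img = np.zeros((16, 16))
img[:, 8:] = 1.0
h = hog_cells(img)
```

In the full pipeline, cells are grouped into overlapping blocks and each block's histograms are contrast-normalized before concatenation into the final feature vector.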
Journal Article

Reducing the Dimensionality of Data with Neural Networks

TL;DR: Describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
Journal Article

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
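The greedy layer-wise idea can be sketched as: train a restricted Boltzmann machine (RBM) on the data, then train a second RBM on the first layer's hidden activations, and so on. Below is a minimal NumPy sketch using 1-step contrastive divergence (CD-1), with hypothetical layer sizes; the full algorithm in the paper also covers the top-level associative memory and fine-tuning:

```python
import numpy as np

rng = np.random.default_rng(1)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=50, lr=0.1):
    """Train a binary RBM with 1-step contrastive divergence (CD-1)."""
    n_vis = data.shape[1]
    W = rng.normal(0, 0.01, (n_vis, n_hidden))
    a = np.zeros(n_vis)      # visible biases
    b = np.zeros(n_hidden)   # hidden biases
    for _ in range(epochs):
        v0 = data
        ph0 = sig(v0 @ W + b)                       # P(h=1 | v0)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
        v1 = sig(h0 @ W.T + a)                      # mean-field reconstruction
        ph1 = sig(v1 @ W + b)
        # CD-1 updates: positive minus negative phase statistics
        W += lr * (v0.T @ ph0 - v1.T @ ph1) / len(data)
        a += lr * (v0 - v1).mean(axis=0)
        b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

# Greedy layer-wise stacking: the second RBM trains on the first layer's activations
X = (rng.random((64, 12)) < 0.3).astype(float)      # toy binary data
W1, a1, b1 = train_rbm(X, 8)
H1 = sig(X @ W1 + b1)
W2, a2, b2 = train_rbm(H1, 4)
```

Each layer is trained in isolation with the layers below frozen, which is what makes the procedure fast and greedy; the multimodal paper builds on this style of unsupervised pretraining.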
Proceedings Article

Extracting and composing robust features with denoising autoencoders

TL;DR: This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
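The denoising principle summarized above is: corrupt the input, but train the autoencoder to reconstruct the clean original. A minimal NumPy sketch with masking noise and hypothetical toy sizes (not the paper's exact architecture or loss) follows:

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda x: 1.0 / (1.0 + np.exp(-x))

d_in, d_hid = 10, 6
W = rng.normal(0, 0.1, (d_hid, d_in)); b = np.zeros(d_hid)   # encoder
U = rng.normal(0, 0.1, (d_in, d_hid)); c = np.zeros(d_in)    # linear decoder

x = rng.random(d_in)                       # one clean training vector
losses = []
for _ in range(500):
    # Masking noise: randomly zero ~30% of input entries
    x_tilde = x * (rng.random(d_in) > 0.3)
    h = sig(W @ x_tilde + b)               # encode the CORRUPTED input
    x_hat = U @ h + c
    e = x_hat - x                          # error against the CLEAN input
    # One SGD step on the squared denoising-reconstruction error
    dh = (U.T @ e) * h * (1 - h)
    U -= 0.1 * np.outer(e, h); c -= 0.1 * e
    W -= 0.1 * np.outer(dh, x_tilde); b -= 0.1 * dh
    losses.append(0.5 * e @ e)
```

Because the target is the uncorrupted input, the learned features must capture dependencies among input dimensions rather than simply copying them, which is what makes the representation robust.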
Journal Article

Hearing lips and seeing voices

TL;DR: The study demonstrates a previously unrecognised influence of vision upon speech perception: when shown a film of a young woman's talking head in which repeated utterances of the syllable [ba] had been dubbed onto lip movements for [ga], subjects reported hearing a third syllable, [da].