Open Access Proceedings Article

Multimodal Deep Learning

TLDR
This work presents a series of tasks for multimodal learning and shows how to train deep networks that learn features to address these tasks, and demonstrates cross modality feature learning, where better features for one modality can be learned if multiple modalities are present at feature learning time.
Abstract
Deep networks have been successfully applied to unsupervised feature learning for single modalities (e.g., text, images or audio). In this work, we propose a novel application of deep networks to learn features over multiple modalities. We present a series of tasks for multimodal learning and show how to train deep networks that learn features to address these tasks. In particular, we demonstrate cross modality feature learning, where better features for one modality (e.g., video) can be learned if multiple modalities (e.g., audio and video) are present at feature learning time. Furthermore, we show how to learn a shared representation between modalities and evaluate it on a unique task, where the classifier is trained with audio-only data but tested with video-only data and vice-versa. Our models are validated on the CUAVE and AVLetters datasets on audio-visual speech classification, demonstrating best published visual speech classification on AVLetters and effective shared representation learning.
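To make the shared-representation idea concrete, here is a minimal PyTorch sketch of a bimodal autoencoder that encodes audio and video features into one shared code and reconstructs both modalities from it. This is an illustration only, not the paper's architecture (which builds on restricted Boltzmann machines and deep autoencoders); the layer sizes, feature dimensions (audio_dim, video_dim), and mean-squared-error loss are assumptions.

```python
import torch
import torch.nn as nn

class BimodalAutoencoder(nn.Module):
    """Minimal bimodal autoencoder: both modalities are encoded into a
    shared representation, which is then used to reconstruct both.
    Layer sizes are illustrative, not those of the original paper."""
    def __init__(self, audio_dim=100, video_dim=300, shared_dim=128):
        super().__init__()
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, 256), nn.ReLU())
        self.video_enc = nn.Sequential(nn.Linear(video_dim, 256), nn.ReLU())
        self.shared = nn.Sequential(nn.Linear(512, shared_dim), nn.ReLU())
        self.audio_dec = nn.Linear(shared_dim, audio_dim)
        self.video_dec = nn.Linear(shared_dim, video_dim)

    def forward(self, audio, video):
        h = self.shared(torch.cat([self.audio_enc(audio),
                                   self.video_enc(video)], dim=1))
        return self.audio_dec(h), self.video_dec(h)

model = BimodalAutoencoder()
audio = torch.randn(8, 100)   # toy batch of audio features
video = torch.randn(8, 300)   # toy batch of video (lip-region) features
audio_hat, video_hat = model(audio, video)
loss = (nn.functional.mse_loss(audio_hat, audio)
        + nn.functional.mse_loss(video_hat, video))
loss.backward()
```

Cross-modality and shared-representation training in the paper's spirit can be approximated by occasionally zeroing out one modality's input while still requiring reconstruction of both, so that the shared code remains informative when only one modality is observed at test time.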



Citations
Journal Article

Cross-Modal Retrieval via Deep and Bidirectional Representation Learning

TL;DR: A deep and bidirectional representation learning model is proposed to address image-text cross-modal retrieval; results show that the proposed architecture is effective and that the learned representations have good semantics, achieving superior cross-modal retrieval performance.
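The general recipe behind cross-modal retrieval models of this kind is to project both modalities into a common space and rank by similarity. The sketch below is a hypothetical illustration of that recipe, with random projection matrices (W_img, W_txt) standing in for learned deep encoders; it is not the architecture from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-learned projections into a shared 64-d space.
W_img = rng.normal(size=(4096, 64))   # image features -> shared space
W_txt = rng.normal(size=(300, 64))    # text features  -> shared space

def embed(x, W):
    z = x @ W
    return z / np.linalg.norm(z, axis=1, keepdims=True)  # unit-normalize

images = embed(rng.normal(size=(1000, 4096)), W_img)  # image gallery
query = embed(rng.normal(size=(1, 300)), W_txt)       # one text query

# Retrieve images by cosine similarity to the text query.
ranking = np.argsort(-(images @ query.T).ravel())
print(ranking[:5])
```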
Journal Article

Applications of artificial intelligence for disaster management

TL;DR: It is found that the majority of AI applications focus on the disaster response phase, and challenges are identified to inspire the professional community to advance AI techniques for addressing them in future research.
Proceedings Article

Deep multimodal hashing with orthogonal regularization

TL;DR: This paper proposes a novel deep multimodal hashing method, Deep Multimodal Hashing with Orthogonal Regularization (DMHOR), which fully exploits intra-modality and inter-modality correlations, and finds that a better representation can be attained with different numbers of layers for different modalities, due to their different complexities.
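As a rough illustration of the two ingredients named in the title, the sketch below binarizes a hypothetical fused multimodal representation into hash bits and adds an orthogonality penalty on the projection weights so the bits are decorrelated. The dimensions, the tanh/sign binarization, and the penalty form are assumptions, not DMHOR's exact formulation.

```python
import torch
import torch.nn as nn

# Hypothetical shared multimodal representation (e.g., from a fused encoder).
h = torch.randn(32, 128)

proj = nn.Linear(128, 48, bias=False)        # 48-bit hash projection
codes = torch.sign(torch.tanh(proj(h)))      # binary codes in {-1, +1}

# Orthogonal regularization: encourage W W^T ~ I so hash bits are decorrelated.
W = proj.weight                              # shape (48, 128)
ortho_penalty = ((W @ W.t() - torch.eye(48)) ** 2).sum()
print(codes.shape, ortho_penalty.item())
```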
Journal Article

Multimodal Deep Learning for Music Genre Classification

TL;DR: An approach to learning and combining multimodal data representations for music genre classification is proposed; a further proposed approach for dimensionality reduction of the target labels yields major improvements in multi-label classification.
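The label-compression idea can be illustrated generically: compress a large multi-label target space to a few components, fit a predictor against the compressed labels, and decode predictions back. The sketch below uses PCA and ridge regression as stand-ins; the cited paper's actual reduction technique and model are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))                  # fused multimodal features (toy)
Y = (rng.random(size=(500, 250)) < 0.02) * 1.0   # sparse multi-label targets (toy)

pca = PCA(n_components=30).fit(Y)                # compress the label space
Z = pca.transform(Y)

reg = Ridge().fit(X, Z)                          # predict compressed labels
Y_hat = pca.inverse_transform(reg.predict(X)) > 0.5  # decode back to labels
print(Y_hat.shape)
```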
Posted Content

Learning Multiple Tasks with Deep Relationship Networks

TL;DR: This work proposes a novel Deep Relationship Network (DRN) architecture for multi-task learning that discovers correlated tasks based on multiple task-specific layers of a deep convolutional neural network, yielding state-of-the-art classification results on standard multi-domain object recognition datasets.
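A minimal sketch of the underlying multi-task setup, a shared feature extractor with one head per task, is shown below; DRN additionally models relationships between the task-specific layers, which this simplification omits. All dimensions and the two-task configuration are assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Generic hard-parameter-sharing setup: a shared feature extractor
    with one classification head per task (a simplification of DRN)."""
    def __init__(self, in_dim=512, n_classes_per_task=(10, 5)):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(256, c) for c in n_classes_per_task])

    def forward(self, x):
        h = self.backbone(x)
        return [head(h) for head in self.heads]

net = MultiTaskNet()
logits_task0, logits_task1 = net(torch.randn(4, 512))
print(logits_task0.shape, logits_task1.shape)
```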
References
Proceedings Article

Histograms of oriented gradients for human detection

TL;DR: It is shown experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection, and the influence of each stage of the computation on performance is studied.
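For a concrete sense of what a HOG descriptor is, the sketch below computes one with scikit-image's hog function on a toy 128×64 detection window, using 9 orientations, 8×8-pixel cells, and 2×2-cell blocks; these settings follow the commonly used HOG recipe and are stated here as assumptions rather than taken verbatim from the cited paper.

```python
import numpy as np
from skimage.feature import hog

image = np.random.rand(128, 64)  # toy grayscale image, detection-window sized

features = hog(image,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2-Hys')
print(features.shape)  # one flat descriptor vector for the window
```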
Journal Article

Reducing the Dimensionality of Data with Neural Networks

TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
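A minimal sketch of the dimensionality-reduction setup: a deep autoencoder compresses inputs through a low-dimensional bottleneck code and is trained to reconstruct them. The cited article's key contribution, layer-wise pretraining to initialize the weights, is omitted here; the layer sizes and 30-dimensional code are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Small autoencoder with a low-dimensional bottleneck (the "code").
# The cited paper's point is that layer-wise pretraining makes such deep
# autoencoders trainable; that pretraining step is omitted in this sketch.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 30))
decoder = nn.Sequential(nn.Linear(30, 256), nn.ReLU(), nn.Linear(256, 784))

x = torch.rand(16, 784)                  # toy batch of flattened images
x_hat = decoder(encoder(x))
loss = nn.functional.mse_loss(x_hat, x)
loss.backward()
print(encoder(x).shape)                  # 30-d codes, cf. PCA components
```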
Journal Article

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
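The greedy layer-wise idea can be sketched as: train one RBM on the data, use its hidden activations as input to the next RBM, and repeat. The code below uses scikit-learn's BernoulliRBM as a stand-in trainer on toy binary data; layer sizes and training settings are assumptions, and the top-level associative memory and fine-tuning stages of the cited algorithm are not shown.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random(size=(200, 784)) < 0.3).astype(float)  # toy binary data

# Greedy layer-wise stacking: train one RBM, feed its hidden activations
# to the next RBM, and so on.
layer_sizes = [256, 64]
layers, H = [], X
for n_hidden in layer_sizes:
    rbm = BernoulliRBM(n_components=n_hidden, n_iter=5, random_state=0)
    H = rbm.fit_transform(H)
    layers.append(rbm)

print(H.shape)  # top-layer representation
```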
Proceedings Article

Extracting and composing robust features with denoising autoencoders

TL;DR: This work introduces and motivates a new training principle for unsupervised learning of a representation, based on the idea of making the learned representations robust to partial corruption of the input pattern.
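A minimal sketch of the denoising principle: corrupt the input (here by randomly zeroing entries), then train the autoencoder to reconstruct the clean, uncorrupted input. The corruption rate, masking-style noise, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
decoder = nn.Linear(128, 784)

x = torch.rand(16, 784)                        # clean toy inputs
mask = (torch.rand_like(x) > 0.3).float()      # drop ~30% of the entries
x_noisy = x * mask                             # partially corrupted input

x_hat = decoder(encoder(x_noisy))
loss = nn.functional.mse_loss(x_hat, x)        # reconstruct the *clean* input
loss.backward()
print(loss.item())
```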
Journal Article

Hearing lips and seeing voices

TL;DR: The study reported here demonstrates a previously unrecognised influence of vision upon speech perception: when shown a film of a young woman's talking head in which repeated utterances of the syllable [ba] had been dubbed onto lip movements for [ga], normal adults reported hearing [da].