Open Access · Journal Article · DOI

Toward Causal Representation Learning

TL;DR: The authors review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research.
Abstract
The two fields of machine learning and graphical causality arose and have developed separately. However, there is now cross-pollination and increasing interest in each field to benefit from the advances of the other. In this article, we review fundamental concepts of causal inference and relate them to crucial open problems of machine learning, including transfer and generalization, thereby assaying how causality can contribute to modern machine learning research. This also applies in the opposite direction: we note that most work in causality starts from the premise that the causal variables are given. A central problem for AI and causality is, thus, causal representation learning, that is, the discovery of high-level causal variables from low-level observations. Finally, we delineate some implications of causality for machine learning and propose key research areas at the intersection of both communities.
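The distinction the abstract draws between standard statistical learning and causal modeling can be made concrete with a toy structural causal model. The sketch below is purely illustrative (the variables, the linear mechanism, and the numbers are assumptions, not taken from the article): intervening on an effect variable via do(Y = y) severs its incoming edge, so the cause X is unaffected, even though X and Y are correlated observationally.

```python
import random

random.seed(0)

def sample_scm(do_y=None, n=10000):
    """Sample from a toy SCM: X := N_x;  Y := 2*X + N_y.
    Passing do_y simulates the intervention do(Y = do_y),
    which cuts the edge X -> Y in the causal graph."""
    xs, ys = [], []
    for _ in range(n):
        x = random.gauss(0, 1)
        y = do_y if do_y is not None else 2 * x + random.gauss(0, 1)
        xs.append(x)
        ys.append(y)
    return xs, ys

def mean(v):
    return sum(v) / len(v)

# Observationally, X and Y covary strongly (Cov(X, Y) = 2 here)...
xs_obs, ys_obs = sample_scm()
cov = sum((x - mean(xs_obs)) * (y - mean(ys_obs))
          for x, y in zip(xs_obs, ys_obs)) / len(xs_obs)

# ...but intervening on the effect Y leaves the cause X untouched:
xs_do, _ = sample_scm(do_y=5.0)
print(round(mean(xs_do), 2))  # stays near 0: do(Y=5) does not move X
```

This asymmetry under intervention is exactly what purely observational (i.i.d.) machine learning cannot capture, and it is why recovering high-level causal variables from low-level observations matters for transfer and generalization.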



Citations
Journal ArticleDOI

Text Data Augmentation for Deep Learning.

TL;DR: This article surveys data augmentation for text data, summarizing its major motifs as strengthening local decision boundaries, brute-force training, causality and counterfactual examples, and the distinction between meaning and form.
Journal ArticleDOI

Scientific Machine Learning Through Physics–Informed Neural Networks: Where we are and What’s Next

TL;DR: This article comprehensively reviews the literature on physics-informed neural networks, characterizing these networks and their advantages and disadvantages, and incorporating publications on a broader range of collocation-based physics-informed neural networks.
Journal ArticleDOI

Domain Generalization: A Survey

TL;DR: Domain generalization (DG) aims to achieve out-of-distribution (OOD) generalization using only source data for model learning, a capability natural to humans yet challenging for machines to reproduce.
Proceedings ArticleDOI

SAVi++: Towards End-to-End Object-Centric Learning from Real-World Videos

TL;DR: SAVi++ is introduced, an object-centric video model trained to predict depth signals from a slot-based video representation; using sparse depth signals obtained from LiDAR, it learns emergent object segmentation and tracking from real-world videos in the Waymo Open dataset.
Journal ArticleDOI

An overview of artificial intelligence techniques for diagnosis of Schizophrenia based on magnetic resonance imaging modalities: Methods, challenges, and future works

TL;DR: This article presents a comprehensive overview of studies on the automated diagnosis of schizophrenia (SZ) using MRI modalities, including an AI-based computer-aided diagnosis system (CADS) for SZ diagnosis and its relevant sections.
References
Proceedings ArticleDOI

Deep Residual Learning for Image Recognition

TL;DR: The authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; their model won 1st place in the ILSVRC 2015 classification task.
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network, consisting of five convolutional layers (some followed by max-pooling layers) and three fully-connected layers with a final 1000-way softmax, achieved state-of-the-art performance on ImageNet classification.
Proceedings ArticleDOI

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called "ImageNet" is introduced: a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than existing image datasets.
Journal ArticleDOI

Deep learning

TL;DR: Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.
Proceedings ArticleDOI

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; it can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.