Open Access

On a Few Developments in Convex Analysis (Convex Analysisの二,三の進展について)

Toru Maruyama (丸山 徹)
Vol. 70, Iss. 1, pp. 97–119
About
The article was published on 1977-02-01 and is currently open access. It has received 5,933 citations to date.


Citations
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it is used in many applications, including natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal ArticleDOI

Increasing Returns and Long-Run Growth

TL;DR: In this paper, the authors present a fully specified model of long-run growth in which knowledge is an input to production with increasing marginal productivity; the result is essentially a competitive equilibrium model with endogenous technological change.
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
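As a rough illustration of the method named in this summary, here is a minimal sketch of scaled-form ADMM applied to the lasso problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, a standard example of a distributed-friendly convex problem; the data A, b and the penalty lam are hypothetical placeholders, and the updates follow the generic textbook iterations rather than any code from the cited monograph.

import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=100):
    """Scaled-form ADMM for the lasso: min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)                      # scaled dual variable
    Atb = A.T @ b
    # Cache the Cholesky factor used by every x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding, the prox of the l1 norm
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + x - z
    return z

# Hypothetical usage on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.1)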

Book

Pattern Recognition and Machine Learning

TL;DR: Probability distributions and linear models for regression and classification are covered, along with a discussion of combining models, in the context of machine learning.
Journal ArticleDOI

An Algorithm for Vector Quantizer Design

TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
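For orientation only, the sketch below implements a Lloyd-style codebook design loop on a training sequence, alternating the nearest-neighbor and centroid conditions that underlie vector quantizer design; the splitting initialization and distortion-threshold stopping rule of the cited algorithm are not reproduced, and the training data, codebook size and iteration count are hypothetical.

import numpy as np

def design_codebook(train, k, n_iter=50, seed=0):
    """Lloyd-style vector quantizer design from a training sequence.

    train: (N, d) array of training vectors; k: codebook size.
    Returns a (k, d) codebook that locally minimizes mean squared distortion.
    """
    rng = np.random.default_rng(seed)
    codebook = train[rng.choice(len(train), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Nearest-neighbor condition: assign each vector to its closest codeword.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Centroid condition: move each codeword to the mean of its cell.
        for j in range(k):
            cell = train[assign == j]
            if len(cell) > 0:
                codebook[j] = cell.mean(axis=0)
    return codebook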
References
Journal Article

Optimal distributed online prediction using mini-batches

TL;DR: This work presents the distributed mini-batch algorithm, a method for converting many serial gradient-based online prediction algorithms into distributed algorithms; the method is asymptotically optimal for smooth convex loss functions and stochastic inputs, and a regret bound is proved for it.
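A minimal serial simulation of the mini-batch idea behind this summary is sketched below: gradients on a batch of examples are averaged before each update, which is what lets a serial gradient-based method be evaluated in parallel across workers; grad_fn, the step size and the data stream are hypothetical, and no actual communication layer or regret analysis is shown.

import numpy as np

def minibatch_sgd(grad_fn, w0, data_stream, batch_size, lr=0.1):
    """Serial simulation of distributed mini-batch gradient descent.

    grad_fn(w, x): gradient of the loss on a single example x.
    In a distributed setting each worker would compute part of the batch
    gradients and the averages would be combined before the update.
    """
    w = np.array(w0, dtype=float)
    batch = []
    for x in data_stream:
        batch.append(grad_fn(w, x))
        if len(batch) == batch_size:
            w -= lr * np.mean(batch, axis=0)   # one update per mini-batch
            batch = []
    return w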
Journal ArticleDOI

A Douglas–Rachford Splitting Approach to Nonsmooth Convex Variational Signal Recovery

TL;DR: A decomposition method for signal recovery problems, based on the Douglas–Rachford algorithm for monotone operator splitting, is presented, and applications to non-Gaussian image denoising in a tight frame are demonstrated.
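The generic Douglas–Rachford iteration for minimizing a sum f(x) + g(x) from the two proximity operators is sketched below; this is the textbook form of the splitting named in the summary, not the specific variational signal-recovery formulation of the cited paper, and prox_f, prox_g and the example operators are hypothetical placeholders.

import numpy as np

def douglas_rachford(prox_f, prox_g, y0, gamma=1.0, n_iter=200):
    """Douglas-Rachford splitting for min_x f(x) + g(x).

    prox_f(v, gamma) and prox_g(v, gamma) are the proximity operators of
    f and g with parameter gamma.  Under standard convexity assumptions
    the sequence x converges to a minimizer of f + g.
    """
    y = np.array(y0, dtype=float)
    for _ in range(n_iter):
        x = prox_g(y, gamma)
        y = y + prox_f(2 * x - y, gamma) - x
    return prox_g(y, gamma)

# Hypothetical example: l1 penalty (soft-thresholding) plus the indicator
# of the nonnegative orthant (projection).
prox_l1 = lambda v, g: np.sign(v) * np.maximum(np.abs(v) - g, 0.0)
proj_pos = lambda v, g: np.maximum(v, 0.0)
x_star = douglas_rachford(prox_l1, proj_pos, y0=np.array([3.0, -2.0, 0.5]))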
Journal ArticleDOI

On elementary flux modes in biochemical reaction systems at steady state

TL;DR: It is shown that for systems in which all fluxes have fixed signs, all elementary modes are given by the generating vectors of a convex cone and can thus be computed by an existing algorithm.
Journal ArticleDOI

High-dimensional generalized linear models and the lasso

TL;DR: In this paper, a nonasymptotic oracle inequality is proved for the empirical risk minimizer with a Lasso penalty and Lipschitz loss functions in high-dimensional generalized linear models; the penalty is based on the coefficients in the linear predictor, after normalization with the empirical norm.
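As a concrete instance of the estimator class discussed here, the sketch below fits an l1-penalized logistic regression (a generalized linear model with Lipschitz loss) by proximal gradient descent; the step size, penalty weight and data are hypothetical, and the empirical-norm normalization and oracle-inequality analysis of the cited paper are not reproduced.

import numpy as np

def lasso_logistic(X, y, lam, lr=0.1, n_iter=500):
    """Proximal-gradient (ISTA) fit of l1-penalized logistic regression.

    X: (n, p) design matrix, y: labels in {0, 1}, lam: l1 penalty weight.
    """
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        pred = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid of linear predictor
        grad = X.T @ (pred - y) / n                # gradient of average logistic loss
        v = w - lr * grad                          # gradient step
        w = np.sign(v) * np.maximum(np.abs(v) - lr * lam, 0.0)  # soft-threshold
    return w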
Journal ArticleDOI

Optimum power allocation for parallel Gaussian channels with arbitrary input distributions

TL;DR: This paper gives the power allocation policy that maximizes the mutual information over parallel channels with arbitrary input distributions; the policy admits a graphical interpretation, referred to as mercury/waterfilling, which generalizes the waterfilling solution while retaining some of its intuition.
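For context, the classical waterfilling allocation for parallel Gaussian channels with Gaussian inputs is sketched below; the mercury/waterfilling policy of the cited paper generalizes this to arbitrary input distributions via MMSE functions, which is not attempted here, and the channel gains and power budget are hypothetical.

import numpy as np

def waterfilling(gains, total_power, tol=1e-9):
    """Classical waterfilling over parallel Gaussian channels.

    gains: per-channel gains g_i; allocates p_i = max(mu - 1/g_i, 0) so that
    sum(p_i) = total_power, by bisection on the water level mu.
    """
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + (1.0 / gains).max()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

# Hypothetical usage: strong channels receive most of the power budget.
print(waterfilling([2.0, 1.0, 0.25], total_power=1.0))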