
Convex Analysisの二,三の進展について (On Some Recent Developments in Convex Analysis)

Toru Maruyama (丸山 徹)
Vol. 70, Iss. 1, pp. 97–119
About
The article was published on 1977-02-01 and is currently open access. It has received 5,933 citations to date.



Citations
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it underpins applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal ArticleDOI

Increasing Returns and Long-Run Growth

TL;DR: The authors present a fully specified model of long-run growth in which knowledge is an input to production with increasing marginal productivity; the result is essentially a competitive equilibrium model with endogenous technological change.
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: The authors argue that the alternating direction method of multipliers (ADMM) is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
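The ADMM iteration summarized above can be illustrated on a standard textbook problem. The sketch below applies ADMM to the lasso, min ½‖Ax − b‖² + λ‖x‖₁, splitting it as x = z; it is a minimal generic illustration, not the book's reference implementation, and the function name and parameters are my own.

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    """Solve min 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM with the split x = z."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)  # u: scaled dual variable
    # Factor (A^T A + rho*I) once; it is reused every iteration.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: solve (A^T A + rho*I) x = A^T b + rho*(z - u)
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: soft-thresholding, the proximal operator of the l1 norm
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + x - z
    return z
```

For orthonormal A the lasso solution is plain soft-thresholding of b, which gives a quick sanity check of the iteration.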

Book

Pattern Recognition and Machine Learning

TL;DR: The book presents probability distributions and linear models for regression and classification, along with a discussion of combining models in the context of machine learning.
Journal ArticleDOI

An Algorithm for Vector Quantizer Design

TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
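Quantizer design from a training sequence, as summarized above, is typically done by alternating nearest-neighbor assignment with centroid updates (the Lloyd iteration underlying the LBG algorithm). The sketch below is a generic illustration under that assumption, not the paper's exact procedure; the function name and parameters are my own.

```python
import numpy as np

def lbg_quantizer(data, k, iters=50, seed=0):
    """Design a k-codeword vector quantizer from training vectors
    by alternating two steps of the Lloyd iteration."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with k distinct training vectors.
    codebook = data[rng.choice(len(data), k, replace=False)]
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        # Assignment step: map each vector to its nearest codeword.
        d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Centroid step: replace each codeword with the mean of its region.
        for j in range(k):
            pts = data[labels == j]
            if len(pts):
                codebook[j] = pts.mean(axis=0)
    return codebook, labels
```

Each iteration can only lower the average distortion, so the codebook converges to a locally optimal quantizer for the training data.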
References
Journal ArticleDOI

Relaxation methods for minimum cost ordinary and generalized network flow problems

TL;DR: These algorithms are based on iterative improvement of a dual cost and operate in a manner reminiscent of coordinate ascent and Gauss–Seidel relaxation; they are found to be several times faster on standard benchmark problems, and faster by an order of magnitude on large, randomly generated problems.
Journal ArticleDOI

An $O(1/k)$ Gradient Method for Network Resource Allocation Problems

TL;DR: This paper develops a completely distributed fast gradient method for solving the dual of the network utility maximization (NUM) problem, and shows that the generated primal sequences converge to the unique optimal solution of the NUM problem at rate O(1/k).
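The fast gradient machinery behind such dual methods is Nesterov's accelerated scheme for smooth convex minimization. The sketch below shows that generic scheme on its own, not the paper's distributed dual algorithm; the function name, the step-size rule 1/L, and the momentum sequence are the standard textbook choices, stated here as assumptions.

```python
import numpy as np

def fast_gradient(grad, x0, L, iters=100):
    """Nesterov's fast gradient method for a convex function
    whose gradient is L-Lipschitz continuous."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()       # extrapolated point
    t = 1.0            # momentum parameter
    for _ in range(iters):
        x_new = y - grad(y) / L                      # gradient step at y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

On a simple quadratic such as ½‖x − a‖² (gradient x − a, L = 1) the iterates converge to the minimizer a, which makes the scheme easy to test.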
Posted Content

Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem

TL;DR: The rate region of the quadratic Gaussian two-encoder source-coding problem is determined and the techniques can be used to determine the sum-rate of some generalizations of this classical problem.
Journal ArticleDOI

Modeling and optimization of risk

TL;DR: This paper surveys the most recent advances in decision making under uncertainty, with an emphasis on modeling risk-averse preferences using axiomatically defined risk functionals and their connections to utility theory, stochastic dominance, and other established methods.
Proceedings ArticleDOI

Concise Integer Linear Programming Formulations for Dependency Parsing

TL;DR: The paper formulates non-projective dependency parsing as a polynomial-sized integer linear program; the formulation handles non-local output features efficiently, is compatible with prior knowledge encoded as hard constraints, and can also learn soft constraints from data.