Open Access

On Some Developments in Convex Analysis

Toru Maruyama
Vol. 70, Iss. 1, pp. 97–119
About
The article was published on 1977-02-01 and is currently open access. It has received 5,933 citations to date.



Citations
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and to understand the world in terms of a hierarchy of concepts; it is used in many applications, such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Journal ArticleDOI

Increasing Returns and Long-Run Growth

TL;DR: In this paper, the authors present a fully specified model of long-run growth in which knowledge is assumed to be an input in production that has increasing marginal productivity, which is essentially a competitive equilibrium model with endogenous technological change.
Book

Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers

TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
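The ADMM iteration summarized above alternates a minimization in each primal block with a dual update. A minimal scalar sketch, assuming the toy problem minimize (x − a)² + λ|z| subject to x = z (the function names and parameter values here are illustrative, not from the cited paper):

```python
def soft_threshold(v, k):
    """Proximal operator of k*|.| -- the shrinkage step used in the z-update."""
    if v > k:
        return v - k
    if v < -k:
        return v + k
    return 0.0

def admm_scalar(a=3.0, lam=1.0, rho=1.0, iters=200):
    """Solve: minimize (x - a)^2 + lam*|z|  subject to  x = z, via scaled ADMM."""
    x = z = u = 0.0
    for _ in range(iters):
        # x-update: argmin_x (x - a)^2 + (rho/2)(x - z + u)^2, solved in closed form
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # z-update: proximal step on lam*|z|
        z = soft_threshold(x + u, lam / rho)
        # dual ascent on the scaled multiplier
        u += x - z
    return z
```

With a = 3 and λ = 1 the optimality condition 2(x − 3) + sign(x) = 0 gives x = 2.5, which the iteration reaches quickly; the same x-/z-/u-update pattern is what scales to the distributed, multi-block settings the paper discusses.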

Pattern Recognition and Machine Learning

TL;DR: Probability distributions and linear models for regression and classification are presented in this book, along with a discussion of combining models in the context of machine learning.
Journal ArticleDOI

An Algorithm for Vector Quantizer Design

TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
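The design procedure summarized above alternates nearest-codeword partitioning of the training data with a centroid update of each codeword. A minimal one-dimensional sketch of that alternation (the function name and the two-codeword example are illustrative assumptions, not taken from the cited paper):

```python
def quantizer_design_1d(data, codebook, iters=20):
    """Alternate nearest-codeword assignment and centroid update
    on a 1-D training sequence until the codebook settles."""
    codebook = list(codebook)
    for _ in range(iters):
        # Partition the training data among the current codewords.
        cells = [[] for _ in codebook]
        for x in data:
            j = min(range(len(codebook)), key=lambda i: (x - codebook[i]) ** 2)
            cells[j].append(x)
        # Replace each codeword by the centroid (mean) of its cell;
        # keep a codeword unchanged if its cell is empty.
        codebook = [sum(c) / len(c) if c else w
                    for c, w in zip(cells, codebook)]
    return codebook
```

For example, the training sequence [0, 1, 8, 9] with initial codewords 0 and 9 converges to the two cluster centroids 0.5 and 8.5 in a single pass; higher-dimensional vector quantizers use the same two alternating steps with vector distances and vector means.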
References

Estimate sequence methods: extensions and approximations

TL;DR: A simple, self-contained, and unified framework for the study of estimate sequences is developed, with which some acceleration schemes proposed by Nesterov can be recovered, notably the acceleration procedure for constrained cubic regularization in convex optimization, and generalizations to regularization schemes of any order are easily obtained.
Posted Content

Master Funds in Portfolio Analysis with General Deviation Measures

TL;DR: In this article, generalized measures of deviation are considered as substitutes for standard deviation in a framework like that of classical portfolio theory for coping with the uncertainty inherent in achieving rates of return beyond the risk-free rate.
ReportDOI

Living on the edge: A geometric theory of phase transitions in convex optimization

TL;DR: A new summary parameter, called the statistical dimension, is introduced that canonically extends the dimension of a linear subspace to the class of convex cones and leads to an approximate version of the conic kinematic formula that gives bounds on the probability that a randomly oriented cone shares a ray with a fixed cone.
Journal ArticleDOI

An Efficient Inexact Symmetric Gauss-Seidel Based Majorized ADMM for High-Dimensional Convex Composite Conic Programming

TL;DR: The results show that for the vast majority of the tested problems, the sGS-imsPADMM is 2–3 times faster than the directly extended multi-block ADMM with the aggressive step-length of 1.618, which is currently the benchmark among first-order methods for solving multi-block linear and quadratic SDP problems, though its convergence is not guaranteed.
Proceedings ArticleDOI

Distributed Non-Autonomous Power Control through Distributed Convex Optimization

TL;DR: This work considers the uplink power control problem, in which mobile users in different cells communicate with their base stations, and proposes convergent, distributed, and iterative power control algorithms that are non-autonomous.