
Convex Analysisの二,三の進展について [On a Few Developments in Convex Analysis]

01 Feb 1977, Vol. 70, Iss. 1, pp. 97–119

About: The article was published on 1977-02-01 and is open access. It has received 5,933 citations to date.


Citations
Book
22 Mar 1994
TL;DR: In this book, the authors give a brief history of multifingered hands and dextrous manipulation and develop a mathematical treatment of rigid body motion, manipulator kinematics, robot dynamics and control, grasping, and nonholonomic motion planning.
Abstract: Contents:
  • INTRODUCTION: Brief History. Multifingered Hands and Dextrous Manipulation. Outline of the Book. Bibliography.
  • RIGID BODY MOTION: Rigid Body Transformations. Rotational Motion in R3. Rigid Motion in R3. Velocity of a Rigid Body. Wrenches and Reciprocal Screws.
  • MANIPULATOR KINEMATICS: Introduction. Forward Kinematics. Inverse Kinematics. The Manipulator Jacobian. Redundant and Parallel Manipulators.
  • ROBOT DYNAMICS AND CONTROL: Introduction. Lagrange's Equations. Dynamics of Open-Chain Manipulators. Lyapunov Stability Theory. Position Control and Trajectory Tracking. Control of Constrained Manipulators.
  • MULTIFINGERED HAND KINEMATICS: Introduction to Grasping. Grasp Statics. Force-Closure. Grasp Planning. Grasp Constraints. Rolling Contact Kinematics.
  • HAND DYNAMICS AND CONTROL: Lagrange's Equations with Constraints. Robot Hand Dynamics. Redundant and Nonmanipulable Robot Systems. Kinematics and Statics of Tendon Actuation. Control of Robot Hands.
  • NONHOLONOMIC BEHAVIOR IN ROBOTIC SYSTEMS: Introduction. Controllability and Frobenius' Theorem. Examples of Nonholonomic Systems. Structure of Nonholonomic Systems.
  • NONHOLONOMIC MOTION PLANNING: Introduction. Steering Model Control Systems Using Sinusoids. General Methods for Steering. Dynamic Finger Repositioning.
  • FUTURE PROSPECTS: Robots in Hazardous Environments. Medical Applications for Multifingered Hands. Robots on a Small Scale: Microrobotics.
  • APPENDICES: Lie Groups and Robot Kinematics. A Mathematica Package for Screw Calculus.
  Bibliography. Index. Each chapter also includes a Summary, Bibliography, and Exercises.
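The chapters on rigid body motion work with rotations in exponential coordinates. As a small illustration (my own Python sketch, not code from the book; the names `hat` and `rodrigues` are mine), Rodrigues' formula builds the rotation matrix for a unit axis and an angle:

```python
import numpy as np

def hat(w):
    """Skew-symmetric 'hat' map sending w in R^3 to so(3)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def rodrigues(w, theta):
    """Rotation matrix exp(hat(w)*theta) for a unit axis w (Rodrigues' formula)."""
    W = hat(w)
    return np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

# A 90-degree rotation about the z-axis maps the x-axis to the y-axis.
R = rodrigues(np.array([0.0, 0.0, 1.0]), np.pi / 2)
print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))  # -> [0. 1. 0.]
```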

6,592 citations

Journal ArticleDOI
TL;DR: Optimal auctions are derived for a wide class of auction design problems when the seller has imperfect information about how much the buyers might be willing to pay for the object.
Abstract: This paper considers the problem faced by a seller who has a single object to sell to one of several possible buyers, when the seller has imperfect information about how much the buyers might be willing to pay for the object. The seller's problem is to design an auction game which has a Nash equilibrium giving him the highest possible expected utility. Optimal auctions are derived in this paper for a wide class of auction design problems.
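As a concrete illustration of the kind of result this machinery yields (a Monte Carlo sketch under my own assumptions, not the paper's derivation): with bidders whose values are i.i.d. uniform on [0,1], the optimal auction reduces to a second-price auction with reserve price 1/2, and simulated revenue peaks there:

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_revenue(reserve, n_bidders=2, n_trials=200_000):
    """Simulated seller revenue in a second-price auction with a reserve price."""
    v = rng.uniform(size=(n_trials, n_bidders))
    v_sorted = np.sort(v, axis=1)
    high, second = v_sorted[:, -1], v_sorted[:, -2]
    # The object sells only if the highest value clears the reserve;
    # the winner then pays max(second-highest value, reserve).
    pay = np.where(high >= reserve, np.maximum(second, reserve), 0.0)
    return pay.mean()

revenues = {r: expected_revenue(r) for r in (0.0, 0.25, 0.5, 0.75)}
# Revenue peaks near reserve = 0.5, the optimal reserve for uniform values.
```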

6,003 citations

Journal ArticleDOI
TL;DR: In this paper, a new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications; it focuses on minimizing Conditional Value-at-Risk (CVaR) rather than Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well.
Abstract: A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR, also called Mean Excess Loss, Mean Shortfall, or Tail VaR, is anyway considered to be a more consistent measure of risk than VaR. Central to the new approach is a technique for portfolio optimization which calculates VaR and optimizes CVaR simultaneously. This technique is suitable for use by investment companies, brokerage firms, mutual funds, and any business that evaluates risks. It can be combined with analytical or scenario-based methods to optimize portfolios with large numbers of instruments, in which case the calculations often come down to linear programming or nonsmooth programming. The methodology can be applied also to the optimization of percentiles in contexts outside of finance.
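The reduction to linear programming mentioned above can be sketched as follows (a minimal scenario-based LP under standard long-only, fully-invested constraints; the function name and constraint choices are my own, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, beta=0.95):
    """Minimize CVaR at level beta over long-only, fully invested portfolios.

    returns: (N, n) array of scenario returns (hypothetical sample data).
    Variables: x (n weights), alpha (VaR estimate), u (N excess losses).
    """
    N, n = returns.shape
    k = 1.0 / ((1.0 - beta) * N)
    # Variable vector z = [x, alpha, u]; objective alpha + k * sum(u).
    c = np.concatenate([np.zeros(n), [1.0], k * np.ones(N)])
    # Loss in scenario i is -returns[i] @ x; require u_i >= loss_i - alpha,
    # written as A_ub @ z <= 0:  -returns @ x - alpha - u <= 0.
    A_ub = np.hstack([-returns, -np.ones((N, 1)), -np.eye(N)])
    b_ub = np.zeros(N)
    # Fully invested: sum(x) == 1.
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(N)]).reshape(1, -1)
    b_eq = np.array([1.0])
    # Long-only weights, alpha free, u >= 0.
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    x, alpha = res.x[:n], res.x[n]
    return x, alpha, res.fun  # weights, VaR estimate, minimized CVaR

# Example with random scenario data (illustrative only):
rng = np.random.default_rng(0)
scenarios = rng.normal(0.001, 0.02, size=(500, 4))
weights, var_est, cvar = min_cvar_portfolio(scenarios, beta=0.95)
```

Minimizing the auxiliary objective in alpha and x simultaneously yields the VaR estimate as a by-product, which is the simultaneity the abstract refers to.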

5,622 citations


Cites background from "Convex Analysisの二,三の進展について"

  • ...For background on convexity, which is a key property in optimization that in particular eliminates the possibility of a local minimum being different from a global minimum, see Rockafellar (1970), Shor (1985), for instance....

    [...]

  • ...The vector x can be interpreted as representing a portfolio, with X as the set of available portfolios (subject to various constraints), but other interpretations could be made as well....

    [...]

Journal ArticleDOI
TL;DR: The theory of proper scoring rules on general probability spaces is reviewed and developed, and the intuitively appealing interval score is proposed as a utility function in interval estimation that addresses width as well as coverage.
Abstract: Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the ...
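To make propriety concrete, here is a toy check I added (not from the article): for the Brier score on a binary event, the expected score is optimized exactly at the honest forecast. Note the article works with positively oriented scores, whereas the Brier score below is negatively oriented, so honesty minimizes rather than maximizes.

```python
import numpy as np

def brier_score(p, outcome):
    """Brier score for a forecast P(event) = p; lower is better."""
    return (p - outcome) ** 2

def expected_score(p, true_p):
    """Expected Brier score when the event has true probability true_p."""
    return true_p * brier_score(p, 1) + (1 - true_p) * brier_score(p, 0)

true_p = 0.3
grid = np.linspace(0.0, 1.0, 101)
best = grid[np.argmin([expected_score(q, true_p) for q in grid])]
print(best)  # -> 0.3: the honest forecast optimizes the expected score
```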

4,644 citations


Cites methods from "Convex Analysisの二,三の進展について"

  • ...Our contributions lie in the notion of regularity, the rigorous treatment, and the introduction of appropriate tools for convex analysis (Rockafellar 1970, sects....

    [...]

  • ...For instance, we conclude from theorem 24.2 of Rockafellar (1970) that $S(p,1) = \lim_{q \to 1} G(q) - \int_p^1 \bigl(G'(q) - G'(p)\bigr)\, dq$ (13) for $p \in (0,1)$, and because $G'(p)$ is increasing, $S(p,1)$ is increasing as well....

    [...]

Journal ArticleDOI
TL;DR: A first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems.
Abstract: In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N^2) convergence on problems, where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi-label image segmentation.
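The core iteration alternates proximal steps on the dual and primal variables with an over-relaxation step. Below is a minimal sketch (my own Python rendering with caller-supplied proximal operators; the 1-D total-variation example and its operators are my assumptions, not the paper's code):

```python
import numpy as np

def chambolle_pock(K, prox_tau_G, prox_sigma_Fstar, x0, y0,
                   tau, sigma, theta=1.0, n_iters=500):
    """Primal-dual iteration for min_x max_y <Kx, y> + G(x) - F*(y).

    The O(1/N) rate requires tau * sigma * ||K||^2 <= 1.
    """
    x, x_bar, y = x0.copy(), x0.copy(), y0.copy()
    for _ in range(n_iters):
        y = prox_sigma_Fstar(y + sigma * (K @ x_bar))   # dual proximal step
        x_new = prox_tau_G(x - tau * (K.T @ y))         # primal proximal step
        x_bar = x_new + theta * (x_new - x)             # over-relaxation
        x = x_new
    return x, y

# Example: 1-D total-variation denoising, min_x 0.5*||x - b||^2 + lam*||D x||_1,
# taking G(x) = 0.5*||x - b||^2 and F*(y) = indicator of ||y||_inf <= lam.
n = 200
rng = np.random.default_rng(1)
b = np.repeat([0.0, 1.0, -0.5], [70, 70, 60]) + 0.1 * rng.standard_normal(n)
D = np.diff(np.eye(n), axis=0)          # forward-difference operator, (n-1, n)
lam = 0.5
L = np.linalg.norm(D, 2)                # operator norm ||D||
tau = sigma = 0.9 / L                   # so that tau * sigma * ||D||^2 < 1
x_den, _ = chambolle_pock(
    D,
    prox_tau_G=lambda v: (v + tau * b) / (1 + tau),    # prox of tau * G
    prox_sigma_Fstar=lambda v: np.clip(v, -lam, lam),  # projection onto the box
    x0=np.zeros(n), y0=np.zeros(n - 1), tau=tau, sigma=sigma)
```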

4,487 citations

References
Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.
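The "hierarchy of concepts" is literally a composition of simple parameterized layers. A toy forward pass (my own minimal sketch, not from the book) makes the layered structure explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    """One layer: affine map followed by a ReLU nonlinearity."""
    return np.maximum(W @ x + b, 0.0)

# Three stacked layers: each builds features out of the previous layer's output.
sizes = [8, 16, 16, 4]
params = [(rng.standard_normal((m, n)) * 0.1, np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal(sizes[0])
for W, b in params:
    x = layer(x, W, b)   # deeper layers see increasingly abstract features
print(x.shape)           # -> (4,)
```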

38,208 citations

Journal ArticleDOI
TL;DR: In this paper, the authors present a fully specified model of long-run growth in which knowledge is assumed to be an input in production that has increasing marginal productivity, which is essentially a competitive equilibrium model with endogenous technological change.
Abstract: This paper presents a fully specified model of long-run growth in which knowledge is assumed to be an input in production that has increasing marginal productivity. It is essentially a competitive equilibrium model with endogenous technological change. In contrast to models based on diminishing returns, growth rates can be increasing over time, the effects of small disturbances can be amplified by the actions of private agents, and large countries may always grow faster than small countries. Long-run evidence is offered in support of the empirical relevance of these possibilities.

18,200 citations

Book
23 May 2011
TL;DR: It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
Abstract: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.
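For a sense of how the method specializes, here is a minimal ADMM loop for the lasso, one of the applications listed above (a sketch of the standard x-z splitting with my own parameter choices, not the review's reference implementation):

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam, rho=1.0, n_iters=200):
    """ADMM for the lasso, min 0.5*||Ax - b||^2 + lam*||x||_1, via x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # formed once, reused every iteration
    Atb = A.T @ b
    for _ in range(n_iters):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))  # x-update (ridge solve)
        z = soft_threshold(x + u, lam / rho)                # z-update (shrinkage)
        u = u + x - z                                       # scaled dual update
    return z
```

Each sub-step has a closed form, which is what makes the splitting attractive for large, distributed problems.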

17,433 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book presents probability distributions and linear models for regression and classification, together with neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and the combining of models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data.
Abstract: An efficient and intuitive algorithm is presented for the design of vector quantizers based either on a known probabilistic model or on a long training sequence of data. The basic properties of the algorithm are discussed and demonstrated by examples. Quite general distortion measures and long blocklengths are allowed, as exemplified by the design of parameter vector quantizers of ten-dimensional vectors arising in Linear Predictive Coded (LPC) speech compression with a complicated distortion measure arising in LPC analysis that does not depend only on the error vector.
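The algorithm alternates a nearest-neighbor partition of the training data with a centroid update. A compact sketch with squared-error distortion (my own simplification; the paper allows quite general distortion measures and treats empty cells more carefully):

```python
import numpy as np

def lbg_codebook(train, n_codewords, n_iters=50, seed=0):
    """Generalized Lloyd iteration for vector-quantizer design from a
    training sequence, using squared-error distortion."""
    train = np.asarray(train, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize the codebook with distinct training vectors.
    codebook = train[rng.choice(len(train), n_codewords, replace=False)]
    for _ in range(n_iters):
        # Partition: assign each training vector to its nearest codeword.
        d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        # Centroid update for each nonempty cell.
        for j in range(n_codewords):
            members = train[assign == j]
            if len(members):
                codebook[j] = members.mean(axis=0)
    return codebook
```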

7,935 citations