Author

William Karush

Bio: William Karush is an academic researcher from System Development Corporation. The author has contributed to research in topics: Functional equation & Two-sided Laplace transform. The author has an h-index of 10 and has co-authored 11 publications receiving 1,298 citations.

Papers
Book ChapterDOI
01 Jan 2014
TL;DR: In this article, the problem of determining necessary conditions and sufficient conditions for a relative minimum of a function is considered in the class of points x satisfying the inequalities gα(x) ≥ 0, where the number of constraints m may be less than, equal to, or greater than the number of variables n.
Abstract: The problem of determining necessary conditions and sufficient conditions for a relative minimum of a function \( f(x_1, x_2, \ldots, x_n) \) in the class of points \( x = (x_1, x_2, \ldots, x_n) \) satisfying the equations \( g_\alpha(x) = 0 \ (\alpha = 1, 2, \ldots, m) \), where the functions f and \( g_\alpha \) have continuous derivatives of at least the second order, has been satisfactorily treated [1]*. This paper proposes to take up the corresponding problem in the class of points x satisfying the inequalities \( g_\alpha(x) \geqq 0 \ (\alpha = 1, 2, \ldots, m), \) where m may be less than, equal to, or greater than n.
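Stated for reference in modern textbook notation (not quoted from the thesis itself), the first-order conditions that now bear Karush's name, the Karush-Kuhn-Tucker (KKT) conditions, for minimizing f subject to these inequality constraints are:

```latex
% KKT first-order necessary conditions for minimizing f(x)
% subject to g_alpha(x) >= 0 (under a constraint qualification):
% at a local minimum x* there exist multipliers lambda_alpha with
\begin{aligned}
  \nabla f(x^{*}) - \sum_{\alpha=1}^{m} \lambda_{\alpha}\, \nabla g_{\alpha}(x^{*}) &= 0, \\
  g_{\alpha}(x^{*}) \ge 0, \qquad \lambda_{\alpha} \ge 0, \qquad
  \lambda_{\alpha}\, g_{\alpha}(x^{*}) &= 0 \qquad (\alpha = 1, 2, \ldots, m).
\end{aligned}
```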

1,027 citations

Journal ArticleDOI
TL;DR: The relation between lost sales and inventory level is an important problem in inventory control, as discussed by the authors; an explicit mathematical solution is obtained, by methods of general interest, for a probabilistic model that arose in connection with consulting work for an industrial client.
Abstract: The relation between lost sales and inventory level is an important problem in inventory control. An explicit mathematical solution is obtained by methods of general interest for a probabilistic model that arose in connection with consulting work for an industrial client. Customer demand for a given commodity is a Poisson process with mean rate λ, and replenishment time for restocking is random. At any moment, the constant inventory n is divided between the in-stock amount n0 and the in-replenishment amount n − n0. Customer arrival when n0 > 0 results in a unit sale and the initiation of replenishment of that unit. Successive replenishment times are independent. Customer arrival when n0 = 0 results in a lost sale. The unique stationary probabilities p(n0∣n) of the states n0 (fixed n) are obtained; they are given by the Erlang congestion formula and depend upon the replenishment time only to the extent of its mean value. A generalization is obtained where λ may be a function of the state of the system. ...
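A minimal sketch of the stationary distribution described above, assuming the standard Erlang-loss form with offered load a = λ × mean replenishment time; the function and parameter names are illustrative, not from the paper:

```python
from math import factorial

def stationary_probs(n, lam, mean_repl):
    """Stationary probabilities p(n0 | n) that n0 of the n units are in stock.

    k = n - n0 units are in replenishment; by the Erlang-loss insensitivity
    result, the distribution depends on replenishment time only via its mean.
    """
    a = lam * mean_repl  # offered load
    weights = [a**k / factorial(k) for k in range(n + 1)]
    z = sum(weights)
    # p(n0 | n) = P(exactly n - n0 units are in replenishment)
    return {n0: weights[n - n0] / z for n0 in range(n + 1)}

probs = stationary_probs(n=5, lam=2.0, mean_repl=1.5)
lost_sale_fraction = probs[0]  # Erlang congestion (B) formula, state n0 = 0
```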

70 citations

Journal ArticleDOI
TL;DR: In this paper, the authors show that F satisfies F(A, B) = M(A) + N(B), with M and N increasing and decreasing convex functions respectively, establish the functional equation F(A, C) = F(A, B) + F(B, C) − F(B, B), and generalize the results to the continuous case.
Abstract: An optimization problem which frequently arises in applications of mathematical programming is the following: F(A, B) = min Σi fi(xi) over A ≤ x1 ≤ x2 ≤ … ≤ xN ≤ B, where the fi are convex functions. In this paper, the function F is studied and shown to satisfy F(A, B) = M(A) + N(B), where M and N are increasing and decreasing convex functions, respectively. Also, the functional equation F(A, C) = F(A, B) + F(B, C) − F(B, B) is established. These results generalize to the continuous case F(A, B) = min ∫ f(t, x(t)) dt, with x(t) increasing and A ≤ x(0) ≤ x(T) ≤ B. The results obtained in this paper are useful for reducing an optimization problem with many variables to one with fewer variables.
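A small numerical sketch of the functional equation, using a brute-force grid discretization of F(A, B) = min Σi fi(xi) over A ≤ x1 ≤ … ≤ xN ≤ B; this is an illustrative check under that assumed discrete form, not the paper's method:

```python
import numpy as np

def F(A, B, fs, grid_size=201):
    """Approximate F(A, B) = min sum_i f_i(x_i) over A <= x_1 <= ... <= x_N <= B
    by dynamic programming on a uniform grid."""
    xs = np.linspace(A, B, grid_size)
    best = fs[0](xs)                      # best[j]: min cost with x_1 = xs[j]
    for f in fs[1:]:
        # enforce x_{i-1} <= x_i via a running minimum, then add f_i
        best = np.minimum.accumulate(best) + f(xs)
    return best.min()

fs = [lambda x, c=c: (x - c) ** 2 for c in (0.3, 1.2, 2.0)]  # convex f_i
A, B, C = 0.0, 1.0, 2.5
lhs = F(A, C, fs)
rhs = F(A, B, fs) + F(B, C, fs) - F(B, B, fs)
# The paper's identity predicts lhs == rhs, up to discretization error.
```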

52 citations

Journal ArticleDOI
TL;DR: In this paper, a "convolution" h = ƒ * g of two functions ƒ and g is introduced in connection with optimizing a separable function F(x1, x2, …, xN) = f1(x1) + ⋯ + fN(xN) subject to x1 + x2 + ⋯ + xN = X, xi ≥ 0.
Abstract: (1) F(x1, x2, … , xN) = f1(x1) + f2(x2) + ⋯ + fN(xN) over the region R defined by x1 + x2 + ⋯ + xN = X, xi ≥ 0. Under various assumptions concerning the fi, this problem can be studied analytically; cf. Karush [1; 2], and it can also be treated analytically by means of the theory of dynamic programming [3]. It is natural in this connection to introduce a "convolution" of two functions ƒ and g, h = ƒ * g, defined by
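The abstract is cut off at the definition itself; in this dynamic-programming setting the natural operation is the minimal convolution h(x) = min over 0 ≤ y ≤ x of [ƒ(y) + g(x − y)], which is an assumption based on the standard construction, since the truncated text does not confirm it. On an integer grid:

```python
def min_convolve(f, g):
    """h[x] = min over 0 <= y <= x of f[y] + g[x - y], with f and g given as
    value lists on the integer grid 0..X (a sketch; swap min for max to get
    a maximization variant)."""
    X = len(f) - 1
    return [min(f[y] + g[x - y] for y in range(x + 1)) for x in range(X + 1)]

# Distributing X units between two activities optimally costs (f1 * f2)[X];
# folding in f3, f4, ... one convolution at a time handles N activities.
```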

38 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
TL;DR: This tutorial gives an overview of the basic ideas underlying Support Vector (SV) machines for function estimation, and includes a summary of currently used algorithms for training SV machines, covering both the quadratic programming part and advanced methods for dealing with large datasets.
Abstract: In this tutorial we give an overview of the basic ideas underlying Support Vector (SV) machines for function estimation. Furthermore, we include a summary of currently used algorithms for training SV machines, covering both the quadratic (or convex) programming part and advanced methods for dealing with large datasets. Finally, we mention some modifications and extensions that have been applied to the standard SV algorithm, and discuss the aspect of regularization from a SV perspective.
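As a concrete companion to the tutorial's subject, here is a minimal function-estimation example using a modern SV implementation (scikit-learn's SVR); the library, data, and parameter choices are ours, not the tutorial's:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)

# Epsilon-insensitive SV regression with an RBF kernel; C and epsilon set
# the regularization / tube-width trade-off discussed in the tutorial.
model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
y_hat = model.predict(X)
```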

10,696 citations

Proceedings Article
24 Apr 2017
TL;DR: In this article, a modification of the variational autoencoder (VAE) framework is proposed to learn interpretable factorised latent representations from raw image data in a completely unsupervised manner.
Abstract: Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do. We introduce beta-VAE, a new state-of-the-art framework for automated discovery of interpretable factorised latent representations from raw image data in a completely unsupervised manner. Our approach is a modification of the variational autoencoder (VAE) framework. We introduce an adjustable hyperparameter beta that balances latent channel capacity and independence constraints with reconstruction accuracy. We demonstrate that beta-VAE with appropriately tuned beta > 1 qualitatively outperforms VAE (beta = 1), as well as state-of-the-art unsupervised (InfoGAN) and semi-supervised (DC-IGN) approaches to disentangled factor learning on a variety of datasets (celebA, faces and chairs). Furthermore, we devise a protocol to quantitatively compare the degree of disentanglement learnt by different models, and show that our approach also significantly outperforms all baselines quantitatively. Unlike InfoGAN, beta-VAE is stable to train, makes few assumptions about the data and relies on tuning a single hyperparameter, which can be directly optimised through a hyperparameter search using weakly labelled data or through heuristic visual inspection for purely unsupervised data.
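A sketch of the objective that the adjustable beta modifies, assuming the usual diagonal-Gaussian encoder and Bernoulli decoder; the names and PyTorch framing are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    """Negative ELBO with the KL term scaled by beta; beta = 1 recovers the
    standard VAE, while beta > 1 trades reconstruction accuracy for more
    independent (disentangled) latent channels."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian posterior
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```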

3,670 citations

Book
17 Aug 2012
TL;DR: This graduate-level textbook introduces fundamental concepts and methods in machine learning, and provides the theoretical underpinnings of these algorithms, and illustrates key aspects for their application.
Abstract: This graduate-level textbook introduces fundamental concepts and methods in machine learning. It describes several important modern algorithms, provides the theoretical underpinnings of these algorithms, and illustrates key aspects for their application. The authors aim to present novel theoretical tools and concepts while giving concise proofs even for relatively advanced topics. Foundations of Machine Learning fills the need for a general textbook that also offers theoretical details and an emphasis on proofs. Certain topics that are often treated with insufficient attention are discussed in more detail here; for example, entire chapters are devoted to regression, multi-class classification, and ranking. The first three chapters lay the theoretical foundation for what follows, but each remaining chapter is mostly self-contained. The appendix offers a concise probability review, a short introduction to convex optimization, tools for concentration bounds, and several basic properties of matrices and norms used in the book. The book is intended for graduate students and researchers in machine learning, statistics, and related areas; it can be used either as a textbook or as a reference text for a research seminar.

2,511 citations

Journal ArticleDOI
TL;DR: A review of machine learning methods employing positive definite kernels, ranging from binary classifiers to sophisticated methods for estimation with structured data; the function classes involved include nonlinear functions as well as functions defined on nonvectorial data.
Abstract: We review machine learning methods employing positive definite kernels. These methods formulate learning and estimation problems in a reproducing kernel Hilbert space (RKHS) of functions defined on the data domain, expanded in terms of a kernel. Working in linear spaces of functions has the benefit of facilitating the construction and analysis of learning algorithms while at the same time allowing large classes of functions. The latter include nonlinear functions as well as functions defined on nonvectorial data. We cover a wide range of methods, ranging from binary classifiers to sophisticated methods for estimation with structured data.
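To make the kernel-expansion point concrete, here is a minimal RKHS-style estimator (kernel ridge regression with a Gaussian kernel); this is a standard method chosen purely for illustration, not one singled out by the review:

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Positive definite Gaussian kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def kernel_ridge_fit(X, y, gamma=1.0, reg=1e-3):
    """Fit f(x) = sum_i alpha_i k(x_i, x), an expansion in the RKHS of k."""
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)
    return lambda X_new: rbf_kernel(X_new, X, gamma) @ alpha
```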

1,791 citations