Author

Jayant R. Kalagnanam

Other affiliations: Carnegie Mellon University
Bio: Jayant R. Kalagnanam is an academic researcher at IBM. He has contributed to research topics including common value auctions and optimization problems, has an h-index of 35, and has co-authored 148 publications receiving 9,149 citations. His previous affiliations include Carnegie Mellon University.


Papers
Journal ArticleDOI
TL;DR: This paper presents a middleware platform that selects Web services for composition so as to maximize user satisfaction, expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service.
Abstract: The paradigmatic shift from a Web of manual interactions to a Web of programmatic interactions driven by Web services is creating unprecedented opportunities for the formation of online business-to-business (B2B) collaborations. In particular, the creation of value-added services by composition of existing ones is gaining a significant momentum. Since many available Web services provide overlapping or identical functionality, albeit with different quality of service (QoS), a choice needs to be made to determine which services are to participate in a given composite service. This paper presents a middleware platform which addresses the issue of selecting Web services for the purpose of their composition in a way that maximizes user satisfaction expressed as utility functions over QoS attributes, while satisfying the constraints set by the user and by the structure of the composite service. Two selection approaches are described and compared: one based on local (task-level) selection of services and the other based on global allocation of tasks to services using integer programming.
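As a rough illustration of the difference between the two selection approaches, the sketch below contrasts per-task (local) selection with a budget-aware global search over a toy composite service. The tasks, candidate services, QoS figures, and the additive utility function are all invented for illustration and are not taken from the paper or its middleware.

```python
# Hypothetical sketch of local vs. global service selection.
# Candidate tuples are (service id, price, duration, reliability); all values are invented.
from itertools import product

candidates = {
    "t1": [("s1a", 5, 2.0, 0.99), ("s1b", 2, 4.0, 0.95)],
    "t2": [("s2a", 8, 1.0, 0.98), ("s2b", 3, 3.0, 0.90)],
}

def utility(price, duration, reliability):
    # Simple additive utility: cheaper, faster, more reliable is better.
    return -0.4 * price - 0.3 * duration + 10.0 * reliability

def local_selection(cands):
    """Pick the best service per task, ignoring cross-task constraints."""
    return {t: max(cs, key=lambda c: utility(*c[1:])) for t, cs in cands.items()}

def global_selection(cands, budget):
    """Enumerate full assignments and keep the best one within the budget.
    (The paper uses integer programming; brute force here just illustrates
    the global, constraint-aware view.)"""
    best, best_u = None, float("-inf")
    tasks = list(cands)
    for combo in product(*(cands[t] for t in tasks)):
        total_price = sum(c[1] for c in combo)
        total_u = sum(utility(*c[1:]) for c in combo)
        if total_price <= budget and total_u > best_u:
            best, best_u = dict(zip(tasks, combo)), total_u
    return best

print(local_selection(candidates))
print(global_selection(candidates, budget=9))
```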

2,872 citations

Proceedings ArticleDOI
20 May 2003
TL;DR: This paper proposes a global planning approach to optimally select component services during the execution of a composite service, and experimental results show that this global planning approach outperforms approaches in which the component services are selected individually for each task in a composite service.
Abstract: The process-driven composition of Web services is emerging as a promising approach to integrate business applications within and across organizational boundaries. In this approach, individual Web services are federated into composite Web services whose business logic is expressed as a process model. The tasks of this process model are essentially invocations to functionalities offered by the underlying component services. Usually, several component services are able to execute a given task, although with different levels of pricing and quality. In this paper, we advocate that the selection of component services should be carried out during the execution of a composite service, rather than at design-time. In addition, this selection should consider multiple criteria (e.g., price, duration, reliability), and it should take into account global constraints and preferences set by the user (e.g., budget constraints). Accordingly, the paper proposes a global planning approach to optimally select component services during the execution of a composite service. Service selection is formulated as an optimization problem which can be solved using efficient linear programming methods. Experimental results show that this global planning approach outperforms approaches in which the component services are selected individually for each task in a composite service.
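The selection problem described above can be written as a small 0-1 program. The sketch below uses SciPy's MILP solver on an invented two-task example with a price budget; the paper's actual formulation, constraint set, and QoS model are richer than this.

```python
# A minimal 0-1 programming sketch of budget-constrained service selection.
# Scores, prices, and the budget are invented for illustration.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Two tasks, two candidate services each; flat index order [t1s1, t1s2, t2s1, t2s2].
score = np.array([8.0, 6.0, 7.0, 4.0])   # quality/utility per candidate
price = np.array([5.0, 2.0, 8.0, 3.0])   # price per candidate
budget = 9.0

# milp minimizes, so negate the scores to maximize total quality.
c = -score

constraints = [
    # Exactly one service per task.
    LinearConstraint(np.array([[1, 1, 0, 0], [0, 0, 1, 1]]), lb=1, ub=1),
    # Total price must not exceed the budget.
    LinearConstraint(price, ub=budget),
]

res = milp(c=c, constraints=constraints,
           integrality=np.ones(4), bounds=Bounds(0, 1))
print(res.x)  # approximately [1, 0, 0, 1]: the budget rules out the two priciest picks together
```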

1,229 citations

Journal ArticleDOI
TL;DR: This paper describes the information technology foundation and principles for Smarter Cities™, which enable city services to adapt to the behavior of inhabitants and thereby permit optimal use of the available physical infrastructure and resources.
Abstract: This paper describes the information technology (IT) foundation and principles for Smarter Cities™. Smarter Cities are urban areas that exploit operational data, such as that arising from traffic congestion, power consumption statistics, and public safety events, to optimize the operation of city services. The foundational concepts are instrumented, interconnected, and intelligent. Instrumented refers to sources of near-real-time real-world data from both physical and virtual sensors. Interconnected means the integration of those data into an enterprise computing platform and the communication of such information among the various city services. Intelligent refers to the inclusion of complex analytics, modeling, optimization, and visualization in the operational business processes to make better operational decisions. This approach enables the adaptation of city services to the behavior of the inhabitants, which permits the optimal use of the available physical infrastructure and resources, for example, in sensing and controlling consumption of energy and water, managing waste processing and transportation systems, and applying optimization to achieve new efficiencies among these resources. Additional roles exist in intelligent interaction between the city and its inhabitants and further contribute to operational efficiency while maintaining or enhancing quality of life.

953 citations

Journal ArticleDOI
TL;DR: A new sampling technique is presented that generates and inverts the Hammersley points to provide a representative sample for multivariate probability distributions; the sample is compared to one obtained from a Latin hypercube design by propagating both through a set of nonlinear functions.
Abstract: The basic setting of this article is that of parameter-design studies using data from computer models. A general approach to parameter design is introduced by coupling an optimizer directly with the computer simulation model using stochastic descriptions of the noise factors. The computational burden of these approaches can be extreme, however, and depends on the sample size used for characterizing the parametric uncertainties. In this article, we present a new sampling technique that generates and inverts the Hammersley points (a low-discrepancy design for placing n points uniformly in a k-dimensional cube) to provide a representative sample for multivariate probability distributions. We compare the performance of this to a sample obtained from a Latin hypercube design by propagating it through a set of nonlinear functions. The number of samples required to converge to the mean and variance is used as a measure of performance. The sampling technique based on the Hammersley points requires far fewer samples than the Latin hypercube design to converge to the mean and variance.
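The core of the technique, as described above, is to generate Hammersley points on the unit cube and push them through the inverse cumulative distribution functions of the marginals. The sketch below assumes independent normal marginals purely for illustration; it is not the paper's case study.

```python
# Sketch of Hammersley sequence sampling: low-discrepancy points on the unit
# cube, inverted through marginal inverse CDFs to sample a multivariate
# distribution. Marginals and parameters here are invented.
import numpy as np
from scipy.stats import norm

def van_der_corput(i, base):
    """Radical inverse of integer i in the given base, a value in [0, 1)."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, rem = divmod(i, base)
        denom *= base
        x += rem / denom
    return x

def hammersley(n, k, primes=(2, 3, 5, 7, 11, 13)):
    """n Hammersley points in the k-dimensional unit cube.
    First coordinate is (i + 0.5)/n (the offset keeps the inverse CDF finite);
    the remaining coordinates are radical inverses in successive prime bases."""
    pts = np.empty((n, k))
    pts[:, 0] = (np.arange(n) + 0.5) / n
    for d in range(1, k):
        pts[:, d] = [van_der_corput(i + 1, primes[d - 1]) for i in range(n)]
    return pts

# Invert the uniform points through the marginal inverse CDFs (independent normals).
u = hammersley(n=100, k=3)
samples = norm.ppf(u, loc=[0.0, 5.0, -2.0], scale=[1.0, 0.5, 2.0])
print(samples.mean(axis=0), samples.var(axis=0))
```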

309 citations

Journal ArticleDOI
TL;DR: In this paper, a sampling technique is presented that generates and inverts the Hammersley points (an optimal design for placing n points uniformly on a k-dimensional cube) to provide a representative sample for multivariate probability distributions.
Abstract: The concept of robust design involves identification of design settings that make the product performance less sensitive to the effects of seasonal and environmental variations. This concept is discussed in this article in the context of batch distillation column design with feed stock variations, and internal and external uncertainties. Stochastic optimization methods provide a general approach to robust/parameter design as compared to conventional techniques. However, the computational burden of these approaches can be extreme and depends on the sample size used for characterizing the parametric variations and uncertainties. A novel sampling technique is presented that generates and inverts the Hammersley points (an optimal design for placing n points uniformly on a k-dimensional cube) to provide a representative sample for multivariate probability distributions. The example of robust batch-distillation column design illustrates that the new sampling technique offers significant computational savings and better accuracy.
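For the robust-design setting sketched above, one simple reading is a mean-plus-variance objective evaluated over a fixed sample of the uncertain parameters. The toy example below uses an invented performance function and plain Monte Carlo draws; the paper's application is a batch distillation column, and Hammersley samples would replace the random draws.

```python
# Toy illustration of robust design via stochastic optimization: choose a
# design setting that minimizes mean performance plus a penalty on its
# variance across sampled uncertain parameters. Everything here is invented.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta = rng.normal(loc=1.0, scale=0.2, size=200)  # sampled uncertain parameter
                                                  # (Hammersley points could be used instead)

def performance(x, theta):
    # Hypothetical performance: sensitive to theta unless x is chosen well.
    return (x - 2.0) ** 2 + (x - 1.0) * theta + 0.5 * theta ** 2

def robust_objective(x, weight=2.0):
    vals = performance(x, theta)
    return vals.mean() + weight * vals.var()  # penalize sensitivity to theta

res = minimize_scalar(robust_objective, bounds=(0.0, 3.0), method="bounded")
print(res.x, robust_objective(res.x))
```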

245 citations


Cited by
Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with a discussion of combining models in the context of machine learning and classification.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI

6,278 citations

Book
30 Jun 2002
TL;DR: This book presents a meta-anatomy of the multi-criteria decision-making process, which aims to provide a scaffolding for the future development of multi-criteria decision-making systems.
Abstract: List of Figures. List of Tables. Preface. Foreword. 1. Basic Concepts. 2. Evolutionary Algorithm MOP Approaches. 3. MOEA Test Suites. 4. MOEA Testing and Analysis. 5. MOEA Theory and Issues. 6. Applications. 7. MOEA Parallelization. 8. Multi-Criteria Decision Making. 9. Special Topics. 10. Epilog. Appendix A: MOEA Classification and Technique Analysis. Appendix B: MOPs in the Literature. Appendix C: Ptrue & PFtrue for Selected Numeric MOPs. Appendix D: Ptrue & PFtrue for Side-Constrained MOPs. Appendix E: MOEA Software Availability. Appendix F: MOEA-Related Information. Index. References.

5,994 citations

Book
01 Jan 2001
TL;DR: The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams, and presents a thorough introduction to state-of-the-art solution and analysis algorithms.
Abstract: Probabilistic graphical models and decision graphs are powerful modeling tools for reasoning and decision making under uncertainty. As modeling languages they allow a natural specification of problem domains with inherent uncertainty, and from a computational perspective they support efficient algorithms for automatic construction and query answering. This includes belief updating, finding the most probable explanation for the observed evidence, detecting conflicts in the evidence entered into the network, determining optimal strategies, analyzing for relevance, and performing sensitivity analysis. The book introduces probabilistic graphical models and decision graphs, including Bayesian networks and influence diagrams. The reader is introduced to the two types of frameworks through examples and exercises, which also instruct the reader on how to build these models. The book is a new edition of Bayesian Networks and Decision Graphs by Finn V. Jensen. The new edition is structured into two parts. The first part focuses on probabilistic graphical models. Compared with the previous book, the new edition also includes a thorough description of recent extensions to the Bayesian network modeling language, advances in exact and approximate belief updating algorithms, and methods for learning both the structure and the parameters of a Bayesian network. The second part deals with decision graphs, and in addition to the frameworks described in the previous edition, it also introduces Markov decision processes and partially ordered decision problems. The authors provide a well-founded practical introduction to Bayesian networks, object-oriented Bayesian networks, decision trees, influence diagrams (and variants thereof), and Markov decision processes; give practical advice on the construction of Bayesian networks, decision trees, and influence diagrams from domain knowledge; give several examples and exercises exploiting computer systems for dealing with Bayesian networks and decision graphs; and present a thorough introduction to state-of-the-art solution and analysis algorithms. The book is intended as a textbook, but it can also be used for self-study and as a reference book.
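As a concrete illustration of the belief updating mentioned above, the sketch below does exact inference by enumeration on a two-node network; the network and its probabilities are invented and far simpler than anything treated in the book.

```python
# Minimal belief updating on a two-node Bayesian network (Cloudy -> Rain),
# done by enumeration over the joint distribution. All numbers are invented.
p_cloudy = {True: 0.4, False: 0.6}            # prior P(Cloudy)
p_rain_given_cloudy = {True: 0.8, False: 0.1}  # P(Rain | Cloudy)

def posterior_cloudy_given_rain(rain_observed=True):
    """P(Cloudy | Rain) via Bayes' rule / enumeration over the joint."""
    joint = {}
    for cloudy in (True, False):
        p_rain = p_rain_given_cloudy[cloudy]
        likelihood = p_rain if rain_observed else 1.0 - p_rain
        joint[cloudy] = p_cloudy[cloudy] * likelihood
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

print(posterior_cloudy_given_rain(True))   # {True: ~0.842, False: ~0.158}
```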

4,566 citations