Institution
Carnegie Mellon University
Education • Pittsburgh, Pennsylvania, United States
About: Carnegie Mellon University is an education organization based in Pittsburgh, Pennsylvania, United States. It is known for its research contributions in the topics of Computer science & Robot. The organization has 36317 authors who have published 104359 publications receiving 5975734 citations. The organization is also known as CMU and Carnegie Mellon.
Papers published on a yearly basis
Papers
TL;DR: A super-polynomial lower bound is given for the size of circuits of fixed depth computing the parity function and connections are given to the theory of programmable logic arrays and to the relativization of the polynomial-time hierarchy.
Abstract: A super-polynomial lower bound is given for the size of circuits of fixed depth computing the parity function. Introducing the notion of polynomial-size, constant-depth reduction, similar results are shown for the majority, multiplication, and transitive closure functions. Connections are given to the theory of programmable logic arrays and to the relativization of the polynomial-time hierarchy.
915 citations
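For readers unfamiliar with the function at the center of the lower bound above: parity is trivially computable sequentially (it is just XOR over the input bits), and the paper's result is that any fixed-depth Boolean circuit for it must nonetheless be super-polynomial in size. A minimal illustration of the function itself:

```python
# The parity function from the circuit lower bound above: parity(x) = 1
# iff an odd number of input bits are set. Easy to compute sequentially;
# the paper shows constant-depth circuits for it need super-polynomial size.
from functools import reduce
from operator import xor

def parity(bits):
    """Return 1 if an odd number of bits are set, else 0."""
    return reduce(xor, bits, 0)

print(parity([1, 0, 1, 1]))  # 1 (three 1-bits)
print(parity([1, 1, 0, 0]))  # 0 (two 1-bits)
```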
12 May 2002
TL;DR: This work considers the problem that arises when the server is overwhelmed by the volume of requests from its clients, and proposes Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server.
Abstract: In this paper, we discuss the problem of distributing streaming media content, both live and on-demand, to a large number of hosts in a scalable way. Our work is set in the context of the traditional client-server framework. Specifically, we consider the problem that arises when the server is overwhelmed by the volume of requests from its clients. As a solution, we propose Cooperative Networking (CoopNet), where clients cooperate to distribute content, thereby alleviating the load on the server. We discuss the proposed solution in some detail, pointing out the interesting research issues that arise, and present a preliminary evaluation using traces gathered at a busy news site during the flash crowd that occurred on September 11, 2001.
914 citations
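The server-overload idea behind CoopNet can be sketched in a few lines. This is a hypothetical illustration, not CoopNet's actual protocol: a server tracks recent clients per content item and, once its own serving capacity is exhausted, redirects new requests to peers that already hold the content. All class and parameter names here are illustrative assumptions.

```python
# Illustrative sketch (not CoopNet's actual protocol): an overloaded server
# redirects a new request to recent clients that already hold the content,
# instead of serving it itself.
from collections import defaultdict, deque

class RedirectingServer:
    def __init__(self, capacity=2, peers_per_reply=3):
        self.capacity = capacity          # requests the server serves itself
        self.load = 0
        self.peers_per_reply = peers_per_reply
        self.recent = defaultdict(deque)  # content id -> recent client addrs

    def request(self, client, content_id):
        holders = list(self.recent[content_id])
        if self.load >= self.capacity and holders:
            # Overloaded and peers exist: hand back a short peer list.
            reply = ("redirect", holders[-self.peers_per_reply:])
        else:
            self.load += 1
            reply = ("content", content_id)
        # Either way, this client will soon hold the content.
        self.recent[content_id].append(client)
        return reply

srv = RedirectingServer(capacity=1)
print(srv.request("c1", "news"))  # ('content', 'news') — served directly
print(srv.request("c2", "news"))  # ('redirect', ['c1'])
```

In the paper's setting, the redirected client would then fetch the stream from those peers, which is what keeps the server usable during a flash crowd.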
TL;DR: In this paper, the authors describe a supply chain modeling framework designed to reduce the time and effort required to develop models with sufficient fidelity to the actual supply chain of interest; such fidelity is essential for risk-benefit analysis of reengineering alternatives before making a final decision.
Abstract: A global economy and increase in customer expectations in terms of cost and services have put a premium on effective supply chain reengineering. It is essential to perform risk-benefit analysis of reengineering alternatives before making a final decision. Simulation provides an effective pragmatic approach to detailed analysis and evaluation of supply chain design and management alternatives. However, the utility of this methodology is hampered by the time and effort required to develop models with sufficient fidelity to the actual supply chain of interest. In this paper, we describe a supply chain modeling framework designed to overcome this difficulty. Using our approach, supply chain models are composed from software components that represent types of supply chain agents (e.g., retailers, manufacturers, transporters), their constituent control elements (e.g., inventory policy), and their interaction protocols (e.g., message types). The underlying library of supply chain modeling components has been derived from analysis of several different supply chains. It provides a reusable base of domain-specific primitives that enables rapid development of customized decision support tools.
914 citations
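The component-based composition described in the abstract above (agent types built from control elements) can be sketched briefly. All class and parameter names below are illustrative assumptions, not the paper's actual library: a retailer agent is composed with a simple (s, S) inventory-policy component.

```python
# Hypothetical sketch of component-based supply chain modeling: an agent
# type (Retailer) is composed from a reusable control element (an
# inventory policy). Names and numbers are illustrative, not the paper's.
from dataclasses import dataclass

@dataclass
class ReorderPointPolicy:
    """Simple (s, S) inventory control element."""
    reorder_point: int   # s: order when stock falls to this level
    order_up_to: int     # S: replenish back up to this level

    def order_quantity(self, on_hand: int) -> int:
        return self.order_up_to - on_hand if on_hand <= self.reorder_point else 0

@dataclass
class Retailer:
    """Supply chain agent composed from a control element."""
    name: str
    on_hand: int
    policy: ReorderPointPolicy

    def review(self) -> int:
        # Quantity to request from an upstream agent this period.
        return self.policy.order_quantity(self.on_hand)

store = Retailer("store-1", on_hand=4, policy=ReorderPointPolicy(5, 20))
print(store.review())  # 16: stock (4) is at/below the reorder point (5)
```

Swapping in a different policy object changes the agent's behavior without touching the agent class, which is the kind of reuse the framework's component library is meant to enable.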
TL;DR: In this article, a Gaussian mixture model (GMM) of the joint probability density of source and target features is employed for performing spectral conversion between speakers, and a conversion method based on the maximum-likelihood estimation of a spectral parameter trajectory is proposed.
Abstract: In this paper, we describe a novel spectral conversion method for voice conversion (VC). A Gaussian mixture model (GMM) of the joint probability density of source and target features is employed for performing spectral conversion between speakers. The conventional method converts spectral parameters frame by frame based on the minimum mean square error. Although it is reasonably effective, the deterioration of speech quality is caused by some problems: 1) appropriate spectral movements are not always caused by the frame-based conversion process, and 2) the converted spectra are excessively smoothed by statistical modeling. In order to address those problems, we propose a conversion method based on the maximum-likelihood estimation of a spectral parameter trajectory. Not only static but also dynamic feature statistics are used for realizing the appropriate converted spectrum sequence. Moreover, the oversmoothing effect is alleviated by considering a global variance feature of the converted spectra. Experimental results indicate that the performance of VC can be dramatically improved by the proposed method in view of both speech quality and conversion accuracy for speaker individuality.
914 citations
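The "conventional method" that the paper above improves on, frame-by-frame conversion based on the minimum mean square error under a joint GMM, reduces to a posterior-weighted sum of per-component conditional means. The sketch below uses toy hand-set parameters (two 1-D components), not trained values, purely to show the computation.

```python
# Minimal sketch of frame-by-frame GMM-based spectral mapping: with a GMM
# over joint (source, target) features, the MMSE estimate of a target frame
# is the posterior-weighted sum of conditional means E[y | x, m].
# Parameters below are toy values, not trained from speech data.
import numpy as np

# Two 1-D components of the joint density p(x, y).
weights = np.array([0.5, 0.5])
mu = np.array([[0.0, 0.0],   # component 0: (mu_x, mu_y)
               [2.0, 3.0]])  # component 1
cov = np.array([[[1.0, 0.5], [0.5, 1.0]],
                [[1.0, 0.2], [0.2, 1.0]]])  # per-component 2x2 covariance

def convert_frame(x):
    # Posterior responsibility P(m | x) from the source marginal p(x | m).
    var_x = cov[:, 0, 0]
    px = weights * np.exp(-0.5 * (x - mu[:, 0])**2 / var_x) / np.sqrt(2 * np.pi * var_x)
    resp = px / px.sum()
    # Conditional mean E[y | x, m] = mu_y + (cov_xy / var_x) * (x - mu_x).
    cond = mu[:, 1] + cov[:, 0, 1] / var_x * (x - mu[:, 0])
    return float(resp @ cond)

print(round(convert_frame(2.0), 3))
```

The paper's contribution replaces this independent per-frame estimate with a maximum-likelihood trajectory that also constrains dynamic (delta) features and global variance, which is what suppresses the discontinuities and over-smoothing of the frame-wise mapping.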
TL;DR: The basic theory of categorization developed in Anderson (1990) is presented; the theory has since been greatly extended and applied to many new phenomena, and these new developments and applications are described.
Abstract: A rational model of human categorization behavior is presented that assumes that categorization reflects the derivation of optimal estimates of the probability of unseen features of objects. A Bayesian analysis is performed of what optimal estimations would be if categories formed a disjoint partitioning of the object space and if features were independently displayed within a category. This Bayesian analysis is placed within an incremental categorization algorithm. The resulting rational model accounts for effects of central tendency of categories, effects of specific instances, learning of linearly nonseparable categories, effects of category labels, extraction of basic level categories, base-rate effects, probability matching in categorization, and trial-by-trial learning functions. Although the rational model considers just 1 level of categorization, it is shown how predictions can be enhanced by considering higher and lower levels. Considering prediction at the lower, individual level allows integration of this rational analysis of categorization with the earlier rational analysis of memory (Anderson & Milson, 1989). Anderson (1990) presented a rational analysis of human cognition. The term rational derives from similar "rational-man" analyses in economics. Rational analyses in other fields are sometimes called adaptationist analyses. Basically, they are efforts to explain the behavior in some domain on the assumption that the behavior is optimized with respect to some criteria of adaptive importance. This article begins with a general characterization of how one develops a rational theory of a particular cognitive phenomenon. Then I present the basic theory of categorization developed in Anderson (1990) and review the applications from that book. Since the writing of the book, the theory has been greatly extended and applied to many new phenomena. Most of this article describes these new developments and applications.
914 citations
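The abstract's core computation, predicting an unseen feature as a posterior-weighted average over disjoint categories with features independent within each category, is a short Bayes' rule calculation. The priors and likelihoods below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of the rational model's prediction step: P(unseen | observed)
# is the posterior-weighted average of per-category feature probabilities,
# under disjoint categories and within-category feature independence.
# All numbers are illustrative, not from the paper.

# Two categories with prior probabilities and per-feature likelihoods.
priors = {"A": 0.6, "B": 0.4}
p_observed = {"A": 0.9, "B": 0.2}   # P(observed feature | category)
p_unseen = {"A": 0.8, "B": 0.1}     # P(unseen feature | category)

# Posterior P(category | observed feature) via Bayes' rule.
joint = {c: priors[c] * p_observed[c] for c in priors}
z = sum(joint.values())
posterior = {c: joint[c] / z for c in joint}

# P(unseen | observed) = sum over categories of P(c | obs) * P(unseen | c).
prediction = sum(posterior[c] * p_unseen[c] for c in posterior)
print(round(prediction, 3))
```

The incremental algorithm in the paper wraps this step in a process that also decides, via a coupling prior, whether a new object joins an existing category or starts a new one; the sketch above shows only the prediction given fixed categories.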
Authors
| Name | H-index | Papers | Citations |
|---|---|---|---|
| Yi Chen | 217 | 4342 | 293080 |
| Rakesh K. Jain | 200 | 1467 | 177727 |
| Robert C. Nichol | 187 | 851 | 162994 |
| Michael I. Jordan | 176 | 1016 | 216204 |
| Jasvinder A. Singh | 176 | 2382 | 223370 |
| J. N. Butler | 172 | 2525 | 175561 |
| P. Chang | 170 | 2154 | 151783 |
| Krzysztof Matyjaszewski | 169 | 1431 | 128585 |
| Yang Yang | 164 | 2704 | 144071 |
| Geoffrey E. Hinton | 157 | 414 | 409047 |
| Herbert A. Simon | 157 | 745 | 194597 |
| Yongsun Kim | 156 | 2588 | 145619 |
| Terrence J. Sejnowski | 155 | 845 | 117382 |
| John B. Goodenough | 151 | 1064 | 113741 |
| Scott Shenker | 150 | 454 | 118017 |