
Showing papers by "Carnegie Mellon University published in 1999"


Journal ArticleDOI
TL;DR: In this paper, the authors present and justify a set of four desirable properties for measures of risk, and call the measures satisfying these properties "coherent", and demonstrate the universality of scenario-based methods for providing coherent measures.
Abstract: In this paper we study both market risks and nonmarket risks, without complete markets assumption, and discuss methods of measurement of these risks. We present and justify a set of four desirable properties for measures of risk, and call the measures satisfying these properties “coherent.” We examine the measures of risk provided and the related actions required by SPAN, by the SEC/NASD rules, and by quantile-based methods. We demonstrate the universality of scenario-based methods for providing coherent measures. We offer suggestions concerning the SEC method. We also suggest a method to repair the failure of subadditivity of quantile-based methods.
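
For reference, the four properties can be stated compactly. A sketch in standard notation, following the paper's definitions: ρ is the risk measure, X and Y are final net worths, and α is a cash amount invested at the riskless return r.

```latex
% Coherence axioms for a risk measure \rho on final net worths X, Y,
% with r the return on a riskless reference instrument.
\begin{align*}
  &\text{Translation invariance:} && \rho(X + \alpha\,r) = \rho(X) - \alpha \\
  &\text{Subadditivity:}          && \rho(X + Y) \le \rho(X) + \rho(Y) \\
  &\text{Positive homogeneity:}   && \rho(\lambda X) = \lambda\,\rho(X) \quad (\lambda \ge 0) \\
  &\text{Monotonicity:}           && X \le Y \;\Rightarrow\; \rho(Y) \le \rho(X)
\end{align*}
```

Subadditivity is the axiom that quantile-based measures such as value at risk can violate, which is the failure the paper proposes to repair.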

8,651 citations


Proceedings ArticleDOI
30 Aug 1999
TL;DR: These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45% growth of its size during that period, and can be used to generate and select realistic topologies for simulation purposes.
Abstract: Despite the apparent randomness of the Internet, we discover some surprisingly simple power-laws of the Internet topology. These power-laws hold for three snapshots of the Internet, between November 1997 and December 1998, despite a 45% growth of its size during that period. We show that our power-laws fit the real data very well resulting in correlation coefficients of 96% or higher. Our observations provide a novel perspective of the structure of the Internet. The power-laws describe concisely skewed distributions of graph properties such as the node outdegree. In addition, these power-laws can be used to estimate important parameters such as the average neighborhood size, and facilitate the design and the performance analysis of protocols. Furthermore, we can use them to generate and select realistic topologies for simulation purposes.
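
The exponents in such power-laws are estimated by a straight-line fit in log-log space; a minimal sketch of that procedure on hypothetical degree data (numpy only, not the authors' code):

```python
import numpy as np

# Hypothetical node outdegrees for one topology snapshot.
degrees = np.array([1, 1, 1, 1, 2, 2, 3, 3, 4, 5, 8, 13, 30, 75])

# The rank exponent law: sort nodes by decreasing degree; degree is
# proportional to rank raised to a constant power R.
ranks = np.arange(1, len(degrees) + 1)
sorted_deg = np.sort(degrees)[::-1]

# Least-squares line in log-log space: the slope estimates the
# exponent, and the correlation coefficient measures goodness of fit
# (the paper reports 96% or higher on real snapshots).
slope, intercept = np.polyfit(np.log(ranks), np.log(sorted_deg), 1)
corr = np.corrcoef(np.log(ranks), np.log(sorted_deg))[0, 1]
print(f"rank exponent R ~ {slope:.2f}, correlation {corr:.2f}")
```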

5,023 citations


Book
07 Jan 1999

4,478 citations


Book
30 Sep 1999
TL;DR: This edition includes recent research results pertaining to the diagnosis of discrete event systems, decentralized supervisory control, and interval-based timed automata and hybrid automata models.
Abstract: Introduction to Discrete Event Systems is a comprehensive introduction to the field of discrete event systems, offering a breadth of coverage that makes the material accessible to readers of varied backgrounds. The book emphasizes a unified modeling framework that transcends specific application areas, linking the following topics in a coherent manner: language and automata theory, supervisory control, Petri net theory, Markov chains and queuing theory, discrete-event simulation, and concurrent estimation techniques. This edition includes recent research results pertaining to the diagnosis of discrete event systems, decentralized supervisory control, and interval-based timed automata and hybrid automata models.

4,330 citations


Journal ArticleDOI
TL;DR: The performance of the genomic control method is quite good for plausible effects of liability genes, which bodes well for future genetic analyses of complex disorders.
Abstract: A dense set of single nucleotide polymorphisms (SNP) covering the genome and an efficient method to assess SNP genotypes are expected to be available in the near future. An outstanding question is how to use these technologies efficiently to identify genes affecting liability to complex disorders. To achieve this goal, we propose a statistical method that has several optimal properties: It can be used with case-control data and yet, like family-based designs, controls for population heterogeneity; it is insensitive to the usual violations of model assumptions, such as cases failing to be strictly independent; and, by using Bayesian outlier methods, it circumvents the need for Bonferroni correction for multiple tests, leading to better performance in many settings while still constraining risk for false positives. The performance of our genomic control method is quite good for plausible effects of liability genes, which bodes well for future genetic analyses of complex disorders.
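
The genomic control idea can be illustrated in a few lines: under population heterogeneity, null association statistics are inflated by a roughly constant factor λ, which a robust median estimate can divide out. A minimal sketch under that reading, on simulated statistics (the paper's full method adds a Bayesian outlier analysis on top of this):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 1-df association statistics at many null SNPs, uniformly
# inflated by population substructure (true inflation factor 1.3).
chi2_stats = 1.3 * rng.chisquare(df=1, size=5000)

# Estimate lambda from the median, which is robust to a few true
# signals; stats.chi2.ppf(0.5, df=1) is the null median, about 0.455.
lam = np.median(chi2_stats) / stats.chi2.ppf(0.5, df=1)

# Divide out the inflation before converting to p-values.
pvals = stats.chi2.sf(chi2_stats / lam, df=1)
print(f"estimated lambda = {lam:.2f}")
```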

3,130 citations


Proceedings ArticleDOI
01 Aug 1999
TL;DR: The results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category is small, and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).
Abstract: This paper reports a controlled study with statistical significance tests on five text categorization methods: the Support Vector Machines (SVM), a k-Nearest Neighbor (kNN) classifier, a neural network (NNet) approach, the Linear Least-squares Fit (LLSF) mapping and a Naive Bayes (NB) classifier. We focus on the robustness of these methods in dealing with a skewed category distribution, and their performance as a function of the training-set category frequency. Our results show that SVM, kNN and LLSF significantly outperform NNet and NB when the number of positive training instances per category is small (less than ten), and that all the methods perform comparably when the categories are sufficiently common (over 300 instances).
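
A present-day rerun of such a controlled comparison is straightforward with scikit-learn; the sketch below is not the authors' code, and the corpus is a stand-in for Reuters, but macro-averaged F1 emphasizes the rare categories the paper focuses on:

```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import cross_val_score

# Placeholder corpus with skewed category sizes; any labeled text
# collection works the same way.
data = fetch_20newsgroups(subset="train", remove=("headers", "footers"))
X = TfidfVectorizer(sublinear_tf=True).fit_transform(data.data)
y = data.target

for name, clf in [("SVM", LinearSVC()),
                  ("kNN", KNeighborsClassifier(n_neighbors=30)),
                  ("NB", MultinomialNB())]:
    # Macro-F1 averages over categories, so rare ones count equally.
    f1 = cross_val_score(clf, X, y, cv=3, scoring="f1_macro").mean()
    print(f"{name}: macro-F1 = {f1:.3f}")
```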

2,877 citations


Journal ArticleDOI
TL;DR: In this paper, a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion is presented, which is based on matching surfaces by matching points using the spin image representation.
Abstract: We present a 3D shape-based object recognition system for simultaneous recognition of multiple objects in scenes containing clutter and occlusion. Recognition is based on matching surfaces by matching points using the spin image representation. The spin image is a data level shape descriptor that is used to match surfaces represented as surface meshes. We present a compression scheme for spin images that results in efficient multiple object recognition which we verify with results showing the simultaneous recognition of multiple objects from a library of 20 models. Furthermore, we demonstrate the robust performance of recognition in the presence of clutter and occlusion through analysis of recognition trials on 100 scenes.
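
The spin image itself is simple to compute: each nearby surface point is mapped to a radial distance α from the basis point's normal line and a signed height β above its tangent plane, then binned into a 2-D histogram. A minimal sketch with hypothetical data (the paper's support-angle filtering and bilinear binning are omitted):

```python
import numpy as np

def spin_image(p, n, points, bins=16, size=1.0):
    """2-D (alpha, beta) histogram for an oriented point (p, n).

    alpha: radial distance from the normal line through p;
    beta:  signed height above the tangent plane at p.
    n is assumed to be a unit normal.
    """
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum((d * d).sum(axis=1) - beta**2, 0.0))
    img, _, _ = np.histogram2d(
        alpha, beta, bins=bins, range=[[0, size], [-size, size]])
    return img

# Hypothetical usage: random points on a unit sphere, basis at a pole.
pts = np.random.default_rng(1).normal(size=(2000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
img = spin_image(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]), pts)
print(img.shape)
```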

2,798 citations


Book ChapterDOI
22 Mar 1999
TL;DR: This paper shows how boolean decision procedures, like Stålmarck's Method or the Davis & Putnam Procedure, can replace BDDs, and introduces a bounded model checking procedure for LTL which reduces model checking to propositional satisfiability.
Abstract: Symbolic Model Checking [3, 14] has proven to be a powerful technique for the verification of reactive systems. BDDs [2] have traditionally been used as a symbolic representation of the system. In this paper we show how boolean decision procedures, like Stålmarck's Method [16] or the Davis & Putnam Procedure [7], can replace BDDs. This new technique avoids the space blow up of BDDs, generates counterexamples much faster, and sometimes speeds up the verification. In addition, it produces counterexamples of minimal length. We introduce a bounded model checking procedure for LTL which reduces model checking to propositional satisfiability. We show that bounded LTL model checking can be done without a tableau construction. We have implemented a model checker BMC, based on bounded model checking, and preliminary results are presented.
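
The core of the bounded procedure is to unroll the transition relation k steps and ask whether any path of length at most k reaches a bad state. The sketch below keeps that unrolling structure but substitutes explicit enumeration on a toy system for the propositional encoding and SAT solver, so it shows the idea rather than the scalability:

```python
def bounded_check(init, trans, bad, k):
    """Return a counterexample path of length <= k, or None.

    init:  set of initial states
    trans: function mapping a state to its successor states
    bad:   predicate identifying property violations
    Real BMC encodes this same unrolling as a propositional formula
    and hands it to a SAT solver instead of enumerating states.
    """
    frontier = [(s, [s]) for s in init]
    for _ in range(k + 1):
        next_frontier = []
        for state, path in frontier:
            if bad(state):
                return path  # found at the shallowest depth first
            for succ in trans(state):
                next_frontier.append((succ, path + [succ]))
        frontier = next_frontier
    return None

# Toy system: a mod-8 counter that must never reach 6.
print(bounded_check({0}, lambda s: [(s + 1) % 8], lambda s: s == 6, k=10))
```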

2,424 citations


Journal ArticleDOI
TL;DR: A growing body of empirical work has documented the superior performance characteristics of exporting plants and firms relative to non-exporters; as the authors discuss, good firms become exporters, and both growth rates and levels of success measures are higher ex ante for exporters.

2,416 citations


Journal ArticleDOI
TL;DR: A group-based method for identifying distinctive groups of individual trajectories within the population and for profiling the characteristics of group members is demonstrated.
Abstract: A developmental trajectory describes the course of a behavior over age or time. A group-based method for identifying distinctive groups of individual trajectories within the population and for profiling the characteristics of group members is demonstrated. Such clusters might include groups of "increasers," "decreasers," and "no changers." Suitably defined probability distributions are used to handle three data types: count, binary, and psychometric scale data. Four capabilities are demonstrated: (a) the capability to identify rather than assume distinctive groups of trajectories, (b) the capability to estimate the proportion of the population following each such trajectory group, (c) the capability to relate group membership probability to individual characteristics and circumstances, and (d) the capability to use the group membership probabilities for various other purposes such as creating profiles of group members.
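
A rough modern approximation of the group-based idea: treat each subject's repeated measures as a vector, fit a finite mixture, and profile each recovered group with a polynomial trajectory in age. This is a simplification (the paper specifies count, binary, and scale likelihoods with polynomial mean curves estimated jointly by maximum likelihood); the data below are simulated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
ages = np.arange(6, 16)                      # measurements at ages 6..15

# Simulated scale data for three latent groups: "no problem",
# "desister", and "chronic" trajectories plus noise, 40 subjects each.
curves = [0.5 + 0.0 * ages,
          4.0 - 0.3 * (ages - 6),
          5.0 + 0.0 * ages]
X = np.vstack([c + rng.normal(0, 0.5, (40, len(ages))) for c in curves])

gm = GaussianMixture(n_components=3, random_state=0).fit(X)

# (a)-(b): distinctive groups and their population proportions;
# (d): posterior membership probabilities per subject.
print("group proportions:", np.round(gm.weights_, 2))
membership = gm.predict_proba(X)

# Profile each recovered group with a quadratic trajectory in age.
for k, mean in enumerate(gm.means_):
    coef = np.polyfit(ages, mean, deg=2)
    print(f"group {k}: quadratic coefficients {np.round(coef, 2)}")
```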

2,163 citations


Journal ArticleDOI
TL;DR: Analysis and empirical evidence suggest that the evaluation results on some versions of Reuters were significantly affected by the inclusion of a large portion of unlabelled documents, making those results difficult to interpret and leading to considerable confusion in the literature.
Abstract: This paper focuses on a comparative evaluation of a wide range of text categorization methods, including previously published results on the Reuters corpus and new results of additional experiments. A controlled study using three classifiers, kNN, LLSF and WORD, was conducted to examine the impact of configuration variations in five versions of Reuters on the observed performance of classifiers. Analysis and empirical evidence suggest that the evaluation results on some versions of Reuters were significantly affected by the inclusion of a large portion of unlabelled documents, making those results difficult to interpret and leading to considerable confusion in the literature. Using the results evaluated on the other versions of Reuters, which exclude the unlabelled documents, the performances of twelve methods are compared directly or indirectly. For indirect comparisons, kNN, LLSF and WORD were used as baselines, since they were evaluated on all versions of Reuters that exclude the unlabelled documents. As a global observation, kNN, LLSF and a neural network method had the best performances; except for a Naive Bayes approach, the other learning algorithms also performed relatively well.

Journal ArticleDOI
TL;DR: This article reviews the now extensive research literature addressing the impact of accountability on a wide range of social judgments and choices and highlights the utility of treating thought as a process of internalized dialogue and the importance of documenting social and institutional boundary conditions on putative cognitive biases.
Abstract: This article reviews the now extensive research literature addressing the impact of accountability on a wide range of social judgments and choices. It focuses on 4 issues: (a) What impact do various accountability ground rules have on thoughts, feelings, and action? (b) Under what conditions will accountability attenuate, have no effect on, or amplify cognitive biases? (c) Does accountability alter how people think or merely what people say they think? and (d) What goals do accountable decision makers seek to achieve? In addition, this review explores the broader implications of accountability research. It highlights the utility of treating thought as a process of internalized dialogue; the importance of documenting social and institutional boundary conditions on putative cognitive biases; and the potential to craft empirical answers to such applied problems as how to structure accountability relationships in organizations.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that firms in geographical clusters that maintain networks rich in bridging ties and sustain ties to regional institutions are well-positioned to access new information, ideas, and opportunities.
Abstract: What explains differences in firms’ abilities to acquire competitive capabilities? In this paper we propose that a firm’s embeddedness in a network of ties is an important source of variation in the acquisition of competitive capabilities. We argue that firms in geographical clusters that maintain networks rich in bridging ties and sustain ties to regional institutions are well-positioned to access new information, ideas, and opportunities. Hypotheses based on these ideas were tested on a stratified random sample of 227 job shop manufacturers located in the Midwest United States. Data were gathered using a mailed questionnaire. Results from structural equation modeling broadly support the embeddedness hypotheses and suggest a number of insights about the link between firms’ networks and the acquisition of competitive capabilities.

Journal ArticleDOI
11 Nov 1999-Nature
TL;DR: Functional magnetic resonance imaging was used to measure brain activation during performance of a task where, for a particular subset of trials, the strength of selection-for-action is inversely related to the degree of response conflict, providing evidence in favour of the conflict-monitoring account of ACC function.
Abstract: The anterior cingulate cortex (ACC), on the medial surface of the frontal lobes of the brain, is widely believed to be involved in the regulation of attention. Beyond this, however, its specific contribution to cognition remains uncertain. One influential theory has interpreted activation within the ACC as reflecting 'selection-for-action', a set of processes that guide the selection of environmental objects as triggers of or targets for action. We have proposed an alternative hypothesis, in which the ACC serves not to exert top-down attentional control but instead to detect and signal the occurrence of conflicts in information processing. Here, to test this theory against the selection-for-action theory, we used functional magnetic resonance imaging to measure brain activation during performance of a task where, for a particular subset of trials, the strength of selection-for-action is inversely related to the degree of response conflict. Activity within the ACC was greater during trials featuring high levels of conflict (and weak selection-for-action) than during trials with low levels of conflict (and strong selection-for-action), providing evidence in favour of the conflict-monitoring account of ACC function.

Journal ArticleDOI
TL;DR: This work surveys the most widely used algorithms for smoothing n-gram language models, presents an extensive empirical comparison of several of these smoothing techniques, including those described by Jelinek and Mercer (1980), and introduces methodologies for analyzing smoothing algorithm efficacy in detail.
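
As an illustration of the family the TL;DR names, Jelinek-Mercer smoothing linearly interpolates the maximum-likelihood bigram estimate with the unigram distribution. A minimal sketch with a fixed interpolation weight (the surveyed methods tune such weights on held-out data):

```python
from collections import Counter

def jelinek_mercer(tokens, lam=0.7):
    """Return a bigram probability function P(w2 | w1) under simple
    linear interpolation: lam * ML_bigram + (1 - lam) * ML_unigram.
    A sketch; Katz, Kneser-Ney, and the other surveyed methods differ.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)

    def prob(w1, w2):
        ml_bi = bigrams[(w1, w2)] / unigrams[w1] if unigrams[w1] else 0.0
        ml_uni = unigrams[w2] / total
        return lam * ml_bi + (1 - lam) * ml_uni

    return prob

p = jelinek_mercer("the cat sat on the mat the cat ran".split())
print(p("the", "cat"), p("cat", "the"))  # unseen bigram keeps unigram mass
```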

Proceedings ArticleDOI
10 May 1999
TL;DR: The Monte Carlo localization method is introduced, where the probability density is represented by maintaining a set of samples that are randomly drawn from it, and it is shown that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location.
Abstract: To navigate reliably in indoor environments, a mobile robot must know where it is. Thus, reliable position estimation is a key problem in mobile robotics. We believe that probabilistic approaches are among the most promising candidates for providing a comprehensive and real-time solution to the robot localization problem. However, current methods still face considerable hurdles. In particular, the problems encountered are closely related to the type of representation used to represent probability densities over the robot's state space. Earlier work on Bayesian filtering with particle-based density representations opened up a new approach for mobile robot localization based on these principles. We introduce the Monte Carlo localization method, where we represent the probability density involved by maintaining a set of samples that are randomly drawn from it. By using a sampling-based representation we obtain a localization method that can represent arbitrary distributions. We show experimentally that the resulting method is able to efficiently localize a mobile robot without knowledge of its starting location. It is faster, more accurate and less memory-intensive than earlier grid-based methods.
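
The method reduces to a short loop: sample, weight, resample. A one-dimensional sketch with hypothetical motion and sensor noise parameters (not the paper's models or numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D corridor sketch of Monte Carlo localization: the robot's pose
# density is a set of samples (particles).
N = 1000
particles = rng.uniform(0, 10.0, N)      # global uncertainty at start
true_pose = 2.0

for step in range(5):
    # Predict: apply noisy motion (robot commands +1 m per step).
    true_pose += 1.0
    particles += 1.0 + rng.normal(0, 0.1, N)

    # Correct: weight particles by the likelihood of a range reading.
    z = true_pose + rng.normal(0, 0.2)   # noisy sensor measurement
    w = np.exp(-0.5 * ((particles - z) / 0.2) ** 2)
    w /= w.sum()

    # Resample: draw a new particle set proportional to the weights.
    particles = rng.choice(particles, size=N, p=w)

print(f"estimate {particles.mean():.2f} vs true pose {true_pose:.2f}")
```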

Journal ArticleDOI
TL;DR: In this article, a MATLAB implementation of infeasible path-following algorithms for solving standard semidefinite programs (SDP) is presented, and Mehrotra-type predictor-corrector variants are included.
Abstract: This software package is a MATLAB implementation of infeasible path-following algorithms for solving standard semidefinite programs (SDP). Mehrotra-type predictor-corrector variants are included. Analogous algorithms for the homogeneous formulation of the standard SDP are also implemented. Four types of search directions are available, namely, the AHO, HKM, NT, and GT directions. A few classes of SDP problems are included as well. Numerical results for these classes show that our algorithms are fairly efficient and robust on problems with dimensions of the order of a hundred.
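
SDPT3 itself is MATLAB, but the standard-form SDP it targets is easy to state with a modern modeling layer. A sketch in Python/cvxpy (not the authors' code); the test problem's optimum equals the smallest eigenvalue of C, which makes correctness easy to check:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5
C = rng.normal(size=(n, n))
C = (C + C.T) / 2  # symmetric cost matrix

# A small standard-form SDP: minimize tr(C X) s.t. tr(X) = 1, X >> 0.
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Minimize(cp.trace(C @ X)),
                  [cp.trace(X) == 1, X >> 0])
prob.solve()  # cvxpy dispatches to an interior-point solver, as SDPT3 does

print(prob.value, np.linalg.eigvalsh(C).min())  # the two should agree
```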

Journal ArticleDOI
TL;DR: In this paper, the authors summarize the recent developments in the synthesis, structural characterization, properties, and applications of amorphous and nanocrystalline soft magnetic materials, including: kinetics and thermodynamics, structure, microstructure, and intrinsic and extrinsic magnetic properties.

Journal ArticleDOI
TL;DR: A semi-parametric mixture model was used with a sample of 1,037 boys assessed repeatedly from 6 to 15 years of age to approximate a continuous distribution of developmental trajectories for three externalizing behaviors.
Abstract: A semi-parametric mixture model was used with a sample of 1,037 boys assessed repeatedly from 6 to 15 years of age to approximate a continuous distribution of developmental trajectories for three externalizing behaviors. Regression models were then used to determine which trajectories best predicted physically violent and nonviolent juvenile delinquency up to 17 years of age. Four developmental trajectories were identified for the physical aggression, opposition, and hyperactivity externalizing behavior dimensions: a chronic problem trajectory, a high level near-desister trajectory, a moderate level desister trajectory, and a no problem trajectory. Boys who followed a given trajectory for one type of externalizing problem behavior did not necessarily follow the same trajectory for the two other types of behavior problem. The different developmental trajectories of problem behavior also led to different types of juvenile delinquency. A chronic oppositional trajectory, with the physical aggression and hyperactivity trajectories being held constant, led to covert delinquency (theft) only, while a chronic physical aggression trajectory, with the oppositional and hyperactivity trajectories being held constant, led to overt delinquency (physical violence) and to the most serious delinquent acts.

Proceedings Article
23 Aug 1999
TL;DR: It is concluded that PGP 5.0 is not usable enough to provide effective security for most computer users, despite its attractive graphical user interface, supporting the hypothesis that user interface design for effective security remains an open problem.
Abstract: User errors cause or contribute to most computer security failures, yet user interfaces for security still tend to be clumsy, confusing, or near-nonexistent. Is this simply due to a failure to apply standard user interface design techniques to security? We argue that, on the contrary, effective security requires a different usability standard, and that it will not be achieved through the user interface design techniques appropriate to other types of consumer software. To test this hypothesis, we performed a case study of a security program which does have a good user interface by general standards: PGP 5.0. Our case study used a cognitive walkthrough analysis together with a laboratory user test to evaluate whether PGP 5.0 can be successfully used by cryptography novices to achieve effective electronic mail security. The analysis found a number of user interface design flaws that may contribute to security failures, and the user test demonstrated that when our test participants were given 90 minutes in which to sign and encrypt a message using PGP 5.0, the majority of them were unable to do so successfully. We conclude that PGP 5.0 is not usable enough to provide effective security for most computer users, despite its attractive graphical user interface, supporting our hypothesis that user interface design for effective security remains an open problem. We close with a brief description of our continuing work on the development and application of user interface design principles and techniques for security.

Journal ArticleDOI
TL;DR: In this article, the authors use a dynamic model to predict changes in a firm's systematic risk, and its expected return, and show that the model simultaneously reproduces the time-series relation between the book-to-market ratio and asset returns, the cross-sectional relation between book to market, market value, and return, contrarian effects at short horizons, momentum effects at longer horizons and the inverse relation between interest rates and the market risk premium.
Abstract: As a consequence of optimal investment choices, a firm's assets and growth options change in predictable ways. Using a dynamic model, we show that this imparts predictability to changes in a firm's systematic risk, and its expected return. Simulations show that the model simultaneously reproduces: (i) the time-series relation between the book-to-market ratio and asset returns; (ii) the cross-sectional relation between book-to-market, market value, and return; (iii) contrarian effects at short horizons; (iv) momentum effects at longer horizons; and (v) the inverse relation between interest rates and the market risk premium. Recent empirical research in finance has focused on regularities in the cross section of expected returns that appear anomalous relative to traditional models. Stock returns are related to book-to-market and market value. Past returns have also been shown to predict relative performance, through the documented success of contrarian and momentum strategies. Existing explanations for these results are that they are due to behavioral biases or risk premia for omitted state variables. These competing explanations are difficult to evaluate without models that explicitly tie the characteristics of interest to risks and risk premia. For example, with respect to book-to-market, Lakonishok et al. (1994) argue: "The point here is simple: although the returns to the B/M strategy are impressive, B/M is not a 'clean' variable uniquely associated with eco

Journal ArticleDOI
TL;DR: A version of Markov localization which provides accurate position estimates and which is tailored towards dynamic environments, and includes a filtering technique which allows a mobile robot to reliably estimate its position even in densely populated environments in which crowds of people block the robot's sensors for extended periods of time.
Abstract: Localization, that is the estimation of a robot's location from sensor data, is a fundamental problem in mobile robotics. This paper presents a version of Markov localization which provides accurate position estimates and which is tailored towards dynamic environments. The key idea of Markov localization is to maintain a probability density over the space of all locations of a robot in its environment. Our approach represents this space metrically, using a fine-grained grid to approximate densities. It is able to globally localize the robot from scratch and to recover from localization failures. It is robust to approximate models of the environment (such as occupancy grid maps) and noisy sensors (such as ultrasound sensors). Our approach also includes a filtering technique which allows a mobile robot to reliably estimate its position even in densely populated environments in which crowds of people block the robot's sensors for extended periods of time. The method described here has been implemented and tested in several real-world applications of mobile robots, including the deployments of two mobile robots as interactive museum tour-guides.
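
Grid-based Markov localization maintains the full density as a histogram over locations and alternates motion and sensor updates. A one-dimensional sketch on a circular corridor with a hypothetical door map (the paper's fine-grained metric grid and crowd-filtering technique are omitted):

```python
import numpy as np

# Map of a circular corridor with 10 cells; 1 marks a door.
doors = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0])
belief = np.full(10, 0.1)               # global uncertainty: uniform

def motion_update(belief):
    """Shift belief one cell right, with diffusion for motion noise."""
    b = 0.8 * np.roll(belief, 1) + 0.1 * belief + 0.1 * np.roll(belief, 2)
    return b / b.sum()

def sensor_update(belief, saw_door):
    """Reweight cells by how well they explain the observation."""
    likelihood = np.where(doors == saw_door, 0.9, 0.1)
    b = belief * likelihood
    return b / b.sum()

for saw_door in [1, 0, 0, 1]:           # observations as the robot moves
    belief = sensor_update(motion_update(belief), saw_door)

print(np.round(belief, 2), "-> most likely cell", belief.argmax())
```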

Journal ArticleDOI
TL;DR: A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information.
Abstract: A novel scene reconstruction technique is presented, different from previous approaches in its ability to cope with large changes in visibility and its modeling of intrinsic scene color and texture information. The method avoids image correspondence problems by working in a discretized scene space whose voxels are traversed in a fixed visibility ordering. This strategy takes full account of occlusions and allows the input cameras to be far apart and widely distributed about the environment. The algorithm identifies a special set of invariant voxels which together form a spatial and photometric reconstruction of the scene, fully consistent with the input images. The approach is evaluated with images from both inward-facing and outward-facing cameras.

01 Jan 1999
TL;DR: This paper uses maximum entropy techniques for text classification by estimating the conditional distribution of the class variable given the document by comparing accuracy to naive Bayes and showing that maximum entropy is sometimes significantly better, but also sometimes worse.
Abstract: This paper proposes the use of maximum entropy techniques for text classification. Maximum entropy is a probability distribution estimation technique widely used for a variety of natural language tasks, such as language modeling, part-of-speech tagging, and text segmentation. The underlying principle of maximum entropy is that without external knowledge, one should prefer distributions that are uniform. Constraints on the distribution, derived from labeled training data, inform the technique where to be minimally non-uniform. The maximum entropy formulation has a unique solution which can be found by the improved iterative scaling algorithm. In this paper, maximum entropy is used for text classification by estimating the conditional distribution of the class variable given the document. In experiments on several text datasets we compare accuracy to naive Bayes and show that maximum entropy is sometimes significantly better, but also sometimes worse. Much future work remains, but the results indicate that maximum entropy is a promising technique for text classification.
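
Conditional maximum entropy with count features optimizes the same convex objective as multinomial logistic regression, so the training loop fits in a few lines; the sketch below uses plain gradient ascent in place of the paper's improved iterative scaling (toy data, hypothetical names):

```python
import numpy as np

def train_maxent(X, y, n_classes, lr=0.1, steps=500):
    """Conditional maxent via gradient ascent on the log-likelihood.

    X holds per-document feature counts, y class indices. The gradient
    is the observed feature expectation minus the model's expectation,
    the defining condition of the maximum entropy solution.
    """
    W = np.zeros((n_classes, X.shape[1]))
    Y = np.eye(n_classes)[y]                 # one-hot labels
    for _ in range(steps):
        scores = X @ W.T
        scores -= scores.max(axis=1, keepdims=True)  # stability
        P = np.exp(scores)
        P /= P.sum(axis=1, keepdims=True)    # model's conditional p(c|d)
        W += lr * (Y - P).T @ X / len(X)
    return W

# Toy documents as bag-of-words counts over a 4-word vocabulary.
X = np.array([[2, 0, 1, 0], [1, 0, 2, 0], [0, 2, 0, 1], [0, 1, 0, 2]], float)
y = np.array([0, 0, 1, 1])
W = train_maxent(X, y, n_classes=2)
print((X @ W.T).argmax(axis=1))  # predicted classes for the training docs
```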

Journal ArticleDOI
TL;DR: The software architecture of an autonomous, interactive tour-guide robot is presented; it integrates localization, mapping, collision avoidance, planning, and various modules concerned with user interaction and Web-based telepresence, and it enables robots to operate safely, reliably, and at high speeds in highly dynamic environments.

Journal ArticleDOI
TL;DR: This paper explored the development of reading skill and bases of developmental dyslexia using connectionist models and found that representing phonological knowledge in an attractor network yielded improved learning and generalization.
Abstract: The development of reading skill and bases of developmental dyslexia were explored using connectionist models. Four issues were examined: the acquisition of phonological knowledge prior to reading, how this knowledge facilitates learning to read, phonological and nonphonological bases of dyslexia, and effects of literacy on phonological representation. Compared with simple feedforward networks, representing phonological knowledge in an attractor network yielded improved learning and generalization. Phonological and surface forms of developmental dyslexia, which are usually attributed to impairments in distinct lexical and nonlexical processing “routes,” were derived from different types of damage to the network. The results provide a computationally explicit account of many aspects of reading acquisition using connectionist principles. Phonological information plays a central role in learning to read and in skilled reading. Several converging sources of evidence indicate that learning to relate the spoken and written forms of language is a critical step in learning to read (see Adams, 1990, for an extensive review). Children's knowledge of the phonological structure of language is a good predictor of early reading ability (Bradley & Bryant, 1983; Tunmer & Nesdale, 1985; Mann, 1984; Olson, Wise, Conners, Rack, & Fulker, 1989; Shankweiler & Liberman, 1989) and impairments in the representation or processing of phonological information are implicated in at least some forms of developmental dyslexia (Manis, Seidenberg, Doi, McBride-Chang, & Peterson, 1996; Stanovich, Siegel, & Gottardo, 1997). Use of phonolog

Journal ArticleDOI
TL;DR: This work presents an algorithm that establishes a tight bound within a provably minimal amount of search, and shows how to distribute the desired search across self-interested manipulative agents.

Proceedings ArticleDOI
01 Jun 1999
TL;DR: This paper applies bounded model checking to equivalence and invariant checking and presents several optimizations that reduce the size of generated propositional formulas in hardware verification.
Abstract: In this paper, we study the application of propositional decision procedures in hardware verification. In particular, we apply bounded model checking to equivalence and invariant checking. We present several optimizations that reduce the size of generated propositional formulas. In many instances, our SAT-based approach can significantly outperform BDD-based approaches. We observe that SAT-based techniques are particularly efficient in detecting errors in both combinational and sequential designs.

Proceedings ArticleDOI
10 May 1999
TL;DR: An interactive tour-guide robot is described, which was successfully exhibited in a Smithsonian museum, and uses learning pervasively at all levels of the software architecture to address issues such as safe navigation in unmodified and dynamic environments, and short-term human-robot interaction.
Abstract: This paper describes an interactive tour-guide robot, which was successfully exhibited in a Smithsonian museum. During its two weeks of operation, the robot interacted with thousands of people, traversing more than 44 km at speeds of up to 163 cm/sec. Our approach specifically addresses issues such as safe navigation in unmodified and dynamic environments, and short-term human-robot interaction. It uses learning pervasively at all levels of the software architecture.

Journal ArticleDOI
TL;DR: Assessment of the approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts, using a new probabilistically motivated error metric.
Abstract: This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms.
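
The error metric the paper motivates can be sketched directly: pick a window width k and measure how often the reference and hypothesized segmentations disagree about whether two positions k apart lie in the same segment. A minimal version, assuming segmentations are given as lists of segment lengths:

```python
def p_k(reference, hypothesis, k=None):
    """Probability that two positions k apart are classified
    inconsistently (same segment vs. different segments) by the
    reference and hypothesis. A sketch of the metric's definition;
    the paper develops and motivates it in more detail.
    """
    def labels(seg_lengths):
        out = []
        for seg_id, length in enumerate(seg_lengths):
            out.extend([seg_id] * length)
        return out

    ref, hyp = labels(reference), labels(hypothesis)
    if k is None:  # common convention: half the mean segment size
        k = max(1, round(len(ref) / len(reference) / 2))
    errors = sum(
        (ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
        for i in range(len(ref) - k))
    return errors / (len(ref) - k)

# Hypothetical reference vs. system output (segment lengths sum to 15).
print(p_k([5, 5, 5], [5, 10]))
```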