
Showing papers by "Mitre Corporation" published in 2000


Proceedings ArticleDOI
31 Jul 2000
TL;DR: A set of experiments finds that many commonly used significance tests underestimate significance and so are less likely to detect differences that exist between techniques; tests that avoid the violated independence assumption, including computationally-intensive randomization tests, are recommended.
Abstract: Statistical significance testing of differences in values of metrics like recall, precision and balanced F-score is a necessary part of empirical natural language processing. Unfortunately, we find in a set of experiments that many commonly used tests often underestimate the significance and so are less likely to detect differences that exist between different techniques. This underestimation comes from an independence assumption that is often violated. We point out some useful tests that do not make this assumption, including computationally-intensive randomization tests.
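As a rough illustration of the randomization tests the paper recommends, the sketch below runs an approximate paired permutation test on the difference in overall F-score between two systems scored on the same documents. It is a minimal sketch: the per-document (tp, fp, fn) layout, function names, and trial count are illustrative assumptions, not the authors' code.

```python
import random

def f_score(tp, fp, fn):
    """Balanced F-score from true positive, false positive, false negative counts."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def randomization_test(per_doc_a, per_doc_b, trials=10_000, seed=0):
    """Approximate paired permutation test on the F-score difference.

    per_doc_a, per_doc_b: lists of (tp, fp, fn) counts per document for
    systems A and B on the same documents. Returns a two-sided p-value.
    """
    rng = random.Random(seed)

    def overall_diff(a, b):
        fa = f_score(*map(sum, zip(*a)))  # pool counts, then score
        fb = f_score(*map(sum, zip(*b)))
        return abs(fa - fb)

    observed = overall_diff(per_doc_a, per_doc_b)
    hits = 0
    for _ in range(trials):
        # Randomly swap which system produced each document's output.
        a, b = [], []
        for x, y in zip(per_doc_a, per_doc_b):
            if rng.random() < 0.5:
                x, y = y, x
            a.append(x)
            b.append(y)
        if overall_diff(a, b) >= observed:
            hits += 1
    return (hits + 1) / (trials + 1)
```

Shuffling outputs within each document pair respects the pairing that the violated independence assumption ignores.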

436 citations


Journal ArticleDOI
01 Apr 2000
TL;DR: Theoretical background and implementation details of SEMINT are provided and experimental results from large and complex real databases are presented.
Abstract: One step in interoperating among heterogeneous databases is semantic integration: identifying relationships between attributes or classes in different database schemas. SEMantic INTegrator (SEMINT) is a tool based on neural networks to assist in identifying attribute correspondences in heterogeneous databases. SEMINT supports access to a variety of database systems and utilizes both schema information and data contents to produce rules for matching corresponding attributes automatically. This paper provides theoretical background and implementation details of SEMINT. Experimental results from large and complex real databases are presented. We discuss the effectiveness of SEMINT and our experiences with attribute correspondence identification in various environments. © 2000 Elsevier Science B.V. All rights reserved.

428 citations


Proceedings ArticleDOI
03 Oct 2000
TL;DR: An annotation scheme for temporal expressions, and a method for resolving temporal expressions in print and broadcast news, based on both hand-crafted and machine-learnt rules are described.
Abstract: We introduce an annotation scheme for temporal expressions, and describe a method for resolving temporal expressions in print and broadcast news. The system, which is based on both hand-crafted and machine-learnt rules, achieves an 83.2% accuracy (F-measure) against hand-annotated data. Some initial steps towards tagging event chronologies are also described.
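For flavor, here is a minimal sketch of the kind of hand-crafted resolution rule such a system combines with machine-learnt ones (not shown): mapping a relative temporal expression to a calendar date given the document's reference date. The rule set and coverage are illustrative assumptions, not the paper's annotation scheme.

```python
from datetime import date, timedelta

WEEKDAYS = ["monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"]

def resolve(expr, ref):
    """Map a temporal expression to a calendar date, given a reference date."""
    e = expr.lower()
    if e == "today":
        return ref
    if e == "yesterday":
        return ref - timedelta(days=1)
    if e == "tomorrow":
        return ref + timedelta(days=1)
    if e.startswith("last ") and e[5:] in WEEKDAYS:
        target = WEEKDAYS.index(e[5:])
        back = (ref.weekday() - target - 1) % 7 + 1  # most recent strictly-past weekday
        return ref - timedelta(days=back)
    return None  # fall through to other rules / learned classifier

print(resolve("yesterday", date(2000, 10, 3)))    # 2000-10-02
print(resolve("last friday", date(2000, 10, 3)))  # 2000-09-29
```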

392 citations


Journal ArticleDOI
01 Mar 2000
TL;DR: In this paper, the authors present a design for a molecular-scale electronic half adder and a full adder based on molecular wires and diode switches, which correspond to conductive monomolecular circuits that would be one million times smaller in area than the corresponding micron-scale digital logic circuits fabricated on conventional solid-state semiconductor computer chips.
Abstract: Recently, there have been significant advances in the fabrication and demonstration of individual molecular electronic wires and diode switches. This paper reviews those developments and shows how demonstrated molecular devices might be combined to design molecular-scale electronic digital computer logic. The design for the demonstrated rectifying molecular diode switches is refined and made more compatible with the demonstrated wires through the introduction of intramolecular dopant groups chemically bonded to modified molecular wires. Quantum mechanical calculations are performed to characterize some of the electrical properties of the proposed molecular diode switches. Explicit structural designs are displayed for AND, OR, and XOR gates that are built from molecular wires and molecular diode switches. The diode-based molecular electronic logic gates are combined to produce a design for a molecular-scale electronic half adder and a molecular-scale electronic full adder. These designs correspond to conductive monomolecular circuit structures that would be one million times smaller in area than the corresponding micron-scale digital logic circuits fabricated on conventional solid-state semiconductor computer chips. It appears likely that these nanometer-scale molecular electronic logic circuits could be fabricated and tested in the foreseeable future. At the very least, such molecular circuit designs constitute an exploration of the ultimate limits of electronic computer circuit miniaturization.
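As a logic-level check of the compositions described above (the paper realizes the gates with molecular wires and diode switches; this sketch only verifies the Boolean structure in ordinary code), a half adder is an XOR plus an AND, and a full adder chains two half adders with an OR on the carries:

```python
# Boolean composition of the adder designs: sum = XOR, carry = AND,
# and a full adder built from two half adders plus an OR on the carries.

def half_adder(a, b):
    return a ^ b, a & b          # (sum, carry)

def full_adder(a, b, cin):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2           # (sum, carry-out)

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```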

366 citations


Journal ArticleDOI
TL;DR: It is demonstrated that by using an adaptive space-time array the interference from multiple, strong interferers plus multipath can be canceled down close to the noise floor without producing serious loss or distortion of a GPS signal.
Abstract: We have demonstrated that by using an adaptive space-time array the interference from multiple, strong interferers plus multipath can be canceled down close to the noise floor without producing serious loss or distortion of a GPS signal. Design criteria are presented and limitations are examined. We also compare space-time processing with suboptimum space-frequency processing, and demonstrate by simulation that for equal computational complexity space-time processing slightly outperforms suboptimum space-frequency processing.
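A common formulation of the adaptive-array weights behind this kind of interference cancellation is the minimum-variance distortionless-response (MVDR) solution over stacked space-time snapshots, sketched below. This is a generic textbook criterion offered for orientation under assumed data shapes, not the authors' specific design.

```python
import numpy as np

def mvdr_weights(snapshots, steering):
    """MVDR space-time weights: w = R^{-1} s / (s^H R^{-1} s).

    snapshots: (num_antennas*num_taps, num_snapshots) complex array of
    stacked space-time samples; steering: space-time steering vector
    toward the GPS satellite (assumed known here).
    """
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    # Diagonal loading for numerical robustness (loading level is illustrative).
    R += 1e-3 * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    Ri_s = np.linalg.solve(R, steering)
    return Ri_s / (steering.conj() @ Ri_s)
```

The constraint keeps unit gain toward the satellite while minimizing output power, which is what drives interferers down toward the noise floor.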

321 citations


Journal Article
TL;DR: Andes as discussed by the authors is an Intelligent Tutoring System for introductory college physics that encourages the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, and facilitates transfer from the system by making the interface as much like a piece of paper as possible.
Abstract: Andes is an Intelligent Tutoring System for introductory college physics. The fundamental principles underlying the design of Andes are: (1) encourage the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, (2) facilitate transfer from the system by making the interface as much like a piece of paper as possible, (3) give immediate feedback after each action to maximize the opportunities for learning and minimize the amount of time spent going down wrong paths, and (4) give the student flexibility in the order in which actions are performed, and allow them to skip steps when appropriate. This paper gives an overview of Andes, focusing on the overall architecture and the student's experience using the system.

219 citations


Book ChapterDOI
19 Jun 2000
TL;DR: This paper gives an overview of Andes, focusing on the overall architecture and the student's experience using the system.
Abstract: Andes is an Intelligent Tutoring System for introductory college physics. The fundamental principles underlying the design of Andes are: (1) encourage the student to construct new knowledge by providing hints that require them to derive most of the solution on their own, (2) facilitate transfer from the system by making the interface as much like a piece of paper as possible, (3) give immediate feedback after each action to maximize the opportunities for learning and minimize the amount of time spent going down wrong paths, and (4) give the student flexibility in the order in which actions are performed, and allow them to skip steps when appropriate. This paper gives an overview of Andes, focusing on the overall architecture and the student's experience using the system.

213 citations


Journal Article
TL;DR: The use of a recommender system to enable continuous knowledge acquisition and individualized tutoring of application software across an organization is described, and the results of a year-long naturalistic inquiry into an application's usage patterns, based on logging users' actions, are presented.
Abstract: We describe the use of a recommender system to enable continuous knowledge acquisition and individualized tutoring of application software across an organization. Installing such systems will result in the capture of evolving expertise and in organization-wide learning (OWL). We present the results of a year-long naturalistic inquiry into an application's usage patterns, based on logging users' actions. We analyze the data to develop user models, individualized expert models, confidence intervals, and instructional indicators. We show how this information could be used to tutor users.

Introduction. Recommender systems typically help people select products, services, and information. A novel application of recommender systems is to help individuals select 'what to learn next' by recommending knowledge that their peers have found useful. For example, people typically utilize only a small portion of a software application's functionality (one study shows users applying less than 10% of Microsoft Word's commands). A recommender system can unobtrusively note which portions of an application's functionality the members of an organization find useful, group the organization's members into sets of similar users, or peers (based on similar demographic factors such as job title, or similarities in command usage patterns), and produce recommendations for learning that are specific to the individual in the context of his/her organization, peers, and current activities. This paper reports research on a recommender system (Resnick & Varian, 1997) intended to promote gradual but perpetual performance improvement in the use of application software. We present our rationale, an analysis of a year's collected data, and a vision of how users might learn from the system. We have worked with one commercial application, and believe our approach is generally applicable. The research explores the potential of a new sort of user modeling based on summaries of logged user data. This method of user modeling enables the observation of a large number of users over a long period of time, enables concurrent development of student models and individualized expert models, and applies recommender system techniques to on-the-job instruction. Earlier work is reported in Linton (1990) and Linton (1996). Kay and Thomas (1995) and Thomas (1996) report on related work with a text editor in an academic environment. A recommender system to enhance the organization-wide learning of application software is a means of promoting organizational learning (Senge, 1990). By pooling and sharing expertise, recommender systems augment and assist the natural social process of people learning from each other. This approach is quite distinct from systems, such as Microsoft's Office Assistant, which recommend new commands based on their logical equivalence to the less-efficient way a user may be performing a task. The system presented here will (1) capture evolving expertise from a community of practice (Lave & Wenger, 1991), (2) support less-skilled members of the community in acquiring expertise, and (3) serve as an organizational memory for the expertise it captures. "In many workplaces ... mastery is in short supply and what is required is a kind of collaborative bootstrapping of expertise." (Eales & Welch, 1995, p. 100)

The main goal of the approach taken in this work is to continuously improve the performance of application users by providing individualized modeling and coaching based on the automated comparison of user models to expert models. The system described here would be applicable in any situation where a number of application users perform similar tasks on networked computers. In the remainder of this section we describe the logging process and make some initial remarks about modeling and coaching software users. We then present an analysis of the data we have logged and our process of creating individual models of expertise. In the final section we describe further work and close with a summary. Each time a user issues a Word command such as Cut or Paste, the command is written to the log, together with a time stamp, and then executed. The logger, called OWL for Organization-Wide Learning, comes up when the user opens Word; it creates a separate log for each file the user edits, and when the user quits Word, it sends the logs to a server where they are periodically loaded into a database for analysis. A toolbar button labeled 'OWL is ON' (or OFF) informs users of OWL's state and gives them control.

Individual models of expertise. We have selected the Edit commands for further analysis. A similar analysis could be performed for each type of command. The first of the three tables in Figure 1 presents data on the Edit commands for each of our 16 users. In the table, each column contains data for one user and each row contains data for one command (Edit commands that were not used have been omitted). A cell, then, contains the count of the number of times the individual has used the command. The columns have been sorted so that the person using the most commands is on the left and the person using the fewest is on the right. Similarly, the rows have been sorted so that the most frequently used command is in the top row and the least frequently used command is in the bottom row. Consequently the cells with the largest values are in the upper left corner and those with the smallest values are in the lower right corner. The table has been shaded to make the contours of the numbers visible: the largest numbers have the darkest shading and the smallest numbers have no shading; each shade indicates an order of magnitude. Inspection of the first table reveals that users tend to acquire the Edit commands in a specific sequence, i.e., those that know fewer commands know a subset of the commands used by their more-knowledgeable peers. If instead users acquired commands in an idiosyncratic order, the data would not sort as it does. And if they acquired commands in a manner that strongly reflected their job tasks or their writing tasks, there would be subgroups of users who shared common commands. Also, the more-knowledgeable users do not replace commands learned early on with more powerful commands, but instead keep adding new commands to their repertoire. Finally, the sequence of command acquisition corresponds to the commands' frequency of use. While this last point is not necessarily a surprise, neither is it a given. There are some peaks and valleys in the data as sorted, and a fairly rough edge where commands transition from being used rarely to being used not at all.

These peaks, valleys, and rough edges may represent periods of repetitive tasks or lack of data, respectively, or they may represent overdependence on some command that has a more powerful substitute, or ignorance of a command or of a task (a sequence of commands) that uses the command. In other words, some of the peaks, valleys, and rough edges may represent opportunities to learn more effective use of the software. In the second table in Figure 1 the data have been smoothed. The observed value in each cell has been replaced by an expected value, the most likely value for the cell, using a method taken from statistics, based on the row, column, and grand totals for the table (Howell, 1982). In the case of software use, the row effect is the overall relative utility of the command (for all users) and the column effect is the usage of related commands by the individual user. The expected value is the usage the command would have if the individual used it in a manner consistent with his/her usage of related commands and consistent with his/her peers' usage of the command. These expected values are a new kind of expert model, one that is unique to each individual and each moment in time; the expected value in each cell reflects the individual's use of related commands, and one's peers' use of the same command. The reason for differences between observed and expected values, between one's actual and expert model, might have several explanations, such as the individual's tasks, preferences, experiences, or hardware, but we are most interested when the difference indicates a lack of knowledge or skill.
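The smoothing step described above is standard contingency-table analysis: each expected cell is the product of its row and column totals divided by the grand total. A minimal sketch, with illustrative command counts (the command names and numbers are assumptions, not the study's data):

```python
import numpy as np

def expected_counts(table):
    """Expected cell values under the row x column independence model:
    E[i, j] = row_total_i * column_total_j / grand_total.

    Rows are commands, columns are users; E[i, j] is the usage a command
    'would have' if the user were consistent with peers and with their
    own use of related commands.
    """
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    return row @ col / table.sum()

counts = [[120, 40, 5],   # e.g. Paste
          [ 60, 20, 2],   # e.g. Cut
          [ 10,  3, 0]]   # e.g. Paste Special
gaps = expected_counts(counts) - counts  # large positive gap: candidate coaching topic
```

Cells where the observed count falls well below the expected value are the "opportunities to learn" the paper describes.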

183 citations


Patent
05 Jan 2000
TL;DR: A hand-held antenna specifically for GPS applications is provided which includes a microstrip patch antenna having a ground board, a single radiating patch spaced from the ground board, and a resonant cavity defined between the ground board and the single radiating patch.
Abstract: A hand-held antenna specifically for GPS applications is provided which includes a microstrip patch antenna having a ground board, a single radiating patch spaced from the ground board and a resonant cavity defined between the ground board and the single radiating patch. Feed points are provided, one in the geometrical center of the radiating patch, and one, two, or four equidistantly spaced from the central feed point and disposed at 90° angular intervals. A feed network couples fundamental modes of excitation to the side feed points on the patch and a higher mode of excitation to the central feed point. Amplitude and phase controllers are provided in the feed network for amplitude and phase shifting between the fundamental and higher order modes of excitation in order to steer a spatial null in azimuth and elevation.

171 citations


Proceedings ArticleDOI
03 Jul 2000
TL;DR: It is shown that bundles can be modified to remove all inbound linking paths, if encryption does not overlap in the two protocols, and that the resulting bundle does not depend on any activity of the secondary protocol.
Abstract: One protocol (called the primary protocol) is independent of other protocols (jointly called the secondary protocol) if the question whether the primary protocol achieves a security goal never depends on whether the secondary protocol is in use. We use multiprotocol strand spaces to prove that two cryptographic protocols are independent if they use encryption in non-overlapping ways. This theorem applies even if the protocols share public key certificates and secret key "tickets". We use the method of Guttman et al. (2000) to study penetrator paths, namely sequences of penetrator actions connecting regular nodes (message transmissions or receptions) in the two protocols. Of special interest are inbound linking paths, which lead from a message transmission in the secondary protocol to a message reception in the primary protocol. We show that bundles can be modified to remove all inbound linking paths, if encryption does not overlap in the two protocols. The resulting bundle does not depend on any activity of the secondary protocol. We illustrate this method using the Neuman-Stubblebine protocol as an example.

152 citations


Book ChapterDOI
TL;DR: The authors asked some of the best-known researchers in the field "What is a Learning Classifier System?"; these are their answers.
Abstract: We asked "What is a Learning Classifier System" to some of the best-known researchers in the field. These are their answers.

Proceedings Article
01 May 2000
TL;DR: Annotation graphs, as described in this paper, are a formal model for annotating linguistic artifacts, from which an application programming interface (API) to a suite of tools for manipulating these annotations can be derived.
Abstract: We describe a formal model for annotating linguistic artifacts, from which we derive an application programming interface (API) to a suite of tools for manipulating these annotations. The abstract logical model provides for a range of storage formats and promotes the reuse of tools that interact through this API. We focus first on “Annotation Graphs,” a graph model for annotations on linear signals (such as text and speech) indexed by intervals, for which efficient database storage and querying techniques are applicable. We note how a wide range of existing annotated corpora can be mapped to this annotation graph model. This model is then generalized to encompass a wider variety of linguistic “signals,” including both naturally occurring phenomena (as recorded in images, video, multi-modal interactions, etc.), as well as the derived resources that are increasingly important to the engineering of natural language processing systems (such as word lists, dictionaries, aligned bilingual corpora, etc.). We conclude with a review of the current efforts towards implementing key pieces of this architecture.
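A minimal rendering of the annotation-graph idea: nodes optionally anchored to offsets in the signal, with typed, labeled arcs spanning them. The field names and the tiny example are assumptions for illustration, not the paper's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    id: int
    offset: float | None = None   # anchor into the signal (e.g. seconds); may be unanchored

@dataclass(frozen=True)
class Arc:
    src: int
    dst: int
    tier: str    # annotation type, e.g. "word", "phone", "speaker"
    label: str

@dataclass
class AnnotationGraph:
    nodes: dict[int, Node] = field(default_factory=dict)
    arcs: list[Arc] = field(default_factory=list)

    def annotate(self, src, dst, tier, label):
        self.arcs.append(Arc(src, dst, tier, label))

g = AnnotationGraph({0: Node(0, 0.00), 1: Node(1, 0.32), 2: Node(2, 0.51)})
g.annotate(0, 1, "word", "hello")
g.annotate(1, 2, "word", "world")
g.annotate(0, 2, "speaker", "A")   # annotations may span other annotations
```

Because arcs are just interval-indexed records, they map naturally onto database storage and range queries, which is the efficiency point the abstract makes.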

Journal ArticleDOI
TL;DR: This paper provides a more exact analysis of code-tracking accuracy for early-late discriminators processing conventional binary phase shift keyed signals in white noise using linear models based on a small-error assumption.
Abstract: Code-tracking accuracy, an important attribute of GPS receivers, depends both on characteristics of the signal being tracked and on the design of the receiver. A simple expression has been available to predict code-tracking accuracy for early-late processing of signals with sinc-squared spectra in white noise for an infinite front-end bandwidth receiver. However, the literature has not indicated when this approximation holds. This paper provides a more exact analysis of code-tracking accuracy for early-late discriminators processing conventional binary phase shift keyed signals in white noise. New analytical expressions apply for various front-end bandwidths, discriminator spacings, and code-tracking loop bandwidths, while using linear models based on a small-error assumption. A theoretical lower bound is also supplied, indicating inherent limits on accuracy for given conditions. While evaluation of the exact expressions requires numerical integrations, new algebraic approximations are also provided. Numerical results compare the various expressions and approximations.
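For reference, the "simple expression" alluded to above is commonly quoted in GPS texts in the following form for a noncoherent early-late power discriminator with infinite front-end bandwidth (quoted here from standard references rather than from this paper's more exact generalizations):

```latex
% Widely cited small-error approximation, noncoherent early-late power
% discriminator, infinite front-end bandwidth, sinc^2 spectrum (units: chips^2):
\sigma_{\mathrm{DLL}}^{2} \;\approx\; \frac{B_L\, d}{2\,(C/N_0)}
    \left[\, 1 + \frac{2}{(2-d)\, T\,(C/N_0)} \,\right]
% B_L: code-loop bandwidth (Hz); d: early-late spacing (chips);
% T: coherent integration time (s); C/N_0: carrier-to-noise density (ratio-Hz).
```

The paper's contribution is to show when this approximation holds and to replace it with expressions valid for finite front-end bandwidths.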

Journal ArticleDOI
TL;DR: Chirps are highly specific and sensitive spectrographic signatures of epileptic seizure activity; they may serve as templates for matched-filter design to detect seizures and, as such, can demonstrate localization and propagation of seizures from an epileptic focus.
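A matched-filter detector built from a chirp template might look like the sketch below. The sampling rate and the falling 8-to-3 Hz frequency trajectory are illustrative assumptions, not the paper's measured signatures.

```python
import numpy as np
from scipy.signal import chirp

fs = 256                        # EEG sampling rate (Hz), illustrative
t = np.arange(0, 2.0, 1 / fs)
# Template: frequency falling from ~8 Hz to ~3 Hz, the kind of slowing
# "chirp" associated with seizure activity (endpoints are assumptions).
template = chirp(t, f0=8.0, f1=3.0, t1=t[-1], method="linear")
template /= np.linalg.norm(template)

def matched_filter(eeg):
    """Cross-correlate an EEG trace with the unit-energy chirp template;
    peaks above a chosen threshold flag candidate seizure segments."""
    return np.correlate(eeg, template, mode="valid")
```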

Journal ArticleDOI
Chris Clifton1
TL;DR: This paper shows how lower bounds from pattern recognition theory can be used to determine sample sizes where data mining tools cannot obtain reliable results.
Abstract: Data mining introduces new problems in database security. The basic problem of using non-sensitive data to infer sensitive data is made more difficult by the "probabilistic" inferences possible with data mining. This paper shows how lower bounds from pattern recognition theory can be used to determine sample sizes where data mining tools cannot obtain reliable results.

Journal ArticleDOI
TL;DR: This paper describes how artificial neural networks were applied in a database integration problem and how an attribute is represented with its metadata as discriminators, along with the difficulties of using neural networks for this problem and a wish list for the Machine Learning community.
Abstract: Applications in a wide variety of industries require access to multiple heterogeneous distributed databases. One step in heterogeneous database integration is semantic integration: identifying corresponding attributes in different databases that represent the same real-world concept. The rules of semantic integration cannot be ‘pre-programmed’ since the information to be accessed is heterogeneous and attribute correspondences could be fuzzy. Manually comparing all possible pairs of attributes is an unreasonably large task. We have applied artificial neural networks (ANNs) to this problem. Metadata describing attributes is automatically extracted from a database to represent their ‘signatures’. The metadata is used to train neural networks to find similar patterns of metadata describing corresponding attributes from other databases. In our system, the rules to determine corresponding attributes are discovered through machine learning. This paper describes how we applied neural network techniques in a database integration problem and how we represent an attribute with its metadata as discriminators. This paper focuses on our experiments on the effectiveness of neural networks and of each discriminator. We also discuss the difficulties of using neural networks for this problem and our wish list for the Machine Learning community.
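The signature idea can be sketched as follows: each attribute becomes a normalized vector of metadata discriminators, and a matcher scores candidate correspondences. The particular discriminators, attribute names, and values here are illustrative assumptions, and cosine similarity stands in for the trained neural network.

```python
import numpy as np

# A "signature" of normalized metadata discriminators per attribute
# (these fields are illustrative, not SEMINT's exact discriminator list):
# numeric-type flag, declared length, nullability, mean value length,
# and the ratio of distinct values in the data contents.
def signature(is_numeric, length, nullable, mean_len, distinct_ratio):
    return np.array([is_numeric, length / 255.0, nullable,
                     mean_len / 255.0, distinct_ratio])

db1 = {"EMP.SSN": signature(1, 9, 0, 9.0, 1.00)}
db2 = {"STAFF.SOC_SEC_NO": signature(1, 9, 0, 9.0, 0.99),
       "STAFF.DEPT_NAME":  signature(0, 30, 1, 12.4, 0.05)}

def best_match(sig, candidates):
    """Stand-in for the trained matcher: cosine similarity of signatures."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates.items(), key=lambda kv: cos(sig, kv[1]))

print(best_match(db1["EMP.SSN"], db2))  # -> STAFF.SOC_SEC_NO
```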

Journal ArticleDOI
TL;DR: There are many different types of multimedia videos found in the world today—consider home videos, surveillance camera videos, television broadcasts as general categories; commercial users, government personnel, and home consumers all have specific requirements to search these videos for topics and/or events.
Abstract: There are many different types of multimedia videos found in the world today—consider home videos, surveillance camera videos, and television broadcasts as general categories. Commercial users, government personnel, and home consumers all have specific requirements to search these videos for topics and/or events. In order to support user queries for these elements of interest, multimedia systems must segment and retrieve relevant segments of information. With advances in video digitization, annotation, and extraction, automated multimedia processing systems are being created for many of the various video types. In these systems, event segmentation occurs manually, semiautomatically, or automatically. Each type of multimedia video has varying levels of structure. For example, a home video may contain stories of a vacation, a child's birthday party, and Christmas morning. The birthday party story may contain events of a child blowing out the candles, opening gifts, and playing games. In some stories, there may only be one event per story. The event pertaining to the child blowing out the candles may contain shots of the child's excitement at the oncoming cake and of the friends singing. News on Demand: deconstructing broadcast news using all sources of input from the multimedia stream.

22 Sep 2000
TL;DR: The Wide Area Augmentation System (WAAS) as mentioned in this paper provides real-time differential GPS corrections and integrity information for aircraft navigation use, where the system guides the aircraft to within a few hundred feet of the ground.
Abstract: The Wide Area Augmentation System (WAAS) will provide real-time differential GPS corrections and integrity information for aircraft navigation use. The most stringent application of this system will be precision approach, where the system guides the aircraft to within a few hundred feet of the ground. Precision approach operations require the use of differential ionospheric corrections. WAAS must incorporate information from reference stations to create a correction map of the ionosphere. More importantly, this map must contain confidence bounds describing the integrity of the corrections. The confidence bounds must be large enough to describe the error in the correction, but tight enough to allow the operation to proceed. The difficulty in generating these corrections is that the reference station measurements are not co-located with the aviation user measurements. For an undisturbed ionosphere over the Conterminous United States (CONUS), this is not a problem as the ionosphere is nominally well behaved. However, a concern is that irregularities in the ionosphere will decrease the correlation between the ionosphere observed by the reference stations and that seen by the user. Therefore, it is essential to detect when such irregularities may be present and adjust the confidence bounds accordingly. The approach outlined in this paper conservatively bounds the ionospheric errors even for the worst observed ionospheric conditions to date, using data sets taken from the operational receivers in the WAAS reference station network. As we progress through the current solar cycle and gather more data on the behavior of the ionosphere, many of our pessimistic assumptions will be relaxed. This will result in higher availability while maintaining full integrity.
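One way to make the irregularity test concrete (offered as an assumption-laden sketch, not WAAS's actual algorithm or thresholds) is a goodness-of-fit check on a local planar fit of ionospheric delays: when the chi-square of the fit residuals is large relative to its degrees of freedom, the local ionosphere is not well modeled by a plane, and the confidence bounds should be inflated.

```python
import numpy as np

def planar_fit_chi2(lat, lon, delay, sigma):
    """Fit delay ~ a + b*lat + c*lon by weighted least squares and return
    the chi-square of the residuals plus degrees of freedom. A large
    chi-square per dof suggests an irregular (stormy) ionosphere.
    The threshold and inflation policy below are illustrative only.
    """
    A = np.column_stack([np.ones_like(lat), lat, lon])
    W = np.diag(1.0 / sigma**2)
    coef, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ delay, rcond=None)
    resid = delay - A @ coef
    return float(resid @ W @ resid), len(delay) - 3  # (chi2, dof)

chi2, dof = planar_fit_chi2(np.array([38., 39., 40., 41.]),
                            np.array([-98., -97., -99., -96.]),
                            np.array([3.1, 3.0, 3.3, 4.9]),  # meters, illustrative
                            np.array([0.3, 0.3, 0.3, 0.3]))
inflate = chi2 / dof > 3.0   # if the fit is poor, widen the confidence bound
```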

Proceedings ArticleDOI
22 Oct 2000
TL;DR: This work is using data mining techniques to identify sequences of alarms that likely result from normal behavior, enabling construction of filters to eliminate those alarms from a particular environment.
Abstract: One aspect of constructing secure networks is identifying unauthorized use of those networks. Intrusion detection systems look for unusual or suspicious activity, such as patterns of network traffic that are likely indicators of unauthorized activity. However, normal operation often produces traffic that matches likely "attack signatures", resulting in false alarms. We are using data mining techniques to identify sequences of alarms that likely result from normal behavior, enabling construction of filters to eliminate those alarms. This can be done at a low cost for specific environments, enabling the construction of customized intrusion detection filters. We present our approach, and preliminary results identifying common sequences in alarms from a particular environment.
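The filtering idea might be sketched as frequent-sequence mining over historical alarm logs: alarm n-grams that recur often in a given environment are presumed to reflect normal operation and are suppressed. The n-gram formulation, names, and support threshold are illustrative assumptions, not the authors' exact method.

```python
from collections import Counter

def frequent_sequences(alarms, n=3, min_support=5):
    """Count length-n alarm sequences in a historical stream and keep
    those frequent enough to be presumed normal for this environment."""
    grams = Counter(tuple(alarms[i:i + n]) for i in range(len(alarms) - n + 1))
    return {g for g, c in grams.items() if c >= min_support}

def build_filter(normal_grams, n=3):
    """Suppress an alarm whose recent context matches a 'normal' sequence."""
    def keep(window):
        return tuple(window[-n:]) not in normal_grams
    return keep

history = ["icmp_echo", "dns_axfr", "icmp_echo"] * 20 + ["buffer_overflow"]
keep = build_filter(frequent_sequences(history, n=3))
print(keep(["dns_axfr", "icmp_echo", "buffer_overflow"]))  # True: alarm is kept
```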

Proceedings ArticleDOI
20 Nov 2000
TL;DR: An approach to supporting quality of service (QoS) in a dynamic network environment is presented, implemented as a new protocol called dynamic RSVP (dRSVP), an extension to RSVP.
Abstract: This paper presents an approach to supporting Quality of Service (QoS) in a dynamic network environment. With this approach, resource reservations represent ranges, and applications adapt to an allocated level of QoS provided by the network at some point within the requested range. To explore this approach, we have implemented a new protocol called dynamic RSVP (dRSVP), which is an extension to RSVP.
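A toy model of the range-based reservation idea (the message formats and allocation policy here are illustrative assumptions, not the dRSVP specification): each flow requests a [min, max] bandwidth range, admission control guarantees the minimums, spare capacity is shared up to each flow's maximum, and grants are recomputed when conditions change so applications can adapt.

```python
from dataclasses import dataclass

@dataclass
class RangeRequest:
    flow_id: str
    min_bw: float   # kb/s the application needs to operate at all
    max_bw: float   # kb/s beyond which extra bandwidth is not useful

class DynamicReservations:
    """Toy admission control in the spirit of dRSVP's range reservations."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.flows = {}

    def request(self, req):
        self.flows[req.flow_id] = req
        return self._allocate()

    def _allocate(self):
        grants = {f: r.min_bw for f, r in self.flows.items()}
        spare = self.capacity - sum(grants.values())
        if spare < 0:
            raise RuntimeError("admission failure: minimums do not fit")
        remaining = len(self.flows)
        # Share spare capacity progressively, capped at each flow's maximum.
        for f, r in sorted(self.flows.items(), key=lambda kv: kv[1].max_bw):
            extra = min(r.max_bw - r.min_bw, spare / remaining)
            grants[f] += extra
            spare -= extra
            remaining -= 1
        return grants

net = DynamicReservations(capacity=1000.0)
print(net.request(RangeRequest("audio", 64.0, 128.0)))
print(net.request(RangeRequest("video", 300.0, 900.0)))  # grants recomputed
```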

Journal ArticleDOI
TL;DR: This work describes the observation and logging processes, presents an overview of the results of long-term observations of a number of users of one desktop application, and presents a method of providing individualized instruction to each user by employing a new kind of user model and a new kind of expert model.
Abstract: Information technology has recently become the medium in which much professional office work is performed. This change offers an unprecedented opportunity to observe and record exactly how that work is performed. We describe our observation and logging processes and present an overview of the results of our long-term observations of a number of users of one desktop application. We then present our method of providing individualized instruction to each user by employing a new kind of user model and a new kind of expert model. The user model is based on observing the individual's behavior in a natural environment, while the expert model is based on pooling the knowledge of numerous individuals. Individualized instructional topics are selected by comparing an individual's knowledge to the pooled knowledge of her peers.

Proceedings Article
14 May 2000
TL;DR: This work introduces authentication tests and illustrates their power giving new and straightforward proofs of security goals for several protocols, and expresses the ideas in the strand space formalism and proves them correct elsewhere.
Abstract: Suppose a principal in a cryptographic protocol creates and transmits a message containing a new value v, which it later receives back in cryptographically altered form. It can conclude that some principal possessing the relevant key has transformed the message containing v. In some circumstances, this must be a regular participant of the protocol, not the penetrator. An inference of this kind is an authentication test. We introduce two main kinds of authentication test. An outgoing test is one in which the new value v is transmitted in encrypted form, and only a regular participant can extract it from that form. An incoming test is one in which v is received back in encrypted form, and only a regular participant can put it in that form. We combine these two tests with a supplementary idea, the unsolicited test, and a related method for checking that certain values remain secret. Together they determine what authentication properties are achieved by a wide range of cryptographic protocols. We introduce authentication tests and illustrate their power by giving new and straightforward proofs of security goals for several protocols. We also illustrate how to use the authentication tests as a heuristic for finding attacks against incorrect protocols. Finally we suggest a protocol design process. We express these ideas in the strand space formalism and prove them correct elsewhere (Guttman and Thayer Fábrega, 2000).

Journal ArticleDOI
TL;DR: In this paper, the nature of the emerging field of web-based simulation is examined in terms of its relationship to the fundamental aspects of simulation research and practice, assuming a form of debate.
Abstract: The nature of the emerging field of web-based simulation is examined in terms of its relationship to the fundamental aspects of simulation research and practice. The presentation, assuming a form of debate, is based on a panel session held at the first International Conference on Web-Based Modeling and Simulation, which was sponsored by the Society for Computer Simulation during 11-14 January 1998 in San Diego, California. While no clear “winner” is evident in this debate, the issues raised here certainly merit ongoing attention and contemplation.

Journal ArticleDOI
TL;DR: This article describes how a computer-based system for applying trauma resuscitation protocols to patients with penetrating thoracoabdominal trauma is now used to objectively critique, independent of outcome, the actual care given to those patients for process errors in reasoning.
Abstract: Objective: A computer-based system to apply trauma resuscitation protocols to patients with penetrating thoracoabdominal trauma was previously validated for 97 consecutive patients at a Level 1 trauma center by a panel of the trauma attendings and further refined by a panel of national trauma experts. The purpose of this article is to describe how this system is now used to objectively critique the actual care given to those patients for process errors in reasoning, independent of outcome. Methods: A chronological narrative of the care of each patient was presented to the computer program. The actual care was compared with the validated computer protocols at each decision point, and differences were classified by a predetermined scoring system from 0 to 100, based on the potential impact on outcome, as critical/noncritical/no errors of commission, omission, or procedure selection. Results: Errors in reasoning occurred in 100% of the 97 cases studied, averaging 11.9/case. Errors of omission were more prevalent than errors of commission (2.4 errors/case vs 1.2) and were of greater severity (19.4/error vs 5.1). The largest number of errors involved the failure to record, and perhaps observe, bedside information relevant to the reasoning process, an average of 7.4 missing items/patient. Only 2 of the 10 adverse outcomes were judged to be potentially related to errors of reasoning. Conclusions: Process errors in reasoning were ubiquitous, occurring in every case, although they were infrequently judged to be potentially related to an adverse outcome. Errors of omission were assessed to be more severe. The most common error was failure to consider, or document, available relevant information in the selection of appropriate care.


Posted Content
TL;DR: Qaviar, an experimental automated evaluation system for question answering applications, predicts answer correctness that agrees with human judgments 93% to 95% of the time.
Abstract: In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges the response by computing recall against the stemmed content words in the human-generated answer key. It counts the answer correct if it exceeds a given recall threshold. We determined that the answer correctness predicted by Qaviar agreed with the human judgments 93% to 95% of the time. 41 question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's Tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data.
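The scoring rule as described reduces to a short computation. In the sketch below, the crude suffix-stripping stemmer, stopword list, and threshold value are stand-in assumptions; Qaviar's actual stemmer and threshold are not specified here.

```python
import re

STOP = {"the", "a", "an", "of", "in", "to", "is", "was", "and"}

def stem(word):
    """Crude suffix-stripping stand-in for a real stemmer (illustrative only)."""
    for suf in ("ing", "ed", "es", "s"):
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

def content_words(text):
    return {stem(w) for w in re.findall(r"[a-z]+", text.lower())} - STOP

def judge(response, answer_key, threshold=0.5):
    """Count the response correct if its recall of the key's stemmed
    content words exceeds the threshold."""
    key = content_words(answer_key)
    recall = len(key & content_words(response)) / len(key) if key else 0.0
    return recall >= threshold, recall

print(judge("He was born in Hodgenville, Kentucky in 1809",
            "Lincoln was born in Hodgenville, Kentucky"))  # (True, 0.75)
```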

Journal ArticleDOI
TL;DR: A survey of results in developing Real-Time CORBA, a standard for real-time management of distributed objects, describes major RT CORBA research efforts, commercial development efforts, and standardization efforts by the Object Management Group.
Abstract: This paper presents a survey of results in developing Real-Time CORBA, a standard for real-time management of distributed objects. This paper includes background on two areas that have been combined to realize Real-Time CORBA: the CORBA standards that have been produced by the international Object Management Group; and techniques for distributed real-time computing that have been produced in the research community. The survey describes major RT CORBA research efforts, commercial development efforts, and standardization efforts by the Object Management Group.

Proceedings ArticleDOI
11 Dec 2000
TL;DR: A feedforward neural network architecture employing the hyperbolic tangent function tanh(z), defined over the entire complex domain, is introduced; when the input domain is bounded around the unit circle, it can easily outperform the non-analytic split complex activation function in convergence speed and achievable minimum squared error.
Abstract: One of the challenges in designing a neural network to process complex-valued signals is finding a suitable nonlinear complex activation function. The main reason for this difficulty is the conflict between the boundedness and the differentiability of complex functions in the entire complex plane, stated by Liouville's theorem. To avoid this difficulty, splitting, i.e., using two separate real nonlinear activation functions for the real and imaginary signal components, has been the traditional approach. We introduce a feedforward neural network (FNN) architecture employing the hyperbolic tangent function tanh(z) defined in the entire complex domain, and compare its performance with the FNN that uses a split complex structure. Since tanh(z) is analytic and bounded almost everywhere in the complex plane, when trained by backpropagation, it can easily outperform the non-analytic split complex activation function in convergence speed and achievable minimum squared error when the domain is bounded around the unit circle. We demonstrate this property by an equalization example: equalization of multi-phase shift keying (MPSK) signals corrupted by a multipath channel. The properties of tanh(z) and future directions to combat nonlinear distortions in complex transmission schemes are discussed.
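The contrast between the two activations is easy to state in code: the split activation applies a real tanh separately to the real and imaginary parts, while the fully complex tanh(z) is applied directly and is analytic, with poles only at z = j(pi/2 + k*pi). A minimal sketch with assumed layer shapes:

```python
import numpy as np

def split_tanh(z):
    """Traditional 'split' activation: real tanh on each component (non-analytic)."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def complex_tanh(z):
    """Fully complex tanh(z): analytic and bounded almost everywhere;
    its poles at z = j*(pi/2 + k*pi) motivate keeping inputs bounded
    around the unit circle, as the paper suggests."""
    return np.tanh(z)

# One feedforward layer acting on unit-circle (MPSK-like) complex inputs:
rng = np.random.default_rng(0)
W = (rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2))) * 0.1
x = np.exp(1j * rng.uniform(0, 2 * np.pi, size=2))
print(complex_tanh(W @ x))
print(split_tanh(W @ x))
```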

Journal ArticleDOI
TL;DR: The necessity for intrusion confinement during detection is justified by using a probabilistic analysis model, and a general solution to achieve intrusion confinement is proposed, which can be applied to many types of information systems.
Abstract: System protection mechanisms such as access controls can be fooled by authorized but malicious users, masqueraders, and misfeasors. Intrusion detection techniques are therefore used to supplement them. However, damage could have occurred before an intrusion is detected. In many computing systems the requirement for a high degree of soundness of intrusion reporting can yield poor performance in detecting intrusions and cause long detection latency. As a result, serious damage can be caused either because many intrusions are never detected or the average detection latency is too long. The process of bounding the damage caused by intrusions during intrusion detection is referred to as intrusion confinement. We justify the necessity for intrusion confinement during detection by using a probabilistic analysis model, and propose a general solution to achieve intrusion confinement. The key idea of the solution is to isolate likely suspicious actions before a definite determination of intrusion is reported. We also present two concrete isolation protocols in the database and file system contexts, respectively, to evaluate the feasibility of the general solution, which can be applied to many types of information systems.

Journal ArticleDOI
TL;DR: A statistical noise model is developed from mathematical modeling of the physical mechanisms that generate noise in communication receivers employing antenna arrays by generalizing an approach for single antenna cases suggested by Middleton (1967, 1974, 1976, 1977).
Abstract: A statistical noise model is developed from mathematical modeling of the physical mechanisms that generate noise in communication receivers employing antenna arrays. Such models have been lacking for cases where the antenna observations may be statistically dependent from antenna to antenna. The model is developed by generalizing an approach for single antenna cases suggested by Middleton (1967, 1974, 1976, 1977). The model derived here is applicable to a wide variety of physical situations. The focus is primarily on problems defined by Middleton to be Class A interference. The number of noise sources in a small region of space is assumed to be Poisson distributed, and the emission times are assumed to be uniformly distributed over a long time interval. Finally, an additive Gaussian background component is included to represent the thermal noise that is always present in real receivers.
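For reference, the single-antenna Class A density that this model generalizes has the familiar Poisson-weighted Gaussian-mixture form (quoted from the standard Middleton formulation; the paper's multi-antenna generalization is not reproduced here):

```latex
% Middleton Class A amplitude density: a Poisson-weighted Gaussian mixture.
f(x) \;=\; e^{-A} \sum_{m=0}^{\infty} \frac{A^{m}}{m!}\,
    \frac{1}{\sqrt{2\pi}\,\sigma_m}\,
    \exp\!\left(-\frac{x^{2}}{2\sigma_m^{2}}\right),
\qquad
\sigma_m^{2} \;=\; \sigma^{2}\,\frac{m/A + \Gamma'}{1 + \Gamma'}
% A: impulsive index (Poisson density of active sources);
% Gamma': ratio of Gaussian (thermal) to impulsive noise power;
% sigma^2: total noise power.
```

The additive Gaussian background in the abstract corresponds to the Gamma' term, which keeps the thermal component present in every mixture state.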