
Showing papers by "Mitre Corporation" published in 2003


Journal ArticleDOI
TL;DR: This work identifies three fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and describes the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling.
Abstract: Increasingly, network applications must communicate with counterparts across disparate networking environments characterized by significantly different sets of physical and operational constraints; wide variations in transmission latency are particularly troublesome. The proposed Interplanetary Internet, which must encompass both terrestrial and interplanetary links, is an extreme case. An architecture based on a "least common denominator" protocol that can operate successfully and (where required) reliably in multiple disparate environments would simplify the development and deployment of such applications. The Internet protocols are ill suited for this purpose. We identify three fundamental principles that would underlie a delay-tolerant networking (DTN) architecture and describe the main structural elements of that architecture, centered on a new end-to-end overlay network protocol called Bundling. We also examine Internet infrastructure adaptations that might yield comparable performance but conclude that the simplicity of the DTN architecture promises easier deployment and extension.

1,419 citations
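The bundle overlay described above is essentially store-and-forward with custody passed hop by hop across intermittently connected links. The sketch below illustrates only that idea; the class and function names are invented for illustration and do not reflect the actual Bundle Protocol specification.

```python
# Illustrative sketch (not the actual Bundle Protocol): a bundle is stored at each
# overlay hop until the next contact becomes available, then forwarded ("custody
# transfer"), which tolerates long delays and link outages end to end.
from dataclasses import dataclass, field

@dataclass
class Bundle:
    source: str
    destination: str
    payload: bytes
    custodian: str = ""          # node currently holding custody

@dataclass
class Node:
    name: str
    store: list = field(default_factory=list)   # persistent storage for bundles

def forward_when_contact(bundle, route, contact_up):
    """Move a bundle hop by hop; each hop keeps it until its outbound link is up."""
    bundle.custodian = route[0].name
    route[0].store.append(bundle)
    for here, there in zip(route, route[1:]):
        while not contact_up(here.name, there.name):
            pass                                  # wait out the disruption (store)
        here.store.remove(bundle)                 # ...then forward
        bundle.custodian = there.name
        there.store.append(bundle)

# Example: a three-hop overlay path with an always-available contact predicate.
nodes = [Node("ground"), Node("relay"), Node("lander")]
forward_when_contact(Bundle("ground", "lander", b"telemetry request"), nodes,
                     contact_up=lambda a, b: True)
print(nodes[-1].store[0].custodian)   # -> "lander"
```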


Journal ArticleDOI
TL;DR: In this article, the authors compared the effects of visual and audio information regarding traffic and flight parameters on the performance of 12 instrument-rated pilots in a high-fidelity simulation.
Abstract: In the first part of the reported research, 12 instrument-rated pilots flew a high-fidelity simulation, in which air traffic control presentation of auditory (voice) information regarding traffic and flight parameters was compared with advanced display technology presentation of equivalent information regarding traffic (cockpit display of traffic information) and flight parameters (data link display). Redundant combinations were also examined while pilots flew the aircraft simulation, monitored for outside traffic, and read back communications messages. The data suggested a modest cost for visual presentation over auditory presentation, a cost mediated by head-down visual scanning, and no benefit for redundant presentation. The effects in Part 1 were modeled by multiple-resource and preemption models of divided attention. In the second part of the research, visual scanning in all conditions was fit by an expected value model of selective attention derived from a previous experiment. This model accounted for 94% of the variance in the scanning data and 90% of the variance in a second validation experiment. Actual or potential applications of this research include guidance on choosing the appropriate modality for presenting in-cockpit information and understanding task strategies induced by introducing new aviation technology.

325 citations


Proceedings ArticleDOI
10 Nov 2003
TL;DR: A set of definitions that form a framework for describing the types of awareness that humans have of robot activities and the knowledge that robots have of the commands given them by humans are provided.
Abstract: This paper provides a set of definitions that form a framework for describing the types of awareness that humans have of robot activities and the knowledge that robots have of the commands given them by humans. As a case study, we applied this human-robot interaction (HRI) awareness framework to our analysis of the HRI approaches used at an urban search and rescue competition. We determined that most of the critical incidents (e.g., damage done by robots to the test arena) were directly attributable to lack of one or more kinds of HRI awareness.

276 citations


Journal ArticleDOI
TL;DR: In this paper, the effects of scintillation on the availability of GPS and satellite-based augmentation system (SBAS) for L1 C/A and L2 semicodeless receivers are estimated in terms of loss of lock and degradation of accuracy.
Abstract: Ionospheric scintillation is a rapid change in the phase and/or amplitude of a radio signal as it passes through small-scale plasma density irregularities in the ionosphere. These scintillations not only can reduce the accuracy of GPS/Satellite-Based Augmentation System (SBAS) receiver pseudorange and carrier phase measurements but also can result in a complete loss of lock on a satellite. In a worst case scenario, loss of lock on enough satellites could result in lost positioning service. Scintillation has not had a major effect on midlatitude regions (e.g., the continental United States) since most severe scintillation occurs in a band approximately 20° on either side of the magnetic equator and to a lesser extent in the polar and auroral regions. Most scintillation occurs for a few hours after sunset during the peak years of the solar cycle. Typical delay locked loop/phase locked loop designs of GPS/SBAS receivers enable them to handle moderate amounts of scintillation. Consequently, any attempt to determine the effects of scintillation on GPS/SBAS must consider both predictions of scintillation activity in the ionosphere and the residual effect of this activity after processing by a receiver. This paper estimates the effects of scintillation on the availability of GPS and SBAS for L1 C/A and L2 semicodeless receivers. These effects are described in terms of loss of lock and degradation of accuracy and are related to different times, ionospheric conditions, and positions on the Earth. Sample results are presented using WAAS in the western hemisphere.

239 citations
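As a rough, hedged illustration of how loss of lock translates into availability: if scintillation is assumed to cause loss of lock independently on each tracked satellite with some probability, the chance that enough satellites remain usable can be bounded with a binomial calculation. All probabilities and satellite counts below are invented placeholders, not values from the paper.

```python
# Rough illustration (all numbers are made up): if scintillation independently
# causes loss of lock on each tracked satellite with probability p_loss, the
# chance that at least `need` of `visible` satellites stay locked bounds
# positioning availability.
from math import comb

def prob_at_least(need: int, visible: int, p_loss: float) -> float:
    p_keep = 1.0 - p_loss
    return sum(comb(visible, k) * p_keep**k * p_loss**(visible - k)
               for k in range(need, visible + 1))

for p_loss in (0.05, 0.20, 0.50):          # mild -> severe scintillation (assumed)
    print(p_loss, round(prob_at_least(4, 8, p_loss), 4))
```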


Journal ArticleDOI
TL;DR: Three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs; in particular, the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of the singularity nearest to the origin.
Abstract: We investigate the approximation ability of a multilayer perceptron (MLP) network when it is extended to the complex domain. The main challenge for processing complex data with neural networks has been the lack of bounded and analytic complex nonlinear activation functions in the complex domain, as stated by Liouville's theorem. To avoid the conflict between the boundedness and the analyticity of a nonlinear complex function in the complex domain, a number of ad hoc MLPs that include using two real-valued MLPs, one processing the real part and the other processing the imaginary part, have been traditionally employed. However, since nonanalytic functions do not meet the Cauchy-Riemann conditions, they render themselves into degenerative backpropagation algorithms that compromise the efficiency of nonlinear approximation and learning in the complex vector field. A number of elementary transcendental functions (ETFs) derivable from the entire exponential function e^z that are analytic are defined as fully complex activation functions and are shown to provide a parsimonious structure for processing data in the complex domain and address most of the shortcomings of the traditional approach. The introduction of ETFs, however, raises a new question in the approximation capability of this fully complex MLP. In this letter, three proofs of the approximation capability of the fully complex MLP are provided based on the characteristics of singularity among ETFs. First, the fully complex MLPs with continuous ETFs over a compact set in the complex vector field are shown to be the universal approximator of any continuous complex mappings. The complex universal approximation theorem extends to bounded measurable ETFs possessing a removable singularity. Finally, it is shown that the output of complex MLPs using ETFs with isolated and essential singularities uniformly converges to any nonlinear mapping in the deleted annulus of the singularity nearest to the origin.

218 citations
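A minimal sketch of what a "fully complex" MLP looks like in code may help: complex-valued weights with an elementary transcendental function (here tanh, which is analytic on the complex plane) as the activation. This shows only a forward pass under assumed layer sizes; it does not reproduce the paper's proofs or training algorithm.

```python
# Minimal sketch of a "fully complex" MLP layer: complex-valued weights and a
# complex elementary transcendental function (here tanh(z)) as the activation.
# This only illustrates the forward pass, not the paper's proofs or training.
import numpy as np

rng = np.random.default_rng(0)

def complex_layer(z, w, b):
    return np.tanh(z @ w + b)          # np.tanh is analytic on complex inputs

n_in, n_hidden, n_out = 2, 8, 1
w1 = (rng.standard_normal((n_in, n_hidden))
      + 1j * rng.standard_normal((n_in, n_hidden))) / np.sqrt(n_in)
b1 = np.zeros(n_hidden, dtype=complex)
w2 = (rng.standard_normal((n_hidden, n_out))
      + 1j * rng.standard_normal((n_hidden, n_out))) / np.sqrt(n_hidden)
b2 = np.zeros(n_out, dtype=complex)

z = np.array([[0.3 + 0.1j, -0.2 + 0.4j]])          # one complex input sample
hidden = complex_layer(z, w1, b1)
output = hidden @ w2 + b2                            # linear output layer
print(output)
```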


Proceedings ArticleDOI
Leo Obrst
03 Nov 2003
TL;DR: It is argued that information technology has evolved into a world of largely loosely coupled systems and as such, needs increasingly more explicit, machine-interpretable semantics, so ontologies in the form of logical domain theories and their knowledge bases offer the richest representations of machine- interpretable semantics for systems and databases in the loosely coupled world.
Abstract: In this paper, we discuss the use of ontologies for semantic interoperability and integration. We argue that information technology has evolved into a world of largely loosely coupled systems and as such, needs increasingly more explicit, machine-interpretable semantics. Ontologies in the form of logical domain theories and their knowledge bases offer the richest representations of machine-interpretable semantics for systems and databases in the loosely coupled world, thus ensuring greater semantic interoperability and integration. Finally, we discuss how ontologies support semantic interoperability in the real, commercial and governmental world.

212 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of systems is large and initial samples from each system have already been taken is addressed.
Abstract: In this paper we address the problem of finding the simulated system with the best (maximum or minimum) expected performance when the number of systems is large and initial samples from each system have already been taken. This problem may be encountered when a heuristic search procedure--perhaps one originally designed for use in a deterministic environment--has been applied in a simulation-optimization context. Because of stochastic variation, the system with the best sample mean at the end of the search procedure may not coincide with the true best system encountered during the search. This paper develops statistical procedures that return the best system encountered by the search (or one near the best) with a prespecified probability. We approach this problem using combinations of statistical subset selection and indifference-zone ranking procedures. The subset-selection procedures, which use only the data already collected, screen out the obviously inferior systems, while the indifference-zone procedures, which require additional simulation effort, distinguish the best from the less obviously inferior systems.

190 citations
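A hedged sketch of the two-stage idea follows: a subset-selection screen that uses only the existing samples, then Rinott-style second-stage sample sizes for the survivors. The screening threshold t and the constant h are treated as given inputs here, and the forms below are standard textbook versions rather than the authors' exact procedures.

```python
# Sketch of the two-stage idea (maximization case). Screening keeps any system
# whose sample mean is not demonstrably worse than the best; the second stage
# then allocates extra replications Rinott-style. The constants t and h are
# inputs here, not derived as in the paper's procedures.
import math
from statistics import mean, variance

def screen(samples: dict, t: float) -> list:
    """Keep system i if Ybar_i >= Ybar_j - t*sqrt(S_i^2/n_i + S_j^2/n_j) for all j."""
    stats = {i: (mean(x), variance(x), len(x)) for i, x in samples.items()}
    survivors = []
    for i, (mi, vi, ni) in stats.items():
        if all(mi >= mj - t * math.sqrt(vi / ni + vj / nj)
               for j, (mj, vj, nj) in stats.items() if j != i):
            survivors.append(i)
    return survivors

def second_stage_sizes(samples: dict, survivors: list, h: float, delta: float) -> dict:
    """Total sample size per survivor, N_i = max(n_i, ceil((h*S_i/delta)^2))."""
    return {i: max(len(samples[i]),
                   math.ceil((h * math.sqrt(variance(samples[i])) / delta) ** 2))
            for i in survivors}

samples = {"A": [10.1, 9.8, 10.4, 10.0],
           "B": [9.2, 9.5, 9.0, 9.4],
           "C": [10.3, 10.2, 9.9, 10.5]}
keep = screen(samples, t=2.35)                       # t-quantile: illustrative value
print(keep, second_stage_sizes(samples, keep, h=2.8, delta=0.2))
```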


Book ChapterDOI
09 Sep 2003
TL;DR: It is shown that the cost of maintaining a continuous k-NN query result for moving points represented in this way can be significantly reduced with a modest increase in the number of events processed in the presence of updates.
Abstract: In recent years there has been an increasing interest in databases of moving objects where the motion and extent of objects are represented as a function of time. The focus of this paper is on the maintenance of continuous k-nearest neighbor (k-NN) queries on moving points when updates are allowed. Updates change the functions describing the motion of the points, causing pending events to change. Events are processed to keep the query result consistent as points move. It is shown that the cost of maintaining a continuous k-NN query result for moving points represented in this way can be significantly reduced with a modest increase in the number of events processed in the presence of updates. This is achieved by introducing a continuous within query to filter the number of objects that must be taken into account when maintaining a continuous k-NN query. This new approach is presented and compared with other recent work. Experimental results are presented showing the utility of this approach.

187 citations
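The filtering idea can be sketched briefly: points move linearly with time, and a continuous "within r" query keeps only nearby candidates from which the k-NN answer is maintained. The sketch below shows just that filter at a fixed time instant; the paper's event scheduling and update handling are omitted, and all coordinates are invented.

```python
# Sketch of the filtering idea only: points move linearly (p(t) = p0 + v*t); a
# continuous "within r" query keeps a small candidate set, and the k-NN answer
# is maintained from those candidates. Event scheduling is omitted.
import math

def position(p, t):                      # p = (x0, y0, vx, vy)
    return (p[0] + p[2] * t, p[1] + p[3] * t)

def knn_with_filter(points, query, t, k, r):
    qx, qy = position(query, t)
    candidates = []
    for pid, p in points.items():
        x, y = position(p, t)
        d = math.hypot(x - qx, y - qy)
        if d <= r:                       # within-query filter
            candidates.append((d, pid))
    return sorted(candidates)[:k]

points = {1: (0, 0, 1, 0), 2: (5, 5, -1, -1), 3: (20, 0, 0, 0), 4: (2, -1, 0, 1)}
query = (1, 1, 0.5, 0.0)
print(knn_with_filter(points, query, t=2.0, k=2, r=6.0))
```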


Posted Content
TL;DR: This work describes a Challenge Evaluation task created for the Knowledge Discovery and Data Mining (KDD) Challenge Cup, in which 18 participating groups provided systems that flagged articles for curation based on whether the article contained experimental evidence for gene expression products.
Abstract: MOTIVATION: The biological literature is a major repository of knowledge. Many biological databases draw much of their content from a careful curation of this literature. However, as the volume of literature increases, the burden of curation increases. Text mining may provide useful tools to assist in the curation process. To date, the lack of standards has made it impossible to determine whether text mining techniques are sufficiently mature to be useful. RESULTS: We report on a Challenge Evaluation task that we created for the Knowledge Discovery and Data Mining (KDD) Challenge Cup. We provided a training corpus of 862 articles consisting of journal articles curated in FlyBase, along with the associated lists of genes and gene products, as well as the relevant data fields from FlyBase. For the test, we provided a corpus of 213 new ('blind') articles; the 18 participating groups provided systems that flagged articles for curation, based on whether the article contained experimental evidence for gene expression products. We report on the evaluation results and describe the techniques used by the top performing groups. CONTACT: asy@mitre.org KEYWORDS: text mining, evaluation, curation, genomics, data management

176 citations


Journal ArticleDOI
TL;DR: The KDD Challenge Evaluation task described in this paper evaluated text mining techniques for identifying articles containing experimental evidence for gene expression products; the authors report on the evaluation results and describe the techniques used by the top performing groups.
Abstract: Motivation: The biological literature is a major repository of knowledge. Many biological databases draw much of their content from a careful curation of this literature. However, as the volume of literature increases, the burden of curation increases. Text mining may provide useful tools to assist in the curation process. To date, the lack of standards has made it impossible to determine whether text mining techniques are sufficiently mature to be useful. Results: We report on a Challenge Evaluation task that we created for the Knowledge Discovery and Data Mining (KDD) Challenge Cup. We provided a training corpus of 862 articles consisting of journal articles curated in FlyBase, along with the associated lists of genes and gene products, as well as the relevant data fields from FlyBase. For the test, we provided a corpus of 213 new (‘blind’) articles; the 18 participating groups provided systems that flagged articles for curation, based on whether the article contained experimental evidence for gene expression products. We report on the evaluation results and describe the techniques used by the top performing groups.

164 citations
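For readers unfamiliar with the task format, the participating systems were essentially binary text classifiers. The toy sketch below frames the task that way using scikit-learn as a stand-in toolkit; the training strings are invented and this does not represent any group's actual KDD Cup system.

```python
# Toy sketch of the task the participating systems solved: flag an article if it
# likely reports experimental evidence for gene expression products. This is a
# generic bag-of-words classifier, not any group's actual KDD Cup system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "Northern blot analysis shows transcript expression in embryos",   # invented
    "In situ hybridization reveals protein expression in wing disc",
    "We describe a new phylogenetic analysis of the genus",
    "A review of signaling pathways without new expression data",
]
train_flags = [1, 1, 0, 0]    # 1 = curate (experimental expression evidence)

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_docs, train_flags)
print(clf.predict(["Antibody staining demonstrates expression of the protein"]))
```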


Proceedings ArticleDOI
19 May 2003
TL;DR: This paper presents LHAP, a scalable and lightweight authentication protocol for ad hoc networks, based on hop-by-hop authentication for verifying the authenticity of all packets transmitted in the network, and on one-way key chains and TESLA for packet authentication and for reducing the overhead of establishing trust among nodes.
Abstract: Most ad hoc networks do not implement any network access control, leaving these networks vulnerable to resource consumption attacks where a malicious node injects packets into the network with the goal of depleting the resources of the nodes relaying the packets. To thwart or prevent such attacks, it is necessary to employ authentication mechanisms that ensure that only authorized nodes can inject traffic into the network. In this paper we present LHAP, a scalable and lightweight authentication protocol for ad hoc networks. LHAP is based on two techniques: (i) hop-by-hop authentication for verifying the authenticity of all the packets transmitted in the network and (ii) one-way key chain and TESLA for packet authentication and for reducing the overhead for establishing trust among nodes. We analyze the security of LHAP and show LHAP is a lightweight security protocol through detailed performance analysis.
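The one-way key chain that LHAP (via TESLA) builds on is easy to sketch: keys are generated by repeated hashing, the final hash value serves as a commitment, and keys are later disclosed in reverse order so a receiver can verify each one by hashing forward. This is only the primitive, under assumed parameters, not LHAP's packet authentication protocol itself.

```python
# Minimal one-way key chain sketch (the TESLA-style primitive LHAP builds on).
# Generate k_n -> k_0 by repeated hashing, commit to k_0, then disclose keys in
# reverse order; a receiver verifies k_i by hashing it i times back to k_0.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list:
    chain = [seed]                       # seed plays the role of k_n
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain[::-1]                   # chain[i] is now k_i, with k_0 the commitment

def verify(disclosed_key: bytes, index: int, commitment: bytes) -> bool:
    x = disclosed_key
    for _ in range(index):
        x = h(x)
    return x == commitment

chain = make_chain(b"random seed", n=5)
commitment = chain[0]                    # k_0, assumed distributed authentically
print(verify(chain[3], 3, commitment))   # True: k_3 hashes forward to k_0
print(verify(b"forged", 3, commitment))  # False
```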

Patent
09 Oct 2003
TL;DR: Cross-correlators, as discussed by the authors, comprise one or more banks of code matched filters for computing parallel short-time correlations of received signal samples and replica code sequence samples, and a means for calculating the cross-correlation values utilizing discrete-time Fourier analysis of the computed STCs.
Abstract: Signal processing architectures for direct acquisition of spread spectrum signals using long codes. Techniques are described for achieving a high degree of parallelism, employing code matched filter banks and other hardware sharing. In one embodiment, upper and lower sidebands are treated as two independent signals with identical spreading codes. Cross-correlators, in preferred embodiments, are comprised of one or more banks of CMFs for computing parallel short-time correlations (STCs) of received signal samples and replica code sequence samples, and a means for calculating the cross-correlation values utilizing discrete-time Fourier analysis of the computed STCs. One or more intermediate quantizers may optionally be disposed between the bank of code matched filters and the cross-correlation calculation means for reducing word-sizes of the STCs prior to Fourier analysis. The techniques described may be used with BOC modulated signals or with any signal having at least two distinct sidebands.
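The short-time-correlation idea can be illustrated generically: correlate against the replica code in short segments, then take a DFT across the segment results so that many residual carrier (Doppler) offsets are tested at once. The numpy sketch below is an assumption-laden toy (random stand-in code, arbitrary segment counts), not the patented architecture.

```python
# Sketch of the short-time-correlation (STC) idea: correlate the received signal
# against the replica code in short segments, then take an FFT across the segment
# results so one pass tests many residual carrier (Doppler) offsets at once.
# Generic illustration only, not the patented architecture.
import numpy as np

rng = np.random.default_rng(1)
n_chips = 4096                                # 1 sample per chip for simplicity
code = rng.choice([-1.0, 1.0], size=n_chips)  # stand-in spreading code

doppler_bin_true = 5                          # residual carrier: 5 cycles over the block
t = np.arange(n_chips) / n_chips
received = code * np.exp(2j * np.pi * doppler_bin_true * t)
received += 0.5 * (rng.standard_normal(n_chips) + 1j * rng.standard_normal(n_chips))

n_seg = 64                                    # number of short-time correlations
seg_len = n_chips // n_seg
stc = np.array([np.sum(received[i*seg_len:(i+1)*seg_len] * code[i*seg_len:(i+1)*seg_len])
                for i in range(n_seg)])       # one partial correlation per segment

spectrum = np.abs(np.fft.fft(stc))            # DFT across STCs resolves the offset
print("detected Doppler bin:", int(np.argmax(spectrum)))   # expected: 5
```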

01 Jan 2003
TL;DR: This paper presents technical research into advanced adaptive techniques and identifies those parameters or characteristics where regulation and policy need to be reviewed and provides insight into the benefits and possible limitations of such approaches to cognitive radio design and policy.
Abstract: Trends in technology are enhancing the future capability of devices to access the electromagnetic spectrum using the full range of dimensions associated with the spectrum. This increased capability to access the full spectrum "hyperspace" not only improves the ability of systems to use the spectrum but can also reduce interference and the adverse impact to system performance. This paper presents technical research into advanced adaptive techniques and identifies those parameters or characteristics where regulation and policy need to be reviewed. Fundamental to these new adaptive techniques is a behavior-based approach to design and, possibly, policy. Consequently, this paper also presents a technical case study of one behavior-based approach, Dynamic Frequency Selection, that provides insight into the benefits and possible limitations of such approaches to cognitive radio design and policy.
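A toy state machine conveys the Dynamic Frequency Selection behavior discussed as the case study: sense before and during use of a channel, and vacate it for a non-occupancy period when radar is detected. The timing values and detection probability below are illustrative placeholders, not regulatory parameters.

```python
# Toy sketch of Dynamic Frequency Selection (DFS) behavior: listen before using
# a channel, keep monitoring while transmitting, and vacate for a non-occupancy
# period if radar is detected. All timing values are illustrative placeholders.
import random

random.seed(3)
NON_OCCUPANCY_STEPS = 10          # assumed cooldown, in arbitrary time steps

def radar_detected(channel: int) -> bool:
    return random.random() < 0.05          # stand-in sensing result

def run_dfs(channels, steps=12):
    blocked = {ch: 0 for ch in channels}   # steps remaining before channel reusable
    current = None
    for t in range(steps):
        blocked = {ch: max(0, n - 1) for ch, n in blocked.items()}
        if current is None or radar_detected(current):
            if current is not None:
                blocked[current] = NON_OCCUPANCY_STEPS      # vacate the channel
            free = [ch for ch in channels if blocked[ch] == 0]
            current = free[0] if free else None             # move or stay silent
        print(t, "transmitting on", current)

run_dfs(channels=[52, 56, 60])
```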

Proceedings ArticleDOI
27 May 2003
TL;DR: This paper describes a domain-independent, machine-learning based approach to temporally anchoring and ordering events in news that achieves 84.6% accuracy in temporally Anchoring events and 75.4% in partially ordering them.
Abstract: This paper describes a domain-independent, machine-learning based approach to temporally anchoring and ordering events in news. The approach achieves 84.6% accuracy in temporally anchoring events and 75.4% accuracy in partially ordering them.

Journal ArticleDOI
TL;DR: The current policy requires only high-direct-cost (>US$500,000/yr) grantees to share research data, starting 1 October 2003.
Abstract: The recently issued NIH policy statement and implementation guidelines (National Institutes of Health, 2003) promote the sharing of research data. While urging that "all data should be considered for data sharing" and "data should be made as widely and freely available as possible," the current policy requires only high-direct-cost (>US$500,000/yr) grantees to share research data, starting 1 October 2003. Data sharing is central to science, and we agree that data should be made available.

Journal ArticleDOI
TL;DR: This paper reports on the work done to implement statistical error control within a heuristic search procedure, and on an automated procedure to deliver a statistical guarantee after the search procedure is finished.
Abstract: Research on the optimization of stochastic systems via simulation often centers on the development of algorithms for which global convergence can be guaranteed. On the other hand, commercial software applications that perform optimization via simulation typically employ search heuristics that have been successful in deterministic settings. Such search heuristics give up on global convergence in order to be more generally applicable and to yield rapid progress towards good solutions. Unfortunately, commercial applications do not always formally account for the randomness in simulation responses, meaning that their progress may be no better than a random search if the variability of the outputs is high. In addition, they do not provide statistical guarantees about the "goodness" of the final results. In practice, simulation studies often rely heavily on engineers who, in addition to developing the simulation model and generating the alternatives to be compared, must also perform the statistical analyses off-l...

Proceedings ArticleDOI
01 Jan 2003
TL;DR: The work described here is focused on measuring the uncertainty in sector demand predictions under current operational conditions, and on applying those measurements towards improving the performance and human factors of TFM decision support systems.
Abstract: Traffic flow management (TFM) in the U.S. is the process by which the Federal Aviation Administration (FAA), with the participation of airspace users, seeks to balance the capacity of airspace and airport resources with the demand for these resources. This is a difficult process, complicated by the presence of severe weather or unusually high demand. TFM in en-route airspace is concerned with managing airspace demand, specifically the number of flights handled by air traffic control (ATC) sectors; a sector is the volume of airspace managed by an air traffic controller or controller team. Therefore, effective decision-making requires accurate sector demand predictions. While it is commonly accepted that the sector demand predictions used by current and proposed TFM decision support systems contain significant uncertainty, this uncertainty is typically not quantified or taken into account in any meaningful way. The work described here is focused on measuring the uncertainty in sector demand predictions under current operational conditions, and on applying those measurements towards improving the performance and human factors of TFM decision support systems.

Proceedings ArticleDOI
27 Oct 2003
TL;DR: A new fingerprinting scheme that does not depend on a primary key attribute is proposed that constructs virtual primary keys from the most significant bits of some of each tuple's attributes.
Abstract: Agrawal and Kiernan's watermarking technique for database relations [1] and Li et al.'s fingerprinting extension [6] both depend critically on primary key attributes. Hence, those techniques cannot embed marks in database relations without primary key attributes. Further, the techniques are vulnerable to simple attacks that alter or delete the primary key attribute. This paper proposes a new fingerprinting scheme that does not depend on a primary key attribute. The scheme constructs virtual primary keys from the most significant bits of some of each tuple's attributes. The actual attributes that are used to construct the virtual primary key differ from tuple to tuple. Attribute selection is based on a secret key that is known to the merchant only. Further, the selection does not depend on an a priori ordering over the attributes, or on knowledge of the original relation or fingerprint codeword. The virtual primary keys are then used in fingerprinting as in previous work [6]. Rigorous analysis shows that, with high probability, only embedded fingerprints can be detected and embedded fingerprints cannot be modified or erased by a variety of attacks. Attacks include adding, deleting, shuffling, or modifying tuples or attributes (including a primary key attribute if one exists), guessing secret keys, and colluding with other recipients of a relation.
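The virtual-primary-key construction can be sketched roughly: a keyed hash of each attribute value determines which attributes contribute their most significant bits for a given tuple, so the selection varies per tuple and requires no fixed attribute ordering. The code below is an illustrative approximation with invented parameters; it omits the fingerprint embedding step and is not the paper's exact scheme.

```python
# Sketch of the virtual-primary-key idea: for each tuple, a keyed hash of each
# attribute value decides which attributes contribute their most significant
# bits, so the selection varies from tuple to tuple and needs no fixed ordering.
# The fingerprint embedding itself is omitted; details here are illustrative.
import hmac, hashlib, struct

SECRET_KEY = b"merchant-secret"          # known to the merchant only
MSB_BITS = 8                             # significant bits taken from each attribute
ATTRS_PER_KEY = 2                        # attributes combined into the virtual key

def keyed_hash(value) -> int:
    return int.from_bytes(hmac.new(SECRET_KEY, repr(value).encode(),
                                   hashlib.sha256).digest()[:8], "big")

def msb(value: float, bits: int = MSB_BITS) -> int:
    raw = struct.unpack(">Q", struct.pack(">d", float(value)))[0]
    return raw >> (64 - bits)            # high-order bits are robust to small edits

def virtual_primary_key(tuple_values) -> bytes:
    ranked = sorted(tuple_values, key=keyed_hash)[:ATTRS_PER_KEY]   # key-driven choice
    material = b"".join(msb(v).to_bytes(2, "big") for v in ranked)
    return hashlib.sha256(SECRET_KEY + material).digest()

row = (23.75, 104.2, 7.0, 0.91)
print(virtual_primary_key(row).hex()[:16])
```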

Journal ArticleDOI
TL;DR: A strategy is being developed whereby the current set of internationally standardized space data communications protocols can be incrementally evolved so that a first version of an operational "Interplanetary Internet" is feasible by the end of the decade as discussed by the authors.

Proceedings ArticleDOI
27 Oct 2003
TL;DR: A worm analytic framework is provided that captures the generalized mechanical process a worm goes through while moving through a specific environment and its state as it does so and can be used to evaluate worm potency and develop and validate defensive countermeasures and postures in both static and dynamic worm conflict.
Abstract: We present a general framework for reasoning about network worms and analyzing the potency of worms within a specific network. First, we present a discussion of the life cycle of a worm based on a survey of contemporary worms. We build on that life cycle by developing a relational model that associates worm parameters, attributes of the environment, and the subsequent potency of the worm. We then provide a worm analytic framework that captures the generalized mechanical process a worm goes through while moving through a specific environment and its state as it does so. The key contribution of this work is a worm analytic framework. This framework can be used to evaluate worm potency and develop and validate defensive countermeasures and postures in both static and dynamic worm conflict. This framework will be implemented in a modeling and simulation language in order to evaluate the potency of specific worms within an environment.
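One concrete way to turn worm parameters and environment attributes into a potency estimate is a simple random-scanning propagation simulation, as sketched below. The address space size, vulnerable population, and scan rate are invented for illustration and are not drawn from any worm surveyed in the paper.

```python
# Minimal random-scanning worm simulation: one concrete way to turn worm
# parameters (scan rate) and environment attributes (address space size, number
# of vulnerable hosts) into a potency curve (infections over time).
import random

random.seed(7)
ADDRESS_SPACE = 100_000        # size of the scanned address space (assumed)
VULNERABLE = 2_000             # vulnerable hosts in that space (assumed)
SCANS_PER_STEP = 50            # scans per infected host per time step (assumed)

vulnerable = set(random.sample(range(ADDRESS_SPACE), VULNERABLE))
infected = {next(iter(vulnerable))}                     # patient zero

for step in range(1, 11):
    new = set()
    for _ in range(len(infected) * SCANS_PER_STEP):
        target = random.randrange(ADDRESS_SPACE)        # random scanning
        if target in vulnerable and target not in infected:
            new.add(target)
    infected |= new
    print(f"step {step}: {len(infected)} infected")
```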

Proceedings ArticleDOI
11 Jul 2003
TL;DR: This work presents a simple method for the automatic creation of large quantities of imperfect training data for a biological entity (gene or protein) extraction system; the method has the advantage of being rapidly transferable to new domains that have similar existing resources.
Abstract: Machine-learning based entity extraction requires a large corpus of annotated training data to achieve acceptable results. However, the cost of expert annotation of relevant data, coupled with issues of inter-annotator variability, makes it expensive and time-consuming to create the necessary corpora. We report here on a simple method for the automatic creation of large quantities of imperfect training data for a biological entity (gene or protein) extraction system. We used resources available in the FlyBase model organism database; these resources include curated lists of genes and the articles from which the entries were drawn, together with a synonym lexicon. We applied simple pattern matching to identify gene names in the associated abstracts and filtered these entities using the list of curated entries for the article. This process created a data set that could be used to train a simple Hidden Markov Model (HMM) entity tagger. The results from the HMM tagger were comparable to those reported by other groups (F-measure of 0.75). This method has the advantage of being rapidly transferable to new domains that have similar existing resources.
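The data-generation step lends itself to a short sketch: match the article's curated gene list (and synonyms) against its text and emit noisy token-level labels that a tagger could be trained on. The gene names and abstract text below are invented, and the FlyBase filtering and HMM training steps are omitted.

```python
# Sketch of the noisy-training-data step: match the curated gene list (plus
# synonyms) for an article against its abstract and emit token-level labels that
# a tagger (e.g., an HMM) could be trained on. Gene names and text are invented;
# the filtering against FlyBase fields and the HMM training are omitted.
import re

def label_abstract(text: str, gene_names: set) -> list:
    # Longest names first so multi-word names win over their substrings.
    pattern = "|".join(sorted(map(re.escape, gene_names), key=len, reverse=True))
    spans = [(m.start(), m.end()) for m in re.finditer(pattern, text)]
    labels = []
    for m in re.finditer(r"\S+", text):                       # crude tokenization
        inside = any(s <= m.start() < e for s, e in spans)
        labels.append((m.group(), "GENE" if inside else "O"))
    return labels

curated = {"hedgehog", "hh", "wingless"}                      # per-article gene list
abstract = "Expression of hedgehog and wingless was assayed in the wing disc."
print(label_abstract(abstract, curated))
```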

Journal ArticleDOI
TL;DR: Fuzzy SDT generally reduced the computed false alarm rate for both the human and machine conflict systems, partly because conflicts just outside the conflict criterion used in conventional SDT were defined by fuzzy SDT as a signal worthy of some attention.
Abstract: This paper applies fuzzy SDT (signal detection theory) techniques, which combine fuzzy logic and conventional SDT, to empirical data. Two studies involving detection of aircraft conflicts in air traffic control (ATC) were analysed using both conventional and fuzzy SDT. Study 1 used data from a preliminary field evaluation of an automated conflict probe system, the User Request Evaluation Tool (URET). The second study used data from a laboratory controller-in-the-loop simulation of Free Flight conditions. Instead of assigning each potential conflict event as a signal (conflict) or non-signal, each event was defined as a signal (conflict) to some fuzzy degree between 0 and 1 by mapping distance into the range [0, 1]. Each event was also given a fuzzy membership, [0, 1], in the set ‘response’, based on the perceived probability of a conflict or on the colour-coded alert severity. Fuzzy SDT generally reduced the computed false alarm rate for both the human and machine conflict systems, partly because conflict...
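The fuzzy SDT bookkeeping is compact enough to sketch: each event carries a signal membership s and a response membership r in [0, 1], and hits and false alarms are accumulated with min/max operators. The distance-to-membership mapping below is an invented illustration, not the mapping used in the studies.

```python
# Sketch of the fuzzy SDT bookkeeping: each event has a degree of "signal"
# membership s and of "response" membership r in [0, 1]; hits and false alarms
# are computed with the usual fuzzy min/max definitions. The distance-to-
# membership mapping below is illustrative only.
def signal_membership(miss_distance_nm: float) -> float:
    # Assumed mapping: 0 nm separation -> fully a conflict (1.0);
    # 10 nm or more -> fully a non-conflict (0.0), linear in between.
    return max(0.0, min(1.0, 1.0 - miss_distance_nm / 10.0))

def fuzzy_rates(events):
    """events: list of (s, r) pairs with s, r in [0, 1]."""
    hits = sum(min(s, r) for s, r in events)
    false_alarms = sum(max(r - s, 0.0) for s, r in events)
    total_signal = sum(s for s, _ in events)
    total_nonsignal = sum(1.0 - s for s, _ in events)
    return hits / total_signal, false_alarms / total_nonsignal

events = [(signal_membership(d), r) for d, r in
          [(1.0, 0.9), (4.0, 0.7), (6.0, 0.5), (12.0, 0.2)]]   # (miss distance, alert level)
hit_rate, fa_rate = fuzzy_rates(events)
print(round(hit_rate, 3), round(fa_rate, 3))
```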

Proceedings ArticleDOI
15 Jul 2003
TL;DR: A way in which protocols that use this computational primitive can be verified using formal methods is proposed, and it is demonstrated that if there exists a formal attack that violates the formal security condition, then it maps to a computational algorithm that solves the Diffie-Hellman problem.
Abstract: The Diffie-Hellman key exchange scheme is a standard component of cryptographic protocols. In this paper, we propose a way in which protocols that use this computational primitive can be verified using formal methods. In particular, we separate the computational aspects of such an analysis from the formal aspects. First, we use Strand Space terminology to define a security condition that summarizes the security guarantees of Diffie-Hellman. Once this property is assumed, the analysis of a protocol is a purely formal enterprise. (We demonstrate the applicability and usefulness of this property by analyzing a sample protocol.) Furthermore, we show that this property is sound in the computational setting by mapping formal attacks to computational algorithms. We demonstrate that if there exists a formal attack that violates the formal security condition, then it maps to a computational algorithm that solves the Diffie-Hellman problem. Hence, if the Diffie-Hellman problem is hard, the security condition holds globally.
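For readers unfamiliar with the primitive being reasoned about, the exchange itself is shown below with deliberately toy parameters; this illustrates only why both parties end up with the same value, not the Strand Space security condition or the formal analysis.

```python
# The computational primitive under analysis, with deliberately toy parameters:
# both parties derive the same value g^(ab) mod p while only g^a mod p and
# g^b mod p are exchanged. Not the paper's formal analysis.
import secrets

p = 2**127 - 1          # a Mersenne prime; toy-sized, NOT suitable for real use
g = 5                   # toy base

a = secrets.randbelow(p - 2) + 2      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 2      # Bob's secret exponent
A = pow(g, a, p)                      # value Alice sends
B = pow(g, b, p)                      # value Bob sends

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob     # both ends now hold g^(ab) mod p
print("shared secret established")
```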

01 Jan 2003
TL;DR: In the 1980s, most of the rigorous work in information security was focused on operating systems, but the 1990s saw a strong trend toward network and distributed system security, and a less stringent model of security, not focused on covert channels, is now relevant.
Abstract: In the 1980s, most of the rigorous work in information security was focused on operating systems, but the 1990s saw a strong trend toward network and distributed system security. The difficulty of having an impact in securing operating systems was part of the motivation for this trend. There were two major obstacles. First, the only operating systems with significant deployment were large proprietary systems. Superimposing a security model and gaining assurance that the implementation enforced the model seemed intractable [6]. Second, the prime security model [2] was oriented toward preventing disclosure in multi-level secure systems [1], and this required ensuring that even Trojan horse software exploiting covert channels in the system’s implementation could compromise information only at a negligible rate. This was ultimately found to be unachievable [10]. These obstacles seem more tractable now. Open-source secure operating systems are now available, which are compatible with existing applications software, and hence attractive for organizations wanting more secure platforms for publicly accessible servers. Security Enhanced Linux (SELinux) in particular offers well thought out security services [4, 5]. Moreover, a less stringent model of security, not focused on covert channels, is now relevant. Commonly, a network server must service unauthenticated clients (as in retail electronic commerce), or must provide its own authentication and access control for its clients (as in a database server). Sensitive resources must reside on the same server so that transactions can complete. The programs manipulating the resources directly must be trustworthy; direct manipulation by Trojan horses is not our concern. The core goals are protecting the confidentiality and integrity of these resources. To preserve integrity, each causal chain of interactions leading from untrusted sources to sensitive destinations must traverse a program considered trusted to filter transactions. Dually, to preserve confidentiality, causal chains leading from sensitive sources to untrusted destinations must traverse a program trusted to filter outbound data. The trustworthy program determines what data can be released to the untrusted destination. In both cases, the security goal is an information flow goal. Each says that information flowing between particular
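The stated information-flow goal has a simple operational reading: in a graph of possible flows, every path from an untrusted source to a sensitive destination must pass through a trusted filtering program. One way to check that, sketched below with an invented example graph, is to delete the filter nodes and test reachability.

```python
# Tiny check of the information-flow goal stated above: every path from an
# untrusted source to a sensitive destination must traverse a trusted filtering
# program. Deleting the filter nodes and testing reachability is one simple way
# to verify this. The example graph (nodes, edges) is invented for illustration.
from collections import deque

def reachable(graph, start, blocked):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(nxt)
    return seen

flows = {                                   # directed "may cause flow to" edges
    "net_client": ["web_frontend"],
    "web_frontend": ["tx_filter"],          # trusted program that vets transactions
    "tx_filter": ["db_records"],
}
trusted_filters = {"tx_filter"}

leaks = reachable(flows, "net_client", blocked=trusted_filters) & {"db_records"}
print("integrity goal holds" if not leaks else f"unfiltered flow to {leaks}")
```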

Patent
29 Apr 2003
TL;DR: In this paper, a method for coordinating numerous vehicles within a common maneuver is characterized by receiving, at a moving vehicle, state and projected maneuver information of other vehicles participating in the maneuver via a broadcast link and utilizing the information to control the trajectory of the moving vehicle based on predicted paths of all participating vehicles for future times for which the maneuver is to be conducted.
Abstract: A method for coordinating numerous vehicles within a common maneuver is characterized by receiving, at a moving vehicle, state and projected maneuver information of other vehicles participating in the maneuver via a broadcast link and utilizing the information to control the trajectory of the moving vehicle based on predicted paths of all participating vehicles for future times for which the maneuver is to be conducted. For simple maneuvers, such as flight-deck based self-spacing of aircraft in a runway approach, the method allows each participating vehicle to virtually perform the coordinated maneuver by predictive simulation using current state data to determine if corrective measures need be undertaken, or, if the predicted maneuver shows that it would complete as planned, maintaining its current trajectory. This affords vehicle operators the freedom to perform other duties, as the operators are alerted only when a change in state is necessitated. In the runway approach problem, the state changes are primarily in a change of speed and the choice of a simple, stepped speed profile reduces the amount of speed control that must be attended to by aircraft flight crews, allows for fuel efficient flight profiles within constraints of solving the spacing problem, and allows for higher runway throughput as compared with the prior art.
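The predictive check each aircraft runs can be sketched simply: project own and lead-aircraft positions forward at current speeds and, only if the predicted spacing at the runway threshold falls short, step down to the next lower approach speed. All distances, speeds, and the spacing requirement below are illustrative assumptions, not values from the patent.

```python
# Toy sketch of the predictive check described in the patent abstract: using the
# broadcast state of the lead aircraft, the trailing aircraft projects both
# trajectories forward at current speeds and, only if predicted spacing at the
# threshold is too small, steps down to the next lower approach speed.
SPEED_STEPS_KT = [210, 190, 170, 160]          # assumed stepped speed profile
REQUIRED_SPACING_NM = 5.0                      # assumed spacing requirement

def time_to_threshold(distance_nm, speed_kt):
    return distance_nm / speed_kt              # hours

def predicted_spacing(own_dist, own_speed, lead_dist, lead_speed):
    """Own distance from the threshold when the lead aircraft crosses it."""
    t_lead = time_to_threshold(lead_dist, lead_speed)
    return own_dist - own_speed * t_lead

def self_space(own_dist, own_speed, lead_dist, lead_speed):
    for speed in [own_speed] + [s for s in SPEED_STEPS_KT if s < own_speed]:
        if predicted_spacing(own_dist, speed, lead_dist, lead_speed) >= REQUIRED_SPACING_NM:
            return speed                       # keep current speed or first adequate step
    return SPEED_STEPS_KT[-1]                  # slowest step; crew would be alerted

print(self_space(own_dist=22.0, own_speed=210, lead_dist=14.0, lead_speed=170))
```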

Journal ArticleDOI
TL;DR: Simulations suggest that a nanomemory of this type can be operated successfully at a density of 10^11 bits/cm^2, and modest device alterations and system design alternatives are suggested that might improve the performance and the scalability of the nanomemory array.
Abstract: Simulations were performed to assess the prospective performance of a 16 Kbit nanowire-based electronic nanomemory system. Commercial off-the-shelf microcomputer system modeling software was applied to evaluate the operation of an ultra-dense storage array. This array consists of demonstrated experimental non-volatile nanowire diode switches, plus encoder-decoder structures consisting of demonstrated experimental nanowire-based nanotransistors, with nanowire interconnects among all the switching devices. The results of these simulations suggest that a nanomemory of this type can be operated successfully at a density of 10^11 bits/cm^2. Furthermore, modest device alterations and system design alternatives are suggested that might improve the performance and the scalability of the nanomemory array. These simulations represent early steps toward the development of a simulation-based methodology to guide nanoelectronic system design in a manner analogous to the way such methodologies are used to guide microelectronic system design in the silicon industry.

Journal ArticleDOI
01 Dec 2003
TL;DR: The BioLINK group is organizing a CASP-like evaluation for the text data-mining community applied to biology, addressing two major bottlenecks for text mining in biology: the correct detection of gene and protein names in text and the extraction of functional information related to proteins based on the GO classification system.
Abstract: An increasing number of groups are now working in the area of text mining, focusing on a wide range of problems and applying both statistical and linguistic approaches. However, it is not possible to compare the different approaches, because there are no common standards or evaluation criteria; in addition, the various groups are addressing different problems, often using private datasets. As a result, it is impossible to determine how well the existing systems perform, and particularly what performance level can be expected in real applications. This is similar to the situation in text processing in the late 1980s, prior to the Message Understanding Conferences (MUCs). With the introduction of a common evaluation and standardized evaluation metrics as part of these conferences, it became possible to compare approaches, to identify those techniques that did or did not work and to make progress. This progress has resulted in a common pipeline of processes and a set of shared tools available to the general research community. The field of biology is ripe for a similar experiment. Inspired by this example, the BioLINK group (Biological Literature, Information and Knowledge [1]) is organizing a CASP-like evaluation for the text data-mining community applied to biology. The two main tasks specifically address two major bottlenecks for text mining in biology: (1) the correct detection of gene and protein names in text; and (2) the extraction of functional information related to proteins based on the GO classification system. For further information and participation details, see http://www.pdg.cnb.uam.es/BioLink/BioCreative.eval.html

Journal ArticleDOI
TL;DR: This work uses early-late gate processing on an objective function derived from an adaptive finite impulse response (FIR) filter that attempts to match the crosscorrelation of the received signal with a multipath-free replica of the desired crosscor correlation.
Abstract: New expressions are presented for the multipath-induced pseudorange error (i.e. bias) and variance introduced by multipath onto the time-of-arrival estimate obtained using a noncoherent early-late gate discriminator. The results include the effect of front-end bandwidth and early-late gate spacing. We also investigate a blind method for cancelling the multipath, in order to improve the time-of-arrival estimate. Our approach uses early-late gate processing on an objective function derived from an adaptive finite impulse response (FIR) filter that attempts to match the crosscorrelation of the received signal with a multipath-free replica of the desired crosscorrelation. This method performs reasonably well, and decreases the multipath-induced pseudorange error by approximately a factor of 2, even in very stressing multipath environments.
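A small numpy sketch shows the non-coherent early-late gate and how a delayed multipath ray biases its zero crossing (and hence the pseudorange). The code, sample rates, and multipath parameters are invented for illustration; this does not reproduce the paper's error expressions or the adaptive FIR canceller.

```python
# Sketch of the non-coherent early-late gate and of how multipath biases it:
# correlate the received waveform with early and late replicas (+/- d/2 chips)
# and form |E|^2 - |L|^2; a delayed, attenuated multipath ray shifts the delay
# at which the discriminator crosses zero. Generic illustration only.
import numpy as np

rng = np.random.default_rng(2)
spc = 10                                          # samples per chip
chips = rng.choice([-1.0, 1.0], size=1023)
code = np.repeat(chips, spc)                      # sampled spreading waveform

def correlate(received, lag_samples):
    return np.abs(np.dot(received, np.roll(code, lag_samples)))

def discriminator(received, test_delay, d_chips=1.0):
    half = int(d_chips / 2 * spc)
    early = correlate(received, test_delay - half)
    late = correlate(received, test_delay + half)
    return early**2 - late**2

direct = code.copy()
multipath = code + 0.5 * np.roll(code, int(0.3 * spc))   # ray delayed 0.3 chip, half amplitude

for name, rx in [("no multipath", direct), ("with multipath", multipath)]:
    # find the test delay (in samples) where the discriminator is closest to zero
    delays = np.arange(-spc, spc)
    values = np.array([discriminator(rx, d) for d in delays])
    zero_crossing = delays[np.argmin(np.abs(values))]
    print(name, "tracking point ~", zero_crossing / spc, "chips")
```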

Patent
17 Jan 2003
TL;DR: In this paper, a polynomial hash is generated based on the document, the voice signature, the encrypted voice PIN and the encrypted hash are sent to the intended recipient, and this hash is itself encrypted based on PIN to generate an encrypted hash.
Abstract: Voice signature method. The signer obtains a public key from an intended recipient of a document to be electronically signed. The signer speaks a personal identification number (PIN) to generate a voice PIN and the signer also speaks at least the signer's name. The voice PIN and voice signature are appended to the document and the voice PIN is encrypted using the public key to create an encrypted voice PIN. A polynomial hash is generated based on the document, the voice signature and the encrypted voice PIN and this hash is itself encrypted based on the PIN to generate an encrypted hash. Finally, the document, the voice signature, the encrypted voice PIN and the encrypted hash are sent to the intended recipient.

01 Jan 2003
TL;DR: In this paper, the authors present an analysis of traffic flow management (TFM) events of two types: en route events in the Pennsylvania (PA) region of the U.S. and events affecting the Chicago O'Hare airport (ORD) terminal area.
Abstract: This paper presents an analysis of traffic flow management (TFM) events of two types: en route events in the Pennsylvania (PA) region of the U.S. and events affecting the Chicago O’Hare airport (ORD) terminal area. We present a method of accounting for uncertain weather information at the time of TFM decisions, based on Bayesian decision networks. However, we show that data from past TFM events is, by itself, insufficient to distinguish between the efficacy of different strategic TFM decisions, at least for delay, cancellation, diversion, and departure backlog performance metrics. Patterns in TFM performance metrics exist, but there is wide variability across TFM events. Other, less comprehensive metrics that address how well TFM plans execute without undesirable modifications may distinguish among TFM actions better. Modeling as a means to augment data from actual TFM events is discussed. Learning and adaptation implications for the TFM system are presented.