
Showing papers by "Mitre Corporation" published in 2001


Journal ArticleDOI
TL;DR: This paper characterizes the potential contributions of cognitive radio to spectrum pooling and outlines an initial framework for formal radio-etiquette protocols.
Abstract: Wireless multimedia applications require significant bandwidth, some of which will be provided by third-generation (3G) services. Even with substantial investment in 3G infrastructure, the radio spectrum allocated to 3G will be limited. Cognitive radio offers a mechanism for the flexible pooling of radio spectrum using a new class of protocols called formal radio etiquettes. This approach could expand the bandwidth available for conventional uses (e.g., police, fire and rescue) and extend the spatial coverage of 3G in a novel way. Cognitive radio is a particular extension of software radio that employs model-based reasoning about users, multimedia content, and communications context. This paper characterizes the potential contributions of cognitive radio to spectrum pooling and outlines an initial framework for formal radio-etiquette protocols.

1,295 citations


Journal ArticleDOI
TL;DR: Recent successes have been reported in a series of question-answering evaluations, and the best systems are now able to answer more than two thirds of factual questions in this evaluation.
Abstract: As users struggle to navigate the wealth of on-line information now available, the need for automated question answering systems becomes more urgent. We need systems that allow a user to ask a question in everyday language and receive an answer quickly and succinctly, with sufficient context to validate the answer. Current search engines can return ranked lists of documents, but they do not deliver answers to the user.Question answering systems address this problem. Recent successes have been reported in a series of question-answering evaluations that started in 1999 as part of the Text Retrieval Conference (TREC). The best systems are now able to answer more than two thirds of factual questions in this evaluation.

436 citations


Journal ArticleDOI
John W. Betz1
TL;DR: A class of particularly attractive modulations called binary offset carrier (BOC) is described; important characteristics of modulations for radionavigation are presented, several specific BOC designs are introduced, and receiver processing for these modulations is described.
Abstract: Current signaling for GPS employs phase shift keying (PSK) modulation using conventional rectangular (non-return to zero) spreading symbols. Attention has been focused primarily on the design of the spreading code and selection of the keying rates. But better modulation designs are available for next-generation radionavigation systems, offering improved performance and the opportunity for spectrum sharing while retaining implementation simplicity. This paper describes a class of particularly attractive modulations called binary offset carrier (BOC). It presents important characteristics of modulations for radionavigation, introduces several specific BOC designs that satisfy different applications in evolving radionavigation systems, describes receiver processing for these modulations, and provides analytical and numerical results that describe the modulations' performance and demonstrate advantages over comparable conventional PSK modulations with rectangular spreading symbols.
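A minimal sketch of the core idea: a BOC modulation replaces the flat rectangular (NRZ) spreading symbol of conventional PSK with a square-wave subcarrier, so each chip alternates sign. The function below is illustrative only; the chip values, sampling, and parameterization are assumptions, not the actual GPS signal specification.

```python
def boc_waveform(code, subcarrier_periods_per_chip=1, samples_per_half_cycle=1):
    """Modulate a +/-1 spreading code with a square-wave subcarrier.

    In BOC(fs, fc), the subcarrier rate fs is a multiple of the chip
    rate fc; each chip is multiplied by a square wave instead of the
    flat NRZ symbol used by conventional PSK spreading.
    """
    waveform = []
    half_cycles = 2 * subcarrier_periods_per_chip  # sign flips per chip
    for chip in code:
        for k in range(half_cycles):
            sign = 1 if k % 2 == 0 else -1  # square-wave subcarrier
            waveform.extend([chip * sign] * samples_per_half_cycle)
    return waveform

# A conventional NRZ chip would be [1, 1]; BOC(1,1) makes it [1, -1].
print(boc_waveform([1, -1, 1]))  # [1, -1, -1, 1, 1, -1]
```

The sign alternation is what pushes the signal power away from the band center, enabling the spectrum sharing with legacy PSK signals that the paper discusses.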

422 citations


Journal ArticleDOI
TL;DR: The ancient art of storytelling and its adaptation in film and video can now be used to efficiently convey information in the authors' increasingly computerized world.
Abstract: A well-told story conveys great quantities of information in relatively few words in a format that is easily assimilated by the listener or viewer. People usually find it easier to understand information integrated into stories than information spelled out in serial lists (such as bulleted items in an overhead slide). Stories are also just more compelling. For example, despite its sketchiness, the story fragment in Figure 1 is loaded with information, following an analysis similar to that of John Thomas of IBM Research [5]. We find that Jim uses technology (a pager and the Internet) and is dedicated to his job. Many other pieces of information can be deduced about Jim and his work, as well as about his relationships with his coworkers, as noted in the right side of the figure. The story does not express all this information explicitly; some is only implied; for example, we can surmise that Jim is probably not at the gym and his attendance at the meeting is important to his boss and coworkers, as well as to his company's business performance. As in most stories, this one involves uncertainty.

For as long as people have been around, they have used stories to convey information, cultural values, and experiences. Since the invention of writing and the printing press until today, technology and culture have constantly provided new and increasingly sophisticated means to tell stories. More recently, technology, entertainment, and art have converged in the computer. The ancient art of storytelling and its adaptation in film and video can now be used to efficiently convey information in our increasingly computerized world.

What Storytelling Can Do for Information Visualization

317 citations


Inderjeet Mani1
01 Jan 2001
TL;DR: An overview of methods for evaluating automatic summarization systems, characterizing the challenges involved, discussing both intrinsic and extrinsic approaches, and assessing the advantages and disadvantages of specific methods.
Abstract: This paper provides an overview of different methods for evaluating automatic summarization systems. The challenges in evaluating summaries are characterized. Both intrinsic and extrinsic approaches are discussed. Methods for assessing informativeness and coherence are described. The advantages and disadvantages of specific methods are assessed, along with criteria for choosing among them. The paper concludes with some suggestions for future directions.
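One common intrinsic measure of informativeness compares a system summary against a reference summary by word overlap. The sketch below is a simplified, hypothetical illustration of that idea (in the spirit of ROUGE-1 recall), not any specific metric from the paper.

```python
def unigram_recall(reference, summary):
    """Fraction of reference-summary words covered by the system summary.

    A crude informativeness proxy: higher means the summary mentions
    more of what the reference mentions. Tokenization is naive
    whitespace splitting, purely for illustration.
    """
    ref_words = reference.lower().split()
    summary_vocab = set(summary.lower().split())
    if not ref_words:
        return 0.0
    hits = sum(1 for w in ref_words if w in summary_vocab)
    return hits / len(ref_words)

print(unigram_recall("the cat sat on the mat", "the cat sat"))  # ~0.667
```

Such automatic overlap scores are cheap but, as the paper notes for evaluation generally, they capture informativeness only loosely and say nothing about coherence.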

141 citations


Journal ArticleDOI
TL;DR: IEEE Standard 1471 establishes a framework and vocabulary for software architecture concepts, supporting good architectural description practice in both software-intensive systems and more general systems.
Abstract: IEEE Standard 1471 identifies sound practices to establish a framework and vocabulary for software architecture concepts.In 2000, the Computer Society approved IEEE Standard 1471, which documents a consensus on good architectural description practices. Five core concepts and relationships provide the foundation for the approved IEEE 1471 version: every system has an architecture, but an architecture is not a system; an architecture and an architecture description are not the same thing; architecture standards, descriptions, and development processes can differ and be developed separately; architecture descriptions are inherently multiviewed; and separating the concept of an object's view from its specification is an effective way to write architecture description standards. IEEE 1471 focuses on both software intensive systems and more general systems, such as information systems, embedded systems, systems-of-systems, product lines, and product families in which software plays a substantial role in development, operation, or evolution.

120 citations


Journal ArticleDOI
TL;DR: This paper takes a detailed look at the performance of components of an idealized question answering system on two different tasks: the TREC Question Answering task and a set of reading comprehension exams.
Abstract: In this paper, we take a detailed look at the performance of components of an idealized question answering system on two different tasks: the TREC Question Answering task and a set of reading comprehension exams. We carry out three types of analysis: inherent properties of the data, feature analysis, and performance bounds. Based on these analyses we explain some of the performance results of the current generation of Q/A systems and make predictions on future work. In particular, we present four findings: (1) Q/A system performance is correlated with answer repetition; (2) relative overlap scores are more effective than absolute overlap scores; (3) equivalence classes on scoring functions can be used to quantify performance bounds; and (4) perfect answer typing still leaves a great deal of ambiguity for a Q/A system because sentences often contain several items of the same type.
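Finding (2) above, that relative overlap scores outperform absolute overlap scores, can be illustrated with a toy example. The scoring functions and sentences below are hypothetical sketches, not the paper's actual features: a long sentence can accumulate a high absolute word overlap with the question while a short, on-point answer sentence wins on relative overlap.

```python
def absolute_overlap(question, sentence):
    """Raw count of question words appearing in the sentence."""
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s)

def relative_overlap(question, sentence):
    """Overlap normalized by the combined vocabulary (Jaccard similarity)."""
    q = set(question.lower().split())
    s = set(sentence.lower().split())
    return len(q & s) / len(q | s) if q | s else 0.0

q = "who wrote hamlet"
short = "shakespeare wrote hamlet"
long_s = "nobody knows exactly who first wrote commentary about hamlet and its many early editions"

# Absolute overlap favors the long, off-target sentence;
# relative overlap favors the short sentence that answers the question.
print(absolute_overlap(q, long_s), absolute_overlap(q, short))
print(relative_overlap(q, short) > relative_overlap(q, long_s))  # True
```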

110 citations


Journal ArticleDOI
TL;DR: The terrestrial and oceanic sources of moisture that supply warm-season rainfall to the Mississippi River basin and its subbasins are examined over a 36-yr period (1963-98) using hourly observed precipitation, National Centers for Environmental Prediction (NCEP) reanalyses at 6-h intervals, and a back-trajectory algorithm.
Abstract: The terrestrial and oceanic sources of moisture that supply warm-season rainfall to the Mississippi River basin and its subbasins are examined over a 36-yr period (1963–98). Using hourly observed precipitation, National Centers for Environmental Prediction (NCEP) reanalyses at 6-h intervals, and a back-trajectory algorithm, the water falling during observed precipitation events is probabilistically traced to its most recent surface evaporative source, terrestrial or oceanic. Maps of these sources generally show dual maxima, one terrestrial and one oceanic, in spring and a dominance of terrestrial sources in summer. Pentad time series averaged over the 36 years show a late-summer maximum of precipitation recycling in all but the Missouri subbasin. During the 36 years analyzed, 32% of warm-season precipitation in the entire Mississippi basin originated as evaporation within the basin (recycled). About 20% of warm-season precipitation was contributed directly by evaporation from the Gulf of Mexico a...

104 citations


Journal ArticleDOI
TL;DR: The Extensible Markup Language, HTML's likely successor for capturing much Web content, is receiving a great deal of attention from the computing and Internet communities, and although the hype raises unrealistic expectations, XML does reduce the obstacles to sharing data among diverse applications and databases.
Abstract: The Extensible Markup Language, HTML's likely successor for capturing much Web content, is receiving a great deal of attention from the computing and Internet communities. Although the hype raises unrealistic expectations, XML does reduce the obstacles to sharing data among diverse applications and databases by providing a common format for expressing data structure and content. Although some benefits are already within reach, others will require new database technologies and vocabularies for affected application communities.
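The "common format for expressing data structure and content" is concrete enough to show in a few lines. This is a generic illustration using Python's standard-library parser; the order record and its tag names are invented for the example.

```python
import xml.etree.ElementTree as ET

# Hypothetical data record: structure (elements, attributes) and
# content (text) travel together in one self-describing document.
doc = """
<order id="1001">
  <customer>Acme Corp</customer>
  <item sku="X-42" qty="3"/>
</order>
"""

root = ET.fromstring(doc)
print(root.tag, root.attrib["id"])      # order 1001
print(root.find("customer").text)       # Acme Corp
print(root.find("item").attrib["qty"])  # 3
```

Any application that agrees on the element names can consume this document without knowing how the producer stores its data internally, which is exactly the interoperability benefit the abstract describes.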

96 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: This work investigates text classification by format style, i.e. "genre", and demonstrates that, by complementing topic classification, it can significantly improve information retrieval.

Abstract: Categorization of text in IR has traditionally focused on topic. As use of the Internet and e-mail increases, categorization has become a key area of research as users demand methods of prioritizing documents. This work investigates text classification by format style, i.e. "genre", and demonstrates that, by complementing topic classification, it can significantly improve retrieval of information. The paper compares use of presentation features to word features, and the combination thereof, using Naive Bayes, C4.5 and SVM classifiers. Results show use of combined feature sets with SVM yields 92% classification accuracy in sorting seven genres.
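To make the classifier comparison concrete, here is a minimal from-scratch multinomial Naive Bayes (one of the three classifiers the paper compares) with Laplace smoothing. The two-genre training data is invented for illustration; the paper's actual features include presentation style, not just words.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (word_list, genre). Returns doc counts per class,
    per-class word counts, and the vocabulary."""
    class_docs = defaultdict(int)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, genre in docs:
        class_docs[genre] += 1
        word_counts[genre].update(words)
        vocab.update(words)
    return class_docs, word_counts, vocab

def classify_nb(words, class_docs, word_counts, vocab):
    """Pick the genre maximizing log P(genre) + sum log P(word|genre)."""
    total_docs = sum(class_docs.values())
    best, best_lp = None, float("-inf")
    for genre, ndocs in class_docs.items():
        lp = math.log(ndocs / total_docs)
        denom = sum(word_counts[genre].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[genre][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = genre, lp
    return best

docs = [("dear sir regards".split(), "letter"),
        ("abstract results method".split(), "paper")]
model = train_nb(docs)
print(classify_nb(["dear", "regards"], *model))  # letter
```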

95 citations


Proceedings ArticleDOI
06 Jul 2001
TL;DR: A biographical multi-document summarizer that summarizes information about people described in the news, using corpus statistics along with linguistic knowledge to select and merge descriptions of people from a document collection, removing redundant descriptions.
Abstract: We describe a biographical multi-document summarizer that summarizes information about people described in the news. The summarizer uses corpus statistics along with linguistic knowledge to select and merge descriptions of people from a document collection, removing redundant descriptions. The summarization components have been extensively evaluated for coherence, accuracy, and non-redundancy of the descriptions produced.

Journal ArticleDOI
TL;DR: The Common Vulnerabilities and Exposures (CVE) initiative seeks the adoption of a common naming practice for describing software vulnerabilities, which will be included within security tools and services and on the fix sites of commercial and open source software package providers.
Abstract: Most organizations recognize the importance of cyber security and are implementing various forms of protection. However, many are failing to find and fix known security problems in the software packages they use as the building blocks of their networks and systems, a vulnerability that a hacker can exploit to bypass all other efforts to secure the enterprise. The Common Vulnerabilities and Exposures (CVE) initiative seeks to avoid such disasters and transform this area from a liability to a key asset in the fight to build and maintain secure systems. Coordinating international, community-based efforts from industry, government and academia, CVE strives to find and fix software product vulnerabilities more rapidly, predictably, and efficiently. The initiative seeks the adoption of a common naming practice for describing software vulnerabilities. Once adopted, these names will be included within security tools and services and on the fix sites of commercial and open source software package providers. As vendors respond to more user requests for CVE-compatible fix sites, securing the enterprise will gradually include the complete cycle of finding, analyzing, and fixing vulnerabilities.
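The common naming practice boils down to a well-defined identifier syntax that tools can validate. The sketch below checks the CVE-YYYY-NNNN form (with the historical CAN- prefix for candidate entries); treating the sequence number as 4 or more digits reflects the later widening of the scheme, so take the exact pattern as illustrative rather than normative.

```python
import re

# CVE-YYYY-NNNN, plus the CAN- prefix once used for candidate entries.
CVE_RE = re.compile(r"^(CVE|CAN)-\d{4}-\d{4,}$")

def is_cve_name(s):
    """True if s is syntactically a CVE (or candidate) identifier."""
    return bool(CVE_RE.match(s))

print(is_cve_name("CVE-2001-0144"))  # True
print(is_cve_name("cve-2001-1"))     # False
```

A security tool that recognizes these names can cross-reference its findings against any other CVE-compatible tool or fix site, which is the interoperability the initiative targets.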

Journal ArticleDOI
TL;DR: This article describes an approach for providing dynamic quality of service (QoS) support in a variable bandwidth network, which may include wireless links and mobile nodes, and implemented a new protocol called dynamic resource reservation protocol (dRSVP) and a new QoS application program interface (API).
Abstract: This article describes an approach for providing dynamic quality of service (QoS) support in a variable bandwidth network, which may include wireless links and mobile nodes. The dynamic QoS approach centers on the notion of providing QoS support at some point within a range requested by applications. To utilize dynamic QoS, applications must be capable of adapting to the level of QoS provided by the network, which may vary during the course of a connection. To demonstrate and evaluate the dynamic QoS concept, we have implemented a new protocol called dynamic resource reservation protocol (dRSVP) and a new QoS application program interface (API). The paper describes this new protocol and API and also discusses our experience with adaptive streaming video and audio applications that work with the new protocol in a testbed network, including wireless local area network connectivity and wireless link connectivity emulated over the wired Ethernet. Qualitative and quantitative assessments of the dynamic RSVP protocol are provided.
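The central mechanism, granting each flow some level of service within its requested [min, max] range, can be sketched as a toy allocator. This is a hypothetical illustration of range-based admission and sharing, not the dRSVP protocol's actual algorithm: grant every flow its minimum (or reject the set), then share the leftover capacity equally, capped at each flow's maximum.

```python
def allocate(requests, capacity):
    """requests: list of (min_bw, max_bw) pairs. Returns per-flow grants,
    or None if even the minimums exceed capacity (admission fails)."""
    mins = sum(lo for lo, hi in requests)
    if mins > capacity:
        return None
    grants = [lo for lo, hi in requests]
    spare = capacity - mins
    # Repeatedly share spare capacity among flows still below their max.
    while spare > 1e-9:
        open_idx = [i for i, (lo, hi) in enumerate(requests)
                    if grants[i] < hi - 1e-9]
        if not open_idx:
            break
        share = spare / len(open_idx)
        spare = 0.0
        for i in open_idx:
            room = requests[i][1] - grants[i]
            add = min(share, room)
            grants[i] += add
            spare += share - add  # capped flows return their excess
    return grants

# Two flows requesting [1,5] and [2,3] Mb/s on a 6 Mb/s link.
print(allocate([(1.0, 5.0), (2.0, 3.0)], 6.0))  # [3.0, 3.0]
```

When link bandwidth later changes, rerunning such an allocator yields the new grant levels that adaptive applications must track, which is the dynamic aspect of the paper's approach.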

Patent
17 Sep 2001
TL;DR: In this paper, a spatial null steering microstrip antenna array comprising two concentric microstrip patch antenna elements is used as an auxiliary element in nulling interference received by an outer annular ring antenna disposed around the inner antenna.
Abstract: A spatial null steering microstrip antenna array comprising two concentric microstrip patch antenna elements. An inner circular antenna is used as an auxiliary element in nulling interference received by an outer annular ring antenna disposed around the inner antenna. The outer annular antenna is resonant in a higher order mode but forced to generate a right hand circularly polarized lower order (TM11) far field radiation pattern, thereby allowing co-modal phase tracking between the two antenna elements for adaptive cancellation. Each antenna element is appropriately excited by symmetrically spaced probes. Other applications of the antenna array include GPS multipath suppression, simultaneous satellite and terrestrial communications, and co-site interference suppression. Dual frequency band applications are achieved by stacked array configurations.

Journal ArticleDOI
TL;DR: A new procedure is developed that has better performance for large estimation errors and, when used to initialize the Weiss and Friedlander (1991) MUSIC-based iterative technique, yields significant improvement over existing techniques for both small and large errors.

Journal ArticleDOI
TL;DR: The approach outlined in this paper conservatively bounds the ionospheric errors even for the worst observed ionospheric conditions to date, using data sets taken from the operational receivers in the WAAS reference station network.
Abstract: The approach outlined in this paper conservatively bounds the ionospheric errors even for the worst observed ionospheric conditions to date, using data sets taken from the operational receivers in the WAAS reference station network.

Proceedings ArticleDOI
05 Nov 2001
TL;DR: This paper shows how the Dolev-Yao model may be used for protocol analysis, while a further analysis gives a quantitative bound on the extent to which real cryptographic primitives may diverge from the idealized model.
Abstract: Dolev and Yao initiated an approach to studying cryptographic protocols which abstracts from possible problems with the cryptography so as to focus on the structural aspects of the protocol. Recent work in this framework has developed easily applicable methods to determine many security properties of protocols. A separate line of work, initiated by Bellare and Rogaway, analyzes the way specific cryptographic primitives are used in protocols. It gives asymptotic bounds on the risk of failures of secrecy or authentication.In this paper we show how the Dolev-Yao model may be used for protocol analysis, while a further analysis gives a quantitative bound on the extent to which real cryptographic primitives may diverge from the idealized model. We develop this method where the cryptographic primitives are based on Carter-Wegman universal classes of hash functions. This choice allows us to give specific quantitative bounds rather than simply asymptotic bounds.
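A Carter-Wegman universal class of hash functions, the primitive on which the paper's quantitative bounds rest, is simple to exhibit. The classic family is h_{a,b}(x) = ((a·x + b) mod p) mod m with p prime and (a, b) chosen at random; for any distinct x ≠ y, the collision probability over that random choice is at most about 1/m. The prime and parameters below are illustrative.

```python
import random

P = 2_147_483_647  # Mersenne prime 2^31 - 1; keys must be below P

def make_hash(m, rng=random):
    """Draw one member h_{a,b} of the universal family
    h(x) = ((a*x + b) mod P) mod m."""
    a = rng.randrange(1, P)   # a must be nonzero
    b = rng.randrange(0, P)
    return lambda x: ((a * x + b) % P) % m

h = make_hash(1024)
print(0 <= h(123456) < 1024)  # True
```

Because the collision bound is exact and non-asymptotic, an analysis built on this family can state specific numeric failure probabilities, which is why the paper obtains quantitative rather than merely asymptotic bounds.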

Journal ArticleDOI
TL;DR: Expert Finder and XperNet are created, two software programs that automatically profile topics of interest and expertise associated with employees based on employees’ tool use, publications, project roles, and written communication with others.
Abstract: Communications of the ACM, December 2001 (Vol. 44, No. 12). While computer-supported collaborative virtual environments have been successfully applied to revolutionize distance learning, distributed design, and collaborative analysis and planning (see Ragusa and Bochenek's introduction to this section), a fundamental challenge of these systems is establishing the right teams of individuals during interactive problem solving for consultation, coordination, or collaboration. Motivated by our use of place-based collaborative environments for analysis and decision support [3], we created Expert Finder and XperNet, two software programs that automatically profile topics of interest and expertise associated with employees based on employees' tool use, publications, project roles, and written communication with others. Figure 1 illustrates Expert Finder in action. In this case a user types in the keywords "data mining" and Expert Finder replies with a rank-ordered list of employees whose expertise profile, inferred from a variety of evidence sources, best matches this query. Evidence includes the frequency of documents published by an employee on this topic, contents of any published resume, and documents that mention employees in conjunction with a particular topic (for example, corporate newsletters). In the latter case, information extraction technology is used to detect names within unstructured documents. These names are then correlated with topic areas in the documents. Despite low human inter-subject agreement, empirical evaluations [1] comparing 10 technical resource managers' performances with Expert Finder on five specialty areas (data mining, chemicals, human-computer interaction, network security, and collaboration) demonstrated that Expert Finder performed at 60% precision and 40% recall when appro... ("Expert Finding for Collaborative Virtual Environments," Mark Maybury, Ray D'Amore, and David House)
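The core ranking step, correlating extracted employee names with topic mentions and returning a rank-ordered list, can be caricatured in a few lines. This is a hypothetical sketch; Expert Finder's real model combines multiple weighted evidence sources, not a bare mention count.

```python
from collections import Counter

def rank_experts(evidence, query_topic):
    """evidence: list of (employee, topic) mentions harvested from
    publications, resumes, newsletters, etc. Returns employees ranked
    by how often they are mentioned with the queried topic."""
    counts = Counter(emp for emp, topic in evidence if topic == query_topic)
    return [emp for emp, _ in counts.most_common()]

evidence = [("alice", "data mining"), ("bob", "data mining"),
            ("alice", "data mining"), ("carol", "network security")]
print(rank_experts(evidence, "data mining"))  # ['alice', 'bob']
```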

Proceedings ArticleDOI
07 Oct 2001
TL;DR: It is shown that propagated effects are significant for the 1st leg after leaving an airport affected by reduced capacities and diminish from leg to leg.
Abstract: The Detailed Policy Assessment Tool (DPAT) models the propagation of delay throughout a system of airports and sectors. We present a DPAT analysis to show the effects of simulated changes in capacity due to inclement weather. Local delays are dependent on the capacity to demand ratio of departures and arrivals. We show that propagated effects are significant for the 1st leg after leaving an airport affected by reduced capacities and diminish from leg to leg.
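The diminishing leg-to-leg effect can be illustrated with a toy propagation model: each downstream leg's schedule slack absorbs part of the inbound delay. This is a hypothetical sketch of the qualitative behavior, not DPAT's actual model, which simulates whole networks of airports and sectors.

```python
def propagate_delay(initial_delay, slacks):
    """Propagate a departure delay (minutes) across subsequent legs.
    Each leg's schedule slack absorbs part of the inbound delay;
    delay never goes negative. Returns the delay after each leg."""
    delays = []
    d = initial_delay
    for slack in slacks:
        d = max(0.0, d - slack)
        delays.append(d)
    return delays

# A 45-minute delay at a capacity-reduced airport, 15 minutes of
# slack per downstream leg: largest effect on the 1st leg, then decay.
print(propagate_delay(45.0, [15.0, 15.0, 15.0, 15.0]))
# [30.0, 15.0, 0.0, 0.0]
```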

Proceedings ArticleDOI
Inderjeet Mani1
05 Oct 2001
TL;DR: Discusses the significance of recent developments in text summarization technology, which companies increasingly offer commercially, often bundled with information retrieval tools, driven in part by the need for corporate knowledge management.
Abstract: With the explosion in the quantity of on-line text and multimedia information in recent years, demand for text summarization technology is growing. Increased pressure for technology advances is coming from users of the web, on-line information sources, and new mobile devices, as well as from the need for corporate knowledge management. Commercial companies are increasingly starting to offer text summarization capabilities, often bundled with information retrieval tools. In this paper, I will discuss the significance of some recent developments in summarization technology.

Journal ArticleDOI
TL;DR: An overview of data mining is presented, followed by summaries of several emerging standards and proposals that have the potential to change the way data mining tools are built.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: This panel, which includes developers of simulation-optimization packages, will discuss this untapped potential, barriers to broader applicability, and approaches for overcoming these barriers.
Abstract: The combination of simulation and optimization, essentially unheard of in practice a decade ago, is much more accessible today, thanks in large part to the development of commercial optimization software designed for use with existing simulation packages. Despite this growth, untapped applications abound. This panel, which includes developers of simulation-optimization packages, will discuss this untapped potential, barriers to broader applicability, and approaches for overcoming these barriers. This paper starts off with a brief introduction by the panel’s organizer, followed by position statements from the panelists.

Proceedings ArticleDOI
01 Sep 2001
TL;DR: Working with data from the air travel domain, this work identified a number of striking differences between the human-human and human-computer interactions.
Abstract: While researchers have many intuitions about the differences between human-computer and human-human interactions, most of these have not previously been subject to empirical scrutiny. This work presents some initial experiments in this direction, with the ultimate goal being to use what we learn to improve computer dialogue systems. Working with data from the air travel domain, we identified a number of striking differences between the human-human and human-computer interactions.

Proceedings ArticleDOI
07 May 2001
TL;DR: The fully complex NN design is extended to employ other complex activation functions of the hyperbolic, circular, and their inverse function family to restore the nonlinear amplitude and phase distortions of non-constant modulus modulated signals.
Abstract: Designing a neural network (NN) for processing complex signals is a challenging task due to the lack of bounded and differentiable nonlinear activation functions in the entire complex domain C. To avoid this difficulty, 'splitting', i.e., using uncoupled real sigmoidal functions for the real and imaginary components has been the traditional approach, and a number of fully complex activation functions introduced can only correct for magnitude distortion but can not handle phase distortion. We have previously introduced a fully complex NN that uses a hyperbolic tangent function defined in the entire complex domain and showed that for most practical signal processing problems, it is sufficient to have an activation function that is bounded and differentiable almost everywhere in the complex domain. In this paper, the fully complex NN design is extended to employ other complex activation functions of the hyperbolic, circular, and their inverse function family. They are shown to successfully restore the nonlinear amplitude and phase distortions of non-constant modulus modulated signals.
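The "bounded and differentiable almost everywhere" point is easy to demonstrate with the fully complex hyperbolic tangent itself, using Python's standard `cmath`. The example values are illustrative; tanh on C is well behaved except at its poles z = j(π/2 + kπ), where its magnitude blows up.

```python
import cmath

# A fully complex activation: tanh extended to the complex plane.
z = 0.5 + 0.5j
print(cmath.tanh(z))  # a finite complex activation value

# Near a pole at j*pi/2, the magnitude diverges, which is why
# "differentiable almost everywhere" (not everywhere) is the right claim.
near_pole = cmath.tanh(1j * (cmath.pi / 2 - 1e-6))
print(abs(near_pole) > 1e5)  # True
```

Since tanh(jy) = j·tan(y), approaching y = π/2 drives the output magnitude toward infinity; for practical signals that avoid these isolated singularities, the paper argues this is sufficient.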

Proceedings ArticleDOI
07 Jul 2001
TL;DR: A set of guidelines for annotating time expressions with a canonicalized representation of the times they refer to, and methods for extracting such time expressions from multiple languages are described.
Abstract: This paper introduces a set of guidelines for annotating time expressions with a canonicalized representation of the times they refer to, and describes methods for extracting such time expressions from multiple languages.
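Extraction plus canonicalization of a time expression can be sketched with a single regex and an ISO 8601-style output. The pattern and canonical form here are assumptions for illustration, covering only one English date shape; they are not the paper's actual annotation guidelines or multilingual extractors.

```python
import re

MONTHS = {"january": 1, "february": 2, "march": 3, "april": 4, "may": 5,
          "june": 6, "july": 7, "august": 8, "september": 9, "october": 10,
          "november": 11, "december": 12}

# Matches e.g. "July 7, 2001" (one shape among many a real system handles).
PATTERN = re.compile(r"\b(" + "|".join(MONTHS) + r")\s+(\d{1,2}),\s*(\d{4})\b",
                     re.IGNORECASE)

def extract_times(text):
    """Return (surface_string, canonical YYYY-MM-DD) pairs."""
    out = []
    for m in PATTERN.finditer(text):
        month = MONTHS[m.group(1).lower()]
        out.append((m.group(0),
                    f"{m.group(3)}-{month:02d}-{int(m.group(2)):02d}"))
    return out

print(extract_times("The workshop was held on July 7, 2001 in Toulouse."))
# [('July 7, 2001', '2001-07-07')]
```

Mapping every surface form ("July 7, 2001", "7 juillet 2001", "last Saturday") to one canonical value is what makes annotated times comparable across documents and languages.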

Patent
07 Nov 2001
TL;DR: In this paper, a monomolecular electronic device is provided that includes a molecular diode having at least one barrier insulating group chemically bonded between a pair of molecular ring structures.
Abstract: A monomolecular electronic device is provided that includes a molecular diode having at least one barrier insulating group chemically bonded between a pair of molecular ring structures to form a pair of diode sections, at least one dopant group chemically bonded to one of the pair of diode sections, and a molecular gate structure chemically bonded to the one diode section for influencing an intrinsic bias formed by the at least one dopant group. The device thus produced operates as a molecular electronic transistor, exhibiting both switching and power gain. By adding yet another insulating group to the other of the diode sections, an electrical resistance is formed to define an output which represents an inverter or NOT gate function. The NOT gate can be chemically bonded to molecular diode-diode logic structures to form a single molecule that exhibits complex Boolean functions and power gain.

Journal ArticleDOI
TL;DR: Creating a productive collaborative environment requires a delicate balance of technology, knowledge, and trust.
Abstract: Creating a productive collaborative environment requires a delicate balance of technology, knowledge, and trust.

01 Jan 2001
TL;DR: The Controller Acceptance Rating Scale (CARS) was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool (PFAS).
Abstract: The measurement of operational acceptability is important for the development, implementation, and evolution of air traffic management decision support tools. The Controller Acceptance Rating Scale (CARS) was developed at NASA Ames Research Center for the development and evaluation of the Passive Final Approach Spacing Tool. CARS was modeled after a well-known pilot evaluation rating instrument, the Cooper-Harper Scale, and has since been used in the evaluation of the User Request Evaluation Tool, developed by MITRE's Center for Advanced Aviation System Development. In this paper, we provide a discussion of the development of CARS and an analysis of the empirical data collected with CARS to examine construct validity. Results of intraclass correlations indicated statistically significant reliability for the CARS. From the subjective workload data that were collected in conjunction with the CARS, it appears that the expected set of workload attributes was correlated with the CARS. As expected, the analysis also showed that CARS was a sensitive indicator of the impact of decision support tools on controller operations. Suggestions for future CARS development and its improvement are also provided.

Proceedings ArticleDOI
26 Nov 2001
TL;DR: In this paper, the unscented transformation is extended to use extra test points beyond the minimum necessary to determine the second moments of a multivariate normal distribution, which can improve the estimated mean and variance of the transformed distribution when the transforming function or its derivatives have discontinuities.

Abstract: The unscented transformation is extended to use extra test points beyond the minimum necessary to determine the second moments of a multivariate normal distribution. The additional test points can improve the estimated mean and variance of the transformed distribution when the transforming function or its derivatives have discontinuities.
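For reference, the baseline the paper extends is the standard minimal-point unscented transformation: deterministically chosen sigma points are pushed through the nonlinearity and reweighted to estimate the output mean and variance. A scalar sketch (the paper works in the multivariate setting; the κ value here is illustrative):

```python
import math

def unscented_transform_1d(mu, var, f, kappa=2.0):
    """Propagate N(mu, var) through f with the minimal 3-point
    unscented transformation: sigma points at the mean and at
    +/- sqrt((n + kappa) * var), with matching weights."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mu, mu + spread, mu - spread]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2 * (n + kappa))
    weights = [w0, wi, wi]
    ys = [f(x) for x in points]
    mean = sum(w * y for w, y in zip(weights, ys))
    variance = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, variance

# For f(x) = x^2 with x ~ N(0, 1), the true output mean is 1 (and variance 2).
mean, var = unscented_transform_1d(0.0, 1.0, lambda x: x * x)
print(round(mean, 6), round(var, 6))  # 1.0 2.0
```

These three points pin down exactly the second moments of the input; the paper's extension adds test points beyond this minimum to better handle discontinuous transforming functions.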

Proceedings ArticleDOI
08 Oct 2001
TL;DR: It is shown that most simple formulations of this problem are NP-hard; lower bounds on the value of the optimal load are established, and it is shown that if there are no memory constraints on the servers, there is an allocation algorithm that is within a factor 2 of the optimal solution.
Abstract: Given the increasing traffic on the World Wide Web (Web), it is difficult for a single popular Web server to handle the demand from its many clients. By clustering a group of Web servers, it is possible to reduce the origin Web server's load significantly and reduce users' response time when accessing a Web document. A fundamental question is how to allocate Web documents among these servers in order to achieve load balancing. In this paper, we are given a collection of documents to be stored on a cluster of Web servers. Each of the servers is associated with resource limits in its memory and its number of HTTP connections. Each document has an associated size and access cost. The problem is to allocate the documents among the servers so that no server's memory size is exceeded, and the load is balanced as equally as possible. In this paper, we show that most simple formulations of this problem are NP-hard, we establish lower bounds on the value of the optimal load, and we show that if there are no memory constraints for all the servers, then there is an allocation algorithm that is within a factor 2 of the optimal solution. We show that if all servers have the same number of HTTP connections and the same memory size, then a feasible allocation is achieved within a factor 4 of the optimal solution using at most 4 times the optimal memory size. We also provide improved approximation results for the case where documents are relatively small.
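In the memory-unconstrained setting, a simple greedy scheme in the same spirit as the paper's factor-2 algorithm assigns each document (heaviest first) to the currently least-loaded server. This sketch is a generic greedy load balancer for illustration, not necessarily the paper's exact algorithm.

```python
import heapq

def greedy_allocate(doc_loads, n_servers):
    """Assign document access loads to servers, heaviest documents first,
    always onto the currently least-loaded server (min-heap on total load).
    Returns (total_load, server_id, assigned_loads) per server."""
    servers = [(0.0, i, []) for i in range(n_servers)]
    heapq.heapify(servers)
    for load in sorted(doc_loads, reverse=True):
        total, i, docs = heapq.heappop(servers)
        heapq.heappush(servers, (total + load, i, docs + [load]))
    return sorted(servers)

# Five documents with access loads 7, 5, 4, 3, 1 across two servers.
for total, i, docs in greedy_allocate([7, 5, 4, 3, 1], 2):
    print(i, total, docs)
```

On this instance both servers end at load 10, a perfect split; in general, greedy schemes of this kind come with provable constant-factor guarantees relative to the optimal maximum load, which is the flavor of result the paper establishes.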