Institution

Mitre Corporation

Company, Bedford, Massachusetts, United States

About: Mitre Corporation is a company based in Bedford, Massachusetts, United States. It is known for research contributions in the topics of air traffic control and the National Airspace System. The organization has 4884 authors who have published 6053 publications, receiving 124808 citations. The organization is also known as Mitre or MITRE.


Papers
Journal ArticleDOI
Ames, Gasser, Schell
TL;DR: The security kernel approach described here directly addresses the size and complexity problem by limiting the protection mechanism to a small portion of the system; it adapts the concept of the reference monitor, an abstract notion drawn from the models of Butler Lampson.
Abstract: Providing highly reliable protection for computerized information has traditionally been a game of wits. No sooner are security controls introduced into systems than penetrators find ways to circumvent them. Security kernel technology provides a conceptual base on which to build secure computer systems, thereby replacing this game of wits with a methodical design process. The kernel approach is equally applicable to all types of systems, from general-purpose, multiuser operating systems to special-purpose systems such as communication processors, wherever the protection of shared information is a concern. Most computer installations rely solely on a physical security perimeter, protecting the computer and its users by guards, dogs, and fences. Communications between the computer and remote devices may be encrypted to geographically extend the security perimeter, but if only physical security is used, all users can potentially access all information in the computer system. Consequently, all users must be trusted to the same degree. When the system contains sensitive information that only certain users should access, we must introduce additional protection mechanisms. One solution is to give each class of users a separate machine. This solution is becoming less costly because of declining hardware prices, but it does not address the controlled sharing of information among users. Sharing information within a single computer requires internal controls to isolate sensitive information. Continual efforts are being made to develop reliable internal security controls solely through tenacity and hard work. Unfortunately, these attempts have been uniformly unsuccessful for a number of reasons. The first is that the operating system and utility software are typically large and complex. The second is that no one has precisely defined the security provided by the internal controls. Finally, little has been done to ensure the correctness of the security controls that have been implemented. The security kernel approach described here directly addresses the size and complexity problem by limiting the protection mechanism to a small portion of the system. The second and third problems are addressed by clearly defining a security policy and then following a rigorous methodology that includes developing a mathematical model, constructing a precise specification of behavior, and coding in a high-level language. The security kernel approach is based on the concept of the reference monitor, an abstract notion adapted from the models of Butler Lampson [1]. The reference monitor provides an underlying security theory for conceptualizing the idea of protection. In a reference monitor, all …
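The reference-monitor idea from the abstract can be sketched in a few lines: one small, central mediation point checks every access request against an explicit policy. The labels and sample policy below are invented for illustration, not taken from the paper.

```python
# Minimal sketch of a reference monitor: every access request is mediated
# by one small, central check against an explicit policy. Labels and the
# sample policy are illustrative only.

# Clearance/classification levels, lowest to highest.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2}

def reference_monitor(subject_clearance, object_label, operation):
    """Allow a read only if the subject's clearance dominates the
    object's classification (a simple-security-style rule)."""
    if operation == "read":
        return LEVELS[subject_clearance] >= LEVELS[object_label]
    # Deny anything the policy does not explicitly permit.
    return False

# A cleared subject may read down; an uncleared one may not read up.
assert reference_monitor("secret", "confidential", "read") is True
assert reference_monitor("unclassified", "secret", "read") is False
```

Keeping this check small and isolated is the point of the kernel approach: the mediation logic, not the whole operating system, is what must be verified.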

110 citations

Proceedings ArticleDOI
07 Jul 1997
TL;DR: This paper presents a trainable rule-based algorithm for performing word segmentation that provides a simple, language-independent alternative to large-scale lexical-based segmenters requiring large amounts of knowledge engineering.
Abstract: This paper presents a trainable rule-based algorithm for performing word segmentation. The algorithm provides a simple, language-independent alternative to large-scale lexical-based segmenters requiring large amounts of knowledge engineering. As a stand-alone segmenter, we show our algorithm to produce high performance Chinese segmentation. In addition, we show the transformation-based algorithm to be effective in improving the output of several existing word segmentation algorithms in three different languages.
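The trainable rule-based (transformation-based) approach can be illustrated with a toy sketch: start from a naive initial segmentation, then apply an ordered list of learned rules that remove boundaries in context. The rules and text below are invented for illustration and are far simpler than a trained segmenter's rule set.

```python
# Sketch of transformation-based word segmentation: begin with a naive
# baseline segmentation, then apply ordered rules that delete boundaries
# between adjacent tokens. Toy rules and input are illustrative only.

def initial_segmentation(text):
    # Naive baseline: a boundary after every character.
    return list(text)

def apply_rule(tokens, rule):
    """Merge two adjacent tokens wherever they match a learned bigram rule."""
    left, right = rule
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] == left and tokens[i + 1] == right:
            out.append(left + right)  # delete the boundary between them
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

# Ordered rule list, as training would produce (order matters).
rules = [("a", "b"), ("ab", "c")]
tokens = initial_segmentation("abcab")
for r in rules:
    tokens = apply_rule(tokens, r)
print(tokens)  # -> ['abc', 'ab']
```

Because the learned artifact is just an ordered rule list, the same machinery can also post-correct the output of an existing segmenter, which is how the paper improves other systems' output.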

110 citations

Journal ArticleDOI
TL;DR: This paper takes a detailed look at the performance of components of an idealized question answering system on two different tasks: the TREC Question Answering task and a set of reading comprehension exams.
Abstract: In this paper, we take a detailed look at the performance of components of an idealized question answering system on two different tasks: the TREC Question Answering task and a set of reading comprehension exams. We carry out three types of analysis: inherent properties of the data, feature analysis, and performance bounds. Based on these analyses we explain some of the performance results of the current generation of Q/A systems and make predictions on future work. In particular, we present four findings: (1) Q/A system performance is correlated with answer repetition; (2) relative overlap scores are more effective than absolute overlap scores; (3) equivalence classes on scoring functions can be used to quantify performance bounds; and (4) perfect answer typing still leaves a great deal of ambiguity for a Q/A system because sentences often contain several items of the same type.
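Finding (2), that relative overlap scores beat absolute ones, can be illustrated with a toy scorer: absolute overlap counts shared words and so favors long sentences, while normalizing by sentence length corrects that bias. The scoring functions and example sentences below are illustrative, not the paper's exact metrics.

```python
# Sketch of absolute vs. relative word-overlap scoring between a question
# and candidate answer sentences. Illustrative metrics, not the paper's.

def absolute_overlap(question, sentence):
    """Raw count of word types shared by question and sentence."""
    return len(set(question.split()) & set(sentence.split()))

def relative_overlap(question, sentence):
    """Shared word types normalized by the sentence's vocabulary size."""
    words = set(sentence.split())
    return len(set(question.split()) & words) / len(words)

q = "when did the mission launch"
s1 = "the mission launch happened in 1997"                               # short, on-topic
s2 = "the the mission report noted that the launch site did open later"  # long, diffuse

# Absolute overlap rewards the longer, more diffuse sentence...
print(absolute_overlap(q, s1), absolute_overlap(q, s2))  # -> 3 4
# ...while relative overlap prefers the concise, on-topic one.
print(relative_overlap(q, s1) > relative_overlap(q, s2))  # -> True
```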

110 citations

Proceedings ArticleDOI
01 Dec 1998
TL;DR: The paper provides a description of the development of the HLA, a technical description of the key elements of the architecture, and a discussion of HLA implementation, including HLA support processes and software.
Abstract: The DoD High Level Architecture (HLA) provides the specification of a common technical architecture for use across all classes of simulations in the US Department of Defense. It provides the structural basis for simulation interoperability. The baseline definition of the HLA includes the HLA rules, the HLA interface specification (IFSpec), and the HLA object model template (OMT). The HLA rules are a set of 10 basic rules that define key principles used in the HLA as well as the responsibilities and relationships among the components of an HLA federation. The HLA IFSpec provides a specification of the functional interfaces between HLA federates and the HLA runtime infrastructure. The HLA OMT provides a common presentation format for HLA simulation and federation object models. The paper provides a description of the development of the HLA, a technical description of the key elements of the architecture, and a discussion of HLA implementation, including HLA support processes and software.

109 citations

Journal ArticleDOI
TL;DR: This corrects the article DOI: 10.1038/srep44499.
Abstract: Increased interconnection between critical infrastructure networks, such as electric power and communications systems, has important implications for infrastructure reliability and security. Others have shown that increased coupling between networks that are vulnerable to internetwork cascading failures can increase vulnerability. However, the mechanisms of cascading in these models differ from those in real systems and such models disregard new functions enabled by coupling, such as intelligent control during a cascade. This paper compares the robustness of simple topological network models to models that more accurately reflect the dynamics of cascading in a particular case of coupled infrastructures. First, we compare a topological contagion model to a power grid model. Second, we compare a percolation model of internetwork cascading to three models of interdependent power-communication systems. In both comparisons, the more detailed models suggest substantially different conclusions, relative to the simpler topological models. In all but the most extreme case, our model of a "smart" power network coupled to a communication system suggests that increased power-communication coupling decreases vulnerability, in contrast to the percolation model. Together, these results suggest that robustness can be enhanced by interconnecting networks with complementary capabilities if modes of internetwork failure propagation are constrained.
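The simple topological contagion baseline that the paper argues against can be sketched in a few lines: each node may depend on a support node in the other network and fails once that support fails. The toy dependency map and seed failure below are invented for illustration; the paper's detailed models add power-flow and intelligent-control dynamics on top of this kind of topology.

```python
# Sketch of a percolation-style model of internetwork cascading failure:
# a node fails once the node it depends on (in the other network) fails.
# Dependency map and seed failure are illustrative only.

def cascade(dependency, seed_failures):
    """Return the set of all nodes failed after the cascade settles.

    dependency maps each node to the node in the other network it
    depends on; nodes absent from the map depend on nothing."""
    failed = set(seed_failures)
    frontier = list(seed_failures)
    while frontier:
        f = frontier.pop()
        for node, support in dependency.items():
            if support == f and node not in failed:
                failed.add(node)
                frontier.append(node)
    return failed

# Power node a1 depends on comm node b1; comm node b2 depends on a1.
dependency = {"a1": "b1", "b2": "a1", "a2": "b3"}
print(sorted(cascade(dependency, {"b1"})))  # -> ['a1', 'b1', 'b2']
```

In this purely topological picture, coupling can only spread failure; the paper's point is that richer models, where the communication layer also enables control during a cascade, can reverse that conclusion.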

109 citations


Authors

Showing all 4896 results

Name                     H-index   Papers   Citations
Sushil Jajodia           101       664      35556
Myles R. Allen           82        295      32668
Barbara Liskov           76        204      25026
Alfred D. Steinberg      74        295      20974
Peter T. Cummings        69        521      18942
Vincent H. Crespi        63        287      20347
Michael J. Pazzani       62        183      28036
David Goldhaber-Gordon   58        192      15709
Yeshaiahu Fainman        57        648      14661
Jonathan Anderson        57        195      10349
Limsoon Wong             55        367      13524
Chris Clifton            54        160      11501
Paul Ward                52        408      12400
Richard M. Fujimoto      52        290      13584
Bhavani Thuraisingham    52        563      10562
Network Information
Related Institutions (5)
IBM
253.9K papers, 7.4M citations

83% related

Hewlett-Packard
59.8K papers, 1.4M citations

83% related

Carnegie Mellon University
104.3K papers, 5.9M citations

83% related

George Mason University
39.9K papers, 1.3M citations

83% related

Georgia Institute of Technology
119K papers, 4.6M citations

82% related

Performance Metrics
No. of papers from the Institution in previous years
Year    Papers
2023    4
2022    10
2021    95
2020    139
2019    145
2018    132