
Showing papers by "Mitre Corporation" published in 1990


Journal ArticleDOI
TL;DR: In this paper, the authors estimate that about 11,000 Gt of carbon is stored in clathrates in ocean sediments and about 400 Gt under permafrost regions.
Abstract: Methane clathrates are stable at depths greater than about 200 m in permafrost regions and in ocean sediments at water depths greater than about 250 m, provided bottom waters are sufficiently cold. The thickness of the clathrate stability zone depends on surface temperature and geothermal gradient. Average stability zone thickness is about 400 m in cold regions where average surface temperatures are below freezing, 500 m in ocean sediments, and up to 1,500 m in regions of very cold surface temperature (<-15 °C) or in the deep ocean. The concentration of methane relative to water within the zone of stability determines whether or not clathrate will actually occur. The geologic setting of clathrate occurrences, the isotopic composition of the methane, and the methane to ethane plus propane ratio in both the clathrates and the associated pore fluids indicate that methane in clathrates is produced chiefly by anaerobic bacteria. Methane occurrences and the organic carbon content of sediments are the bases used to estimate the amount of carbon currently stored as clathrates. The estimates of about 11,000 Gt of carbon for ocean sediments and about 400 Gt for sediments under permafrost regions are in rough accord with an independent estimate by Kvenvolden of 10,000 Gt. The shallowness of the clathrate zone of stability makes clathrates vulnerable to surface disturbances. Warming by ocean flooding of exposed continental shelf, and changes in pressure at depth, caused, for example, by sea-level drop, destabilize clathrates under the ocean, while ice-cap growth stabilizes clathrates under the ice cap. The time scale for thermal destabilization is set by the thermal properties of sediments and is on the order of thousands of years. The time required to fix methane in clathrates as a result of surface cooling is much longer, requiring several tens of thousands of years. The sensitivity of clathrates to surface change, the time scales involved, and the large quantities of carbon stored as clathrate indicate that clathrates may have played a significant role in modifying the composition of the atmosphere during the ice ages. The release of methane and its subsequent oxidation to carbon dioxide may be responsible for the observed swings in atmospheric methane and carbon dioxide concentrations during glacial times. Because methane and carbon dioxide are strong infrared absorbers, the release and trapping of methane by clathrates contribute strong feedback mechanisms to the radiative forcing of climate that results from Earth's orbital variations.

273 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: By creating a burst image from the original document image, the processing time of the Hough transform can be reduced by a factor of as much as 7.4 for documents with gray-scale images, and interline spacing can be determined more accurately.
Abstract: As part of the development of a document image analysis system, a method, based on the Hough transform, was devised for the detection of document skew and interline spacing, necessary parameters for the automatic segmentation of text from graphics. Because the Hough transform is computationally expensive, the amount of data within a document image is reduced through the computation of its horizontal and vertical black runlengths. Histograms of these runlengths are used to determine whether the document is in portrait or landscape orientation. A gray scale burst image is created from the black runlengths that are perpendicular to the text lines by placing the length of the run in the run's bottom-most pixel. By creating a burst image from the original document image, the processing time of the Hough transform can be reduced by a factor of as much as 7.4 for documents with gray-scale images. Because only small runlengths are input to the Hough transform and because the accumulator array is incremented by the runlength associated with a pixel rather than by a factor of 1, the negative effects of noise, black margins, and figures are avoided. Consequently, interline spacing can be determined more accurately.
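
The burst-image construction is simple enough to sketch. The fragment below is an illustrative Python/NumPy rendering, not the authors' implementation; the binarized array layout (1 = black), the 10-pixel run cutoff, and the ±5° angular search range are assumptions.

```python
import numpy as np

def vertical_burst_image(img):
    """Replace each vertical black run with its length, stored in the
    run's bottom-most pixel; every other pixel becomes zero."""
    h, w = img.shape
    burst = np.zeros((h, w), dtype=np.int32)
    for x in range(w):
        run = 0
        for y in range(h):
            if img[y, x]:
                run += 1
            elif run:
                burst[y - 1, x] = run   # bottom-most pixel of the run
                run = 0
        if run:
            burst[h - 1, x] = run
    return burst

def hough_skew(burst, max_run=10, angles=np.deg2rad(np.arange(-5, 5.25, 0.25))):
    """Vote in (theta, rho) space, incrementing by the run length rather
    than by 1; only short runs (text strokes) are allowed to vote."""
    ys, xs = np.nonzero(burst)
    weights = burst[ys, xs]
    keep = weights <= max_run            # discard long runs from figures/margins
    ys, xs, weights = ys[keep], xs[keep], weights[keep]
    rho_max = int(np.hypot(*burst.shape)) + 1
    acc = np.zeros((len(angles), 2 * rho_max), dtype=np.int64)
    for i, t in enumerate(angles):
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + rho_max
        np.add.at(acc[i], rho, weights)  # weighted accumulator votes
    return np.rad2deg(angles[acc.max(axis=1).argmax()])
```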

263 citations


Proceedings ArticleDOI
07 May 1990
TL;DR: An analysis of some recent combinatorial theories of computer security is presented from the perspective of information theory; the theories analyzed are intended to be applicable to nondeterministic systems that may be networked.
Abstract: An analysis of some recent combinatorial theories of computer security is presented from the perspective of information theory. The theories analyzed are information-flow theories based on the concept of nondeducibility. They are intended to be applicable to nondeterministic systems that may be networked.

262 citations


Proceedings ArticleDOI
16 Jun 1990
TL;DR: A rule-based system for automatically segmenting a document image into regions of text and nontext is presented and allows easy fine tuning of the algorithmic steps to produce robust rules, to incorporate additional tools (as they become available), and to handle special segmentation needs.
Abstract: A rule-based system for automatically segmenting a document image into regions of text and nontext is presented. The initial stages of the system perform image enhancement functions such as adaptive thresholding, morphological processing, and skew detection and correction. The image segmentation process consists of smearing the original image via the run length smoothing algorithm, calculating the connected components locations and statistics, and filtering (segmenting) the image based on these statistics. The text regions can be converted (via an optical character reader) to a computer-searchable form, and the nontext regions can be extracted and preserved. The rule-based structure allows easy fine tuning of the algorithmic steps to produce robust rules, to incorporate additional tools (as they become available), and to handle special segmentation needs.
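
The run length smoothing step at the heart of the segmentation pipeline is easy to illustrate. A minimal sketch, assuming a binarized image (1 = black) and illustrative gap thresholds rather than the paper's tuned rule parameters:

```python
import numpy as np

def smear_rows(img, gap):
    """Blacken any horizontal white run shorter than `gap` that lies
    between two black pixels."""
    out = img.copy()
    for row in out:
        black = np.flatnonzero(row)
        if black.size < 2:
            continue
        for a, b in zip(black[:-1], black[1:]):
            if b - a - 1 < gap:
                row[a:b] = 1
    return out

def rlsa(img, h_gap=30, v_gap=15):
    """Smear horizontally and vertically, then AND the results: the classic
    RLSA combination applied before connected-component analysis."""
    horiz = smear_rows(img, h_gap)
    vert = smear_rows(img.T, v_gap).T
    return horiz & vert
```

Connected components of the smeared image then yield the block statistics (size, aspect ratio, black density) on which the segmentation rules operate.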

158 citations


Proceedings Article
Marc Vilain1
29 Jul 1990
TL;DR: This paper is concerned with making precise the notion that recognizing plans is much like parsing text, and establishes a correspondence between Kautz' plan recognition formalism and existing grammatical frameworks.
Abstract: This paper is concerned with making precise the notion that recognizing plans is much like parsing text. To this end, it establishes a correspondence between Kautz' plan recognition formalism and existing grammatical frameworks. This mapping helps isolate subsets of Kautz' formalism in which plan recognition can be efficiently performed by parsing.

119 citations


Journal ArticleDOI
A. J. Broder1
TL;DR: It is shown that incremental search can be implemented as a sequence of invocations of a previously published non-incremental algorithm, and a new incremental search algorithm is presented which finds the next nearest neighbor more efficiently by eliminating redundant computations.

87 citations


Journal ArticleDOI
TL;DR: A unified discrete channel model from the information source up to the sampler was developed for fading multipath channels and it is shown that the effects of channel measurement noise are less damaging for the decision-directed adaptation technique as compared to any kind of reference- directed adaptation.
Abstract: A unified discrete channel model from the information source up to the sampler was developed for fading multipath channels. Different methods for adaptive channel measurement were studied. The performance of a discrete matched filter using different adaptation techniques and working over a troposcatter channel is predicted. It is shown that the effects of channel measurement noise are less damaging for the decision-directed adaptation technique as compared to any kind of reference-directed adaptation.
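
The contrast between the two adaptation modes can be sketched with a standard LMS tap update. The fragment below is illustrative only: the three-tap channel, BPSK signaling, filter length, and step size are arbitrary assumptions, not the paper's troposcatter model.

```python
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)          # transmitted BPSK
channel = np.array([0.9, 0.4, 0.2])                   # toy multipath
received = np.convolve(symbols, channel)[:len(symbols)]
received += 0.1 * rng.standard_normal(len(symbols))   # measurement noise

taps = np.zeros(5)
mu = 0.01
for n in range(len(taps), len(symbols)):
    window = received[n - len(taps):n][::-1]
    y = taps @ window
    decision = 1.0 if y >= 0 else -1.0   # decision-directed reference
    # reference-directed mode would use the known transmitted symbol here instead
    taps += mu * (decision - y) * window # LMS tap update
```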

77 citations


Proceedings ArticleDOI
Allison Mankin1
01 Aug 1990
TL;DR: A surprising result was that the Random Drop algorithm performed worse in a topology with a single gateway bottleneck than in those with multiple bottlenecks; the experiments also showed that local traffic is affected by events at distant gateways.
Abstract: Gateways in very high speed internets will need to have low processing requirements and rapid responses to congestion. This has prompted a study of the performance of the Random Drop algorithm for congestion recovery. It was measured in experiments involving local and long distance traffic using multiple gateways. For the most part, Random Drop did not improve the congestion recovery behavior of the gateways. A surprising result was that its performance was worse in a topology with a single gateway bottleneck than in those with multiple bottlenecks. The experiments also showed that local traffic is affected by events at distant gateways.
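
The gateway discipline itself is a one-line policy: on overflow, discard a randomly chosen queued packet rather than the arrival. A minimal sketch, with the queue capacity and the returned drop signal as assumptions:

```python
import random

class RandomDropGateway:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = []

    def enqueue(self, packet):
        """Accept the arrival; on overflow, evict a random queued packet."""
        self.queue.append(packet)
        if len(self.queue) > self.capacity:
            victim = random.randrange(len(self.queue))
            return self.queue.pop(victim)   # congestion signal to that flow
        return None
```

Because a flow with more packets in the queue is proportionally more likely to lose one, Random Drop was expected to penalize heavy flows fairly; the experiments above show that this expectation does not always hold.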

75 citations


Proceedings ArticleDOI
12 Jun 1990
TL;DR: The author further delineates and improves the evidence that nondeducibility on strategies is a respectable candidate for a definition of security against information compromise, at least for the class of systems that can be modeled as synchronized state machines.
Abstract: The author further delineates and improves the evidence that nondeducibility on strategies is a respectable candidate for a definition of security against information compromise, at least for the class of systems that can be modeled as synchronized state machines. First, the author confirms the thesis of J.T. Wittbold and D.M. Johnson (1990) that nondeducibility on strategies is stronger than the notion of nondeducibility on inputs, defined by D. Sutherland (1986), which is generally viewed as a minimum requirement for security. Second, it is shown that nondeducibility on strategies is preserved when two machines that are secure by this definition are hooked up arbitrarily, even when loops are created by the interconnection. In order to make these more general hookups possible, it is necessary to generalize the definition of a synchronized state machine.

63 citations


Book ChapterDOI
01 Jul 1990
TL;DR: The logic of IMPS is based on a version of simple type theory with partial functions and subtypes; the system provides relatively large primitive inference steps to facilitate human control of the deductive process and human comprehension of the resulting proofs.
Abstract: IMPS is an Interactive Mathematical Proof System intended as a general purpose tool for formulating and applying mathematics in a familiar fashion. The logic of IMPS is based on a version of simple type theory with partial functions and subtypes. Mathematical specification and inference are performed relative to axiomatic theories, which can be related to one another via inclusion and theory interpretation. IMPS provides relatively large primitive inference steps to facilitate human control of the deductive process and human comprehension of the resulting proofs. An initial theory library containing almost a thousand repeatable proofs covers significant portions of logic, algebra and analysis, and provides some support for modeling applications in computer science.

61 citations


Journal ArticleDOI
TL;DR: The primary goal of the MITRE compartmented mode workstation (CMW) project was to articulate the security requirements that workstations must meet to process highly classified intelligence data, and a prototype was implemented which demonstrated that workstations could meet the requirements in an operationally useful manner while still remaining binary compatible with off-the-shelf software.
Abstract: The primary goal of the MITRE compartmented mode workstation (CMW) project was to articulate the security requirements that workstations must meet to process highly classified intelligence data. As a basis for the validity of the requirements developed, a prototype was implemented which demonstrated that workstations could meet the requirements in an operationally useful manner while still remaining binary compatible with off-the-shelf software. The security requirements not only addressed traditional security concerns but also introduced concepts in areas such as labeling and the use of a trusted window management system. The CMW labeling paradigm is based on associating two types of security labels with objects: sensitivity levels and information labels. Sensitivity levels describe the levels at which objects must be protected. Information labels are used to prevent data overclassification and also provide a mechanism for associating with data those markings that are required for accurate data labeling, but which play no role in access control decisions. The use of a trusted window manager allows users to easily operate at multiple sensitivity levels and provides a convenient mechanism for communicating security information to users in a relatively unobtrusive manner.
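
The two label types can be sketched as data with the usual lattice operations: a dominance test for sensitivity levels (used in access control) and a join for information labels (which float upward as data are combined). The level names, compartments, and tuple encoding below are invented for illustration.

```python
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def dominates(a, b):
    """Sensitivity level a = (level, compartments) dominates b if its level
    is at least b's and it carries all of b's compartments."""
    return LEVELS[a[0]] >= LEVELS[b[0]] and a[1] >= b[1]

def info_join(a, b):
    """Information labels float: combined data takes the higher level and
    the union of the markings."""
    level = a[0] if LEVELS[a[0]] >= LEVELS[b[0]] else b[0]
    return (level, a[1] | b[1])

secret_nato = ("SECRET", frozenset({"NATO"}))
conf_plain = ("CONFIDENTIAL", frozenset())
assert dominates(secret_nato, conf_plain)
print(info_join(secret_nato, conf_plain))   # ('SECRET', frozenset({'NATO'}))
```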

Journal ArticleDOI
01 Jan 1990
TL;DR: The requirements imposed on automated planning systems by battle management are examined, a planning approach that meets these requirements is described, and an architecture for investigating adversarial aspects of battle planning is presented.
Abstract: Adversarial planning for battle management is discussed. Conventional artificial intelligence planning approaches fail to address problems that arise. These include an unpredictable and dynamic environment; control of several semiautonomous intelligent agents; the need to adjust plans dynamically according to developments during plan execution; and the need to consider the presence of an adversary in devising plans. The requirements imposed on automated planning systems by battle management are examined, a planning approach that meets these requirements is described, and an architecture for investigating adversarial aspects of battle planning is presented.

Proceedings ArticleDOI
07 May 1990
TL;DR: A real-time feasible track-before-detect process for a scanning pulsed Doppler airborne-type radar is described, which sequentially optimizes smoothed rank as the field of view is scanned.
Abstract: A real-time feasible track-before-detect process for a scanning pulsed Doppler airborne-type radar is described. Robust, distribution-free ranking is applied in range separately for each Doppler. A dynamic programming process is used to select a best tentative track passing through each range, azimuth, and Doppler cell. The process sequentially optimizes smoothed rank as the field of view is scanned. Results of a simulation are presented for a system with multiple bursts at a given pulse repetition frequency. Track detections are obtained with probability 0.5 at a signal-to-noise ratio of about 5 dB for a Rician signal-plus-noise model as measured at the Doppler outputs. The corresponding false track generation rate is estimated to be 5×10^-5 per processed resolution cell.
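
The dynamic-programming recursion can be stated compactly: each cell's accumulated score is its own rank statistic plus the best score among admissible predecessor cells in the previous scan. The sketch below is a simplification; the ±1 range-cell transition, wrap-around edge handling, and uniform random ranks are assumptions.

```python
import numpy as np

def tbd_scores(rank_maps, reach=1):
    """rank_maps: list of 2-D arrays (range x Doppler), one per scan.
    Returns the accumulated best-track score per cell."""
    score = rank_maps[0].astype(float)
    for ranks in rank_maps[1:]:
        best_prev = np.full_like(score, -np.inf)
        for dr in range(-reach, reach + 1):     # admissible range motion
            shifted = np.roll(score, dr, axis=0)  # wrap-around simplification
            np.maximum(best_prev, shifted, out=best_prev)
        score = ranks + best_prev
    return score   # threshold this to declare track detections

scans = [np.random.default_rng(s).random((64, 16)) for s in range(10)]
print(tbd_scores(scans).max())
```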

Patent
27 Jul 1990
TL;DR: In this article, a closed feedback system for issuing ground delays in air traffic control is disclosed, whereby the demand of arriving air traffic is constantly monitored and the ground delays are adjusted in real time to account for the demand that has not materialized.
Abstract: A closed feedback system for issuing ground delays in air traffic control is disclosed, whereby the demand of arriving air traffic is constantly monitored and the ground delays are adjusted in real time to account for the demand that has not materialized. The departure messages from air traffic control centers are continuously monitored to determine if flights are departing on time. If a flight does not depart within a specified time interval, it is considered cancelled or delayed due to company reasons and the arrival slot is assigned to a flight that can use that slot. The slot vacated by the reassigned flight is assigned to another flight and so forth until all slots are filled. Provision is made for airlines to inform air traffic control of company delays and cancellations so that departure times can be assigned that the flight can meet. This scheme significantly improves the efficiency of operation and reduces the possibility of flights being given ground delays when the capacity at the airport is not fully utilized.
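
The cascading reassignment loop at the core of the scheme can be sketched directly. The flight and slot encodings below are invented for illustration; note that the loop terminates because each vacated slot is strictly later than the previous one.

```python
def cascade(assignments, vacated, earliest):
    """assignments: flight -> arrival slot time; vacated: slot time now free;
    earliest: flight -> earliest feasible arrival time. Give the vacated slot
    to the earliest-slotted flight that can use it, then cascade that
    flight's old slot, until no remaining flight can move up."""
    while True:
        movers = [f for f, s in assignments.items()
                  if s > vacated and earliest[f] <= vacated]
        if not movers:
            return
        mover = min(movers, key=lambda f: assignments[f])
        vacated, assignments[mover] = assignments[mover], vacated

assign = {"AA1": 10, "UA2": 12, "DL3": 15}
early = {"AA1": 9, "UA2": 8, "DL3": 11}
cascade(assign, vacated=8, earliest=early)   # e.g. after a cancellation
print(assign)   # UA2 moves up to 8, its slot 12 goes to DL3
```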

Journal ArticleDOI
TL;DR: It is shown, using results about infinite rewritings of trees, that term graph rewriting with arbitrary structure sharing (including redex capturing) is sound for left-linear term rewrite systems.
Abstract: Term graphs are a natural generalization of terms in which structure sharing is allowed. Structure sharing makes term graph rewriting a time- and space-efficient method for implementing term rewrite systems. Certain structure sharing schemes can lead to a situation in which a term graph component is rewritten to another component that contains the original. This phenomenon, called redex capturing, introduces cycles into the term graph which is being rewritten—even when the graph and the rule themselves do not contain cycles. In some applications, redex capturing is undesirable, such as in contexts where garbage collectors require that graphs be acyclic. In other applications, for example in the use of the fixed-point combinator Y, redex capturing acts as a rewriting optimization. We show, using results about infinite rewritings of trees, that term graph rewriting with arbitrary structure sharing (including redex capturing) is sound for left-linear term rewrite systems.

Journal ArticleDOI
TL;DR: In this paper, the effects of sodium sulfide solutions at pH 9.2 on fresh fracture and oxidized fracture surfaces of chalcopyrite and pyrite have been investigated by X-ray photoelectron spectroscopy.

Patent
Francis A. Fay1
09 May 1990
TL;DR: An antenna including a lens that has an array of radiating elements located on a focal arc, each radiating element corresponding to a different transmission beam direction; and a beam launcher having a plurality of phased arrays and internal probes, each of the plurality of internal probes being electrically coupled to a corresponding one of the radiating elements.
Abstract: An antenna including a lens that has an array of radiating elements located on a focal arc, each radiating element corresponding to a different transmission beam direction; and a beam launcher having a plurality of phased arrays and a plurality of internal probes, each of the plurality of internal probes being electrically coupled to a corresponding one of the radiating elements, the phased arrays for space feeding a selected one or more of the radiating elements with signals so as to generate corresponding transmission beams from the lens.

Patent
24 Jan 1990
TL;DR: In this paper, a method and apparatus for decohering coherent light projected onto a screen by transmitting the coherent light through a series of light conducting optical fibers, preferably of varying length, is described.
Abstract: The invention comprises a method and apparatus for decohering coherent light projected onto a screen by transmitting the coherent light through a series of light conducting optical fibers, preferably of varying length.

Journal ArticleDOI
TL;DR: The mainframe processed incoming data and displayed it to the flight controllers; however, as the authors discuss, it performed few functions to convert raw data into information, leaving the job of data analysis to the flight controllers.
Abstract: Perhaps one of the most powerful symbols of the United States' technological prowess is the Mission Control Center (MCC) at the Lyndon B. Johnson Space Center in Houston. The rooms at Mission Control have been witness to major milestones in the history of American technology such as the first lunar landing, the rescue of Skylab, and the first launch of the Space Shuttle. When Mission Control was first activated in the early 1960s, it was truly a technological marvel. This facility, however, has received only modest upgrades since the Apollo program. Until recently it maintained a mainframe-based architecture that displayed data and left the job of data analysis to flight controllers. The display technology utilized in this system was monochrome and primarily displayed text information with limited graphics (photo 1). An example display of 250 communication parameters is shown in Figure 1. The mainframe processed incoming data and displayed it to the flight controllers; however, it performed few functions to convert raw data into information. The job of converting data into information upon which flight decisions could be made was performed by the flight controllers. In some cases, where additional computational support was required, small offline personal computers were added to the complex. Flight controllers visually copied data off the console display screens, and manually entered the data into the small personal computers where offline analysis could be performed. Although this system was technologically outdated, it contained years of customizing efforts and served NASA well through the early Space Shuttle program. Several factors are now driving NASA to change the architecture of Mission Control to accommodate advanced automation. First is the requirement to support an increased flight rate without major growth in the number of personnel assigned to flight control duties. A second major concern is loss of corporate knowledge due to the unique bimodal age distribution of NASA staff. Hiring freezes between the Apollo and Shuttle programs have resulted in NASA being composed of two primary groups. Approximately half of NASA consists of Apollo veterans within five years of retirement. The other half consists of personnel under the age of 35 with Shuttle-only experience. NASA considers it highly desirable to capture the corporate knowledge of the Apollo veterans in knowledge-based systems before they retire. Because the mainframe complex is primarily oriented to data display, it is a poor environment for capturing and utilizing knowledge. These factors have resulted in aggressive efforts by NASA's Mission Operations Directorate to utilize the following: a distributed system of Unix engineering-class workstations to run a mix of online real-time expert systems, and traditional automation to allow flight controllers to perform more tasks and to capture the corporate knowledge of senior personnel. Starting with the first flight of the Space Shuttle after the Challenger accident, the Real-Time Data System (RTDS) has played an increasingly significant role in the flight-critical decision-making process.

Patent
07 Sep 1990
TL;DR: In this article, an associative memory is defined that finds the location of at least one string of characters in the associative memory that matches a string of characters presented sequentially as an input to the memory.
Abstract: An associative memory that finds the location of at least one string of characters in the associative memory that matches a string of characters presented sequentially as an input to the associative memory. The string of characters in the associative memory, the input string of characters, or both may include a specially marked character, or set of characters, that acts as a "variable indicator." The specially marked character, or set of characters, will "match" a portion of the other string. A flag is set in the associative memory at either the starting locations or the ending locations of the matching strings. Flags are provided only at locations of stored matching strings of characters found within a selected addressable area or areas. Each flag can be moved from a first byte to a second byte in the associative memory that has a predetermined location relative to the first byte. A selection circuit selects one of the matching stored strings of characters by enabling a test signal which selects one of the flags to propagate through the associative memory circuit in a daisy-chain manner. The daisy-chain path is segmented in order to decrease the propagation time of the test signal. A summation circuit, useful in neural network applications, adds a number presented as at least one input byte to the associative memory to a number stored as at least one byte in the associative memory at the location of a stored string of characters that matches the input string.
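
A software analogue of the match-flag behavior is easy to sketch: slide the input over the stored contents and flag each ending location of a match, with a designated wildcard standing in for the variable indicator. The '?' marker and string encoding are assumptions; the hardware, of course, evaluates all windows in parallel.

```python
def match_flags(memory, pattern, wildcard="?"):
    """Return a flag per memory position, set at each match's ending location."""
    flags = [False] * len(memory)
    m = len(pattern)
    for end in range(m - 1, len(memory)):
        window = memory[end - m + 1:end + 1]
        if all(p == wildcard or p == c for p, c in zip(pattern, window)):
            flags[end] = True
    return flags

mem = "XCATXXCARX"
print([i for i, f in enumerate(match_flags(mem, "CA?")) if f])  # [3, 8]
```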

Patent
26 Jan 1990
TL;DR: In this article, the authors proposed a microburst detection and measurement system having a plurality of SENSTRANS units positioned at spaced intervals, each SEN STRANS unit monitoring humidity, wind speed, wind direction, barometric pressure, and temperature.
Abstract: The invention comprises a microburst detection and measurement system having a plurality of SENSTRANS units positioned at spaced intervals, each SENSTRANS unit monitoring humidity, wind speed, wind direction, barometric pressure, and temperature. The sensors transmit the observed weather conditions to a central data processing and display unit via RF, FM transmission. The data processing and display unit collects and assimilates the data to determine if the measured parameters indicate the likely presence of microburst or other weather hazards. If so, it issues a weather hazard warning both visually and audibly.

Journal ArticleDOI
TL;DR: A systolic architecture is presented which is capable of both input and output conversion with a throughput equal to that of the fast residue number system (RNS) processes of addition and multiplication.
Abstract: A systolic architecture is presented which is capable of both input and output conversion with a throughput equal to that of the fast residue number system (RNS) processes of addition and multiplication. The converter can be used with an arbitrary RNS (within certain realization-imposed limits). An anticipated VLSI layout is described that will be programmable for RNSs with up to eight moduli of six bits or less. This should provide an off-the-shelf solution for many RNS conversion requirements.
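
Input and output conversion themselves are compact to state in software, even though the paper's contribution is a systolic VLSI realization. A sketch with illustrative pairwise-coprime moduli:

```python
from math import prod

MODULI = (5, 7, 9, 11)          # pairwise coprime, each six bits or less

def to_rns(x):
    """Input conversion: reduce modulo each modulus."""
    return tuple(x % m for m in MODULI)

def from_rns(residues):
    """Output conversion: Chinese remainder reconstruction."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)    # modular inverse of Mi mod m
    return x % M

# RNS multiplication is digit-wise and carry-free:
a, b = 123, 321
c = tuple((ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), MODULI))
assert from_rns(c) == (a * b) % prod(MODULI)
```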

Journal ArticleDOI
TL;DR: In this paper, a class of periodic binary sequences that are obtained from q-ary m-sequences is defined, and a general method to determine their linear spans (the length of the shortest linear recursion over the Galois field GF(2) satisfied by the sequence) is described.
Abstract: A class of periodic binary sequences that are obtained from q-ary m-sequences is defined, and a general method to determine their linear spans (the length of the shortest linear recursion over the Galois field GF(2) satisfied by the sequence) is described. The results imply that the binary sequences under consideration have linear spans that are comparable with their periods, which can be made very long. One application of the results shows that the projective and affine hyperplane sequences of odd order both have full linear span. Another application involves the parity sequence of order n, which has period p^m - 1, where p is an odd prime. The linear span of a parity sequence of order n is determined in terms of the linear span of a parity sequence of order 1, and this leads to an interesting open problem involving primes.
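
The linear span used here is precisely the quantity the Berlekamp-Massey algorithm computes. As a reference point, a compact GF(2) version of that standard algorithm (not from the paper):

```python
def linear_span_gf2(seq):
    """Length of the shortest linear recursion over GF(2) fitting seq."""
    c, b = [1], [1]          # current and previous connection polynomials
    L, m = 0, -1
    for n, s in enumerate(seq):
        d = s                 # discrepancy between prediction and bit
        for i in range(1, L + 1):
            d ^= c[i] & seq[n - i]
        if d:                 # update the recursion
            t = c[:]
            shift = n - m
            c = c + [0] * (len(b) + shift - len(c))
            for i, bi in enumerate(b):
                c[i + shift] ^= bi
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L

# e.g. the m-sequence of period 7 from x^3 + x + 1 has linear span 3:
print(linear_span_gf2([1, 0, 0, 1, 1, 1, 0] * 2))   # 3
```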

Proceedings ArticleDOI
03 Jun 1990
TL;DR: The queuing behavior of many communication systems is well modeled by a queuing system in which time is slotted and in which the number of entities arriving during a slot is dependent upon the state of a discrete-time, discrete-state Markov chain.
Abstract: The queuing behavior of many communication systems is well modeled by a queuing system in which time is slotted and in which the number of entities arriving during a slot is dependent upon the state of a discrete-time, discrete-state Markov chain. The probability generating function is presented for joint and marginal buffer occupancy distributions of statistical time-division multiplexing systems in this class. A simple technique is discussed for obtaining moments of the queue length distribution. In addition, a discussion is presented of inversion of the probability generating function. Numerical results, including queue length distributions for some special cases, are presented.
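
The model is also easy to simulate directly, which is a useful sanity check on the analytic distributions. The sketch below assumes a two-state chain, Bernoulli per-slot arrivals, and one service per slot; all of these are illustrative choices, not the paper's parameters.

```python
import random

P = {0: {0: 0.95, 1: 0.05},      # bursty two-state Markov chain
     1: {0: 0.20, 1: 0.80}}
ARRIVAL_PROB = {0: 0.2, 1: 0.9}  # per-slot arrival probability by state

def simulate(slots, seed=1):
    rng = random.Random(seed)
    state, queue, total = 0, 0, 0
    for _ in range(slots):
        queue += rng.random() < ARRIVAL_PROB[state]   # 0/1 arrival
        queue = max(queue - 1, 0)                     # one service per slot
        total += queue
        state = 1 if rng.random() < P[state][1] else 0
    return total / slots                              # mean queue length

print(simulate(100_000))
```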

Journal ArticleDOI
TL;DR: Hydroxyurea causes necrosis in proliferating tissues of rabbit embryos on gestational day 12, and co-administration of the antioxidant propyl gallate delays the onset of necrosis until 6 h and ameliorates the teratogenic effects seen at term.

Proceedings ArticleDOI
13 May 1990
TL;DR: A redundancy memory architecture has been developed that increases system memory reliability without incurring the memory access speed degradation or size impact that results from using error-correction coding or spare-swapping techniques.
Abstract: A redundancy memory architecture has been developed that increases system memory reliability without incurring the memory access speed degradation or size impact that results from using error-correction coding or spare-swapping techniques. The architecture uses a small associative cache memory to provide redundant memory locations. Logic is provided to perform memory system testing and remapping of faulty memory locations. A VLSI circuit has been developed that incorporates the features of the architecture as a proof-of-concept demonstration. This device has been designated the memory reliability enhancement peripheral (MREP).
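
In software terms, the remapping idea reduces to a small associative table consulted on every access. A minimal sketch; the table structure and fault-marking interface are invented for illustration.

```python
class RemappedMemory:
    def __init__(self, size, spares):
        self.main = [0] * size
        self.spare = [0] * spares
        self.remap = {}                 # faulty address -> spare index

    def mark_faulty(self, addr):
        """Redirect a failed word into the next free spare location."""
        if len(self.remap) >= len(self.spare):
            raise RuntimeError("out of spare locations")
        self.remap[addr] = len(self.remap)

    def read(self, addr):
        i = self.remap.get(addr)        # associative lookup on every access
        return self.spare[i] if i is not None else self.main[addr]

    def write(self, addr, value):
        i = self.remap.get(addr)
        if i is not None:
            self.spare[i] = value
        else:
            self.main[addr] = value
```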

Journal ArticleDOI
TL;DR: The quadratic span of a periodic binary sequence is defined to be the length of the shortest quadratic feedback shift register (FSR) that generates it.
Abstract: The quadratic span of a periodic binary sequence is defined to be the length of the shortest quadratic feedback shift register (FSR) that generates it. An algorithm for computing the quadratic span of a binary sequence is described. The required increase in quadratic span is determined for the special case of when a discrepancy occurs in a linear FSR that generates an initial portion of a sequence. The quadratic spans of binary DeBruijn sequences are investigated. An upper bound for the quadratic span of a DeBruijn sequence of span n is given; this bound is attained by the class of DeBruijn sequences obtained from m-sequences. It is easy to see that a lower bound is n+1, but a lower bound of n+2 is conjectured. The distributions of quadratic spans of DeBruijn sequences of span 3, 4, 5 and 6 are presented.
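
The m-sequence construction that attains the upper bound is the classical one: insert an extra 0 into the unique run of n-1 zeros of an m-sequence of span n to obtain a DeBruijn sequence. A sketch for n = 3, where the LFSR taps (a primitive polynomial) are an assumption:

```python
def m_sequence(n=3, taps=(1, 3), length=7):   # taps for x^3 + x + 1
    state = [1] + [0] * (n - 1)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[-t]
        state = [fb] + state[:-1]
    return out

seq = m_sequence()                            # [0, 0, 1, 1, 1, 0, 1], period 7
zeros = "".join(map(str, seq * 2)).find("00") # locate the run of n-1 = 2 zeros
debruijn = seq[:zeros] + [0] + seq[zeros:]
print(debruijn)   # every 3-bit window occurs exactly once per period
```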

Patent
19 Dec 1990
TL;DR: Paper notes used as a monetary currency are deuterated, with the deuteration occurring in the cellulose fibers forming the currency; to resist an exchange of hydrogen atoms for deuterium atoms, the deuterium can be incorporated into synthetic cellulose, which is then blended with natural, non-deuterated fibers to form the paper.
Abstract: Paper notes used as a monetary currency are deuterated. The level of deuteration, while not complete, is high. For U.S. currency the level of deuteration is at least 0.1 mg of deuterium for each one dollar in value of the currency note, and preferably at least 0.3 mg. Use of X-ray or gamma ray interrogation with a beam energy above 2 MeV produces a nuclear reaction releasing a neutron from the deuterium nucleus. If the currency is in large concentrations, e.g. $100,000 or more, the neutrons emitted by this reaction are reliably detectable. The deuteration occurs in the cellulose fibers forming the currency. To resist an exchange of hydrogen atoms for deuterium atoms, the deuterium atoms can be used in the formation of synthetic cellulose where the deuterium is more deeply buried within the cellulose molecule than in naturally occurring cellulose. The deuterated synthetic fibers are blended with natural, non-deuterated fibers to form the paper. The currency can also include a mechanism, such as dye, to signal attempts to use solvents or otherwise facilitate any such hydrogen substitution.

Journal ArticleDOI
TL;DR: The results suggest that the antioxidant properties of these substances interfere with the rapidly occurring toxic effects of HU and that this may account for amelioration of HU developmental toxicity.

Journal ArticleDOI
TL;DR: This work defines and proves the correctness of combinator head reduction using the cyclic Y rule, and shows how to consider reduction with cycles as an optimization of reduction without cycles.
Abstract: Turner popularized a technique of Wadsworth in which a cyclic graph rewriting rule is used to implement reduction of the fixed point combinator Y. We examine the theoretical foundation of this approach. Previous work has concentrated on proving that graph methods are, in a certain sense, sound and complete implementations of term methods. This work is inapplicable to the cyclic Y rule, which is unsound in this sense since graph normal forms can exist without corresponding term normal forms. We define and prove the correctness of combinator head reduction using the cyclic Y rule; the correctness of normal reduction is an immediate consequence. Our proof avoids the use of infinite trees to explain cyclic graphs. Instead, we show how to consider reduction with cycles as an optimization of reduction without cycles.
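
The two rewriting rules are easy to contrast on explicit graph nodes. In the sketch below, the node representation is invented for illustration: the acyclic rule would rewrite Y f to a fresh application f (Y f), while the cyclic rule redirects the redex node, in place, into an application whose argument is the node itself.

```python
class App:
    """An application node in a term graph."""
    def __init__(self, fun, arg):
        self.fun, self.arg = fun, arg

Y, f = object(), object()   # opaque stand-ins for the combinator and argument

def cyclic_y_rewrite(node):
    """The cyclic Y rule: rewrite node = App(Y, f) in place to App(f, node)."""
    assert node.fun is Y
    node.fun, node.arg = node.arg, node

redex = App(Y, f)
cyclic_y_rewrite(redex)
assert redex.fun is f and redex.arg is redex   # a one-node cycle
```

The resulting one-node cycle is exactly the kind of graph with no corresponding term normal form: unwound, it denotes the infinite term f (f (f ...)), which is why term-based soundness arguments do not apply to this rule.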