
Showing papers by "Mitre Corporation" published in 1983


Journal ArticleDOI
TL;DR: This paper focuses on the development and testing of algorithms for solving the capacitated Chinese postman problem; extensive computational results are presented and analyzed.

310 citations


Journal ArticleDOI
TL;DR: There is a reasonably good correlation between the results of the cell transformation systems and in vivo carcinogenesis; however, the many deficiencies of the EPA Merged Carcinogen List preclude definitive comparisons.
Abstract: The literature on cell transformation by chemical carcinogens has been critically reviewed. This subject is highly relevant to carcinogenesis in vivo, because the phenotypic changes that are collectively referred to as cell transformation usually involve the acquisition of tumorigenicity on inoculation into suitable rodent hosts. The systems chosen for review fall into 3 categories: cell strains (cells with a limited lifespan); cell lines (cells with an unlimited lifespan); and oncogenic viral-chemical interactions involving cells (Fischer rat embryo cells expressing an endogenous retrovirus, mouse embryo cells expressing the AKR leukemia virus, chemical enhancement of simian adenovirus SA7 transformation of Syrian hamster or rat embryo cells). Of the entire literature reviewed, 117 papers have been accepted for data abstraction by pre-defined criteria; these include 41 references to cell strains, 40 to cell lines, and 38 to viral-chemical interactions involving cells. Because different systems have been reviewed, it would be meaningless to group all the compounds. The overall summary of the systems is as follows (many compounds have been tested in more than one system and, hence, are duplicated in these totals). (Chart: see text) In general, there is a reasonably good correlation between the results of the cell transformation systems and in vivo carcinogenesis. However, the many deficiencies of the EPA Merged Carcinogen List preclude definitive comparisons. Moreover, a number of 'false negatives' were obtained in systems that did not employ external metabolic activation. Further validation of all systems is required, but it seems very probable that several cell transformation systems will become valuable in assaying (with reasonable time and cost) the carcinogenic potential of environmental chemicals.

220 citations


Journal ArticleDOI
TL;DR: This article examines the theoretical requirements of the MDA model in the context of a realistic lending situation, illustrates the extent of bias when these theoretical assumptions are not fully met, and concludes that failure to rigorously meet all the theoretical assumptions of the statistical model may not be as critical as insuring that credit managers fully understand the limitations of these types of decision tools.
Abstract: Multiple discriminant analysis (MDA) is frequently used to develop statistical credit-scoring models for loan evaluation purposes. Current legislative efforts to insure that credit is being granted in a nondiscriminatory manner have focused considerable attention on the reliability of such models. This article examines the theoretical requirements of the MDA model in the context of a realistic lending situation and illustrates the extent of bias when these theoretical assumptions are not fully met. The article concludes that failure to rigorously meet all the theoretical assumptions of the statistical model may not be as critical as insuring that credit managers fully understand the limitations of these types of decision tools. Furthermore, the evidence indicates that statistical models other than multiple discriminant analysis are possibly more relevant to the credit-granting decision.

152 citations


Journal ArticleDOI
C. Cook, H. Marsh
TL;DR: The subject addressed here is the reliable detection, at the communications receiver, of the individual bits when interference is present, so that the information carried by the sequence of data bits can be recovered.
Abstract: Modern military communications systems are increasingly adopting the digital method of transmitting information. In a digital communications (or data) link, the information to be sent is represented by a sequence of electronic pulses. In the simplest form of digital signal transmission, these pulses are referred to as binary digits, or "bits." Each pulse, or bit, is the smallest amount of data that can be communicated, and the messages to be sent are composed of larger sets of these bits. The manner in which message information is imparted to the data bits is the subject of information modulation, a topic well covered in most basic texts on communications. The subject addressed here is the reliable detection, at the communications receiver, of the individual bits when interference is present, so that the information carried by the sequence of data bits can be recovered. The time duration of a data bit implies a minimum bandwidth capability for the communications link. Thus, it is important to understand the basic concept of bandwidth involved in any criteria for reliable reception.
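
As a standard illustration of the closing remark about bit duration and bandwidth (textbook background, not a result derived in the paper): for a binary link sending R_b bits per second, each bit occupies T_b = 1/R_b seconds, and the Nyquist minimum bandwidth is

```latex
R_b = \frac{1}{T_b}, \qquad B_{\min} = \frac{R_b}{2} = \frac{1}{2T_b},
```

so halving the bit duration doubles the bandwidth the link must support, which is why reliable-reception criteria are stated in terms of bandwidth.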

134 citations


Journal ArticleDOI
TL;DR: This work discusses the low-pass filter characteristics of the two-point central difference algorithm and derives the optimal step size for two types of human eye movement data.
Abstract: There are many algorithms for calculating derivatives. The two-point central difference algorithm is the simplest. Besides simplicity, the two most important characteristics of this algorithm are accuracy and frequency response. The frequency content of the data prescribes a lower limit on the sampling rate. The smoothness and accuracy of the data determine the optimal step size. We discuss the low-pass filter characteristics of this algorithm and derive the optimal step size for two types of human eye movement data. To calculate the velocity of fast (saccadic) eye movements, the algorithm should have a cutoff frequency of 74 Hz. For typical slow (smooth pursuit) eye movements, a step size of 25 or 50 ms is optimal.
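
A minimal sketch of the algorithm under discussion (the data, sampling interval, and step size below are placeholders; the 74 Hz and 25-50 ms figures above are the paper's results):

```python
import numpy as np

def central_difference(x, dt, k=1):
    """Two-point central difference: v[n] = (x[n+k] - x[n-k]) / (2*k*dt)."""
    v = np.full(len(x), np.nan)
    v[k:-k] = (x[2 * k:] - x[:-2 * k]) / (2 * k * dt)
    return v

# Magnitude response: |H(f)| = |sin(2*pi*f*k*dt)| / (k*dt), versus the ideal
# differentiator's 2*pi*f; the algorithm differentiates at low frequencies and
# attenuates high frequencies, so the step size k*dt trades accuracy for smoothing.
t = np.arange(0.0, 1.0, 0.001)                      # 1 kHz sampling (placeholder)
v = central_difference(np.sin(2 * np.pi * 5 * t), dt=0.001, k=25)
```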

110 citations


Journal ArticleDOI
Ames, Gasser, Schell
TL;DR: The security kernel approach described here directly addresses the size and complexity problem by limiting the protection mechanism to a small portion of the system; it is based on the reference monitor, an abstract notion adapted from the models of Butler Lampson.
Abstract: Providing highly reliable protection for computerized information has traditionally been a game of wits. No sooner are security controls introduced into systems than penetrators find ways to circumvent them. Security kernel technology provides a conceptual base on which to build secure computer systems, thereby replacing this game of wits with a methodical design process. The kernel approach is equally applicable to all types of systems, from general-purpose, multiuser operating systems to special-purpose systems such as communication processors, wherever the protection of shared information is a concern. Most computer installations rely solely on a physical security perimeter, protecting the computer and its users by guards, dogs, and fences. Communications between the computer and remote devices may be encrypted to geographically extend the security perimeter, but if only physical security is used, all users can potentially access all information in the computer system. Consequently, all users must be trusted to the same degree. When the system contains sensitive information that only certain users should access, we must introduce some additional protection mechanisms. One solution is to give each class of users a separate machine. This solution is becoming increasingly less costly because of declining hardware prices, but it does not address the controlled sharing of information among users. Sharing information within a single computer requires internal controls to isolate sensitive information. Continual efforts are being made to develop reliable internal security controls solely through tenacity and hard work. Unfortunately, these attempts have been uniformly unsuccessful for a number of reasons. The first is that the operating system and utility software are typically large and complex. The second is that no one has precisely defined the security provided by the internal controls. Finally, little has been done to ensure the correctness of the security controls that have been implemented. The security kernel approach described here directly addresses the size and complexity problem by limiting the protection mechanism to a small portion of the system. The second and third problems are addressed by clearly defining a security policy and then following a rigorous methodology that includes developing a mathematical model, constructing a precise specification of behavior, and coding in a high-level language. The security kernel approach is based on the concept of the reference monitor, an abstract notion adapted from the models of Butler Lampson. The reference monitor provides an underlying security theory for conceptualizing the idea of protection. In a reference monitor, all …
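
A toy illustration of the reference-monitor idea sketched above: every access by a subject to an object passes through one small, isolatable check against a defined policy. The labels and the two rules follow the Bell-LaPadula style of model this line of work built on; the code is illustrative, not the kernel's actual design.

```python
# Hypothetical security levels and a two-rule mandatory policy check.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def access_allowed(subject_level: str, object_level: str, mode: str) -> bool:
    """Mediate every reference: the essence of a reference monitor."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if mode == "read":                 # simple security property: no read up
        return s >= o
    if mode == "write":                # *-property: no write down
        return s <= o
    return False

assert access_allowed("secret", "confidential", "read")
assert not access_allowed("confidential", "secret", "read")    # read up denied
assert not access_allowed("secret", "confidential", "write")   # write down denied
```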

110 citations


Proceedings ArticleDOI
12 Dec 1983
TL;DR: An implementation of the proposed algorithm is applied to the TSP for networks of various sizes; the results show the algorithm to be inferior to several well-known heuristics in terms of both solution quality and computer time expended.
Abstract: In recent papers by Kirkpatrick et al. (1982, 1983), an analogy between the statistical mechanics of large multivariate physical systems and combinatorial optimization is presented and used to develop a general strategy for solving discrete optimization problems. The method relies on probabilistically accepting intermediate increases in the objective function through a set of user-controlled parameters. It is argued that by taking such controlled uphill steps, from time to time, a high quality solution can be found in a moderate amount of computer time. This paper applies an implementation of the proposed algorithm to the TSP for various size networks. The results show the algorithm to be inferior to several well-known heuristics in terms of both solution quality and computer time expended. In addition, set-up time for parameter selection constitutes a major burden for the user. Sensitivity of the algorithm to changes in stopping rules and parameter selection is demonstrated through extensive computational experiments.
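
A minimal sketch of the Kirkpatrick-style annealing heuristic being evaluated; the cooling schedule, move set, and parameter values are placeholders, and tuning them is precisely the set-up burden the paper reports.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, T0=100.0, alpha=0.95, moves_per_T=200, T_min=1e-3):
    """Accept uphill moves with probability exp(-delta/T); T0, alpha, and the
    stopping rule are the user-controlled parameters the abstract mentions."""
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    T = T0
    while T > T_min:
        for _ in range(moves_per_T):
            i, j = sorted(random.sample(range(n), 2))
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # segment reversal
            delta = tour_length(cand, dist) - tour_length(tour, dist)
            if delta < 0 or random.random() < math.exp(-delta / T):
                tour = cand                       # controlled uphill step
                if tour_length(tour, dist) < best_len:
                    best, best_len = tour[:], tour_length(tour, dist)
        T *= alpha                                # geometric cooling
    return best, best_len
```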

80 citations


Journal ArticleDOI
TL;DR: In this article, the carbon isotope ratio in coexisting methane and carbon dioxide is used to estimate the temperature at which the two gases came into isotopic equilibrium; the carbon and helium isotope ratios, together with their geologic settings, strongly suggest that the large quantities of methane in Lake Kivu and the gases venting along the East Pacific Rise are abiogenic.
Abstract: Thermodynamic calculations for the C-O-S-H system indicate that at a fixed oxygen fugacity methane is the stable phase relative to carbon dioxide at high pressures and low temperatures. At a constant temperature and pressure, methane is favored at low oxygen fugacities. Volcanic gases and near-surface igneous rocks exhibit high values of oxygen fugacity. However, direct measurements of the oxygen fugacity of spinels from peridotites of deep origin indicate that the oxygen fugacity of these rocks is low, corresponding to an iron-wüstite buffer. The relative abundance of the carbon isotopes C12 and C13 varies widely in natural gases. Methane formed by bacterial fermentation is highly enriched in the lighter isotope, while methane from deep deposits is much less so, as is the methane flowing from hydrothermal vents on the East Pacific Rise. Except in extreme cases, the carbon isotope ratio cannot be used alone to assess whether methane is biogenic or abiogenic. The carbon isotope ratio in coexisting methane and carbon dioxide can be used to estimate the temperature at which the two gases came into isotopic equilibrium. This ratio indicates a high temperature of equilibration for a number of gas deposits. The carbon and helium isotope ratios, together with their geologic settings, strongly suggest that the large quantities of methane in Lake Kivu and the gases venting along the East Pacific Rise are abiogenic. Methane associated with the Red Sea brines and various geothermal areas may also be in part abiogenic. The high abundance of carbon in the Sun, the atmospheres of the outer planets, carbonaceous chondrites, and comets suggests that carbon may be more abundant in the Earth than it is in near-surface igneous rocks. Such a high abundance could lead to a progressive outgassing of methane at depth, which then is oxidized near the surface or in the atmosphere. Methane hydrates are stable at low temperatures and high pressures. Today, methane hydrates are found in areas of permafrost and in ocean sediments. Methane hydrates in ocean sediments were first formed about 20 mya (million years ago) when the Antarctic ice sheet reached sea level. Terrestrial methane hydrates formed more recently, during the glaciations beginning 1.6 mya. Methane hydrates and trapped gas are probably abundant under the Antarctic ice sheet. The formation of methane hydrates may be related to the low values of carbon dioxide in the atmosphere some 20,000 years ago.
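
For context, the CO2-CH4 carbon-isotope geothermometer invoked here has the standard form below, where A and B are empirical calibration constants (e.g., from Bottinga's fits) rather than values given in this paper:

```latex
\alpha_{\mathrm{CO_2\text{-}CH_4}}
  = \frac{1000 + \delta^{13}\mathrm{C}_{\mathrm{CO_2}}}{1000 + \delta^{13}\mathrm{C}_{\mathrm{CH_4}}},
\qquad
1000 \ln \alpha_{\mathrm{CO_2\text{-}CH_4}} \approx \frac{A \times 10^{6}}{T^{2}} + B,
```

with T in kelvin, so measured delta values for coexisting methane and carbon dioxide yield an apparent equilibration temperature.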

51 citations



Patent
27 May 1983
TL;DR: In this article, the wavelength division multiplexer/demultiplexer includes a gradient index of refraction (GRIN) lens and a diffraction grating located adjacent to one end of the GRIN lens.
Abstract: The wavelength division multiplexer/demultiplexer includes a gradient index of refraction (GRIN) lens and a diffraction grating located adjacent to one end of the GRIN lens. The diffraction grating is adapted for switching from a first angle to a second angle with respect to the GRIN lens. For both path and terminal equipment redundancy, first and second input optical fibers are located at first and second input locations on an end surface of the GRIN lens. A first and a second plurality of output optical fibers are also located on this end surface of the GRIN lens. The input and output optical fibers are located so that optical energy will travel from the first input optical fiber to the first plurality of output optical fibers when the diffraction grating is oriented at the first angle and optical energy will travel from the second input optical fiber to the second plurality of output optical fibers when the diffraction grating is oriented at the second angle.
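
The wavelength separation such a grating performs follows the standard grating equation (general optics background, not a claim specific to this patent):

```latex
m\lambda = d\,(\sin\theta_i + \sin\theta_m),
```

where d is the groove spacing, theta_i the incidence angle, and theta_m the diffraction angle of order m. Switching the grating between the first and second angles therefore redirects each wavelength to a different set of fiber positions on the GRIN lens end face.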

29 citations


Patent
27 May 1983
TL;DR: In this paper, a graded index of refraction (GRIN) lens/diffraction grating combination is used to monitor the peak output wavelength of an optical source; detector outputs control a thermoelectric cooler that raises or lowers the source temperature to drive the output toward the desired peak wavelength.
Abstract: A graded index of refraction (GRIN) lens/diffraction grating combination is used to monitor the output source peak wavelength of an optical source. As the output wavelength of a source varies, different detectors are activated. The output of these detectors is used to control a thermoelectric cooler which will raise or lower the temperature of the source so as to drive the output wavelength toward the desired peak output wavelength.
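
A toy sketch of the control loop the patent describes, assuming the grating spreads the source spectrum across a detector array; the function name, sign convention, and gain are hypothetical.

```python
def tec_adjust(detector_powers, center_index, gain=0.1):
    """Return a TEC temperature correction from the detector-array readout.
    The detector seeing the most power marks the current peak wavelength;
    its offset from the desired (center) detector is the error to null."""
    peak_index = max(range(len(detector_powers)), key=lambda i: detector_powers[i])
    error = peak_index - center_index
    # Laser wavelength rises with temperature, so a peak landing long of
    # center calls for cooling (negative correction); sign is illustrative.
    return -gain * error

print(tec_adjust([0.1, 0.2, 0.9, 0.3], center_index=1))   # -> -0.1 (cool slightly)
```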

Journal ArticleDOI
TL;DR: In this article, a map of pH and H+ deposition in precipitation has been developed for the continental United States by analyzing laboratory pH data from nine precipitation chemistry networks and two single stations spread across the continental USA and southern Canada during the late 1970's.
Abstract: Acid rain has become a major environmental concern, the current extent of which is illustrated in this paper. Maps of both pH and H+ deposition in precipitation have been developed for the continental United States by analyzing laboratory pH data from nine precipitation chemistry networks and two single stations spread across the continental United States and southern Canada during the late 1970's. Average laboratory pH values were obtained or calculated for approximately 100 stations, and isopleths of weighted mean pH and mean annual H+ deposition in precipitation were drawn. Results of this analysis show that in spite of a wide variety of collection methods and sampling intervals, there is remarkable uniformity in the average pH among the various stations. The northeastern United States continues to exhibit the most acidic precipitation, with remaining portions of the eastern United States, states along the western coastline and a pocket in central Colorado also experiencing acid precipitation.
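
Because pH is logarithmic, a weighted mean pH must be computed by averaging hydrogen-ion concentrations, not pH values; a minimal sketch (the function name and precipitation-volume weighting are assumed for illustration):

```python
import numpy as np

def volume_weighted_mean_ph(ph, precip_volume):
    """Convert pH to [H+] = 10**(-pH), take the volume-weighted mean,
    then convert back to pH."""
    ph = np.asarray(ph, dtype=float)
    w = np.asarray(precip_volume, dtype=float)
    h_mean = np.sum(w * 10.0 ** (-ph)) / np.sum(w)
    return -np.log10(h_mean)

# Two equal-volume events at pH 4.0 and pH 5.0 average to pH ~4.26,
# not the naive arithmetic mean of 4.5.
print(volume_weighted_mean_ph([4.0, 5.0], [1.0, 1.0]))
```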

Proceedings ArticleDOI
01 Feb 1983
TL;DR: The natural language database query system incorporated in the KNOBS interactive planning system comprises a dictionary-driven parser, APE-II, and a script interpreter, which yield a conceptual dependency conceptualization as a representation of the meaning of user input.
Abstract: The natural language database query system incorporated in the KNOBS interactive planning system comprises a dictionary driven parser, APE-II, and script interpreter which yield a conceptual dependency conceptualization as a representation of the meaning of user input. A conceptualization pattern matching production system then determines and executes a procedure for extracting the desired information from the database. In contrast to syntax driven Q-A systems, e.g., those based on ATN parsers, APE-II is driven bottom-up by expectations associated with word meanings. The processing of a query is based on the contents of several knowledge sources including the dictionary entries (partial conceptualizations and their expectations), frames representing conceptual dependency primitives, scripts which contain stereotypical knowledge about planning tasks used to infer states enabling or resulting from actions, and two production system rule bases for the inference of implicit case fillers, and for determining the responsive database search. The goals of this approach, all of which are currently at least partially achieved, include utilizing similar representations for questions with similar meanings but widely varying surface structures, developing a powerful mechanism for the disambiguation of words with multiple meanings and the determination of pronoun referents, answering questions which require inferences to be understood, and interpreting ellipses and ungrammatical utterances.
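
A toy sketch of the expectation-driven, bottom-up style contrasted here with syntax-driven parsing: word definitions carry partial conceptualizations plus expectations about their slot fillers. The vocabulary and data structures are invented for illustration and are not APE-II's actual dictionary entries.

```python
# Hypothetical dictionary: a verb names a conceptual-dependency primitive and
# sets up expectations; nouns supply concepts that can satisfy those expectations.
LEXICON = {
    "strike":   {"primitive": "PROPEL", "expect": {"object": "TARGET", "actor": "UNIT"}},
    "bridge":   {"concept": "TARGET"},
    "squadron": {"concept": "UNIT"},
}

def parse(tokens):
    conceptualization, pending, seen = {}, {}, []
    for word in tokens:
        entry = LEXICON.get(word, {})
        if "primitive" in entry:                    # verb: set up expectations
            conceptualization["act"] = entry["primitive"]
            pending = dict(entry["expect"])
            for noun, concept in seen:              # earlier words may satisfy them
                for slot, wanted in list(pending.items()):
                    if concept == wanted:
                        conceptualization[slot] = noun
                        del pending[slot]
                        break
        elif "concept" in entry:                    # noun: fill a pending slot
            for slot, wanted in list(pending.items()):
                if entry["concept"] == wanted:
                    conceptualization[slot] = word
                    del pending[slot]
                    break
            else:
                seen.append((word, entry["concept"]))
    return conceptualization

print(parse(["squadron", "strike", "bridge"]))
# -> {'act': 'PROPEL', 'actor': 'squadron', 'object': 'bridge'}
```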

Proceedings ArticleDOI
25 Apr 1983
TL;DR: The use of database views in database management systems that enforce user-level discretionary and nondiscretionary access control policies is discussed, including issues such as how views should be classified and what types of mechanisms should be used to define them.
Abstract: The use of database views in database management systems that enforce user-level discretionary and nondiscretionary access control policies is discussed. This discussion involves several issues, such as how views should be classified and what types of mechanisms should be used to define them. Mapping between views, view updating, and aggregation and inference problems are also discussed.

Journal ArticleDOI
01 Oct 1983
TL;DR: In this article, important integrity and reliability performance considerations for both landing and en route radionavigation in the National Airspace System are discussed and analyzed, and the performance of the present systems is described in detail to develop a better understanding of the performance that should be expected of alternative systems.
Abstract: The important integrity and reliability performance considerations for both landing and en route radionavigation in the National Airspace System are discussed and analyzed. The integrity and reliability of the present systems are described in detail to develop a better understanding of the performance that should be expected of alternative radionavigation systems. NAVSTAR GPS is then analyzed and suggestions are made as to how it could be enhanced to provide increased levels of integrity and reliability.

Journal ArticleDOI
TL;DR: A systematic approach to test data design is presented based on both practical translation of theory and organization of professional lore, organized around five domains and achieving coverage (exercise) of them by the test data.
Abstract: A systematic approach to test data design is presented based on both practical translation of theory and organization of professional lore. The approach is organized around five domains and achieving coverage (exercise) of them by the test data. The domains are processing functions, input, output, interaction among functions, and the code itself. Checklists are used to generate data for processing functions. Separate checklists have been constructed for eight common business data processing functions such as editing, updating, sorting, and reporting. Checklists or specific concrete directions also exist for input, output, interaction, and code coverage. Two global heuristics concerning all test data are also used. A limited discussion on documenting test input data, expected results, and actual results is included.
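
A hypothetical sketch of the checklist idea: each common processing function has a checklist, and every item yields at least one documented test case. The item wording below is invented, not taken from the paper's actual checklists.

```python
# Illustrative checklists for two of the eight business processing functions.
CHECKLISTS = {
    "editing": ["valid value", "value below range", "value above range",
                "wrong type", "missing field"],
    "sorting": ["empty file", "single record", "already sorted",
                "reverse order", "duplicate keys"],
}

def test_cases(function_name):
    """Yield (function, checklist item) pairs; each becomes one test input
    with its expected result documented alongside."""
    for item in CHECKLISTS[function_name]:
        yield (function_name, item)

for case in test_cases("sorting"):
    print(case)
```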

Proceedings ArticleDOI
25 Apr 1983
TL;DR: This paper addresses questions in a personal view of the development and the utility of the "Secure Computer Systems" security model.
Abstract: Eight years after the completion of the "Secure Computer Systems" series, basic questions about that work are being raised. Is the model useful? Is it overly restrictive? Are further modeling efforts necessary to address current problems? This paper addresses those questions in a personal view of the development and the utility of the "Secure Computer Systems" security model.

Journal ArticleDOI
B.S. Babu
TL;DR: This paper describes a 2400 bit/s vocoder based on spectral envelope estimation, spectral coding to 48 bits, pitch extraction, and decreasing-chirp excitation for voiced synthesis; the system is robust in acoustic noise environments.
Abstract: This paper describes a 2400 bit/s vocoder based on spectral envelope estimation, spectral coding to 48 bits, pitch extraction, and decreasing-chirp excitation for voiced synthesis. Several spectral smoothing and coding schemes are described and intelligibility test results compared. This vocoder was implemented on the CSP-30 high speed digital processor at the RADC/EEV Speech Processing Research and Development Facility at Hanscom AFB, MA. This system yields high performance in a quiet environment and is robust in acoustic noise environments at a data rate of 2400 bits/s.
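
One framing consistent with these numbers, shown purely as arithmetic (the abstract does not state the frame length, so the 22.5 ms figure is an assumption borrowed from common 2400 bit/s vocoder designs):

```latex
2400\ \tfrac{\text{bits}}{\text{s}} \times 0.0225\ \tfrac{\text{s}}{\text{frame}}
  = 54\ \tfrac{\text{bits}}{\text{frame}},
\qquad
54 - 48 = 6\ \text{bits remaining for pitch, voicing, and gain.}
```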

Proceedings Article
22 Aug 1983
TL;DR: The KNOBS planning system is an experimental expert system which assists a user by instantiating a stereotypical solution to his problem; its natural language component engages in a dialog with the user to allow him to enter components of a plan or to ask questions about the contents of a database which describes the planning world.
Abstract: The KNOBS [ENGELMAN 80] planning system is an experimental expert system which assists a user by instantiating a stereotypical solution to his problem. SNUKA, the natural language understanding component of KNOBS, can engage in a dialog with the user to allow him to enter components of a plan or to ask questions about the contents of a database which describes the planning world. User input is processed with respect to several knowledge sources including word definitions, scripts which describe the relationships among the scenes of the problem solution, and four production system rule bases which determine the proper database access for answering questions, infer missing meaning elements, describe how to conduct a conversation, and monitor the topic of the conversation. SNUKA differs from GUS [BOBROW 77], a dialog system similar to SNUKA in its goals, in its use of a script to guide the conversation, interpret indirect answers to questions, determine the referents of nominals, perform inferences to answer the user's questions, and decide upon the order of asking questions of the user to maintain a coherent conversation. SNUKA differs from other script-based language understanders such as SAM [CULLINGFORD 78] and FRUMP [DEJONG 79] in its role as a conversational participant instead of a story understander.

Journal ArticleDOI
TL;DR: This report is limited primarily to the application of selected cell-transformation systems for identifying chemicals that are potentially carcinogenic or that act to enhance carcinogenicity.
Abstract: In a recent comprehensive review, Heidelberger discussed cellular transformation as a basic tool for studying various aspects of chemical carcinogenesis in vitro. In keeping with the theme of this conference on toxicity testing, this report is limited primarily to the application of selected cell-transformation systems for identifying chemicals that are potentially carcinogenic or that act to enhance carcinogenicity.

Journal ArticleDOI
B. D. Metcalf, L. Jou
TL;DR: The performance of a family of dual GRIN lens multiplexers has been investigated with a ray tracing analysis; certain versions were found to provide low loss and an insertion loss relatively independent of wavelength.
Abstract: The performance of a family of dual GRIN lens multiplexers has been investigated with a ray tracing analysis. The dependence of the device's insertion loss on several lens and input/output fiber parameters was examined. Several versions of this multiplexer were built and characterized. Certain versions were found to provide low loss and an insertion loss relatively independent of wavelength.
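
For context, the paraxial building block of such a ray-tracing analysis is the standard GRIN-rod ray-transfer matrix (the numeric values below are illustrative, not the paper's lens parameters):

```python
import numpy as np

def grin_rod_abcd(g, length):
    """Ray-transfer matrix inside a GRIN rod with profile n(r) ~ n0*(1-(g*r)**2/2).
    Ray vector is (height, slope) with slopes measured inside the medium;
    end-face refraction is omitted for simplicity."""
    gz = g * length
    return np.array([[np.cos(gz),       np.sin(gz) / g],
                     [-g * np.sin(gz),  np.cos(gz)]])

# Quarter-pitch rod (g*L = pi/2): a point source on the entrance face emerges
# collimated, the geometry these multiplexers exploit.
g = 0.3                                   # illustrative gradient constant, 1/mm
L = (np.pi / 2) / g                       # quarter-pitch length
print(grin_rod_abcd(g, L) @ np.array([0.0, 0.05]))   # output slope ~0: collimated
```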

Journal ArticleDOI
01 Oct 1983
TL;DR: The ESD-MITRE user system interface (USI) design guidelines on sequence control were applied to the first two issues in this list, and were helpful during the design process.
Abstract: A menu-based interface was designed to provide users with an easy, consistent means for finding and initiating various facilities on a host computer. The interface had to be usable by people of varying levels of skill and experience, using different types of terminals. Three major issues were considered in the design: option selection techniques, display design, and functional organization of the choices within the hierarchy. The ESD-MITRE user system interface (USI) design guidelines on sequence control were applied to the first two issues in this list. The guidelines were helpful during the design process. They were used to guide the basic design, and to help make decisions as the design was refined. Different guidelines can lead to different designs, so designers must be prepared to decide which requirements and which guidelines are of greatest importance in their particular situations.

Journal ArticleDOI
TL;DR: By clarifying the notion of information as a physical quantity characteristic of systems and giving it a strict operational definition, the paradoxes and meaningless questions can be resolved or eliminated, and attention focused on important scientific issues connected with information so that progress can then be made unhampered by irrelevant questions.
Abstract: The development of the concept of information as a physical quantity governed by laws and connected with other physical quantities has been carried out largely by L. Brillouin. However, Brillouin's work suffers from confusion on some very critical matters, which leads to attempts to answer meaningless or irrelevant questions. By clarifying the notion of information as a physical quantity characteristic of systems and giving it a strict operational definition, the paradoxes and meaningless questions can be resolved or eliminated, and attention focused on the important scientific issues connected with information so that progress can then be made unhampered by irrelevant questions of an epistemological or metaphysical nature. A notable clarification of the function of computing systems in particular is realized.


Journal ArticleDOI
TL;DR: In this paper, a feasibility study of a multibeam frequency-division multiple-access satellite system operating in the 30/20 GHz band is presented, where the transponder design is greatly simplified by the application of a regional concept.
Abstract: The paper summarizes a feasibility study of a multibeam frequency-division multiple-access satellite system operating in the 30/20 GHz band. The system must accommodate a very high volume of traffic within the restrictions of a 5 kW solar cell array and a 2.5 GHz bandwidth. Multibeam satellite operation reduces the dc power demand and allows reuse of the available bandwidth. Interferences among the beams are brought to acceptable levels by appropriate frequency assignments. A transponder design is presented that is greatly simplified by the application of a regional concept. System analysis shows that minimum shift keying modulation is appropriate for a high-capacity system because it conserves the frequency spectrum. Rain attenuation, a serious problem in this frequency band, is combated with sufficient power margins and with coding. Link budgets, cost analysis, and mass and power calculations are also discussed. A satellite-routed, frequency-division multiple-access system compares favorably in performance and cost with a satellite-switched, time-division multiple-access system.
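
A sketch of the kind of link-budget arithmetic such a feasibility study involves; every number below is a placeholder, not a value from the paper.

```python
import math

def fspl_db(freq_ghz, dist_km):
    """Free-space path loss: 92.45 + 20*log10(f_GHz) + 20*log10(d_km)."""
    return 92.45 + 20 * math.log10(freq_ghz) + 20 * math.log10(dist_km)

eirp_dbw    = 55.0                        # satellite EIRP (placeholder)
gt_dbk      = 30.0                        # earth terminal G/T, dB/K (placeholder)
loss_db     = fspl_db(20.0, 35786.0)      # ~209.5 dB at 20 GHz over GEO range
rain_margin = 10.0                        # margin against rain attenuation
k_dbw       = -228.6                      # Boltzmann's constant, dBW/(K*Hz)

cn0_dbhz = eirp_dbw - loss_db - rain_margin + gt_dbk - k_dbw
print(f"C/N0 = {cn0_dbhz:.1f} dB-Hz")
```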

Journal ArticleDOI
TL;DR: People scanned lists of hierarchically numbered items in order to perform various tasks; for tasks involving location of individual items, a list format with complete numbering resulted in superior performance.
Abstract: People scanned lists of hierarchically numbered items in order to perform various tasks. For tasks involving location of individual items, a list format with complete numbering resulted in superior performance. For tasks requiring perception of list structure, an alternative format with implicit numbering was equally good if not better.

01 Dec 1983
TL;DR: In this article, a ray trace method for determination of propagation paths in a semi-empirical, stratified atmosphere is described, and results obtained from the ray trace model are employed to show that the effective earth radius method (EERM) can be used for approximate determinations of grazing angle, ground range and slant range for higher altitude paths.
Abstract: Atmospheric refractivity gradients are responsible for the bending of radio and microwave propagation paths such that the electromagnetic line-of-sight deviates from the geometrical line-of-sight. Such refraction effects must be accounted for when the performance of airborne surveillance radar systems is modeled. For propagation paths within 1 km of the earth's surface, the effective earth radius model is normally valid and commonly used. In the present work, a ray trace method for determination of propagation paths in a semi-empirical, stratified atmosphere is described. Results obtained from the ray trace model are employed to show that the effective earth radius method (EERM) can be used for approximate determinations of grazing angle, ground range and slant range for higher altitude paths. Effective earth radius scale factors are given as functions of transmitter altitude for selected values of surface refractivity.
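
For context, the effective earth radius method replaces the true radius a with k*a so that refracted rays can be drawn as straight lines; a minimal sketch of the standard factor (the paper's contribution is altitude-dependent scale factors that generalize this):

```python
def effective_earth_radius_factor(dN_dh_per_km=-39.0, a_km=6371.0):
    """k = 1 / (1 + a * dn/dh), where dn/dh is the refractivity gradient
    (N-units/km scaled by 1e-6). The standard-atmosphere gradient of about
    -39 N-units/km gives the familiar k ~ 4/3."""
    dn_dh = dN_dh_per_km * 1e-6            # per km
    return 1.0 / (1.0 + a_km * dn_dh)

print(effective_earth_radius_factor())     # ~1.33, i.e. the 4/3-earth model
```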

Journal ArticleDOI
K. Brayer
TL;DR: A network design approach based on adaptive routing algorithms for the decentralized control of a computer communications network with no central control station, no interchange of routing tables between nodes, and no flooding of traffic is presented.
Abstract: A set of autonomous adaptive routing algorithms for the decentralized control of a computer communications network has been developed. These algorithms permit the design of a network with no central control station, no interchange of routing tables between nodes, and no flooding of traffic. Simultaneously, a network employing these algorithms will carry high levels of traffic, offer full accountability for messages to senders, operate on communications links exhibiting high error rates, and respond rapidly to topology changes when data links or nodes fail or when subscribers change their locations. In this paper, a network design approach based on these algorithms is presented. Most commercially viable computer communications networks are fixed ground-based networks. The nodes of such networks are in fixed permanent emplacements (for example, ARPANET) and users subscribe via telephone lines. The nodes themselves are interconnected by either fixed dedicated wideband data circuits or the national telephone system of the country or countries that the network serves. Other networks operate as radio networks (for example, ALOHA) with users contending for time on one or more radio links. The worldwide restrictions on radio frequencies are a severe limiting factor on the use of such schemes. There remains a need for a mobile computer communications network. It could be used between aircraft and other aircraft and ground stations, as a mobile ground-based network, or as a network including satellites, aircraft, and ships. It could also be used to include mobile nodes into fixed ground-based networks. The connections for such a network could involve point-to-point data links (such as telephone and microwave) and radio links, perhaps of a cellular radio system, thus avoiding contention. Unlike a fixed network, a mobile network does not have a …
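
A hypothetical sketch of one decentralized technique compatible with the stated constraints (no central station, no routing-table interchange, no flooding): backward learning, in which a node infers routes from the links on which traffic arrives. This illustrates the flavor of autonomous adaptation only; it is not the paper's actual algorithm.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.routes = {}   # destination -> (next_hop_link, observed_hop_count)

    def observe(self, src, arrival_link, hops_traveled):
        """Update routing state purely from passing traffic: a message from
        src that arrived on arrival_link implies a path back to src."""
        best = self.routes.get(src)
        if best is None or hops_traveled < best[1]:
            self.routes[src] = (arrival_link, hops_traveled)

    def next_hop(self, dest):
        entry = self.routes.get(dest)
        return entry[0] if entry else None   # None: fall back to, e.g., a random link

n = Node("A")
n.observe(src="D", arrival_link="link-2", hops_traveled=3)
n.observe(src="D", arrival_link="link-1", hops_traveled=2)   # shorter path seen
print(n.next_hop("D"))                                       # -> "link-1"
```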

01 Jan 1983
TL;DR: In this article, the application of ion implantation to emitter and back surface field formation in silicon space solar cells is reviewed, and it is shown that the implantation process is particularly compatible with formation of a high-quality back surface reflector.
Abstract: This paper reviews the application of ion implantation to emitter and back surface field formation in silicon space solar cells. Experiments based on 2 ohm-cm boron-doped silicon are presented. It is shown that the implantation process is particularly compatible with formation of a high-quality back surface reflector. Large area solar cells with AM0 efficiency greater than 14 percent are reported.

Proceedings ArticleDOI
22 Mar 1983
TL;DR: In this article, an evaluation of candidate multichannel, hermaphroditic fiber optic connectors was performed under uniform and, where possible, standard conditions; optical insertion loss was measured while the connectors underwent normal mating, repetitive mating, strain relief flexing, temperature cycling, and connector-to-cable tensile stress.
Abstract: This paper describes an evaluation of candidate multichannel, hermaphroditic fiber optic connectors under uniform and, where possible, standard conditions. Connectors manufactured by TRW, Bell-Northern, Hughes, ITT-STL (Ptarmigan) and ITT-Cannon were tested. The optical insertion loss was measured while the connectors underwent (1) normal mating, (2) repetitive mating, (3) strain relief flexing, (4) temperature cycling, (5) connector-to-cable tensile stress, and (6) connector-to-connector tensile stress. In addition, the candidate connectors were rated by attributes for (1) mating forces, (2) alignment ease, (3) cleaning, (4) handling damage, and (5) human factors.