
Showing papers by "Northwestern University published in 1986"


Journal ArticleDOI
TL;DR: In this article, the authors empirically tested whether different models are needed to predict the adoption of technical process innovations that contain a high degree of new knowledge (radical innovations) versus a low degree of new knowledge (incremental innovations).
Abstract: This paper proposes and empirically tests whether different models are needed to predict the adoption of technical process innovations that contain a high degree of new knowledge (radical innovations) and a low degree of new knowledge (incremental innovations). Results from a sample of 40 footwear manufacturers suggest that extensive knowledge depth, measured by the number of technical specialists, is important for the adoption of both innovation types. Larger firms are likely both to have more technical specialists and to adopt radical innovations. The study did not find associations between the adoption of either innovation type and decentralized decision making, managerial attitudes toward change, or exposure to external information. By implication, managers trying to encourage technical process innovation adoption need not be as concerned about modifying centralization of decision making, managerial attitudes, and exposure to external information as would managers trying to encourage other types of innovation adoption, e.g., innovations in social services, where these factors have been found to be important. Instead, investment in human capital in the form of technical specialists appears to be a major facilitator of technical process innovation adoption.

2,389 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the current research in building models of conditional variances using the Autoregressive Conditional Heteroskedastic (ARCH) and Generalized ARCH (GARCH) formulations.
Abstract: This paper will discuss the current research in building models of conditional variances using the Autoregressive Conditional Heteroskedastic (ARCH) and Generalized ARCH (GARCH) formulations. The discussion will be motivated by a simple asset pricing theory which is particularly appropriate for examining futures contracts with risk averse agents. A new class of models defined to be integrated in variance is then introduced. This new class of models includes the variance analogue of a unit root in the mean as a special case. The models are argued to be both theoretically important for the asset pricing models and empirically relevant. The conditional density is then generalized from a normal to a Student-t with unknown degrees of freedom. By estimating the degrees of freedom, implications about the conditional kurtosis of these models and time aggregated models can be drawn. A further generalization allows the conditional variance to be a non-linear function of the squared innovations. Throughout empirical...
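The conditional-variance recursion underlying these models is easy to sketch. Below is a minimal GARCH(1,1) simulation; the parameter values are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

# Minimal sketch of a GARCH(1,1) process: the conditional variance h_t
# follows h_t = omega + alpha * e_{t-1}^2 + beta * h_{t-1}.
# Parameter values are illustrative assumptions, not from the paper.
def simulate_garch(n, omega=0.1, alpha=0.1, beta=0.8, seed=0):
    rng = np.random.default_rng(seed)
    h = np.empty(n)
    e = np.empty(n)
    h[0] = omega / (1.0 - alpha - beta)  # unconditional variance
    e[0] = np.sqrt(h[0]) * rng.standard_normal()
    for t in range(1, n):
        h[t] = omega + alpha * e[t - 1] ** 2 + beta * h[t - 1]
        e[t] = np.sqrt(h[t]) * rng.standard_normal()
    return e, h

returns, variances = simulate_garch(10_000)
# When alpha + beta = 1 the process is "integrated in variance" (IGARCH):
# the unconditional variance no longer exists, the variance analogue of a
# unit root in the mean.
```

With alpha + beta = 0.9 as above, the process is stationary and the sample variance hovers near omega / (1 - alpha - beta) = 1.0.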

2,055 citations


Journal ArticleDOI
05 Sep 1986-Science
TL;DR: The identity of an important cell type that supports replication of the AIDS retrovirus in brain tissue was determined in two affected individuals and these cells were mononucleated and multinucleated macrophages that actively synthesized viral RNA and produced progeny virions in the brains of the patients.
Abstract: One of the common neurological complications in patients with the acquired immune deficiency syndrome (AIDS) is a subacute encephalopathy with progressive dementia. By using the techniques of cocultivation for virus isolation, in situ hybridization, immunocytochemistry, and transmission electron microscopy, the identity of an important cell type that supports replication of the AIDS retrovirus in brain tissue was determined in two affected individuals. These cells were mononucleated and multinucleated macrophages that actively synthesized viral RNA and produced progeny virions in the brains of the patients. Infected brain macrophages may serve as a reservoir for virus and as a vehicle for viral dissemination in the infected host.

1,675 citations


Journal ArticleDOI
TL;DR: In this paper, a distinction between two types of judgment tasks, memory-based versus on-line, is introduced and is related to the five process models: independent processing, availability, biased retrieval, biased encoding, and incongruity-biased encoding.
Abstract: Five alternative information processing models that relate memory for evidence to judgments based on the evidence are identified in the current social cognition literature: independent processing, availability, biased retrieval, biased encoding, and incongruity-biased encoding. A distinction between two types of judgment tasks, memory-based versus on-line, is introduced and is related to the five process models. In memory-based tasks where the availability model describes subjects' thinking, direct correlations between memory and judgment measures are obtained. In on-line tasks where any of the remaining four process models may apply, prediction of the memory-judgment relationship is equivocal but usually follows the independence model prediction of zero correlation. There ought to be a relationship between memory and judgment. Our intuition tells us that we should be able to generate more arguments and information in support of a favored position than against it, that evaluations of people should be related to the amounts of good and bad information we have about them. When a person is able to remember many arguments against a belief, or to cite many good characteristics of an acquaintance, we are surprised if they endorse the belief or dislike the person. In support of intuitions like these, names have been given to the idea that memory and judgment have a simple direct relationship, including "availability," "dominance of the given," "salience effect," and so forth. However, empirical studies of the relationship between memory and judgment with subject matter as diverse as social impressions, personal attitudes, attributions of causes for behavior, evaluations of legal culpability, and a variety of probability and frequency estimates have not revealed simple relations between memory and judgment. Some relationships have been found, but strong empirical relations are rare and results are often contradictory. 
Some examples seem to support the expectation of a direct relationship between memory and judgment. Tversky and Kahneman (1973) demonstrated that many judgments of numerosity were directly correlated with the "ease with which instances or associations could be brought to mind" (p. 208). In an illustrative series of experiments, they showed that judgments of the frequency of words in English text were correlated with the ease of remembering the words. Beyth-Marom and Fischhoff (1977) provided more definite evidence on the strength of the mem

1,161 citations



Journal ArticleDOI
TL;DR: In this paper, a general review of experimental work is presented in order to permit a comprehensive evaluation of current understanding of the quantum size effect on the electronic spectrum including magnetic susceptibility, nuclear magnetic resonance, electron spin resonance, heat capacity, optical, and infrared absorption measurements.
Abstract: The subject of small metallic particle properties is outlined with emphasis on quantum electronic effects. The theoretical background for interpretation of experiments is discussed beginning with the work of Kubo. More recent amendments to this have been included, taking into account the techniques of random matrix theory and effects of the spin-orbit interaction. A general review of experimental work is presented in order to permit a comprehensive evaluation of current understanding of the quantum size effect on the electronic spectrum. This survey includes magnetic susceptibility, nuclear magnetic resonance, electron spin resonance, heat capacity, optical, and infrared absorption measurements. These are discussed in many instances from the point of view of there being competing size effects arising from a reduced volume contrasted with those from the surface. A number of stimulating and provocative results have led to the development of new areas of research involving metallic clusters such as cluster beam techniques, far-infrared absorption by particle clusters, adsorbate NMR, and particle-matrix composites. Although there is little question that the experiments themselves indicate the existence of quantum effects, there are as yet, insufficient results to test the theoretical predictions for electron-level distribution functions based on fundamental symmetries of the electron Hamiltonian. A new suggestion for measurement of the electron-level correlation function is made using the magnetic field dependence of the NMR Knight shift. Particle preparation methods are also reviewed with commentary on the problems and advantages of these techniques for investigation of quantum electronic effects.

1,153 citations


Book ChapterDOI
TL;DR: In this paper, the authors reviewed the search theory's performances to date in labor market analysis and found that the steady state fractions that are unemployed are equal to the product of the average frequency and duration of unemployment spells.
Abstract: Publisher Summary The theory of search is an important young actor on the stage of economic analysis. It plays a major part in a dramatic new field, the economics of information and uncertainty. By exploiting its sequential statistical decision theoretic origins, the search theory has found success by specializing in the portrayal of a decision-maker who must acquire and use information to take rational action in an ever changing and uncertain environment. Although the search theory's specific characterizations can now be found in many arenas of applied economic analysis, most of the theory's original roles are found in the labor economics literature. This chapter reviews the search theory's performances to date in labor market analysis. In a given population of labor force participants, the steady state fractions that are unemployed are equal to the product of the average frequency and duration of unemployment spells. The data sources reveal that unemployment spells are typically frequent but short in all phases of the business cycle, although counter-cyclic increases in both frequency and duration contribute to the well-known time series behavior of unemployment rates.
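The steady-state identity stated in this summary is simple arithmetic: the unemployment rate is the product of spell frequency and spell duration. A sketch with invented numbers (not figures from the chapter):

```python
# Steady-state identity from the survey: the unemployment rate equals the
# average frequency of unemployment spells times their average duration.
# The numbers below are illustrative assumptions, not data from the chapter.
spell_frequency = 0.02   # spells per worker per month
spell_duration = 3.0     # average spell length, in months
unemployment_rate = spell_frequency * spell_duration
# Frequent but short spells: a 2% monthly inflow with 3-month spells
# yields a 6% steady-state unemployment rate.
```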

1,109 citations


Journal ArticleDOI
TL;DR: Eight classes of chemosensory neurons in C. elegans fill with fluorescein when living animals are placed in a dye solution, suggesting that dye contact is the principal factor under selection.

884 citations


Journal ArticleDOI
TL;DR: A high correlation was obtained between the CES-D and trait anxiety, which suggests that the CES-D measures in large part the related conceptual psychological domain of predisposition for anxiousness.
Abstract: The factorial and discriminant validity of the Center for Epidemiological Studies Depression (CES-D) scale was examined for a sample of 116 parents who were participating in family support programs designed to prevent child abuse and neglect. Participants' self-reports of depressive symptoms as measured by the CES-D were analyzed in relation to their self-esteem (measured with the Rosenberg Self-Esteem scale) and state and trait anxiety (measured with Spielberger's State-Trait Anxiety Inventory). Factorial validity was adequate, and results indicated a moderate correlation between the CES-D and self-esteem and state anxiety. However, a high correlation was obtained between the CES-D and trait anxiety, which suggests that the CES-D measures in large part the related conceptual psychological domain of predisposition for anxiousness.

743 citations


Journal ArticleDOI
TL;DR: In this paper, a simple two-dimensional mathematical model is proposed for the analysis of the brittle-ductile transition process, the corresponding elasticity boundary-value problem is formulated in terms of singular integral equations, the solution method is given, and numerical results are obtained and their physical implications are discussed.
Abstract: The micromechanics of brittle failure in compression and the transition from brittle to ductile failure, observed under increasing confining pressures, are examined in the light of existing experimental results and model studies. First, the micromechanics of axial splitting and faulting is briefly reviewed, certain mathematical models recently developed for analysing these failure modes are outlined, and some new, simple closed-form analytic solutions of crack growth in compression and some new quantitative model experimental results are presented. Then, a simple two-dimensional mathematical model is proposed for the analysis of the brittle-ductile transition process, the corresponding elasticity boundary-value problem is formulated in terms of singular integral equations, the solution method is given, and numerical results are obtained and their physical implications are discussed. In addition, a simple closed-form analytic solution is presented and, by comparing its results with those of the exact formulation, it is shown that the analytic estimates are reasonably accurate in the range of the brittle response of the material. Finally, the results of some laboratory model experiments are reported in an effort to support the mathematical models.

735 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compare how much profit an owner of a patent can realize by licensing it to an oligopolistic industry producing a homogeneous product, by means of a fixed fee or a per unit royalty.
Abstract: We compare how much profit an owner of a patented cost-reducing invention can realize by licensing it to an oligopolistic industry producing a homogeneous product, by means of a fixed fee or a per unit royalty. Our analysis is conducted in terms of a noncooperative game involving n + 1 players: the inventor and the n firms. In this game the inventor acts as a Stackelberg leader, and it has a unique subgame perfect equilibrium in pure strategies. It is shown that licensing by means of a fixed fee is superior to licensing by means of a royalty for both the inventor and consumers. Only a "drastic" innovation is licensed to a single producer.

Journal ArticleDOI
TL;DR: In this article, the probabilistic finite element method (PFEM) is formulated for linear and non-linear continua with inhomogeneous random fields, and the random field is also discretized.
Abstract: The probabilistic finite element method (PFEM) is formulated for linear and non-linear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in non-linear continua. The accuracy and efficiency of the method is demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem and a two-dimensional plane-stress beam bending problem. The moments calculated compare favourably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
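The eigenvalue orthogonalization step described here can be sketched directly: correlated random variables with covariance C become uncorrelated under the eigenvector transform. The covariance matrix below is invented for illustration, not taken from the paper:

```python
import numpy as np

# Sketch of the PFEM's eigenvalue orthogonalization: transform correlated
# random variables with covariance C into uncorrelated ones via the
# eigendecomposition C = V diag(lam) V^T. C here is an invented example.
C = np.array([[4.0, 2.0, 1.0],
              [2.0, 3.0, 0.5],
              [1.0, 0.5, 2.0]])
lam, V = np.linalg.eigh(C)   # eigenvalues and orthonormal eigenvectors

rng = np.random.default_rng(1)
x = rng.multivariate_normal(np.zeros(3), C, size=200_000)
y = x @ V                    # transformed variables
cov_y = np.cov(y.T)
# cov_y is approximately diag(lam): the off-diagonal terms vanish, so a
# reduced set of variables (those with the largest eigenvalues) can carry
# the second-moment analysis, as the paper notes.
```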

Journal ArticleDOI
TL;DR: The problem of choosing a portfolio of securities so as to maximize the expected utility of wealth at a terminal planning horizon is solved via stochastic calculus and convex analysis and a martingale representation problem is developed.
Abstract: The problem of choosing a portfolio of securities so as to maximize the expected utility of wealth at a terminal planning horizon is solved via stochastic calculus and convex analysis. This problem is decomposed into two subproblems. With security prices modeled as semimartingales and trading strategies modeled as predictable processes, the set of terminal wealths is identified as a subspace in a space of integrable random variables. The first subproblem is to find the terminal wealth that maximizes expected utility. Convex analysis is used to derive necessary and sufficient conditions for optimality and an existence result. The second subproblem of finding the admissible trading strategy that generates the optimal terminal wealth is a martingale representation problem. The primary advantage of this approach is that explicit formulas can readily be derived for the optimal terminal wealth and the corresponding expected utility, as is shown for the case of an exponential utility function and a risky security modeled as geometric Brownian motion.

Journal ArticleDOI
TL;DR: In this paper, the authors developed a theory and econometric method of portfolio performance measurement using a competitive equilibrium version of the Arbitrage Pricing Theory, and showed that the Jensen coefficient and the appraisal ratio of Treynor and Black are theoretically compatible with the arbitrage pricing theory.

Journal ArticleDOI
TL;DR: The authors found that the ability to analyze an everyday problem with reference to the law of large numbers was much greater for those with several years of training in statistics than for those who had less.

Journal ArticleDOI
07 Feb 1986-Science
TL;DR: This time-dependent redistribution of enzyme activity was directly related to the persistence of synaptic plasticity, suggesting a novel mechanism regulating the strength of synaptic transmission.
Abstract: Protein kinase C activity in rat hippocampal membranes and cytosol was determined 1 minute and 1 hour after induction of the synaptic plasticity of long-term potentiation. At 1 hour after long-term potentiation, but not at 1 minute, protein kinase C activity was increased twofold in membranes and decreased proportionately in cytosol, suggesting translocation of the activity. This time-dependent redistribution of enzyme activity was directly related to the persistence of synaptic plasticity, suggesting a novel mechanism regulating the strength of synaptic transmission.

Journal ArticleDOI
TL;DR: A model is built to study how the frequency of technological change interacts with the intensity of competition to influence the optimal level of integration and is tested and very strongly supported by data from 93 industries.
Abstract: This paper starts with a survey of the received theories of vertical integration. We then extend these theories by arguing that while uncertainty in general will make integration more effective, a particular type of uncertainty, the possibility of technological obsolescence, works the other way. After making this point at a conceptual level, we build a model to study how the frequency of technological change interacts with the intensity of competition to influence the optimal level of integration. The predictions of the model are then tested and very strongly supported by data from 93 industries.

Journal ArticleDOI
TL;DR: In this article, the laminar flame speeds of methane + air and propane + air mixtures, with and without the addition of stoichiometrically small amounts of hydrogen, have been determined by first measuring the flame speeds with stretch and then linearly extrapolating these values to zero stretch.
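The extrapolation procedure in this TL;DR is a straight-line fit of measured flame speed against stretch rate, read off at zero stretch. A sketch with synthetic data points (not measurements from the paper):

```python
import numpy as np

# Zero-stretch extrapolation: fit flame speed vs. stretch rate with a line
# and take the intercept. The data below are synthetic, for illustration.
stretch = np.array([100.0, 200.0, 300.0, 400.0])   # stretch rate, 1/s
speed = np.array([36.0, 34.1, 31.9, 30.0])         # measured speed, cm/s

slope, intercept = np.polyfit(stretch, speed, 1)
unstretched_speed = intercept   # laminar flame speed at zero stretch
```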

Journal ArticleDOI
TL;DR: In this paper, the problem of determining the minimum number of vertex guards that can see an n-wall simply connected art gallery is shown to be NP-hard, and the proof can be modified to show that the problems of finding minimum numbers of edge guards and point guards in a simply connected polygonal region are also NP-hard.
Abstract: We study the computational complexity of the art gallery problem originally posed by Klee, and its variations. Specifically, the problem of determining the minimum number of vertex guards that can see an n-wall simply connected art gallery is shown to be NP-hard. The proof can be modified to show that the problems of determining the minimum number of edge guards and the minimum number of point guards in a simply connected polygonal region are also NP-hard. As a byproduct, the problem of decomposing a simple polygon into a minimum number of star-shaped polygons such that their union is the original polygon is also shown to be NP-hard.

Journal ArticleDOI
TL;DR: It is concluded that a specific, high affinity, low capacity binding protein for hGH with mol wt of 60,000-65,000 exists in normal and hypopituitary human plasma.
Abstract: Human (h) GH in plasma exists as a series of size isomers, which are in part explained by the presence of hGH oligomers. However, certain aspects of circulating large mol wt hGH, such as its high relative proportion compared to that in the pituitary, are poorly understood. To explore whether binding of hGH to plasma protein(s) could contribute to the phenomenon of large mol wt hGH, we incubated freshly prepared monomeric [125I]hGH or biosynthesized [3H]hGH with normal human plasma or serum at pH 7.4 for various time periods at 22 and 37 C. Plasma radioactive hGH patterns were then analyzed simultaneously with unincubated tracer hGH by Sephadex G-100 and G-200 chromatography. We found that part of the radioactivity was converted to a component with an apparent mol wt of 85,000, suggesting binding to a plasma protein(s). This phenomenon was inhibited in a dose-dependent fashion by unlabeled hGH. Saturation/Scatchard analysis indicated an association constant (Ka) of 2-3 × 10^8 M^-1 and a maximum binding capacity o...

Book
01 Jan 1986
TL;DR: The representation of acoustic and electromagnetic fields; the special theory of relativity; radiation; resonators; the theory of waveguides; refraction; surface waves; scattering by smooth objects; diffraction by edges; transient waves.
Abstract: The representation of acoustic and electromagnetic fields; the special theory of relativity; radiation; resonators; the theory of waveguides; refraction; surface waves; scattering by smooth objects; diffraction by edges; transient waves. Appendices: Bessel functions; Legendre functions; Mathieu functions; parabolic cylinder functions; spheroidal functions; tensor calculus; asymptotic evaluation of integrals.

Journal ArticleDOI
TL;DR: Results suggest that readers encode these inferences into memory only minimally, but that they can make use of a cue word that represents the inference both at the time of an immediate test and in delayed cued recall.
Abstract: If someone falls off of a 14th story roof, very predictably death will result. The conditions under which readers appear to infer such predictable outcomes were examined with three different retrieval paradigms: immediate recognition test, cued recall, and priming in word recognition. On immediate test, responses to a word representing the implicit outcome (e.g., dead) were slow, but on delayed test these responses were slow or inaccurate only when primed by an explicitly stated word. However, the word expressing the predictable outcome did function as an effective recall cue. Results suggest that readers encode these inferences into memory only minimally, but that they can make use of a cue word that represents the inference (e.g., dead) both at the time of an immediate test and in delayed cued recall.

Journal ArticleDOI
TL;DR: Several new numerical integration formulas on the surface of a sphere in three dimensions are derived in this article, which are superior to the existing ones in that for the same degree of approximation they require fewer integration points for functions with central or planar symmetry.
Abstract: Several new numerical integration formulas on the surface of a sphere in three dimensions are derived. These formulas are superior to the existing ones in that for the same degree of approximation they require fewer integration points for functions with central or planar symmetry. Furthermore, a general method of deriving the integration formulas, which achieves conceptual simplicity at the expense of extensive numerical work left for a computer, is demonstrated. In this method, the coefficients of the integration formula are determined from a system of linear algebraic equations directly representing the conditions for a certain number of terms of the three-dimensional Taylor series expansion of the integrated function about the center of the sphere to vanish, while the unknown locations of the integration points are determined from a condition for the next term (or terms) of the expansion to vanish, and if it cannot be made to vanish, then from a condition for minimizing the magnitude of this term (or these terms). Finally, we formulate a new condition of optimality of the integration formulas which is important for the integration error in certain applications.
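The idea of a symmetric surface-of-sphere rule with a fixed polynomial degree can be illustrated with the classical 6-point octahedral formula. This rule is for illustration only; it is not one of the new formulas derived in the paper:

```python
import numpy as np

# Classical 6-point octahedral rule: equal weights 1/6 at the axis points
# of the unit sphere. It reproduces the spherical average of polynomials
# up to degree 3 exactly, but fails at degree 4, showing why higher-order
# rules need more points. (Illustrative rule, not one from the paper.)
points = np.array([[ 1, 0, 0], [-1, 0, 0],
                   [ 0, 1, 0], [ 0,-1, 0],
                   [ 0, 0, 1], [ 0, 0,-1]], dtype=float)
weights = np.full(6, 1.0 / 6.0)

def sphere_average(f):
    return np.sum(weights * np.array([f(p) for p in points]))

# Exact spherical averages: <x^2> = 1/3, <x^4> = 1/5.
avg_x2 = sphere_average(lambda p: p[0] ** 2)   # rule gives 1/3, exact
avg_x4 = sphere_average(lambda p: p[0] ** 4)   # rule gives 1/3, exact is 1/5
```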

OtherDOI
TL;DR: The sections in this article are: Respiratory Homeostasis and Control of Respiratory Movements, Exercise—An Example of an Integrated Response, and Conclusion.
Abstract: The sections in this article are:
1 Respiratory Homeostasis and Control of Respiratory Movements
1.1 Effectors of Ventilation
1.2 Respiratory Muscles and Their Innervation
1.3 Summary
2 Central Location of Respiratory Controller
2.1 Historical Background
2.2 Summary
2.3 Modern View
2.4 Brain Stem Anatomy
2.5 Classification of Respiratory Neurons
2.6 Connections Between Respiratory Neurons
2.7 Location and Mechanisms for Generation of Respiratory Patterns
2.8 Production of Respiratory Pattern
2.9 Central Pattern Generation and Respiration
2.10 Hypothesis for Role of Dorsal and Ventral Respiratory Groups in Generating Respiratory Pattern
3 Sensors
3.1 Time Course of Responses to Respiratory Afferent Stimulation
3.2 Integrated Responses to Changes in Carbon Dioxide
3.3 Integrated Responses to Changes in Oxygen
3.4 Summary
4 Mechanoreceptors
4.1 Pulmonary Stretch Receptors
4.2 Summary
5 Exercise—An Example of an Integrated Response
5.1 Critique
6 State Dependence
7 Conclusion

Journal ArticleDOI
TL;DR: In this article, a finite element method applicable to truss structures for the determination of the probabilistic distribution of the dynamic response has been developed and implemented into a pilot computer code with two-dimensional bar elements.
Abstract: A finite element method applicable to truss structures for the determination of the probabilistic distribution of the dynamic response has been developed. Several solutions have been obtained for the mean and variance of displacements and stresses of a truss structure; nonlinearities due to material and geometrical effects have also been included. In addition, to test this method, Monte Carlo simulations have been used and a new method with implicit and/or explicit time integration and Hermite-Gauss quadrature has also been developed and used. All these methodologies have been implemented into a pilot computer code with two-dimensional bar elements.
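The Monte Carlo verification mentioned here can be sketched on the simplest possible structure: a single bar with random stiffness. The load and stiffness statistics below are invented for illustration:

```python
import numpy as np

# Monte Carlo estimate of the mean and variance of the displacement
# u = P / k of one bar with random stiffness k, compared against a
# first-order perturbation result (the kind of linearization a
# probabilistic FEM uses). All numbers are illustrative assumptions.
rng = np.random.default_rng(42)
P = 10.0                                             # applied load
k = rng.normal(loc=100.0, scale=5.0, size=500_000)   # random stiffness
u = P / k                                            # displacement samples

mean_u = u.mean()
var_u = u.var()
# First-order perturbation: mean ~ P/mu_k, var ~ (P/mu_k**2)**2 * sigma_k**2.
approx_mean = P / 100.0
approx_var = (P / 100.0 ** 2) ** 2 * 5.0 ** 2
# The two estimates agree closely at this small coefficient of variation.
```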

Journal ArticleDOI
TL;DR: The similarity of activation patterns indicates that elbow torque was the principal determining factor in muscle synergies, and it is suggested that they are best understood on the basis of a model which encodes limb torque in premotor neurons.
Abstract: We studied the patterns of EMG activity in elbow muscles in three normal human subjects. The myoelectrical activity of 7-10 muscles that act across the human elbow joint was simultaneously recorded with intramuscular electrodes during isometric joint torques exerted over a range of directions. These directions included flexion, extension, varus (internal humeral rotation), valgus (external humeral rotation), and several intermediate directions. The forces developed at the wrist covered a range of 360 degrees, all orthogonal to the long axis of the forearm. The levels of EMG activity were observed to increase with increasing joint torque in an approximately linear manner. All muscles were active for ranges less than 360 degrees and most were active for less than 180 degrees. The EMG activity was observed to vary in a systematic manner with changes in torque direction and, when examined over the full angular range at a variety of torque levels, is simply scaled with increasing torque magnitude. There were no torque directions or torque magnitudes for which a single muscle was observed to be active alone. In all cases, joint torque appeared to be produced by a combination of muscles. The direction for which the EMG of a muscle reached a maximum value was observed to correspond to the direction of greatest mechanical advantage as predicted by a simple mechanical model of the elbow and relevant muscles. Muscles were relatively inactive during varus torques. This implies that the muscles were not acting to stabilize the joint in this direction and could have been allowing ligaments to carry the load. Plots of EMG activity in one muscle against EMG activity in another demonstrate some instances of pure synergies, but patterns of coactivation for most muscles are more complicated and vary with torque direction. The complexity of these patterns raises the possibility that synergies are determined by the task and may have no independent existence. 
Activity in two heads of triceps brachii (medial head--a single-joint muscle and long head--a two-joint muscle) covaried closely for a range of torque magnitudes and directions, though shoulder torque and hence the forces experienced by the long head of the triceps undoubtedly varied. The similarity of activation patterns indicates that elbow torque was the principal determining factor. The origins of muscle synergies are discussed. It is suggested that they are best understood on the basis of a model which encodes limb torque in premotor neurons.(ABSTRACT TRUNCATED AT 400 WORDS)

Journal Article
TL;DR: In this paper, Poisson regression is proposed as a superior alternative to conventional linear regression for many safety studies because it requires smaller sample sizes and has other desirable statistical properties Models are estimated using accident, travel mileage, and environmental data from the Indiana Toll Road.
Abstract: Consideration of highway safety studies in a time-space domain is used to introduce the concept that different study designs result in different underlying probability distributions describing accident occurrence. Poisson regression is proposed as a superior alternative to conventional linear regression for many safety studies because it requires smaller sample sizes and has other desirable statistical properties. Models are estimated using accident, travel mileage, and environmental data from the Indiana Toll Road. A pooled model including all accidents revealed that accident occurrence increases with automobile vehicle miles of travel (VMT), truck VMT, and hours of snowfall. Segmentation of the data into subsets that describe different types of collisions revealed that automobile accidents are much more sensitive to environmental conditions than are truck accidents. Use of the segmentation technique allowed a much clearer understanding of the effects of travel mileage on accident occurrence than could have been obtained from the pooled data alone.
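The model class proposed here, counts with a log-linear mean, can be fit in a few lines by Newton's method. The covariates and coefficients below are synthetic stand-ins, not the Indiana Toll Road data:

```python
import numpy as np

# Minimal Poisson regression fit by Newton-Raphson: E[accidents] = exp(X @ beta).
# The simulated VMT/snowfall covariates and coefficients are invented,
# not estimates from the paper.
rng = np.random.default_rng(0)
n = 2_000
vmt = rng.uniform(0.5, 2.0, n)            # vehicle miles of travel (scaled)
snow = rng.uniform(0.0, 1.0, n)           # hours of snowfall (scaled)
X = np.column_stack([np.ones(n), vmt, snow])
true_beta = np.array([-1.0, 0.8, 0.5])
y = rng.poisson(np.exp(X @ true_beta))    # simulated accident counts

beta = np.zeros(3)
for _ in range(25):                        # Newton-Raphson iterations
    mu = np.exp(X @ beta)
    grad = X.T @ (y - mu)                  # score of the Poisson log-likelihood
    hess = X.T @ (X * mu[:, None])         # Fisher information
    beta = beta + np.linalg.solve(hess, grad)
# beta now approximates true_beta: counts rise with VMT and snowfall.
```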

Journal ArticleDOI
25 Apr 1986-Cell
TL;DR: The VAI RNA of adenovirus is a small, RNA polymerase III-transcribed species required for efficient translation of host cell and viral mRNAs late after infection and can be reproduced in extracts of interferon-treated cells.


Journal ArticleDOI
31 Jan 1986-Science
TL;DR: Genetic analysis suggests that melatonin deficiency in C57BL/6J mice results from mutations in two independently segregating, autosomal recessive genes.
Abstract: Pineal melatonin may play an important role in regulation of vertebrate circadian rhythms and in human affective disorders. In some mammals, such as hamsters and sheep, melatonin is involved in photoperiodic time measurement and in control of reproduction. Although wild mice (Mus domesticus) and some wild-derived inbred strains of mice have melatonin in their pineal glands, several inbred strains of laboratory mice (for example, C57BL/6J) were found not to have detectable melatonin in their pineal glands. Genetic analysis suggests that melatonin deficiency in C57BL/6J mice results from mutations in two independently segregating, autosomal recessive genes. Synthesis of melatonin from serotonin in the pineal gland requires the enzymes N-acetyltransferase (NAT) and hydroxyindole-O-methyltransferase (HIOMT). Pineal glands from C57BL/6J mice have neither NAT nor HIOMT activity. These results suggest that the two genes involved in melatonin deficiency are responsible for the absence of normal NAT and HIOMT enzyme activity.