
Showing papers by "Georgia Institute of Technology" published in 1991


Journal ArticleDOI
TL;DR: In this article, a conceptual model of entrepreneurship as an organizational-level phenomenon is presented; the model is intended to depict the organizational system elements that relate to entrepreneurial activities.
Abstract: This article outlines a conceptual model of entrepreneurship as an organizational-level phenomenon. The model is intended to depict the organizational system elements that relate to entrepreneurial...

3,681 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the contribution of three conceptually distinct aspects or components (processing efficiency, storage capacity, and coordination effectiveness) to age-related differences in measures of working memory.
Abstract: Two studies, involving a total of 460 adults between 18 and 87 years of age, were conducted to determine which of several hypothesized processing components was most responsible for age-related declines in working memory functioning. Significant negative correlations between age and measures of working memory (i.e., from -.39 to -.52) were found in both studies, and these relations were substantially attenuated by partialing measures hypothesized to reflect storage capacity, processing efficiency, coordination effectiveness, and simple comparison speed. Because the greatest attenuation of the age relations occurred with measures of simple processing speed, it was suggested that many of the age differences in working memory may be mediated by age-related reductions in the speed of executing elementary operations. Working memory is generally defined as the preservation of information while simultaneously processing the same or other information. It is distinguished from other forms of memory because the assumption that it reflects both processing and storage implies that it plays an important role in many cognitive tasks (e.g., Baddeley, 1986; Carpenter & Just, 1989; Salthouse, 1990). An illustration of the hypothesized functioning of working memory in one cognitive task, mental arithmetic, is presented in Figure 1. The left column in this figure indicates the operations to be performed, and the right column represents the intermediate products that must be temporarily stored while carrying out those operations. This figure is useful because it graphically illustrates both the importance and the complexity of working memory. That is, it is clear from this example that effective storage of information is essential in order for the successful performance of certain cognitive tasks. Figure 1 also suggests that it may be fruitful to think of working memory not as a single discrete structure, but rather as a dynamic interchange among three conceptually distinct aspects or components-processing efficiency, storage capacity, and coordination effectiveness. Processing is represented by the series of operations in the left column, storage is represented by the entries in the right column, and coordination can be assumed to correspond both to the sequencing of operations and to the arrows portraying the exchange of information between processing and storage. A primary purpose of this article was to investigate the contribution of these three hypothesized components to age-related differences in measures of working memory. Each of the components has been hypothesized to be an important source of adult age differences by one or more researchers, but few definitive conclusions have been possible because the currently available evidence is both weak and inconsistent. To illustrate,

1,335 citations


Book
01 Jun 1991
TL;DR: The authors review the evidence of age-related differences in cognitive functioning and then evaluate the major explanations proposed to account for the established negative relations between age and cognition, closing with observations on the progress made in explaining cognitive aging phenomena and recommendations for research practices that might contribute to greater progress in the future.
Abstract: The phenomenon of age-related cognitive decline has long been controversial, both in terms of mere existence, and with respect to how it is explained. Some researchers have dismissed it as an artifact of declining health or lower levels of education, and others have attributed it to general changes occurring in the external environment. Still other interpretations have been based on the "use it or lose it" principle -- known as the Disuse Hypothesis -- or on the idea that there are qualitative differences in either the structure or the process of cognition across the adult years. Perhaps the most popular approach at present relies on the information-processing perspective and attempts to identify the critical processing component most responsible for age-related differences in cognition. The primary purposes of this book are first to review the evidence of age-related differences in cognitive functioning and then to evaluate the major explanations proposed to account for the negative relations between age and cognition that have been established. Included is a discussion of theoretical dimensions and levels of scientific theorizing assumed to be helpful in understanding and evaluating alternative perspectives on cognitive aging. The various perspectives are then covered in detail and analyzed. The text concludes with observations about the progress that has been made in explaining cognitive aging phenomena, plus recommendations for research practices that might contribute to greater progress in the future.

1,048 citations


Book
17 Aug 1991
TL;DR: This book discusses Conceptual Design, Logical Design, and Design Tools for Database Design, as well as Joint Data and Functional Analysis, and Improving the Quality of a Database Schema.
Abstract: I. CONCEPTUAL DATABASE DESIGN. 1. An Introduction to Database Design. 2. Data Modeling Concepts. 3. Methodologies for Conceptual Design. 4. View Design. 5. View Integration. 6. Improving the Quality of a Database Schema. 7. Schema Documentation and Maintenance. II. FUNCTIONAL ANALYSIS FOR DATABASE DESIGN. 1. Functional Analysis Using the Dataflow Model. 2. Joint Data and Functional Analysis. 3. Case Study. III. LOGICAL DESIGN AND DESIGN TOOLS. 1. High-Level Logical Design Using the Entity-Relationship Model. 2. Logical Design for the Relational Model. 3. Logical Design for the Network Model. 4. Logical Design for the Hierarchical Model. 5. Database Design Tools. Index.

1,018 citations


Journal ArticleDOI
28 Sep 1991
TL;DR: In this paper, a direct induction machine torque control method based on predictive, deadbeat control of the torque and flux is presented, in which the stator voltage required to cause the torque and flux to be equal to their respective reference values is calculated.
Abstract: A direct induction machine torque control method based on predictive, deadbeat control of the torque and flux is presented. By estimating the synchronous speed and the voltage behind the transient reactance, the change in torque and flux over the switching period is calculated. The stator voltage required to cause the torque and flux to be equal to their respective reference values is calculated. Space vector PWM is used to define the inverter switching state. An alternative approach to deadbeat control for use in the transient or pulse-dropping mode is also presented. An alternative modulation scheme is presented in which transient performance is improved by specifying the inverter switching states and then calculating the required switching instants to maintain deadbeat control of the flux while reducing the torque error during the entire switching interval. A similar approach is used for a transient in the flux. The implementation of the control scheme using DSP-based hardware is described, with complete experimental results given.

890 citations
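
The deadbeat idea summarized above can be illustrated with a highly simplified, discrete-time sketch. It assumes the torque loop has already translated the torque error into a load-angle increment and that an estimate of the stator flux vector is available; the symbols (Rs, Ts) and all numbers are illustrative placeholders, not the authors' implementation or parameters.

```python
import numpy as np

def deadbeat_flux_voltage(lam_s, i_s, lam_mag_ref, delta_load_angle,
                          Rs=0.5, Ts=100e-6):
    """Simplified deadbeat stator-flux step (illustrative only).

    lam_s            : complex stator flux estimate (alpha + j*beta) [Wb]
    i_s              : complex stator current [A]
    lam_mag_ref      : commanded stator flux magnitude [Wb]
    delta_load_angle : load-angle increment [rad] requested by the torque
                       loop for this switching period (assumed given)
    Rs, Ts           : stator resistance [ohm] and switching period [s]
    """
    # Flux reference for the end of the period: rotate the present flux by the
    # requested load-angle increment and rescale to the commanded magnitude.
    lam_ref = lam_mag_ref * np.exp(1j * (np.angle(lam_s) + delta_load_angle))
    # Discretized stator equation lam(k+1) = lam(k) + Ts*(v - Rs*i), solved for
    # the voltage that reaches lam_ref in exactly one period (deadbeat).
    v_ref = (lam_ref - lam_s) / Ts + Rs * i_s
    return v_ref  # complex voltage reference handed to space-vector PWM

# Example: 0.8 Wb flux, advance the load angle by 2 degrees over 100 us
v = deadbeat_flux_voltage(0.8 + 0.0j, 10 - 5j, 0.8, np.radians(2.0))
print(abs(v), np.degrees(np.angle(v)))
```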


Journal ArticleDOI
TL;DR: The fundamentals of microbial kinetics and continuous culture models are presented, the effect of temperature and inhibitors on the intrinsic kinetic rates is discussed, and stoichiometric and bioenergetic considerations are reviewed.
Abstract: The fundamentals of microbial kinetics and continuous culture models are presented. The kinetics of the anaerobic treatment processes are reviewed recognizing that anaerobic degradation of complex, polymeric organic materials is a combination of series and parallel reactions. Such reactions include hydrolysis, fermentation, anaerobic oxidation of fatty acids, and methanogenesis. The intrinsic rates of each step are reviewed and literature data summarized. Whenever possible, available kinetic information is summarized on the basis of substrate composition (such as carbohydrates, proteins, and lipids). The effect of temperature and inhibitors on the intrinsic kinetic rates is discussed. Stoichiometric and bioenergetic considerations are reviewed. Mass transfer limitations (both external and internal) associated with biofilms and microbial agglomerates, in general, and their effect on the intrinsic kinetic rates are presented. Areas requiring further research are identified.

638 citations
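
As a concrete instance of the continuous-culture models such a review builds on, the sketch below evaluates the steady state of a single-substrate chemostat under Monod kinetics. The function name and every parameter value are placeholders chosen for illustration, not data or notation from the review.

```python
def chemostat_steady_state(D, mu_max, Ks, Y, S_in):
    """Steady state of a single-substrate chemostat with Monod kinetics.

    D      : dilution rate [1/h]           mu_max : max specific growth rate [1/h]
    Ks     : half-saturation const. [g/L]  Y      : yield [g biomass / g substrate]
    S_in   : feed substrate concentration [g/L]
    """
    if D >= mu_max * S_in / (Ks + S_in):
        return S_in, 0.0              # washout: dilution outruns growth
    S = Ks * D / (mu_max - D)         # from mu(S) = mu_max*S/(Ks + S) = D
    X = Y * (S_in - S)                # substrate mass balance
    return S, X

# Placeholder parameters, roughly in the range of slow anaerobic cultures
S, X = chemostat_steady_state(D=0.05, mu_max=0.35, Ks=0.2, Y=0.08, S_in=10.0)
print(f"residual substrate = {S:.3f} g/L, biomass = {X:.3f} g/L")
```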


Journal ArticleDOI
TL;DR: In this article, the relative importance of working memory and perceptual comparison speed in the age-related differences in selected measures of cognitive functioning was evaluated; a common finding across all studies was that the magnitude of the age-related cognitive differences was greatly reduced by statistically controlling measures of working memory and perceptual comparison speed.
Abstract: Three studies, involving a total of 672 adults between 20 and 84 years of age, were conducted to evaluate the relative importance of working memory and perceptual comparison speed in the age-related differences in selected measures of cognitive functioning. The same measures of working memory and comparison speed were used in each study, but the studies differed in the specific cognitive measures examined. A common finding across all studies was that the magnitude of the age-related cognitive differences was greatly reduced by statistically controlling measures of working memory and perceptual comparison speed. Many of the age differences in process or fluid aspects of cognition therefore appear to be mediated by age-related reductions in working memory, which may in turn be largely mediated by age-related reductions in the speed of executing simple processing operations.

620 citations
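
The statistical-control logic described above (an age-cognition correlation attenuating once a speed measure is partialled out) can be sketched on synthetic data. The simulated effect sizes are invented purely to show the analysis; they are not the reported results.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 672
age = rng.uniform(20, 84, n)

# Synthetic data in which age influences cognition only through "speed".
speed = -0.04 * age + rng.normal(0, 0.8, n)        # perceptual comparison speed
cognition = 1.5 * speed + rng.normal(0, 1.0, n)    # fluid cognitive score

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(x, y, z):
    """Correlation of x and y after removing the linear effect of z from both."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return corr(rx, ry)

print("r(age, cognition)         =", round(corr(age, cognition), 3))
print("r(age, cognition | speed) =", round(partial_corr(age, cognition, speed), 3))
```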


Journal ArticleDOI
TL;DR: A number of systems intending to describe the processes contributing to the generation of innovative problem solutions have been proposed as mentioned in this paper, and the common themes appearing in these models are then considered in relation to the use of extant information structures.
Abstract: Although a number of factors condition the success of creative efforts, most investigators recognize the fundamental importance of novel problem solutions. As a result, a number of systems intending to describe the processes contributing to the generation of innovative problem solutions have been proposed. In the present article, earlier models describing the processes contributing to creative problem solutions are reviewed. The common themes appearing in these models are then considered in relation to the use of extant information structures. Certain implications of cognitive information processing for understanding the nature and ontogeny of the creative act are then discussed, along with their potential contributions to the identification and development of creative potential.

583 citations


Journal ArticleDOI
TL;DR: The authors formulate and solve two related control-oriented system identification problems for stable linear shift-invariant distributed parameter plants, each involving identification of a point sample of the plant frequency response from a noisy, finite, output time series obtained in response to an applied sinusoidal input.
Abstract: The authors formulate and solve two related control-oriented system identification problems for stable linear shift-invariant distributed parameter plants. In each of these problems the assumed a priori information is minimal, consisting only of a lower bound on the relative stability of the plant, an upper bound on a certain gain associated with the plant, and an upper bound on the noise level. The first of these problems involves identification of a point sample of the plant frequency response from a noisy, finite, output time series obtained in response to an applied sinusoidal input with frequency corresponding to the frequency point of interest. This problem leads naturally to the second problem, which involves identification of the plant transfer function in H∞ from a finite number of noisy point samples of the plant frequency response. Concrete plans for identification algorithms are provided for each of these two problems.

512 citations
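
For the first problem, estimating one point of the frequency response from a noisy output record under a sinusoidal input, a minimal estimator is a least-squares fit of in-phase and quadrature components at the known excitation frequency. The sketch below illustrates that generic setup only; it is not the authors' algorithm, and the plant, noise level, and record length are assumed for the example.

```python
import numpy as np

def freq_response_point(u_amp, omega, t, y):
    """Estimate G(j*omega) from samples y(t) driven by u(t) = u_amp*sin(omega*t).

    Fits y(t) ~ a*sin(omega*t) + b*cos(omega*t) + c by least squares
    (c absorbs any offset) and returns (a + j*b) / u_amp.
    """
    A = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(A, y, rcond=None)
    return (a + 1j * b) / u_amp

# Example: assumed plant G(s) = 1/(s + 1), excited at omega = 2 rad/s, noisy output
rng = np.random.default_rng(1)
omega, u_amp = 2.0, 1.0
t = np.linspace(0, 20, 4000)
G_true = 1.0 / (1j * omega + 1.0)
y = u_amp * np.abs(G_true) * np.sin(omega * t + np.angle(G_true)) \
    + rng.normal(0, 0.05, t.size)
print(freq_response_point(u_amp, omega, t, y), "vs true", G_true)
```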


Journal ArticleDOI
TL;DR: In this paper, the authors describe a study of the business strategies and performance levels of firms with entrepreneurial and conservative strategic postures and find that entrepreneurial firms differ from conservative firms in terms of their growth rates as well as several financial, operating, and marketing-related variables.
Abstract: This article describes a study of the business strategies and performance levels of firms with entrepreneurial and conservative strategic postures. Data were collected from the senior executives of 111 small manufacturing firms. Results indicate that entrepreneurial firms differ from conservative firms in terms of their growth rates as well as several financial, operating, and marketing-related variables. The data suggest that the patterns of strategic behaviour associated with high performance are different for entrepreneurial and conservative firms.

444 citations


Journal ArticleDOI
TL;DR: In this article, various theories describing the vibrational energy transfer in molecules are summarized with a special emphasis on nonlinear resonances, and a large bibliography supplements the text.

Journal ArticleDOI
TL;DR: The adaptively restored images have better quality than the nonadaptively restored ones based on visual observations and on an objective criterion of merit which accounts for the noise masking property of the visual system.
Abstract: The development of the algorithm is based on a set theoretic approach to regularization. Deterministic and/or statistical information about the undistorted image and statistical information about the noise are directly incorporated into the iterative procedure. The restored image is the center of an ellipsoid bounding the intersection of two ellipsoids. The proposed algorithm, which has the constrained least squares algorithm as a special case, is extended into an adaptive iterative restoration algorithm. The spatial adaptivity is introduced to incorporate properties of the human visual system. Convergence of the proposed iterative algorithms is established. For the experimental results which are shown, the adaptively restored images have better quality than the nonadaptively restored ones based on visual observations and on an objective criterion of merit which accounts for the noise masking property of the visual system.
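
A minimal sketch of the regularized iterative restoration family the abstract refers to, here a plain constrained-least-squares style iteration on a 1-D blurred, noisy signal. It omits the paper's set-theoretic (ellipsoid-bounding) formulation and spatial adaptivity; the blur operator, regularization weight, and step size are assumptions for the example.

```python
import numpy as np

def cls_restore(y, H, alpha=0.01, beta=0.5, n_iter=500):
    """Iterative constrained-least-squares restoration:
        x_{k+1} = x_k + beta * (H^T (y - H x_k) - alpha * C^T C x_k)
    where C is a discrete Laplacian acting as the smoothness constraint."""
    n = H.shape[1]
    C = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)   # roughness operator
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + beta * (H.T @ (y - H @ x) - alpha * (C.T @ C @ x))
    return x

# Example: blur a 1-D signal with a 5-tap moving-average operator and add noise
rng = np.random.default_rng(2)
n = 64
x_true = np.zeros(n); x_true[20:40] = 1.0
H = sum(np.eye(n, k=k) for k in (-2, -1, 0, 1, 2)) / 5.0    # blur matrix
y = H @ x_true + rng.normal(0, 0.02, n)
x_hat = cls_restore(y, H)
print("relative restoration error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```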

Journal ArticleDOI
TL;DR: In this article, a primal-dual algorithm for linear programming is described, which allows for easy handling of simple bounds on the primal variables and incorporates free variables, which have not previously been included in a primal dual implementation.

Journal ArticleDOI
TL;DR: In this paper, a micromechanical model has been formulated for the post-cracking behavior of a brittle matrix composite reinforced with randomly distributed short fibers, which incorporates the mechanics of pull-out of fibers which are inclined at an angle to the matrix crack plane and which undergo slip-weakening or slip-hardening during the pull-out process.
Abstract: A micromechanical model has been formulated for the post-cracking behavior of a brittle matrix composite reinforced with randomly distributed short fibers. This model incorporates the mechanics of pull-out of fibers which are inclined at an angle to the matrix crack plane and which undergo slip-weakening or slip-hardening during the pull-out process. In addition, the random location and orientation of fibers are accounted for. Comparisons of model predictions of post-cracking tension-softening behavior with experimental data appear to support the validity of the model. The model is used to examine the effects of fiber length, snubbing friction coefficient and interfacial bond behavior on composite post-cracking tensile properties. The scaling of the bridging fracture toughening with material parameters is discussed.

Journal ArticleDOI
TL;DR: It is demonstrated that EC can discriminate between different types of pulsatile flow environments; under nonreversing pulsatile flow, cell shape and orientation changed less rapidly, but cells took on a more elongated shape than their steady flow controls long-term.
Abstract: Endothelial cells (EC) appear to adapt their morphology and function to the in vivo hemodynamic environment in which they reside. In vitro experiments indicate that similar alterations occur for cultured EC exposed to a laminar steady-state flow-induced shear stress. However, in vivo EC are exposed to a pulsatile flow environment; thus, in this investigation, the influence of pulsatile flow on cell shape and orientation and on actin microfilament localization in confluent bovine aortic endothelial cell (BAEC) monolayers was studied using a 1-Hz nonreversing sinusoidal shear stress of 40 +/- 20 dynes/cm^2 (type I), 1-Hz reversing sinusoidal shear stresses of 20 +/- 40 and 10 +/- 15 dynes/cm^2 (type II), and 1-Hz oscillatory shear stresses of 0 +/- 20 and 0 +/- 40 dynes/cm^2 (type III). The results show that in a type I nonreversing flow, cell shape changed less rapidly, but cells took on a more elongated shape than their steady flow controls long-term. For low-amplitude type II reversing flow, BAECs changed less rapidly in shape and were always less elongated than their steady controls; however, for high amplitude reversal, BAECs did not stay attached for more than 24 hours. For type III oscillatory flows, BAEC cell shape remained polygonal as in static culture and did not exhibit actin stress fibers, such as occurred in all other flows. These results demonstrate that EC can discriminate between different types of pulsatile flow environments. (ABSTRACT TRUNCATED AT 250 WORDS)

Journal ArticleDOI
TL;DR: Young and older adults exhibited similar effects of the number of potentially relevant variables and of the number of required processing operations, but older adults were generally less accurate than young adults.
Abstract: Eight experiments were conducted in which young adults and older adults were asked to report the latest value of 1 of several continuously changing numeric or spatial variables. Accuracy of reporting the current value of the target variable was lower with increases in the number of potentially relevant variables and with increases in the number of required processing operations. Young and older adults exhibited similar effects of the number of potentially relevant variables (hypothesized to be sensitive to the structural capacity of working memory) and of the number of required processing operations (hypothesized to be sensitive to the operational capacity of working memory), but older adults were generally less accurate than young adults. Working memory is generally distinguished from earlier notions of short-term memory by an emphasis on the simultaneous storage and processing of information in working memory, as opposed to a concern primarily with the storage of information in short-term memory (e.g., Baddeley, 1986; Craik & Rabinowitz, 1984; Daneman & Carpenter, 1980). It has recently been suggested (cf. Carpenter & Just, 1989; Salthouse & Mitchell, 1989) that working memory may have at least two distinct capacities, structural and operational, with the former referring to capacity in terms of "the number of distinct informational units that can be remembered at any given time" and the latter to capacity in terms of "the number of processing operations that can be performed while still preserving the products of earlier operations" (Salthouse & Mitchell, 1989, p. 18). Carpenter and Just (1989) and Salthouse (1982) have proposed analogies to clarify the contrast between static or structural, and dynamic or operational, capacities. A hospital was used as the basis of the analogy by Carpenter and Just, with static capacity likened to the number of surgical theaters and dynamic capacity interpreted in terms of the number of surgical operations that can be performed in a given period. Salthouse (1982, p. 180-181) contrasted the structural capacity of a banquet hall, expressed as the maximum number of diners that could be accommodated at a single time, with the operational capacity of a fast-food restaurant, as reflected by the number of meals that could be served in a specified period. With respect to working memory, the key distinction in both analogies is between capacity expressed as the maximum number of items that can be remembered and capacity reflected in terms of the

Journal ArticleDOI
TL;DR: This article further develops and strengthens the response-model/combined-array approach and recommends examination of control-by-noise interaction plots suggested by the fitted-response model, which can reveal control-factor settings that dampen the effects of individual noise factors.
Abstract: Taguchi's robust-design technique, also known as parameter design, focuses on making product and process designs insensitive (i.e., robust) to hard-to-control variations. In some applications, however, his approach of modeling expected loss and the resulting “product array” experimental format leads to unnecessarily expensive experiments. As an alternative to Taguchi's “loss model/product array” formulation, Welch, Yu, Kang, and Sacks proposed combining control and noise factors in a single array, modeling the response itself rather than expected loss, and then approximating a prediction model for loss based on the fitted-response model. In this article, we further develop and strengthen this response-model/combined-array approach. We recommend examination of control-by-noise interaction plots suggested by the fitted-response model. These plots can reveal control-factor settings that dampen the effects of individual noise factors. We also show that the run savings from using combined arrays are due to t...
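
A small sketch of the combined-array analysis style described above: control and noise factors sit in a single design, a response model with a control-by-noise interaction is fit by least squares, and the fitted interaction shows which control setting dampens the noise effect (the quantity a control-by-noise interaction plot displays). The design, model, and data here are synthetic assumptions, not the article's examples.

```python
import numpy as np

rng = np.random.default_rng(3)

# Combined array: one control factor x and one noise factor z, both coded +/-1,
# crossed in a single design with replicates (no separate "product array").
x = np.repeat([-1, -1, 1, 1], 5).astype(float)
z = np.tile([-1, 1], 10).astype(float)

# Synthetic response: at x = +1 the effect of the noise factor is largely cancelled.
y = 10 + 2 * x + 3 * z - 2.5 * x * z + rng.normal(0, 0.3, x.size)

# Fit the response model  y = b0 + b1*x + b2*z + b3*x*z
X = np.column_stack([np.ones_like(x), x, z, x * z])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

# Slope of y with respect to the noise factor at each control setting
for xc in (-1.0, 1.0):
    print(f"x = {xc:+.0f}: d y / d z = {b2 + b3 * xc:.2f}")
```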

Journal ArticleDOI
TL;DR: The algorithms are evaluated with respect to improving automatic recognition of speech in the presence of additive noise and shown to outperform other enhancement methods in this application.
Abstract: The basis of an improved form of iterative speech enhancement for single-channel inputs is sequential maximum a posteriori estimation of the speech waveform and its all-pole parameters, followed by imposition of constraints upon the sequence of speech spectra. The approaches impose intraframe and interframe constraints on the input speech signal. Properties of the line spectral pair representation of speech allow for an efficient and direct procedure for application of many of the constraint requirements. Substantial improvement over the unconstrained method is observed in a variety of domains. Informed listener quality evaluation tests and objective speech quality measures demonstrate the technique's effectiveness for additive white Gaussian noise. A consistent terminating point of the iterative technique is shown. The current systems result in substantially improved speech quality and linear predictive coding (LPC) parameter estimation with only a minor increase in computational requirements. The algorithms are evaluated with respect to improving automatic recognition of speech in the presence of additive noise and shown to outperform other enhancement methods in this application.
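
A bare-bones version of the iterative all-pole enhancement loop underlying such methods: estimate AR (LPC) parameters from the current speech estimate, form the corresponding spectral envelope, Wiener-filter the noisy frame, and repeat. It deliberately omits the paper's sequential MAP formulation, line-spectral-pair constraints, and inter-frame smoothing; the frame, model order, and noise level are assumed for the example.

```python
import numpy as np

def lpc_coeffs(x, order):
    """Autocorrelation-method LPC via the Yule-Walker equations."""
    r = np.correlate(x, x, mode="full")[x.size - 1: x.size + order]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    err = r[0] - a @ r[1:order + 1]                 # prediction-error energy
    return np.concatenate(([1.0], -a)), max(err, 1e-12)

def iterative_wiener(noisy, noise_var, order=10, n_iter=4, nfft=512):
    """Iteratively re-estimate the all-pole envelope and Wiener-filter one frame."""
    x = noisy.copy()
    for _ in range(n_iter):
        a, err = lpc_coeffs(x, order)
        A = np.fft.rfft(a, nfft)
        Pxx = (err / x.size) / np.maximum(np.abs(A) ** 2, 1e-12)  # AR spectrum
        H = Pxx / (Pxx + noise_var)                               # Wiener gain
        X = np.fft.rfft(noisy, nfft)
        x = np.fft.irfft(H * X, nfft)[:noisy.size]
    return x

# Example: a synthetic "voiced" frame (two damped sinusoids) in white noise
rng = np.random.default_rng(4)
n = np.arange(400)
clean = np.exp(-n / 300) * (np.sin(2 * np.pi * 0.05 * n)
                            + 0.5 * np.sin(2 * np.pi * 0.12 * n))
noisy = clean + rng.normal(0, 0.2, n.size)
enhanced = iterative_wiener(noisy, noise_var=0.2 ** 2)
print("input SNR :", 10 * np.log10(np.mean(clean**2) / np.mean((noisy - clean)**2)))
print("output SNR:", 10 * np.log10(np.mean(clean**2) / np.mean((enhanced - clean)**2)))
```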

Journal ArticleDOI
TL;DR: Although the general serine protease inhibitor 3,4-dichloroisocoumarin (DCI) inactivated these tryptases poorly, it was the best isocoumarin inhibitor for murine granzyme B (kobs/[I] = 3700-4200 M^-1 s^-1).
Abstract: The active site structures of human Q31 granzyme A, murine granzymes (A, B, C, D, E, and F), and human granzymes (A, B, and 3) isolated from cytotoxic T lymphocytes (CTL) were studied with peptide thioester substrates, peptide chloromethyl ketone, and isocoumarin inhibitors. Human Q31, murine, and human granzyme A hydrolyzed Arg- or Lys-containing thioesters very efficiently with kcat/KM of 10^4-10^5 M^-1 s^-1. Murine granzyme B was found to have Asp-ase activity and hydrolyzed Boc-Ala-Ala-Asp-SBzl with a kcat/KM value of 2.3 X 10^5 M^-1 s^-1. The rate was accelerated 1.4-fold when the 0.05 M NaCl in the assay was replaced with CaCl2. The preparation of granzyme B also had significant activity toward Boc-Ala-Ala-AA-SBzl substrates, where AA was Asn, Met, or Ser [kcat/KM = (4-5) X 10^4 M^-1 s^-1]. Murine granzymes C, D, and E did not hydrolyze any thioester substrate but contained minor contaminating activity toward Arg- or Lys-containing thioesters. Murine granzyme F had small activity toward Suc-Phe-Leu-Phe-SBzl, along with some contaminating trypsin-like activity. Human Q31 granzyme A, murine, and human granzyme A were inhibited quite efficiently by mechanism-based isocoumarin inhibitors substituted with basic groups (guanidino or isothiureidopropoxy). Although the general serine protease inhibitor 3,4-dichloroisocoumarin (DCI) inactivated these tryptases poorly, it was the best isocoumarin inhibitor for murine granzyme B (kobs/[I] = 3700-4200 M^-1 s^-1). Murine and human granzyme B were also inhibited by Boc-Ala-Ala-Asp-CH2Cl; however, the inhibition was less potent than that with DCI. DCI, 3-(3-amino-propoxy)-4-chloroisocoumarin, 4-chloro-3-(3-isothiureidopropoxy)isocoumarin, and 7-amino-4-chloro-3-(3-isothiureidopropoxy)isocoumarin inhibited Q31 cytotoxic T lymphocyte mediated lysis of human JY lymphoblasts (ED50 = 0.5-5.0 microM).

Journal ArticleDOI
TL;DR: The tripeptide Boc-Val-Pro-ValP(OPh)2, which has a sequence found in a good trifluoromethyl ketone inhibitor of HLE, is the best inhibitor for HLE and porcine pancreatic elastase; the rates of inactivation of chymotrypsin were decreased 2-5-fold in the presence of the corresponding substrate.
Abstract: Peptidyl derivatives of diphenyl (alpha-aminoalkyl)phosphonates have been synthesized and are effective and specific inhibitors of serine proteases at low concentrations. Z-PheP(OPh)2 irreversibly reacts with chymotrypsin (kobsd/[I] = 1200 M^-1 s^-1) and does not react with two elastases. The best inhibitor for most chymotrypsin-like enzymes including bovine chymotrypsin, cathepsin G, and rat mast cell protease II is the tripeptide Suc-Val-Pro-PheP(OPh)2 which corresponds to the sequence of an excellent p-nitroanilide substrate for several chymases. The valine derivative Z-ValP(OPh)2 is specific for elastases and reacts with human leukocyte elastase (HLE, 280 M^-1 s^-1) but not with chymotrypsin. The tripeptide Boc-Val-Pro-ValP(OPh)2, which has a sequence found in a good trifluoromethyl ketone inhibitor of HLE, is the best inhibitor for HLE (kobsd/[I] = 27,000 M^-1 s^-1) and porcine pancreatic elastase (PPE, kobsd/[I] = 11,000 M^-1 s^-1). The rates of inactivation of chymotrypsin by MeO-Suc-Ala-Ala-Pro-PheP(OPh)2 and PPE and HLE by MeO-Suc-Ala-Ala-Pro-ValP(OPh)2 were decreased 2-5-fold in the presence of the corresponding substrate, which demonstrates active site involvement. Only one of two diastereomers of Suc-Val-Pro-PheP(OPh)2 reacts with chymotrypsin (146,000 M^-1 s^-1), and the enzyme-inhibitor complex had one broad signal at 25.98 ppm in the 31P NMR spectrum corresponding to the Ser-195 phosphonate ester. Phosphonylated serine proteases are extremely stable since the half-time for reactivation was greater than 48 h for the inhibited elastases and 7.5-26 h for chymotrypsin. (ABSTRACT TRUNCATED AT 250 WORDS)

Proceedings ArticleDOI
01 Jul 1991
TL;DR: A new progressive global illumination method is presented which produces approximate images quickly, and then continues to systematically produce more accurate images, combining the existing methods of progressive refinement radiosity, Monte Carlo path tracing and light ray tracing.
Abstract: A new progressive global illumination method is presented which produces approximate images quickly, and then continues to systematically produce more accurate images. The method combines the existing methods of progressive refinement radiosity, Monte Carlo path tracing and light ray tracing. The method does not place any limitation on surface properties such as ideal Lambertian or mirror-like. To increase efficiency and accuracy, the new concepts of light source reclassification, caustics reconstruction, Monte Carlo path tracing with a radiosity preprocess and an interruptible radiosity solution are introduced. The method presents the user with most useful information about the scene as early as possible by reorganizing the method into a radiosity pass, a high frequency refinement pass and a low frequency refinement pass. The implementation of the method is demonstrated, and sample images are presented.
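
One ingredient of the method, the Monte Carlo estimation of reflected light used in path tracing, can be sketched in isolation: cosine-weighted sampling of directions above a diffuse surface and averaging the radiance seen along them. The sky_radiance environment is a made-up stand-in, and none of the paper's radiosity preprocess, light reclassification, or caustics reconstruction is modeled.

```python
import numpy as np

def cosine_sample_hemisphere(rng):
    """Cosine-weighted direction about the +z normal (pdf = cos(theta)/pi)."""
    u1, u2 = rng.random(2)
    r, phi = np.sqrt(u1), 2 * np.pi * u2
    return np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])

def sky_radiance(d):
    """Hypothetical environment: a bright patch near the zenith plus a dim floor."""
    return 5.0 if d[2] > 0.9 else 0.2

def reflected_radiance(n_samples, rng):
    """Monte Carlo estimate of reflected radiance for an albedo-1 diffuse surface.

    With cosine-weighted sampling the cos/pdf factors cancel, so the estimator
    reduces to the average of the sampled incoming radiance values.
    """
    return np.mean([sky_radiance(cosine_sample_hemisphere(rng))
                    for _ in range(n_samples)])

rng = np.random.default_rng(5)
for n in (16, 256, 4096):
    print(n, "samples ->", round(reflected_radiance(n, rng), 3))
```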

Journal ArticleDOI
01 Feb 1991
TL;DR: The design concept of a spherical stepper motor capable of three-degrees-of-freedom (DOF) motion in a single joint is presented and an analytical approach based on the permeance formula was used to predict the driving forces generated by a neodymium-iron permanent magnet.
Abstract: The design concept of a spherical stepper motor capable of three-degrees-of-freedom (DOF) motion in a single joint is presented. The ball-joint-like motor has no singularities except at the boundary of the workspace and can perform isotropic manipulation in all three directions. Due to its relatively simple ball-like structure, undesired cross-coupling and centrifugal components of wrist rotor dynamics can be effectively minimized or eliminated. The spherical stepper motor has potential in robotic applications as a three-DOF shoulder or an eyeball, as well as a wrist actuator. In particular, the systematic conceptualization of a spherical stepper is presented, and the feasibility of constructing the spherical stepper is examined. Along with the experimental data, an analytical approach based on the permeance formula was used to predict the driving forces generated by a neodymium-iron permanent magnet. The force-displacement curves provide useful information for rational spherical motor design and control.

Journal ArticleDOI
TL;DR: This new method minimizes gaps between successive operations in solutions generated by other heuristics for the flow-shop scheduling problem, using makespan, mean flow time, and mean utilization as the performance measures.
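
The quantities mentioned in the summary can be made concrete with a small evaluator for a permutation flow shop: given a job order and processing times, it computes completion times, the idle gaps between successive operations on each machine, the makespan, and the mean flow time. This is a generic textbook-style evaluation under assumed data, not the paper's gap-minimizing heuristic.

```python
import numpy as np

def evaluate_flowshop(proc, order):
    """proc[j][m] = processing time of job j on machine m; order = job sequence."""
    n, m = len(order), len(proc[0])
    C = np.zeros((n, m))                                   # completion times
    for i, job in enumerate(order):
        for k in range(m):
            ready = C[i - 1, k] if i > 0 else 0.0          # machine k is free
            prev = C[i, k - 1] if k > 0 else 0.0           # job done on machine k-1
            C[i, k] = max(ready, prev) + proc[job][k]
    makespan = C[-1, -1]
    mean_flow_time = C[:, -1].mean()
    # Idle "gaps" on each machine between successive operations
    starts = C - np.array([proc[j] for j in order])
    gaps = sum(max(starts[i, k] - C[i - 1, k], 0.0)
               for k in range(m) for i in range(1, n))
    return makespan, mean_flow_time, gaps

proc = [[3, 6, 2], [5, 2, 4], [2, 4, 5], [4, 3, 3]]   # 4 jobs x 3 machines (assumed)
for order in ([0, 1, 2, 3], [2, 0, 3, 1]):
    print(order, evaluate_flowshop(proc, order))
```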

Journal ArticleDOI
TL;DR: Clouds, as described in this paper, is a general-purpose operating system for distributed environments based on an object-thread model adapted from object-oriented programming; the authors discuss this paradigm for structuring distributed operating systems, its potential and implications for users, and research directions for the future.
Abstract: The authors discuss a paradigm for structuring distributed operating systems, the potential and implications this paradigm has for users, and research directions for the future. They describe Clouds, a general-purpose operating system for distributed environments. It is based on an object-thread model adapted from object-oriented programming.

Posted Content
TL;DR: In this paper, the authors distinguish the measure of risk aversion from the slope coefficient in the linear relationship between the mean excess return on a stock index and its variance, and introduce a statistical model with ARCH disturbances and a time-varying parameter in the mean (TVP ARCH-N).
Abstract: We distinguish the measure of risk aversion from the slope coefficient in the linear relationship between the mean excess return on a stock index and its variance. Even when risk aversion is constant, the latter can vary significantly with the relative share of stocks in the risky wealth portfolio, and with the beta of unobserved wealth on stocks. We introduce a statistical model with ARCH disturbances and a time-varying parameter in the mean (TVP ARCH-N). The model decomposes the predictable component in stock returns into two parts: the time-varying price of volatility and the time-varying volatility of returns. The relative share of stocks and the beta of the excluded components of wealth on stocks are instrumented by macroeconomic variables. The ratio of corporate profit over national income and the inflation rate are found to be important forces in the dynamics of stock price volatility.
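
A toy simulation separating the two ingredients the abstract distinguishes, a time-varying price of volatility and ARCH-type time-varying volatility of returns, is sketched below. The driver of the time-varying coefficient and all parameter values are invented for illustration; the model is a simplified stand-in, not the paper's TVP ARCH specification or estimator.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 1000

# Time-varying "price of volatility" lambda_t, driven here by an arbitrary
# slowly moving instrument (standing in for macro variables such as the
# profit share or inflation mentioned in the abstract).
macro = np.cumsum(rng.normal(0, 0.02, T))
lam = 2.0 + 0.5 * np.tanh(macro)

# ARCH(1) conditional variance and excess returns r_t = lam_t * h_t + e_t
omega, alpha = 1e-5, 0.3
h = np.empty(T); e = np.empty(T); r = np.empty(T)
h[0] = omega / (1 - alpha)                       # unconditional variance as start
for t in range(T):
    if t > 0:
        h[t] = omega + alpha * e[t - 1] ** 2     # conditional variance
    e[t] = np.sqrt(h[t]) * rng.standard_normal()
    r[t] = lam[t] * h[t] + e[t]                  # mean depends on the variance

print("mean excess return          :", r.mean())
print("average conditional variance:", h.mean())
print("implied average slope       :", r.mean() / h.mean())
```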

Journal ArticleDOI
TL;DR: The authors explored factors leading to age-related performance differences in consistent mapping (CM) and varied mapping (VM) search tasks and found that the separation of type of search training (CM and VM) as well as memory and visual search components is critical for predicting age-related performance differences.
Abstract: Young and old Ss were tested in 3 experiments conducted to explore factors leading to age-related performance differences in consistent mapping (CM) and varied mapping (VM) search tasks. The separate and combined influences of memory scanning and visual search on age-related search effects were examined. In both CM letter and CM semantic category search, age interacted with comparison load in visual and hybrid memory-visual search conditions, whereas differential age effects were not present in pure memory search. For VM search, age effects were present only in pure memory search. These data support the view that the separation of type of search training (CM and VM) as well as memory and visual search components is critical for predicting age-related performance differences. The dissociation of the pattern of age effects in memory and visual search suggests that memory and visual search involve different processing mechanisms.

Journal ArticleDOI
TL;DR: In this article, a linear robust fedback control law with constant gain matrix is proposed for the trajectory following problem of a robot manipulator, which makes the resulting error system uniformly ultimately bounded.
Abstract: For the trajectory following problem of a robot manipulator, a simple linear robust feedback control law with constant gain matrix is proposed that makes the resulting error system uniformly ultimately bounded. This control law is very easy to implement by simply choosing a feedback gain according to the coefficients of a polynomial function of the tracking errors which is a bounding function for the terms in the Lagrange-Euler formulation. In the limit as the gain approaches infinity the error system becomes globally asymptotically stable.
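
The flavor of the result, a constant-gain linear feedback whose tracking-error bound shrinks as the gain grows, can be sketched for a single-link arm. The plant parameters, gain values, and the simple gain choice are illustrative assumptions; the paper's multi-link Lagrange-Euler bounding construction is not reproduced.

```python
import numpy as np

# Single-link arm:  m*l^2 * qdd + b*qd + m*g*l*sin(q) = u   (assumed plant)
m, l, b, g = 1.0, 0.5, 0.1, 9.81

def simulate(K, T=10.0, dt=2e-4):
    """Track q_d(t) = sin(t) with the constant-gain linear law u = K*(e + edot)."""
    q, qd = 0.0, 0.0
    max_err_tail = 0.0
    for i in range(int(T / dt)):
        t = i * dt
        e = np.sin(t) - q                       # position tracking error
        edot = np.cos(t) - qd                   # velocity tracking error
        u = K * (e + edot)                      # linear feedback, constant gain
        qdd = (u - b * qd - m * g * l * np.sin(q)) / (m * l ** 2)
        q, qd = q + qd * dt, qd + qdd * dt      # explicit Euler integration
        if t > 0.8 * T:                         # record error after transients
            max_err_tail = max(max_err_tail, abs(e))
    return max_err_tail

for K in (5.0, 50.0, 500.0):
    print(f"gain K = {K:6.1f} -> ultimate |error| bound ~ {simulate(K):.4f}")
```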

Journal ArticleDOI
TL;DR: In this paper, a light emitting diode was grown on a GaAs substrate, lifted off, aligned and selectively deposited onto a silicon host substrate, and electrically contacted and tested.
Abstract: The authors report the selective and alignable deposition of patterned thin-film epitaxial GaAs/GaAlAs devices onto a host substrate such as silicon for low cost, manufacturable hybrid integrated optoelectronic circuits. The authors use a thin polyimide diaphragm as the transparent transfer medium for these patterned epitaxial devices. Each of these devices or a group of these devices on the polyimide is then optically aligned and selectively deposited onto the host substrate. The use of the polyimide transfer diaphragm also allows both the bottom and the top of the device to be processed while under substrate support. Using this technique, a light emitting diode 50 x 50 µm in area and 2 µm thick was grown on a GaAs substrate, lifted off, aligned and selectively deposited onto a silicon host substrate, and electrically contacted and tested.

Journal ArticleDOI
TL;DR: The semi-unboundedness property leads to the definition of semi-unbounded fan-in circuit families and suggests new characterizations of LOGCFL on the following models: alternating Turing machines, nondeterministic auxiliary pushdown automata, and bounded fan-in Boolean circuits.