
Showing papers published by Clemson University in 1990


Posted Content
TL;DR: In this article, a model of growth departs from both the Malthusian and neoclassical approaches by including investments in human capital and assumes that rates of return on human capital investments rise, rather than decline, as the stock of human capital increases, until the stock becomes large.
Abstract: Our model of growth departs from both the Malthusian and neoclassical approaches by including investments in human capital. We assume, crucially, that rates of return on human capital investments rise, rather than decline, as the stock of human capital increases, until the stock becomes large. This arises because the education sector uses human capital more intensively than either the capital-producing sector or the goods-producing sector. This produces multiple steady states: an undeveloped steady state with little human capital, low rates of return on human capital investments, and high fertility, and a developed steady state with higher rates of return, a large and perhaps growing stock of human capital, and low fertility. Multiple steady states mean that history and luck are critical determinants of a country's growth experience.

1,829 citations


Journal ArticleDOI
TL;DR: In this article, the authors test a model of the relationships among rural resident perceptions of tourism impacts, support for additional tourism development, restrictions on tourism development and support for special tourism taxes.

808 citations


Proceedings ArticleDOI
26 Nov 1990
TL;DR: The authors present a technique that selects from a test suite a representative set of test cases providing the same measure of coverage as the full suite, and illustrate it using the data flow testing methodology.
Abstract: As a result of modifications to a program during the maintenance phase, the size of a test suite used for regression testing can become unmanageable. The authors present a technique that selects from a test suite a representative set of test cases that provides the same measure of coverage as the test suite. This selection is performed by identifying the redundant and obsolete test cases in the test suite. The representative set can be used in place of the original test suite, reducing its size, and can also be used to determine which test cases should be rerun to test the program after it has been changed. The technique is independent of the testing methodology and only requires an association between each testing requirement and the test cases that satisfy the requirement. The technique is illustrated by means of the data flow testing methodology. Experimental studies are being performed that demonstrate the effectiveness of the technique.
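The selection idea described above, keeping a subset of test cases that still satisfies every testing requirement covered by the full suite, can be approximated with a simple greedy pass over a requirement-to-test-case mapping. A minimal sketch, not the authors' algorithm, with a hypothetical input format:

```python
def reduce_suite(requirement_to_tests):
    """Greedy approximation of a representative test set.

    requirement_to_tests: dict mapping each testing requirement to the set of
    test-case IDs that satisfy it (hypothetical input format).  Returns a
    subset of test cases covering every requirement the full suite covers.
    """
    uncovered = {req for req, tests in requirement_to_tests.items() if tests}
    selected = set()
    while uncovered:
        # Pick the test case that satisfies the most still-uncovered requirements.
        candidates = {t for req in uncovered for t in requirement_to_tests[req]}
        best = max(candidates,
                   key=lambda t: sum(t in requirement_to_tests[r] for r in uncovered))
        selected.add(best)
        uncovered = {r for r in uncovered if best not in requirement_to_tests[r]}
    return selected


if __name__ == "__main__":
    # Toy example: data-flow (def-use) requirements versus the tests that exercise them.
    coverage = {
        "du1": {"t1", "t3"},
        "du2": {"t2"},
        "du3": {"t1", "t2"},
        "du4": {"t3"},
    }
    print(reduce_suite(coverage))  # a covering subset, e.g. {'t2', 't3'}
```

Ties make the exact subset nondeterministic; the point is only that every requirement covered by the original suite remains covered by the reduced one.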

510 citations


Journal ArticleDOI
TL;DR: The authors empirically examined one motive for takeovers: to change control of firms that make acquisitions that diminish the value of their equity, and found that the acquisitions that reduce equity value the most are those that are later divested either in bust-up takeovers or restructuring programs to thwart the takeover.
Abstract: This paper empirically examines one motive for takeovers: to change control of firms that make acquisitions that diminish the value of their equity. Firms that subsequently become takeover targets make acquisitions that significantly reduce their equity value, and firms that do not become takeover targets make acquisitions that raise their equity value. Within the sample of acquisitions by targets, the acquisitions that reduce equity value the most are those that are later divested either in bust-up takeovers or restructuring programs to thwart the takeover. This evidence is consistent with theories advanced by Marris, Manne, and Jensen concerning the disciplinary role played by takeovers.

409 citations


Journal ArticleDOI
TL;DR: The object- oriented design paradigm is the next logical step in a progression that has led from a purely procedural approach to an object-based approach and now to the object-oriented approach.
Abstract: The need to develop and maintain large complex software systems in a competitive and dynamic environment has driven interest in new approaches to software design and development. The problems with the classical waterfall model have been cataloged in almost every software engineering text [19,23]. In response, alternative models such as the spiral [2] and fountain [9] have been proposed. Problems with traditional development using the classical life cycle include no iteration, no emphasis on reuse, and no unifying model to integrate the phases. The difference in point of view between following data flows in structured analysis and building hierarchies of tasks in structured design has always been a major problem [4]. Each system is built from scratch and maintenance costs account for a notoriously large share of total system costs. The object-oriented paradigm addresses each of these issues. A look at the object-oriented software life cycle, as described by Meyer [5], Coad and Yourdon [4], and Henderson-Sellers and Edwards [9], identifies the three traditional activities of analysis, design, and implementation. However, each of the referenced descriptions eliminates the distinct boundaries between the phases. The primary reason for this blurring of boundaries is that the items of interest in each phase are the same: objects. Objects and the relationships between objects are identified in both the analysis and design phases. Objects and relationships identified and documented in the analysis phase serve not only as input to the design phase, but as an initial layer in the design. This continuity provides for a much more seamless interface between the phases. Analysts, designers and programmers are working with a common set of items upon which to build. A second reason for the blurring of these boundaries is that the object-oriented development process is iterative. Henderson-Sellers and Edwards further refine this idea by replacing the waterfall model of software development with a fountain model. Development reaches a high level only to fall back to a previous level to begin the climb once again. As an example of the blurring of the traditional boundaries of the life cycle phases, Coad and Yourdon recommend that classification relationships between objects be captured and documented during the object-oriented analysis (OOA) phase. This classification will be directly reflected in the class inheritance structure developed in the design and in the code. This classification is in no way required in order to document the system requirements. In other words, Coad and Yourdon are recommending a traditional design activity in the analysis phase. The blurring of the traditional design and implementation phases has been fueled by the development of encapsulation and abstraction mechanisms in object-oriented and object-based languages. For example, Meyer claims [14] that Eiffel is both a design and an implementation language. He goes on to say that software design is sometimes mistakenly viewed as an activity totally secluded from actual implementation. From his point of view, much is to be gained from an approach that integrates both activities within the same conceptual framework. The object-oriented design paradigm is the next logical step in a progression that has led from a purely procedural approach to an object-based approach and now to the object-oriented approach. The progression has resulted from a gradual shift in point of view in the development process.
The procedural design paradigm utilizes functional decomposition to specify the tasks to be completed in order to solve a problem. The object-based approach, typified by the techniques of Yourdon, Jackson and Booch, gives more attention to data specifications than the procedural approach but still utilizes functional decomposition to develop the architecture of a system. The object-oriented approach goes beyond the object-based technique in the emphasis given to data by utilizing the relationships between objects as a fundamental part of the system architecture. The goal in designing individual software components is to represent a concept in what will eventually be an executable form. The Abstract Data Type (ADT) is the object-based paradigm's technique for capturing this conceptual information. The class is the object-oriented paradigm's conceptual modeling tool. The design pieces resulting from the object-oriented design technique represent a tighter coupling of data and functionality than traditional ADTs. These artifacts of the design process used in conjunction with a modeling-based decomposition approach yield a paradigm, a technique, which is very natural and flexible. It is natural in the sense that the design pieces are closely identified with the real-world concepts which they model. It is flexible in the sense of quickly adapting to changes in the problem specifications. Object-oriented remains a term which is interpreted differently by different people. Before presenting an overview of a set of techniques for the design process, we will give our perspective so the reader may judge the techniques in terms of those definitions. Briefly, we adapt Wegner's [27] definition for object-oriented languages to object-oriented design. The pieces of the design are objects which are grouped into classes for specification purposes. In addition to traditional dependencies between data elements, an inheritance relation between classes is used to express specializations and generalizations of the concepts represented by the classes. As natural and flexible as the object-oriented technique is, it is still possible to produce a bad design when using it. We will consider a number of general design criteria and will discuss how the object-oriented approach assists the designer in meeting these criteria. We will refer to a number of design guidelines developed specifically for the object-oriented design paradigm and will discuss how these properties reinforce the concepts of good design. The paradigm sprang from language, has matured into design, and has recently moved into analysis. The blurring of boundaries between these phases has led us to include topics in this article that are outside the realm of design, but which we consider important to understanding the design process. Since the paradigm sprang from language, we define the concepts basic to object-oriented programming in the following section.
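A toy illustration, not drawn from the article, of the point that classification relationships captured during analysis map directly onto the class inheritance structure in the design and the code (the class names are invented):

```python
class Account:
    """Generalization identified during analysis: any account accepts deposits."""

    def __init__(self, balance=0.0):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount


class SavingsAccount(Account):
    """Specialization: inherits Account behavior and adds interest accrual."""

    def __init__(self, balance=0.0, rate=0.03):
        super().__init__(balance)
        self.rate = rate

    def accrue_interest(self):
        self.balance *= 1 + self.rate


acct = SavingsAccount(100.0)
acct.deposit(50.0)             # behavior inherited from the generalization
acct.accrue_interest()         # behavior specific to the specialization
print(round(acct.balance, 2))  # 154.5
```

The specialization reuses the generalization's behavior and adds its own, which is the sense in which an analysis-phase classification becomes a design and implementation artifact.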

301 citations


Journal ArticleDOI
01 Feb 1990
TL;DR: Experiments show that the overall scheme, while simple to implement, is powerful and reliable enough to permit walking from level to slope or vice versa, and it is argued that the proposed mechanism can be extended to quasi-dynamic and dynamic gaits.
Abstract: A scheme to enable the SD-2 biped robot to climb sloping surfaces is proposed. By means of sensing devices, namely position sensors on the joints and force sensors underneath the heel and toe, the robot is able to detect the transition of the supporting terrain from a flat floor to a sloping surface. An algorithm is developed for the biped robot control system to evaluate the inclination of the supporting foot and the unknown gradient, and a compliant motion scheme is then used to enable the robot to transfer from level walking to climbing the slope. While the robot walks on the slope, the gait synthesis is a simple modification to the one used for level walking. Experiments with the SD-2 biped robot show that the overall scheme, while simple to implement, is powerful and reliable enough to permit walking from level to slope or vice versa. Finally, it is argued that the proposed mechanism can be extended to quasi-dynamic and dynamic gaits.

227 citations


Journal ArticleDOI
TL;DR: The authors empirically examined one motive for takeovers: to change control of firms that make acquisitions that diminish the value of their equity, and found that the acquisitions that reduce equity value the most are those that are later divested either in bust-up takeovers or restructuring programs to thwart the takeover.
Abstract: This paper empirically examines one motive for takeovers: to change control of firms that make acquisitions that diminish the value of their equity. Firms that subsequently become takeover targets make acquisitions that significantly reduce their equity value, and firms that do not become takeover targets make acquisitions that raise their equity value. Within the sample of acquisitions by targets, the acquisitions that reduce equity value the most are those that are later divested either in bust-up takeovers or restructuring programs to thwart the takeover. This evidence is consistent with theories advanced by Robin Marris (1963), Henry G. Manne (1965), and Michael C. Jensen (1986) concerning the disciplinary role played by takeovers. Copyright 1990 by University of Chicago Press.

201 citations


Journal ArticleDOI
TL;DR: This paper examines whether race directly affects a player's value in the market for baseball cards, an approach that separates consumer discrimination from differential ability; the evidence supports the hypothesis of consumer discrimination.
Abstract: Because consumer discrimination can reduce productivity, it is often impossible to tell whether differential productivity is the effect of discrimination or of differential ability. Detailed data for the sports labor market make it possible to separate consumer discrimination from ability. We use a unique approach to determine whether the entertainment value of baseball players is related to their race: we examine whether race directly affects the value of a player in the market for baseball cards. In contrast to studies that use salaries, there is no room for owner or coworker discrimination. Our evidence supports the hypothesis of consumer discrimination.

197 citations


Journal ArticleDOI
TL;DR: Flanking marker models for doubled haploid (DH), recombinant inbred (RI), backcross (BC), F1 testcross (F1TC), DH testcross (DHTC), recombinant inbred testcross (RITC), F2, and F3 progeny are described, expressed as functions of quantitative trait locus genotype means and recombination frequencies between marker and quantitative trait loci.
Abstract: High-density restriction fragment length polymorphism (RFLP) and allozyme linkage maps have been developed in several plant species. These maps make it technically feasible to map quantitative trait loci (QTL) using methods based on flanking marker genetic models. In this paper, we describe flanking marker models for doubled haploid (DH), recombinant inbred (RI), backcross (BC), F1 testcross (F1TC), DH testcross (DHTC), recombinant inbred testcross (RITC), F2, and F3 progeny. These models are functions of the means of quantitative trait locus genotypes and recombination frequencies between marker and quantitative trait loci. In addition to the genetic models, we describe maximum likelihood methods for estimating these parameters using linear, nonlinear, and univariate or multivariate normal distribution mixture models. We defined recombination frequency estimators for backcross and F2 progeny group genetic models using the parameters of linear models. In addition, we found a genetically unbiased estimator of the QTL heterozygote mean using a linear function of marker means. In nonlinear models, recombination frequencies are estimated less efficiently than the means of quantitative trait locus genotypes. Recombination frequency estimation efficiency decreases as the distance between markers decreases, because the number of progeny in recombinant marker classes decreases. Mean estimation efficiency is nearly equal for these methods.
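To make the mixture-model idea concrete, here is a simplified single-marker backcross sketch (not the paper's flanking-marker formulation): within one marker genotype class, phenotypes are drawn from the two QTL genotype distributions with weights 1 − r and r, and the recombination frequency and genotype means are estimated by maximizing the mixture likelihood. The data, starting values, and bounds below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulated backcross phenotypes for one marker class (hypothetical data):
# with probability 1 - r the individual carries Qq (mean mu1), with probability r it carries qq (mean mu2).
r_true, mu1_true, mu2_true, sigma = 0.1, 10.0, 8.0, 1.0
n = 500
recombinant = rng.random(n) < r_true
y = np.where(recombinant, rng.normal(mu2_true, sigma, n), rng.normal(mu1_true, sigma, n))

def neg_log_lik(params):
    r, mu1, mu2, sd = params
    dens = (1 - r) * norm.pdf(y, mu1, sd) + r * norm.pdf(y, mu2, sd)
    return -np.sum(np.log(dens))

res = minimize(
    neg_log_lik,
    x0=[0.2, y.mean() + 0.5, y.mean() - 0.5, y.std()],
    bounds=[(1e-3, 0.5), (None, None), (None, None), (1e-3, None)],
    method="L-BFGS-B",
)
print(res.x)  # estimates of (r, mu1, mu2, sigma)
```

As the abstract notes, r is estimated less efficiently than the genotype means, and the estimate degrades as the means move closer together or as fewer recombinant individuals are observed.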

191 citations


Journal ArticleDOI
Jonathan Black1, H Sherk1, J Bonini1, W R Rostoker1, F Schajowicz1, J O Galante1 
TL;DR: The case of another patient who had clinical failure of a total hip prosthesis that was made of titaniumbased alloy and ultra-high molecular weight polyethylene and was fixed with polymethylmethacrylate is reported.
Abstract: Metallosis is usually defined as aseptic fibrosis, local necrosis, or loosening of a device secondary to metallic corrosion and release of wear debris’2. It has been an occasional but characteristic clinical finding in patients who have a metal-on-metal design of total hip replacement, such as the McKee-Farrar prosthesis18. The nearly universal adoption of metal-on-polymer designs (following the lead of Charnley’s low-friction arthroplasty) and the more recent introduction of ceramic-on-polymer and ceramic-onceramic22 articulating interfaces have reduced release of metallic debris due to wear sufficiently that this mode of failure of well fixed hip components has been almost eliminated. However, a recent report’ identified metallosis as a contributing factor in the clinical failure of nine total hip replacements that were made of titanium-based alloy and ultrahigh molecular weight polyethylene. Apparently the metallosis was secondary to wear and fretting of the metallic femoral components. We report the case of another patient who had clinical failure of a total hip prosthesis that was made of titaniumbased alloy and ultra-high molecular weight polyethylene and was fixed with polymethylmethacrylate. The failure was due to excessive wear of the metallic femoral head, in the absence of loosening of either component.

186 citations


Journal ArticleDOI
TL;DR: In this article, the authors delineate the link between the carrying capacity and tourism lifecycle concepts and suggest management implications consistent with the synergistic relationship that exists between the two concepts in tourism.

Journal ArticleDOI
TL;DR: The authors reviewed accepted theories of humor and their implications for the field of management and concluded that even though the interest in humor by management has been sporadic, joking behavior remains a pervasive and important topic and has the potential of providing significant insights into management and organizational behavior.

Journal ArticleDOI
TL;DR: In this article, a two-stage algorithm is proposed to estimate power system frequency deviation and its average rate of change during emergency operating conditions that may require load shedding, where an adaptive extended Kalman filter is used to calculate the frequency deviation, magnitude, and phase angle of the voltage phasor.
Abstract: A novel Kalman filtering-based technique is presented for estimating power system frequency deviation and its average rate of change during emergency operating conditions that may require load shedding. This method obtains the optimal estimate of the power system frequency deviation from noisy voltage samples and the best estimate of the mean system frequency deviation and its rate of change while accounting for low-frequency synchronizing oscillations which occur during large disturbances. The proposed technique is a two-stage algorithm which uses an adaptive extended Kalman filter in series with an adaptive linear Kalman filter. The extended Kalman filter calculates the frequency deviation, magnitude, and phase angle of the voltage phasor, which may change during the time period covered by the estimation window. Both the measurement noise variance and the system noise covariance associated with the voltage samples are calculated online. The instantaneous frequency deviation is used as the input to a linear Kalman filter, which models the frequency deviation as a random walk plus a random ramp process. The estimated average rate of frequency decay is represented by the slope of the random ramp. Results for both single and multiple measurements are reported.
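As a sketch of the second stage only (not the authors' implementation, and with hypothetical noise covariances and update rate): a linear Kalman filter whose state holds the frequency deviation and its rate of change, so that the estimated slope plays the role of the average rate of frequency decay.

```python
import numpy as np

dt = 1 / 60.0                            # update interval (assumed: once per cycle)
F = np.array([[1.0, dt], [0.0, 1.0]])    # random walk + random ramp transition
H = np.array([[1.0, 0.0]])               # only the frequency deviation is observed
Q = np.diag([1e-5, 1e-6])                # hypothetical process noise covariance
R = np.array([[1e-3]])                   # hypothetical measurement noise variance

x = np.zeros((2, 1))                     # [deviation (Hz), rate of change (Hz/s)]
P = np.eye(2)

def kf_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the instantaneous deviation produced by the first (extended KF) stage
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Simulated deviation decaying at -0.2 Hz/s, observed with noise (illustrative only).
rng = np.random.default_rng(1)
for k in range(600):
    z = np.array([[-0.2 * k * dt + 0.03 * rng.standard_normal()]])
    x, P = kf_step(x, P, z)
print(float(x[0, 0]), float(x[1, 0]))   # estimated deviation and average decay rate (close to -0.2)
```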

Journal ArticleDOI
TL;DR: In this paper, an approach that accounts for the time-varying nature of the fundamental and harmonic components is developed to detect high-impedance faults accompanied by variations in the 60 Hz and harmonic component.
Abstract: High-impedance faults are accompanied by variations in the 60 Hz and harmonic components. An approach that accounts for the time-varying nature of the fundamental and harmonic components is developed. This approach is based on Kalman-filtering theory to obtain the best estimation of the time variations of the fundamental and harmonic components. Four actual recorded data sets for staged arcing faults on different types of soil are investigated by the proposed Kalman-filtering technique. The time variations of fundamental and low-order harmonic components significantly contribute to high-impedance fault detection using the features presented.
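The measurement model underlying this kind of harmonic tracking relates each voltage or current sample to in-phase and quadrature components of the fundamental and selected harmonics. The sketch below builds that observation matrix and, as a simplified stand-in for the Kalman filter, fits the components by least squares over a short window; the sampling rate, harmonic set, and signal are assumptions.

```python
import numpy as np

fs = 1920.0            # assumed sampling rate: 32 samples per 60 Hz cycle
f0 = 60.0
harmonics = [1, 3, 5]  # fundamental plus low-order harmonics (choice is illustrative)

def observation_row(t):
    # In-phase / quadrature pair for each tracked harmonic at sample time t.
    return np.concatenate([[np.cos(2 * np.pi * n * f0 * t),
                            np.sin(2 * np.pi * n * f0 * t)] for n in harmonics])

# Synthetic current samples with drifting 3rd-harmonic content (illustrative only).
N = 64
t = np.arange(N) / fs
signal = (100 * np.cos(2 * np.pi * f0 * t)
          + (5 + 40 * t) * np.cos(2 * np.pi * 3 * f0 * t)
          + 0.5 * np.random.default_rng(2).standard_normal(N))

H = np.vstack([observation_row(tk) for tk in t])
# Least-squares fit over the window as a simplified stand-in for the Kalman update.
coeffs, *_ = np.linalg.lstsq(H, signal, rcond=None)
mags = [np.hypot(coeffs[2 * i], coeffs[2 * i + 1]) for i in range(len(harmonics))]
print(dict(zip(harmonics, np.round(mags, 2))))   # per-harmonic magnitudes over the window
```

Tracking how these per-harmonic magnitudes move from window to window is the kind of time variation the detection features are built on.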

Journal ArticleDOI
TL;DR: Numerical techniques for modeling computer networks under nonstationary conditions are discussed, and two distinct approaches are presented; the second allows multiple classes of traffic to be modeled and establishes a framework for the use of optimal control techniques in the design of network control strategies.
Abstract: Numerical techniques for modeling computer networks under nonstationary conditions are discussed, and two distinct approaches are presented. The first approach uses a queuing theory formulation to develop differential equation models which describe the behavior of the network by time-varying probability distributions. In the second approach, a nonlinear differential equation model is developed for representing the dynamics of the network in terms of time-varying mean quantities. This approach allows multiple classes of traffic to be modeled and establishes a framework for the use of optimal control techniques in the design of network control strategies. Numerical techniques for determining the queue behavior as a function of time for both approaches are discussed and their computational advantages are contrasted with simulation.
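The second approach, differential equations for time-varying mean queue lengths, can be illustrated with a standard single-queue fluid approximation under a nonstationary arrival rate; the model and parameters below are a generic sketch, not the paper's multiclass formulation.

```python
import numpy as np

mu = 10.0                                                  # service rate (assumed)
lam = lambda t: 6.0 + 3.0 * np.sin(2 * np.pi * t / 5.0)    # nonstationary arrival rate

def dxdt(x, t):
    # Mean queue-length dynamics: arrivals in, departures out at rate mu times utilization.
    return lam(t) - mu * x / (1.0 + x)

# Simple Euler integration of the mean queue length over time.
dt, T = 0.001, 20.0
x = 0.0
trace = []
for t in np.arange(0.0, T, dt):
    x += dt * dxdt(x, t)
    trace.append(x)

print(round(trace[-1], 3))   # mean queue length at t = T
```

Solving an ODE like this is typically far cheaper than discrete-event simulation of the same nonstationary queue, which is the computational advantage the abstract points to.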

Journal ArticleDOI
TL;DR: This paper investigated the underlying causes and the announcement effects of plant closings and found that firms announcing closings have lower earnings than the market or industry median; earnings typically improve slightly after the announcement.

Journal ArticleDOI
TL;DR: This linearization scheme provides an equivalent mixed integer linear programming problem which yields a tighter continuous relaxation than that obtainable via the alternative linearization techniques available in the literature.
Abstract: This paper is concerned with a new linearization strategy for a class of zero-one mixed integer programming problems that contains quadratic cross-product terms between continuous and binary variables, and between the binary variables themselves. This linearization scheme provides an equivalent mixed integer linear programming problem which yields a tighter continuous relaxation than that obtainable via the alternative linearization techniques available in the literature. Moreover, the proposed technique provides a unifying framework in the sense that all the alternate methods lead to formulations that are accessible through appropriate surrogates of the constraints of the new linearized formulation. Extensions to various other types of mixed integer nonlinear programming problems are also discussed.
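As a concrete instance of the kind of cross-product linearization being tightened here (this is the textbook construction, not the paper's reformulation): a product $w = x\,y$ of a continuous variable $0 \le x \le U$ and a binary variable $y \in \{0,1\}$ can be replaced by the linear constraints

    w \le U\,y, \qquad w \le x, \qquad w \ge x - U\,(1 - y), \qquad w \ge 0,

and a product of two binaries $z = y_1 y_2$ by $z \le y_1$, $z \le y_2$, $z \ge y_1 + y_2 - 1$, $z \ge 0$. The abstract's claim is that the proposed scheme yields a continuous relaxation tighter than what such standard substitutions from the literature provide.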

Journal ArticleDOI
TL;DR: This paper examined the relationship between citations of published papers and comments provided by reviewers and editors and found that reviewers' comments demonstrate a positive impact on subsequent citation of papers, while comments made by editors show no such impact.
Abstract: Do referees employed by journals merely screen acceptable from unacceptable manuscripts or are they charged with an additional value-adding responsibility vis-a-vis the papers they review? Drawing from editorial correspondence provided by survey respondents, I address this question by examining the relationship between citations of published papers and comments provided by reviewers and editors. Referees' comments demonstrate a positive impact on subsequent citation of papers, while comments made by editors show no such impact. Value-adding by editors appears to derive principally from efficient matching of papers with reviewers.

Book ChapterDOI
01 Jan 1990
TL;DR: In this article, coherent backscatter radar (CBR) is used to study the structure and dynamics of the troposphere and stratosphere in both the clear and cloudy atmosphere, and it has become accepted in the radar meteorology community as a valuable complement to the weather radar measurement techniques that make use of Rayleigh scattering from hydrometeors.
Abstract: An explosive growth and development in coherent-backscatter radar techniques for studying the structure and dynamics of the troposphere and stratosphere has occurred in the past two decades. Ultra-High Frequency (UHF) and Very High Frequency (VHF) radars can detect echoes caused by Bragg scatter from refractive index structure due to variations in humidity and temperature in both the clear and cloudy atmosphere, and they have now become accepted in the radar meteorology community as a valuable complement to the weather radar measurement techniques that make use of Rayleigh scattering from hydrometeors. The UHF and VHF radars typically operate in the frequency range from the low VHF band (40–50 MHz) to the upper UHF band (3 GHz). In the low VHF band, Bragg scatter from electron density variations in the mesosphere can also be detected.
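For orientation, the Bragg condition behind this measurement principle: a monostatic radar at wavelength λ backscatters most strongly from refractive-index structure at the half-wavelength scale, Λ_Bragg = λ/2, so a 50 MHz VHF profiler (λ = 6 m) responds to roughly 3 m scales while a 3 GHz UHF radar responds to roughly 5 cm scales.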

Journal ArticleDOI
TL;DR: In this paper, the chemistry of heavy polychalcogenides as ligands to transition metal centers is discussed, involving the coordination of polychalcogenide anions to metal carbonyls.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the efficacy of activated carbon treatment for improving the flavor of soymilk-based yogurt products and to determine the functionality of nonfat dry milk and whey protein concentrate in the manufacture of milk-based and SOY-MIL-YOGA.
Abstract: The study was conducted to investigate the efficacy of the activated carbon treatment for improving the flavor of soymilk-based yogurt products and to determine the functionality of nonfat dry milk and whey protein concentrate in the manufacture of milk-based and soymilk-based yogurt. These products were subjected to a chemical analysis and characterized for physical and sensory properties

Journal ArticleDOI
TL;DR: This paper investigates the impact of adjustments in the design parameters of MPS freezing methods on two performance measures (MPS lot-sizing cost and stability) under stochastic demand conditions in a rolling planning horizon environment given a service level target.
Abstract: Managing the trade-off between achieving a stable master production schedule (MPS) and being responsive to changes in customer requirements is a difficult problem in many firms where providing a high level of customer service is viewed as an important competitive factor. One alternative for managing this trade-off is to freeze an agreed portion of the MPS. This paper investigates the impact of adjustments in the design parameters of MPS freezing methods on two performance measures (MPS lot-sizing cost and stability) under stochastic demand conditions in a rolling planning horizon environment given a service level target. Simulation experiments are reported which indicate that many of the conclusions regarding the design of MPS freezing methods obtained under deterministic demand conditions hold under stochastic demand.

Journal ArticleDOI
TL;DR: In this article, the variability in responses of tall fescue genotypes to endophyte infection for select growth characteristics, plant morphology, and chemical components was determined.
Abstract: (...) The objectives of this study were to determine variability in responses of tall fescue genotypes to endophyte infection for select growth characteristics, plant morphology, and chemical components. Five genotypes of tall fescue were grown in both endophyte-infected (EI) and noninfected (NI) forms in the greenhouse under different clipping treatments. Tiller and dry matter production, crown depth, and total nonstructural carbohydrates (TNC) were measured (...)

Journal ArticleDOI
TL;DR: Regression analyses confirmed each of these hypotheses and indicated that strength of prediction was not reduced when participants free from hypertension-related complications and medication were tested, and implications for the role of blood pressure in aging research and for longitudinal studies with subjects free from the need for treatment with antihypertensive medications are discussed.
Abstract: Regression analyses using Blood Pressure, Age, and the multiplicative effect of Blood Pressure and Age as predictors of performance (on selected tests from the Halstead-Reitan neuropsychological test battery) were performed. Three hypotheses were tested with subjects ranging in age from 20 to 72 years: (1) blood pressure values predict neuropsychological test performance over a wide range of hypertensive and normotensive blood pressure values; (2) blood pressure predicts performance within the narrower range of normal and borderline values; (3) blood pressure × age interactions, when observed over this age and education range, are such that negative blood pressure effects on performance are larger for younger than older subjects. Regression analyses confirmed each of these hypotheses and indicated that strength of prediction was not reduced when participants free from hypertension-related complications and medication were tested. Blood pressure × age interactions were seen for the Trailmaking-B Test and the Tactile Performance Test-Localization for the primary sample. However, only blood pressure main effects were observed for the Average Impairment Rating, the Categories Test, TPT-Memory and TPT-Localization when age, sex, and education were controlled. Implications of these findings for the role of blood pressure in aging research and for longitudinal studies with subjects free from the need for treatment with antihypertensive medications are discussed.
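The blood pressure × age interaction tested here corresponds to a product term in the regression. A minimal sketch with synthetic data (variable names and coefficients are invented), using the statsmodels formula interface, where bp * age expands to both main effects plus their interaction:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame({
    "age": rng.uniform(20, 72, n),
    "bp": rng.normal(130, 20, n),          # systolic blood pressure (synthetic)
    "education": rng.integers(8, 20, n),
})
# Synthetic test score in which the negative BP effect is larger for younger subjects.
df["score"] = (60 - 0.05 * df["bp"] - 0.1 * df["age"]
               + 0.0015 * df["bp"] * df["age"]
               + 0.3 * df["education"]
               + rng.normal(0, 2, n))

# 'bp * age' expands to bp + age + bp:age (main effects plus interaction term).
model = smf.ols("score ~ bp * age + education", data=df).fit()
print(model.params[["bp", "age", "bp:age"]])
```

A significant bp:age coefficient is the statistical form of the abstract's third hypothesis; the main-effect-only results correspond to models where that term is dropped.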

Journal ArticleDOI
TL;DR: A comparative evaluation of state-of-the-art tools and techniques for estimating the reliability of fault-tolerant computing systems is presented, with particular attention to design limitations imposed by underlying model assumptions and the efficiency and accuracy of the solution techniques employed.
Abstract: A comparative evaluation of state-of-the-art tools and techniques for estimating the reliability of fault-tolerant computing systems is presented. The theory of reliability estimation is briefly reviewed. Five current approaches are compared in detail: HARP (hybrid automated reliability predictor), SURE (semi-Markov unreliability range estimator), HEIRESS (hierarchical estimation of interval reliability by skewed sampling), SHARPE (symbolic hierarchical automated reliability and performance evaluator), and SAVE (system availability estimator). Particular attention is given to design limitations imposed by underlying model assumptions, on the one hand, and the efficiency and accuracy of the solution techniques employed, on the other hand.

Journal ArticleDOI
TL;DR: Picomole quantities of heme were measured by reconstitution of the heme with apo-peroxidase and subsequent determination of peroxidase activity; the results may be indicative of a 'free heme pool' which exists in the chloroplast.
Abstract: Hemes destined for cytosolic hemoproteins must originate in one of the cellular compartments which have the capacity for heme synthesis, namely the chloroplast or the mitochondria. Since developing chloroplasts from greening cucumber (Cucumis sativus, cv. Sumter) cotyledons are known to contain complete heme and chlorophyll biosynthetic pathways, they were tested for their capacity to export hemes. Picomole quantities of heme were measured by reconstitution of the heme with apo-peroxidase and subsequent determination of peroxidase activity. The assay method was sensitive (as little as 0.7 picomole of heme could be detected in a volume of 100 microliters) and was linear with heme concentration. When intact plastids were incubated with apo-peroxidase, a steady-state rate of efflux between 0.12 and 0.45 picomole heme/minute/milligram plastid protein was measured. The efflux rate was not due to plastid breakage and could be enhanced by incubating with the heme precursor, δ-aminolevulinic acid. Cold acetone extraction removed 47 ± 17 picomoles heme/milligram plastid protein from the total b-type heme pool in the chloroplasts (166 ± 9 picomoles heme/milligram protein, by acid-acetone extraction). The reconstitution technique provided a similar estimate of readily exchangeable heme in the plastid, 37 ± 8 picomoles heme/milligram protein (or 6 micromolar in the plastids). These values may be indicative of a 'free heme pool' which exists in the chloroplast.

Journal ArticleDOI
01 Jun 1990
TL;DR: In this paper, a compliant control scheme and its practical implementation are presented and a control strategy is derived so that the measured interactive force/torque can be used to modify the trajectory of the robot.
Abstract: When two robots execute coordinated motion in industrial applications such as assembly, not only is the specification of the desired motion of each robot's end effector required, but also that of the desired contact force exerted on the end effectors. The involvement of the forces is the motivation for a consideration of compliance. The compliant coordination control scheme of two moving robots is presented. One robot is equipped with a force/torque sensing wrist, which provides force feedback signals and permits the fine coordinated motion control of complex assembly tasks. A control strategy is derived so that the measured force/torque can be used to modify the trajectory of one of the robots. This control scheme can be realized in real time by using the proposed computational structure. The system stability is related to the force-displacement conversion gain in the forward path. An experiment that screws a moving nut onto a moving bolt has been carried out as an illustration.
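The trajectory-modification strategy, scaling the contact force error by a force-to-displacement conversion gain and adding the result to the commanded position of the compliant robot, can be caricatured in one axis. Everything below (gains, contact stiffness, the quasi-static contact model) is an invented illustration, not the paper's controller.

```python
# One-axis sketch of force-feedback trajectory modification for the compliant robot:
# the measured contact-force error, scaled by a force-to-displacement conversion gain,
# offsets the commanded position relative to the leading robot.
KF = 0.0001       # force-to-displacement conversion gain (m/N), assumed
K_ENV = 5000.0    # stiffness of the contact between the two end effectors (N/m), assumed
F_DESIRED = 10.0  # desired contact force (N)
DT = 0.01

x_cmd = 0.0        # commanded position of the compliant robot (m)
x_other = 0.0      # position of the leading robot (m), moving at constant speed

for step in range(500):
    x_other += 0.001 * DT                                # leading robot's own trajectory
    f_measured = max(0.0, K_ENV * (x_cmd - x_other))     # wrist sensor reading (contact only pushes)
    # Trajectory modification: follow the other robot, offset by the scaled force error.
    x_cmd = x_other + KF * (F_DESIRED - f_measured) + f_measured / K_ENV

print(round(f_measured, 2))   # settles near the desired 10 N contact force
```

In this discrete sketch the loop converges only when the conversion gain is small relative to the contact stiffness, echoing the abstract's remark that stability depends on the force-displacement conversion gain in the forward path.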

Journal ArticleDOI
TL;DR: Fat added to the diet, regardless of source, reduced digestibilities of DM, energy, ADF and fatty acids in the rumen but had no effect on total tract digestibility coefficients, while feeding lecithin tended to increase fatty acid digestion in the hindgut.
Abstract: Six Hampshire wethers with ruminal and duodenal cannulas were fed three diets in a replicated 3 X 3 latin square to compare phospholipids with triglycerides for their effects on ruminal digestion. The diets (56% concentrate, 44% bermuda-grass hay, air-dried basis) contained either no added fat (control), 5.2% soybean lecithin or 2.4% corn oil on a DM basis. All diets were isonitrogenous and both fat-supplemented diets had similar fatty acid and energy contents. Fat added to the diet, regardless of source, reduced digestibilities of DM, energy, ADF and fatty acids in the rumen but had no effect on total tract digestibility coefficients. Lecithin slightly increased (P = .06) fatty acid digestion in the hindgut compared to corn oil (91.0 and 87.0%, respectively). Both fat sources decreased (P less than .01) ruminal ammonia concentration and increased (P less than .10) N flow to the duodenum. Added fat also reduced ruminal (P less than .01) and total tract (P less than .05) N digestibilities. Microbial N flow to the hindgut was not affected by diet, but adding fat increased (P less than .06) true efficiency of microbial protein synthesis. Overall, phospholipids from soybean lecithin inhibited ruminal fermentation similarly to triglycerides from corn oil. Despite ruminal degradation of lecithin by microbial phospholipases as shown in other studies, feeding lecithin tended to increase fatty acid digestion in the hindgut.

Journal ArticleDOI
TL;DR: In this article, an experiment was conducted to determine the crop responses to temperature in naturally lighted, temperature and CO 2 -controlled plant growth chambers, and measurements were made of several plant parameters.
Abstract: (...) An experiment was conducted to determine the crop responses to temperature in naturally lighted, temperature- and CO 2 -controlled plant growth chambers. The experiment was conducted at five temperatures and measurements were made of several plant parameters. A preliminary experiment showed that mepiquat chloride (MC) caused a dramatic reduction in plant height and number of main-stem nodes. (...)

Journal ArticleDOI
TL;DR: Variations of free amino acid patterns in the fermented, dried sausages were found, indicating differences in metabolic activity between the cultures, similar to results previously reported for 'European-style' dry sausage processed with prolonged low temperature fermentations.