
Showing papers by "Clemson University" published in 1994


Journal ArticleDOI
TL;DR: A review of this literature suggests several trends for applied psychologists; in particular, there is a great need for more investigation of strategies for dealing with missing data, especially when data are missing in nonrandom or systematic patterns.
Abstract: There has been conspicuously little research concerning missing data problems in the applied psychology literature. Fortunately, other fields have begun to investigate this issue. These include survey research, marketing, statistics, economics, and biometrics. A review of this literature suggests several trends for applied psychologists. For example, listwise deletion of data is often the least accurate technique to deal with missing data. Other methods for estimating missing data scores may be more accurate and preserve more data for investigators to analyze. Further, the literature reveals that the amount of missing data and the reasons for deletion of data impact how investigators should handle the problem. Finally, there is a great need for more investigation of strategies for dealing with missing data, especially when data are missing in nonrandom or systematic patterns.
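The contrast between listwise deletion and score estimation discussed above can be illustrated with a short sketch. The example below is hypothetical and not from the paper; it assumes a pandas DataFrame of survey items with scattered missing responses and compares how much data each strategy preserves.

```python
# Hypothetical illustration of two missing-data strategies discussed above:
# listwise deletion versus a simple per-item mean imputation.
import pandas as pd
import numpy as np

# Toy survey data with scattered missing responses (not from the paper).
df = pd.DataFrame({
    "item1": [4, 3, np.nan, 5, 2, 4],
    "item2": [2, np.nan, 3, 4, 3, np.nan],
    "item3": [5, 4, 4, np.nan, 2, 3],
})

# Listwise deletion: drop any respondent with at least one missing item.
listwise = df.dropna()

# Mean imputation: estimate each missing score from the item mean,
# preserving every respondent for analysis.
imputed = df.fillna(df.mean())

print(f"respondents kept by listwise deletion: {len(listwise)} of {len(df)}")
print(f"respondents kept by mean imputation:  {len(imputed)} of {len(df)}")
```

Mean imputation is only one of the estimation methods the review covers; the point of the sketch is simply that deletion discards whole cases while estimation preserves them, which matters most when data are missing in nonrandom patterns.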

989 citations


Journal ArticleDOI
TL;DR: In this article, a simulation of a 20 solar mass "delayed" supernova explosion is presented, where the authors follow the detailed evolution of material moving through the bubble at the late times appropriate to r-process nucleosynthesis.
Abstract: As a neutron star is formed by the collapse of the iron core of a massive star, its Kelvin-Helmholtz evolution is characterized by the release of gravitational binding energy as neutrinos. The interaction of these neutrinos with heated material above the neutron star generates a hot bubble in an atmosphere that is nearly in hydrostatic equilibrium and heated, after approximately 10 s, to an entropy of S/N_A k ≳ 400. The neutron-to-proton ratio for material moving outward through this bubble is set by the balance between neutrino and antineutrino capture on nucleons. Because the electron antineutrino spectrum at this time is hotter than the electron neutrino spectrum, the bubble is neutron-rich (0.38 ≲ Y_e ≲ 0.47). Previous work using a schematic model has shown that these conditions are well suited to the production of heavy elements by the r-process. In this paper we have advanced the numerical modeling of a 20 solar mass 'delayed' supernova explosion to the point that we can follow the detailed evolution of material moving through the bubble at the late times appropriate to r-process nucleosynthesis. The supernova model predicts a final kinetic energy for the ejecta of 1.5 × 10^51 ergs and leaves behind a remnant with a baryon mass of 1.50 solar mass (and a gravitational mass of 1.445 solar mass). We follow the thermodynamic and compositional evolution of 40 trajectories in ρ(t), T(t), Y_e(t) for a logarithmic grid of mass elements, from the last ≈ 0.03 solar mass to be ejected by the proto-neutron star down to the last < 10^-6 solar mass of material expelled up to ≈ 18 s after core collapse. We find that an excellent fit to the solar r-process abundance distribution is obtained with no adjustable parameters in the nucleosynthesis calculations. Moreover, the abundances are produced in the quantities required to account for the present Galactic abundances. However, at earlier times, this one-dimensional model ejects too much material with entropies S/N_A k ≈ 50 and Y_e ≈ 0.46. This leads to an unacceptable overproduction of N = 50 nuclei, particularly Sr-88, Y-89, and Zr-90, relative to their solar abundances. We speculate on various means to avoid the early overproduction and/or ejection of N = 50 isotones while still producing and ejecting the correct amount of r-process material.
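As a worked illustration of the neutron-to-proton balance described above (not taken from the paper itself), the asymptotic electron fraction of material bathed in the neutrino flux is set by the competition between electron-neutrino capture on neutrons and electron-antineutrino capture on protons:

```latex
% Weak equilibrium between  nu_e + n -> p + e^-  and  nubar_e + p -> n + e^+ .
% (Standard approximation; the individual capture rates are not quoted in the abstract.)
\[
\lambda_{\nu_e n}\, Y_n \;=\; \lambda_{\bar{\nu}_e p}\, Y_p
\quad\Longrightarrow\quad
Y_e \;=\; Y_p \;\approx\; \frac{\lambda_{\nu_e n}}{\lambda_{\nu_e n} + \lambda_{\bar{\nu}_e p}} .
\]
```

Because the antineutrino spectrum is hotter than the neutrino spectrum at late times, the antineutrino capture rate dominates and the equilibrium Y_e falls below 0.5, consistent with the quoted range 0.38 ≲ Y_e ≲ 0.47.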

693 citations


Journal ArticleDOI
TL;DR: In this paper, water vapor transmission measurements of hydrophilic edible films with high transmission rates are corrected for the resistance of the stagnant air gap in the test cup, which otherwise leads to underestimation of water vapor transmission rates.

448 citations


Book ChapterDOI
01 Jan 1994
TL;DR: Examination of strains of L. acidophilus within the six homology groupings of Johnson et al. demonstrated that production of the bacteriocin lactacin B could not be used in classification of neotype L. acidophilus strains; however, the usefulness of employing sensitivity to lactacin B in classification of dairy lactobacilli is suggested.
Abstract: Numerous reports prior to the discovery of lactacin B in 1983 suggested that Lactobacillus acidophilus produces antimicrobial agents (presumably proteins) inhibitory to undesirable intestinal flora (Vincent et al., 1959; Vakil & Shahani, 1965; Gilliland & Speck, 1977; Shahani & Ayebo, 1980). Of those described, the agent most like a bacteriocin was the broad-spectrum inhibitor lactocidin from aged liver veal agar cultures of intestinal isolates of L. acidophilus (Vincent et al., 1959). Crude lactocidin was extractable from agar cultures adjusted to below pH 5.0 or above pH 7.8, but not to pH 7.0. Unlike lactic acid or hydrogen peroxide, crude lactocidin was non-dialyzable, insensitive to treatment with catalase, and active at pH 7.0 against Gram-negative and Gram-positive cultures, including several pathogens. The activity of lactocidin against Salmonella enteritidis, Escherichia coli, Pseudomonas aeruginosa, and Staphylococcus aureus led Vincent et al (1959) to speculate that L. acidophilus producer strains were essential in controlling undesirable flora in the intestinal tract. However, antibacterial activity at pH 7.0 was lost with further purification (Vincent et al., 1959) and subsequent characterization of lactocidin is noticeably absent from the literature. This report and the controversy regarding broad-spectrum antagonists produced by L. acidophilus led Barefoot & Klaenhammer (1983) to survey the species for production of inhibitory proteins.

424 citations


Journal ArticleDOI
TL;DR: Macrophage response to particulate debris appears to be dependent on particle size, composition, and dose as given by surface area ratio, and inhibition of macrophage DNA synthesis at higher surface area ratios suggests cell damage or death.
Abstract: Particulate wear-debris are detected in histiocytes/macrophages of granulomatous tissues adjacent to loose joint prostheses. Such cell-particle interactions have been simulated in vitro by challenging macrophages with particles dosed according to weight percent, volume percent, and number of particles. Each of these dosage methods has inherent shortcomings due to varying size and density of challenging particles of different compositions. In this study we challenged P388D1 macrophages with titania and polystyrene particles (< 2 microns), with dosage based on the ratio of the surface area of the particles to the surface area of the cells. The effect of size and composition on (1) the bone resorbing activity, (2) fibroblast proliferation, and (3) secretion of IL-1 and PGE2 was determined. Macrophage response to particulate debris appears to be dependent on particle size, composition, and dose as given by surface area ratio. P388D1 macrophages challenged with titania particles released IL-1, but did not stimulate fibroblasts. Inhibition of macrophage DNA synthesis at higher surface area ratios suggests cell damage or death. Particle-stimulated cells increased bone resorption up to 125% of controls but released only basal levels of PGE2. Macrophages stimulated by wear particles are expected to synthesize numerous factors affecting events in the bone-implant interface. Using the concept of surface area ratio allows us to study and compare such cellular responses to wear particles in a standardized manner.
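A minimal formalization of the dosing scheme described above, assuming spherical particles (the particle geometry is not specified in the abstract), is:

```latex
% Surface-area-ratio (SAR) dose: total particle surface area over total cell
% surface area, assuming roughly spherical particles of radius r (a sketch;
% not necessarily the authors' exact expression).
\[
\mathrm{SAR} \;=\; \frac{N_{\mathrm{particles}} \cdot 4\pi r^{2}}
                        {N_{\mathrm{cells}} \cdot A_{\mathrm{cell}}}
\]
```

Dosing by SAR rather than by weight, volume, or particle count lets particles of different size and density (here titania and polystyrene) be compared on the basis of the surface actually presented to the cells.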

404 citations


Journal ArticleDOI
TL;DR: This paper proposes a technique which first converts the problem into a nonlinear, polynomial mixed-integer zero-one problem by multiplying the constraints with some suitable d-degree polynomial factors involving the n binary variables, and subsequently linearizes the resulting problem through appropriate variable transformations.
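The TL;DR does not give the construction in detail, but the standard way such polynomial zero-one terms are linearized is by replacing each product of binary variables with a new variable constrained to equal that product; a minimal sketch:

```latex
% Standard linearization of a product of binary variables (illustrative, not
% necessarily the paper's exact construction). For x_1, x_2 in {0,1},
% introduce w = x_1 x_2 and impose:
\[
w \le x_1, \qquad w \le x_2, \qquad w \ge x_1 + x_2 - 1, \qquad w \ge 0 .
\]
% These constraints force w = x_1 x_2 at every binary point; together with
% x_i^2 = x_i, any d-degree product of binaries reduces to linear terms.
```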

380 citations



Journal ArticleDOI
TL;DR: The modern history of cellulose chemistry began in 1837, when Anselme Payen chemically identified cellulose from plants; since then, the establishment of its chemical and physical structures has undergone multitudinous periods of struggle.
Abstract: Cellulose as a material has been widely used for centuries in all kinds of practical applications. However, its chemical composition, structure and morphology remained unknown for centuries. The modern history of cellulose chemistry actually began in 1837, when Anselme Payen chemically identified cellulose from plants. Since then, the establishment of its chemical and physical structures has undergone multitudinous periods of struggle. Until the early 1920s, many scientists believed that cellulose was made up of a few small molecules of glucose or cellobiose. Very few scientists accepted the premise that it was a polymer. The controversy continued for over ten years. Eventually, substantial experimental data provided proof that cellulose is a covalently linked, high-molecular-weight macromolecule. This fact also provided the foundation for the establishment of polymer science. Some of the historical development of cellulose chemistry and structure is briefly reviewed, and recent approaches to studying cellulose structures with new instrumentation are discussed.

302 citations


Journal ArticleDOI
TL;DR: The sequential testing of more than two hypotheses has important applications in direct-sequence spread spectrum signal acquisition, multiple-resolution-element radar, and other areas and it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large.
Abstract: The sequential testing of more than two hypotheses has important applications in direct-sequence spread spectrum signal acquisition, multiple-resolution-element radar, and other areas. A useful sequential test which we term the MSPRT is studied in this paper. The test is shown to be a generalization of the sequential probability ratio test. Under Bayesian assumptions, it is argued that the MSPRT approximates the much more complicated optimal test when error probabilities are small and expected stopping times are large. Bounds on error probabilities are derived, and asymptotic expressions for the stopping time and error probabilities are given. A design procedure is presented for determining the parameters of the MSPRT. Two examples involving Gaussian densities are included, and comparisons are made between simulation results and asymptotic expressions. Comparisons with Bayesian fixed sample size tests are also made, and it is found that the MSPRT requires two to three times fewer samples on average.
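A minimal sketch of an MSPRT-style stopping rule, assuming known densities for each hypothesis and a single common posterior threshold; the paper's actual design procedure, per-hypothesis thresholds, and error-probability analysis are more refined than this.

```python
# Sketch of a multihypothesis sequential test in the spirit of the MSPRT:
# accumulate likelihoods under each hypothesis and stop when one posterior
# dominates. Densities, priors, and the threshold here are illustrative.
import numpy as np

def msprt(sample_stream, densities, priors, threshold=0.99):
    """Return (decided_hypothesis, n_samples_used).

    densities : list of callables p_i(x) giving the likelihood of one sample
                under hypothesis i.
    priors    : prior probabilities of the hypotheses.
    threshold : stop when some posterior exceeds this value (a simple way to
                derive a stopping rule from target error rates).
    """
    log_post = np.log(np.asarray(priors, dtype=float))
    for n, x in enumerate(sample_stream, start=1):
        log_post += np.log([p(x) for p in densities])   # accumulate log-likelihoods
        post = np.exp(log_post - log_post.max())
        post /= post.sum()                              # normalized posteriors
        if post.max() >= threshold:
            return int(post.argmax()), n
    return int(post.argmax()), n                        # stream exhausted

# Example: three Gaussian hypotheses with different means, true mean = 1.0.
rng = np.random.default_rng(0)
gauss = lambda mu: (lambda x: np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2 * np.pi))
decision, used = msprt((rng.normal(1.0, 1.0) for _ in range(1000)),
                       [gauss(0.0), gauss(1.0), gauss(2.0)],
                       priors=[1/3, 1/3, 1/3])
print(decision, used)
```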

296 citations


Journal ArticleDOI
TL;DR: A detailed literature search was carried out to define the current knowledge about the biological performance of tantalum; the results showed that the pure metal appears to be inert both in vivo and in vitro.

288 citations


Journal ArticleDOI
TL;DR: The economics of most modern aquaculture operations require that animals be cultured at high densities, and elevated concentrations of ammonia and nitrite act as stressors in that they stimulate the release of corticosteroid hormones into circulation.
Abstract: The economics of most modern aquaculture operations require that animals be cultured at high densities. A consequence of high‐density aquaculture is the increased probability that the animals will be exposed to elevated concentrations of nitrogenous wastes, particularly ammonia and nitrite. Ammonia toxicity is dependent primarily on the concentration of ammonia and the pH of the environment. High concentrations of ammonia will decrease survival, inhibit growth, and cause a variety of physiological dysfunctions. Nitrite toxicity is dependent on the concentration of nitrite and, in many species, the concentration of chloride in the water. Nitrite toxicosis is characterized by reduced survival and growth, methemoglobinemia, and other physiological dysfunctions. Both ammonia and nitrite act as stressors in that they stimulate the release of corticosteroid hormones into circulation. Elevated concentrations of circulating corticosteroids have been linked to impaired immune function and, consequently, de...
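The pH dependence noted above arises because only the un-ionized fraction of total ammonia is highly toxic; a standard speciation expression (not quoted in the review) for that fraction is:

```latex
% Fraction of total ammonia present as toxic, un-ionized NH3 at a given pH
% (standard speciation formula; pK_a is about 9.25 near 25 C and varies with
% temperature and salinity).
\[
f_{\mathrm{NH_3}} \;=\; \frac{1}{1 + 10^{\,\mathrm{p}K_a - \mathrm{pH}}}
\]
```

At pH 7 well under one percent of total ammonia is un-ionized, while at pH 9 roughly a third is, which is why the same total ammonia concentration can be far more toxic in alkaline water.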

Journal ArticleDOI
TL;DR: In this article, the effect of board composition on overall corporate performance while controlling for managerial ownership and other key variables was investigated, showing that there is a significant curvilinear relation between board composition and performance.
Abstract: We investigate the effect of board composition on overall corporate performance while controlling for managerial ownership and other key variables. We recognize that both managerial ownership and board composition may be endogenous to performance, but our work differs from previous work in two important respects. First, we measure performance using the market value to book value ratio of common stock equity rather than the more commonly used Tobin's q. Second, recognizing that overall estimates from the IV approach depend greatly on the choice of instruments, we perform sensitivity analysis by using a variety of instruments to proxy for board composition and managerial ownership. Both our OLS and IV estimates indicate a significant curvilinear relation between board composition and performance. However, we find that moderate differences in first-stage regressions, resulting in small changes to first-stage R2s, lead to widely differing overall results. Our results suggest that findings of studies using IV and similar techniques (e.g. two- and three-stage least squares) must be interpreted cautiously.
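A minimal sketch of the kind of instrumental-variables estimation and instrument-sensitivity check described above, written with generic variable names (board composition, ownership, a single control) and synthetic data; the instruments and model here are illustrative placeholders, not the authors' sample or specification.

```python
# Illustrative two-stage least squares (2SLS) with a swap of instrument sets,
# mimicking the sensitivity analysis described in the abstract. Data and
# instruments are synthetic placeholders, not the authors' data.
import numpy as np

def two_sls(y, endog, exog, instruments):
    """2SLS: regress endogenous regressors on instruments + exogenous vars,
    then regress y on the fitted values + exogenous vars."""
    Z = np.column_stack([np.ones(len(y)), exog, instruments])   # first-stage design
    fitted = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]       # first-stage fit
    X = np.column_stack([np.ones(len(y)), exog, fitted])        # second-stage design
    return np.linalg.lstsq(X, y, rcond=None)[0]                 # second-stage coefficients

rng = np.random.default_rng(1)
n = 500
exog = rng.normal(size=(n, 1))                     # e.g., a firm-size control (placeholder)
instr_a = rng.normal(size=(n, 2))                  # instrument set A (placeholder)
instr_b = instr_a + 0.3 * rng.normal(size=(n, 2))  # slightly different instrument set B
board = 0.5 * instr_a[:, [0]] + rng.normal(size=(n, 1))   # endogenous regressors
own = 0.5 * instr_a[:, [1]] + rng.normal(size=(n, 1))
endog = np.column_stack([board, own])
y = 0.4 * board[:, 0] - 0.2 * own[:, 0] + exog[:, 0] + rng.normal(size=n)

# Overall estimates can shift noticeably with the instrument set, which is the
# sensitivity the paper warns about.
print(two_sls(y, endog, exog, instr_a))
print(two_sls(y, endog, exog, instr_b))
```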

Journal ArticleDOI
TL;DR: This paper designs and implements integrator backstepping controllers for a brush DC motor driving a one-link robot manipulator and shows that both of these controllers ensure "good" load position tracking despite parametric uncertainty throughout the entire electromechanical system.
Abstract: In this paper, we design and implement integrator backstepping controllers (i.e., adaptive and robust) for a brush DC motor driving a one-link robot manipulator. Through the use of Lyapunov stability-type arguments, we show that both of these controllers ensure "good" load position tracking despite parametric uncertainty throughout the entire electromechanical system. Experimental results are presented to illustrate the performance and feasibility of implementing the nonlinear control algorithms.
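The abstract does not spell out the controller, but integrator backstepping itself is easy to illustrate on a generic two-state cascade; this is a sketch of the technique, not the paper's motor/load design:

```latex
% Integrator backstepping on the cascade \dot{x}_1 = x_2, \dot{x}_2 = u
% (a generic sketch of the technique, not the paper's electromechanical model).
\[
\alpha(x_1) = -k_1 x_1, \qquad z_2 = x_2 - \alpha(x_1),
\]
\[
u = -k_2 z_2 - x_1 + \dot{\alpha} = -k_2 z_2 - x_1 - k_1 x_2,
\qquad
\dot{V} = -k_1 x_1^2 - k_2 z_2^2 \le 0
\quad\text{for}\quad
V = \tfrac{1}{2}x_1^2 + \tfrac{1}{2}z_2^2 .
\]
```

In the paper's setting the motor's electrical dynamics add another layer, so the same virtual-control step is repeated through the current equation, with adaptive or robust terms absorbing the parametric uncertainty.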

Journal ArticleDOI
TL;DR: A rationalization for the alignment of these indoles with traditional cannabinoids is presented, and maximum activity was found for the 1-butyl, pentyl and hexyl analogs.

Proceedings ArticleDOI
01 Dec 1994
TL;DR: This work presents a new approach to class testing that supports dataflow testing for dataflow interactions in a class, and provides opportunities to find errors in classes that may not be uncovered by black-box testing.
Abstract: The basic unit of testing in an object-oriented program is a class. Although there has been much recent research on testing of classes, most of this work has focused on black-box approaches. However, since black-box testing techniques may not provide sufficient code coverage, they should be augmented with code-based or white-box techniques. Dataflow testing is a code-based testing technique that uses the dataflow relations in a program to guide the selection of tests. Existing dataflow testing techniques can be applied both to individual methods in a class and to methods in a class that interact through messages, but these techniques do not consider the dataflow interactions that arise when users of a class invoke sequences of methods in an arbitrary order. We present a new approach to class testing that supports dataflow testing for dataflow interactions in a class. For individual methods in a class, and methods that send messages to other methods in the class, our technique is similar to existing dataflow testing techniques. For methods that are accessible outside the class, and can be called in any order by users of the class, we compute dataflow information, and use it to test possible interactions between these methods. The main benefit of our approach is that it facilitates dataflow testing for an entire class. By supporting dataflow testing of classes, we provide opportunities to find errors in classes that may not be uncovered by black-box testing. Our technique is also useful for determining which sequences of methods should be executed to test a class, even in the absence of a specification. Finally, as with other code-based testing techniques, a large portion of our technique can be automated.
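A tiny, hypothetical example of the kind of inter-method def-use pair the approach targets: a definition in one public method reaching a use in another, which only a particular sequence of client calls exercises. The class below is invented for illustration and is not from the paper.

```python
# Hypothetical class: the def-use interaction between set_rate() and bill()
# is only exercised by call sequences such as set_rate(...); bill(...).
class Meter:
    def __init__(self):
        self.rate = 1.0        # def of self.rate (constructor)

    def set_rate(self, rate):
        self.rate = rate       # def of self.rate (public method)

    def bill(self, units):
        return units * self.rate   # use of self.rate (public method)

# Def-use pairs on self.rate that span method boundaries; class-level dataflow
# testing would require call sequences covering each pair:
du_pairs = [
    ("__init__", "bill"),      # covered by:  m = Meter(); m.bill(3)
    ("set_rate", "bill"),      # covered by:  m = Meter(); m.set_rate(2.0); m.bill(3)
]

m = Meter()
m.set_rate(2.0)
assert m.bill(3) == 6.0        # exercises the ("set_rate", "bill") pair
```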

Journal ArticleDOI
TL;DR: This study examines the relationship between race, leisure preferences, and class awareness, drawing on an interest-group theory of class identification.
Abstract: This study examines the relationship between race, leisure preferences, and class awareness. Based on interest-group theory of class identification, we hypothesized that blacks and whites who defin...

Journal ArticleDOI
TL;DR: Burbidge et al. as discussed by the authors showed that the heavy nuclei were formed in three distinct nucleosynthetic processes, which they termed the r-, s-, and p-processes.
Abstract: Burbidge et al (1957) and Cameron (1957) laid out the framework for our understanding of the formation of the heavy nuclei (those nuclei with mass number A ≳ 70). From systematics in the solar system abundance distribution, Burbidge et al determined that the heavy nuclei were formed in three distinct nucleosynthetic processes, which they termed the r-, s-, and p-processes. That we still use these terms today is a credit to the soundness of this work done 37 years ago. We may understand how Burbidge et al and Cameron arrived at their conclusions from Figure 1. One population of nuclei, the s-nuclei, shows an abundance distribution with peaks near mass numbers 87, 138, and 208. These nuclei are made in a slow neutron-capture process, the s-process. A rapid neutron-capture process, the r-process, is responsible for the r-nuclei, whose abundance distribution shows peaks at mass numbers 80, 130, and 195. The p-process is responsible for production of the rarer, more proton-rich heavy isotopes (the p-nuclei) that cannot be made by neutron capture. The first quantitative evaluations of the ideas of Burbidge et al and Cameron came to light in the early 1960s with work on the s-process (Clayton et al 1961, Seeger et al 1965) and the r-process (Seeger et al 1965). These calculations further elucidated the mechanisms for heavy-element formation and showed the plausibility of the framework developed in the 1950s. Subsequent work has focused on determining the astrophysical sites where the r-, s-, and p-processes occurred with the help of improved nuclear details, stellar models, and abundances. A goal of this paper is to review the recent progress astrophysicists, astronomers, and physicists have made in these directions and to point out the problems that remain in our understanding of the formation of the heavy nuclei. Another, perhaps deeper, goal is to seek some understanding of why there are three major processes available to nature for synthesis of heavy elements. It is impossible for a single paper to cover all relevant aspects of the r-, s-, and p-processes; therefore, where possible, references to other reviews are given. Readers should turn to these reviews for more details. Nevertheless, it is hoped that the present paper gives some flavor for the rich field of heavy-element synthesis.

Journal ArticleDOI
TL;DR: The differences between black and white subjects' beliefs and perceptions about body size norms may explain, in part, why heavier body weights persist in some cultural groups.
Abstract: Different cultural norms and standards for appropriate female body size might contribute to the disparity in obesity rates between black and white adult females (46.0% and 24.6%, respectively). The purpose of this study was to measure adolescents' perceptions of ideal size and social norms regarding female body size as well as adolescents' perceptions of significant others' evaluation and expectations of the adolescents' body size. Subjects included 437 adolescent girls (247 white and 190 black) aged 13 to 19 (mean = 14.9, SD = 0.979) from six randomly selected public schools. The subjects' heights and weights were measured. Responses to a body image questionnaire and a series of nine female body drawings (arranged ordinally, 1 to 9, from thinnest to heaviest) were analyzed using the General Linear Model and Logistic Regression. The female body size considered ideal by black females was significantly larger than the size selected as ideal by white females (mean = 3.47 and 3.13, respectively; p < 0.001). Black females were two times more likely than white females to describe themselves as thinner than other girls their age (O.R. = 2.01, 95% C.I. 1.34, 3.01) and seven times as likely to say that they were not overweight (O.R. = 7.08, 95% C.I. 3.72, 13.45). White females wanted to be a smaller size than they currently were and felt encouraged by significant others to lose weight or reduce their size. Black females did not indicate as great a desire as whites to be smaller and they tended to feel that their size was considered satisfactory by significant others. Only subjects from the low SES group perceived that significant others wanted them to gain weight. The differences between black and white subjects' beliefs and perceptions about body size norms may explain, in part, why heavier body weights persist in some cultural groups.
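For readers unfamiliar with the reported statistics, an odds ratio and its 95% Wald confidence interval can be reproduced from a 2x2 table as sketched below; the counts are invented for illustration and are not the study's data.

```python
# Sketch: odds ratio and 95% Wald confidence interval from a 2x2 table.
# Counts below are invented placeholders, not the study's data.
import math

# rows: black / white; columns: described self as thinner / not thinner
a, b = 80, 110    # black: thinner, not thinner   (hypothetical)
c, d = 60, 187    # white: thinner, not thinner   (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
```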

Journal ArticleDOI
TL;DR: This article found that fears of snakes, insects, nonindigenous animals, and plants were the most frequently cited by students on field trips to wildland areas, and that anxiety reactions from students suggest that interpretive programs for some students may be corrective instead of formative.
Abstract: Interpreters at urban nature centers were asked to recall and list fears expressed by students on field trips to wildland areas. Forty-eight interpreters recalled 564 examples of fearful reactions. Responses were interpreted based on the literature on fear. Fears of snakes, insects, nonindigenous animals, and plants were most frequently cited. Enlightening was the high frequency of “fear of getting lost” and the emotion of disgust expressed in relation to natural objects. Results suggest the need to address not only students' fears but also their feelings of disgust. Anxiety reactions from students suggest that interpretive programs for some students may be corrective instead of formative.

Journal ArticleDOI
TL;DR: In this paper, a search for the most important electron captures and beta-decays after core silicon burning in massive stars is made for the 150 most abundant isotopes in a simplified fashion which generally includes the strongest transitions.
Abstract: A search is made for the most important electron captures and beta-decays after core silicon burning in massive stars. A nuclear statistical equilibrium code is used to compute isotopic abundances. Electron capture and beta-decay rates are estimated for the 150 most abundant isotopes in a simplified fashion which generally includes the strongest transitions. These estimates are made for nuclei in the fp-shell and use techniques similar to Fuller, Fowler, & Newman (1982a), and are compared to them. The general behavior of the rate of change of Y_e is examined. These methods are then used to follow a typical stellar trajectory, seeking the most important weak interactions during the formation of the iron core. Ranked lists of nuclei are given, to prioritize more detailed studies on individual nuclei. Beta-decays are found to be an important modification to the evolution below Y_e ≈ 0.46 as the core approaches a state of dynamic equilibrium between electron captures and beta-decays.
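The quantity tracked in this search can be written compactly; a standard form for the net rate of change of the electron fraction from electron captures and beta-decays (generic notation, not copied from the paper) is:

```latex
% Net rate of change of the electron fraction from weak interactions on nuclei i
% with abundances Y_i: beta^- decays raise Y_e, electron captures lower it.
\[
\frac{dY_e}{dt} \;=\; \sum_i Y_i \left( \lambda_{\beta^-,\,i} - \lambda_{\mathrm{ec},\,i} \right)
\]
```

Dynamic equilibrium near Y_e ≈ 0.46 corresponds to the two contributions canceling, which is why beta-decays cannot be neglected once the core becomes sufficiently neutron-rich.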

Journal ArticleDOI
TL;DR: Results indicate that hydrophobicity of the substratum is a primary determinant and is sufficient to induce appressorium formation in M. grisea.
Abstract: Infection by Magnaporthe grisea, the causal agent of rice blast, requires the formation of a melanized, dome-shaped infection cell, called an appressorium. Little is known about the signals and mechanisms regulating this important developmental process. We have previously observed a correlation between hydrophobicity of the contact surface and appressorium formation. To evaluate this thigmotropic response more precisely, we measured appressorium formation on the surfaces of silicon wafers modified to create various degrees of hydrophobicity. We also examined the effects of artificial ridges created on polystyrene surfaces. Hydrophobic surfaces induced a high level of appressorium formation, whereas hydrophilic surfaces did not. Tips of germ-tubes did not respond to ridges of any particular height, but formed appressoria in a random manner. These results indicate that hydrophobicity of the substratum is a primary determinant and is sufficient to induce appressorium formation in M. grisea.

Proceedings ArticleDOI
08 May 1994
TL;DR: The technique of imposing constraint conditions is utilized to coordinate and control the formation, and group stability is presented in terms of the bound of the transient response of the formation system.
Abstract: The task of coordinating and controlling a group of small mobile robots may be decomposed into five subtasks: formation of geometric pattern, alignment of each robot's orientation, coordination of the robots in the group, motion realization and stability of the formation in motion. The technique of imposing constraint conditions is utilized to coordinate and control the formation. Group stability is presented in terms of the bound of the transient response of the formation system.

Journal ArticleDOI
TL;DR: In this paper, an artificial neural network methodology for the classification of waveforms that are captured, as part of a larger scheme to automate the data collection process of recorders, is presented.
Abstract: Owing to the rise in power quality problems, the use of transient recorders to monitor power systems has increased steadily. The triggering strategies used by these transient recorders to capture disturbance waveforms are usually based on the violation of a set of predetermined measurement thresholds. Unfortunately, threshold based triggering strategies are difficult to apply in situations when only waveforms corresponding to a given class of disturbances need to be recorded. This inability of the recorder to automatically discriminate between waveform types tends to burden the user with the task of manually sifting and sorting through a large number of captured waveforms. This paper describes an artificial neural network methodology for the classification of waveforms that are captured, as part of a larger scheme to automate the data collection process of recorders. Two different neural network paradigms are investigated: the more common feedforward network, and a modification of that, the time-delay network (TDNN), which has the ability to encode temporal relationships found in the input data and exhibits a translation-shift invariance property. Comparisons of both network paradigms, based on a typical distribution circuit configuration, are also presented.
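As an illustration of the two paradigms compared above, the sketch below contrasts a plain feedforward classifier with a time-delay (1-D convolutional) classifier whose shared weights slide over the waveform, giving the translation-shift tolerance described. Layer sizes, waveform length, and the number of disturbance classes are arbitrary placeholders; this is not the authors' network.

```python
# Sketch contrasting a plain feedforward network with a time-delay network
# (shared weights applied to sliding windows of the waveform, i.e. 1-D
# convolution). All sizes are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

N_SAMPLES, N_CLASSES = 256, 4      # one captured waveform of 256 points (assumed)

feedforward = nn.Sequential(       # every input sample has its own weight
    nn.Linear(N_SAMPLES, 64),
    nn.ReLU(),
    nn.Linear(64, N_CLASSES),
)

time_delay = nn.Sequential(        # same kernel reused across time -> shift tolerance
    nn.Conv1d(1, 8, kernel_size=16, stride=4),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),       # pool over time before classification
    nn.Flatten(),
    nn.Linear(8, N_CLASSES),
)

waveform = torch.randn(10, N_SAMPLES)            # a batch of 10 captured waveforms
print(feedforward(waveform).shape)               # torch.Size([10, 4])
print(time_delay(waveform.unsqueeze(1)).shape)   # torch.Size([10, 4])
```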

Journal ArticleDOI
TL;DR: In this paper, an analysis of insurance claim files from Hurricanes Hugo and Andrew has revealed that most wind damage to houses is restricted to the envelope of the building; this envelope damage causes the insurance loss to be magnified by a factor ranging from two at lower wind speeds to nine at higher speeds.

Proceedings ArticleDOI
01 Aug 1994
TL;DR: This work presents an approach to regression testing that handles two important tasks: selecting tests from the existing test suite that should be rerun, and identifying portions of the code that must be covered by tests.
Abstract: Regression testing is performed on modified software to provide confidence that changed and affected portions of the code behave correctly. We present an approach to regression testing that handles two important tasks: selecting tests from the existing test suite that should be rerun, and identifying portions of the code that must be covered by tests. Both tasks are performed by traversing graphs for the program and its modified version. We first apply our technique to single procedures and then show how our technique is applied at the interprocedural level. Our approach has several advantages over previous work. First, our test selection technique is safe, selecting every test that may produce different output in the modified program. However, our selection technique chooses smaller test sets than other safe approaches. Second, our approach is the first safe approach to identify coverage requirements, and the first safe approach to do so interprocedurally. Third, our approach handles both structural and nonstructural program modifications, and processes multiple modifications with a single application of the algorithm. Finally, our approach can be automated, and is applicable to all programs for which program dependence graphs can be constructed.
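A highly simplified sketch of graph-walk-based test selection in the spirit described above: compare the original and modified program graphs, mark the nodes whose code changed, and rerun exactly those tests whose execution traces reach a changed node. The graph and trace representations here are invented for illustration and omit the paper's safety machinery.

```python
# Simplified selective-retest sketch: rerun the tests whose recorded traces
# touch program-graph nodes whose associated code changed. The graph/trace
# encoding is illustrative, not the paper's algorithm.

def changed_nodes(original, modified):
    """Nodes whose statements differ, or that were added or removed."""
    keys = set(original) | set(modified)
    return {n for n in keys if original.get(n) != modified.get(n)}

def select_tests(test_traces, original, modified):
    dirty = changed_nodes(original, modified)
    return [t for t, trace in test_traces.items() if dirty & set(trace)]

# Program graphs: node id -> source text of the node.
original = {"entry": "x = read()", "n1": "if x > 0", "n2": "y = x", "n3": "y = -x"}
modified = {"entry": "x = read()", "n1": "if x >= 0", "n2": "y = x", "n3": "y = -x"}

# Execution traces recorded for the existing test suite.
test_traces = {
    "t_positive": ["entry", "n1", "n2"],
    "t_negative": ["entry", "n1", "n3"],
    "t_smoke":    ["entry"],
}

print(select_tests(test_traces, original, modified))   # ['t_positive', 't_negative']
```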

Journal ArticleDOI
TL;DR: Results are in agreement with results of this laboratory and others that fibroblasts recognize the dimensions of surface configurations and react accordingly to the uppermost surface area presented to them, but conclusive data can only be obtained from a study of the focal adhesions.
Abstract: Previous studies suggested that surface topographic configurations of 1-3 microns influence cellular behavior and tissue response. They did not address which specific aspect of the configurations elicits the cellular response. We therefore investigated the effect of the orientation of several surface configurations. Seven different textures on polydimethyl siloxane (silicone; Dow Corning Silastic) specimens were used to test the question of whether orientation into (down) or out of the surface (up) affected cellular response to a material. The textures were smooth and photoetched configurations of 2 microns up, 2 microns down, 5 microns up, 5 microns down, 10 microns up, and 10 microns down. The response of cultured fibroblasts on these surfaces was compared with that of a standard tissue culture material, polyethylene terephthalate (Thermanox). The cell density was measured over a 12-day period with the use of a colorimetric assay. The uptake of methylene blue was measured daily and compared as an absorbance in a destaining agent. Cells on the 2 and 5 microns up arrays showed increased rates of proliferation and cell density as compared with their down counterparts. This would indicate that textures of 2 and 5 microns have a significant influence on cell growth, and that the surface with hills has a greater effect than the surface with wells. In contrast, the 10 microns up and 10 microns down arrays did not prove to be statistically different from smooth ones. This indicates that the orientation effect is related to the configuration size and that this configuration size is not viewed differently from smooth silicone by the cells. The presented data are in agreement with results of this laboratory and others that fibroblasts recognize the dimensions of surface configurations and react accordingly. Specifically, they appear to react to the uppermost surface area presented to them, but conclusive data can only be obtained from a study of the focal adhesions.

Journal ArticleDOI
TL;DR: It is suggested that the bioavailability of contaminants such as those tested cannot be accurately predicted in bioassays that expose organisms to aqueous representations of whole sediment.
Abstract: Bioassays have frequently been used as tools to simulate exposure of benthos to sediment-associated contaminants in hazard assessments. Due to the problems involved with estimating bioavailability in whole-sediment bioassays, aqueous fractions such as elutriates and pore water have been substituted for whole-sediment exposures. The objective of this research was to compare and evaluate the bioavailability of representative neutral hydrophobic contaminants in whole sediments and in aqueous extracts of whole sediment (elutriate and pore water) in simultaneous bioassays, using three representative indicator species, Diporeia spp., Chironomus riparius larvae, and Lumbriculus variegatus. Aqueous extracts of whole sediment did not accurately represent the exposure observed in whole sediment. Generally, the aqueous extracts underexposed organisms compared to whole sediment, even after adjusting accumulation to the fraction of organic carbon in the test media. Accumulation comparisons among whole-sediment, elutriate, and pore-water exposures depended on sampling time. At some sampling times for some contaminants, differences in accumulation between a particular aqueous extract and whole sediment were not significant; however, these similarities were not observed for all species at the particular sampling time. Bioaccumulation and contaminant clearance data suggest that a number of factors such as the indicator species, exposure media, and chemical/physical properties of individual contaminants are responsible for the accumulation differences observed among the tested media. Normalizing bioaccumulation to the amount of organic carbon in a source compartment adjusted for bioavailability differences of only some contaminants. We suggest that the bioavailability of contaminants such as those tested cannot be accurately predicted in bioassays that expose organisms to aqueous representations of whole sediment.
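The organic-carbon normalization mentioned in the closing sentences is commonly expressed as a biota-sediment accumulation factor; a standard form (not quoted from the paper, which may have used a simpler organic-carbon-only adjustment) is:

```latex
% Organic-carbon (and lipid) normalized accumulation factor (standard form):
% C_org and C_sed are contaminant concentrations in organism and sediment,
% f_lip and f_OC the lipid and organic-carbon fractions of each compartment.
\[
\mathrm{BSAF} \;=\; \frac{C_{\mathrm{org}} / f_{\mathrm{lip}}}{C_{\mathrm{sed}} / f_{\mathrm{OC}}}
\]
```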

Journal ArticleDOI
TL;DR: In this article, a model has been developed relating wearout to breakdown in thin oxides, and the average trap density immediately prior to breakdown was measured to be of the order of low 10^19 cm^-3 in 10 nm thick oxides fabricated on p-type substrates stressed with negative gate voltages.
Abstract: A model has been developed relating wearout to breakdown in thin oxides. Wearout has been described in terms of trap generation inside of the oxide during high voltage stressing prior to breakdown. Breakdown occurred locally when the local density of traps exceeded a critical value and the product of the electric field and the higher leakage currents through the traps exceeded a critical energy density. The measurement techniques needed for determining the density of high-voltage stress generated traps have been described along with the method for coupling the wearout measurements to breakdown distributions. The average trap density immediately prior to breakdown was measured to be of the order of low 10^19 cm^-3 in 10 nm thick oxides fabricated on p-type substrates stressed with negative gate voltages. The model has been used to describe several effects observed during measurements of time-dependent-dielectric-breakdown distributions. The area dependence of breakdown distributions, the differences in the breakdown distributions during constant current and constant voltage stressing, and the multi-modal distributions often observed were simulated using the model. The model contained the provision for incorporation of weak spots in the oxide.
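The local breakdown condition described in the abstract can be summarized as two simultaneous requirements; the symbols below are a generic restatement of the text, not the authors' notation:

```latex
% Restating the local breakdown criterion from the abstract in symbols
% (generic notation): breakdown occurs where the trap density reaches a
% critical value and the product of field and trap-assisted leakage current
% delivers a critical energy density w_crit.
\[
N_{\mathrm{trap}}(x) \ge N_{\mathrm{crit}}
\qquad\text{and}\qquad
E \cdot J_{\mathrm{trap}}(x) \ge w_{\mathrm{crit}}
\]
```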

Journal ArticleDOI
TL;DR: This paper considers the computation of definition-use and use-definition chains that extend across procedure boundaries at call and return sites, and handles the interprocedural effects of the data flow caused by both reference parameters and global variables, while preserving the calling context of called procedures.
Abstract: The dependencies that exist among definitions and uses of variables in a program are required by many language-processing tools. This paper considers the computation of definition-use and use-definition chains that extend across procedure boundaries at call and return sites. Intraprocedural definition and use information is abstracted for each procedure and is used to construct an interprocedural flow graph. This intraprocedural data-flow information is then propagated throughout the program via the interprocedural flow graph to obtain sets of reaching definitions and/or reachable uses for each interprocedural control point, including procedure entry, exit, call, and return. Interprocedural definition-use and/or use-definition chains are computed from this reaching information. The technique handles the interprocedural effects of the data flow caused by both reference parameters and global variables, while preserving the calling context of called procedures. Additionally, recursion, aliasing, and separate compilation are handled. The technique has been implemented using a Sun-4 Workstation and incorporated into an interprocedural data-flow tester. Results from experiments indicate the practicality of the technique, both in terms of the size of the interprocedural flow graph and the size of the data-flow sets.
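A tiny, invented example of the kind of chain the technique computes: a definition of a global variable in one procedure reaching a use in another through call and return sites. The program and the chain labels are illustrative only.

```python
# Invented example of interprocedural definition-use chains: definitions of
# the global `total` in reset() and add() reach uses in other procedures
# across the call/return sites in main().
total = 0                      # d0: module-level definition of total

def reset():
    global total
    total = 0                  # d1: definition that reaches uses interprocedurally

def add(x):
    global total
    total = total + x          # use of total (reached by d0/d1), then definition d2

def report():
    return total               # u1: use reached interprocedurally by d1 and d2

def main():
    reset()                    # d1 propagates out through the return site
    add(5)                     # kills d1 for later uses, creates d2
    return report()            # chain exercised here: d2 -> u1

# Interprocedural def-use chains a tool like the one described would report:
du_chains = [("d1 in reset", "use in add"), ("d2 in add", "u1 in report")]
print(main())                  # 5
```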

Journal ArticleDOI
TL;DR: Particles generated in joint simulators resemble the size and shape of in vivo wear particles from total joint replacement (TJR); however, particles prepared in vitro, whether in simulators or by other means, do not consistently resemble wear debris particles from TJR.