
Showing papers by "University of Pennsylvania" published in 1986


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that there is a monotone relation between the expected underpricing of an initial public offering and the uncertainty of investors regarding its value, and they also argue that the resulting underpricing equilibrium is enforced by investment bankers, who have reputation capital at stake.

2,526 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a transaction cost framework for investigating the entry mode decision in international marketing and provide guidelines for choosing the appropriate mode of entry, given certain characteristics of the firm, the product, and the environment.
Abstract: A “frontier issue” in international marketing is the appropriate choice of entry mode in foreign markets. The objective of this paper is to offer a transaction cost framework for investigating the entry mode decision. This framework provides 1) a theoretical basis for systematically interrelating the literature into propositions, and 2) propositions about interactions which resolve the apparently contradictory arguments advanced to date. The entry mode literature is reviewed in the context of these propositions, and guidelines are derived for choosing the appropriate mode of entry, given certain characteristics of the firm, the product, and the environment.

2,346 citations


Journal ArticleDOI
TL;DR: The authors found that the returns on small-firm stocks and low-grade bonds are more highly correlated with previous levels of asset prices in January than in the rest of the year.

1,866 citations


Journal ArticleDOI
TL;DR: The emeraldine base form of polyaniline, which consists of equal numbers of reduced and oxidized repeat units, is doped to the metallic conducting regime by aqueous 1 M HCl as mentioned in this paper.

1,813 citations


Journal ArticleDOI
TL;DR: Guidelines are proposed for the collection, analysis, and description of electromyographic (EMG) data that cover technological issues in EMG recording, social aspects of EMG experimentation, and limits to inferences that can be drawn in EMG research.
Abstract: Guidelines are proposed for the collection, analysis, and description of electromyographic (EMG) data. The guidelines cover technological issues in EMG recording, social aspects of EMG experimentation, and limits to inferences that can be drawn in EMG research. An atlas is proposed for facial EMG electrode placements, and standard EMG terminology is suggested.

1,625 citations


Journal ArticleDOI
TL;DR: In this article, a formal mathematical model for the analysis of informant responses to systematic interview questions is presented and tested in a situation in which the ethnographer does not know how much each informant knows about the cultural domain under consideration nor the answers to the questions.
Abstract: This paper presents and tests a formal mathematical model for the analysis of informant responses to systematic interview questions. We assume a situation in which the ethnographer does not know how much each informant knows about the cultural domain under consideration nor the answers to the questions. The model simultaneously provides an estimate of the cultural competence or knowledge of each informant and an estimate of the correct answer to each question asked of the informant. The model currently handles true-false, multiple-choice, and fill-in-the-blank type question formats. In familiar cultural domains the model produces good results from as few as four informants. The paper includes a table showing the number of informants needed to provide stated levels of confidence given the mean level of knowledge among the informants. Implications are discussed.
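
The consensus model is concrete enough to sketch. Below is a minimal Python illustration (not the authors' code) of the true/false case on simulated data: inter-informant agreement is corrected for guessing via M_ij = D_i·D_j + (1 − D_i·D_j)/2, the competences D_i are recovered from the corrected agreement matrix using a leading-eigenvector stand-in for the paper's minimum-residual factoring, and the answer key is inferred by competence-weighted voting. All sample sizes and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- simulate true/false interview data (all numbers hypothetical) ---
n_informants, n_items = 8, 40
true_comp = rng.uniform(0.3, 0.9, n_informants)   # latent competences D_i
key = rng.integers(0, 2, n_items)                 # unknown culturally correct answers

# An informant knows an item with probability D_i, otherwise guesses.
knows = rng.random((n_informants, n_items)) < true_comp[:, None]
guesses = rng.integers(0, 2, (n_informants, n_items))
data = np.where(knows, key, guesses)

# --- step 1: agreement between informants, corrected for guessing ---
# The model implies M_ij = D_i*D_j + (1 - D_i*D_j)/2, so D_i*D_j = 2*M_ij - 1.
match = (data[:, None, :] == data[None, :, :]).mean(axis=2)
dd = 2.0 * match - 1.0
np.fill_diagonal(dd, np.nan)

# --- step 2: competences from the (approximately) rank-one matrix D D^T ---
# The paper uses minimum-residual factoring; a leading eigenvector of the
# matrix with an imputed diagonal is a crude stand-in.
filled = np.where(np.isnan(dd), np.nanmean(dd, axis=1, keepdims=True), dd)
vals, vecs = np.linalg.eigh(filled)
v = vecs[:, -1]
v = v if v.sum() > 0 else -v                      # fix the sign ambiguity
d_hat = np.clip(v * np.sqrt(max(vals[-1], 0.0)), 0.05, 0.95)

# --- step 3: infer the answer key by competence-weighted voting ---
# An informant of competence D answers correctly with prob (1 + D)/2,
# giving log-odds weights per informant.
weights = np.log((1 + d_hat) / (1 - d_hat))
score = weights @ (2 * data - 1)                  # > 0 favors answer "1"
key_hat = (score > 0).astype(int)

print("estimated competences:", np.round(d_hat, 2))
print("fraction of answers recovered:", (key_hat == key).mean())
```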

1,590 citations


Journal ArticleDOI
TL;DR: The emeraldine salt form of polyaniline can be synthesized electrochemically as a film exhibiting a well defined fibrillar morphology closely resembling that of polyacetylene as mentioned in this paper.
Abstract: The emeraldine salt form of polyaniline, conducting in the metallic regime, can be synthesized electrochemically as a film exhibiting a well defined fibrillar morphology closely resembling that of polyacetylene. Cyclic voltammograms of chemically synthesized and electrochemically synthesized polyaniline are essentially identical. Probable chemical changes which occur and the compounds which are formed when chemically synthesized polyaniline is electrochemically oxidized and reduced between –0.2 and 1.0 V vs. SCE in aqueous HCl solutions at pH values ranging from –2.12 (6.0 mol dm⁻³) to 4.0 have been deduced from cyclic voltammetric studies. These are shown to be consistent with previous chemical and conductivity studies of the emeraldine base and emeraldine salt forms of polyaniline. It is proposed that the emeraldine salt form of polyaniline has a symmetrical conjugated structure with extensive charge delocalization resulting from a new type of doping of an organic polymer: salt formation, rather than the oxidation which occurs in the p-doping of all other conducting polymer systems.

1,580 citations


Journal ArticleDOI
TL;DR: It is concluded that human cytotrophoblasts differentiate in culture and fuse to form functional syncytiotrophoblasts, similar to those of intact term placentae.
Abstract: Highly purified functional cytotrophoblasts have been prepared from human term placentae by adding a Percoll gradient centrifugation step to a standard trypsin-DNase dispersion method. The isolated mononuclear trophoblasts averaged 10 microns in diameter, with occasional cells measuring up to 20-30 microns. Viability was greater than 90%. Transmission electron microscopy revealed that the cells had fine structural features typical of trophoblasts. In contrast to syncytial trophoblasts of intact term placentae, these cells did not stain for hCG, human placental lactogen, pregnancy-specific beta 1-glycoprotein or low mol wt cytokeratins by immunoperoxidase methods. Endothelial cells, fibroblasts, or macrophages did not contaminate the purified cytotrophoblasts, as evidenced by the lack of immunoperoxidase staining with antibodies against vimentin or alpha 1-antichymotrypsin. The cells produced progesterone (1 ng/10^6 cells · 4 h), and progesterone synthesis was stimulated up to 8-fold in the presence of 25-hydroxycholesterol (20 micrograms/ml). They also produced estrogens (1360 pg/10^6 cells · 4 h) when supplied with androstenedione (1 ng/ml) as a precursor. When placed in culture, the cytotrophoblasts consistently formed aggregates, which subsequently transformed into syncytia within 24-48 h after plating. Time lapse cinematography revealed that this process occurred by cell fusion. The presumptive syncytial groups were proven to be true syncytia by microinjection of fluorescently labeled alpha-actinin, which diffused completely throughout the syncytial cytoplasm within 30 min. Immunoperoxidase staining of cultured trophoblasts between 3.5 and 72 h after plating revealed a progressive increase in cytoplasmic pregnancy-specific beta 1-glycoprotein, hCG, and human placental lactogen concomitant with increasing numbers of aggregates and syncytia. At all time points examined, occasional single cells positive for these markers were identified. RIA of the spent culture media for hCG revealed a significant increase in secreted hCG, paralleling the increase in hCG-positive cells and syncytia identified by immunoperoxidase methods. We conclude that human cytotrophoblasts differentiate in culture and fuse to form functional syncytiotrophoblasts.

1,546 citations


Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate the inadequacy of traditional measures based on a firm's profitability for evaluating its strategic performance, and show that other measures, such as the satisfaction of all of the firm's stakeholders (and not merely its stockholders), are important discriminators of strategic performance.
Abstract: This paper demonstrates the inadequacy of traditional measures, based on a firm's profitability, for evaluating its strategic performance. Two other measures, one that attempts to assess the quality of a firm's transformations (and not merely its outcomes) and the other that attempts to measure the satisfaction of all of the firm's stakeholders (and not merely its stockholders), are shown here to be important discriminators of strategic performance. The performances of seven ‘excellent’ firms from the computer industry, featured in the recent book by Peters and Waterman, are contrasted with those of seven ‘non-excellent’ firms from the same industry, to develop a framework for measuring strategic performance.

1,326 citations


Journal ArticleDOI
TL;DR: In this article, the authors report the results of an empirical investigation based on data obtained from a random sample of 100 U.S. manufacturing firms, providing new findings bearing on each of these questions.
Abstract: To what extent would the rate of development and introduction of inventions decline in the absence of patent protection? To what extent do firms make use of the patent system, and what differences exist among firms and industries and over time in the propensity to patent? These questions are in need of much more study. This paper, which reports the results of an empirical investigation based on data obtained from a random sample of 100 U.S. manufacturing firms, provides new findings bearing on each of these questions.

1,240 citations


Journal ArticleDOI
TL;DR: It is concluded that genetic influences have an important role in determining human fatness in adults, whereas the family environment alone has no apparent effect.
Abstract: We examined the contributions of genetic factors and the family environment to human fatness in a sample of 540 adult Danish adoptees who were selected from a population of 3580 and divided into four weight classes: thin, median weight, overweight, and obese. There was a strong relation between the weight class of the adoptees and the body-mass index of their biologic parents - for the mothers, P less than 0.0001; for the fathers, P less than 0.02. There was no relation between the weight class of the adoptees and the body-mass index of their adoptive parents. Cumulative distributions of the body-mass index of parents showed similar results; there was a strong relation between the body-mass index of biologic parents and adoptee weight class and no relation between the index of adoptive parents and adoptee weight class. Furthermore, the relation between biologic parents and adoptees was not confined to the obesity weight class, but was present across the whole range of body fatness - from very thin to very fat. We conclude that genetic influences have an important role in determining human fatness in adults, whereas the family environment alone has no apparent effect.

Journal ArticleDOI
01 Apr 1986-Nature
TL;DR: The interaction of the purified CSAT antigen with these cytoskeletal components is investigated, and an interaction specifically between the CSAT antigen and talin is demonstrated.
Abstract: Many observations suggest the presence of transmembrane linkages between the cytoskeleton and the extracellular matrix. In fibroblasts both light and electron microscopic observations reveal a co-alignment between actin filaments at the cell surface and extracellular fibronectin [1-3]. These associations are seen at sites of cell matrix interaction, frequently along stress fibres and sometimes where these bundles of microfilaments terminate at adhesion plaques (focal contacts). Non-morphological evidence also indicates a functional linkage between the cytoskeleton and extracellular matrix. Addition of fibronectin to transformed cells induces flattening of the cells and a reorganization of the actin cytoskeleton, with the concomitant appearance of arrays of stress fibres [4-6]. Conversely, disruption of the actin cytoskeleton by treatment with cytochalasin B leads to release of fibronectin from the cell surface [7]. As yet, there is no detailed knowledge of the molecules involved in this transmembrane linkage, although several proteins have been suggested as candidates in the chain of attachment between bundles of actin filaments and the cytoplasmic face of the plasma membrane: these include vinculin [8], α-actinin [9] and talin [10], each one having been identified at regions where bundles of actin filaments interact with the plasma membrane and underlying cell-surface fibronectin [10-13]. Recently, the cell-substrate attachment (CSAT) antigen [14] has been identified as a plasma membrane receptor for fibronectin [15], raising the possibility that this glycoprotein complex may serve as a bridge between fibronectin and one or more of the underlying cytoskeletal components mentioned. Here we have investigated the interaction of the purified CSAT antigen with these cytoskeletal components, and we demonstrate an interaction specifically between the CSAT antigen and talin.

Journal ArticleDOI
TL;DR: This chapter examines relapse by integrating knowledge from the disorders of alcoholism, smoking, and obesity in an attempt to emphasize in a prototypical manner the overlap in etiological mechanisms and treatment rationales for disorders with powerful, underlying biological self-regulation components.
Abstract: This chapter examines relapse by integrating knowledge from the disorders of alcoholism, smoking, and obesity in an attempt to emphasize in a prototypical manner the overlap in etiological mechanisms and treatment rationales for disorders with powerful, underlying biological self-regulation components. Commonalities across these areas suggest at least three basic stages of behavior change: motivation and commitment, initial change, and maintenance. A distinction is made between the terms lapse and relapse, with lapse referring to the process (slips or mistakes) that may or may not lead to an outcome (relapse). The natural history of relapse is discussed, as are the consequences of relapse for patients and the professionals who treat them. Information on determinants and predictors of relapse is evaluated, with the emphasis on the interaction of individual, environmental, and physiological factors. Methods of preventing relapse are proposed and are targeted to the three stages of change. Specific research needs in these areas are discussed.

Journal ArticleDOI
01 Oct 1986-Nature
TL;DR: The supramolecular organization of the native nuclear lamina and the structure and assembly properties of purified lamins are analysed, and it is shown that the lamins constitute a previously unrecognized class of IF polypeptides.
Abstract: The nuclear lamina, a protein meshwork lining the nucleoplasmic surface of the inner nuclear membrane, is thought to provide a framework for organizing nuclear envelope structure and an anchoring site at the nuclear periphery for interphase chromatin. In several higher eukaryotic cells, the lamina appears to be a polymer comprised mainly of one to three immunologically related polypeptides of relative molecular mass (Mr) 60,000-75,000 (60-70K) termed lamins. Three lamins (A, B, and C) are typically present in mammalian somatic cells. Previous studies on nuclear envelopes of rat liver and Xenopus oocytes suggested that the lamina has a fibrillar or filamentous substructure. Interestingly, protein sequences recently deduced for human lamins A and C from complementary DNA clones indicate that both of these polypeptides contain a region of approximately 350 amino acids very similar in sequence to the coiled-coil alpha-helical rod domain that characterizes all intermediate-type filament (IF) proteins. Here we analyse the supramolecular organization of the native nuclear lamina and the structure and assembly properties of purified lamins, and show that the lamins constitute a previously unrecognized class of IF polypeptides.

Journal ArticleDOI
18 Jul 1986-Cell
TL;DR: The name integrin is proposed for this protein complex to denote its role as an integral membrane complex involved in the transmembrane association between the extracellular matrix and the cytoskeleton.

Journal ArticleDOI
TL;DR: In this article, the authors characterize subgame-perfect equilibria for a market in which the seller quotes a price each period, assuming zero costs, a positive interest rate, a continuum of buyers, and some technical conditions.

Journal ArticleDOI
TL;DR: Reviews of three books: Fluid Signs: Being a Person the Tamil Way, by E. Valentine Daniel; The Untouchable as Himself: Ideology, Identity and Pragmatism among the Lucknow Chamars, by Ravindra S. Khare; and The Intimate Enemy: Loss and Recovery of Self Under Colonialism, by Ashis Nandy.
Abstract: Fluid Signs: Being a Person the Tamil Way. E. VALENTINE DANIEL. The Untouchable as Himself: Ideology, Identity and Pragmatism among the Lucknow Chamars. RAVINDRA S. KHARE. The Intimate Enemy: Loss and Recovery of Self Under Colonialism. ASHIS NANDY.

Journal ArticleDOI
TL;DR: It is suggested that middle managers with low or negative commitment to the strategies formulated by senior management create significant obstacles to effective implementation.
Abstract: This article suggests that middle managers with low or negative commitment to the strategies formulated by senior management create significant obstacles to effective implementation.

Journal ArticleDOI
TL;DR: The main conclusions appear independent of the idealizations of the initial model, introduce a novel kind of parallel selection for peptides catalyzing connected sequences of reactions, depend upon a new kind of minimal critical complexity whose properties are definable, and suggest that the emergence of self-replicating systems may be a self-organizing collective property of critically complex protein systems in prebiotic evolution.

Posted Content
TL;DR: In this paper, the authors study temporal volatility patterns in seven nominal dollar spot exchange rates, all of which display strong evidence of autoregressive conditional heteroskedasticity (ARCH).
Abstract: We study temporal volatility patterns in seven nominal dollar spot exchange rates, all of which display strong evidence of autoregressive conditional heteroskedasticity (ARCH). We first formulate and estimate univariate models, the results of which are subsequently used to guide specification of a multivariate model. The key element of our multivariate approach is exploitation of factor structure, which facilitates tractable estimation via a substantial reduction in the number of parameters to be estimated. Such a latent-variable model is shown to provide a good description of multivariate exchange rate movements: the ARCH effects capture volatility clustering, and the factor structure captures commonality in volatility movements across exchange rates.
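
The ARCH effects described here are easy to demonstrate in miniature. The following sketch (a univariate illustration only, not the paper's multivariate latent-factor estimator; parameter values are assumptions, not estimates from the paper) simulates an ARCH(1) process and checks the signature of volatility clustering: raw returns are serially uncorrelated while squared returns are positively autocorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)

# ARCH(1): r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2
# (illustrative parameter values, not estimates from the paper)
omega, alpha, T = 0.05, 0.5, 2000
r = np.zeros(T)
sigma2 = np.full(T, omega / (1 - alpha))   # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = omega + alpha * r[t - 1] ** 2
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

def acf1(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return float((x[1:] * x[:-1]).sum() / (x * x).sum())

# Volatility clustering: the levels look white, the squares do not.
print("lag-1 autocorr of r:   %+.3f" % acf1(r))
print("lag-1 autocorr of r^2: %+.3f" % acf1(r ** 2))
```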

Journal ArticleDOI
TL;DR: In this article, a set of propositions is developed that focuses on the competitive factors influencing the diffusion of high technology innovation among business organizations.
Abstract: This article takes as its central concern the diffusion of high technology innovation among business organizations. A set of propositions is developed that focuses on the competitive factors influe...

Journal ArticleDOI
TL;DR: In this paper, direct investment is incorporated into a simple general equilibrium model of international trade and the analysis focuses on an attempt to endogenize the internalization decision, arguing that a reasonable approach assumes that arm's length contracts must be "simple" so that "complex" arrangements require internalization.
Abstract: Direct investment is incorporated into a simple general equilibrium model of international trade. The analysis focuses on an attempt to endogenize the internalization decision. It is argued that a reasonable approach assumes that arm's length contracts must be "simple" so that "complex" arrangements require internalization. The model relates direct investment to the degree of underlying uncertainty and to fundamental trade determinants, such as relative factor endowments. The behavior of the model contrasts sharply with that of the Markusen-Helpman model, which takes internalization for granted.

Journal ArticleDOI
TL;DR: In this article, the authors develop new tests of the unit root null hypothesis for the errors of a classical regression model against stationary and non-stationary (including explosive) alternatives; each test statistic is simplified so that it can be viewed as a von Neumann type ratio, and exact significance points are tabulated.
Abstract: This paper provides a framework for testing for a unit root in an observed time series against some alternatives considered previously by Anderson (1948). Some new tests for the unit root null hypothesis for the errors affecting a classical regression model against the non-stationary (including explosive) alternative hypothesis are developed. The previous results of Sargan and Bhargava (1983) and the new test statistics are then applied to test the simple random walk and the random walk with a constant drift null hypotheses against stationary and non-stationary one-sided alternatives. In each case, the test statistic is simplified in order that it could be viewed as a von Neumann type ratio and the exact significance points are tabulated. Finally, the unit root null hypotheses are tested using U.S. data on the velocity of money and the Michigan PSID.
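
As a rough illustration of the von Neumann type ratio the paper works with (a sketch under simplifying assumptions, not the paper's exact statistics or tabulated significance points), the following computes a Durbin-Watson style ratio for a simulated random walk and a stationary AR(1) series. Under the random-walk null the centered sum of squares in the denominator grows much faster than the sum of squared differences, driving the ratio toward zero; a stationary AR(1) with parameter ρ keeps it near 2(1 − ρ).

```python
import numpy as np

rng = np.random.default_rng(2)

def von_neumann_ratio(y):
    """Sum of squared first differences over the centered sum of squares."""
    return float(np.sum(np.diff(y) ** 2) / np.sum((y - y.mean()) ** 2))

T, rho = 500, 0.5
walk = np.cumsum(rng.standard_normal(T))      # unit root null: a random walk
ar = np.zeros(T)                              # stationary alternative
for t in range(1, T):
    ar[t] = rho * ar[t - 1] + rng.standard_normal()

print("random walk:      %.3f" % von_neumann_ratio(walk))   # near 0
print("stationary AR(1): %.3f" % von_neumann_ratio(ar))     # near 2*(1 - rho)
```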

Book
01 Jan 1986
TL;DR: A computation that is guaranteed to take at most cn^3 time for input of size n will be thought of as an ‘easy’ computation, and one that needs at most n^10 time is also easy.
Abstract: An algorithm is a method for solving a class of problems on a computer. The complexity of an algorithm is the cost, measured in running time, or storage, or whatever units are relevant, of using the algorithm to solve one of those problems. This book is about algorithms and complexity, and so it is about methods for solving problems on computers and the costs (usually the running time) of using those methods. Computing takes time. Some problems take a very long time, others can be done quickly. Some problems seem to take a long time, and then someone discovers a faster way to do them (a ‘faster algorithm’). The study of the amount of computational effort that is needed in order to perform certain kinds of computations is the study of computational complexity. Naturally, we would expect that a computing problem for which millions of bits of input data are required would probably take longer than another problem that needs only a few items of input. So the time complexity of a calculation is measured by expressing the running time of the calculation as a function of some measure of the amount of data that is needed to describe the problem to the computer. For instance, think about this statement: ‘I just bought a matrix inversion program, and it can invert an n × n matrix in just 1.2n^3 minutes.’ We see here a typical description of the complexity of a certain algorithm. The running time of the program is being given as a function of the size of the input matrix. A faster program for the same job might run in 0.8n^3 minutes for an n × n matrix. If someone were to make a really important discovery (see section 2.4), then maybe we could actually lower the exponent, instead of merely shaving the multiplicative constant. Thus, a program that would invert an n × n matrix in only 7n^2.8 minutes would represent a striking improvement of the state of the art. For the purposes of this book, a computation that is guaranteed to take at most cn^3 time for input of size n will be thought of as an ‘easy’ computation. One that needs at most n^10 time is also easy. If a certain calculation on an n × n matrix were to require 2^n minutes, then that would be a ‘hard’ problem. Naturally some of the computations that we are calling ‘easy’ may take a very long time to run, but still, from our present point of view the important distinction to maintain will be the polynomial time guarantee or lack of it. The general rule is that if the running time is at most a polynomial function of the amount of input data, then the calculation is an easy one, otherwise it’s hard. Many problems in computer science are known to be easy. To convince someone that a problem is easy, it is enough to describe a fast method for solving that problem. To convince someone that a problem is hard is hard, because you will have to prove to them that it is impossible to find a fast way of doing the calculation. It will not be enough to point to a particular algorithm and to lament its slowness. After all, that algorithm may be slow, but maybe there’s a faster way. Matrix inversion is easy. The familiar Gaussian elimination method can invert an n × n matrix in time at most cn^3. To give an example of a hard computational problem we have to go far afield. One interesting one is called the ‘tiling problem.’ Suppose we are given infinitely many identical floor tiles, each shaped like a regular hexagon. Then we can tile the whole plane with them, i.e., we can cover the plane with no empty spaces left over.
This can also be done if the tiles are identical rectangles, but not if they are regular pentagons. In Fig. 0.1 we show a tiling of the plane by identical rectangles, and in Fig. 0.2 is a tiling by regular hexagons. That raises a number of theoretical and computational questions. One computational question is this. Suppose we are given a certain polygon, not necessarily regular and not necessarily convex, and suppose we have infinitely many identical tiles in that shape. Can we or can we not succeed in tiling the whole plane? That elegant question has been proved to be computationally unsolvable. In other words, not only do we not know of any fast way to solve that problem on a computer, it has been proved that there isn’t any.
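
To make the ‘easy’ side of this dichotomy concrete, here is a short Python sketch (not from the book) of the Gaussian elimination the text cites: inverting a nonsingular n × n matrix by Gauss-Jordan elimination, which uses on the order of cn^3 arithmetic operations.

```python
import numpy as np

def invert(a):
    """Invert a nonsingular n x n matrix by Gauss-Jordan elimination with
    partial pivoting: on the order of n^3 arithmetic operations."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    aug = np.hstack([a, np.eye(n)])                      # augmented [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]                        # scale pivot row
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]     # eliminate column
    return aug[:, n:]                                    # now [I | A^-1]

a = np.array([[4.0, 7.0], [2.0, 6.0]])
print(invert(a) @ a)    # approximately the identity matrix
```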

Journal ArticleDOI
TL;DR: A broad framework within which one can examine the issue of standardization, suggesting that it might be a perfect strategy for some products, some companies, and some situations, but totally inappropriate for others as mentioned in this paper.
Abstract: Global marketing might be likened to prescriptions that apply to all situations. There is, however, a broad framework within which one can examine the issue of standardization, suggesting that it might be a perfect strategy for some products, some companies, and some situations, but totally inappropriate for others.

Journal ArticleDOI
TL;DR: Depressive symptoms and explanatory styles were found to be quite stable over the year and explanatory style both correlated with concurrent levels of depression and school achievement and predicted later changes in depression during the year.
Abstract: In this longitudinal study, the depressive symptoms, life events, and explanatory styles of 168 school children were measured five times during the course of 1 year. Measures of school achievement were obtained once during the year. Depressive symptoms and explanatory styles were found to be quite stable over the year. As predicted by the reformulated learned helplessness theory, explanatory style both correlated with concurrent levels of depression and school achievement and predicted later changes in depression during the year. Depression also predicted later explanatory styles. The implications of these results for intervention with children with depressive symptoms or school achievement problems are discussed.


Journal ArticleDOI
TL;DR: 2D Gabor filters, which model the receptive fields of simple cells in the striate cortex, are applied to texture discrimination, and the performance of the computer models suggests that cortical neurons with Gabor-like receptive fields may be involved in preattentive texture discrimination.
Abstract: A 2D Gabor filter can be realized as a sinusoidal plane wave of some frequency and orientation within a two dimensional Gaussian envelope. Its spatial extent, frequency and orientation preferences as well as bandwidths are easily controlled by the parameters used in generating the filters. However, there is an "uncertainty relation" associated with linear filters which limits the resolution simultaneously attainable in space and frequency. Daugman (1985) has determined that 2D Gabor filters are members of a class of functions achieving optimal joint resolution in the 2D space and 2D frequency domains. They have also been found to be a good model for two dimensional receptive fields of simple cells in the striate cortex (Jones 1985; Jones et al. 1985). The characteristic of optimal joint resolution in both space and frequency suggests that these filters are appropriate operators for tasks requiring simultaneous measurement in these domains. Texture discrimination is such a task. Computer application of a set of Gabor filters to a variety of textures found to be preattentively discriminable produces results in which differently textured regions are distinguished by first-order differences in the values measured by the filters. This ability to reduce the statistical complexity distinguishing differently textured regions, as well as the sensitivity of these filters to certain types of local features, suggests that Gabor functions can act as detectors of certain "texton" types. The performance of the computer models suggests that cortical neurons with Gabor-like receptive fields may be involved in preattentive texture discrimination.
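
A filter of the kind described is simple to generate. The sketch below builds the sinusoidal-carrier-in-a-Gaussian-envelope form and a small orientation bank of the sort applied to texture patches; all parameter values (size, wavelength, sigma) are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def gabor_2d(size, wavelength, theta, sigma, phase=0.0):
    """2D Gabor filter: a sinusoidal plane wave of the given wavelength and
    orientation, windowed by an isotropic Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the wave
    envelope = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A small bank over orientations, of the kind convolved with texture patches
# so that differently textured regions can be compared filter by filter.
bank = [gabor_2d(size=31, wavelength=8.0, theta=t, sigma=5.0)
        for t in np.linspace(0.0, np.pi, 4, endpoint=False)]
print(len(bank), bank[0].shape)    # 4 filters, each 31 x 31
```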

Journal ArticleDOI
TL;DR: In this article, the authors discuss the details of their analysis of the mathematical and structural properties of quasicrystals and discuss the computation of the diffraction pattern of a quasilattice, using as an example the case of icosahedral orientational symmetry.
Abstract: In a recent paper, we introduced the concept of quasicrystals [Phys. Rev. Lett. 53, 2477 (1984)], a new class of ordered atomic structures. Quasicrystals have long-range quasiperiodic translational order and long-range orientational order. In the present paper and the following one, we discuss the details of our analysis of the mathematical and structural properties of quasicrystals. We begin with a general overview of our analysis. We then discuss our computation of the diffraction pattern of a quasilattice, using as an example the case of icosahedral orientational symmetry. We demonstrate that two quasilattices with the same orientational symmetry and quasiperiodicity which are not locally isomorphic will have diffraction patterns with different peak intensities. Finally, we describe some examples of computer modeling of atomic quasicrystals.
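
The diffraction computation has a compact one-dimensional analogue. The sketch below is an illustration under assumptions, not the paper's icosahedral computation: it builds a Fibonacci-chain quasilattice by the substitution L → LS, S → L (interval lengths τ and 1) and evaluates the structure factor on a grid of wavenumbers, where quasiperiodic translational order shows up as sharp, densely placed Bragg-like peaks rather than a periodic comb.

```python
import numpy as np

# Fibonacci chain: substitution L -> LS, S -> L, interval lengths tau and 1.
tau = (1 + np.sqrt(5)) / 2
word = "L"
for _ in range(12):
    word = "".join("LS" if c == "L" else "L" for c in word)

lengths = np.where(np.array(list(word)) == "L", tau, 1.0)
x = np.concatenate([[0.0], np.cumsum(lengths)])   # atom positions

# Structure factor |S(k)|^2 on a wavenumber grid: a quasiperiodic chain gives
# sharp Bragg-like peaks at incommensurate positions.
k = np.linspace(0.1, 15.0, 4000)
amp = np.exp(1j * np.outer(k, x)).sum(axis=1) / len(x)
s = np.abs(amp) ** 2

print("strongest peak near k = %.3f (intensity %.3f)" % (k[np.argmax(s)], s.max()))
print("grid points with intensity > 0.1:", int((s > 0.1).sum()))
```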

Journal ArticleDOI
TL;DR: The authors reported the findings of the latest of a series of studies conducted to determine the effects of task type and participation pattern on language classroom interaction, finding that a task with a requirement for information exchange is crucial to the generation of conversational modification of classroom interaction.
Abstract: This article reports the findings of the latest of a series of studies conducted to determine the effects of task type and participation pattern on language classroom interaction. The results of this study are compared to those of an earlier investigation (Pica & Doughty, 1985a) in regard to optional and required information exchange tasks across teacher-directed, small-group, and dyad interactional patterns. The evidence suggests that a task with a requirement for information exchange is crucial to the generation of conversational modification of classroom interaction. This finding is significant in light of current theory, which argues that conversational modification occurring during interaction is instrumental in second language acquisition. Furthermore, the finding that group and dyad interaction patterns produced more modification than did the teacher-fronted situation suggests that participation pattern as well as task type have an effect on the conversational modification of interaction.