
Showing papers by "Carnegie Mellon University" published in 1984


Journal ArticleDOI
TL;DR: The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs as mentioned in this paper.
Abstract: In management contexts, mathematical programming is usually used to evaluate a collection of possible alternative courses of action en route to selecting one which is best. In this capacity, mathematical programming serves as a planning aid to management. Data Envelopment Analysis reverses this role and employs mathematical programming to obtain ex post facto evaluations of the relative efficiency of management accomplishments, however they may have been planned or executed. Mathematical programming is thereby extended for use as a tool for control and evaluation of past accomplishments as well as a tool to aid in planning future activities. The CCR ratio form introduced by Charnes, Cooper and Rhodes, as part of their Data Envelopment Analysis approach, comprehends both technical and scale inefficiencies via the optimal value of the ratio form, as obtained directly from the data without requiring a priori specification of weights and/or explicit delineation of assumed functional forms of relations between inputs and outputs. A separation into technical and scale efficiencies is accomplished by the methods developed in this paper without altering the latter conditions for use of DEA directly on observational data. Technical inefficiencies are identified with failures to achieve best possible output levels and/or usage of excessive amounts of inputs. Methods for identifying and correcting the magnitudes of these inefficiencies, as supplied in prior work, are illustrated. In the present paper, a new separate variable is introduced which makes it possible to determine whether operations were conducted in regions of increasing, constant or decreasing returns to scale in multiple input and multiple output situations. The results are discussed and related not only to classical single output economics but also to more modern versions of economics which are identified with "contestable market theories."
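For readers who want a concrete picture of the envelopment side of DEA, the following is a minimal sketch, not the authors' formulation or code: it solves the input-oriented envelopment LP with and without a convexity constraint, so that an overall (constant-returns) score can be split into pure technical and scale components. The data, the helper name `efficiency`, and the use of scipy are illustrative assumptions.

```python
# Hedged illustration: input-oriented DEA envelopment LPs solved with scipy.
# Data, names, and parameters below are hypothetical stand-ins.
import numpy as np
from scipy.optimize import linprog

def efficiency(X, Y, j0, variable_returns=False):
    """Input-oriented efficiency of unit j0.

    X: (m, n) inputs, Y: (s, n) outputs for n units.
    variable_returns=False gives a constant-returns (CCR-style) score;
    True adds the convexity constraint sum(lambda) = 1 (BCC-style), so
    technical and scale efficiency can be separated as score_CRS / score_VRS.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1 .. lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # X @ lam <= theta * x0   and   Y @ lam >= y0
    A_ub = np.block([[-X[:, [j0]], X], [np.zeros((s, 1)), -Y]])
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    A_eq, b_eq = (np.r_[0.0, np.ones(n)].reshape(1, -1), [1.0]) if variable_returns else (None, None)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

X = np.array([[2.0, 3.0, 6.0], [3.0, 2.0, 7.0]])   # two inputs, three units (made up)
Y = np.array([[1.0, 1.0, 2.0]])                     # one output
crs = efficiency(X, Y, j0=2)
vrs = efficiency(X, Y, j0=2, variable_returns=True)
print(crs, vrs, crs / vrs)   # overall, pure technical, and scale efficiency
```

The ratio of the two scores (constant-returns score divided by variable-returns score) is the usual scale-efficiency measure; values below one indicate that part of the measured inefficiency is attributable to scale rather than to technical slack.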

14,941 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore how people participate in computer-mediated communication and how computerization affects group efforts to reach consensus, reporting experiments that show differences in participation, decisions, and interaction among groups meeting face to face and groups communicating through simultaneous computer-linked discourse or electronic mail.
Abstract: As more and more people use computers for communicating, the behavioral and societal effects of computer-mediated communication are becoming critical research topics. This article describes some of the issues raised by electronic communication, illustrates one empirical approach for investigating its social psychological effects, and discusses why social psychological research might contribute to a deeper understanding of computer-mediated communication specifically and of computers and technological change in society more generally. One objective of our research is to explore how people participate in computer-mediated communication and how computerization affects group efforts to reach consensus. In experiments, we have shown differences in participation, decisions, and interaction among groups meeting face to face and in simultaneous computer-linked discourse and communication by electronic mail. We discuss these results and the design of subsequent research to highlight the many researchable social psychological issues raised by computing and technological change. Computer technologies are improving so swiftly these days that few of us comprehend even a small part of the change. Computers are transforming work and, in some cases, lives. Whether eager for this or resistant, many people believe the organizational, social, and personal effects of computers will be deeply felt (De Sola Poole, 1977; Hiltz & Turoff, 1978; Kling, 1980). Today, no one can predict in any detail the nature of the transformations that computers will bring, but one aspect of life that will certainly be affected is communication. The use of electronic mail and messages, long-distance blackboards, computer bulletin boards, instantaneously transferable data banks, and simultaneous computer conferences is reportedly advancing "like an avalanche" (Stockton, 1981; also see Kraemer, 1981). The U.S. federal judiciary, for example, is using electronic mail to speed the circulation of appellate opinion drafts among panels of judges (Weis, 1983). Computer conferences are being used for such legal proceedings as admission of evidence, trial scheduling, giving parties access to documents, and expert interrogation (Bentz & Potrykus, 1976; "Party-Line Plea," 1981). Other government agencies, such as the Department of Defense, as well as private firms, such as Westinghouse Corporation and Xerox Corporation, and some universities, use computer-mediated communication extensively for both routine transfer of data and nonroutine interpersonal communication and project work (e.g., Licklider & Vezza, 1978; U.S. Department of Commerce, 1977; Wang Corporation, 1982). Computer-mediated communication was once confined to technical users and was considered somewhat arcane. This no longer holds true. Computer-mediated communication is a key component of the emerging technology of computer networks. In networks, people can exchange, store, edit, broadcast, and copy any written document. They can send data and messages instantaneously, easily, at low cost, and over long distances. Two or more people can look at a document and revise it together, consult with each other on critical matters without meeting together or setting up a telephone conference, or ask for and give assistance interactively (Hiltz & Turoff, 1978; Williams, 1977). Networks, and hence computer-mediated communications, are proliferating at a tremendous rate. 
In addition to the older long-distance networks that connect thousands of scientists, professionals, and managers (e.g., the Department of Defense's ARPANET, GTE's TELENET), there are more and more local-area networks that link up computers within a region, city, or organization (e.g., Nestar System's CLUSTERBUS, Xerox's ETHERNET, Ford Aerospace's FLASHNET, and Wang Laboratories' WANGNET). Stimulating this growth are the decreasing costs and the advantages of networks over stand-alone systems, such as sharing high-speed printers and access to a common interface for otherwise incompatible equipment. The future of this technology cannot be foretold, but it is far from arcane. The functions and impact of computer-mediated communication are still poorly understood. Critical information (such as who uses it for what purposes) is lacking, and the social psychological significance is controversial (see, e.g., Turoff, 1982). Computers could make communication easier, just as the canning of perishables and the development of can openers made food preparation easier, or they could have much more complex implications. For instance, access to electronic communication may change the flow of information within organizations, altering status relations and organizational hierarchy. When a manager can receive electronic mail from 10,000 employees, what happens to existing controls over participation and information? When people can publish and distribute their own electronic newspaper at no cost, does the distribution of power change too? When communication is rapid and purely textual, do working groups find it easier or harder to resolve conflict? These unanswered questions illustrate that, although the technology may be impressive, little systematic research exists on its psychological, social, and cultural significance. Given such conditions it seems sensible to try to understand the fundamental behavioral, social, and organizational processes that surround computer-mediated communication. We believe that ideas and approaches from social psychology and other areas of behavioral science can be applied to these questions. This article is meant to describe some of the issues raised by electronic communication; to illustrate, from our own work, one empirical approach for investigating them; and to show why social psychological research might contribute to a deeper understanding of electronic communication specifically and of computers and technological change in society more generally. We begin by citing some existing research on computer-mediated communication. Most of this research addresses the technical capabilities of the electronic technologies. Next, we consider the possible social psychological impact, and we discuss some hypotheses and some possible implications for the outcomes of communication. Finally, we describe some of our own experiments on social psychological aspects of computer-mediated communication, using these to indicate potential lines of future research.

2,187 citations


Journal ArticleDOI
TL;DR: In this article, a mathematical model for communicating sequential processes is given, and a number of its interesting and useful properties are stated and proved, and the possibilities of nondetermimsm are fully taken into account.
Abstract: A mathematical model for communicating sequential processes is given, and a number of its interesting and useful properties are stated and proved. The possibilities of nondetermimsm are fully taken into account.
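As a hedged orientation (standard notation from the failures model associated with this line of work, not a restatement of the paper's own definitions): a process P over an alphabet Σ is identified with its set of failures, and internal nondeterminism is interpreted as union of failure sets:

\[
\mathrm{failures}(P) \subseteq \Sigma^{*} \times \mathcal{P}(\Sigma), \qquad
(s, X) \in \mathrm{failures}(P) \iff P \text{ may perform the trace } s \text{ and then refuse every event in } X,
\]
\[
\mathrm{failures}(P \sqcap Q) = \mathrm{failures}(P) \cup \mathrm{failures}(Q).
\]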

1,193 citations


Journal ArticleDOI
TL;DR: In this article, the usual formula for transition probabilities in nonrelativistic quantum mechanics is generalized to yield conditional probabilities for selected sequences of events at several different times, called consistent histories, through a criterion which ensures that, within limits which are explicitly defined within the formalism, classical rules for probabilities are satisfied.
Abstract: The usual formula for transition probabilities in nonrelativistic quantum mechanics is generalized to yield conditional probabilities for selected sequences of events at several different times, called “consistent histories,” through a criterion which ensures that, within limits which are explicitly defined within the formalism, classical rules for probabilities are satisfied. The interpretive scheme which results is applicable to closed (isolated) quantum systems, is explicitly independent of the sense of time (i.e., past and future can be interchanged), has no need for wave function “collapse,” makes no reference to processes of measurement (though it can be used to analyze such processes), and can be applied to sequences of microscopic or macroscopic events, or both, as long as the mathematical condition of consistency is satisfied. When applied to appropriate macroscopic events it appears to yield the same answers as other interpretative schemes for standard quantum mechanics, though from a different point of view which avoids the conceptual difficulties which are sometimes thought to require reference to conscious observers or classical apparatus.
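As a hedged pointer for orientation (this is the later, now-standard notation; the 1984 paper phrases the criterion differently): a history α is represented by a time-ordered product of projectors, and probabilities are assigned only when the consistency condition holds:

\[
C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1), \qquad
D(\alpha, \alpha') = \operatorname{Tr}\!\left[ C_\alpha \, \rho \, C_{\alpha'}^{\dagger} \right],
\]
\[
\operatorname{Re} D(\alpha, \alpha') = 0 \ \ (\alpha \neq \alpha') \quad \Longrightarrow \quad \Pr(\alpha) = D(\alpha, \alpha).
\]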

922 citations


Journal ArticleDOI
TL;DR: A super-polynomial lower bound is given for the size of circuits of fixed depth computing the parity function and connections are given to the theory of programmable logic arrays and to the relativization of the polynomial-time hierarchy.
Abstract: A super-polynomial lower bound is given for the size of circuits of fixed depth computing the parity function. Introducing the notion of polynomial-size, constant-depth reduction, similar results are shown for the majority, multiplication, and transitive closure functions. Connections are given to the theory of programmable logic arrays and to the relativization of the polynomial-time hierarchy.

915 citations


Journal ArticleDOI
TL;DR: In this article, the relation between the most productive scale size (mpss) for particular input and output mixes and returns to scale for multiple-input, multiple-output situations is explicitly developed.
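As a hedged sketch of the notions involved (textbook statements in the notation of the DEA sketch above, not a substitute for the paper's development): for an input-output mix (X_0, Y_0), a most productive scale size is a feasible scaling of that mix maximizing average productivity, and returns to scale can be read from the constant-returns envelopment solution:

\[
\text{MPSS:} \quad \max \ \alpha/\beta \quad \text{s.t.} \ (\beta X_0, \alpha Y_0) \ \text{is producible}, \qquad
\sum_j \lambda_j^{*} < 1,\ =1,\ >1 \ \Rightarrow \ \text{increasing, constant, decreasing returns to scale, respectively},
\]

with the usual caveat that the classification must be checked against alternative optimal \(\lambda^{*}\).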

906 citations


Book ChapterDOI
01 Jan 1984
TL;DR: A survey of cost accounting and managerial control practices and their relevance to the changing nature of industrial competition in the 1980s can be found in this paper, where the authors advocate a return to field-based research to discover the innovative practices being introduced by organizations successfully adapting to the new organization and technology of manufacturing.
Abstract: This paper surveys the development of cost accounting and managerial control practices and assesses their relevance to the changing nature of industrial competition in the 1980s. The paper starts with a review of cost accounting developments from 1850 through 1915, including the demands imposed by the origin of the railroad and steel enterprises and the subsequent activity from the scientific management movement. The DuPont Corporation (1903) and the reorganization of General Motors (1920) provided the opportunity for major innovations in the management control of decentralized operations, including the ROI criterion for evaluation of performance and formal budgeting and incentive plans. More recent developments have included discounted cash flow analysis and the application of management science and multiperson decision theory models. The cost accounting and management control procedures developed more than 60 years ago for the mass production of standard products with high direct labor content may no longer be appropriate for the planning and control decisions of contemporary organizations. Also, problems with using profits as the prime criterion for motivating and evaluating short-term performance are becoming apparent. This paper advocates a return to field-based research to discover the innovative practices being introduced by organizations successfully adapting to the new organization and technology of manufacturing.

893 citations


Journal ArticleDOI
TL;DR: A formal approach to the synthesis of compliant-motion strategies from geometric descriptions of assembly operations and explicit estimates of errors in sensing and control is described.
Abstract: Active compliance enables robots to carry out tasks in the presence of significant sensing and control errors. Compliant motions are quite difficult for humans to specify, however. Furthermore, robot programs are quite sensitive to details of geometry and to error characteristics and must, therefore, be constructed anew for each task. These factors motivate the search for automatic synthesis tools for robot programming, especially for compliant motion. This paper describes a formal approach to the synthesis of compliant-motion strategies from geometric descriptions of assembly operations and explicit estimates of errors in sensing and control. A key aspect of the approach is that it provides criteria for correctness of compliant-motion strategies.

825 citations


Journal ArticleDOI
TL;DR: This paper shows that consumers familiar with the product category demonstrate stronger brand organization for the new information, and that familiarity facilitates learning when consumers rate each alternative; when consumers are instead instructed to choose one alternative, an "inverted u" relationship between familiarity and learning emerges.
Abstract: Does product familiarity improve shoppers' ability to learn new product information? We examine an earlier study which indicated that greater familiarity increased learning during a new purchase decision. Our reanalysis confirms that the effect depends strongly upon decision strategy. Familiarity facilitates learning when consumers rate each alternative, but when consumers are instructed to choose one alternative, an “inverted u” relationship between familiarity and learning results. Our new analyses also show that consumers familiar with the product category demonstrate stronger brand organization for the new information.

819 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze the issue of market power in the context of markets for transferable property rights and present a model that explains how a single firm with market power might exercise its influence.
Abstract: The appeal of using markets as a means of allocating scarce resources stems in large part from the assumption that a market will approximate the competitive ideal. When competition is not a foregone conclusion, the question naturally arises as to how a firm might manipulate the market to its own advantage. This paper analyzes the issue of market power in the context of markets for transferable property rights. First, a model is developed that explains how a single firm with market power might exercise its influence. This is followed by an examination of the model in the context of a particular policy problem--the control of particulate sulfates in the Los Angeles region.

748 citations


Journal ArticleDOI
TL;DR: In this paper, investment incentive problems associated with debt financing are modeled and characterized, and the decision problem of residual claimants is explicitly formulated and their investment policies are characterized; the use of conversion features and warrants to control distortionary incentives is also analyzed.

Journal ArticleDOI
TL;DR: The Committee on Nomenclature of the Society for Analytical Cytology presents guidelines for the analysis of DNA content by cytometry, which cover: staining of DNA; cytogenetic and cytometric terminology; DNA index; resolution of measurements; and cytometry standards.
Abstract: The Committee on Nomenclature of the Society for Analytical Cytology presents guidelines for the analysis of DNA content by cytometry. These guidelines cover: staining of DNA; cytogenetic and cytometric terminology; DNA index; resolution of measurements; and cytometric standards.

Journal ArticleDOI
TL;DR: In this paper, the authors show that elected officials in the United States appear to represent relatively extreme support coalitions rather than the interests of middle-of-the-road voters, and that the distribution of senators is consistent with the hypothesis that, in the long run, both parties have an equal chance of winning any seat in the Senate.
Abstract: Elected officials in the United States appear to represent relatively extreme support coalitions rather than the interests of middle-of-the-road voters. This contention is supported by analysis of variance of liberal-conservative positions in the United States Senate from 1959 to 1980. Within both the Democratic and the Republican parties, there is considerable variation in liberal-conservative positions, but two senators from the same state and party tend to be very similar. In contrast, senators from the same state but from different parties are highly dissimilar, suggesting that each party represents an extreme support coalition in the state. Moreover, the distribution of senators is now consistent with the hypothesis that, in the long run, both parties have an equal chance of winning any seat in the Senate. This result suggests that there is now competition between equally balanced but extreme support coalitions throughout most of the United States.

Journal ArticleDOI
TL;DR: In a test of sentence interpretation by German-, Italian-, and English-speaking adults, the authors showed that English speakers relied overwhelmingly on word order, using a first-noun strategy in NVN sentences and a second-noun strategy in VNN and NNV sentences.

Journal ArticleDOI
TL;DR: A simple algorithm for computing a homology score for Escherichia coli promoters based on DNA sequence alone found that promoter strength could be predicted to within a factor of +/-4.1 in KBk2 over a range of 10^4 in the same parameter.
Abstract: We describe a simple algorithm for computing a homology score for Escherichia coli promoters based on DNA sequence alone. The homology score was related to 31 values, measured in vitro, of RNA polymerase selectivity, which we define as the product KBk2, the apparent second order rate constant for open complex formation. We found that promoter strength could be predicted to within a factor of +/-4.1 in KBk2 over a range of 10^4 in the same parameter. The quantitative evaluation was linked to an automated (Apple II) procedure for searching and evaluating possible promoters in DNA sequence files.
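A toy illustration of what a sequence-only promoter score can look like follows; the consensus hexamers, spacer penalty, and scoring weights here are simplified stand-ins, not the published algorithm or its trained values.

```python
# Hypothetical, simplified promoter scoring: count matches to the -35 and -10
# consensus boxes and penalize non-optimal spacer lengths, then scan a sequence
# for the best-scoring placement. Not the paper's algorithm.
MINUS35, MINUS10 = "TTGACA", "TATAAT"
SPACER_PENALTY = {15: 2, 16: 1, 17: 0, 18: 1, 19: 2}     # 17 bp treated as optimal

def hexamer_score(hexamer, consensus):
    # one point per position that matches the consensus base
    return sum(a == b for a, b in zip(hexamer, consensus))

def best_promoter_score(seq):
    """Scan seq and return (score, start_of_minus35) for the best candidate."""
    seq = seq.upper()
    best = (float("-inf"), None)
    for i in range(len(seq) - 6):
        for spacer, penalty in SPACER_PENALTY.items():
            j = i + 6 + spacer                            # start of the -10 box
            if j + 6 > len(seq):
                continue
            s = (hexamer_score(seq[i:i + 6], MINUS35)
                 + hexamer_score(seq[j:j + 6], MINUS10) - penalty)
            best = max(best, (s, i))
    return best

example = "ACG" + "TTGACA" + "T" * 17 + "TATAAT" + "GCG"
print(best_promoter_score(example))   # (12, 3): perfect boxes with a 17 bp spacer
```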

Journal ArticleDOI
TL;DR: In this paper, the authors consider the Nash equilibria of a game in which a discrete public good is to be provided, and find that the set of Nash equilibria with a refund is a superset of the set of equilibria without a refund.
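A toy enumeration can make the flavour of the comparison concrete; the threshold game below, its parameters, and the refund rule are illustrative assumptions, not the authors' model.

```python
# Hypothetical threshold public-good game: each of N players contributes C or
# nothing; the good (worth V to everyone) is provided if at least K contribute.
# With a refund, contributions are returned when the good is not provided.
from itertools import product

N, K = 3, 3
V, C = 2.0, 1.0

def payoff(profile, i, refund):
    provided = sum(profile) >= K
    pays = profile[i] and (provided or not refund)   # with a refund you pay only if provided
    return V * provided - C * pays

def is_nash(profile, refund):
    for i in range(N):
        deviation = list(profile)
        deviation[i] = 1 - deviation[i]
        if payoff(tuple(deviation), i, refund) > payoff(profile, i, refund):
            return False
    return True

for refund in (False, True):
    equilibria = [p for p in product((0, 1), repeat=N) if is_nash(p, refund)]
    print("refund" if refund else "no refund", equilibria)
```

With these made-up numbers every no-refund equilibrium reappears among the refund equilibria, together with some additional low-contribution profiles, which is the superset pattern the summary describes.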

Journal ArticleDOI
TL;DR: In this article, the authors examined the spatial equilibrium in political competition when established parties choose their platforms competitively while rationally anticipating entry of a vote-maximizing third party.
Abstract: This paper examines spatial equilibrium in political competition when established parties choose their platforms competitively while rationally anticipating entry of a vote-maximizing third party. The resulting equilibrium is substantially different from the Hotelling "median" equilibrium. Established parties are spatially separated and third parties will generally lose the election. This provides one theoretical explanation for the stability of two-party systems. Namely that non-cooperative behavior between established parties can effectively prevent third parties from winning.

Journal ArticleDOI
TL;DR: This paper investigated when people attend to information that is inconsistent with their expectations about another person and found that attention to inconsistent information potentially increases the perceiver's sense of prediction and control, so it should increase under outcome dependency.
Abstract: Two studies investigated when people attend to information that is inconsistent with their expectations about another person. It was hypothesized that people sometimes ignore information inconsistent with their expectations, but that outcome dependency would increase people's attention to inconsistent information. When the perceiver's outcomes depend on the other person, the perceiver may be more motivated to have a sense of prediction and control, rather than only motivated to maintain the expectation. Attention to inconsistent information potentially increases the perceiver's sense of prediction and control, so it should increase under outcome dependency. Attention to consistent information should be relatively unaffected by outcome dependency. These hypotheses were supported in two studies: In both, outcome dependency increased attention to inconsistent information, but did not influence attention to consistent information. In the second study, think-aloud protocols revealed that outcome-dependent subjects made more dispositional comments while attending to inconsistent information, and generated both facilitative and inhibitory dispositional attributions for the inconsistent information. This suggests that whether they integrated the inconsistency or not, they responded with more thought about the other person's stable characteristics. The results bear on previous work showing situational attributions for inconsistency and previous models of meaning change in impression formation.

Journal ArticleDOI
TL;DR: The processes by which subjects write LISP functions to meet problem specifications has been modeled in a simulation program called GRAPES (Goal Restricted Production System), which simulates the top-down, depth-first flow of control exhibited by subjects and produces code very similar to subject code.

Journal ArticleDOI
TL;DR: This paper shows that the greedy algorithm finds a solution with value at least 1/(1 + α) times the optimum value, where α is a parameter representing the ‘total curvature’ of the objective Z, and that the optimality of the greedy algorithm can be proved even in instances where Z is not additive.
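The sketch below is only meant to make the statement concrete, not to reproduce the paper's setting or proof: it runs the greedy algorithm on a small, made-up weighted-coverage function (monotone and submodular), computes the total curvature, and checks the 1/(1 + α) guarantee against the brute-force optimum under a cardinality constraint (a special case of a matroid constraint).

```python
from itertools import combinations

SETS = {0: {"a", "b"}, 1: {"b", "c", "d"}, 2: {"d", "e"}, 3: {"a", "e", "f"}}  # made-up instance
WEIGHT = {"a": 3, "b": 1, "c": 2, "d": 2, "e": 1, "f": 4}
K = 2                                                    # pick at most K sets

def f(S):
    covered = set().union(*[SETS[j] for j in S]) if S else set()
    return sum(WEIGHT[x] for x in covered)               # weighted coverage: monotone, submodular

def greedy(k):
    chosen = []
    for _ in range(k):
        best = max((j for j in SETS if j not in chosen),
                   key=lambda j: f(chosen + [j]) - f(chosen))
        chosen.append(best)
    return chosen

ground = list(SETS)
# total curvature: alpha = 1 - min_j [f(N) - f(N \ {j})] / f({j})
alpha = 1 - min((f(ground) - f([j for j in ground if j != e])) / f([e]) for e in ground)
optimum = max(f(list(S)) for S in combinations(ground, K))
greedy_value = f(greedy(K))
print(greedy_value, optimum, optimum / (1 + alpha),
      greedy_value >= optimum / (1 + alpha) - 1e-9)
```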

Journal ArticleDOI
TL;DR: In this paper, the authors generalize the approach to a 3D one-legged machine that runs and balances on an open floor without physical support, and decompose control of the machine into three separate parts: one part that controls forward running velocity, another part controlling attitude of the body, and a third part controlling hopping height.
Abstract: In order to explore the balance in legged locomotion, we are studying systems that hop and run on one springy leg. Previous work has shown that relatively simple algorithms can achieve balance on one leg for the special case of a system that is constrained mechanically to operate in a plane (Raibert, in press; Raibert and Brown, in press). Here we generalize the approach to a three-dimensional (3D) one-legged machine that runs and balances on an open floor without physical support. We decompose control of the machine into three separate parts: one part that controls forward running velocity, one part that controls attitude of the body, and a third part that controls hopping height. Experiments with a physical 3D one-legged hopping machine showed that this control scheme, while simple to implement, is powerful enough to permit hopping in place, running at a desired rate, and travel along a simple path. These algorithms that control locomotion in 3D are direct generalizations of those in 2D, with surpris...
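A hedged sketch of the three-part decomposition follows, using commonly cited textbook forms of the control laws; the gains, timing values, and function names are placeholder assumptions, not the machine's actual parameters.

```python
# Three decoupled controllers for a one-legged hopper (illustrative values only).

def hopping_height_control(in_stance, thrust=0.03):
    """Regulate hop height: deliver a fixed leg thrust during stance."""
    return thrust if in_stance else 0.0

def forward_speed_control(xdot, xdot_desired, stance_duration, k_xdot=0.04):
    """Choose the forward foot placement for the next touchdown: the 'neutral
    point' xdot * T_stance / 2 plus a correction proportional to speed error."""
    return xdot * stance_duration / 2.0 + k_xdot * (xdot - xdot_desired)

def attitude_control(phi, phidot, in_stance, k_p=8.0, k_v=0.6, phi_desired=0.0):
    """Servo body pitch (and, by analogy, roll) with hip torque while the foot is on the ground."""
    return -k_p * (phi - phi_desired) - k_v * phidot if in_stance else 0.0

# One control step during stance, with made-up state values:
print(hopping_height_control(True),
      forward_speed_control(xdot=0.9, xdot_desired=1.2, stance_duration=0.17),
      attitude_control(phi=0.05, phidot=-0.2, in_stance=True))
```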

Journal ArticleDOI
TL;DR: A general basis function and hyperspace description of SDFs is provided, a derivation showing the generality of the correlation matrix observation space is advanced, and a unified SDF filter synthesis technique is detailed for five different types of pattern recognition problem.
Abstract: A most attractive approach to distortion-invariant pattern recognition uses a synthetic discriminant function (SDF) as the matched spatial filter in a correlator. In this paper, we (1) provide a general basis function and hyperspace description of SDFs, (2) advance a derivation showing the generality of the correlation matrix observation space that we use in our filter synthesis, and (3) detail a unified SDF filter synthesis technique for five different types of pattern recognition problem.
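The core synthesis step can be sketched in a few lines; the random training data below and the equal-correlation-peak constraint values are stand-in assumptions, and the published technique covers several variants beyond this basic case.

```python
# Basic SDF synthesis sketch: the filter is a linear combination of training
# images chosen so that its inner product (correlation value at the origin)
# with each training image equals a prescribed value.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_train = 64 * 64, 5
X = rng.normal(size=(n_pixels, n_train))   # columns: vectorized training images (stand-ins)
u = np.ones(n_train)                       # desired correlation value per training image

R = X.T @ X                                # correlation (Gram) matrix of the training set
a = np.linalg.solve(R, u)                  # combination coefficients
h = X @ a                                  # SDF filter: h = X (X^T X)^{-1} u

print(np.allclose(X.T @ h, u))             # the constraints X^T h = u are satisfied
```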

Journal ArticleDOI
TL;DR: The use of orthogonal collocation is explored to reduce the dynamic optimization problem to an equality constrained nonlinear program (NLP) and the NLP is solved using a strategy that simultaneously converges and optimizes the algebraic model.
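To make the reduction concrete, here is a minimal single-element sketch under stated assumptions (made-up collocation nodes, a crude quadrature, and scipy's SLSQP standing in for the paper's solution strategy): the test problem minimizes the integral of u^2 subject to x' = u, x(0) = 0, x(1) = 1, whose analytic optimum is u = 1 with cost 1.

```python
import numpy as np
from scipy.optimize import minimize

nodes = np.array([0.0, 1/3, 2/3, 1.0])        # interpolation nodes; the last three collocate

def lagrange_deriv_matrix(ts):
    """D[k, j] = d/dt of the j-th Lagrange basis polynomial, evaluated at ts[k]."""
    n = len(ts)
    D = np.zeros((n, n))
    for j in range(n):
        roots = np.delete(ts, j)
        coeffs = np.poly(roots) / np.prod(ts[j] - roots)   # l_j(t), highest power first
        D[:, j] = np.polyval(np.polyder(coeffs), ts)
    return D

D = lagrange_deriv_matrix(nodes)

def unpack(z):
    x = np.concatenate(([0.0], z[:3]))        # state at the nodes, with x(0) = 0 fixed
    u = z[3:]                                 # control at the three collocation points
    return x, u

def residuals(z):                             # equality constraints of the NLP
    x, u = unpack(z)
    return np.concatenate((D[1:] @ x - u,     # collocation: x'(t_k) = u_k
                           [x[-1] - 1.0]))    # terminal condition x(1) = 1

def objective(z):
    _, u = unpack(z)
    return np.mean(u ** 2)                    # crude quadrature for the integral of u^2

z0 = np.concatenate((nodes[1:], np.ones(3)))  # linear-profile initial guess
sol = minimize(objective, z0, method="SLSQP",
               constraints=[{"type": "eq", "fun": residuals}])
print(sol.fun, sol.x[3:])                     # cost ~ 1, u ~ [1, 1, 1]
```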

Journal ArticleDOI
TL;DR: The authors investigated preschoolers' knowledge of counting principles by examining their ability to discriminate between features that are essential for correct counting and features that are typically present but unessential, and found that children who knew the word/object correspondence principle presumably would reject counts that violated it more often than counts that conformed to it.
Abstract: Preschoolers clearly are adept in executing the standard correct counting procedure. Whether they know the principles underlying the procedure is less clear, however. In the present experiments, preschoolers' knowledge of counting principles was investigated by examining their ability to discriminate between features that are essential for correct counting and features that are typically present but unessential. The standard counting procedure was analyzed into one essential feature, word/ object correspondence, and four optional features: counting adjacent objects consecutively, pointing once to each object, starting at an end of a row, and proceeding in a left to right direction. The experimenter asked 3to 5-year-olds to judge acceptable or unacceptable a puppet's counting that either violated the essential feature, that violated one or more unessential features, or that conformed to the standard correct procedure. Children who knew the word/object correspondence principle presumably would reject counts that violated it more often than counts that conformed to it. Each child's skill in counting rows of objects also was assessed. Skill in executing the standard counting procedure was found to precede knowledge of the underlying principle. Fourand 5-year-olds knew that word/object correspondence was essential, although a high percentage of them did not know that other typical features were unessential. An analysis of probable environmental input and of the features' utility in separating already-counted from to-be-counted objects was proposed to account for the relative probabilities that children knew that each of the five features of standard counting was essential or optional.

Journal ArticleDOI
TL;DR: In this article, the authors give conditions under which equilibrium exists in a model where freely mobile households choose a community of residence and an amount of housing consumption and vote on the level of public goods provision, and they discuss the implications of the conditions and their role in assuring existence of equilibrium.

Journal ArticleDOI
TL;DR: A multiple resolution representation for the two-dimensional gray-scale shapes in an image is defined by detecting peaks and ridges in the difference of lowpass (DOLP) transform and the principles for determining the correspondence between symbols in pairs of such descriptions are described.
Abstract: This paper defines a multiple resolution representation for the two-dimensional gray-scale shapes in an image. This representation is constructed by detecting peaks and ridges in the difference of lowpass (DOLP) transform. Descriptions of shapes which are encoded in this representation may be matched efficiently despite changes in size, orientation, or position. Motivations for a multiple resolution representation are presented first, followed by the definition of the DOLP transform. Techniques are then presented for encoding a symbolic structural description of forms from the DOLP transform. This process involves detecting local peaks and ridges in each bandpass image and in the entire three-dimensional space defined by the DOLP transform. Linking adjacent peaks in different bandpass images gives a multiple resolution tree which describes shape. Peaks which are local maxima in this tree provide landmarks for aligning, manipulating, and matching shapes. Detecting and linking the ridges in each DOLP bandpass image provides a graph which links peaks within a shape in a bandpass image and describes the positions of the boundaries of the shape at multiple resolutions. Detecting and linking the ridges in the DOLP three-space describes elongated forms and links the largest peaks in the tree. The principles for determining the correspondence between symbols in pairs of such descriptions are then described. Such correspondence matching is shown to be simplified by using the correspondence at lower resolutions to constrain the possible correspondence at higher resolutions.
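The construction of the bandpass stack and the peak-detection step can be sketched as follows; the Gaussian lowpass filters, scale spacing, and threshold are illustrative stand-ins rather than the paper's exact filters.

```python
# DOLP-style sketch: difference successive lowpass images to get bandpass images,
# then mark local maxima in each band as candidate peaks.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
image = rng.random((128, 128))
image[40:80, 50:90] += 2.0                        # a bright blob to detect

sigmas = [1, 2, 4, 8, 16]                         # one lowpass image per scale (assumed values)
lowpass = [ndimage.gaussian_filter(image, s) for s in sigmas]
bandpass = [lowpass[i] - lowpass[i + 1] for i in range(len(lowpass) - 1)]

for i, band in enumerate(bandpass):
    # local maxima: pixels equal to the maximum of their 3x3 neighbourhood, above an arbitrary threshold
    peaks = (band == ndimage.maximum_filter(band, size=3)) & (band > 0.01)
    print(f"band {i}: {int(peaks.sum())} peaks")
```

Linking peaks across neighbouring bands would then give the multiple-resolution tree described in the abstract.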

Journal ArticleDOI
TL;DR: In this paper, a unified approach for the tentative specification of the order of mixed stationary and nonstationary ARMA models is proposed, where an iterative regression procedure is given to produce consistent estimates of the autoregressive parameters and an extended sample autocorrelation function based on these consistent estimates is then defined and used for order determination.
Abstract: A unified approach for the tentative specification of the order of mixed stationary and nonstationary ARMA models is proposed. For the ARMA models, an iterative regression procedure is given to produce consistent estimates of the autoregressive parameters. An extended sample autocorrelation function based on these consistent estimates is then defined and used for order determination. One of the advantages of this new approach is that it eliminates the need to determine, usually rather arbitrarily, the order of differencing to produce stationarity in modeling time series. Comparisons with other existing identification methods are discussed, and several samples are given.
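The following is a drastically simplified stand-in for the idea only, not the paper's procedure: the actual ESACF is built from iterated AR regressions, whereas this sketch just fits AR(p) by ordinary least squares for increasing p and tabulates which residual autocorrelations are small, which is the kind of table one inspects for an order-determining pattern.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
e = rng.normal(size=n + 1)
y = np.zeros(n + 1)
for t in range(1, n + 1):                       # simulate a made-up ARMA(1,1) series
    y[t] = 0.6 * y[t - 1] + e[t] + 0.4 * e[t - 1]
y = y[1:]

def ar_residuals(y, p):
    """Residuals from an OLS AR(p) fit (p = 0 just demeans the series)."""
    if p == 0:
        return y - y.mean()
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    beta, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return y[p:] - X @ beta

def acf(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

threshold = 2 / np.sqrt(n)                      # rough two-standard-error band
for p in range(4):
    r = ar_residuals(y, p)
    row = ["x" if abs(acf(r, q)) > threshold else "o" for q in range(1, 5)]
    print(f"AR({p}) residual ACF flags:", " ".join(row))
```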

Journal ArticleDOI
TL;DR: Viability of cells was 50-60% immediately after scrape-loading and 90% for those cells remaining after 24 h of culture; the method should be applicable to a broad range of adherent cell types and useful for loading of diverse kinds of macromolecules.
Abstract: We describe a simple method for loading exogenous macromolecules into the cytoplasm of mammalian cells adherent to tissue culture dishes. Culture medium was replaced with a thin layer of fluorescently labeled macromolecules, the cells were harvested from the substrate by scraping with a rubber policeman, transferred immediately to ice cold media, washed, and then replated for culture. We refer to the method as "scrape-loading." Viability of cells was 50-60% immediately after scrape-loading and was 90% for those cells remaining after 24 h of culture. About 40% of adherent, well-spread fibroblasts contained fluorescent molecules 18 h after scrape-loading of labeled dextrans, ovalbumin, or immunoglobulin-G. On average, 10(7) dextran molecules (70,000-mol wt) were incorporated into each fibroblast by scrape-loading in 10 mg/ml dextran. The extent of loading depended on the concentration and molecular weight of the dextrans used. A fluorescent analog of actin could also be loaded into fibroblasts where it labeled stress fibers. HeLa cells, a macrophage-like cell line, 1774A.1, and human neutrophils were all successfully loaded with dextran by scraping. The method of scrape-loading should be applicable to a broad range of adherent cell types, and useful for loading of diverse kinds of macromolecules.

Journal ArticleDOI
TL;DR: In this article, the authors examined the determination of risk premiums in foreign exchange markets and found that the conditional expectation of the risk premium is a nonlinear function of the forward premium.