
Showing papers by "Carnegie Mellon University", published in 1975


Journal ArticleDOI
TL;DR: In this paper, a mathematical framework is developed to study the mechanical behavior of material surfaces; the tensorial nature of surface stress is established using the force and moment balance laws, and a linear theory with non-vanishing residual stress is derived.
Abstract: A mathematical framework is developed to study the mechanical behavior of material surfaces. The tensorial nature of surface stress is established using the force and moment balance laws. Bodies whose boundaries are material surfaces are discussed and the relation between surface and body stress examined. Elastic surfaces are defined and a linear theory with non-vanishing residual stress derived. The free-surface problem is posed within the linear theory and uniqueness of solution demonstrated. Predictions of the linear theory are noted and compared with the corresponding classical results. A note on frame-indifference and symmetry for material surfaces is appended.

2,641 citations


Journal ArticleDOI
TL;DR: In this article, the authors introduce a class of robust estimators of the parameters of a stochastic utility function, called maximum score estimators, which require only weak distributional assumptions for consistency.
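The binary-choice version of the maximum score idea can be sketched as follows. The criterion counts sign agreements between the observed choice and x·b; the grid search over a normalized parameter vector is purely illustrative and is not the estimator as developed in the paper.

```python
import math

# Illustrative sketch of a maximum score criterion (binary-choice case).
# The data, the grid search, and the normalization are assumptions made
# for illustration; only weak distributional assumptions are needed for
# consistency of the actual estimator, per the abstract.

def score(b, X, y):
    """Count observations whose choice y (0/1) agrees with sign of x.b."""
    return sum((xi[0] * b[0] + xi[1] * b[1] >= 0) == bool(yi)
               for xi, yi in zip(X, y))

def max_score(X, y, steps=360):
    # The criterion is scale-invariant, so normalize b to the unit circle
    # and grid-search over the angle.
    s, t = max((score((math.cos(t), math.sin(t)), X, y), t)
               for t in (2 * math.pi * k / steps for k in range(steps)))
    return (math.cos(t), math.sin(t)), s

X = [(1.0, 2.0), (1.0, -1.0), (1.0, 0.5), (1.0, -2.0)]
y = [1, 0, 1, 0]
b, s = max_score(X, y)
print(s)  # 4 -- all four toy observations classified correctly
```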

857 citations


Journal ArticleDOI
TL;DR: The problem of finding all maximal elements of V with respect to the partial ordering is considered and the computational complexity of the problem is defined to be the number of required comparisons of two components and is denoted by Cd(n).
Abstract: H. T. Kung, Carnegie-Mellon University, Pittsburgh, Pennsylvania; F. Luccio, Università di Pisa, Pisa, Italy; F. P. Preparata, University of Illinois, Urbana, Illinois. ABSTRACT. Let U1, U2, ..., Ud be totally ordered sets and let V be a set of n d-dimensional vectors in U1 × U2 × ... × Ud. A partial ordering is defined on V in a natural way. The problem of finding all maximal elements of V with respect to the partial ordering is considered. The computational complexity of the problem is defined to be the number of required comparisons of two components and is denoted by Cd(n). It is trivial that C1(n) = n − 1 and Cd(n) ≥ ⌈log2 n!⌉ for d ≥ 2.
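The problem statement can be illustrated with a naive computation of the maximal elements under the componentwise partial ordering. This is a sketch of the definition using O(n²) pairwise comparisons, not the paper's more efficient algorithm.

```python
# Naive maximal-elements computation under the componentwise partial
# ordering defined in the abstract. Illustrative only: O(n^2) comparisons,
# whereas the paper studies the minimum number of required comparisons.

def dominates(u, v):
    """True if u >= v in every component and u != v."""
    return all(a >= b for a, b in zip(u, v)) and u != v

def maximal_elements(vectors):
    """Return the vectors not dominated by any other vector."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors)]

points = [(1, 5), (3, 3), (2, 4), (4, 1), (2, 2)]
print(maximal_elements(points))  # [(1, 5), (3, 3), (2, 4), (4, 1)]
```

Here (2, 2) is dominated by (3, 3) and drops out; the others are pairwise incomparable.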

856 citations


Journal ArticleDOI
TL;DR: In this paper, the unrestricted Hartree–Fock (UHF) Hamiltonian is used as the unperturbed system and the technique is described as unrestricted Møller–Plesset theory to second order (UMP2).
Abstract: Møller–Plesset theory, in which electron correlation energy is calculated by perturbation techniques, is used in second order to calculate energies of the ground states of atoms up to neon. The unrestricted Hartree–Fock (UHF) Hamiltonian is used as the unperturbed system and the technique is then described as unrestricted Møller–Plesset to second order (UMP2). Use of large Gaussian basis sets suggests that the limiting UMP2 energies with a complete basis of s, p, and d functions account for 75–84% of the correlation energy. Preliminary estimates of the contributions of basis functions with higher angular quantum numbers indicate that full UMP2 limits give even more accurate total energies.

782 citations


Journal ArticleDOI
TL;DR: The tower of Hanoi problem is used to show that, even in simple problem environments, numerous distinct solution strategies are available, and different subjects may learn different strategies, and the analysis underscores the importance of subject-by-subject analysis of “what is learned” in understanding human behavior in problem-solving situations.
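One of the strategies available in this problem environment is the canonical recursive strategy, sketched below. The paper's point is precisely that subjects may learn other, distinct strategies; this shows only the standard 2^n − 1 move solution.

```python
# Standard recursive Tower of Hanoi strategy: one of the numerous distinct
# solution strategies the abstract refers to (illustrative sketch).

def hanoi(n, src, dst, aux, moves=None):
    """Append the moves that transfer n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, aux, dst, moves)   # clear the way to the largest disk
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)   # re-stack the rest on top of it
    return moves

print(len(hanoi(3, 'A', 'C', 'B')))  # 7 moves = 2**3 - 1
```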

662 citations


Journal ArticleDOI
TL;DR: A new series of calculations in the theory of spinodal decomposition is presented, based on a simple ansatz for the two-point distribution function which leads to closure of the hierarchy of equations of motion for the high-order correlation functions.
Abstract: We present a new series of calculations in the theory of spinodal decomposition. The computational scheme is based on a simple ansatz for the two-point distribution function which leads to closure of the hierarchy of equations of motion for the high-order correlation functions. The resulting theory is accurate throughout the spinodal region of the phase diagram, including at the boundaries of this region where the spinodal mechanism is difficult to distinguish from nucleation and growth. The computational scheme is worked out in detail for parameters approximating those of the three-dimensional, kinetic, spin-exchange Ising model with nearest-neighbor interactions. Numerical agreement with recent Monte Carlo data appears to be satisfactory.

617 citations



Journal ArticleDOI
TL;DR: In this paper, a continuous time model for optimal consumption, portfolio and life insurance rules, for an investor with an arbitrary but known distribution of lifetime, is derived as a generalization of the model by Merton.

416 citations


Journal ArticleDOI
TL;DR: A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality.
Abstract: A necessary and sufficient condition is given for an inequality with coefficients 0 or 1 to define a facet of the knapsack polytope, i.e., of the convex hull of 0–1 points satisfying a given linear inequality. A sufficient condition is also established for a larger class of inequalities (with coefficients not restricted to 0 and 1) to define a facet for the same polytope, and a procedure is given for generating all facets in the above two classes. The procedure can be viewed as a way of generating cutting planes for 0–1 programs.
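A well-known family of 0-1-coefficient inequalities for the knapsack polytope comes from minimal covers: each minimal cover C of the constraint sum(a_j x_j) <= b yields the valid inequality sum over j in C of x_j <= |C| − 1. The brute-force enumeration below is illustrative only and is not the facet-generating procedure of the paper.

```python
from itertools import combinations

# Illustrative sketch (not the paper's procedure): enumerate the minimal
# covers of a knapsack inequality sum(a[j] * x[j]) <= b. A cover is an
# index set whose weights exceed b; it is minimal if dropping any one
# index makes it feasible again.

def minimal_covers(a, b):
    n = len(a)
    covers = []
    for r in range(1, n + 1):
        for C in combinations(range(n), r):
            if sum(a[j] for j in C) > b:                       # C is a cover
                if all(sum(a[j] for j in C if j != k) <= b for k in C):
                    covers.append(C)                           # and minimal
    return covers

a, b = [5, 4, 3, 2], 7
print(minimal_covers(a, b))  # [(0, 1), (0, 2), (1, 2, 3)]
```

For instance, the cover (0, 1) gives the cutting plane x0 + x1 <= 1, since items 0 and 1 together weigh 9 > 7.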

411 citations



Journal ArticleDOI
TL;DR: An algorithm for general context-free recognition is given that requires less than n3 time asymptotically for input strings of length n.
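For context, the classical CYK recognizer for grammars in Chomsky normal form runs in O(n³) time, the baseline such subcubic results improve on asymptotically. A minimal sketch of that cubic baseline follows; the dictionary grammar encoding and the toy grammar are illustrative assumptions.

```python
# Classical O(n^3) CYK recognition for a grammar in Chomsky normal form --
# the baseline the paper's subcubic algorithm improves on. Grammar encoding
# here is an illustrative choice, not from the paper.

def cyk(word, unary, binary, start='S'):
    """unary: {terminal: {A : A -> terminal}}; binary: {(B, C): {A : A -> B C}}."""
    n = len(word)
    # table[i][j] = set of nonterminals deriving word[i:j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][i] = set(unary.get(ch, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                    # split point
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# Toy grammar for a^n b^n (n >= 1): S -> A T | A B, T -> S B, A -> a, B -> b
unary = {'a': {'A'}, 'b': {'B'}}
binary = {('A', 'T'): {'S'}, ('A', 'B'): {'S'}, ('S', 'B'): {'T'}}
print(cyk('aabb', unary, binary))  # True
print(cyk('aab', unary, binary))   # False
```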

Journal ArticleDOI
TL;DR: In this article, a boundary-integral equation based on the Green's function for an infinite anisotropic plate containing an exact crack is formulated; the numerical results are as accurate as previously published isotropic results.
Abstract: A numerical procedure based on the boundary-integral equation method, is formulated using the fundamental solution (Green's function) for an infinite anisotropic plate containing an exact crack. The boundary-integral equation developed can be solved numerically for the mode 1 and mode 2 stress intensity factors by approximating boundary data on the surface of an arbitrary body, excluding the crack surface. Thus the efficiency and generality of the boundary-integral equation method and the precision of exact crack model analyses are combined in a direct manner. The numerical results reported herein are as accurate as previously published isotropic results. The effects of material anisotropy are reported for center and double-edge cracked geometries. A path independent integral for obtaining mode 1 and mode 2 stress intensity factors directly for arbitrary loading is reported.

Journal ArticleDOI
TL;DR: The issues of the system organization of the HSII system are dealt with, which include a convenient modular structure for incorporating new knowledge into the system at any level, and a system structure suitable for execution on a parallel processing system.
Abstract: Hearsay II (HSII) is a system currently under development at Carnegie-Mellon University to study the connected speech understanding problem. It is similar to Hearsay I (HSI) in that it is based on the hypothesize-and-test paradigm, using cooperating independent knowledge sources communicating with each other through a global data structure (blackboard). It differs in the sense that many of the limitations and shortcomings of HSI are resolved in HSII. The main new features of the Hearsay II system structure are: 1) the representation of knowledge as self-activating, asynchronous, parallel processes, 2) the representation of the partial analysis in a generalized three-dimensional network (the dimensions being level of representation (e.g., acoustic, phonetic, phonemic, lexical, syntactic), time, and alternatives) with contextual and structural support connections explicitly specified, 3) a convenient modular structure for incorporating new knowledge into the system at any level, and 4) a system structure suitable for execution on a parallel processing system. The main task domain under study is the retrieval of daily wire-service news stories upon voice request by the user. The main parametric representations used for this study are 1/3-octave filter-bank and linear-predictive coding (LPC)-derived vocal tract parameters [10], [11]. The acoustic segmentation and labeling procedures are parameter-independent [7]. The acoustic, phonetic, and phonological components [23] are feature-based rewriting rules which transform the segmental units into higher level phonetic units. The vocabulary size for the task is approximately 1200 words. This vocabulary information is used to generate word-level hypotheses from phonetic and surface-phonemic levels based on prosodic (stress) information. The syntax for the task permits simple English-like sentences and is used to generate hypotheses based on the probability of occurrence of that grammatical construct [19]. 
The semantic model is based on the news items of the day, analysis of the conversation, and the presence of certain content words in the partial analysis. This knowledge is to be represented as a production system. The system is expected to be operational on a 16-processor minicomputer system [3] being built at Carnegie-Mellon University. This paper deals primarily with the issues of the system organization of the HSII system.

Journal ArticleDOI
TL;DR: Although the results for adults and children are similar except for the overall rates, the nature of the isomorphism between children and adults is unclear.

Journal ArticleDOI
TL;DR: In this paper, the early stages of spinodal decomposition in age hardening Cu-Ti alloys have been studied by electron microscopy, and a sequence of microstructures was used to show that continuous phase separation was the mechanism of decomposition.

Journal ArticleDOI
TL;DR: This paper used rational voting behavior as an organizing device to develop a framework within which to consider the effect of economic aggregates on voters and found that the effect on the participation rate is much clearer than the effects on either party.
Abstract: This paper uses rational voting behavior as an organizing device to develop a framework within which to consider the effect of economic aggregates on voters. Unlike most previous studies, ours permits the voter to vote for candidates of either party or to abstain. A principal finding is that the effect of the main economic aggregates on the participation rate is much clearer than the effects on either party. Our results deny that an incumbent administration can affect the control of Congress by stimulating the economy. Voters appear to make judgments about inflation, unemployment, and economic growth on the basis of long-term, not short-term, performance.

ReportDOI
03 Sep 1975
TL;DR: An organization is presented for implementing solutions to knowledge-based AI problems using the hypothesize-and-test paradigm as the basis for cooperation among many diverse and independent knowledge sources.
Abstract: An organization is presented for implementing solutions to knowledge-based AI problems. The hypothesize-and-test paradigm is used as the basis for cooperation among many diverse and independent knowledge sources (KS's). The KS's are assumed individually to be errorful and incomplete. A uniform and integrated multi-level structure, the blackboard, holds the current state of the system. Knowledge sources cooperate by creating, accessing, and modifying elements in the blackboard. The activation of a KS is data-driven, based on the occurrence of patterns in the blackboard which match templates specified by the knowledge source. Each level in the blackboard specifies a different representation of the problem space; the sequence of levels forms a loose hierarchy in which the elements at each level can approximately be described as abstractions of elements at the next lower level. This decomposition can be thought of as an a priori framework of a plan for solving the problem; each level is a generic stage in the plan. The elements at each level in the blackboard are hypotheses about some aspect of that level. The internal structure of an hypothesis consists of a fixed set of attributes; this set is the same for hypotheses at all levels of representation in the blackboard. These attributes are selected to serve as mechanisms for implementing the data-directed hypothesize-and-test paradigm and for efficient goal-directed scheduling of KS's. Knowledge sources may create networks of structural relationships among hypotheses. These relationships, which are explicit in the blackboard, serve to represent inferences and deductions made by the KS's about the hypotheses; they also allow competing and overlapping partial solutions to be handled in an integrated manner. The Hearsay II speech-understanding system is an implementation of this organization; it is used here as an example for descriptive purposes.
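The data-driven activation scheme described above can be caricatured in a few lines: hypotheses are posted to levels of a shared blackboard, and knowledge sources fire when something appears at the level they watch. All names here are illustrative; this is a sketch of the organizing idea, not the Hearsay II implementation.

```python
# Minimal caricature of the blackboard organization: levels hold
# hypotheses, and knowledge sources (KS's) are activated data-driven
# when a hypothesis is posted at the level they watch. Names and the
# toy KS are illustrative assumptions.

class Blackboard:
    def __init__(self, levels):
        self.levels = {lv: [] for lv in levels}
        self.sources = []                     # (trigger_level, action) pairs

    def register(self, trigger_level, action):
        self.sources.append((trigger_level, action))

    def post(self, level, hypothesis):
        self.levels[level].append(hypothesis)
        # Data-driven activation: run every KS watching this level.
        for trigger, action in self.sources:
            if trigger == level:
                action(self, hypothesis)

bb = Blackboard(['phonetic', 'lexical'])
# Toy KS: when a phonetic hypothesis appears, post a word-level hypothesis.
bb.register('phonetic', lambda bb, hyp: bb.levels['lexical'].append(hyp.upper()))
bb.post('phonetic', 'tuh')
print(bb.levels['lexical'])  # ['TUH']
```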

Journal ArticleDOI
TL;DR: An extensive empirical study is presented, reporting that Wagner-Whitin planning horizons were found for only a reasonably small subset of problems within 500 periods, while planning horizons, or at least near planning horizons, were found universally by the modified procedure; the Zabel procedure was intermediate in power.
Abstract: Forward algorithms that solve successively longer finite horizon problems and that possess good stopping rules such as a planning horizon seem better suited to the needs of a manager facing a partial information environment than the more common procedure of selecting a horizon [0, T] in advance. In this light, the Wagner and Whitin forward algorithm with a planning horizon procedure for the dynamic lot size model goes far beyond computational savings. Building on additional results due to Zabel, we develop new planning horizon procedures and near planning horizon procedures. A brief sketch of how to develop similar results for the production smoothing problem suggests that the basic methodology developed possesses some generality. We present an extensive empirical study that reports that Wagner-Whitin planning horizons were found for a reasonably small subset of problems within 500 periods, while planning horizons, or at least near planning horizons, were found universally by the modified procedure. The Zabel procedure was intermediate in power. The number of periods until a "near" horizon seems to be given empirically by $t = 5\sqrt{2K/hD}$ for the linear stationary case, where K is the setup cost, h the holding cost, and D the average demand rate. Extensions useful for a broad subset of the general concave cost case are also given.
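A minimal sketch of the forward dynamic-programming recursion for the dynamic lot size model is given below, together with the empirical near-horizon estimate quoted in the abstract. The sketch assumes a stationary holding cost and illustrative demand data, and omits the planning-horizon stopping rules that are the paper's contribution.

```python
import math

# Forward solution of the Wagner-Whitin dynamic lot size recursion
# (illustrative sketch, without planning-horizon stopping rules).
# f[t] = minimum cost of meeting demand in periods 1..t; the inner loop
# considers placing the last order in period j to cover periods j..t.

def wagner_whitin(demand, K, h):
    """demand: per-period demands; K: setup cost; h: holding cost/unit/period."""
    n = len(demand)
    f = [0.0] + [math.inf] * n
    for t in range(1, n + 1):
        for j in range(1, t + 1):
            hold = sum(h * (i - j) * demand[i - 1] for i in range(j, t + 1))
            f[t] = min(f[t], f[j - 1] + K + hold)
    return f[n]

# Empirical near-horizon estimate quoted in the abstract for the linear
# stationary case: t = 5 * sqrt(2K / (h D)).
def near_horizon(K, h, D):
    return 5 * math.sqrt(2 * K / (h * D))

print(wagner_whitin([20, 50, 10, 50, 50], K=100, h=1))  # 320.0
```

With these demands the optimum orders in period 1 (covering periods 1-3) and period 4 (covering 4-5), for a cost of 170 + 150 = 320.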

Journal ArticleDOI
TL;DR: In this paper, a survey study of elastic charge exchange reactions is presented, where the charge exchange cross sections are evaluated in a coupled-channel framework with optical potentials obtained from the Kisslinger and Laplacian models for the π-nucleon off-shell scattering amplitude.

Journal ArticleDOI
TL;DR: In this article, the authors compare the performance of Chow's test and Farley-Hinich test when the disturbance variance is known and conclude that the relative performance of the two tests does not depend appreciably on whether the variance is unknown.

Journal ArticleDOI
TL;DR: In this paper, the diffusion coating of nickel with aluminum was studied by a two-step aluminizing pack process involving initially an influx of aluminum at the surface (step 1) and later a partial homogenization of the aluminum-rich region under conditions of zero surface flux (step 2).
Abstract: The diffusion coating of nickel with aluminum was studied by a two-step aluminizing pack process involving initially an influx of aluminum at the surface (step 1) and later a partial homogenization of the aluminum-rich region under conditions of zero surface flux (step 2). The process was studied in the temperature range from 870 to 1000°C. Step 1 was characterized mainly as the rapid, parabolic growth (after an initial transient period) of the Ni2Al3 phase (γ) as a surface layer with concurrent growth of a thinner NiAl (δ) layer. Step 2 was characterized mainly as the rapid loss of the aluminum gradient in the γ layer followed by parabolic growth of the δ layer primarily by the solution of the γ phase. Mathematical models were developed, in which numerical methods and computer techniques as well as closed-form solutions were utilized. The models yielded growth rate predictions in agreement with the experimental data and were used to define the critical parameters controlling growth kinetics for the aluminide layers formed during this process.

Journal ArticleDOI
TL;DR: Based upon 15-min inhalation periods by humans and the animal findings, the concentration judged acceptable for 8 hr/day, 5 days/week of inhalation was 0.44 mg/liter (90 ppm), which was well tolerated by human volunteers in sensory threshold studies.


Journal ArticleDOI
TL;DR: The results indicate that whenever there exist modest intercorrelations among the variables, the proposed method is at least as good as the best alternative (a regression method) while being considerably faster and simpler computationally.
Abstract: A method for dealing with the problem of missing observations in multivariate data is developed and evaluated. The method uses a transformation of the principal components of the data to estimate missing entries. The properties of this method and four alternative methods are investigated by means of a Monte Carlo study of 42 computer-generated data matrices. The methods are compared with respect to their ability to predict correlation matrices as well as missing entries. The results indicate that whenever there exist modest intercorrelations among the variables (i.e., average off-diagonal correlation above .2), the proposed method is at least as good as the best alternative (a regression method) while being considerably faster and simpler computationally. Models for determining the best alternative based upon easily calculated characteristics of the matrix are given. The generality of these models is demonstrated using the previously published results of Timm.
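A principal-components imputation in the spirit of the method described can be sketched as follows: fill missing entries with column means, then iteratively re-estimate them from a low-rank reconstruction of the data matrix. The rank choice, iteration count, and data are illustrative assumptions, not the paper's exact transformation.

```python
import numpy as np

# Sketch of principal-components imputation (illustrative, not the
# paper's exact procedure): mean-fill the missing cells, then repeatedly
# replace them with the values of a rank-k SVD reconstruction.

def pca_impute(X, k=1, iters=50):
    X = np.array(X, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.nonzero(missing)[1])   # initial fill
    for _ in range(iters):
        mu = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
        approx = (U[:, :k] * s[:k]) @ Vt[:k] + mu             # rank-k reconstruction
        X[missing] = approx[missing]                          # update missing cells only
    return X

# Toy data: second column is twice the first, with one entry missing.
X = [[1.0, 2.0], [2.0, 4.0], [3.0, np.nan]]
imputed = pca_impute(X, k=1)
print(imputed)
```

On this rank-1-consistent toy matrix the iteration pulls the missing entry up from the column mean toward the value implied by the correlation structure.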

Journal ArticleDOI
TL;DR: In this article, the geometry and vibrational frequencies of the water dimer were calculated using ab initio LCAO-SCF theory using a split valence basis set.

Journal ArticleDOI
TL;DR: The subunit composition of the Ca2+-sensitive actomyosin was interesting in that no components corresponding to skeletal muscle troponin were obvious, and it is suggested that the activity of gizzard actomyosin is regulated by a protein on the thin filaments with a subunit weight of ~130,000.

Journal ArticleDOI
TL;DR: This article used data on candidate trips as well as estimates of the time candidates spend in states to conclude that much of the apparent empirical support for Brams and Davis's 3/2's hypothesis is an artifact of the candidates' consideration of each state's relative competitiveness and the statistical relationship between size and competitiveness.
Abstract: This essay addresses the question: Why does the Electoral College bias campaign resource allocations in favor of large states? Using data on candidate trips as well as estimates of the time candidates spend in states, we conclude, first, that much of the apparent empirical support for Brams and Davis's 3/2's hypothesis is an artifact of the candidates' consideration of each state's relative competitiveness and the statistical relationship between size and competitiveness. There is some evidence, however, for a residual bias. That is, after controlling for each state's competitiveness, campaign allocations still appear to favor larger states—at least for the two competitive elections of 1960 and 1968. We attribute that bias to corner solutions to the candidate's maximization problem and to the effects of sequential campaign planning. Thus, while we do not dispute the existence of bias over the course of the entire campaign, the data are consistent with a modified (albeit complex) proportional rule that each candidate applies sequentially during the campaign. Our conclusion is that the unit rule feature of the Electoral College, rather than weighted voting, is the predominant cause of bias.

Journal ArticleDOI
TL;DR: Acceptance of an attribution pattern linking negative moods to the approach of menstruation and the likelihood of internal and external attributions were examined in a questionnaire study in which cycle phase, mood, and environment were varied.
Abstract: Acceptance of an attribution pattern linking negative moods (depression, irritability) to the approach of menstruation and the likelihood of internal and external attributions were examined in a questionnaire study in which cycle phase (pre- versus postmenstrual), mood (positive versus negative), and environment (pleasant versus unpleasant) were varied. Subjects' ratings indicated that (a) biology was judged important for explaining negative moods occurring premenstrually; (b) inconsistency between mood and environment produced more internal (personality) attributions, while consistency enhanced external attributions; and (c) emotionally expressive behavior was thought to reflect underlying personality dispositions despite extenuating situational factors (assumed personal causation). The theoretical relevance of the findings to a new conceptualization of premenstrual emotionality and to an attributional chain relating female self-concept and premenstrual tension is discussed.


Journal ArticleDOI
TL;DR: In this paper, the isothermal decomposition of metastable β-phase in Ti-14 wt pct Mo-6 wt pct Al and Ti-11.6 wt pct Mo alloys has been studied by electron microscopy, and two distinct types of α-phase are observed as decomposition products.
Abstract: The isothermal decomposition of metastable β-phase in Ti-14 wt pct Mo-6 wt pct Al and Ti-11.6 wt pct Mo alloys has been studied by electron microscopy. Two distinct types of α-phase are observed as decomposition products. Type 1 α obeys the well-known Burgers orientation with respect to the β-phase matrix. Type 2 α, which is characterized by arced reflections in the electron diffraction patterns, does not obey the Burgers orientation relation, but rather can be described as being in a {10–12} 〈10–11〉 twin orientation with respect to Burgers orientation α. The Type 1 α forms initially at aging temperatures in the range of 400°C to 650°C, and is replaced by Type 2 α after longer holding times at the aging temperature. The Type 1 α precipitates form as monolithic plates in the ternary alloy and as monolithic needles in the binary. The Type 2 precipitates in the ternary alloy occur as very fine, contiguous particles in a colony-like arrangement wherein the colony has a plate morphology. The Type 2 precipitates in the binary alloy form as a peripheral shell around the Type 1 needle. Two possible mechanisms describing the formation of Type 2 α from Type 1 α are considered. The first of these is mechanical twinning of Type 1 α plates, and the second is a nucleation and growth mechanism which is driven by a composition difference between Types 1 and 2. The former is concluded to be less likely than the latter.