
Showing papers by "Brown University" published in 1993


Journal ArticleDOI
TL;DR: The authors analyze the role of wealth distribution in macroeconomics through investment in human capital and show that the initial distribution of wealth affects aggregate output and investment both in the short and in the long run, as there are multiple steady states.
Abstract: This paper analyzes the role of wealth distribution in macroeconomics through investment in human capital. It is shown that in the presence of credit markets' imperfections and indivisibilities in investment in human capital, the initial distribution of wealth affects aggregate output and investment both in the short and in the long run, as there are multiple steady states. This paper therefore provides an additional explanation for the persistent differences in per-capita output across countries. Furthermore, the paper shows that cross-country differences in macroeconomic adjustment to aggregate shocks can be attributed, among other factors, to differences in wealth and income distribution across countries.

4,062 citations


Journal ArticleDOI
09 Dec 1993-Nature
TL;DR: The cloning of complementary DNA encoding an extracellular Ca2+-sensing receptor from bovine parathyroid is reported with pharmacological and functional properties nearly identical to those of the native receptor.
Abstract: Maintenance of a stable internal environment within complex organisms requires specialized cells that sense changes in the extracellular concentration of specific ions (such as Ca2+). Although the molecular nature of such ion sensors is unknown, parathyroid cells possess a cell surface Ca(2+)-sensing mechanism that also recognizes trivalent and polyvalent cations (such as neomycin) and couples by changes in phosphoinositide turnover and cytosolic Ca2+ to regulation of parathyroid hormone secretion. The latter restores normocalcaemia by acting on kidney and bone. We now report the cloning of complementary DNA encoding an extracellular Ca(2+)-sensing receptor from bovine parathyroid with pharmacological and functional properties nearly identical to those of the native receptor. The novel approximately 120K receptor shares limited similarity with the metabotropic glutamate receptors and features a large extracellular domain, containing clusters of acidic amino-acid residues possibly involved in calcium binding, coupled to a seven-membrane-spanning domain like those in the G-protein-coupled receptor superfamily.

2,542 citations


Journal ArticleDOI
TL;DR: Using empirically scaled computer simulation models of continuous traits evolving along phylogenetic trees to obtain null distributions of F statistics for ANCOVA of comparative data sets is proposed.
Abstract: Biologists often compare average phenotypes of groups of species defined cladistically or on behavioral, ecological, or physiological criteria (e.g., carnivores vs. herbivores, social vs. nonsocial species, endotherms vs. ectotherms). Hypothesis testing typically is accomplished via analysis of variance (ANOVA) or covariance (ANCOVA; often with body size as a covariate). Because of the hierarchical nature of phylogenetic descent, however, species may not represent statistically independent data points, degrees of freedom may be inflated, and significance levels derived from conventional tests cannot be trusted. As one solution to this degrees of freedom problem, we propose using empirically scaled computer simulation models of continuous traits evolving along "known" phylogenetic trees to obtain null distributions of F statistics for ANCOVA of comparative data sets.
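The simulation logic can be sketched briefly. In the toy below (not the authors' code), a continuous trait evolves by Brownian motion along an assumed four-species phylogeny, so the covariance between species equals their shared branch length; a conventional one-way ANOVA F statistic (rather than the paper's ANCOVA) is computed for each simulated data set to build a null distribution against which an observed F can be compared. The tree, group labels, and trait values are hypothetical.

```python
# Minimal sketch of a phylogenetic simulation null distribution for F statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Phylogenetic covariance: entry (i, j) is the branch length shared by species
# i and j (a simple 4-species, two-clade tree is assumed here).
shared_branch = np.array([
    [1.0, 0.6, 0.0, 0.0],
    [0.6, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.6],
    [0.0, 0.0, 0.6, 1.0],
])
groups = np.array([0, 0, 1, 1])          # e.g. carnivores vs. herbivores

def anova_f(trait, groups):
    """Conventional one-way ANOVA F statistic across the two groups."""
    return stats.f_oneway(trait[groups == 0], trait[groups == 1]).statistic

# Null distribution: traits evolve by Brownian motion (multivariate normal with
# the phylogenetic covariance), so any group difference reflects shared descent.
null_f = np.array([
    anova_f(rng.multivariate_normal(np.zeros(4), shared_branch), groups)
    for _ in range(5000)
])

observed = np.array([2.1, 1.9, 3.4, 3.6])  # hypothetical species means
p_value = np.mean(null_f >= anova_f(observed, groups))
print(f"phylogenetically corrected p = {p_value:.3f}")
```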

1,188 citations


Posted Content
TL;DR: In this article, the authors examined a novel mechanism linking fertility and growth, where household fertility is determined by relative wages of women and men, and showed that lower fertility raises the level of capital per worker which in turn, since capital is more complementary to women's labor input than men's, raises women's relative wages.
Abstract: This paper examines a novel mechanism linking fertility and growth. Household fertility is determined by relative wages of women and men. Increasing women's wages reduces fertility by raising the cost of children relatively more than household income. Lower fertility raises the level of capital per worker which in turn, since capital is more complementary to women's labor input than men's, raises women's relative wages. This positive feedback leads to the possibility of multiple steady-state equilibria. Countries with low initial capital may converge to a development trap with high fertility, low capital, and low relative wages for women.

1,183 citations


Journal ArticleDOI
TL;DR: In this paper, the Navier-Stokes equations are augmented with an externally imposed body force that may vary in space and time, and the computed velocity is used to iteratively determine the force that drives the flow toward the desired value.
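A minimal sketch of the feedback-forcing idea, reduced to a single point with hypothetical gains and toy dynamics rather than a Navier-Stokes solver: the force is built from the accumulated and instantaneous velocity error, so the velocity is driven toward the desired value over successive time steps.

```python
# 0-D sketch of a feedback body force (hypothetical gains and toy dynamics).
alpha, beta = -200.0, -10.0      # feedback gains (negative for stability)
dt, u_desired = 1e-3, 0.0        # time step and target velocity (e.g. no-slip)
u, error_integral = 1.0, 0.0     # initial local velocity and error history

for step in range(2000):
    error_integral += (u - u_desired) * dt
    force = alpha * error_integral + beta * (u - u_desired)
    # Toy momentum balance: a constant background forcing pushes u away from
    # the target, while the feedback force pulls it back.
    du_dt = 5.0 + force
    u += du_dt * dt

print(f"velocity after feedback forcing: {u:.4f}")   # approaches u_desired
```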

1,119 citations


Journal ArticleDOI
01 Apr 1993-Sleep
TL;DR: Data support involvement of a biological factor in the adolescent phase preference delay and indicate that the current understanding of adolescent sleep patterns may need revision.
Abstract: Many teenagers go to bed and wake up significantly later than younger children, a developmental progression thought to reflect adolescent psychosocial processes. To determine whether biological processes may underlie a delay of phase preference in adolescents, 183 sixth-grade boys and 275 sixth-grade girls completed questionnaires for morningness/eveningness (M/E) and pubertal status. School environment and birth order were also evaluated. A significant relationship of pubertal status to M/E was found in girls, with a similar though nonsignificant trend in boys. No relationship between M/E and psychosocial factors was found. These data support involvement of a biological factor in the adolescent phase preference delay and indicate that our current understanding of adolescent sleep patterns may need revision.

987 citations


Journal ArticleDOI
TL;DR: In this paper, the average settling velocity in homogeneous turbulence of a small rigid spherical particle, subject to a Stokes drag force, has been shown to differ from that in still fluid owing to a bias from the particle inertia.
Abstract: The average settling velocity in homogeneous turbulence of a small rigid spherical particle, subject to a Stokes drag force, has been shown to differ from that in still fluid owing to a bias from the particle inertia (Maxey 1987). Previous numerical results for particles in a random flow field, where the flow dynamics were not considered, showed an increase in the average settling velocity. Direct numerical simulations of the motion of heavy particles in isotropic homogeneous turbulence have been performed where the flow dynamics are included. These show that a significant increase in the average settling velocity can occur for particles with inertial response time and still-fluid terminal velocity comparable to the Kolmogorov scales of the turbulence. This increase may be as much as 50% of the terminal velocity, which is much larger than was previously found. The concentration field of the heavy particles, obtained from direct numerical simulations, shows the importance of the inertial bias with particles tending to collect in elongated sheets on the peripheries of local vortical structures. This is coupled then to a preferential sweeping of the particles in downward moving fluid. Again the importance of Kolmogorov scaling to these processes is demonstrated. Finally, some consideration is given to larger particles that are subject to a nonlinear drag force where it is found that the nonlinearity reduces the net increase in settling velocity.
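The particle equation of motion underlying such simulations is dv/dt = (u(x,t) - v)/tau + g with dx/dt = v. Below is a minimal sketch in which a steady cellular flow stands in for the turbulence and all parameters are hypothetical; the quantity of interest is how the simulated mean settling rate compares with the still-fluid terminal velocity tau*g.

```python
# Sketch of a heavy particle with Stokes drag settling through a toy flow.
import numpy as np

tau = 0.1                                   # particle inertial response time
g = np.array([0.0, -1.0])                   # gravitational acceleration
dt, n_steps = 1e-3, 20000

def fluid_velocity(x):
    """Toy 2-D cellular flow (Taylor-Green vortices), not real turbulence."""
    return np.array([ np.sin(x[0]) * np.cos(x[1]),
                     -np.cos(x[0]) * np.sin(x[1])])

x = np.array([0.3, 0.7])
v = fluid_velocity(x) + tau * g             # start near local equilibrium
fall = 0.0
for _ in range(n_steps):
    v += dt * ((fluid_velocity(x) - v) / tau + g)   # Stokes drag plus gravity
    x += dt * v
    fall += -v[1] * dt                      # accumulated downward displacement

print("mean settling velocity:", fall / (n_steps * dt))
print("still-fluid terminal velocity:", -tau * g[1])
```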

966 citations


Journal ArticleDOI
TL;DR: Multiple risk was important in longitudinal prediction, even after prior measurement of child IQ was accounted for; the pattern of risk was less important than the total amount of risk present in the child's context.
Abstract: Intelligence scores of children in a longitudinal study were assessed at 4 and 13 years and related to social and family risk factors. A multiple environmental risk score was calculated for each child by counting the number of high-risk conditions from 10 risk factors: mother's behavior, mother's developmental beliefs, mother's anxiety, mother's mental health, mother's educational attainment, family social support, family size, major stressful life events, occupation of head of household, and disadvantaged minority status. Multiple risk scores explained one-third to one-half of IQ variance at 4 and 13 years. The stability between 4- and 13-year environmental risk scores (r = .77) was not less than the stability between 4- and 13-year IQ scores (r = .72). Effects remained after SES and race, or maternal IQ, were partialled; multiple risk was important in longitudinal prediction, even after prior measurement of child IQ was accounted for; the pattern of risk was less important than the total amount of risk present in the child's context.

833 citations


Proceedings Article
29 Nov 1993
TL;DR: In simple experiments involving a 36-node, irregularly connected network, Q-routing proves superior to a nonadaptive algorithm based on precomputed shortest paths and is able to route efficiently even when critical aspects of the simulation, such as the network load, are allowed to vary dynamically.
Abstract: This paper describes the Q-routing algorithm for packet routing, in which a reinforcement learning module is embedded into each node of a switching network. Only local communication is used by each node to keep accurate statistics on which routing decisions lead to minimal delivery times. In simple experiments involving a 36-node, irregularly connected network, Q-routing proves superior to a nonadaptive algorithm based on precomputed shortest paths and is able to route efficiently even when critical aspects of the simulation, such as the network load, are allowed to vary dynamically. The paper concludes with a discussion of the tradeoff between discovering shortcuts and maintaining stable policies.
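The heart of Q-routing is a simple table update. The sketch below (the toy topology and learning rate are assumptions, not the paper's experimental setup) keeps, at each node x, an estimate Q[x][d][y] of the time to deliver a packet for destination d via neighbor y, and nudges that estimate toward the observed one-hop delay plus the chosen neighbor's own best estimate for the rest of the trip.

```python
# Minimal sketch of the Q-routing update rule on a toy network.
from collections import defaultdict

neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # toy topology
eta = 0.5                                                  # learning rate
Q = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))

def route_step(x, dest, queue_delay, transmission_delay):
    """Forward a packet at node x toward dest; return the chosen neighbor."""
    # Greedy choice: neighbor with the lowest estimated remaining delivery time.
    y = min(neighbors[x], key=lambda n: Q[x][dest][n])
    # y reports back its own best estimate for the remainder of the trip.
    remaining = 0.0 if y == dest else min(Q[y][dest][n] for n in neighbors[y])
    # Q-routing update: move the estimate toward the observed one-hop cost
    # (time spent in x's queue plus transmission time) plus y's estimate.
    target = queue_delay + transmission_delay + remaining
    Q[x][dest][y] += eta * (target - Q[x][dest][y])
    return y
```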

779 citations


Journal ArticleDOI
TL;DR: In this paper, the authors make an analogy between images and statistical mechanics systems, where pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system.
Abstract: We make an analogy between images and statistical mechanics systems. Pixel gray levels and the presence and orientation of edges are viewed as states of atoms or molecules in a lattice-like physical system. The assignment of an energy function in the physical system determines its Gibbs distribution. Because of the Gibbs distribution-Markov random field (MRF) equivalence, this assignment also determines an MRF image model. The energy function is a more convenient and natural mechanism for embodying picture attributes than are the local characteristics of the MRF. For a range of degradation mechanisms, including blurring, non-linear deformations, and multiplicative or additive noise, the posterior distribution is an MRF with a structure akin to the image model. By the analogy, the posterior distribution defines another (imaginary) physical system. Gradual temperature reduction in the physical system isolates low-energy states (‘annealing’), or what is the same thing, the most probable states under the Gibbs distribution.
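The annealing procedure can be sketched for a binary image with an Ising-type smoothness prior. All parameters, the cooling schedule, and the stand-in "degraded image" below are hypothetical; the point is only the structure of the method: sample each pixel from its Gibbs conditional while the temperature is gradually lowered, so the chain concentrates on low-energy (high-posterior) configurations.

```python
# Sketch of Gibbs sampling with simulated annealing for a binary MRF.
import numpy as np

rng = np.random.default_rng(0)
noisy = rng.choice([-1, 1], size=(32, 32))     # stand-in for a degraded image
x = noisy.copy()
beta, lam = 1.0, 2.0                           # smoothness and data weights

def local_energy(img, i, j, value):
    """Energy contribution of pixel (i, j) taking the given value."""
    nbrs = [img[(i + di) % 32, (j + dj) % 32]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    smooth = -beta * value * sum(nbrs)          # prefer agreement with neighbors
    data = -lam * value * noisy[i, j]           # prefer agreement with the data
    return smooth + data

for sweep in range(50):
    T = 4.0 / (1.0 + sweep)                     # gradually lower the temperature
    for i in range(32):
        for j in range(32):
            dE = local_energy(x, i, j, 1) - local_energy(x, i, j, -1)
            # Gibbs conditional probability of the pixel taking value +1.
            p_plus = 1.0 / (1.0 + np.exp(np.clip(dE / T, -50, 50)))
            x[i, j] = 1 if rng.random() < p_plus else -1
```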

764 citations


Journal ArticleDOI
TL;DR: For the S&P 100 index options, the most actively traded contract in the United States, the authors found that implied volatility has virtually no correlation with future volatility, and it does not incorporate the information contained in recent observed volatility.
Abstract: Implied volatility is widely believed to be informationally superior to historical volatility, because it is the "market's" forecast of future volatility. But for S&P 100 index options, the most actively traded contract in the United States, we find implied volatility to be a poor forecast of subsequent realized volatility. In aggregate and across subsamples separated by maturity and strike price, implied volatility has virtually no correlation with future volatility, and it does not incorporate the information contained in recent observed volatility. Article published by Oxford University Press on behalf of the Society for Financial Studies in its journal, The Review of Financial Studies.
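The forecasting regression behind this conclusion is easy to state: regress subsequently realized volatility on implied volatility and on recent historical volatility, then examine the coefficient on implied volatility. The sketch below uses simulated numbers in place of the option data, purely to show the computation.

```python
# Sketch of the volatility-forecasting regression with simulated inputs.
import numpy as np

rng = np.random.default_rng(0)
n = 250
implied = 0.15 + 0.05 * rng.random(n)            # hypothetical implied vols
historical = 0.15 + 0.05 * rng.random(n)         # trailing realized vols
realized = 0.16 + 0.02 * rng.standard_normal(n)  # subsequently realized vols

X = np.column_stack([np.ones(n), implied, historical])
coef, *_ = np.linalg.lstsq(X, realized, rcond=None)
resid = realized - X @ coef
r2 = 1 - resid.var() / realized.var()
print("intercept, slope_implied, slope_hist:", coef)
print("R^2:", r2)   # near zero here, echoing the paper's qualitative finding
```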

Journal ArticleDOI
TL;DR: In this article, a numerical micromechanical study of void nucleation is presented, where constitutive relations are specified independently for the matrix, the void-nucleating particles and the interface.
Abstract: In a numerical micromechanical study of void nucleation, a framework is used where constitutive relations are specified independently for the matrix, the void-nucleating particles and the interface. Plane strain analyses are carried out for a doubly periodic array of circular cylindrical particles. The particles are taken to be rigid and the elastic-plastic deformations of the matrix are described in terms of continuum crystalline plasticity, using a planar crystal model that allows for three slip systems. Comparison is made with void-nucleation predictions based on a corresponding flow theory of plasticity with isotropic hardening. The crystal model can give rise to shear localization at the particle-matrix interface and shear localization, which leads to large localized strains in the matrix, is found to inhibit decohesion. The role of the triaxiality of the stress state in determining whether decohesion or localization occurs first is investigated. A parametric study is also carried out for a crystal matrix using two descriptions of the interface shear behaviour; one is periodic in the shear displacement across the interface, while the other allows for shear decohesion.

Journal ArticleDOI
TL;DR: A family of statistical models termed random regression models were used that provide a more realistic approach to analysis of longitudinal psychiatric data and indicated that both person-specific effects and serial correlation play major roles in the longitudinal psychiatric response process.
Abstract: Longitudinal studies have a prominent role in psychiatric research; however, statistical methods for analyzing these data are rarely commensurate with the effort involved in their acquisition. Frequently the majority of data are discarded and a simple end-point analysis is performed. In other cases, so-called repeated-measures analysis of variance procedures are used with little regard to their restrictive and often unrealistic assumptions and the effect of missing data on the statistical properties of their estimates. We explored the unique features of longitudinal psychiatric data from both statistical and conceptual perspectives. We used a family of statistical models termed random regression models that provide a more realistic approach to analysis of longitudinal psychiatric data. Random regression models provide solutions to commonly observed problems of missing data, serial correlation, time-varying covariates, and irregular measurement occasions, and they accommodate systematic person-specific deviations from the average time trend. Properties of these models were compared with traditional approaches at a conceptual level. The approach was then illustrated in a new analysis of the National Institute of Mental Health Treatment of Depression Collaborative Research Program dataset, which investigated two forms of psychotherapy, pharmacotherapy with clinical management, and a placebo with clinical management control. Results indicated that both person-specific effects and serial correlation play major roles in the longitudinal psychiatric response process. Ignoring either of these effects produces misleading estimates of uncertainty that form the basis of statistical tests of hypotheses.
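A random regression (mixed-effects) model of the kind advocated here can be fit in a few lines. The sketch below uses simulated data and hypothetical variable names, with a person-specific intercept and slope over time and deliberately irregular measurement occasions.

```python
# Sketch of a random regression (random intercept and slope) model fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for subject in range(40):
    intercept = 20 + rng.normal(0, 3)          # person-specific level
    slope = -1.0 + rng.normal(0, 0.5)          # person-specific improvement rate
    for week in rng.choice(9, size=6, replace=False):   # irregular visit weeks
        score = intercept + slope * week + rng.normal(0, 2)
        rows.append({"subject": subject, "week": week, "score": score})
data = pd.DataFrame(rows)

model = smf.mixedlm("score ~ week", data, groups=data["subject"],
                    re_formula="~week")        # random intercept and slope
result = model.fit()
print(result.summary())
```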

Book
20 May 1993
TL;DR: This dissertation addresses the problem of designing algorithms for learning in embedded systems; one of the algorithms presented uses Sutton's techniques for linear association and reinforcement comparison, while the interval estimation algorithm uses the statistical notion of confidence intervals to guide its generation of actions.
Abstract: This dissertation addresses the problem of designing algorithms for learning in embedded systems. This problem differs from the traditional supervised learning problem. An agent, finding itself in a particular input situation must generate an action. It then receives a reinforcement value from the environment, indicating how valuable the current state of the environment is for the agent. The agent cannot, however, deduce the reinforcement value that would have resulted from executing any of its other actions. A number of algorithms for learning action strategies from reinforcement values are presented and compared empirically with existing reinforcement-learning algorithms. The interval-estimation algorithm uses the statistical notion of confidence intervals to guide its generation of actions in the world, trading off acting to gain information against acting to gain reinforcement. It performs well in simple domains but does not exhibit any generalization and is computationally complex. The cascade algorithm is a structural credit-assignment method that allows an action strategy with many output bits to be learned by a collection of reinforcement-learning modules that learn Boolean functions. This method represents an improvement in computational complexity and often in learning rate. Two algorithms for learning Boolean functions in k-DNF are described. Both are based on Valiant's algorithm for learning such functions from input-output instances. The first uses Sutton's techniques for linear association and reinforcement comparison, while the second uses techniques from the interval estimation algorithm. They both perform well and have tractable complexity. A generate-and-test reinforcement-learning algorithm is presented. It allows symbolic representations of Boolean functions to be constructed incrementally and tested in the environment. It is highly parametrized and can be tuned to learn a broad range of function classes. Low-complexity functions can be learned very efficiently even in the presence of large numbers of irrelevant input bits. This algorithm is extended to construct simple sequential networks using a set-reset operator, which allows the agent to learn action strategies with state. These algorithms, in addition to being studied in simulation, were implemented and tested on a physical mobile robot.
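The interval-estimation idea is easiest to see in a bandit setting: keep per-action success counts, compute an upper confidence bound on each action's success probability, and act greedily on that bound, so poorly explored actions look optimistic and get tried. The action set, reward probabilities, and confidence level below are hypothetical illustrations, not the dissertation's experiments.

```python
# Sketch of interval-estimation action selection for a Bernoulli bandit.
import math, random

z = 1.96                                   # ~95% confidence level
true_p = [0.2, 0.8, 0.5]                   # hypothetical action success rates
successes = [0, 0, 0]
trials = [0, 0, 0]

def choose_action():
    """Pick the action with the largest upper confidence bound."""
    best, best_ub = 0, -1.0
    for a in range(len(trials)):
        if trials[a] == 0:
            return a                        # untried actions are tried first
        p = successes[a] / trials[a]
        ub = p + z * math.sqrt(p * (1 - p) / trials[a])   # upper bound on p
        if ub > best_ub:
            best, best_ub = a, ub
    return best

random.seed(0)
for _ in range(500):
    a = choose_action()
    reward = 1 if random.random() < true_p[a] else 0
    trials[a] += 1
    successes[a] += reward

print(trials)    # most trials should concentrate on the best action
```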

Journal ArticleDOI
TL;DR: Although the results are less favorable than those reported by centers with large institutional experiences with this disease and are inferior to those of the Japanese and other Eastern centers, they suggest potential for increasing survival by upstaging through earlier diagnosis and using resectional techniques to more adequately control local regional disease.
Abstract: Objective The major purpose of this study was to document the modes of presentation, diagnostic methods, clinical management, and outcome of gastric cancer as reported by tumor registries of US hospitals and cancer programs approved by the American College of Surgeons. Summary background data Gastric cancer continues to diminish in the US, but the stage of disease and survival outcome after surgical resection are unchanged despite increased availability and sophistication of diagnostic techniques. This is in contrast to the marked improvement in survival outcome in Japanese and other Eastern series over the last decades. Possible reasons for the improved Japanese results have been earlier detection secondary to active diagnostic surveillance of the population and widespread adoption of aggressive surgical resection emphasizing wide-field node (R2) dissection. Although selected US centers using the Japanese approach report better survival data, the approach has not been widely adopted by US treatment centers. Methods Tumor registries at American College of Surgeons (ACS) approved hospitals were mailed a study protocol in 1987. They were instructed to review 25 consecutive patients with gastric cancer treated in 1982 (long-term study) and 25 patients treated in 1987 (short-term study). A detailed protocol included significant history, diagnostic results, staging, pathology findings, and treatment results. The data forms on 18,365 patients were returned and analyzed (11,264 patients in the long-term study and 7101 patients in the short-term study). Results Of 18,365 patients, 63% were males. The median ages were 68.4 years in males and 71.9 years in females. There was a history of gastric ulcer in 25.5% of the patients. Lesion location was upper third in 31%, middle third in 14%, distal third in 26%, and entire stomach in 10% of patients (and the site was unknown in 19%). Gastric resection was performed for 80% of upper third cancers and 85% of distal third cancers; 50% of patients with total gastric involvement had gastric resection. The extent of gastric resection varied according to location. For lower third lesions, subtotal gastrectomy was done in 55% of the cases, extended resection in 21%, and total gastrectomy in 6%. For proximal lesions, 29% had subtotal, 4.6% had total, and 41% had extended gastrectomies (including esophagus), and 13.6% had dissection of celiac nodes. The operative mortality rate was 7.2%. Staging (American Joint Committee on Cancer [AJCC]) was as follows: I, 17%; II, 17%; III, 36%; and IV, 31%. The overall survival rate reflecting deaths from all causes was 14% among 10,891 patients diagnosed in 1982, and it was 19% in patients having resection. The disease specific survival rate was 26%. The survival rate after resection was 19% and 21% for lower and mid third cancers, 10% for upper third cancers, and 4% if the entire stomach was involved. The stage-related survival rates were 50% (stage I), 29% (stage II), 13% (stage III), and 3% (stage IV). Among patients with pathologically clear margins, the survival rate was 35% versus 13% in those with microscopically involved margins, and it was 3% in those with grossly involved margins. Conclusion This report of gastric cancer treatment by American College of Surgeons approved institutions in the US provides an overview of the disease as commonly treated throughout the US. Although the results are less favorable than those reported by centers with large institutional experiences with this disease and are inferior to those of the Japanese and other Eastern centers, they suggest potential for increasing survival by upstaging through earlier diagnosis and using resectional techniques demonstrated to more adequately control local regional disease.

Journal ArticleDOI
TL;DR: This new self-rating scale is a useful tool for assessing pubertal status in settings that require noninvasive measures; significant correlations were found between parents and students for all of the measures for 6th-graders and 5th-grade girls, and for several measures for 5th-grade boys.

Journal ArticleDOI
TL;DR: It is shown that the anomalously large conductance can be understood in terms of a model in which the anharmonicity of the metal plays an important role.
Abstract: Measurements of the Kapitza conductance $\sigma_K$ made using a picosecond optical technique at temperatures between 50 and 300 K are presented for interfaces between metals and dielectrics. The Debye temperatures $\Theta_D$ of the metals (Pb, Au, Al, and Ti) were in the range from 102 to 426 K, while those of the dielectrics (BaF$_2$, sapphire, and diamond) varied from 287 to 2200 K. Conductances measured between materials with $\Theta_D$ differing by less than about a factor of 5 were found to be in reasonable agreement with calculations based on a lattice dynamical theory. However, for more widely mismatched solids the measured conductances were found to be greatly in excess of the lattice dynamical calculations. In some of these cases the conductances exceeded even the phonon radiation limit, indicating that much of the heat flow between the solids was via an inelastic channel. It was demonstrated experimentally that the inelastic channel does not involve an interaction between the phonons in the dielectric and electrons in the metal. We show that the anomalously large conductance can be understood in terms of a model in which the anharmonicity of the metal plays an important role.

Journal ArticleDOI
TL;DR: It is demonstrated that oscillations recorded in the LFP in motor cortex during trained motor tasks are not related to the details of movement execution but may be related to aspects of movement preparation.
Abstract: We investigated the occurrence and distribution of oscillatory activity in local field potentials (LFPs) recorded from the frontal motor cortex of behaving monkeys performing skilled voluntary movements. LFPs were recorded simultaneously from up to 12 sites distributed throughout motor cortex while monkeys performed a visually guided, instructed delay task using the wrist or digits. Oscillatory activity between 15 and 50 Hz was evident in the LFP recorded from both primary motor cortex and premotor areas. Oscillations occurred preferentially before the visual cue to initiate movement but were infrequent during movement. Oscillations typically stopped before movement initiation during the wrist task, although they often continued into the initial phases of movement during the digit task. The relationship of oscillations to task performance was consistent across trials over periods of many months, although the amplitude and duration of oscillations varied across trials and days. Interactions between pairs of LFP recordings, evaluated with cross-correlation analysis, revealed synchronous oscillations over long distances (> 7 mm) and across primary motor cortex and premotor recording sites. These studies demonstrate that oscillations recorded in the LFP in motor cortex during trained motor tasks are not related to the details of movement execution but may be related to aspects of movement preparation.

Journal ArticleDOI
TL;DR: In this paper, the Taylor expansion of the Dirichlet-Neumann operator in homogeneous powers of the surface elevation η and the velocity potential ϕ is proposed to simulate the water wave problem in a channel for a fluid of finite or infinite depth.

Journal ArticleDOI
TL;DR: Factor analyses conducted on both limbs of the blood alcohol curve supported the proposed factor structure of the Biphasic Alcohol Effects Scale.
Abstract: Alcohol produces stimulant and sedative effects, and both types of effect are thought to influence drinking practices. This article describes the development and preliminary validation of the Biphasic Alcohol Effects Scale (BAES), a self-report, unipolar adjective rating scale designed to measure both stimulant and sedative effects of alcohol. An initial pool of 12 stimulant and 12 sedative items was derived from previous alcohol effect measures, and from descriptors of intoxication generated by subjects during interviews conducted on both the ascending and descending limbs of the blood alcohol curve. This item pool was administered to a sample of sober college students twice, with a 2-week inter-test interval. Items that were difficult to comprehend, or that had high ratings or low test-retest reliability, were eliminated, resulting in a seven-item stimulant subscale and a seven-item sedative subscale. These subscales showed high internal consistency in a sober state, which was not improved by additional item deletion. The data from this study also provided a basis for revising the instructions for the BAES. The new 14-item instrument was then given to 30 male and 12 female nonalcoholics on the ascending and descending limbs of the blood alcohol curve, after the administration of either 0.75 ml/kg alcohol (males) or 0.65 ml/kg alcohol (females). Internal consistency was high for both BAES subscales on both limbs of the blood alcohol curve (Cronbach's alpha = 0.85 to 0.94), and was not improved by additional item deletion. Factor analyses conducted on both limbs of the blood alcohol curve supported the proposed factor structure of the BAES.(ABSTRACT TRUNCATED AT 250 WORDS)
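The internal-consistency figures quoted above are Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The sketch below computes it for a simulated 7-item subscale; the data are random stand-ins, not BAES responses.

```python
# Sketch of Cronbach's alpha for a hypothetical 7-item subscale.
import numpy as np

rng = np.random.default_rng(0)
# 42 respondents x 7 items; a shared "trait" makes the items correlate.
trait = rng.normal(size=(42, 1))
items = 5 + 2 * trait + rng.normal(scale=1.0, size=(42, 7))

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1).sum()       # sum of item variances
total_var = items.sum(axis=1).var(ddof=1)         # variance of the total score
alpha = k / (k - 1) * (1 - item_vars / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```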

Journal ArticleDOI
TL;DR: To address the question of whether depressed synapses can still be potentiated and vice versa, LTP was saturated with repeated high-frequency tetani, and then LTD was induced with low-frequency stimulation (LFS), indicating that the same synapses whose transmission had been depressed by LFS were capable of subsequently supporting potentiation.
Abstract: Previously we showed that delivering 900 pulses to the Schaffer collateral-CA1 pathway at 1-3 Hz causes a lasting depression of synaptic effectiveness that is input specific and dependent on NMDA receptor activation (Dudek and Bear, 1992a). Here we describe experiments aimed at further characterizing this homosynaptic long-term depression (LTD) and comparing it with long-term potentiation (LTP). To address the question of whether depressed synapses can still be potentiated and vice versa, LTP was saturated with repeated high-frequency tetani, and then LTD was induced with low-frequency stimulation (LFS). A second strong tetanus then restored the potentiation, indicating that the same synapses whose transmission had been depressed by LFS were capable of subsequently supporting potentiation. In a complementary experiment, LTD was induced first and then a strong high-frequency tetanus was delivered. We found that the resulting LTP achieved the same absolute magnitude as that observed in control slices that had received the high-frequency stimulation alone. Next, the postnatal development of LTD was investigated in slices prepared from rats at 6-35 d of age. The consequences of LFS were far more pronounced in slices from young rats. LTD following 900 pulses at 1 Hz measured -45 +/- 4% in CA1 of rats less than 2 weeks old as compared with -20 +/- 4% in animals at 5 weeks postnatal. It was also found that LTD precedes the developmental onset of LTP in CA1. Finally, we addressed the question of whether LTD could be saturated by repeated episodes of LFS in slices prepared from 3-week-old rats.(ABSTRACT TRUNCATED AT 250 WORDS)

Posted Content
TL;DR: The authors examined the relationship between income growth and saving using both cross-country and household data, and found that households with predictably higher income growth save more than households with predictably low growth, but that saving does not Granger-cause growth.
Abstract: We examine the relationship between income growth and saving using both cross-country and household data. At the aggregate level, we find that growth Granger causes saving, but that saving does not Granger cause growth. Using household data, we find that households with predictably higher income growth save more than households with predictably low growth. We argue that standard Permanent Income models of consumption cannot explain these findings, but that a model of consumption with habit formation may. The positive effect of growth on saving implies that previous estimates of the effect of saving on growth may be overstated.
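The aggregate-level test is a standard Granger-causality regression: does lagged growth help predict saving once lagged saving is included, and vice versa? The sketch below runs it on simulated series standing in for the cross-country data.

```python
# Sketch of two-way Granger-causality tests on simulated growth/saving series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 200
growth = rng.normal(size=T)
saving = np.zeros(T)
for t in range(1, T):
    # Saving responds to lagged growth in this simulated data set.
    saving[t] = 0.5 * saving[t - 1] + 0.4 * growth[t - 1] + rng.normal(scale=0.5)

data = pd.DataFrame({"saving": saving, "growth": growth})

# Does growth Granger-cause saving?  (second column tested as predictor of first)
res_gs = grangercausalitytests(data[["saving", "growth"]], maxlag=2)
print("growth -> saving, lag-2 F-test p-value:", res_gs[2][0]["ssr_ftest"][1])

# Does saving Granger-cause growth?
res_sg = grangercausalitytests(data[["growth", "saving"]], maxlag=2)
print("saving -> growth, lag-2 F-test p-value:", res_sg[2][0]["ssr_ftest"][1])
```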

Journal ArticleDOI
04 Jun 1993-Science
TL;DR: Results provide strong support for the view that common principles may govern experience-dependent synaptic plasticity in CA1 and throughout the superficial layers of the mammalian neocortex.
Abstract: Activity-dependent synaptic plasticity in the superficial layers of juvenile cat and adult rat visual neocortex was compared with that in adult rat hippocampal field CA1. Stimulation of neocortical layer IV reliably induced synaptic long-term potentiation (LTP) and long-term depression (LTD) in layer III with precisely the same types of stimulation protocols that were effective in CA1. Neocortical LTP and LTD were specific to the conditioned pathway and, as in the hippocampus, were dependent on activation of N-methyl-D-aspartate receptors. These results provide strong support for the view that common principles may govern experience-dependent synaptic plasticity in CA1 and throughout the superficial layers of the mammalian neocortex.

Journal ArticleDOI
TL;DR: It is argued that the problem of plan recognition, inferring an agent's plan from observations, is largely a problem of inference under conditions of uncertainty, and an approach to the plan recognition problem based on Bayesian probability theory is presented.
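Seen as inference under uncertainty, plan recognition reduces to maintaining a posterior over candidate plans and updating it as actions are observed. The sketch below is a toy version with hypothetical plans, actions, and probabilities, assuming observations are conditionally independent given the plan; it is not the authors' model.

```python
# Sketch of plan recognition as sequential Bayesian updating.
priors = {"make_pasta": 0.5, "make_tea": 0.3, "clean_kitchen": 0.2}
# P(observed action | plan), for a few hypothetical actions.
likelihood = {
    "boil_water":  {"make_pasta": 0.7, "make_tea": 0.8, "clean_kitchen": 0.05},
    "get_noodles": {"make_pasta": 0.8, "make_tea": 0.01, "clean_kitchen": 0.01},
}

def update(posterior, action):
    """One Bayes step: posterior(plan) proportional to P(action | plan) * prior(plan)."""
    unnorm = {p: likelihood[action][p] * pr for p, pr in posterior.items()}
    total = sum(unnorm.values())
    return {p: v / total for p, v in unnorm.items()}

posterior = dict(priors)
for action in ["boil_water", "get_noodles"]:
    posterior = update(posterior, action)
    print(action, {p: round(v, 3) for p, v in posterior.items()})
```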

Journal ArticleDOI
TL;DR: In this paper, the existence of periodic solutions of the nonlinear wave equation is proved by showing that the coefficients of the eigenfunction expansion satisfy a nonlinear functional equation that can be solved using a version of Newton's method.
Abstract: We prove the existence of periodic solutions of the nonlinear wave equation satisfying either Dirichlet or periodic boundary conditions on the interval [0, π]. The coefficients of the eigenfunction expansion of this equation satisfy a nonlinear functional equation. Using a version of Newton's method, we show that this equation has solutions provided the nonlinearity g(x, u) satisfies certain generic conditions of nonresonance and genuine nonlinearity. © 1993 John Wiley & Sons, Inc.

Journal Article
TL;DR: Understanding the stages of exercise behavior could yield important information for enhancing rates of participation in physical activity.
Abstract: The stages of change model has shown promise in advancing knowledge about smoking behavior change and other negative addictive behaviors. The model was applied to the study of exercise, a healthy rather than unhealthy behavior. A stages of exercise behavior questionnaire was administered to a sample of 235 employees. Next, the ability of a second questionnaire measuring physical activity behavior to differentiate employees according to stage of exercise was tested. Results revealed that 51% of employees were participating in no physical activity (Precontemplation, Contemplation) and 49% were participating in occasional or regular physical activity (Preparation, Action, Maintenance). Scores on physical activity behavior items significantly differentiated employees among the stages. This model, developed on smoking and other behaviors, can be generalized to exercise behavior. Understanding the stages of exercise behavior could yield important information for enhancing rates of participation in physical activity.

Journal ArticleDOI
TL;DR: The paired-preference procedure was used in a series of experiments to explore the abilities of infants aged 3 and 4 months to categorize photographic exemplars from natural (adult-defined) basic-level categories and revealed that infants could form categorical representations for dogs and cats that excluded birds.
Abstract: The paired-preference procedure was used in a series of experiments to explore the abilities of infants aged 3 and 4 months to categorize photographic exemplars from natural (adult-defined) basic-level categories. The question of whether the categorical representations that were evidenced excluded members of a related, perceptually similar category was also investigated. Experiments 1–3 revealed that infants could form categorical representations for dogs and cats that excluded birds. Experiment 4 showed that the representation for cats also excluded dogs, but that the representation for dogs did not exclude cats. However, a supplementary experiment showed that the representation for dogs did exclude cats when the variability of the dog exemplars was reduced to match that of the cat exemplars. The results are discussed in terms of abilities necessary for the formation of more complex categorical representations.

Journal ArticleDOI
TL;DR: In this paper, the stability characteristics of various compact fourth- and sixth-order spatial operators are assessed with the theory of Gustafsson, Kreiss, and Sundstrom (G-K-S) for the semidiscrete initial boundary value problem.
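For concreteness, a representative compact fourth-order operator is the Padé scheme (1/4)f'_{i-1} + f'_i + (1/4)f'_{i+1} = (3/(4h))(f_{i+1} - f_{i-1}). The sketch below applies it on a periodic grid; a dense solve is used purely for clarity, and the boundary closures that are the focus of the G-K-S analysis are not treated.

```python
# Sketch of a compact (Pade) fourth-order first-derivative operator, periodic grid.
import numpy as np

n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(x)

# Left-hand side: cyclic tridiagonal matrix with 1 on the diagonal, 1/4 off it.
A = np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1))
# Right-hand side: centered difference scaled by 3/(4h).
rhs = (3.0 / (4.0 * h)) * (np.roll(f, -1) - np.roll(f, 1))

df = np.linalg.solve(A, rhs)                  # implicit solve for all derivatives
print("max error vs cos(x):", np.max(np.abs(df - np.cos(x))))
```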

Journal ArticleDOI
TL;DR: The goals of the paper are to demonstrate that although non-standard, many of the important quantitative and qualitative properties of ordinary differential equations that hold under the standard conditions apply here as well, and to prove convergence for a class of numerical schemes designed to approximate solutions to a given variational inequality.
Abstract: The variational inequality problem has been utilized to formulate and study a plethora of competitive equilibrium problems in different disciplines, ranging from oligopolistic market equilibrium problems to traffic network equilibrium problems. In this paper we consider for a given variational inequality a naturally related ordinary differential equation. The ordinary differential equations that arise are nonstandard because of discontinuities that appear in the dynamics. These discontinuities are due to the constraints associated with the feasible region of the variational inequality problem. The goals of the paper are two-fold. The first goal is to demonstrate that although non-standard, many of the important quantitative and qualitative properties of ordinary differential equations that hold under the standard conditions, such as Lipschitz continuity type conditions, apply here as well. This is important from the point of view of modeling, since it suggests (at least under some appropriate conditions) that these ordinary differential equations may serve as dynamical models. The second goal is to prove convergence for a class of numerical schemes designed to approximate solutions to a given variational inequality. This is done by exploiting the equivalence between the stationary points of the associated ordinary differential equation and the solutions of the variational inequality problem. It can be expected that the techniques described in this paper will be useful for more elaborate dynamical models, such as stochastic models, and that the connection between such dynamical models and the solutions to the variational inequalities will provide a deeper understanding of equilibrium problems.
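One member of this class of schemes is an Euler-type discretization of the projected dynamics, x_{k+1} = P_K(x_k - a_k F(x_k)), whose fixed points solve the variational inequality. The sketch below applies it to a hypothetical monotone linear F over a box constraint with diminishing step sizes; the problem data are illustrative, not from the paper.

```python
# Sketch of a projection (Euler-type) scheme for a variational inequality VI(F, K).
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])    # monotone linear operator F(x) = Ax + b
b = np.array([-1.0, -2.0])
lo, hi = np.zeros(2), np.ones(2)           # feasible set K: the box [0, 1]^2

def F(x):
    return A @ x + b

def project(x):
    return np.clip(x, lo, hi)              # projection onto the box

x = np.array([0.5, 0.5])
for k in range(1, 500):
    step = 1.0 / k                          # diminishing step sizes
    x = project(x - step * F(x))            # x_{k+1} = P_K(x_k - a_k F(x_k))

print("approximate VI solution:", x)
```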

Journal ArticleDOI
TL;DR: The processor reconfiguration through instruction-set metamorphosis (PRISM) general-purpose architecture, which speeds up computationally intensive tasks by augmenting the core processor's functionality with new operations, is described.
Abstract: The processor reconfiguration through instruction-set metamorphosis (PRISM) general-purpose architecture, which speeds up computationally intensive tasks by augmenting the core processor's functionality with new operations, is described. The PRISM approach adapts the configuration and fundamental operations of a core processing system to the computationally intensive portions of a targeted application. PRISM-1, an initial prototype system, is described, and experimental results that demonstrate the benefits of the PRISM concept are presented.