
Showing papers by "Brown University" published in 1988


Journal ArticleDOI
TL;DR: The revised criteria for the classification of rheumatoid arthritis (RA) were formulated from a computerized analysis of 262 contemporary, consecutively studied patients with RA and 262 control subjects with rheumatic diseases other than RA (non-RA).
Abstract: The revised criteria for the classification of rheumatoid arthritis (RA) were formulated from a computerized analysis of 262 contemporary, consecutively studied patients with RA and 262 control subjects with rheumatic diseases other than RA (non-RA). The new criteria are as follows: 1) morning stiffness in and around joints lasting at least 1 hour before maximal improvement; 2) soft tissue swelling (arthritis) of 3 or more joint areas observed by a physician; 3) swelling (arthritis) of the proximal interphalangeal, metacarpophalangeal, or wrist joints; 4) symmetric swelling (arthritis); 5) rheumatoid nodules; 6) the presence of rheumatoid factor; and 7) radiographic erosions and/or periarticular osteopenia in hand and/or wrist joints. Criteria 1 through 4 must have been present for at least 6 weeks. Rheumatoid arthritis is defined by the presence of 4 or more criteria, and no further qualifications (classic, definite, or probable) or list of exclusions are required. In addition, a "classification tree" schema is presented which performs equally as well as the traditional (4 of 7) format. The new criteria demonstrated 91-94% sensitivity and 89% specificity for RA when compared with non-RA rheumatic disease control subjects.
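
Because the decision rule is purely a count over the seven criteria, it fits in a few lines. A minimal sketch, assuming boolean findings and treating the six-week duration requirement on criteria 1 through 4 as already verified; the criterion names are paraphrases, not official ACR terminology:

```python
# Sketch of the "4 of 7" classification rule described in the abstract.
# Inputs are assumed boolean; criteria 1-4 are assumed to have been
# present for at least 6 weeks. Names are paraphrases, not ACR terms.
CRITERIA = [
    "morning_stiffness_1h",       # 1) morning stiffness >= 1 hour
    "arthritis_3_or_more_areas",  # 2) soft tissue swelling of >= 3 joint areas
    "arthritis_hand_joints",      # 3) swelling of PIP, MCP, or wrist joints
    "symmetric_arthritis",        # 4) symmetric swelling
    "rheumatoid_nodules",         # 5) rheumatoid nodules
    "rheumatoid_factor",          # 6) presence of rheumatoid factor
    "radiographic_changes",       # 7) erosions and/or periarticular osteopenia
]

def classify_ra(findings: dict) -> bool:
    """Return True if 4 or more of the 7 criteria are present."""
    return sum(bool(findings.get(name, False)) for name in CRITERIA) >= 4
```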

19,409 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a conceptual framework that links the technical assessment of risk with psychological, sociological, and cultural perspectives of risk perception and risk-related behavior to amplify or attenuate public responses to the risk or risk event.
Abstract: One of the most perplexing problems in risk analysis is why some relatively minor risks or risk events, as assessed by technical experts, often elicit strong public concerns and result in substantial impacts upon society and economy. This article sets forth a conceptual framework that seeks to link systematically the technical assessment of risk with psychological, sociological, and cultural perspectives of risk perception and risk-related behavior. The main thesis is that hazards interact with psychological, social, institutional, and cultural processes in ways that may amplify or attenuate public responses to the risk or risk event. A structural description of the social amplification of risk is now possible. Amplification occurs at two stages: in the transfer of information about the risk, and in the response mechanisms of society. Signals about risk are processed by individual and social amplification stations, including the scientist who communicates the risk assessment, the news media, cultural groups, interpersonal networks, and others. Key steps of amplifications can be identified at each stage. The amplified risk leads to behavioral responses, which, in turn, result in secondary impacts. Models are presented that portray the elements and linkages in the proposed conceptual framework.

3,016 citations



Journal ArticleDOI
TL;DR: Tight upper and lower bounds are provided for the number of inputs and outputs (I/Os) between internal memory and secondary storage required for five sorting-related problems: sorting, the fast Fourier transform (FFT), permutation networks, permuting, and matrix transposition.
Abstract: We provide tight upper and lower bounds, up to a constant factor, for the number of inputs and outputs (I/Os) between internal memory and secondary storage required for five sorting-related problems: sorting, the fast Fourier transform (FFT), permutation networks, permuting, and matrix transposition. The bounds hold both in the worst case and in the average case, and in several situations the constant factors match. Secondary storage is modeled as a magnetic disk capable of transferring P blocks each containing B records in a single time unit; the records in each block must be input from or output to B contiguous locations on the disk. We give two optimal algorithms for the problems, which are variants of merge sorting and distribution sorting. In particular we show for P = 1 that the standard merge sorting algorithm is an optimal external sorting method, up to a constant factor in the number of I/Os. Our sorting algorithms use the same number of I/Os as does the permutation phase of key sorting, except when the internal memory size is extremely small, thus affirming the popular adage that key sorting is not faster. We also give a simpler and more direct derivation of Hong and Kung's lower bound for the FFT for the special case B = P = O(1).
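
For reference, the tight sorting bound from this line of work is usually stated as follows, with N records, an internal memory holding M records, B records per block, and P blocks per transfer; this closed form is the standard statement of the result in the literature, not quoted from the abstract above:

```latex
% Tight I/O bound for external sorting (standard statement of the result):
% N = number of records, M = internal memory size in records,
% B = records per block, P = blocks transferred per I/O.
\[
  \mathrm{Sort}(N) \;=\; \Theta\!\left(
    \frac{N}{PB}\,\frac{\log (N/B)}{\log (M/B)}
  \right)
\]
% The FFT and permutation-network problems obey the same bound.
```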

1,344 citations


Journal ArticleDOI
16 Sep 1988-Science
TL;DR: These findings illustrate how processes in different ecological habitats are coupled and models combining larval circulation with adult interactions can potentially forecast population fluctuations.
Abstract: Organisms living in the marine rocky intertidal zone compete for space. This, together with predation, physical disruption, and differing species tolerances to physiological stress, explains the structure of the ecological communities at some sites. At other sites the supply of larvae is limiting, and events in the offshore waters, such as wind-driven upwelling, explain the composition of intertidal communities. Whether the community ecology at a site is governed by adult-adult interactions within the site, or by limitations to the supply of larvae reaching the site, is determined by the regional pattern of circulation in the coastal waters. Models combining larval circulation with adult interactions can potentially forecast population fluctuations. These findings illustrate how processes in different ecological habitats are coupled.

1,187 citations


Book
01 May 1988
TL;DR: The set is recommended for both the commercial and research knowledge-based systems practitioner and provides the background necessary to evaluate knowledge-acquisition tools such as NEXTRA, Test Bench, and AutoIntelligence (IntelligenceWare).
Abstract: "Abstraction for Knowledge Acquisition" by T. Bylander and B. Chandrasekaran. Chandrasekaran's papers are usually illuminating, and this one does not fail: He and Bylander re-examine such traditional beliefs as knowledge should be uniformly represented and controlled and the knowledge base should be separated from the inference engine. The final 10 papers in volume 1 discuss generalized learning and rule-induction techniques. They are interesting and informative, particularly "Generalization and Noise" by Y. Kodratoff and M. Manago, which discusses symbolic and numeric rule induction. Most rule-induction techniques focus on the use of examples and numeric analysis such as repertory grids. Kodratoff and Manago's exploration of how the two complement each other is refreshing. Because of their technical nature and the amount of work it would take to put their content to use, most of the papers in this section of the volume are more appropriate for a specialized or research-oriented group. For those just getting involved in knowledge-based systems development, Knowledge Acquisition Tools for Expert Systems is the more useful volume. In addition to discussing the tools themselves, most of the papers contain details of the knowledge-acquisition techniques that are automated, thus providing much of the same information that is available in the first volume. As an added benefit, they also often discuss the underlying architectures for solving domain-specific problems. For instance, the details of the medical diagnostic architecture laid out in "Design for Acquisition: Principles of Knowledge System Design to Facilitate Knowledge Acquisition" by T. R. Gruber and P. R. Cohen are almost as useful as the discussion of how to build a knowledge-acquisition system. Volume 2 is particularly germane given the rise in commercial interest in automated knowledge acquisition following this year's introduction of Neuron Data's NEXTRA product and last year's introduction of Test Bench by Texas Instruments. Test Bench is actually discussed in "A Mixed-Initiative Workbench for Knowledge Acquisition" by G. S. Kahn, E. H. Breaux, P. De Klerk, and R. L. Joseph. This volume provides the background necessary to evaluate knowledge-acquisition tools such as NEXTRA, Test Bench, and AutoIntelligence (IntelligenceWare). The vendors of knowledge-based systems development tools, for example, Inference, IntelliCorp, Aion, AI Corp., and IBM, would do well to pay heed to these books because they point the way to removing the knowledge bottleneck from knowledge-based systems development. Overall, the papers in both volumes are comprehensive and well integrated, a sometimes difficult state to achieve when compiling a collection of papers resulting from a small conference. The collection is comparable to Anna Hart's Knowledge Acquisition for Expert Systems (McGraw-Hill, 1986), but it is broader in scope and not as structured. The arrangement of the papers is marred only by an overly brief index. Few readers can be expected to read a collection from beginning to end, and a better index would facilitate more enlightened use. Less important, but nevertheless distracting, is the large number of typographical errors in both volumes. In conclusion, the set is recommended for both the commercial and research knowledge-based systems practitioner. Reading the volumes in reverse order might be more useful to the commercial developer given the extra information available in volume 2.

970 citations


Journal ArticleDOI
TL;DR: A bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs is presented, a first attempt to encompass both theoretical and application-oriented papers from disparate areas.
Abstract: Several data presentation problems involve drawing graphs so that they are easy to read and understand. Examples include circuit schematics and diagrams for information systems analysis and design. In this paper we present a bibliographic survey on algorithms whose goal is to produce aesthetically pleasing drawings of graphs. Research on this topic is spread over the broad spectrum of computer science. This bibliography constitutes a first attempt to encompass both theoretical and application-oriented papers from disparate areas.

959 citations


Proceedings Article
21 Aug 1988
TL;DR: This paper presents a framework for exploring issues in time-dependent planning: planning in which the time available to respond to predicted events varies, and the decision making required to formulate effective responses is complex.
Abstract: This paper presents a framework for exploring issues in time-dependent planning: planning in which the time available to respond to predicted events varies, and the decision making required to formulate effective responses is complex. Our analysis of time-dependent planning suggests an approach based on a class of algorithms that we call anytime algorithms. Anytime algorithms can be interrupted at any point during computation to return a result whose utility is a function of computation time. We explore methods for solving time-dependent planning problems based on the properties of anytime algorithms.
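
The defining property of an anytime algorithm, that it can be interrupted at any point and still return a usable answer whose quality grows with computation time, is easy to illustrate. A minimal sketch using iterative improvement under a time budget; the function names are hypothetical, not taken from the paper:

```python
import time

def anytime_improve(initial, refine, score, deadline_s):
    """Minimal sketch of an anytime algorithm: keep the best answer found
    so far and return it when the time budget runs out. `refine` maps a
    candidate to a (hopefully) better one; `score` rates candidates."""
    best = initial
    best_score = score(best)
    stop_at = time.monotonic() + deadline_s
    while time.monotonic() < stop_at:
        candidate = refine(best)
        s = score(candidate)
        if s > best_score:          # result utility is nondecreasing in time
            best, best_score = candidate, s
    return best                     # a valid answer at any interruption point

# Hypothetical usage: greedily refine a "plan" represented as a number.
plan = anytime_improve(0.0, lambda x: x + 0.1, lambda x: -abs(10 - x), 0.05)
```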

945 citations


Journal ArticleDOI
Alan Needleman
TL;DR: In this paper, the role of material rate dependence in setting the character of governing equations is illustrated in the context of a simple one-dimensional problem, and numerical results are presented that illustrate the localization behavior of slightly rate-dependent solids under both quasi-static and dynamic loading conditions.
Abstract: The role of material rate dependence in setting the character of governing equations is illustrated in the context of a simple one-dimensional problem. For rate-dependent solids, the incremental equilibrium equations for quasi-static problems remain elliptic and wave speeds for dynamic problems remain real, even in the presence of strain-softening. The pathological mesh sensitivity associated with numerical solutions of localization problems for rate-independent solids is eliminated. In effect, material rate dependence implicitly introduces a length scale into the governing equations, although the constitutive description does not contain a parameter with the dimensions of length. Numerical results are presented that illustrate the localization behavior of slightly rate-dependent solids under both quasi-static and dynamic loading conditions.
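
A common way to write such a constitutive description in one dimension is the power-law form below; this specific form is an assumption for illustration, not quoted from the paper:

```latex
% A common power-law form for a slightly rate-dependent solid: stress
% depends on strain through g (which may soften) and weakly on strain
% rate through the exponent m (rate-independent limit: m -> 0).
\[
  \sigma \;=\; g(\varepsilon)
    \left(\frac{\dot{\varepsilon}}{\dot{\varepsilon}_0}\right)^{m},
  \qquad 0 < m \ll 1 .
\]
% Even for small m, the rate term regularizes the governing equations:
% quasi-static increments stay elliptic and dynamic wave speeds stay real.
```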

874 citations


Journal ArticleDOI
TL;DR: The cytological diagnosis corroborated the known genetic evidence in 42 plant species and conflicted with the genetic reports in five species, which are discussed, suggesting that biparental inheritance of plastids is rare.
Abstract: We have developed a diagnostic method to screen rapidly for plant species potentially capable of biparental inheritance of plastid DNA using the DNA fluorochrome 4',6-diamidino-2-phenylindole (DAPI) in conjunction with epifluorescence microscopy. Pollen shed from 235 plant species (including about 50 of agronomic importance) representing 80 families was screened. Putative plastid DNA was detected in the generative and/or sperm cells of pollen from 26 genera (43 species) representing 15 families. Plastid DNA was not detected in the generative or sperm cells of pollen from 192 plant species, thereby strongly suggesting that these species have only maternal inheritance. Our cytological diagnosis corroborated the known genetic evidence in 42 plant species and conflicted with the genetic reports in five species, which are discussed. The data suggest that biparental inheritance of plastids is rare; overall, it may occur in about 14% of flowering plant genera, examples of which are scattered among 19% of the families examined. This methodology also readily reveals whether pollen is bi- or trinucleate.

781 citations


Journal ArticleDOI
TL;DR: In this article, a boundary value problem simulating a periodic array of spherical voids in an isotropically hardening elastic-viscoplastic matrix is analyzed, showing a shift from a general axisymmetric deformation state to a mode of uniaxial straining at which point the plastic deformation localizes to the ligament between neighboring voids.

Journal ArticleDOI
TL;DR: In this article, a geometrically rigorous formulation of J2 flow theory taking full account of crack-tip blunting is presented. The focus is on opening-dominated load states, and the scope is broadened to include finite ligament plasticity and finite deformation effects on near-tip fields.
Abstract: The present investigation is focused on "opening" dominated load states and the scope is broadened to include finite ligament plasticity and finite deformation effects on near-tip fields. We adopt a geometrically rigorous formulation of J2 flow theory taking full account of crack-tip blunting.


Journal ArticleDOI
TL;DR: An empirical relation between the degree of bending and the altered electrophoretic mobility in polyacrylamide gels that allows estimation of protein-induced bends is generated.
Abstract: Protein-induced DNA bending is an important element in the structure of many protein-DNA complexes, including those involved in replication, transcription, and recombination. To understand these structures, the path followed by the DNA in each complex must be established. We have generated an empirical relation between the degree of bending and the altered electrophoretic mobility in polyacrylamide gels that allows estimation of protein-induced bends. This technique has been used to analyze 17 different protein-DNA complexes formed by six proteins including the four proteins involved in lambda site-specific recombination. The simplicity of this technique should make it useful in estimating angles for the construction of models of protein-DNA complexes and readily applicable to many systems where questions of higher-order structure are important for understanding function.


Journal ArticleDOI
TL;DR: Intermedia as mentioned in this paper is a tool designed to support both teaching and research in a university environment, which is an extension of hypertext that incorporates other media in addition to text, and it provides linking capabilities integrated into a desktop user environment.
Abstract: A description is given of Intermedia, a tool designed to support both teaching and research in a university environment. This multiapplication hypermedia system provides linking capabilities integrated into a desktop user environment. Hypermedia is simply an extension of hypertext that incorporates other media in addition to text. To promote consistency, the applications were built with an object-oriented framework. A sample Intermedia session is presented.
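
The core idea, links as first-class objects connecting anchors in documents of any media type, can be sketched as a small data model. The class and field names below are illustrative assumptions, not Intermedia's actual API:

```python
from dataclasses import dataclass, field

# Illustrative hypermedia data model: anchors mark spans inside documents
# of any media type, and links are objects connecting two anchors. The
# link collection is stored apart from the documents, loosely modeled on
# Intermedia's separately stored link collections ("webs").
@dataclass
class Document:
    doc_id: str
    media_type: str            # e.g. "text", "graphics", "timeline"

@dataclass
class Anchor:
    doc: Document
    selection: str             # media-specific description of the anchored span

@dataclass
class Link:
    source: Anchor
    target: Anchor

@dataclass
class Web:
    links: list[Link] = field(default_factory=list)

    def links_from(self, doc: Document) -> list[Link]:
        return [l for l in self.links if l.source.doc is doc]
```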

Journal ArticleDOI
01 Nov 1988-Nature
TL;DR: Using computer-generated displays of optical flow, it is shown that humans can perceive their direction of self-motion during stationary fixations, pursuit eye movements, and with displays that simulate the effects of eye movements.
Abstract: Can moving observers distinguish the direction in which they are moving from the direction in which they are looking? Radial patterns of optical flow could be used to perceive the translational direction of self-motion and to guide locomotion. However, these patterns are confounded by eye movements, which has led previous investigators to reject the outflow pattern and to propose alternative strategies. Using computer-generated displays of optical flow, we show here that humans can perceive their direction of self-motion during stationary fixations, pursuit eye movements, and with displays that simulate the effects of eye movements. We conclude that optical flow is sufficient for perceiving the direction of self-motion and provide evidence for a theory based on differential element motion.

Journal ArticleDOI
A. Marchand, J. Duffy
TL;DR: In this article, a series of experiments were described in which the local temperature and local strain are measured during the formation of an adiabatic shear band in a low alloy structural steel (HY-100).
Abstract: A series of experiments is described in which the local temperature and local strain are measured during the formation of an adiabatic shear band in a low alloy structural steel (HY-100). The specimen employed consists of a short thin-walled tube and the required rapid deformation rates are imposed by loading the specimen in a torsional Kolsky bar (split-Hopkinson bar). The local temperature is determined by measuring the infrared radiation emanating at twelve neighboring points on the specimen's surface, including the shear band area. Indium-antimonide elements are employed for this purpose to give the temperature history during deformation. In addition, high speed photographs are made of a grid pattern deposited on the specimen's surface, thus providing a measure of the strain distribution at various stages during shear band formation. By testing a number of specimens, it is possible to form a picture of the developing strain localization process, of the temperature history within the forming shear band, and of the consequent loss in the load carrying capacity of the steel. It appears that plastic deformation follows a three stage process which begins with a homogeneous strain state, followed by a generally inhomogeneous strain distribution, and finally by a narrowing of the localization into a fine shear band. It is estimated that the shear band propagates at a speed of about 510 m/s in the material tested. Results also include data on the stress-strain behavior of HY-100 steel over the temperature range -190°C to 250°C and at quasi-static as well as dynamic strain rates.

Posted Content
TL;DR: This paper examined the impact of major demographic changes on the housing market in the United States and found that the entry of the Baby Boom generation into its house-buying years was the major cause of the increase in real housing prices in the 1970s.
Abstract: This paper examines the impact of major demographic changes on the housing market in the United States. The entry of the Baby Boom generation into its house-buying years is found to be the major cause of the increase in real housing prices in the 1970s. Since the Baby Bust generation is now entering its house-buying years, housing demand will grow more slowly in the 1990s than at any time in the past forty years. If the historical relation between housing demand and housing prices continues into the future, real housing prices will fall substantially over the next two decades.
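
The demographic mechanism can be summarized in one equation; the notation below is mine, not the paper's. Aggregate housing demand is the population in each age group weighted by that group's typical per-person demand, so a large cohort entering its high-demand ages shifts demand up:

```latex
% Aggregate housing demand as age-weighted population (illustrative
% notation): N_{a,t} is the number of people of age a at time t, and
% d_a is the housing demand typical of a person of age a.
\[
  D_t \;=\; \sum_{a} d_a \, N_{a,t}
\]
% A Baby Boom cohort reaching its high-d_a (house-buying) ages raises
% D_t sharply; a Baby Bust cohort slows its growth.
```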

Proceedings ArticleDOI
13 Jan 1988
TL;DR: This work proposes a redundancy elimination algorithm that is global (in that it deals with the entire program), yet able to recognize redundancy among expressions that are lexically different, and takes advantage of second order effects.
Abstract: Most previous redundancy elimination algorithms have been of two kinds. The lexical algorithms deal with the entire program, but they can only detect redundancy among computations of lexically identical expressions, where expressions are lexically identical if they apply exactly the same operator to exactly the same operands. The value numbering algorithms, on the other hand, can recognize redundancy among expressions that are lexically different but that are certain to compute the same value. This is accomplished by assigning special symbolic names called value numbers to expressions. If the value numbers of the operands of two expressions are identical, and if the operators applied by the expressions are identical, then the expressions receive the same value number and are certain to have the same values. Sameness of value numbers permits more extensive optimization than lexical identity, but value numbering algorithms have usually been restricted in the past to basic blocks (sequences of computations with no branching) or extended basic blocks (sequences of computations with no joins). We propose a redundancy elimination algorithm that is global (in that it deals with the entire program), yet able to recognize redundancy among expressions that are lexically different. The algorithm also takes advantage of second order effects: transformations based on the discovery that two computations compute the same value may create opportunities to discover that other computations are equivalent. The algorithm applies to programs expressed as reducible [1] [9] control flow graphs. As the examples in section 7 illustrate, our algorithm optimizes reducible programs much more extensively than previous algorithms. In the special case of a program without loops, the code generated by our algorithm is provably "optimal" in the technical sense explained in section 8.
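
The distinction between lexical and value redundancy that motivates the paper is visible in a three-line example; the code is illustrative, not taken from the paper:

```python
# Lexical vs. value redundancy (illustrative example, not from the paper).
x, y = 3, 4
s = x + y     # value numbering: vn(x)=1, vn(y)=2, so "x + y" gets vn 3
t = x         # copy, so vn(t) = vn(x) = 1
u = t + y     # "t + y" -> (+, vn 1, vn 2) -> vn 3 again: u must equal s
# A lexical algorithm misses this redundancy because "x + y" and "t + y"
# are textually different; a value numbering algorithm can replace u by s.
```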

Proceedings ArticleDOI
13 Jan 1988
TL;DR: An algorithm for detecting when two computations produce equivalent values by developing a static property called congruence, which is conservative in that any variables detected to be equivalent will in fact be equivalent, but not all equivalences are detected.
Abstract: This paper presents an algorithm for detecting when two computations produce equivalent values. The equivalence of programs, and hence the equivalence of values, is in general undecidable. Thus, the best one can hope to do is to give an efficient algorithm that detects a large subclass of all the possible equivalences in a program. Two variables are said to be equivalent at a point p if those variables contain the same values whenever control reaches p during any possible execution of the program. We will not examine all possible executions of the program. Instead, we will develop a static property called congruence. Congruence implies, but is not implied by, equivalence. Our approach is conservative in that any variables detected to be equivalent will in fact be equivalent, but not all equivalences are detected. Previous work has shown how to apply a technique called value numbering in basic blocks [CS70]. Value numbering is essentially symbolic execution on straight-line programs (basic blocks). Symbolic execution implies that two expressions are assumed to be equal only when they consist of the same functions and the corresponding arguments of these functions are equal. An expression DAG is associated with each assignment statement. A hashing algorithm assigns a unique integer, the value number, to each different expression tree. Two variables that are assigned the same integer are guaranteed to be equivalent.
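
The hash-based value numbering the abstract describes fits in a short function. A minimal sketch under the usual assumptions (straight-line three-address code, no side effects); the representation and names are mine, not the paper's:

```python
from itertools import count

# Minimal sketch of hash-based value numbering over one basic block.
# Each statement is (dest, op, args); a copy is written with op "id".
def value_number(block):
    vn = {}                        # variable -> value number
    table = {}                     # (op, operand value numbers) -> value number
    fresh = count()

    def number_of(var):
        if var not in vn:
            vn[var] = next(fresh)  # fresh number for a previously unseen input
        return vn[var]

    redundant = []
    for dest, op, args in block:
        if op == "id":             # copy: dest shares its source's value number
            vn[dest] = number_of(args[0])
            continue
        key = (op, tuple(number_of(a) for a in args))
        if key in table:
            redundant.append(dest) # dest provably recomputes an existing value
        vn[dest] = table.setdefault(key, next(fresh))
    return vn, redundant

# "e" recomputes "a" even though "d + c" and "b + c" are lexically different.
block = [("a", "+", ("b", "c")), ("d", "id", ("b",)), ("e", "+", ("d", "c"))]
print(value_number(block)[1])      # ['e']
```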

Journal ArticleDOI
TL;DR: It is concluded that TGF-beta may function as the effector of an inhibitory paracrine loop that is activated during liver regeneration, perhaps to prevent uncontrolled hepatocyte proliferation.
Abstract: Transforming growth factor beta (TGF-beta) is a growth factor with multiple biological properties including stimulation and inhibition of cell proliferation. To determine whether TGF-beta is involved in hepatocyte growth responses in vivo, we measured the levels of TGF-beta mRNA in normal liver and during liver regeneration after partial hepatectomy in rats. TGF-beta mRNA increases in the regenerating liver and reaches a peak (about 8 times higher than basal levels) after the major wave of hepatocyte cell division and mitosis have taken place and after the peak expression of the ras protooncogenes. Although hepatocytes from normal and regenerating liver respond to TGF-beta, they do not synthesize TGF-beta mRNA. Instead, the message is present in liver nonparenchymal cells and is particularly abundant in cell fractions enriched for endothelial cells. TGF-beta inhibits epidermal growth factor-induced DNA synthesis in vitro in hepatocytes from normal or regenerating liver, although the dose-response curves vary according to the culture medium used. We conclude that TGF-beta may function as the effector of an inhibitory paracrine loop that is activated during liver regeneration, perhaps to prevent uncontrolled hepatocyte proliferation.

Patent
16 Nov 1988
TL;DR: In this paper, the delivery of a neurotransmitter from an implanted, neurotransmitter-secreting cell culture to a target region in a subject is described. And the cell culture is maintained within a biocompatible, semipermeable membrane which permits the diffusion of the neurotransmitter therethrough while excluding viruses, antibodies, and other detrimental agents present in the external environment from gaining access.
Abstract: Methods and devices are disclosed for the delivery of a neurotransmitter from an implanted, neurotransmitter-secreting cell culture to a target region in a subject. The cell culture is maintained within a biocompatible, semipermeable membrane which permits the diffusion of the neurotransmitter therethrough while excluding viruses, antibodies, and other detrimental agents present in the external environment from gaining access. Implantable cell culture devices are disclosed, some of which may be retrieved from the subject, replaced or recharged with new, neurotransmitter-secreting cell cultures, and reimplanted.

Journal ArticleDOI
TL;DR: Using a discrimination task, it is found that heading accuracy improved by an order of magnitude, with 75%-correct thresholds of 0.66 degrees in the highest speed and density condition and 1.2 degrees generally, consistent with Gibson's (1950) original global radial outflow hypothesis for perception of heading during translation.
Abstract: Radial patterns of optical flow produced by observer translation could be used to perceive the direction of self-movement during locomotion, and a number of formal analyses of such patterns have recently appeared. However, there is comparatively little empirical research on the perception of heading from optical flow, and what data there are indicate surprisingly poor performance, with heading errors on the order of 5°-10°. We examined heading judgments during translation parallel, perpendicular, and at oblique angles to a random-dot plane, varying observer speed and dot density. Using a discrimination task, we found that heading accuracy improved by an order of magnitude, with 75%-correct thresholds of 0.66° in the highest speed and density condition and 1.2° generally. Performance remained high with displays of 63 to 10 dots, but it dropped significantly with only 2 dots; there was no consistent speed effect and no effect of angle of approach to the surface. The results are inconsistent with theories based on the local focus of outflow, local motion parallax, multiple fixations, differential motion parallax, and the local maximum of divergence. But they are consistent with Gibson's (1950) original global radial outflow hypothesis for perception of heading during translation.
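
Under the global radial outflow hypothesis, heading during pure translation corresponds to the focus of expansion (FOE) of the flow field: every flow vector points directly away from it. That geometric constraint is linear in the FOE coordinates, so a least-squares sketch recovers it. This is an illustration of the geometry under the pure-translation assumption, not the authors' model:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Estimate the focus of expansion (heading point in the image) from a
    radial flow field. A flow vector (u, v) at image point (x, y) must be
    parallel to (x - x0, y - y0), i.e.
        u*(y - y0) - v*(x - x0) = 0  ->  -v*x0 + u*y0 = u*y - v*x,
    which is linear in (x0, y0) and is solved here by least squares."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([-v, u])
    b = u * y - v * x
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

# Synthetic check: radial flow expanding from (2.0, -1.0).
pts = np.random.uniform(-10, 10, size=(200, 2))
flw = 0.3 * (pts - np.array([2.0, -1.0]))
print(focus_of_expansion(pts, flw))   # ~ [ 2. -1.]
```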

Journal ArticleDOI
TL;DR: This article asks how people comprehend modified noun phrases, and suggests that the process of combining a modifier with a head noun does require reference to world knowledge.

Journal ArticleDOI
TL;DR: An algorithm for disambiguation that is similar to CLAWS but that operates in linear rather than in exponential time and space, and which minimizes the unsystematic augments is presented.
Abstract: Several algorithms have been developed in the past that attempt to resolve categorial ambiguities in natural language text without recourse to syntactic or semantic level information. An innovative method (called "CLAWS") was recently developed by those working with the Lancaster-Oslo/Bergen Corpus of British English. This algorithm uses a systematic calculation based upon the probabilities of co-occurrence of particular tags. Its accuracy is high, but it is very slow, and it has been manually augmented in a number of ways. The effects upon accuracy of this manual augmentation are not individually known.The current paper presents an algorithm for disambiguation that is similar to CLAWS but that operates in linear rather than in exponential time and space, and which minimizes the unsystematic augments. Tests of the algorithm using the million words of the Brown Standard Corpus of English are reported; the overall accuracy is 96%. This algorithm can provide a fast and accurate front end to any parsing or natural language processing system for English.
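
The underlying idea, choosing for each word the tag that maximizes the product of tag-pair co-occurrence probabilities along the sentence, is computable left to right in linear time by dynamic programming. The sketch below is a generic Viterbi-style reconstruction of that idea, not the paper's exact algorithm, and the toy data are hypothetical:

```python
# Generic dynamic-programming disambiguator over tag bigrams (a Viterbi-
# style reconstruction of the idea, not the paper's exact procedure).
# trans[(t1, t2)] = probability of tag t2 following tag t1;
# lexicon[word]  = candidate tags for that word.
def disambiguate(words, lexicon, trans, start="START"):
    best = {start: (1.0, [])}              # tag -> (best prob, best tag path)
    for w in words:
        nxt = {}
        for t in lexicon[w]:               # each candidate tag for w
            # best predecessor among the previous word's surviving tags
            p, path = max(
                ((pp * trans.get((pt, t), 1e-9), pa)
                 for pt, (pp, pa) in best.items()),
                key=lambda c: c[0])
            nxt[t] = (p, path + [t])
        best = nxt                         # one pass: linear in sentence length
    return max(best.values(), key=lambda c: c[0])[1]

# Hypothetical toy data: "flies" is ambiguous between noun and verb.
lexicon = {"time": ["NN"], "flies": ["NN", "VB"], "fast": ["RB"]}
trans = {("START", "NN"): .8, ("NN", "VB"): .6, ("NN", "NN"): .2,
         ("VB", "RB"): .7, ("NN", "RB"): .3}
print(disambiguate(["time", "flies", "fast"], lexicon, trans))  # NN VB RB
```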

Journal ArticleDOI
TL;DR: In this paper, the authors analyzed shipboard gravity data in the vicinity of the southern Mid-Atlantic Ridge at 31-34.5° S. The area of study covers six ridge segments, two major transforms, the Cox and Meteor, and three small offsets or discordant zones.
Abstract: To decipher the distribution of mass anomalies near the earth's surface and their relation to the major tectonic elements of a spreading plate boundary, we have analyzed shipboard gravity data in the vicinity of the southern Mid-Atlantic Ridge at 31–34.5° S. The area of study covers six ridge segments, two major transforms, the Cox and Meteor, and three small offsets or discordant zones. One of these small offsets is an elongate, deep basin at 33.5° S that strikes at about 45° to the adjoining ridge axes. By subtracting from the free-air anomaly the three-dimensional (3-D) effects of the seafloor topography and Moho relief, assuming constant densities of the crust and mantle and constant crustal thickness, we generate the mantle Bouguer anomaly. The mantle Bouguer anomaly is caused by variations in crustal thickness and the temperature and density structure of the mantle. By subtracting from the mantle Bouguer anomaly the effects of the density variations due to the 3-D thermal structure predicted by a simple model of passive flow in the mantle, we calculate the residual gravity anomalies. We interpret residual gravity anomalies in terms of anomalous crustal thickness variations and/or mantle thermal structures that are not considered in the forward model. As inferred from the residual map, the deep major fracture zone valleys and the median rift valleys are not isostatically compensated by thin crust. Thin crust may be associated with the broad, inactive segment of the Meteor fracture zone but is not clearly detected in the narrow, active transform zone. On the other hand, the presence of high residual anomalies along the relict trace of the oblique offset at 33.5° S suggests that thin crust may have been generated at an oblique spreading center which has experienced a restricted magma supply. The two smaller offsets at 31.3° S and 32.5° S also show residual anomalies suggesting thin crust but the anomalies are less pronounced than that at the 33.5° S oblique offset. There is a distinct, circular-shaped mantle Bouguer low centered on the shallowest portion of the ridge segment at about 33° S, which may represent upwelling in the form of a mantle plume beneath this ridge, or the progressive, along-axis crustal thinning caused by a centered, localized magma supply zone. Both mantle Bouguer and residual anomalies show a distinct, local low to the west of the ridge south of the 33.5° S oblique offset and relatively high values at and to the east of this ridge segment. We interpret this pattern as an indication that the upwelling center in the mantle for this ridge is off-axis to the west of the ridge.
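
The reduction chain the abstract describes amounts to two subtractions; the notation below is mine:

```latex
% Gravity reduction chain described in the abstract (notation mine):
% the mantle Bouguer anomaly removes the predictable effects of seafloor
% topography and Moho relief from the free-air anomaly, and the residual
% anomaly further removes the effect of a model thermal structure.
\[
  \Delta g_{\mathrm{MBA}} \;=\; \Delta g_{\mathrm{FAA}}
    \;-\; \Delta g_{\mathrm{topo}} \;-\; \Delta g_{\mathrm{Moho}},
  \qquad
  \Delta g_{\mathrm{res}} \;=\; \Delta g_{\mathrm{MBA}}
    \;-\; \Delta g_{\mathrm{therm}}
\]
% What remains reflects unmodeled crustal-thickness variations and/or
% mantle density structure.
```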

Journal ArticleDOI
TL;DR: Functional demands, with a note on dental wear, are presented.
Abstract (table of contents fragment; only the article's outline survived extraction): (1) Types of solutions available; (2) Increased wear resistance of dental tissues; (3) Increased tooth size; (4) Additional teeth, and a discussion of bilophodonty; (5) Increased tooth height; (6) Combinations of methods; IV. Conclusion; V. Acknowledgements; VI. References; (2) Functional demands, with a note on dental wear.

Journal ArticleDOI
TL;DR: In this paper, it was shown that a positive rational bubble can start only on the first date of trading of a stock and that the existence of a rational bubble at any date would imply that the stock has been overvalued relative to market fund managers since the first day of trading.
Abstract: Free disposal of equity, which directly rules out the existence of negative rational bubbles in stock pric es, also imposes theoretical restrictions on the possible existence o f positive rational bubbles. The analysis in this paper shows that a positive rational bubble can start only on the first date of trading of a stock. Thus, the existence of a rational bubble at any date woul d imply that the stock has been overvalued relative to market fundame ntals since the first date of trading, and that prior to the first da te of trading the issuer of the stock and potential stockholders who anticipated the initial pricing of the stock expected that the stock would be overvalued. Copyright 1988 by Royal Economic Society.
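
The no-restart logic is a short argument in the standard rational-bubble setup; the formulation below is the textbook one, not quoted from the paper. The bubble component must grow in expectation at the required return, and free disposal keeps it nonnegative, so once it is zero it stays zero:

```latex
% Standard rational-bubble decomposition (textbook notation, not the
% paper's): price = fundamentals + bubble, with the bubble satisfying
%   E_t[B_{t+1}] = (1 + r) B_t.
% Free disposal of equity implies B_t >= 0 at every date. A nonnegative
% random variable with zero conditional mean is zero almost surely, so
\[
  B_t = 0 \;\Longrightarrow\; \mathbb{E}_t\!\left[B_{t+1}\right] = 0
      \;\Longrightarrow\; B_{t+1} = 0 \ \text{a.s.}
\]
% Hence a bubble that is ever absent can never restart, and a positive
% bubble must already be present on the first date of trading.
```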

Journal ArticleDOI
TL;DR: In this article, microstructural development in a powder metallurgy 2124 aluminum alloy-SiC whisker composite subjected to controlled and systematic aging treatments was investigated using analytical transmission electron microscopy, quantitative analysis of precipitate growth, matrix microhardness measurements, and studies of changes in electrical conductivity.