
Showing papers by "University of Maryland, College Park" published in 1996


Journal ArticleDOI
TL;DR: This paper evaluates the performance both of texture measures that have been used successfully in various applications and of some promising new approaches proposed recently.

6,650 citations


Proceedings ArticleDOI
03 Sep 1996
TL;DR: A task by data type taxonomy with seven data types and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extracts) is offered.
Abstract: A useful starting point for designing advanced graphical user interfaces is the visual information seeking Mantra: overview first, zoom and filter, then details on demand. But this is only a starting point in trying to understand the rich and varied set of information visualizations that have been proposed in recent years. The paper offers a task by data type taxonomy with seven data types (one, two, three dimensional data, temporal and multi dimensional data, and tree and network data) and seven tasks (overview, zoom, filter, details-on-demand, relate, history, and extracts).
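To make the taxonomy concrete, here is a minimal Python sketch (ours, not from the paper) that encodes the seven data types and seven tasks, for example as a design-review checklist; the coverage_matrix helper is hypothetical.

```python
# Shneiderman's task-by-data-type taxonomy encoded as plain Python data.
# coverage_matrix is a hypothetical helper: it reports which
# (data type, task) cells a given visualization design claims to support.
DATA_TYPES = [
    "1-dimensional", "2-dimensional", "3-dimensional",
    "temporal", "multi-dimensional", "tree", "network",
]
TASKS = [
    "overview", "zoom", "filter", "details-on-demand",
    "relate", "history", "extract",
]

def coverage_matrix(supported):
    """Map every (data type, task) cell to True if the design supports it."""
    return {(d, t): (d, t) in supported for d in DATA_TYPES for t in TASKS}

if __name__ == "__main__":
    demo = {("temporal", "overview"), ("temporal", "zoom"), ("temporal", "filter")}
    cells = coverage_matrix(demo)
    print(f"{sum(cells.values())} of {len(cells)} taxonomy cells covered")
```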

5,290 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive definition and conceptual model of person-organization fit that incorporates supplementary as well as complementary perspectives on fit is presented, and a distinction is made between the direct measurement of perceived fit and the indirect measurement of actual person-organization fit, using both cross- and individual-level techniques.
Abstract: This article presents a comprehensive definition and conceptual model of person-organization fit that incorporates supplementary as well as complementary perspectives on fit. To increase the precision of the construct's definition, it is also distinguished from other forms of environmental compatibility, such as person-group and person-vocation fit. Once defined, commensurate measurement as it relates to supplementary and complementary fit is discussed and recommendations are offered regarding the necessity of its use. A distinction is made between the direct measurement of perceived fit and the indirect measurement of actual person-organization fit, using both cross- and individual-level techniques, and the debate regarding difference scores is reviewed. These definitional and measurement issues frame a review of the existing literature, as well as provide the basis for specific research propositions and suggestions for managerial applications.

4,079 citations


Journal ArticleDOI
TL;DR: Data from three experiments supported the hypothesis that an individual's perception of a particular system's ease of use is anchored to her or his general computer self-efficacy at all times, and objective usability has an impact on ease-of-use perceptions about a specific system only after direct experience with the system.
Abstract: The Technology Acceptance Model (TAM) has been widely used to predict user acceptance and use based on perceived ease of use and usefulness. However, in order to design effective training interventions to improve user acceptance, it is necessary to better understand the antecedents and determinants of key acceptance constructs. In this research, we focus on understanding the determinants of perceived ease of use. Data from three experiments spanning 108 subjects and six different systems supported our hypothesis that an individual's perception of a particular system's ease of use is anchored to her or his general computer self-efficacy at all times, and objective usability has an impact on ease of use perceptions about a specific system only after direct experience with the system. In addition to being an important research issue in user acceptance research, understanding antecedents of perceived ease of use is also important from a practical standpoint since several systems in which millions of dollars are invested are rejected because of poor user interfaces. Moreover, the actual underlying problem might be low computer self-efficacy of the target user group. In such cases, training interventions aimed at improving computer self-efficacy of users may be more effective than improved interface design for increasing user acceptance.

3,288 citations


Journal ArticleDOI
TL;DR: In this article, the authors suggest that implementation effectiveness is a function of the strength of an organization's climate for the implementation of an innovation and the fit of that innovation to targeted users' values.
Abstract: Implementation is the process of gaining targeted organizational members' appropriate and committed use of an innovation. Our model suggests that implementation effectiveness—the consistency and quality of targeted organizational members' use of an innovation—is a function of (a) the strength of an organization's climate for the implementation of that innovation and (b) the fit of that innovation to targeted users' values. The model specifies a range of implementation outcomes (including resistance, avoidance, compliance, and commitment); highlights the equifinality of an organization's climate for implementation; describes within- and between-organizational differences in innovation-values fit; and suggests new topics and strategies for implementation research.

2,006 citations


Journal ArticleDOI
TL;DR: Empirical evidence attests to diverse need for closure effects on fundamental social psychological phenomena, including impression formation, stereotyping, attribution, persuasion, group decision making, and language use in intergroup contexts.
Abstract: A theoretical framework is outlined in which the key construct is the need for (nonspecific) cognitive closure. The need for closure is a desire for definite knowledge on some issue. It represents a dimension of stable individual differences as well as a situationally evocable state. The need for closure has widely ramifying consequences for social-cognitive phenomena at the intrapersonal, interpersonal, and group levels of analysis. Those consequences derive from 2 general tendencies, those of urgency and permanence. The urgency tendency represents an individual's inclination to attain closure as soon as possible, and the permanence tendency represents an individual's inclination to maintain it for as long as possible. Empirical evidence for the present theory attests to diverse need for closure effects on fundamental social psychological phenomena, including impression formation, stereotyping, attribution, persuasion, group decision making, and language use in intergroup contexts. The construction of new knowledge is a pervasive human pursuit for both individuals and collectives. From relatively simple activities such as crossing a busy road to highly complex endeavors such as launching a space shuttle, new knowledge is indispensable for secure decisions and reasoned actions. The knowledge-construction process is often involved and intricate. It draws on background notions activated from memory and local information from the immediate context. It entails the extensive testing of hypotheses and the piecing of isolated cognitive bits into coherent wholes. It integrates inchoate sensations with articulate thoughts, detects meaningful signals in seas of ambient noise, and more. Two aspects of knowledge construction are of present interest: its motivated nature and its social character. That knowledge construction has a motivational base should come as no particular surprise. The host of effortful activities it comprises poses considerable demands on resource allocation; hence, it may well require motivation to get under way. Specifically, individuals may desire knowledge on some topics and not others, and they may delimit their constructive endeavors to those particular domains. But what kind of a motivational variable is the "desire for knowledge"? At least two answers readily suggest themselves: Knowledge could be desired because it conveys welcome news in regard to a given concern or because it conveys any definite news (whether welcome or unwelcome) in instances in which such information is required for some purpose. For instance, a mother may desire to know that

1,928 citations


Journal ArticleDOI
TL;DR: Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle and are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
Abstract: This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in (Chidamber and Kemerer, 1994). More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described in (Li and Henry, 1993) where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
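As a sketch of how such a validation can be set up: the snippet below scores the Chidamber & Kemerer metrics as fault-proneness predictors with logistic regression, in the spirit of the study but not its actual pipeline; the file name classes.csv and its column names are hypothetical.

```python
# A minimal sketch, not the paper's analysis: use per-class CK metrics to
# predict a binary fault label and report cross-validated discrimination.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

CK_METRICS = ["WMC", "DIT", "NOC", "CBO", "RFC", "LCOM"]

df = pd.read_csv("classes.csv")   # hypothetical: one row per C++ class
X = df[CK_METRICS]
y = df["faulty"]                  # hypothetical: 1 if a fault was traced to the class

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC using CK metrics: {auc:.2f}")
```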

1,741 citations


Journal ArticleDOI
TL;DR: This review examines recent research on groups and teams, giving special emphasis to research investigating factors that influence the effectiveness of teams at work in organizations, including group composition, cohesiveness, and motivation.
Abstract: This review examines recent research on groups and teams, giving special emphasis to research investigating factors that influence the effectiveness of teams at work in organizations. Several performance-relevant factors are considered, including group composition, cohesiveness, and motivation, although certain topics (e.g. composition) have been more actively researched than others in recent years and so are addressed in greater depth. Also actively researched are certain types of teams, including flight crews, computer-supported groups, and various forms of autonomous work groups. Evidence on basic processes in and the performance effectiveness of such groups is reviewed. Also reviewed are findings from studies of organizational redesign involving the implementation of teams. Findings from these studies provide some of the strongest support for the value of teams to organizational effectiveness. The review concludes by briefly considering selected open questions and emerging directions in group research.

1,698 citations


Journal ArticleDOI
TL;DR: In this paper, the authors explore the relationship between R. W. Shephard's input distance function and D. G. Luenberger's benefit function and point out that the latter can be recognized in a production context as a directional input distance function which can exhaustively characterize technologies in both price and input space.
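For reference, the directional input distance function can be written as follows (standard notation rather than a quotation from the paper; L(y) is the input requirement set and g the direction vector):

```latex
% Maximal contraction of inputs x in direction g that stays inside L(y):
\vec{D}_i(y, x; g) \;=\; \sup\bigl\{\, \beta \in \mathbb{R} \;:\; x - \beta g \in L(y) \,\bigr\}
```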

1,247 citations


Journal ArticleDOI
23 Feb 1996 - Science
TL;DR: In this article, negative thermal ionization mass spectrometry with modified digestion and equilibration techniques was used to determine the rhenium and osmium concentrations and ratios of group IIA, IIIA, IVA, and IVB iron meteorites.
Abstract: Rhenium and osmium concentrations and osmium isotopic ratios of group IIA, IIIA, IVA, and IVB iron meteorites were determined by negative thermal ionization mass spectrometry with modified digestion and equilibration techniques. Precise isochrons are defined for all four groups. An absolute age of 4558 million years is assumed for group IIIA irons, leading to closure ages of 4537 ± 8, 4456 ± 25, and 4527 ± 29 million years for the group IIA, IVA, and IVB irons, respectively. The initial osmium-187/osmium-188 ratios of the IIA, IIIA, and IVA groups, as a function of crystallization age, suggest that the rhenium/osmium ratio of the parental materials to these asteroidal cores was similar to that of ordinary chondrites. Data for IVB irons, in contrast, indicate a different mode of formation and derivation from a distinctly nonchondritic osmium reservoir.
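The closure ages quoted above rest on the standard Re-Os isochron relation, with λ the 187Re decay constant and t the closure age:

```latex
% Standard Re-Os isochron equation underlying the quoted closure ages.
\left(\frac{^{187}\mathrm{Os}}{^{188}\mathrm{Os}}\right)_{\mathrm{measured}}
  = \left(\frac{^{187}\mathrm{Os}}{^{188}\mathrm{Os}}\right)_{0}
  + \frac{^{187}\mathrm{Re}}{^{188}\mathrm{Os}}\,\bigl(e^{\lambda t} - 1\bigr)
```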

1,206 citations


Journal ArticleDOI
TL;DR: In this article, the authors used satellite data to specify the time-varying phenological properties of FPAR, leaf area index, and canopy greenness fraction, and applied corrections to the source NDVI dataset to account for anomalies in the data time series, the effect of variations in solar zenith angle, data dropouts in cold regions where a temperature threshold procedure designed to screen for clouds also eliminated cold land surface points, and persistent cloud cover in the Tropics.
Abstract: The global parameter fields used in the revised Simple Biosphere Model (SiB2) of Sellers et al. are reviewed. The most important innovation over the earlier SiB1 parameter set of Dorman and Sellers is the use of satellite data to specify the time-varying phenological properties of FPAR, leaf area index, and canopy greenness fraction. This was done by processing a monthly 1° by 1° normalized difference vegetation index (NDVI) dataset obtained from Advanced Very High Resolution Radiometer red and near-infrared data. Corrections were applied to the source NDVI dataset to account for (i) obvious anomalies in the data time series, (ii) the effect of variations in solar zenith angle, (iii) data dropouts in cold regions where a temperature threshold procedure designed to screen for clouds also eliminated cold land surface points, and (iv) persistent cloud cover in the Tropics. An outline of the procedures for calculating the land surface parameters from the corrected NDVI dataset is given, and a brief d...
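For context, NDVI is the standard normalized ratio of near-infrared and red reflectances (general definition, not specific to this paper):

```latex
% NDVI as computed from AVHRR red and near-infrared reflectances.
\mathrm{NDVI} \;=\; \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}}
```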

Journal ArticleDOI
TL;DR: A revised and expanded version of SharedPlans that reformulates Pollack's (1990) definition of individual plans to handle cases in which a single agent has only partial knowledge and has the features required by Bratman's (1992) account of shared cooperative activity.

Journal ArticleDOI
TL;DR: It is suggested that those who employ the TAM measures should continue using the original (grouped) format in order to best predict and explain user acceptance of information technology.
Abstract: The Technology Acceptance Model (TAM) is widely used by researchers and practitioners to predict and explain user acceptance of information technologies. TAM models system usage intentions and behavior as a function of perceived usefulness and perceived ease of use. The original scales for measuring the TAM constructs have been confirmed to be reliable and valid in several replications and applications spanning a range of technologies and user populations. However, a measurement bias may be present because the TAM instrument physically groups together the multiple items measuring each individual construct. Many scholars of instrument design would object to such item grouping, instead advocating that items from different constructs be intermixed in order to reduce "carryover" effects among the responses to multiple items targeting a specific construct, which might artificially inflate the observed reliability and validity. Three experiments involving two systems and a total of 708 subjects are reported which address whether such carryover biases are present in the TAM measures. All three studies found that item grouping vs. item intermixing had no significant effect (positive or negative) either on the high levels of reliability and validity of the TAM scales, or on the path coefficients linking them together. Ironically, open-ended verbal evaluations indicated that subjects were more confused and annoyed when items were intermixed, suggesting a tendency toward "output interference" effects, which themselves could have a biasing effect. Our findings suggest that those who employ the TAM measures should continue using the original (grouped) format in order to best predict and explain user acceptance of information technology.

Journal ArticleDOI
TL;DR: In this paper, an expert panel was convened to conduct a comprehensive aquatic ecological risk assessment based on several newly suggested procedures and included exposure and hazard subcomponents as well as the overall risk assessment.
Abstract: The triazine herbicide atrazine (2-chloro-4-ethylamino-6-isopropyl-amino-s-triazine) is one of the most used pesticides in North America. Atrazine is principally used for control of certain annual broadleaf and grass weeds, primarily in corn but also in sorghum, sugarcane, and, to a lesser extent, other crops and landscaping. Atrazine is found in many surface and ground waters in North America, and aquatic ecological effects are a possible concern for the regulatory and regulated communities. To address these concerns an expert panel (the Panel) was convened to conduct a comprehensive aquatic ecological risk assessment. This assessment was based on several newly suggested procedures and included exposure and hazard subcomponents as well as the overall risk assessment. The Panel determined that use of probabilistic risk assessment techniques was appropriate. Here, the results of this assessment are presented as a case study for these techniques. The environmental exposure assessment concentrated on monitoring data from Midwestern watersheds, the area of greatest atrazine use in North America. This analysis revealed that atrazine concentrations rarely exceed 20 μg/L in rivers and streams that were the main focus of the aquatic ecological risk assessment. Following storm runoff, biota in lower-order streams may be exposed to pulses of atrazine greater than 20 μg/L, but these exposures are short-lived. The assessment also considered exposures in lakes and reservoirs. The principal data set was developed by the U.S. Geological Survey, which monitored residues in 76 Midwestern reservoirs in 11 states in 1992-1993. Residue concentrations in some reservoirs were similar to those in streams but persisted longer. Atrazine residues were widespread in reservoirs (92% occurrence), and the 90th percentile of this exposure distribution for early June to July was about 5 μg/L. Mathematical simulation models of chemical fate were used to generalize the exposure analysis to other sites and to assess the potential effects of reduction in the application rates. Models were evaluated, modified, and calibrated against available monitoring data to validate that these models could predict atrazine runoff. PRZM-2 overpredicted atrazine concentrations by about an order of magnitude, whereas GLEAMS underpredicted by a factor of 2 to 5. Thus, exposure models were not used to extrapolate to other regions of atrazine use in this assessment. The effects assessment considered both freshwater and saltwater toxicity test results. Phytoplankton were the most sensitive organisms, followed, in decreasing order of sensitivity, by macrophytes, benthic invertebrates, zooplankton, and fish. Atrazine inhibits photophosphorylation but typically does not result in lethality or permanent cell damage in the short term. This characteristic of atrazine required a different model than typically used for understanding the potential impact in aquatic systems, where lethality or nonreversible effects are usually assumed. In addition, recovery of phytoplankton from exposure to 5 to 20 μg/L atrazine was demonstrated. In some mesocosm field experiments, phytoplankton and macrophytes were reduced after atrazine exposures greater than 20 μg/L. However, populations were quickly reestablished, even while atrazine residues persisted in the water. Effects in field studies were judged to be ecologically important only at exposures of 50 μg/L or greater. 
Mesocosm experiments did not reveal disruption of either ecosystem structure or function at atrazine concentrations typically encountered in the environment (generally 5 μg/L or less). Based on an integration of laboratory bioassay data, field effects studies, and environmental monitoring data from watersheds in high-use areas in the Midwestern United States, the Panel concluded that atrazine does not pose a significant risk to the aquatic environment. Although some inhibitory effects on algae, phytoplankton, or macrophyte production may occur in small streams vulnerable to agricultural runoff, these effects are likely to be transient, and quick recovery of the ecological system is expected. A subset of surface waters, principally small reservoirs in areas with intensive use of atrazine, may be at greater risk of exposure to atrazine. Therefore, it is recommended that site-specific risk assessments be conducted at these sites to assess possible ecological effects in the context of the uses to which these ecosystems are put and the effectiveness and cost-benefit aspect of any risk mitigation measures that may be applied.

Posted Content
TL;DR: In this article, the authors investigated the role of institutional factors in explaining firms' choice of debt maturity in a sample of 30 countries during 1980-91 and found that firms in developing countries use less long-term debt than similar firms in industrial countries.
Abstract: Do firms in developing countries use less long term debt than similar firms in industrial countries? This paper investigates the role of institutional factors in explaining firms' choice of debt maturity in a sample of 30 countries during 1980-91. Demirguc-Kunt and Maksimovic examine the maturity of firm debt in 30 countries during the period 1980-91. They find systematic differences in the use of long-term debt between industrial and developing countries and between small and large firms. In industrial countries, firms have more long-term debt and a greater proportion of their total debt is held as long-term debt. Large firms have more long-term debt, as a proportion of total assets and debt, than smaller firms do. The authors try to explain the variations in debt composition by differences in the effectiveness of legal systems, the development of stock markets and the banking sector, the level of government subsidies, and firm characteristics. In countries with an effective legal system, both large and small firms have more long-term debt relative to assets and their debt is of longer maturity. Both large and small firms in countries with a tradition of common law use less long-term debt, relative to their assets, than do firms in countries with a tradition of civil law. Large firms in common law countries also use less short-term debt. In countries with active stock markets, large firms have more long-term debt and debt of longer maturity. Neither the level of activity nor the size of the market is correlated with financing choices of small firms. By contrast, in countries with large banking sectors, small firms have less short-term debt and their debt is of longer maturity. Variation in the size of the banking sector does not have a corresponding correlation with the capital structures of large firms. Government subsidies to industry increase long-term debt levels of both small and large firms. For all firms, inflation is associated with less use of long-term debt. The authors also find evidence of maturity-matching for both large and small firms. This paper - a product of the Finance and Private Sector Development Division, Policy Research Department - is part of a larger effort in the department to understand the impact of institutional constraints on firms' financing choices. The study was funded by the Bank's Research Support Budget under the research project Term Finance: Theory and Evidence (RPO 679-62).

Book ChapterDOI
TL;DR: In this article, the authors consider a group of roving bandits in an anarchic environment, where there is little incentive to invest or produce, and therefore not much to steal.
Abstract: Consider the interests of the leader of a group of roving bandits in an anarchic environment. In such an environment, there is little incentive to invest or produce, and therefore not much to steal. If the bandit leader can seize and hold a given territory, it will pay him to limit the rate of his theft in that domain and to provide a peaceful order and other public goods. By making it clear that he will take only a given percentage of output — that is, by becoming a settled ruler with a given rate of tax theft — he leaves his victims with an incentive to produce. By providing a peaceful order and other public goods, he makes his subjects more productive. Out of the increase in output that results from limiting his rate of theft and from providing public goods, the bandit obtains more resources for his own purposes than from roving banditry.

Proceedings ArticleDOI
18 Jun 1996
TL;DR: A vision system for the 3-D model-based tracking of unconstrained human movement and initial tracking results from a large new Humans-in-Action database containing more than 2500 frames in each of four orthogonal views are presented.
Abstract: We present a vision system for the 3-D model-based tracking of unconstrained human movement. Using image sequences acquired simultaneously from multiple views, we recover the 3-D body pose at each time instant without the use of markers. The pose-recovery problem is formulated as a search problem and entails finding the pose parameters of a graphical human model whose synthesized appearance is most similar to the actual appearance of the real human in the multi-view images. The models used for this purpose are acquired from the images. We use a decomposition approach and a best-first technique to search through the high dimensional pose parameter space. A robust variant of chamfer matching is used as a fast similarity measure between synthesized and real edge images. We present initial tracking results from a large new Humans-in-Action (HIA) database containing more than 2500 frames in each of four orthogonal views. They contain subjects involved in a variety of activities, of various degrees of complexity, ranging from the more simple one-person hand waving to the challenging two-person close interaction in the Argentine Tango.
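As an illustration of the similarity measure named above, here is a minimal chamfer-distance sketch; the paper's robust variant and its decomposition/best-first search over pose parameters are more involved, and the function name is ours.

```python
# Minimal chamfer matching sketch: mean distance from each synthesized edge
# pixel to the nearest real edge pixel, via a distance transform of the real
# edge map. A pose search would minimize this score over rendered model poses.
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(real_edges: np.ndarray, synth_edges: np.ndarray) -> float:
    """Both inputs are boolean arrays of equal shape (True = edge pixel)."""
    # Distance from every pixel to the nearest real edge pixel.
    dist_to_real = distance_transform_edt(~real_edges)
    return float(dist_to_real[synth_edges].mean())
```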

Journal ArticleDOI
TL;DR: In this article, the authors introduce the concept of a geochemical switch, in which HS- transforms the marine behavior of a conservative element to that of a particle-reactive element; the action point of the HS- switch is calculated to be a(HS-) = 10^-3.6 to 10^-4.3.


Book ChapterDOI
TL;DR: In this paper, it was shown that the effective transport of active N and P from land to the shelf through very large rivers is reduced to 292 · 10^9 moles y^-1 of N and 13 · 10^9 moles y^-1 of P.
Abstract: Five large rivers that discharge on the western North Atlantic continental shelf carry about 45% of the nitrogen (N) and 70% of the phosphorus (P) that others estimate to be the total flux of these elements from the entire North Atlantic watershed, including North, Central and South America, Europe, and Northwest Africa. We estimate that 61 · 10^9 moles y^-1 of N and 20 · 10^9 moles y^-1 of P from the large rivers are buried with sediments in their deltas, and that an equal amount of N and P from the large rivers is lost to the shelf through burial of river sediments that are deposited directly on the continental slope. The effective transport of active N and P from land to the shelf through the very large rivers is thus reduced to 292 · 10^9 moles y^-1 of N and 13 · 10^9 moles y^-1 of P.
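Reading the budget backwards (units of 10^9 moles y^-1), the gross large-river fluxes implied by these figures, back-computed by us rather than stated in the abstract, are:

```latex
% gross flux = effective transport + delta burial + slope burial
\mathrm{N}: \quad 292 + 61 + 61 = 414, \qquad
\mathrm{P}: \quad 13 + 20 + 20 = 53 .
```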

Journal ArticleDOI
TL;DR: This work uses a mathematical framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling); the authors believe that the formalisms and properties they introduce are convenient and intuitive and that the framework contributes constructively to a firmer theoretical ground of software measurement.
Abstract: Little theory exists in the field of software system measurement. Concepts such as complexity, coupling, cohesion or even size are very often subject to interpretation and appear to have inconsistent definitions in the literature. As a consequence, there is little guidance provided to the analyst attempting to define proper measures for specific problems. Many controversies in the literature are simply misunderstandings and stem from the fact that some people talk about different measurement concepts under the same label (complexity is the most common case). There is a need to define unambiguously the most important measurement concepts used in the measurement of software products. One way of doing so is to define precisely what mathematical properties characterize these concepts, regardless of the specific software artifacts to which these concepts are applied. Such a mathematical framework could generate a consensus in the software engineering community and provide a means for better communication among researchers, better guidelines for analysts, and better evaluation methods for commercial static analyzers for practitioners. We propose a mathematical framework which is generic, because it is not specific to any particular software artifact, and rigorous, because it is based on precise mathematical concepts. We use this framework to propose definitions of several important measurement concepts (size, length, complexity, cohesion, coupling). It does not intend to be complete or fully objective; other frameworks could have been proposed and different choices could have been made. However, we believe that the formalisms and properties we introduce are convenient and intuitive. This framework contributes constructively to a firmer theoretical ground of software measurement.
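As a flavor of the property-based style (our paraphrase, not a quotation from the paper): a size measure, for instance, is required to be additive over modules with no elements in common,

```latex
% Illustrative size property: additivity over disjoint modules.
\mathrm{Size}(S_1 \cup S_2) \;=\; \mathrm{Size}(S_1) + \mathrm{Size}(S_2)
\quad \text{whenever } S_1 \cap S_2 = \varnothing ,
```

which is one way such a framework separates size from concepts like complexity that need not behave this way.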

Journal ArticleDOI
TL;DR: In this article, the authors developed the geometry and dynamics of nonholonomic systems using an Ehresmann connection to model the constraints, and showed how the curvature of this connection entered into Lagrange's equations.
Abstract: This work develops the geometry and dynamics of mechanical systems with nonholonomic constraints and symmetry from the perspective of Lagrangian mechanics and with a view to control-theoretical applications. The basic methodology is that of geometric mechanics applied to the Lagrange-d'Alembert formulation, generalizing the use of connections and momentum maps associated with a given symmetry group to this case. We begin by formulating the mechanics of nonholonomic systems using an Ehresmann connection to model the constraints, and show how the curvature of this connection enters into Lagrange's equations. Unlike the situation with standard configuration space constraints, the presence of symmetries in the nonholonomic case may or may not lead to conservation laws. However, the momentum map determined by the symmetry group still satisfies a useful differential equation that decouples from the group variables. This momentum equation, which plays an important role in control problems, involves parallel transport operators and is computed explicitly in coordinates. An alternative description using a "body reference frame" relates part of the momentum equation to the components of the Euler-Poincaré equations along those symmetry directions consistent with the constraints. One of the purposes of this paper is to derive this evolution equation for the momentum and to distinguish geometrically and mechanically the cases where it is conserved and those where it is not. An example of the former is a ball or vertical disk rolling on a flat plane and an example of the latter is the snakeboard, a modified version of the skateboard which uses momentum coupling for locomotion generation. We construct a synthesis of the mechanical connection and the Ehresmann connection defining the constraints, obtaining an important new object we call the nonholonomic connection. When the nonholonomic connection is a principal connection for the given symmetry group, we show how to perform Lagrangian reduction in the presence of nonholonomic constraints, generalizing previous results which only held in special cases. Several detailed examples are given to illustrate the theory.
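The starting point named above is the Lagrange-d'Alembert principle, which in its standard statement (with D the constraint distribution) reads:

```latex
% Variations and velocities are both required to lie in the constraint
% distribution D; this is the standard form, not quoted from the paper.
\delta \int_a^b L(q, \dot{q})\, dt = 0,
\qquad \delta q(t) \in D_{q(t)}, \quad \dot{q}(t) \in D_{q(t)} .
```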

Journal ArticleDOI
TL;DR: The main new results are that the metric dimension of a graph with n nodes can be approximated in polynomial time within a factor of O(log n), and some properties of graphs with metric dimension two are presented.
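The O(log n) factor comes from a greedy, set-cover-style argument; below is a minimal sketch of that idea (assuming a precomputed all-pairs distance table d, and not taken verbatim from the paper).

```python
# Greedy sketch behind approximating metric dimension: repeatedly add the
# vertex (landmark) that distinguishes the most still-indistinguishable
# vertex pairs. d[w][u] is the graph distance between w and u, assumed
# precomputed (e.g. by BFS from every vertex). Terminates because any pair
# (u, v) is distinguished by u itself (d[u][u] = 0 < d[u][v]).
from itertools import combinations

def greedy_resolving_set(vertices, d):
    unresolved = set(combinations(vertices, 2))
    landmarks = []
    while unresolved:
        best = max(vertices,
                   key=lambda w: sum(d[w][u] != d[w][v] for u, v in unresolved))
        landmarks.append(best)
        unresolved = {(u, v) for (u, v) in unresolved if d[best][u] == d[best][v]}
    return landmarks
```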

Book
01 Jan 1996
TL;DR: Turner argues that story, projection, and parable precede grammar, and that language follows from these mental capacities as a consequence, concluding that language is the child of the literary mind.
Abstract: We usually consider literary thinking to be peripheral and dispensable, an activity for specialists: poets, prophets, lunatics, and babysitters. Certainly we do not think it is the basis of the mind. We think of stories and parables from Aesop's Fables or The Thousand and One Nights, for example, as exotic tales set in strange lands, with spectacular images, talking animals, and fantastic plots: wonderful entertainments, often insightful, but well removed from logic and science, and entirely foreign to the world of everyday thought. But Mark Turner argues that this common wisdom is wrong. The literary mind, the mind of stories and parables, is not peripheral but basic to thought. Story is the central principle of our experience and knowledge. Parable, the projection of story to give meaning to new encounters, is the indispensable tool of everyday reason. Literary thought makes everyday thought possible. This book makes the revolutionary claim that the basic issue for cognitive science is the nature of literary thinking. In The Literary Mind, Turner ranges from the tools of modern linguistics, to the recent work of neuroscientists such as Antonio Damasio and Gerald Edelman, to literary masterpieces by Homer, Dante, Shakespeare, and Proust, as he explains how story and projection, and their powerful combination in parable, are fundamental to everyday thought. In simple and traditional English, he reveals how we use parable to understand space and time, to grasp what it means to be located in space and time, and to conceive of ourselves, other selves, other lives, and other viewpoints. He explains the role of parable in reasoning, in categorizing, and in solving problems. He develops a powerful model of conceptual construction and, in a far-reaching final chapter, extends it to a new conception of the origin of language that contradicts proposals by such thinkers as Noam Chomsky and Steven Pinker. Turner argues that story, projection, and parable precede grammar, that language follows from these mental capacities as a consequence. Language, he concludes, is the child of the literary mind. Offering major revisions to our understanding of thought, conceptual activity, and the origin and nature of language, The Literary Mind presents a unified theory of central problems in cognitive science, linguistics, neuroscience, psychology, and philosophy. It gives new and unexpected answers to classic questions about knowledge, creativity, understanding, reason, and invention.

Journal ArticleDOI
TL;DR: In this paper, the authors describe what has been found during 30 years of research by the author and others on the relationship between conscious performance goals and performance on work tasks, and summarize the basic contents of goal setting theory in terms of 14 categories of findings.
Abstract: The article describes what has been found during 30 years of research by the author and others on the relationship between conscious performance goals and performance on work tasks. This approach is contrasted with previous approaches to motivation theory which stressed physiological, external or subconscious causes of action. The basic contents of goal setting theory are summarized in terms of 14 categories of findings. An applied example is provided.

Journal ArticleDOI
TL;DR: In this paper, a nonlinear principal component analysis (NLPCA) method which integrates the principal curve algorithm and neural networks is presented; when applied to data sets, however, the algorithm does not yield an NLPCA model in the sense of principal loadings.
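To illustrate the general neural-network route to nonlinear PCA, here is an autoencoder-style sketch of ours; the paper's method additionally incorporates the principal curve algorithm, which this snippet omits.

```python
# Autoencoder-style sketch of nonlinear PCA: a network with a one-unit
# bottleneck trained to reproduce its input; the bottleneck activation
# plays the role of a single nonlinear principal component.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = rng.uniform(-1.0, 1.0, size=(500, 1))
X = np.hstack([t, t ** 2]) + 0.05 * rng.standard_normal((500, 2))  # noisy parabola

ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)  # target = input: learn the identity map through the bottleneck
print("reconstruction MSE:", float(np.mean((ae.predict(X) - X) ** 2)))
```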

Journal ArticleDOI
TL;DR: In this paper, the authors focus on one key reason why introduced changes fail to alter the fundamental psychology or "feel" of the organization to its members, and argue that without changing this psychology, there can be no sustained change.


Proceedings ArticleDOI
13 Apr 1996
TL;DR: The paper describes the use of LifeLines for youth records of the Maryland Department of Juvenile Justice and for medical records; techniques to deal with complex records are reviewed and issues of a standard personal record format are discussed.
Abstract: LifeLines provide a general visualization environment for personal histories that can be applied to medical and court records, professional histories and other types of biographical data. A one-screen overview shows multiple facets of the records. Aspects, for example medical conditions or legal cases, are displayed as individual time lines, while icons indicate discrete events, such as physician consultations or legal reviews. Line color and thickness illustrate relationships or significance, while rescaling tools and filters allow users to focus on part of the information. LifeLines reduce the chances of missing information, facilitate spotting anomalies and trends, and streamline access to details, while remaining tailorable and easily transferable between applications. The paper describes the use of LifeLines for youth records of the Maryland Department of Juvenile Justice and also for medical records. Users' feedback was collected using a Visual Basic prototype for the youth record. Techniques to deal with complex records are reviewed and issues of a standard personal record format are discussed.
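As a sketch of the kind of record structure such a standard format might capture (hypothetical and ours, not the paper's proposal):

```python
# Hypothetical sketch of a LifeLines-style personal record: named facets
# (e.g. medical conditions, legal cases) holding discrete events and
# interval episodes, with a significance attribute a renderer could map
# to line color and thickness.
from dataclasses import dataclass, field

@dataclass
class Event:
    date: str                 # ISO date, e.g. "1996-04-13"
    label: str                # e.g. "physician consultation", "legal review"
    significance: int = 1     # mapped to icon emphasis / line thickness

@dataclass
class Episode:
    start: str
    end: str
    label: str                # e.g. "medication course", "probation period"
    significance: int = 1

@dataclass
class Facet:
    name: str                 # e.g. "medical conditions", "legal cases"
    events: list = field(default_factory=list)
    episodes: list = field(default_factory=list)

@dataclass
class PersonalRecord:
    person: str
    facets: list = field(default_factory=list)
```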

Journal ArticleDOI
TL;DR: This article presents compiler optimizations to improve data locality based on a simple yet accurate cost model; performance improvements were difficult to achieve because benchmark programs typically already have high hit rates, but several programs were significantly improved.
Abstract: In the past decade, processor speed has become significantly faster than memory speed. Small, fast cache memories are designed to overcome this discrepancy, but they are only effective when programs exhibit data locality. In this article, we present compiler optimizations to improve data locality based on a simple yet accurate cost model. The model computes both temporal and spatial reuse of cache lines to find desirable loop organizations. The cost model drives the application of compound transformations consisting of loop permutation, loop fusion, loop distribution, and loop reversal. To validate our optimization strategy, we implemented our algorithms and ran experiments on a large collection of scientific programs and kernels. Experiments illustrate that for kernels our model and algorithm can select and achieve the best loop structure for a nest. For over 30 complete applications, we executed the original and transformed versions and simulated cache hit rates. We collected statistics about the inherent characteristics of these programs and our ability to improve their data locality. To our knowledge, these studies are the first of such breadth and depth. We found performance improvements were difficult to achieve because benchmark programs typically have high hit rates even for small data caches; however, our optimizations significantly improved several programs.
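To see the kind of reuse the cost model captures, here is a small illustration of ours of loop permutation on a row-major array; in compiled code this is the classic loop-interchange win, and interpreter overhead mutes but does not erase the effect here.

```python
# Loop permutation and spatial locality: a C-ordered (row-major) NumPy array
# is laid out row by row, so the i-outer/j-inner order walks memory with unit
# stride, while the permuted order strides a full row length per access.
import time
import numpy as np

a = np.ones((2048, 2048))  # row-major (C order) by default

def sum_ij(a):             # good locality: unit-stride inner loop
    s = 0.0
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            s += a[i, j]
    return s

def sum_ji(a):             # poor locality: inner loop strides across rows
    s = 0.0
    for j in range(a.shape[1]):
        for i in range(a.shape[0]):
            s += a[i, j]
    return s

for f in (sum_ij, sum_ji):
    t0 = time.perf_counter()
    f(a)
    print(f"{f.__name__}: {time.perf_counter() - t0:.2f} s")
```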