
Showing papers by "Carnegie Mellon University" published in 1994


01 Jan 1994
TL;DR: In this article, the authors present a protocol for routing in ad hoc networks that uses dynamic source routing, which adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently.
Abstract: An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any established infrastructure or centralized administration. In such an environment, it may be necessary for one mobile host to enlist the aid of other hosts in forwarding a packet to its destination, due to the limited range of each mobile host’s wireless transmissions. This paper presents a protocol for routing in ad hoc networks that uses dynamic source routing. The protocol adapts quickly to routing changes when host movement is frequent, yet requires little or no overhead during periods in which hosts move less frequently. Based on results from a packet-level simulation of mobile hosts operating in an ad hoc network, the protocol performs well over a variety of environmental conditions such as host density and movement rates. For all but the highest rates of host movement simulated, the overhead of the protocol is quite low, falling to just 1% of total data packets transmitted for moderate movement rates in a network of 24 mobile hosts. In all cases, the difference in length between the routes used and the optimal route lengths is negligible, and in most cases, route lengths are on average within a factor of 1.01 of optimal.

8,614 citations
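
The core of the protocol's route-discovery phase is easy to sketch. The toy below is an illustration only, not the protocol itself: a static adjacency map stands in for real radio broadcasts, and the function name and arguments are invented for this example. A route request floods outward from the sender, each copy accumulating the hops it has traversed, and the first copy to reach the target yields a complete source route.

from collections import deque

def discover_route(neighbors, source, target):
    """Flood a route request and return the first full source route.
    `neighbors` maps each host to the hosts within radio range: a
    static snapshot standing in for real broadcasts. Each request
    carries the route it has traversed so far; a host rebroadcasts
    a given request only once."""
    seen = {source}
    queue = deque([[source]])            # each entry is a partial route
    while queue:
        route = queue.popleft()
        host = route[-1]
        if host == target:
            return route                 # returned to the sender for use
        for nxt in neighbors.get(host, ()):
            if nxt not in seen:          # suppress duplicate rebroadcasts
                seen.add(nxt)
                queue.append(route + [nxt])
    return None                          # no route currently exists

# Example: A reaches C only through B.
print(discover_route({"A": ["B"], "B": ["A", "C"], "C": ["B"]}, "A", "C"))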


Journal ArticleDOI
TL;DR: Examination of the scale on somewhat different grounds, however, does suggest that future applications can benefit from its revision, and a minor modification to the Life Orientation Test is described, along with data bearing on the revised scale's psychometric properties.
Abstract: Research on dispositional optimism as assessed by the Life Orientation Test (Scheier & Carver, 1985) has been challenged on the grounds that effects attributed to optimism are indistinguishable from those of unmeasured third variables, most notably, neuroticism. Data from 4,309 subjects show that associations between optimism and both depression and aspects of coping remain significant even when the effects of neuroticism, as well as the effects of trait anxiety, self-mastery, and self-esteem, are statistically controlled. Thus, the Life Orientation Test does appear to possess adequate predictive and discriminant validity. Examination of the scale on somewhat different grounds, however, does suggest that future applications can benefit from its revision. Thus, we also describe a minor modification to the Life Orientation Test, along with data bearing on the revised scale's psychometric properties.

6,395 citations
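
The phrase "statistically controlled" can be made concrete with a small regression sketch. The data below are synthetic and the coefficients invented; the point is only to show the mechanics of controlling a correlated trait, as the study did with neuroticism, trait anxiety, self-mastery, and self-esteem on real data.

import numpy as np

rng = np.random.default_rng(0)
n = 500
neuroticism = rng.normal(size=n)
optimism = -0.5 * neuroticism + rng.normal(size=n)    # correlated traits
depression = neuroticism - 0.4 * optimism + rng.normal(size=n)

# Regress depression on optimism alone, then with neuroticism controlled.
X1 = np.column_stack([np.ones(n), optimism])
X2 = np.column_stack([np.ones(n), optimism, neuroticism])
b1, *_ = np.linalg.lstsq(X1, depression, rcond=None)
b2, *_ = np.linalg.lstsq(X2, depression, rcond=None)
print("optimism coefficient, unadjusted:", round(b1[1], 2))   # ~ -0.8
print("optimism coefficient, adjusted:  ", round(b2[1], 2))   # ~ -0.4

The unadjusted coefficient overstates the association; with the confounder in the model the optimism effect shrinks but persists, which is the pattern of reasoning behind the paper's validity claim.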


Journal ArticleDOI
K. Hagiwara, Ken Ichi Hikasa1, Koji Nakamura, Masaharu Tanabashi1, M. Aguilar-Benitez, Claude Amsler2, R. M. Barnett3, Patricia R. Burchat4, C. D. Carone5, C. Caso, G. Conforto6, Olav Dahl3, Michael Doser7, Semen Eidelman8, Jonathan L. Feng9, L. K. Gibbons10, Maury Goodman11, Christoph Grab12, D. E. Groom3, Atul Gurtu7, Atul Gurtu13, K. G. Hayes14, J. J. Hernández-Rey15, K. Honscheid16, Christopher Kolda17, Michelangelo L. Mangano7, David Manley18, Aneesh V. Manohar19, John March-Russell7, Alberto Masoni, Ramon Miquel3, Klaus Mönig, Hitoshi Murayama20, Hitoshi Murayama3, S. Sánchez Navas12, Keith A. Olive21, Luc Pape7, C. Patrignani, A. Piepke22, Matts Roos23, John Terning24, Nils A. Tornqvist23, T. G. Trippe3, Petr Vogel25, C. G. Wohl3, Ron L. Workman26, W-M. Yao3, B. Armstrong3, P. S. Gee3, K. S. Lugovsky, S. B. Lugovsky, V. S. Lugovsky, Marina Artuso27, D. Asner28, K. S. Babu29, E. L. Barberio7, Marco Battaglia7, H. Bichsel30, O. Biebel31, Philippe Bloch7, Robert N. Cahn3, Ariella Cattai7, R. S. Chivukula32, R. Cousins33, G. A. Cowan34, Thibault Damour35, K. Desler, R. J. Donahue3, D. A. Edwards, Victor Daniel Elvira, Jens Erler36, V. V. Ezhela, A Fassò7, W. Fetscher12, Brian D. Fields37, B. Foster38, Daniel Froidevaux7, Masataka Fukugita39, Thomas K. Gaisser40, L. Garren, H.-J. Gerber12, Frederick J. Gilman41, Howard E. Haber42, C. A. Hagmann28, J.L. Hewett4, Ian Hinchliffe3, Craig J. Hogan30, G. Höhler43, P. Igo-Kemenes44, John David Jackson3, Kurtis F Johnson45, D. Karlen, B. Kayser, S. R. Klein3, Konrad Kleinknecht46, I.G. Knowles47, P. Kreitz4, Yu V. Kuyanov, R. Landua7, Paul Langacker36, L. S. Littenberg48, Alan D. Martin49, Tatsuya Nakada50, Tatsuya Nakada7, Meenakshi Narain32, Paolo Nason, John A. Peacock47, Helen R. Quinn4, Stuart Raby16, Georg G. Raffelt31, E. A. Razuvaev, B. Renk46, L. Rolandi7, Michael T Ronan3, L.J. Rosenberg51, Christopher T. Sachrajda52, A. I. Sanda53, Subir Sarkar54, Michael Schmitt55, O. Schneider50, Douglas Scott56, W. G. Seligman57, Michael H. Shaevitz57, Torbjörn Sjöstrand58, George F. Smoot3, Stefan M Spanier4, H. Spieler3, N. J. C. Spooner59, Mark Srednicki60, A. Stahl, Todor Stanev40, M. Suzuki3, N. P. Tkachenko, German Valencia61, K. van Bibber28, Manuella Vincter62, D. R. Ward63, Bryan R. Webber63, M R Whalley49, Lincoln Wolfenstein41, J. Womersley, C. L. Woody48, O. V. Zenin
Tohoku University1, University of Zurich2, Lawrence Berkeley National Laboratory3, Stanford University4, College of William & Mary5, University of Urbino6, CERN7, Budker Institute of Nuclear Physics8, University of California, Irvine9, Cornell University10, Argonne National Laboratory11, ETH Zurich12, Tata Institute of Fundamental Research13, Hillsdale College14, Spanish National Research Council15, Ohio State University16, University of Notre Dame17, Kent State University18, University of California, San Diego19, University of California, Berkeley20, University of Minnesota21, University of Alabama22, University of Helsinki23, Los Alamos National Laboratory24, California Institute of Technology25, George Washington University26, Syracuse University27, Lawrence Livermore National Laboratory28, Oklahoma State University–Stillwater29, University of Washington30, Max Planck Society31, Boston University32, University of California, Los Angeles33, Royal Holloway, University of London34, Université Paris-Saclay35, University of Pennsylvania36, University of Illinois at Urbana–Champaign37, University of Bristol38, University of Tokyo39, University of Delaware40, Carnegie Mellon University41, University of California, Santa Cruz42, Karlsruhe Institute of Technology43, Heidelberg University44, Florida State University45, University of Mainz46, University of Edinburgh47, Brookhaven National Laboratory48, Durham University49, University of Lausanne50, Massachusetts Institute of Technology51, University of Southampton52, Nagoya University53, University of Oxford54, Northwestern University55, University of British Columbia56, Columbia University57, Lund University58, University of Sheffield59, University of California, Santa Barbara60, Iowa State University61, University of Alberta62, University of Cambridge63
TL;DR: This biennial Review summarizes much of Particle Physics using data from previous editions, plus 2205 new measurements from 667 papers, and features expanded coverage of CP violation in B mesons and of neutrino oscillations.
Abstract: This biennial Review summarizes much of Particle Physics. Using data from previous editions, plus 2205 new measurements from 667 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. This edition features expanded coverage of CP violation in B mesons and of neutrino oscillations. For the first time we cover searches for evidence of extra dimensions (both in the particle listings and in a new review). Another new review is on Grand Unified Theories. A booklet is available containing the Summary Tables and abbreviated versions of some of the other sections of this full Review. All tables, listings, and reviews (and errata) are also available on the Particle Data Group website: http://pdg.lbl.gov.

5,143 citations


Journal ArticleDOI
TL;DR: In this paper, a new account of curiosity is proposed that interprets curiosity as a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding.
Abstract: Research on curiosity has undergone 2 waves of intense activity. The 1st, in the 1960s, focused mainly on curiosity's psychological underpinnings. The 2nd, in the 1970s and 1980s, was characterized by attempts to measure curiosity and assess its dimensionality. This article reviews these contributions with a concentration on the 1st wave. It is argued that theoretical accounts of curiosity proposed during the 1st period fell short in 2 areas: They did not offer an adequate explanation for why people voluntarily seek out curiosity, and they failed to delineate situational determinants of curiosity. Furthermore, these accounts did not draw attention to, and thus did not explain, certain salient characteristics of curiosity: its intensity, transience, association with impulsivity, and tendency to disappoint when satisfied. A new account of curiosity is offered that attempts to address these shortcomings. The new account interprets curiosity as a form of cognitively induced deprivation that arises from the perception of a gap in knowledge or understanding. "Curiosity is the most superficial of all the affections; it changes its object perpetually; it has an appetite which is very sharp, but very easily satisfied; and it has always an appearance of giddiness, restlessness, and anxiety."

1,859 citations


Journal ArticleDOI
TL;DR: Using techniques similar to those involved in abstract interpretation, an abstract model of a program is constructed without ever examining the corresponding unabstracted model, and it is shown how this abstract model can be used to verify properties of the original program.
Abstract: We describe a method for using abstraction to reduce the complexity of temporal-logic model checking. Using techniques similar to those involved in abstract interpretation, we construct an abstract model of a program without ever examining the corresponding unabstracted model. We show how this abstract model can be used to verify properties of the original program. We have implemented a system based on these techniques, and we demonstrate their practicality using a number of examples, including a program representing a pipelined ALU circuit with over 10^1300 states.

1,398 citations
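
The construction at the heart of the method, existential abstraction, is simple to state: collapse concrete states through an abstraction map h, and keep an abstract transition wherever some concrete transition existed. The sketch below enumerates a concrete model for clarity, which is exactly what the paper avoids by working from the program text; it states the idea, not the authors' algorithm, and all names are illustrative.

def abstract_model(states, trans, h):
    """Existential abstraction: abstract state h(s) steps to h(t)
    whenever some concrete state s steps to t. Universal properties
    verified on the abstract model then carry over to the concrete
    program."""
    return {h(s) for s in states}, {(h(s), h(t)) for (s, t) in trans}

# An 8-bit counter abstracted by parity: 256 states collapse to 2.
states = range(256)
trans = {(s, (s + 1) % 256) for s in states}
print(abstract_model(states, trans, lambda s: s % 2))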


Journal ArticleDOI
TL;DR: A comprehensive overview of disk array technology is given, and implementation topics such as refining the basic RAID levels to improve performance and designing algorithms to maintain data consistency are discussed.
Abstract: Disk arrays were proposed in the 1980s as a way to use parallelism between multiple disks to improve aggregate I/O performance. Today they appear in the product lines of most major computer manufacturers. This article gives a comprehensive overview of disk arrays and provides a framework in which to organize current and future work. First, the article introduces disk technology and reviews the driving forces that have popularized disk arrays: performance and reliability. It discusses the two architectural techniques used in disk arrays: striping across multiple disks to improve performance and redundancy to improve reliability. Next, the article describes seven disk array architectures, called RAID (Redundant Arrays of Inexpensive Disks) levels 0–6 and compares their performance, cost, and reliability. It goes on to discuss advanced research and implementation topics such as refining the basic RAID levels to improve performance and designing algorithms to maintain data consistency. Last, the article describes six disk array prototypes or products and discusses future opportunities for research, with an annotated bibliography of disk array-related literature.

1,371 citations
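
The redundancy arithmetic behind the parity-based RAID levels is plain XOR, as this minimal sketch shows (function names are illustrative): parity is the XOR of a stripe's data blocks, and a single lost block is recovered by XORing the surviving blocks with the parity.

def parity(blocks):
    """XOR parity across the data blocks of one stripe."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct(surviving, stripe_parity):
    """Any single missing block is the XOR of the survivors and parity."""
    return parity(surviving + [stripe_parity])

stripe = [b"data-a", b"data-b", b"data-c"]
p = parity(stripe)
assert reconstruct([stripe[0], stripe[2]], p) == stripe[1]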


Journal ArticleDOI
TL;DR: This paper presents a way of specifying types that makes it convenient to define the subtype relation, and discusses the ramifications of this notion of subtyping on the design of type families.
Abstract: The use of hierarchy is an important component of object-oriented design. Hierarchy allows the use of type families, in which higher level supertypes capture the behavior that all of their subtypes have in common. For this methodology to be effective, it is necessary to have a clear understanding of how subtypes and supertypes are related. This paper takes the position that the relationship should ensure that any property proved about supertype objects also holds for its subtype objects. It presents two ways of defining the subtype relation, each of which meets this criterion, and each of which is easy for programmers to use. The subtype relation is based on the specifications of the sub- and supertypes; the paper presents a way of specifying types that makes it convenient to define the subtype relation. The paper also discusses the ramifications of this notion of subtyping on the design of type families.

1,212 citations
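
The substitution property can be illustrated with a small sketch, here in Python even though the paper works with formal specifications rather than any particular language; the types and names below are invented. A client verified against the supertype's specification must behave correctly when handed any subtype object.

class IntSet:
    """Supertype: an unordered collection of integers. One property
    a client may rely on: after insert(x), contains(x) is true."""
    def __init__(self):
        self._elems = set()
    def insert(self, x):
        self._elems.add(x)
    def contains(self, x):
        return x in self._elems

class CountingIntSet(IntSet):
    """A legal subtype: it adds behavior (counting inserts) without
    invalidating any property provable about IntSet objects."""
    def __init__(self):
        super().__init__()
        self.inserts = 0
    def insert(self, x):
        super().insert(x)
        self.inserts += 1

def client(s):
    s.insert(7)
    assert s.contains(7)    # must hold for every subtype of IntSet

client(IntSet())
client(CountingIntSet())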


Journal ArticleDOI
TL;DR: Three experiments on attitudes about sharing technical work and expertise in organizations are reported. Vignette-based measures of attitudes, derived from research on sensitive topics that are difficult to study in the field, show that attitudes about information sharing depend on the form of the information.
Abstract: As technology for information access improves, people have more opportunities to share information. A theory of information sharing is advanced and we report the results of three experiments on attitudes about sharing technical work and expertise in organizations. Based on research on sensitive topics difficult to study in the field, we derived vignette-based measures of attitudes. Subjects read a description of an employee's encounter with a previously unhelpful coworker who subsequently requested help, in the form of a computer program or computer advice. The influence of prosocial attitudes and organizational norms is inferred from subjects' support of sharing despite the previous unhelpful behavior of the coworker. Experiments 1 and 3 demonstrated that greater self-interest reduces support of sharing, but that a belief in organizational ownership of work encourages and mediates attitudes favoring sharing. Work experience and business schooling contribute to these attitudes. The theory asserts that information as expertise belongs to a special category of information that is part of people's identity and is self-expressive. Experiments 2 and 3 demonstrated that subjects felt computer expertise belonged more to its possessor than the computer program did but would share it more than the program. Hence, attitudes about information sharing depend on the form of the information. Sharing tangible information work may depend on prosocial attitudes and norms of organizational ownership; sharing expertise may depend on people's own self-expressive needs.

1,210 citations


Journal ArticleDOI
TL;DR: The idea of believability has long been studied and explored in literature, theater, film, radio drama, and other media, and traditional character animators are among those artists who have sought to create believable characters.
Abstract: Joseph Bates. There is a notion in the Arts of "believable character." It does not mean an honest or reliable character, but one that provides the illusion of life, thus permitting the audience's suspension of disbelief. The idea of believability has long been studied and explored in literature, theater, film, radio drama, and other media. Traditional character animators are among those artists who have sought to create believable characters, and the Disney animators of the 1930s made great strides toward this goal. The first page of the enormous classic reference work on Disney animation [12] begins with these words:

1,202 citations


Book ChapterDOI
08 May 1994
TL;DR: A new algorithm, D*, is introduced, capable of planning paths in unknown, partially known, and changing environments in an efficient, optimal, and complete manner.
Abstract: The task of planning trajectories for a mobile robot has received considerable attention in the research literature. Most of the work assumes the robot has a complete and accurate model of its environment before it begins to move; less attention has been paid to the problem of partially known environments. This situation occurs for an exploratory robot or one that must move to a goal location without the benefit of a floorplan or terrain map. Existing approaches plan an initial path based on known information and then modify the plan locally or replan the entire path as the robot discovers obstacles with its sensors, sacrificing optimality or computational efficiency respectively. This paper introduces a new algorithm, D*, capable of planning paths in unknown, partially known, and changing environments in an efficient, optimal, and complete manner.

1,183 citations
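
For contrast with D*, here is the naive strategy the paper improves upon: replan from scratch whenever the sensors reveal a blocked cell. The grid encoding and names are invented for illustration; D* produces the same optimal routes while repairing only the affected portion of the previous plan, which is where its efficiency comes from.

from collections import deque

def plan(grid, start, goal):
    """Breadth-first path on a 4-connected grid; None if blocked.
    `grid` maps cell -> True when the cell is (believed) blocked."""
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in grid and not grid[nxt] and nxt not in prev:
                prev[nxt] = cell
                frontier.append(nxt)
    return None

def navigate(true_grid, known_grid, start, goal):
    """Replan from scratch on every discovery; D* instead repairs
    only the affected part of the previous plan."""
    pos = start
    while pos != goal:
        path = plan(known_grid, pos, goal)
        if path is None:
            return None                 # no route given current knowledge
        nxt = path[1]
        if true_grid[nxt]:              # sensor reveals an obstacle
            known_grid[nxt] = True
        else:
            pos = nxt
    return pos

cells = {(x, y): False for x in range(3) for y in range(3)}
true_grid = dict(cells)
true_grid[(1, 1)] = True                # obstacle unknown to the robot
print(navigate(true_grid, dict(cells), (0, 0), (2, 2)))    # reaches (2, 2)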


Journal ArticleDOI
TL;DR: In this paper, the authors proposed an adaptive window selection method to select an appropriate window by evaluating the local variation of the intensity and the disparity within the window, which is based on a statistical model of the disparity distribution within a window.
Abstract: A central problem in stereo matching by computing correlation or sum of squared differences (SSD) lies in selecting an appropriate window size. The window size must be large enough to include enough intensity variation for reliable matching, but small enough to avoid the effects of projective distortion. If the window is too small and does not cover enough intensity variation, it gives a poor disparity estimate, because the signal (intensity variation) to noise ratio is low. If, on the other hand, the window is too large and covers a region in which the depth of scene points (i.e., disparity) varies, then the position of maximum correlation or minimum SSD may not represent correct matching due to different projective distortions in the left and right images. For this reason, a window size must be selected adaptively depending on local variations of intensity and disparity. The authors present a method to select an appropriate window by evaluating the local variation of the intensity and the disparity. The authors employ a statistical model of the disparity distribution within the window. This modeling enables the authors to assess how disparity variation, as well as intensity variation, within a window affects the uncertainty of disparity estimate at the center point of the window. As a result, the authors devise a method which searches for a window that produces the estimate of disparity with the least uncertainty for each pixel of an image: the method controls not only the size but also the shape (rectangle) of the window. The authors have embedded this adaptive-window method in an iterative stereo matching algorithm: starting with an initial estimate of the disparity map, the algorithm iteratively updates the disparity estimate for each point by choosing the size and shape of a window till it converges. The stereo matching algorithm has been tested on both synthetic and real images, and the quality of the disparity maps obtained demonstrates the effectiveness of the adaptive window method.
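
The baseline the paper builds on is fixed-window SSD matching, sketched below with NumPy (all parameters illustrative). The fixed window is exactly the compromise the abstract describes; the paper's method instead grows, shrinks, and reshapes the window per pixel based on the modeled uncertainty.

import numpy as np

def ssd_disparity(left, right, window=5, max_disp=16):
    """Per-pixel disparity chosen by minimizing the sum of squared
    differences over a fixed square window: the compromise baseline.
    (The paper selects window size and shape adaptively instead.)"""
    half = window // 2
    h, w = left.shape
    left, right = left.astype(float), right.astype(float)
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            errs = [np.sum((patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]) ** 2)
                    for d in range(max_disp)]
            disp[y, x] = int(np.argmin(errs))
    return disp

rng = np.random.default_rng(0)
r = rng.random((32, 64))
l = np.roll(r, 5, axis=1)                          # true disparity: 5 pixels
print(np.median(ssd_disparity(l, r)[8:24, 24:56]))   # ~5.0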

Journal ArticleDOI
TL;DR: This analysis is focused on the feedforward pathways from the entorhinal cortex to the dentate gyrus (DG) and region CA3 and finds that Hebbian synaptic modification facilitates completion but reduces separation, unless the strengths of synapses from inactive presynaptic units to active postsynaptic units are reduced (LTD).
Abstract: The hippocampus and related structures are thought to be capable of 1) representing cortical activity in a way that minimizes overlap of the representations assigned to different cortical patterns (pattern separation); and 2) modifying synaptic connections so that these representations can later be reinstated from partial or noisy versions of the cortical activity pattern that was present at the time of storage (pattern completion). We point out that there is a trade-off between pattern separation and completion and propose that the unique anatomical and physiological properties of the hippocampus might serve to minimize this trade-off. We use analytical methods to determine quantitative estimates of both separation and completion for specified parameterized models of the hippocampus. These estimates are then used to evaluate the role of various properties of the hippocampus, such as the activity levels seen in different hippocampal regions, synaptic potentiation and depression, the multi-layer connectivity of the system, and the relatively focused and strong mossy fiber projections. This analysis is focused on the feedforward pathways from the entorhinal cortex (EC) to the dentate gyrus (DG) and region CA3. Among our results are the following: 1) Hebbian synaptic modification (LTP) facilitates completion but reduces separation, unless the strengths of synapses from inactive presynaptic units to active postsynaptic units are reduced (LTD). 2) Multiple layers, as in EC to DG to CA3, allow the compounding of pattern separation, but not pattern completion. 3) The variance of the input signal carried by the mossy fibers is important for separation, not the raw strength, which may explain why the mossy fiber inputs are few and relatively strong, rather than many and relatively weak like the other hippocampal pathways. 4) The EC projects to CA3 both directly and indirectly via the DG, which suggests that the two-stage pathway may dominate during pattern separation and the one-stage pathway may dominate during completion; methods the hippocampus may use to enhance this effect are discussed.
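
Pattern separation itself is easy to demonstrate in a toy, though the sketch below is not the paper's analytical model: sparse, competitive coding through random weights reduces the overlap of two similar input patterns, loosely analogous to the low activity levels of the dentate gyrus. All sizes and the coding rule are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

def sparse_code(pattern, weights, k):
    """Keep only the k most strongly driven units active: a crude
    stand-in for sparse, competitive firing in the dentate gyrus."""
    drive = weights @ pattern
    out = np.zeros(weights.shape[0])
    out[np.argsort(drive)[-k:]] = 1.0
    return out

def overlap(a, b):
    return np.sum(a * b) / np.sum(a)

# Two cortical patterns sharing 80% of their active units.
n_in, n_out = 200, 1000
a = np.zeros(n_in)
a[:50] = 1.0
b = np.zeros(n_in)
b[10:60] = 1.0
W = rng.normal(size=(n_out, n_in))
ca = sparse_code(a, W, k=50)
cb = sparse_code(b, W, k=50)
print("input overlap: ", overlap(a, b))     # 0.8
print("output overlap:", overlap(ca, cb))   # markedly lower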

Journal ArticleDOI
TL;DR: This paper found that feelings of harm before an exam induced several kinds of coping after the exam, mostly dysfunctional, while confidence about one's grade was a better predictor of emotions throughout than was coping.
Abstract: After reporting dispositional coping styles, students reported situational coping and 4 classes of affect (from threat, challenge, harm, and benefit appraisals) 2 days before an exam, after the exam but before grades were posted, and after posting of grades. Coping did not predict lower levels of future distress; indeed, some coping seemed to induce feelings of threat. Feelings of harm before the exam induced several kinds of coping after the exam, mostly dysfunctional. Confidence about one's grade was a better predictor of emotions throughout than was coping. Dispositional coping predicted comparable situational coping at low-moderate levels in most cases. Coping dispositions did not reliably predict emotions, however, with these exceptions: Dispositional denial was related to threat, as was dispositional use of social support; dispositional use of alcohol was related to both threat and harm.

Proceedings ArticleDOI
08 Dec 1994
TL;DR: This position paper points out problems with using conventional routing protocols in ad hoc networks, which treat each mobile host as a router, and suggests a new approach based on separate route discovery and route maintenance protocols.
Abstract: An ad hoc network is a collection of wireless mobile hosts forming a temporary network without the aid of any centralized administration or standard support services. In such an environment, it may be necessary for one mobile host to enlist the aid of others in forwarding a packet to its destination due to the limited propagation range of each mobile host's wireless transmissions. Some previous attempts have been made to use conventional routing protocols for routing in ad hoc networks, treating each mobile host as a router. This position paper points out a number of problems with this design and suggests a new approach based on separate route discovery and route maintenance protocols.

Journal ArticleDOI
TL;DR: In this paper, the authors examine the geographic sources of innovation, focusing specifically on the relationship between product innovation and the underlying "technological infrastructure" of particular places, which is comprised of agglomerations of firms in related manufacturing industries, geographic concentrations of industrial R&D, concentrations of university R &D, and business-service firms.
Abstract: The fate of regions and of nations increasingly depends upon ideas and innovations to facilitate growth. In recent years, geographers have made fundamental contributions to our understanding of the innovation process by exploring the diffusion of innovation, the location of R&D, and the geography of high-technology industry. This paper examines the geographic sources of innovation, focusing specifically on the relationship between product innovation and the underlying “technological infrastructure” of particular places. This infrastructure is comprised of agglomerations of firms in related manufacturing industries, geographic concentrations of industrial R&D, concentrations of university R&D, and business-service firms. Once in place, these geographic concentrations of infrastructure enhance the capacity for innovation, as regions come to specialize in particular technologies and industrial sectors. Geography organizes this infrastructure by bringing together the crucial resources and inputs for ...


Book
14 Mar 1994
TL;DR: In this article, Dawes systematically argues that none of the above is true and explores the debilitating effect these beliefs have on us, and takes issue with many of the treatment methods commonly used in therapy practices.
Abstract: In this book, Robyn Dawes critically examines some of the most cherished clinical assumptions and therapeutic methods now in use. He points out that we have all come under the sway of a "pop psych" view of the world, believing, for example, that self-esteem is an essential precursor to being a productive human being, that events in one's childhood determine one's fate as an adult, and that you have to love yourself before you can love another. Drawing on empirical research, Dawes systematically argues that none of the above is true and explores the debilitating effect these beliefs have on us. In addition, he takes issue with many of the treatment methods commonly used in therapy practices.

Journal ArticleDOI
TL;DR: In this paper, a complete solution to the infinite-horizon, discounted problem of optimal consumption and investment in a market with one stock, one money market (sometimes called a "bond") and proportional transaction costs is provided.
Abstract: A complete solution is provided to the infinite-horizon, discounted problem of optimal consumption and investment in a market with one stock, one money market (sometimes called a "bond") and proportional transaction costs. The utility function may be of the form $c^p/p$, where $p < 0$ or $0 < p < 1$, or may be $\log c$. It is assumed that the interest rate for the money market is positive, the mean rate of return for the stock is larger than this interest rate, the stock volatility is positive and all these parameters are constant. The only other assumption is that the value function is finite; necessary conditions for this are given. In the Appendix (by S. Shreve), the sensitivity of the value function under the assumption $0 < p < 1$ is shown to be of the order of the transaction cost to the 2/3 power. This implies that the liquidity premium associated with small transaction costs is also of the order of the transaction cost to the 2/3 power. Because this power is less than 1, the marginal liquidity premium turns out to be infinite. The analysis of this paper and its Appendix relies on the concept of viscosity solutions to Hamilton-Jacobi-Bellman equations. A self-contained treatment of this subject, adequate for the present application, is provided.
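
In notation chosen here (the abstract fixes no symbols; let \lambda denote the proportional transaction cost and v the value function), the utility specifications and the sensitivity result read:

\[
U(c) = \frac{c^{p}}{p} \quad (p < 0 \ \text{or} \ 0 < p < 1), \qquad \text{or} \qquad U(c) = \log c,
\]
\[
v(0) - v(\lambda) = O\!\left(\lambda^{2/3}\right) \qquad (0 < p < 1),
\]

so the liquidity premium vanishes like \lambda^{2/3}, and because 2/3 < 1 its marginal value diverges as \lambda approaches 0 from above.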

Book
01 Aug 1994
TL;DR: This book covers classical conditioning, instrumental learning, memory, skill acquisition, and inductive learning, as well as applications to education, including reading and mathematics instruction.
Abstract:
Chapter 1. Perspectives on Learning and Memory: Learning and Adaptation; Behaviourist and Cognitive Approaches
Chapter 2. Classical Conditioning: Overview of Classical Conditioning; Neural Basis of Classical Conditioning; S-S or S-R Associations?
Chapter 3. Instrumental Learning: Overview of Instrumental Conditioning; What is Associated?; What is the Conditioned Stimulus?
Chapter 4. Reinforcement and Learning: Overview; Reward and Punishment; Aversive Control of Behaviour
Chapter 5. Transient Memories: Conditioning Research versus Memory Research; Sensory Memory
Chapter 6. Acquisition of Memories: Introduction; Practice and Trace Strength
Chapter 7. Retention of Memories: Overview; The Retention Function; Spacing Effects; Interference; Retention of Emotionally Charged Material
Chapter 8. Retrieval of Memories: Overview of the Chapter
Chapter 9. Skill Acquisition: Overview of Skill Acquisition; The Cognitive Stage
Chapter 10. Inductive Learning: Overview of Inductive Learning; Concept Acquisition; Causal Inference
Chapter 11. Applications to Education: The Goals of Education; Psychology and Education; Reading Instruction; Mathematics Instruction

Journal ArticleDOI
TL;DR: The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction, with greater discrepancies between media conditions for tasks requiring higher levels of coordination.
Abstract: The authors investigated the hypothesis that as group tasks pose greater requirements for member interdependence, communication media that transmit more social context cues will foster group performance and satisfaction. Seventy-two 3-person groups of undergraduate students worked in either computer-mediated or face-to-face meetings on 3 tasks with increasing levels of interdependence: an idea-generation task, an intellective task, and a judgment task. Results showed few differences between computer-mediated and face-to-face groups in the quality of the work completed but large differences in productivity favoring face-to-face groups. Analysis of productivity and of members' reactions supported the predicted interaction of tasks and media, with greater discrepancies between media conditions for tasks requiring higher levels of coordination. Results are discussed in terms of the implications of using computer-mediated communication systems for group work.

Journal ArticleDOI
TL;DR: In this paper, the authors show how a cognitive theory can guide the use of structural methods, according to balance theory, and show that there is a balance between individualism and structuralism.
Abstract: We challenge the claimed incommensurability of individualism and structuralism by showing how a cognitive theory can guide the use of structural methods. According to balance theory, there is a str...

Journal ArticleDOI
TL;DR: A framework for compositional verification of finite-state processes based on a subset of the logic CTL for which satisfaction is preserved under composition and a preorder on structures which captures the relation between a component and a system containing the component is described.
Abstract: We describe a framework for compositional verification of finite-state processes. The framework is based on two ideas: a subset of the logic CTL for which satisfaction is preserved under composition, and a preorder on structures which captures the relation between a component and a system containing the component. Satisfaction of a formula in the logic corresponds to being below a particular structure (a tableau for the formula) in the preorder. We show how to do assume-guarantee-style reasoning within this framework. Additionally, we demonstrate efficient methods for model checking in the logic and for checking the preorder in several special cases. We have implemented a system based on these methods, and we use it to give a compositional verification of a CPU controller.
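
The flavor of checking a preorder can be shown with the standard simulation preorder on small explicit state graphs; note this is textbook background, not the paper's preorder or its tableau construction, and all names below are invented.

def simulates(big, small, label_big, label_small):
    """Greatest-fixpoint computation of the simulation preorder:
    keep pairs (s, t) with matching labels such that every move of
    `small` from s can be matched by some move of `big` from t.
    `big` and `small` map each state to its successor set."""
    rel = {(s, t) for s in small for t in big
           if label_small[s] == label_big[t]}
    changed = True
    while changed:
        changed = False
        for (s, t) in list(rel):
            if any(all((s2, t2) not in rel for t2 in big[t])
                   for s2 in small[s]):
                rel.discard((s, t))
                changed = True
    return rel

# A two-state component is simulated by a richer system.
small = {"a": {"b"}, "b": {"a"}}
big = {"x": {"x", "y"}, "y": {"x"}}
lab_s = {"a": "req", "b": "ack"}
lab_b = {"x": "req", "y": "ack"}
print(("a", "x") in simulates(big, small, lab_b, lab_s))    # True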

Journal ArticleDOI
TL;DR: This article conducted exploratory studies and mental model interviews to characterize public understanding of climate change and found that respondents regarded global warming as both bad and highly likely, and they tended to confuse stratospheric ozone depletion with the greenhouse effect and weather with climate.
Abstract: A set of exploratory studies and mental model interviews was conducted in order to characterize public understanding of climate change. In general, respondents regarded global warming as both bad and highly likely. Many believed that warming has already occurred. They tended to confuse stratospheric ozone depletion with the greenhouse effect and weather with climate. Automobile use, heat and emissions from industrial processes, aerosol spray cans, and pollution in general were frequently perceived as primary causes of global warming. Additionally, the "greenhouse effect" was often interpreted literally as the cause of a hot and steamy climate. The effects attributed to climate change often included increased skin cancer and changed agricultural yields. The mitigation and control strategies proposed by interviewees typically focused on general pollution control, with few specific links to carbon dioxide and energy use. Respondents appeared to be relatively unfamiliar with such regulatory developments as the ban on CFCs for nonessential uses. These beliefs must be considered by those designing risk communications or presenting climate-related policies to the public.

Book
01 Jan 1994
TL;DR: This book introduces the principles and methods of experimental economics, covering experimental design, the use of human subjects, laboratory facilities, conducting an experiment, data analysis, and the reporting of results, and discusses the emergence of experimental economics as a field.
Abstract: List of figures and tables Preface Acknowledgments 1. Introduction 2. Principles of economics 3. Experimental design 4. Human subjects 5. Laboratory facilities 6. Conducting an experiment 7. Data analysis 8. Reporting your results 9. The emergence of experimental economics Appendices Glossary References Index.

Proceedings Article
01 Jan 1994
TL;DR: The combination of dynamic programming and function approximation is shown to be not robust and, in even very benign cases, may produce an entirely wrong policy; Grow-Support, a new algorithm, is introduced which is safe from divergence yet can still reap the benefits of successful generalization.
Abstract: A straightforward approach to the curse of dimensionality in reinforcement learning and dynamic programming is to replace the lookup table with a generalizing function approximator such as a neural net. Although this has been successful in the domain of backgammon, there is no guarantee of convergence. In this paper, we show that the combination of dynamic programming and function approximation is not robust, and in even very benign cases, may produce an entirely wrong policy. We then introduce Grow-Support, a new algorithm which is safe from divergence yet can still reap the benefits of successful generalization.
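
The failure mode is easy to reproduce. The sketch below is the standard two-state construction for this phenomenon, not an example taken from the paper: rewards are zero, so the true value function is identically zero, yet exact Bellman backups combined with a least-squares linear refit drive the weight to infinity.

import numpy as np

# Two states, zero reward, both transitioning to state 2, so the true
# value function is identically zero. A linear approximator with
# features phi = (1, 2) is refit by least squares after each exact
# Bellman backup; the weight grows by a factor of 6*gamma/5 per sweep.
phi = np.array([1.0, 2.0])
gamma = 0.9
w = 1.0
for sweep in range(10):
    targets = gamma * phi[1] * w * np.ones(2)   # both backups use V(s2)
    w = float(phi @ targets / (phi @ phi))      # least-squares refit
    print(sweep, round(w, 3))
# 1.08, 1.166, ... : divergence despite exact backups and a
# representable true value function (w = 0).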

Journal ArticleDOI
TL;DR: In the past, most innovations have resulted from empiricist procedures, the outcome of each trial yielding knowledge that could not readily be extended to other contexts; such trial and error has nonetheless been the primary engine of innovation.

Journal ArticleDOI
TL;DR: A Gibbs sampler algorithm is developed for nonparametric Bayesian estimation of a vector of normal means under a Dirichlet process prior, and the resulting estimator is compared to parametric empirical Bayes (PEB) and nonparametric empirical Bayes (NPEB) estimators in a Monte Carlo study.
Abstract: In this article, the Dirichlet process prior is used to provide a nonparametric Bayesian estimate of a vector of normal means. In the past there have been computational difficulties with this model. This article solves the computational difficulties by developing a “Gibbs sampler” algorithm. The estimator developed in this article is then compared to parametric empirical Bayes estimators (PEB) and nonparametric empirical Bayes estimators (NPEB) in a Monte Carlo study. The Monte Carlo study demonstrates that in some conditions the PEB is better than the NPEB and in other conditions the NPEB is better than the PEB. The Monte Carlo study also shows that the estimator developed in this article produces estimates that are about as good as the PEB when the PEB is better and produces estimates that are as good as the NPEB estimator when that method is better.
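
One simple variant of such a sampler can be sketched compactly. This is a generic CRP-style Gibbs sampler assuming a known observation variance and a conjugate normal base measure, with illustrative hyperparameters; it conveys the structure of the computation rather than the article's exact algorithm.

import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def dp_normal_means(y, sigma2=1.0, tau2=4.0, alpha=1.0, sweeps=200):
    """Gibbs sampling for y_i ~ N(theta_i, sigma2) with the theta_i
    drawn from DP(alpha, N(0, tau2)). Cluster means are kept
    explicitly and resampled from their conjugate posteriors."""
    n = len(y)
    z = np.zeros(n, dtype=int)            # cluster assignments
    means = [0.0]                         # current cluster means
    draws = np.zeros((sweeps, n))
    for s in range(sweeps):
        for i in range(n):
            z[i] = -1                     # remove point i
            for c in range(len(means) - 1, -1, -1):
                if not np.any(z == c):    # drop clusters left empty
                    means.pop(c)
                    z[z > c] -= 1
            # join cluster c with prob ~ n_c * N(y_i; mean_c, sigma2),
            # or a new one with prob ~ alpha * N(y_i; 0, sigma2 + tau2)
            counts = np.array([np.sum(z == c) for c in range(len(means))])
            probs = list(counts * norm_pdf(y[i], np.array(means), sigma2))
            probs.append(alpha * norm_pdf(y[i], 0.0, sigma2 + tau2))
            probs = np.array(probs) / np.sum(probs)
            c = int(rng.choice(len(probs), p=probs))
            if c == len(means):           # open a new cluster
                post_var = sigma2 * tau2 / (sigma2 + tau2)
                post_mean = tau2 * y[i] / (sigma2 + tau2)
                means.append(rng.normal(post_mean, np.sqrt(post_var)))
            z[i] = c
        for c in range(len(means)):       # resample cluster means
            ys = y[z == c]
            post_var = sigma2 * tau2 / (sigma2 + len(ys) * tau2)
            post_mean = tau2 * np.sum(ys) / (sigma2 + len(ys) * tau2)
            means[c] = rng.normal(post_mean, np.sqrt(post_var))
        draws[s] = np.array(means)[z]
    return draws[sweeps // 2:].mean(axis=0)    # shrinkage estimates

y = np.concatenate([rng.normal(-3, 1, 10), rng.normal(3, 1, 10)])
print(np.round(dp_normal_means(y), 2))         # estimates pulled toward -3/+3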

Proceedings ArticleDOI
24 Jul 1994
TL;DR: A new algorithm for computing contact forces between solid objects with friction, allowing a mix of contact points with static and dynamic friction, has proven to be considerably faster, simpler, and more reliable than previous approaches.
Abstract: A new algorithm for computing contact forces between solid objects with friction is presented. The algorithm allows a mix of contact points with static and dynamic friction. In contrast to previous approaches, the problem of computing contact forces is not transformed into an optimization problem. Because of this, the need for sophisticated optimization software packages is eliminated. For both systems with and without friction, the algorithm has proven to be considerably faster, simpler, and more reliable than previous approaches to the problem. In particular, implementation of the algorithm by nonspecialists in numerical programming is quite feasible.
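
For contrast, the conditions being solved can be shown with a common iterative alternative, projected Gauss-Seidel on the frictionless complementarity problem; the paper's contribution is a direct, pivoting-style solver that also handles friction, so treat this sketch only as a statement of what a valid set of contact forces must satisfy. All numbers are illustrative.

import numpy as np

def contact_forces(A, b, iters=200):
    """Projected Gauss-Seidel for the frictionless contact conditions
    a = A f + b,  f >= 0,  a >= 0,  f . a = 0,
    where A maps contact forces to relative normal accelerations.
    (An iterative alternative to the paper's direct method.)"""
    f = np.zeros(len(b))
    for _ in range(iters):
        for i in range(len(b)):
            residual = b[i] + A[i] @ f - A[i, i] * f[i]
            f[i] = max(0.0, -residual / A[i, i])
    return f

# Two coupled contacts resting under gravity (illustrative numbers).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
b = np.array([-9.8, -9.8])
f = contact_forces(A, b)
print(f, A @ f + b)     # forces nonnegative, accelerations ~ 0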

Journal ArticleDOI
TL;DR: In this paper, the temporal logic model checking algorithm of Clarke, Emerson, and Sistla is modified to represent state graphs using binary decision diagrams (BDD's) and partitioned transition relations.
Abstract: The temporal logic model checking algorithm of Clarke, Emerson, and Sistla (1986) is modified to represent state graphs using binary decision diagrams (BDD's) and partitioned transition relations. Because this representation captures some of the regularity in the state space of circuits with data path logic, we are able to verify circuits with an extremely large number of states. We demonstrate this new technique on a synchronous pipelined design with approximately 5/spl times/10/sup 120/ states. Our model checking algorithm handles full CTL with fairness constraints. Consequently, we are able to express a number of important liveness and fairness properties, which would otherwise not be expressible in CTL. We give empirical results on the performance of the algorithm applied to both synchronous and asynchronous circuits with data path logic. >