
Showing papers by "Carnegie Mellon University" published in 2004


Journal ArticleDOI
TL;DR: The author argues that all 3 variables that assess different aspects of social relationships are associated with health outcomes, that these variables each influence health through different mechanisms, and that associations between these variables and health are not spurious findings attributable to personality.
Abstract: The author discusses 3 variables that assess different aspects of social relationships—social support, social integration, and negative interaction. The author argues that all 3 are associated with health outcomes, that these variables each influence health through different mechanisms, and that associations between these variables and health are not spurious findings attributable to our personalities. This argument suggests a broader view of how to intervene in social networks to improve health. This includes facilitating both social integration and social support by creating and nurturing both close (strong) and peripheral (weak) ties within natural social networks and reducing opportunities for negative social interaction. Finally, the author emphasizes the necessity to understand more about who benefits most and least from social-connectedness interventions.

3,981 citations


Journal ArticleDOI
TL;DR: In this paper, the authors measured cosmological parameters using the three-dimensional power spectrum P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with WMAP and other data.
Abstract: We measure cosmological parameters using the three-dimensional power spectrum P(k) from over 200,000 galaxies in the Sloan Digital Sky Survey (SDSS) in combination with WMAP and other data. Our results are consistent with a "vanilla" flat adiabatic ΛCDM model without tilt (ns = 1), running tilt, tensor modes or massive neutrinos. Adding SDSS information more than halves the WMAP-only error bars on some parameters, tightening 1σ constraints on the Hubble parameter from h ≈ 0.74 +0.18 −0.07 to h ≈ 0.70 +0.04 −0.03, on the matter density from Ωm ≈ 0.25 ± 0.10 to Ωm ≈ 0.30 ± 0.04 (1σ) and on neutrino masses from < 11 eV to < 0.6 eV (95%). SDSS helps even more when dropping prior assumptions about curvature, neutrinos, tensor modes and the equation of state. Our results are in substantial agreement with the joint analysis of WMAP and the 2dF Galaxy Redshift Survey, which is an impressive consistency check with independent redshift survey data and analysis techniques. In this paper, we place particular emphasis on clarifying the physical origin of the constraints, i.e., what we do and do not know when using different data sets and prior assumptions. For instance, dropping the assumption that space is perfectly flat, the WMAP-only constraint on the measured age of the Universe tightens from t0 ≈ 16.3 +2.3

3,938 citations


Proceedings ArticleDOI
27 Jun 2004
TL;DR: This paper examines (and improves upon) the local image descriptor used by SIFT, and demonstrates that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation.
Abstract: Stable local feature detection and representation is a fundamental component of many image registration and object recognition algorithms. Mikolajczyk and Schmid (June 2003) recently evaluated a variety of approaches and identified the SIFT [D. G. Lowe, 1999] algorithm as being the most resistant to common image deformations. This paper examines (and improves upon) the local image descriptor used by SIFT. Like SIFT, our descriptors encode the salient aspects of the image gradient in the feature point's neighborhood; however, instead of using SIFT's smoothed weighted histograms, we apply principal components analysis (PCA) to the normalized gradient patch. Our experiments demonstrate that the PCA-based local descriptors are more distinctive, more robust to image deformations, and more compact than the standard SIFT representation. We also present results showing that using these descriptors in an image retrieval application results in increased accuracy and faster matching.
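
The descriptor construction described above can be sketched in a few lines. The following is a minimal illustration, assuming a projection basis learned offline from training gradient patches; the patch handling, normalization details, and the 36-dimension choice are illustrative assumptions of this sketch, not the paper's exact pipeline:

```python
import numpy as np

def gradient_vector(patch):
    """Flatten and normalize the x/y gradients of an image patch."""
    gy, gx = np.gradient(patch.astype(float))
    vec = np.concatenate([gx.ravel(), gy.ravel()])
    return vec / (np.linalg.norm(vec) + 1e-12)   # rough illumination normalization

def learn_pca_basis(training_patches, n_components=36):
    """Learn the PCA projection offline from patches around training keypoints."""
    data = np.stack([gradient_vector(p) for p in training_patches])
    mean = data.mean(axis=0)
    _, _, vt = np.linalg.svd(data - mean, full_matrices=False)
    return mean, vt[:n_components]                # top principal directions

def pca_descriptor(patch, mean, basis):
    """Compact descriptor: project one patch's gradient vector onto the basis."""
    return basis @ (gradient_vector(patch) - mean)
```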

3,325 citations


Book
01 Jan 2004
TL;DR: In this article, the authors describe cities and regions as cauldrons of diversity that have long captured the imagination of sociologists, economists, and urbanists.
Abstract: Cities and regions have long captured the imagination of sociologists, economists, and urbanists. From Alfred Marshall to Robert Park and Jane Jacobs, cities have been seen as cauldrons of diversity...

3,270 citations


Journal ArticleDOI
TL;DR: In this paper, the authors survey the wide variety of extensions that have been made to the original formulation of the Lucas-Kanade algorithm and examine which of them can be used with the inverse compositional algorithm without any significant loss of efficiency.
Abstract: Since the Lucas-Kanade algorithm was proposed in 1981, image alignment has become one of the most widely used techniques in computer vision. Applications range from optical flow and tracking to layered motion, mosaic construction, and face coding. Numerous algorithms have been proposed and a wide variety of extensions have been made to the original formulation. We present an overview of image alignment, describing most of the algorithms and their extensions in a consistent framework. We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed. We examine which of the extensions to Lucas-Kanade can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot. In this paper, Part 1 in a series of papers, we cover the quantity approximated, the warp update rule, and the gradient descent approximation. In future papers, we will cover the choice of the error function, how to allow linear appearance variation, and how to impose priors on the parameters.
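
As a concrete illustration of the inverse compositional idea discussed above, here is a minimal sketch for the simplest case, a pure-translation warp. The structure (precomputed template gradients and Hessian, iterated warp-and-update) follows the general form of the algorithm, but the function names and SciPy-based warping are assumptions of this sketch, not the authors' code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def align_translation_ic(template, image, p=(0.0, 0.0), n_iters=50, tol=1e-4):
    """Inverse compositional Lucas-Kanade for a pure-translation warp.

    Estimates p = (px, py) such that image(x + px, y + py) ~= template(x, y).
    The steepest-descent images and Hessian depend only on the template, so
    they are precomputed once, outside the iteration loop.
    """
    template = np.asarray(template, dtype=float)
    image = np.asarray(image, dtype=float)
    p = np.asarray(p, dtype=float)
    gy, gx = np.gradient(template)                    # d/drow, d/dcol
    sd = np.stack([gx.ravel(), gy.ravel()], axis=1)   # steepest-descent images
    hessian = sd.T @ sd                               # 2 x 2 Gauss-Newton Hessian
    rows, cols = np.indices(template.shape)
    for _ in range(n_iters):
        # Warp the input image into the template frame: I(W(x; p)).
        warped = map_coordinates(image, [rows + p[1], cols + p[0]],
                                 order=1, mode='nearest')
        error = (warped - template).ravel()
        dp = np.linalg.solve(hessian, sd.T @ error)
        p -= dp                                       # inverse compositional update
        if np.linalg.norm(dp) < tol:
            break
    return p
```

Aligning a patch cut from a shifted copy of an image against the original should recover the shift after a few iterations, provided the initial estimate lies within the basin of convergence.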

3,168 citations


Journal ArticleDOI
TL;DR: The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT-R, which consists of multiple modules that are integrated to produce coherent cognition.
Abstract: Adaptive control of thought–rational (ACT–R; J. R. Anderson & C. Lebiere, 1998) has evolved into a theory that consists of multiple modules but also explains how these modules are integrated to produce coherent cognition. The perceptual-motor modules, the goal module, and the declarative memory module are presented as examples of specialized systems in ACT–R. These modules are associated with distinct cortical regions. These modules place chunks in buffers where they can be detected by a production system that responds to patterns of information in the buffers. At any point in time, a single production rule is selected to respond to the current pattern. Subsymbolic processes serve to guide the selection of rules to fire as well as the internal operations of some modules. Much of learning involves tuning of these subsymbolic processes. A number of simple and complex empirical examples are described to illustrate how these modules function singly and in concert.
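
A toy sketch of the control structure described above: modules post chunks into buffers, and a production system selects a single rule that matches the buffer contents, preferring the rule with the highest (subsymbolically learned) utility. The buffer names, rule format, and utility values below are invented for illustration and are not ACT-R's actual implementation:

```python
# Toy illustration of a production system reading patterns from buffers.
# Buffer contents, rules, and utilities are made up for the example.
buffers = {
    "goal":      {"task": "add", "state": "start"},
    "retrieval": {"fact": None},
}

productions = [
    {   # condition: a pattern that must match the current buffer contents
        "name": "request-fact",
        "condition": {"goal": {"task": "add", "state": "start"}},
        "utility": 2.0,
        "action": lambda b: b["goal"].update(state="waiting"),
    },
    {
        "name": "give-up",
        "condition": {"goal": {"task": "add"}},
        "utility": 0.5,
        "action": lambda b: b["goal"].update(state="done"),
    },
]

def matches(condition, buffers):
    return all(buffers[name].get(k) == v
               for name, pattern in condition.items()
               for k, v in pattern.items())

# At any point in time a single rule fires: the matching production
# with the highest utility.
candidates = [p for p in productions if matches(p["condition"], buffers)]
best = max(candidates, key=lambda p: p["utility"])
best["action"](buffers)
print(best["name"], buffers["goal"])
```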

2,732 citations


Journal ArticleDOI
15 Oct 2004-Science
TL;DR: The authors examined the neural correlates of time discounting while subjects made a series of choices between monetary reward options that varied by delay to delivery and demonstrated that two separate systems are involved in such decisions.
Abstract: When humans are offered the choice between rewards available at different points in time, the relative values of the options are discounted according to their expected delays until delivery. Using functional magnetic resonance imaging, we examined the neural correlates of time discounting while subjects made a series of choices between monetary reward options that varied by delay to delivery. We demonstrate that two separate systems are involved in such decisions. Parts of the limbic system associated with the midbrain dopamine system, including paralimbic cortex, are preferentially activated by decisions involving immediately available rewards. In contrast, regions of the lateral prefrontal cortex and posterior parietal cortex are engaged uniformly by intertemporal choices irrespective of delay. Furthermore, the relative engagement of the two systems is directly associated with subjects' choices, with greater relative fronto-parietal activity when subjects choose longer term options.
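
The two-system account above is often linked to quasi-hyperbolic ("beta-delta") discounting, in which any delay incurs an extra penalty beyond ordinary exponential discounting. The sketch below illustrates that general idea with arbitrary parameter values, not figures from the study:

```python
def quasi_hyperbolic_value(reward, delay_periods, beta=0.7, delta=0.95):
    """Present value under beta-delta discounting.

    beta < 1 applies an extra penalty to *any* delay (the impatient,
    limbic-like component); delta handles ordinary exponential discounting.
    Parameter values here are arbitrary illustrations.
    """
    if delay_periods == 0:
        return reward
    return beta * (delta ** delay_periods) * reward

# An immediate $20 can beat a delayed $30 even though $30 > $20:
print(quasi_hyperbolic_value(20, 0))   # 20.0
print(quasi_hyperbolic_value(30, 4))   # ~17.1
```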

2,581 citations


Proceedings ArticleDOI
25 Apr 2004
TL;DR: A new interactive system is introduced: a game that is fun and can be used to create valuable output; it addresses the image-labeling problem by encouraging people to do the work through their desire to be entertained.
Abstract: We introduce a new interactive system: a game that is fun and can be used to create valuable output. When people play the game they help determine the contents of images by providing meaningful labels for them. If the game is played as much as popular online games, we estimate that most images on the Web can be labeled in a few months. Having proper labels associated with each image on the Web would allow for more accurate image search, improve the accessibility of sites (by providing descriptions of images to visually impaired individuals), and help users block inappropriate images. Our system makes a significant contribution because of its valuable output and because of the way it addresses the image-labeling problem. Rather than using computer vision techniques, which don't work well enough, we encourage people to do the work by taking advantage of their desire to be entertained.

2,365 citations


Journal ArticleDOI
TL;DR: This work proposes an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm and shows that the effects of appearance variation during fitting can be precomputed (“projected out”) using this algorithm and how it can be extended to include a global shape normalising warp.
Abstract: Active Appearance Models (AAMs) and the closely related concepts of Morphable Models and Active Blobs are generative models of a certain visual phenomenon. Although linear in both shape and appearance, overall, AAMs are nonlinear parametric models in terms of the pixel intensities. Fitting an AAM to an image consists of minimising the error between the input image and the closest model instance, i.e. solving a nonlinear optimisation problem. We propose an efficient fitting algorithm for AAMs based on the inverse compositional image alignment algorithm. We show that the effects of appearance variation during fitting can be precomputed (“projected out”) using this algorithm and how it can be extended to include a global shape normalising warp, typically a 2D similarity transformation. We evaluate our algorithm to determine which of its novel aspects improve AAM fitting performance.

1,775 citations


Journal ArticleDOI
TL;DR: In this paper, the authors employed a matrix-based method using pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions.
Abstract: We measure the large-scale real-space power spectrum P(k) by using a sample of 205,443 galaxies from the Sloan Digital Sky Survey, covering 2417 effective square degrees with mean redshift z ≈ 0.1. We employ a matrix-based method using pseudo-Karhunen-Loeve eigenmodes, producing uncorrelated minimum-variance measurements in 22 k-bands of both the clustering power and its anisotropy due to redshift-space distortions, with narrow and well-behaved window functions in the range 0.02 h Mpc⁻¹ < k < 0.3 h Mpc⁻¹. We pay particular attention to modeling, quantifying, and correcting for potential systematic errors, nonlinear redshift distortions, and the artificial red-tilt caused by luminosity-dependent bias. Our results are robust to omitting angular and radial density fluctuations and are consistent between different parts of the sky. Our final result is a measurement of the real-space matter power spectrum P(k) up to an unknown overall multiplicative bias factor. Our calculations suggest that this bias factor is independent of scale to better than a few percent for k < 0.1 h Mpc⁻¹, thereby making our results useful for precision measurements of cosmological parameters in conjunction with data from other experiments such as the Wilkinson Microwave Anisotropy Probe satellite. The power spectrum is not well-characterized by a single power law but unambiguously shows curvature. As a simple characterization of the data, our measurements are well fitted by a flat scale-invariant adiabatic cosmological model with h Ωm = 0.213 ± 0.023 and σ8 = 0.89 ± 0.02 for L* galaxies, when fixing the baryon fraction Ωb/Ωm = 0.17 and the Hubble parameter h = 0.72; cosmological interpretation is given in a companion paper.

1,734 citations


Journal ArticleDOI
TL;DR: In this article, the bimodality of the distribution from luminous to faint galaxies is traced by fitting double Gaussians to the color functions separated in absolute magnitude bins.
Abstract: We analyze the bivariate distribution, in color versus absolute magnitude (u-r vs. Mr), of a low-redshift sample of galaxies from the Sloan Digital Sky Survey (2400 deg², 0.004 < z < 0.08, -23.5 < Mr < -15.5). We trace the bimodality of the distribution from luminous to faint galaxies by fitting double Gaussians to the color functions separated in absolute magnitude bins. Color-magnitude (CM) relations are obtained for red and blue distributions (early- and late-type, predominantly field, galaxies) without using any cut in morphology. Instead, the analysis is based on the assumption of normal Gaussian distributions in color. We find that the CM relations are well fitted by a straight line plus a tanh function. Both relations can be described by a shallow CM trend (slopes of about -0.04, -0.05) plus a steeper transition in the average galaxy properties over about 2 mag. The midpoints of the transitions (Mr = -19.8 and -20.8 for the red and blue distributions, respectively) occur around 2 × 10¹⁰ M☉ after converting luminosities to stellar mass. Separate luminosity functions are obtained for the two distributions. The red distribution has a more luminous characteristic magnitude and a shallower faint-end slope (M* = -21.5, α = -0.8) compared to the blue distribution (α ≈ -1.3, depending on the parameterization). These are approximately converted to galaxy stellar mass functions. The red distribution galaxies have a higher number density per magnitude for masses greater than about 3 × 10¹⁰ M☉. Using a simple merger model, we show that the differences between the two functions are consistent with the red distribution being formed from major galaxy mergers.
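
A minimal sketch of the double-Gaussian fit to the color function in one magnitude bin, as described above, using synthetic data and SciPy's curve_fit; the bin edges, component parameters, and initial guesses are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    """Sum of a 'red' and a 'blue' Gaussian component of the color function."""
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2) +
            a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

# Synthetic u-r color histogram for one absolute-magnitude bin (illustrative).
rng = np.random.default_rng(0)
colors = np.concatenate([rng.normal(2.5, 0.2, 4000),    # red sequence
                         rng.normal(1.5, 0.3, 6000)])   # blue cloud
counts, edges = np.histogram(colors, bins=60, range=(0.5, 3.5))
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [counts.max(), 2.5, 0.2, counts.max(), 1.5, 0.3]   # initial guess
params, _ = curve_fit(double_gaussian, centers, counts, p0=p0)
print("red mean u-r = %.2f, blue mean u-r = %.2f" % (params[1], params[4]))
```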

Journal ArticleDOI
01 Aug 2004-Brain
TL;DR: The findings suggest that the neural basis of disordered language in autism entails a lower degree of information integration and synchronization across the large-scale cortical network for language processing.
Abstract: Summary The brain activation of a group of high-functioning autistic participants was measured using functional MRI during sentence comprehension and the results compared with those of a Verbal IQ-matched control group. The groups differed in the distribution of activation in two of the key language areas. The autism group produced reliably more activation than the control group in Wernicke’s (left laterosuperior temporal) area and reliably less activation than the control group in Broca’s (left inferior frontal gyrus) area. Furthermore, the functional connectivity, i.e. the degree of synchronization or correlation of the time series of the activation, between the various participating cortical areas was consistently lower for the autistic than the control participants. These findings suggest that the neural basis of disordered language in autism entails a lower degree of information integration and synchronization across the large-scale cortical network for language processing. The article presents a theoretical account of the findings, related to neurobiological foundations of underconnectivity in autism.
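
Functional connectivity as defined above is the correlation between the activation time series of participating cortical areas; a minimal sketch using made-up time series (the region names and synthetic data are illustrative only):

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two regions' fMRI time series."""
    return np.corrcoef(ts_a, ts_b)[0, 1]

# Illustrative synthetic time series for two language regions.
rng = np.random.default_rng(1)
broca = rng.standard_normal(200)
wernicke = 0.6 * broca + 0.8 * rng.standard_normal(200)  # partially synchronized
print("connectivity =", round(functional_connectivity(broca, wernicke), 2))
```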

Journal ArticleDOI
TL;DR: Wireless sensor networks are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, while prompting a range of fundamental research challenges.
Abstract: Wireless sensor networks are susceptible to a variety of attacks, including node capture, physical tampering, and denial of service, while prompting a range of fundamental research challenges.

Book ChapterDOI
29 Mar 2004
TL;DR: The tool supports almost all ANSI-C language features, including pointer constructs, dynamic memory allocation, recursion, and the float and double data types, and is integrated into a graphical user interface.
Abstract: We present a tool for the formal verification of ANSI-C programs using Bounded Model Checking (BMC). The emphasis is on usability: the tool supports almost all ANSI-C language features, including pointer constructs, dynamic memory allocation, recursion, and the float and double data types. From the perspective of the user, the verification is highly automated: the only input required is the BMC bound. The tool is integrated into a graphical user interface. This is essential for presenting long counterexample traces: the tool allows stepping through the trace in the same way a debugger allows stepping through a program.

Proceedings ArticleDOI
26 Apr 2004
TL;DR: It is demonstrated that the Sybil attack can be exceedingly detrimental to many important functions of the sensor network such as routing, resource allocation, misbehavior detection, etc.
Abstract: Security is important for many sensor network applications. A particularly harmful attack against sensor and ad hoc networks is known as the Sybil attack based on J.R. Douceur (2002), where a node illegitimately claims multiple identities. This paper systematically analyzes the threat posed by the Sybil attack to wireless sensor networks. We demonstrate that the attack can be exceedingly detrimental to many important functions of the sensor network such as routing, resource allocation, misbehavior detection, etc. We establish a classification of different types of the Sybil attack, which enables us to better understand the threats posed by each type, and better design countermeasures against each type. We then propose several novel techniques to defend against the Sybil attack, and analyze their effectiveness quantitatively.

Journal ArticleDOI
TL;DR: Evaluation on five different databases and four types of queries indicates that the two-stage smoothing method with the proposed parameter estimation methods consistently gives retrieval performance that is close to or better than the best results achieved using a single smoothing method and exhaustive parameter search on the test data.
Abstract: Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and to then rank documents by the likelihood of the query according to the estimated language model. A central issue in language model estimation is smoothing, the problem of adjusting the maximum likelihood estimator to compensate for data sparseness. In this article, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collections. Experimental results show that not only is the retrieval performance generally sensitive to the smoothing parameters, but also the sensitivity pattern is affected by the query type, with performance being more sensitive to smoothing for verbose queries than for keyword queries. Verbose queries also generally require more aggressive smoothing to achieve optimal performance. This suggests that smoothing plays two different roles: to make the estimated document language model more accurate and to "explain" the noninformative words in the query. In order to decouple these two distinct roles of smoothing, we propose a two-stage smoothing strategy, which yields better sensitivity patterns and facilitates the setting of smoothing parameters automatically. We further propose methods for estimating the smoothing parameters automatically. Evaluation on five different databases and four types of queries indicates that the two-stage smoothing method with the proposed parameter estimation methods consistently gives retrieval performance that is close to, or better than, the best results achieved using a single smoothing method and exhaustive parameter search on the test data.
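
A minimal sketch of query-likelihood scoring with the two-stage smoothing idea described above: a Dirichlet prior (mu) smooths the document model, and a Jelinek-Mercer interpolation (lambda) absorbs noninformative query words. The parameter values, fallback pseudo-count, and function names are illustrative assumptions; the paper's contribution is estimating mu and lambda automatically rather than by search:

```python
import math
from collections import Counter

def two_stage_score(query_terms, doc_terms, collection_counts, collection_size,
                    mu=2000.0, lam=0.3):
    """Log query likelihood under a two-stage smoothed document language model."""
    doc_counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for w in query_terms:
        # Background (collection) model, with a small pseudo-count for unseen words.
        p_coll = collection_counts.get(w, 0.5) / collection_size
        # Stage 1: Dirichlet-smoothed document model.
        p_dir = (doc_counts[w] + mu * p_coll) / (doc_len + mu)
        # Stage 2: interpolate with the background to explain query noise.
        p = (1 - lam) * p_dir + lam * p_coll
        score += math.log(p)
    return score

# Illustrative usage: rank two tiny "documents" for a query.
coll = Counter("the cat sat on the mat the dog ran".split())
docs = {"d1": "the cat sat on the mat".split(),
        "d2": "the dog ran far away".split()}
for name, d in docs.items():
    print(name, round(two_stage_score(["cat", "mat"], d, coll, sum(coll.values())), 3))
```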

Proceedings Article
01 Jan 2004
TL;DR: A simple, parsimonious model, the “recursive matrix” (R-MAT) model, which can quickly generate realistic graphs, capturing the essence of each graph in only a few parameters is proposed.
Abstract: What does a 'normal' computer (or social) network look like? How can we spot 'abnormal' sub-networks in the Internet, or web graph? The answer to such questions is vital for outlier detection (terrorist networks, or illegal money-laundering rings), forecasting, and simulations (“how will a computer virus spread?”). The heart of the problem is finding the properties of real graphs that seem to persist over multiple disciplines. We list such “laws” and, more importantly, we propose a simple, parsimonious model, the “recursive matrix” (R-MAT) model, which can quickly generate realistic graphs, capturing the essence of each graph in only a few parameters. Contrary to existing generators, our model can trivially generate weighted, directed and bipartite graphs; it subsumes the celebrated Erdős-Rényi model as a special case; it can match the power law behaviors, as well as the deviations from them (like the “winner does not take it all” model of Pennock et al. [20]). We present results on multiple, large real graphs, where we show that our parameter fitting algorithm (AutoMAT-fast) fits them very well.
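
A minimal sketch of the recursive quadrant-picking step that defines R-MAT, as described above; the probability values and function names are illustrative, not the paper's fitted parameters:

```python
import random

def rmat_edge(n_levels, a=0.45, b=0.15, c=0.15, d=0.25):
    """Generate one edge of a 2**n_levels x 2**n_levels adjacency matrix.

    At each level the matrix is split into four quadrants chosen with
    probabilities a, b, c, d, then the chosen quadrant is recursed into.
    """
    assert abs(a + b + c + d - 1.0) < 1e-9
    row = col = 0
    for level in range(n_levels):
        r = random.random()
        half = 2 ** (n_levels - level - 1)
        if r < a:                 # top-left quadrant
            pass
        elif r < a + b:           # top-right
            col += half
        elif r < a + b + c:       # bottom-left
            row += half
        else:                     # bottom-right
            row += half
            col += half
    return row, col

# Illustrative usage: a small directed graph with a skewed degree distribution.
edges = {rmat_edge(n_levels=10) for _ in range(20000)}
print(len(edges), "distinct edges among", 2 ** 10, "nodes")
```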

Book ChapterDOI
20 Sep 2004
TL;DR: In this paper, the Enron corpus is introduced as a new test bed for email folder prediction, and baseline results of a state-of-the-art classifier (Support Vector Machines) are provided under various conditions.
Abstract: Automated classification of email messages into user-specific folders and information extraction from chronologically ordered email streams have become interesting areas in text learning research. However, the lack of large benchmark collections has been an obstacle for studying the problems and evaluating the solutions. In this paper, we introduce the Enron corpus as a new test bed. We analyze its suitability with respect to email folder prediction, and provide the baseline results of a state-of-the-art classifier (Support Vector Machines) under various conditions, including the cases of using individual sections (From, To, Subject and body) alone as the input to the classifier, and using all the sections in combination with regression weights.
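
A minimal sketch of the kind of baseline described above: an SVM classifier over message text, here using scikit-learn as an assumed stand-in. The example messages, folder labels, and feature weighting are illustrative and differ from the paper's exact setup and splits:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Illustrative messages and user-specific folder labels (not from the corpus).
messages = [
    "Subject: meeting tomorrow please confirm the agenda",
    "Subject: gas contract pricing update for Q3",
    "Subject: lunch on friday with the new analysts",
    "Subject: revised pricing model for the west desk",
]
folders = ["calendar", "deals", "calendar", "deals"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(messages, folders)
print(clf.predict(["Subject: updated contract pricing attached"]))
```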

Journal ArticleDOI
TL;DR: Cognitive maturation was characterized by a steep initial improvement in performance followed by stabilization in adolescence; adult-level mature performance began at approximately 15, 14, and 19 years of age for processing speed, response inhibition, and working memory, respectively.
Abstract: To characterize cognitive maturation through adolescence, processing speed, voluntary response suppression, and spatial working memory were measured in 8- to 30-year-old (N = 245) healthy participants using oculomotor tasks. Development progressed with a steep initial improvement in performance followed by stabilization in adolescence. Adult-level mature performance began at approximately 15, 14, and 19 years of age for processing speed, response inhibition, and working memory, respectively. Although processes developed independently, processing speed influenced the development of working memory whereas the development of response suppression and working memory were interdependent. These results indicate that processing speed, voluntary response suppression, and working memory mature through late childhood and into adolescence. How brain maturation specific to adolescence may support cognitive maturation is discussed.

Journal ArticleDOI
TL;DR: Long-term experiments demonstrated that these quantum dots remain fluorescent after at least four months in vivo, using only quantum dots for detection.

Posted Content
TL;DR: A review of recent developments in neuroeconomics and their implications for economics can be found in this article, where a two-dimensional dichotomization of neural processes between automatic and controlled processes and cognitive and affective processes is proposed.
Abstract: We review recent developments in neuroeconomics and their implications for economics. The paper consists of six sections. Following the Introduction, the second section enumerates the different research methods that neuroscientists use and evaluates their strengths and limitations for analyzing economic phenomena. The third section provides a review of basic findings in neuroscience that we deemed especially relevant to economics, and proposes a two-dimensional dichotomization of neural processes between automatic and controlled processes on the one hand, and cognitive and affective processes on the other. Section 4 reviews general implications of neuroscience for economics. Research in neuroscience, for example, raises questions about the usefulness of many economic constructs, such as 'time preference' and 'risk preference'. It also suggests that humans are likely to possess domain-specific intelligence - to be brilliant when it comes to problems that the brain is well evolved for performing and flat-footed for problems that lie outside of the brain's existing specialized functions. Section 5 provides more detailed discussions of four specific applications: intertemporal choice, decision making under risk and uncertainty, game theory, and labor-market discrimination. Section 6 concludes by proposing a distinction between two general approaches in applying neuroscience to economics, which we term 'incremental' and 'radical'. The former draws on neuroscience findings to refine existing economic models, while the latter poses more basic challenges to the standard economic understanding of human behavior.

Proceedings ArticleDOI
30 Aug 2004
TL;DR: The causes of packet loss in a 38-node urban multi-hop 802.11b network are analyzed to gain an understanding of their relative importance, of how they interact, and of the implications for MAC and routing protocol design.
Abstract: This paper analyzes the causes of packet loss in a 38-node urban multi-hop 802.11b network. The patterns and causes of loss are important in the design of routing and error-correction protocols, as well as in network planning. The paper makes the following observations. The distribution of inter-node loss rates is relatively uniform over the whole range of loss rates; there is no clear threshold separating "in range" and "out of range." Most links have relatively stable loss rates from one second to the next, though a small minority have very bursty losses at that time scale. Signal-to-noise ratio and distance have little predictive value for loss rate. The large number of links with intermediate loss rates is probably due to multi-path fading rather than attenuation or interference. The phenomena discussed here are all well-known. The contributions of this paper are an understanding of their relative importance, of how they interact, and of the implications for MAC and routing protocol design.

Journal ArticleDOI
TL;DR: The Sloan Digital Sky Survey (SDSS) Second Data Release consists of 3324 deg² of five-band imaging data with photometry for over 88 million unique objects, together with 367,360 spectra of galaxies, quasars, stars, and calibrating blank sky patches selected over 2627 deg² of this area.
Abstract: The Sloan Digital Sky Survey (SDSS) has validated and made publicly available its Second Data Release. This data release consists of 3324 deg² of five-band (ugriz) imaging data with photometry for over 88 million unique objects, 367,360 spectra of galaxies, quasars, stars, and calibrating blank sky patches selected over 2627 deg² of this area, and tables of measured parameters from these data. The imaging data reach a depth of r ≈ 22.2 (95% completeness limit for point sources) and are photometrically and astrometrically calibrated to 2% rms and 100 mas rms per coordinate, respectively. The imaging data have all been processed through a new version of the SDSS imaging pipeline, in which the most important improvement since the last data release is fixing an error in the model fits to each object. The result is that model magnitudes are now a good proxy for point-spread function magnitudes for point sources, and Petrosian magnitudes for extended sources. The spectroscopy extends from 3800 to 9200 Å at a resolution of 2000. The spectroscopic software now repairs a systematic error in the radial velocities of certain types of stars and has substantially improved spectrophotometry. All data included in the SDSS Early Data Release and First Data Release are reprocessed with the improved pipelines and included in the Second Data Release. Further characteristics of the data are described, as are the data products themselves and the tools for accessing them.

Proceedings ArticleDOI
17 May 2004
TL;DR: The Rainbow framework uses software architectural models to dynamically monitor and adapt a running system, and the separation of a generic adaptation infrastructure from system-specific adaptation knowledge is shown to make reuse of adaptation strategies and infrastructure across different systems possible.
Abstract: Software-based systems today are increasingly expected to dynamically self-adapt to accommodate resource variability, changing user needs, and system faults. Recent work uses closed-loop control based on external models to monitor and adapt system behavior at run time. Taking this externalized approach, the Rainbow framework we have developed uses software architectural models to dynamically monitor and adapt a running system. A key goal and primary challenge of this framework is to support the reuse of adaptation strategies and infrastructure across different systems. We show that the separation of a generic adaptation infrastructure from system-specific adaptation knowledge makes this reuse possible.
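
A toy sketch of the externalized monitor-and-adapt loop described above; the model properties, constraint, and adaptation strategy here are invented placeholders for illustration, not Rainbow's actual APIs:

```python
import random
import time

# Architectural model: system-specific properties and an invariant over them.
model = {"avg_latency_ms": 120.0, "server_pool": 2}
LATENCY_LIMIT_MS = 500.0

def probe_latency():
    """Stand-in for a probe that measures the running system."""
    return random.uniform(100, 900)

def enlist_server(model):
    """System-specific adaptation strategy: grow the server pool."""
    model["server_pool"] += 1
    print("adapt: enlisted server, pool =", model["server_pool"])

# Generic adaptation infrastructure: monitor, evaluate the constraint, adapt.
for _ in range(5):
    model["avg_latency_ms"] = probe_latency()
    if model["avg_latency_ms"] > LATENCY_LIMIT_MS:
        enlist_server(model)
    time.sleep(0.1)
```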

Journal ArticleDOI
25 Jun 2004
TL;DR: This formulation is motivated from a document clustering problem in which one has a pairwise similarity function f learned from past data, and the goal is to partition the current set of documents in a way that correlates with f as much as possible; it can also be viewed as a kind of “agnostic learning” problem.
Abstract: We consider the following clustering problem: we have a complete graph on n vertices (items), where each edge (u, v) is labeled either + or − depending on whether u and v have been deemed to be similar or different. The goal is to produce a partition of the vertices (a clustering) that agrees as much as possible with the edge labels. That is, we want a clustering that maximizes the number of + edges within clusters, plus the number of − edges between clusters (equivalently, minimizes the number of disagreements: the number of − edges inside clusters plus the number of + edges between clusters). This formulation is motivated from a document clustering problem in which one has a pairwise similarity function f learned from past data, and the goal is to partition the current set of documents in a way that correlates with f as much as possible; it can also be viewed as a kind of “agnostic learning” problem. An interesting feature of this clustering formulation is that one does not need to specify the number of clusters k as a separate parameter, as in measures such as k-median or min-sum or min-max clustering. Instead, in our formulation, the optimal number of clusters could be any value between 1 and n, depending on the edge labels. We look at approximation algorithms for both minimizing disagreements and for maximizing agreements. For minimizing disagreements, we give a constant factor approximation. For maximizing agreements we give a PTAS, building on ideas of Goldreich, Goldwasser, and Ron (1998) and de la Vega (1996). We also show how to extend some of these results to graphs with edge labels in [−1, +1], and give some results for the case of random noise.
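
A minimal sketch of the objective described above: counting disagreements of a candidate clustering against the +/− edge labels. The helper below only illustrates the objective itself, not the paper's approximation algorithms:

```python
from itertools import combinations

def disagreements(labels, clustering):
    """Count edges that disagree with a clustering.

    labels: dict mapping a pair (u, v) with u < v to '+' or '-'.
    clustering: dict mapping each vertex to a cluster id.
    """
    bad = 0
    for (u, v), sign in labels.items():
        same = clustering[u] == clustering[v]
        if (sign == '+' and not same) or (sign == '-' and same):
            bad += 1
    return bad

# Illustrative complete graph on 4 items: {0, 1} similar, {2, 3} similar.
items = [0, 1, 2, 3]
labels = {(u, v): '+' if (u < 2) == (v < 2) else '-'
          for u, v in combinations(items, 2)}
perfect = {0: 'A', 1: 'A', 2: 'B', 3: 'B'}
lumped = {i: 'A' for i in items}
print(disagreements(labels, perfect))   # 0
print(disagreements(labels, lumped))    # 4 (the four '-' edges inside one cluster)
```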

Journal ArticleDOI
TL;DR: Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
Abstract: We propose that children employ specialized cognitive systems that allow them to recover an accurate causal map of the world: an abstract, coherent, learned representation of the causal relations among events. This kind of knowledge can be perspicuously understood in terms of the formalism of directed graphical causal models, or Bayes nets. Children's causal learning and inference may involve computations similar to those for learning causal Bayes nets and for predicting with them. Experimental results suggest that 2- to 4-year-old children construct new causal maps and that their learning is consistent with the Bayes net formalism.
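
A toy sketch of the kind of causal Bayes net referenced above, contrasting observing a variable with intervening on it; the two-node "blicket detector" setup and its probabilities are made-up illustrations, not the paper's experiments:

```python
import random

# Causal graph: Blicket -> Detector (a made-up two-node causal map).
P_BLICKET = 0.3             # prior that an object is a blicket
P_DETECT_GIVEN_B = 0.9      # detector fires if a blicket is on it
P_DETECT_GIVEN_NOT_B = 0.1  # false-positive rate

def sample(do_blicket=None):
    """Sample the net; do_blicket intervenes, overriding the prior."""
    blicket = (random.random() < P_BLICKET) if do_blicket is None else do_blicket
    p_detect = P_DETECT_GIVEN_B if blicket else P_DETECT_GIVEN_NOT_B
    return blicket, random.random() < p_detect

# Observational P(detector fires) vs interventional P(fires | do(blicket=True)).
n = 100_000
obs = sum(sample()[1] for _ in range(n)) / n
do = sum(sample(do_blicket=True)[1] for _ in range(n)) / n
print(round(obs, 2), round(do, 2))   # ~0.34 vs ~0.90
```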

Journal ArticleDOI
TL;DR: Some benefits and challenges of conducting psychological research via the Internet are described and recommendations to both researchers and institutional review boards for dealing with them are offered.
Abstract: As the Internet has changed communication, commerce, and the distribution of information, so too it is changing psychological research. Psychologists can observe new or rare phenomena online and can do research on traditional psychological topics more efficiently, enabling them to expand the scale and scope of their research. Yet these opportunities entail risk both to research quality and to human subjects. Internet research is inherently no more risky than traditional observational, survey, or experimental methods. Yet the risks and safeguards against them will differ from those characterizing traditional research and will themselves change over time. This article describes some benefits and challenges of conducting psychological research via the Internet and offers recommendations to both researchers and institutional review boards for dealing with them.

Journal ArticleDOI
TL;DR: The results demonstrate that incidental emotions can influence decisions even when real money is at stake, and that emotions of the same valence can have opposing effects on such decisions.
Abstract: We examined the impact of specific emotions on the endowment effect, the tendency for selling prices to exceed buying or "choice" prices for the same object. As predicted by appraisal-tendency theory, disgust induced by a prior, irrelevant situation carried over to normatively unrelated economic decisions, reducing selling and choice prices and eliminating the endowment effect. Sadness also carried over, reducing selling prices but increasing choice prices, producing a "reverse endowment effect" in which choice prices exceeded selling prices. The results demonstrate that incidental emotions can influence decisions even when real money is at stake, and that emotions of the same valence can have opposing effects on such decisions. Two decades of research document the tendency for incidental emotion to color normatively unrelated judgments and decisions (for reviews, see Forgas, 1995; Loewenstein & Lerner, 2002; Schwarz,

Journal ArticleDOI
TL;DR: Statistical methods for the analysis of multiple neural spike-train data are reviewed and future challenges for methodology research are discussed.
Abstract: Multiple electrodes are now a standard tool in neuroscience research that make it possible to study the simultaneous activity of several neurons in a given brain region or across different regions. The data from multi-electrode studies present important analysis challenges that must be resolved for optimal use of these neurophysiological measurements to answer questions about how the brain works. Here we review statistical methods for the analysis of multiple neural spike-train data and discuss future challenges for methodology research.

Journal ArticleDOI
TL;DR: It is found not only that many more children learned from direct instruction than from discovery learning, but also that when asked to make broader, richer scientific judgments, the many children who learned about experimental design from direct instruction performed as well as those few children who discovered the method on their own.
Abstract: In a study with 112 third- and fourth-grade children, we measured the relative effectiveness of discovery learning and direct instruction at two points in the learning process: (a) during the initial acquisition of the basic cognitive objective (a procedure for designing and interpreting simple, unconfounded experiments) and (b) during the subsequent transfer and application of this basic skill to more diffuse and authentic reasoning associated with the evaluation of science-fair posters. We found not only that many more children learned from direct instruction than from discovery learning, but also that when asked to make broader, richer scientific judgments, the many children who learned about experimental design from direct instruction performed as well as those few children who discovered the method on their own. These results challenge predictions derived from the presumed superiority of discovery approaches in teaching young children basic procedures for early scientific investigations.