
Showing papers by "Carnegie Mellon University" published in 2003


Journal ArticleDOI
TL;DR: Generalized Estimating Equations is a good introductory book for analyzing continuous and discrete correlated data using GEE methods and provides good guidance for analyzing correlated data in biomedical studies and survey studies.
Abstract: (2003). Statistical Analysis With Missing Data. Technometrics: Vol. 45, No. 4, pp. 364-365.

6,960 citations


Proceedings Article
21 Aug 2003
TL;DR: An approach to semi-supervised learning is proposed that is based on a Gaussian random field model, and methods to incorporate class priors and the predictions of classifiers obtained by supervised learning are discussed.
Abstract: An approach to semi-supervised learning is proposed that is based on a Gaussian random field model. Labeled and unlabeled data are represented as vertices in a weighted graph, with edge weights encoding the similarity between instances. The learning problem is then formulated in terms of a Gaussian random field on this graph, where the mean of the field is characterized in terms of harmonic functions, and is efficiently obtained using matrix methods or belief propagation. The resulting learning algorithms have intimate connections with random walks, electric networks, and spectral graph theory. We discuss methods to incorporate class priors and the predictions of classifiers obtained by supervised learning. We also propose a method of parameter learning by entropy minimization, and show the algorithm's ability to perform feature selection. Promising experimental results are presented for synthetic data, digit classification, and text classification tasks.
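
For concreteness, the harmonic solution described in this abstract has a closed form in terms of the graph Laplacian. The following is a minimal NumPy sketch of that solve, assuming a precomputed similarity matrix W and binary labels; it illustrates the idea, not the authors' code.

```python
# Minimal sketch of the harmonic-function solution over a weighted graph.
import numpy as np

def harmonic_labels(W, y_labeled, labeled_idx):
    """W: (n, n) symmetric similarity matrix; y_labeled: labels in {0, 1};
    labeled_idx: indices of labeled vertices. Returns soft labels for all vertices."""
    n = W.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # combinatorial graph Laplacian
    L_uu = L[np.ix_(unlabeled_idx, unlabeled_idx)]
    W_ul = W[np.ix_(unlabeled_idx, labeled_idx)]
    # Harmonic (mean-of-the-field) solution: L_uu f_u = W_ul f_l
    f_u = np.linalg.solve(L_uu, W_ul @ y_labeled)
    f = np.zeros(n)
    f[labeled_idx] = y_labeled
    f[unlabeled_idx] = f_u
    return f

# Toy usage: 5-node chain graph, end nodes labeled 0 and 1; the interior interpolates linearly.
W = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
print(harmonic_labels(W, np.array([0.0, 1.0]), np.array([0, 4])))
```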

3,908 citations


Journal ArticleDOI
TL;DR: It is proposed that social cohesion around a relationship affects the willingness and motivation of individuals to invest time, energy, and effort in sharing knowledge with others, and that network range (ties to different knowledge pools) increases a person's ability to convey complex ideas to heterogeneous audiences.
Abstract: This research considers how different features of informal networks affect knowledge transfer. As a complement to previous research that has emphasized the dyadic tie strength component of informal...

3,319 citations


Proceedings ArticleDOI
11 May 2003
TL;DR: The random-pairwise keys scheme is presented, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation.
Abstract: Key establishment in sensor networks is a challenging problem because asymmetric key cryptosystems are unsuitable for use in resource constrained sensor nodes, and also because the nodes could be physically compromised by an adversary. We present three new mechanisms for key establishment using the framework of pre-distributing a random set of keys to each node. First, in the q-composite keys scheme, we trade off the unlikeliness of a large-scale network attack in order to significantly strengthen random key predistribution's strength against smaller-scale attacks. Second, in the multipath-reinforcement scheme, we show how to strengthen the security between any two nodes by leveraging the security of other links. Finally, we present the random-pairwise keys scheme, which perfectly preserves the secrecy of the rest of the network when any node is captured, and also enables node-to-node authentication and quorum-based revocation.
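
As a toy illustration of the q-composite idea summarized here (two nodes form a link key only when their randomly assigned key rings share at least q keys, and the link key is derived from all shared keys), a short sketch follows; the pool size, ring size, and hash choice are illustrative assumptions, not the paper's parameters.

```python
# Toy sketch of q-composite key predistribution: random key rings drawn from a pool,
# link key formed by hashing all shared keys when the overlap is at least Q.
import hashlib
import os
import random

POOL_SIZE, RING_SIZE, Q = 1000, 50, 2
key_pool = {i: os.urandom(16) for i in range(POOL_SIZE)}

def key_ring():
    return set(random.sample(range(POOL_SIZE), RING_SIZE))

def link_key(ring_a, ring_b):
    shared = sorted(ring_a & ring_b)
    if len(shared) < Q:
        return None                       # not enough overlap: no secure link
    h = hashlib.sha256()
    for kid in shared:                    # hash every shared key into the link key
        h.update(key_pool[kid])
    return h.hexdigest()

a, b = key_ring(), key_ring()
print("link established:", link_key(a, b) is not None)
```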

3,125 citations


Journal ArticleDOI
TL;DR: The context for socially interactive robots is discussed, emphasizing the relationship to other research fields and the different forms of "social robots", and a taxonomy of design methods and system components used to build socially interactive robots is presented.

2,869 citations


Journal ArticleDOI
TL;DR: Results revealed strong and negative correlations between relationship conflict, team performance, and team member satisfaction, in contrast to what has been suggested in both academic research and introductory textbooks.
Abstract: This study provides a meta-analysis of research on the associations between relationship conflict, task conflict, team performance, and team member satisfaction. Consistent with past theorizing, results revealed strong and negative correlations between relationship conflict, team performance, and team member satisfaction. In contrast to what has been suggested in both academic research and introductory textbooks, however, results also revealed strong and negative (instead of the predicted positive) correlations between task conflict, team performance, and team member satisfaction. As predicted, conflict had stronger negative relations with team performance in highly complex (decision making, project, mixed) than in less complex (production) tasks. Finally, task conflict was less negatively related to team performance when task conflict and relationship conflict were weakly, rather than strongly, correlated.

2,673 citations


Journal ArticleDOI
TL;DR: In this paper, the authors used Monte Carlo realizations of different star formation histories, including starbursts of varying strength and a range of metallicities, to constrain the mean stellar ages of galaxies and the fractional stellar mass formed in bursts over the past few Gyr.
Abstract: We develop a new method to constrain the star formation histories, dust attenuation and stellar masses of galaxies. It is based on two stellar absorption-line indices, the 4000-Å break strength and the Balmer absorption-line index HδA. Together, these indices allow us to constrain the mean stellar ages of galaxies and the fractional stellar mass formed in bursts over the past few Gyr. A comparison with broad-band photometry then yields estimates of dust attenuation and of stellar mass. We generate a large library of Monte Carlo realizations of different star formation histories, including starbursts of varying strength and a range of metallicities. We use this library to generate median likelihood estimates of burst mass fractions, dust attenuation strengths, stellar masses and stellar mass-to-light ratios for a sample of 122 808 galaxies drawn from the Sloan Digital Sky Survey. The typical 95 per cent confidence range in our estimated stellar masses is ±40 per cent. We study how the stellar mass-to-light ratios of galaxies vary as a function of absolute magnitude, concentration index and photometric passband and how dust attenuation varies as a function of absolute magnitude and 4000-Å break strength. We also calculate how the total stellar mass of the present Universe is distributed over galaxies as a function of their mass, size, concentration, colour, burst mass fraction and surface mass density. We find that most of the stellar mass in the local Universe resides in galaxies that have, to within a factor of approximately 2, stellar masses ~5 × 10^10 M_⊙, half-light radii ~3 kpc and half-light surface mass densities ~10^9 M_⊙ kpc^-2. The distribution of D_n(4000) is strongly bimodal, showing a clear division between galaxies dominated by old stellar populations and galaxies with more recent star formation.
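
For reference, the narrow 4000-Å break index used as one of the two indices above is simply a flux ratio across the break. The sketch below assumes the Balogh et al. (1999) narrow-band definition (3850-3950 Å and 4000-4100 Å) and a spectrum already expressed as flux density per unit frequency; both choices are assumptions made for illustration, not details taken from this abstract.

```python
# Sketch of the narrow 4000-A break index D_n(4000) as a red/blue continuum flux ratio.
import numpy as np

def dn4000(wavelength_angstrom, flux_nu):
    blue = (wavelength_angstrom >= 3850) & (wavelength_angstrom < 3950)
    red = (wavelength_angstrom >= 4000) & (wavelength_angstrom < 4100)
    return flux_nu[red].mean() / flux_nu[blue].mean()

# Toy usage: a flat spectrum with a step across the break gives the size of the step.
wl = np.arange(3800.0, 4200.0, 1.0)
f_nu = np.where(wl < 3975, 1.0, 1.6)
print(round(dn4000(wl, f_nu), 2))   # ~1.6 for this synthetic step
```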

2,407 citations


Proceedings Article
21 Aug 2003
TL;DR: An algorithm for online convex programming is introduced and applied to repeated games, where it is shown to be a generalization of infinitesimal gradient ascent, and the results imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
Abstract: Convex programming involves a convex set F ⊆ R^n and a convex cost function c : F → R. The goal of convex programming is to find a point in F which minimizes c. In online convex programming, the convex set is known in advance, but in each step of some repeated optimization problem, one must select a point in F before seeing the cost function for that step. This can be used to model factory production, farm production, and many other industrial optimization problems where one is unaware of the value of the items produced until they have already been constructed. We introduce an algorithm for this domain. We also apply this algorithm to repeated games, and show that it is really a generalization of infinitesimal gradient ascent, and the results here imply that generalized infinitesimal gradient ascent (GIGA) is universally consistent.
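
The update this abstract refers to is the projected online gradient step x_{t+1} = Proj_F(x_t - η_t ∇c_t(x_t)). Below is a short sketch of that update; the feasible set (a Euclidean ball), the step sizes, and the toy cost functions are illustrative assumptions, not the paper's experiments. With the O(1/√t) step size this update achieves O(√T) regret against the best fixed point in F, which is the property the GIGA analysis builds on.

```python
# Minimal sketch of online projected gradient descent on a sequence of convex costs.
import numpy as np

def project_to_ball(x, radius=1.0):
    """Euclidean projection onto the ball ||x|| <= radius (our stand-in for the set F)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_projected_gradient(grads, x0, radius=1.0):
    """grads: sequence of gradient callables, one per round (revealed after x_t is chosen)."""
    x = x0.copy()
    history = [x.copy()]
    for t, grad in enumerate(grads, start=1):
        eta = 1.0 / np.sqrt(t)                      # the usual O(1/sqrt(t)) step size
        x = project_to_ball(x - eta * grad(x), radius)
        history.append(x.copy())
    return history

# Toy usage: quadratic costs c_t(x) = ||x - target_t||^2 with drifting targets.
rng = np.random.default_rng(0)
targets = [rng.uniform(-0.5, 0.5, size=2) for _ in range(20)]
grads = [(lambda x, tgt=tgt: 2.0 * (x - tgt)) for tgt in targets]
xs = online_projected_gradient(grads, x0=np.zeros(2))
print(xs[-1])
```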

2,273 citations


Journal ArticleDOI
TL;DR: In the Fall of 2000, a database of more than 40,000 facial images of 68 people was collected using the Carnegie Mellon University 3D Room, imaging each person across 13 different poses, under 43 different illumination conditions, and with four different expressions.
Abstract: In the Fall of 2000, we collected a database of more than 40,000 facial images of 68 people. Using the Carnegie Mellon University 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this the CMU pose, illumination, and expression (PIE) database. We describe the imaging hardware, the collection procedure, the organization of the images, several possible uses, and how to obtain the database.

1,880 citations


Proceedings ArticleDOI
09 Jul 2003
TL;DR: A new, general mechanism, called packet leashes, is presented for detecting and thus defending against wormhole attacks, and a specific protocol is presented, called TIK, that implements leashes.
Abstract: As mobile ad hoc network applications are deployed, security emerges as a central requirement. In this paper, we introduce the wormhole attack, a severe attack in ad hoc networks that is particularly challenging to defend against. The wormhole attack is possible even if the attacker has not compromised any hosts, and even if all communication provides authenticity and confidentiality. In the wormhole attack, an attacker records packets (or bits) at one location in the network, tunnels them (possibly selectively) to another location, and retransmits them there into the network. The wormhole attack can form a serious threat in wireless networks, especially against many ad hoc network routing protocols and location-based wireless security systems. For example, most existing ad hoc network routing protocols, without some mechanism to defend against the wormhole attack, would be unable to find routes longer than one or two hops, severely disrupting communication. We present a new, general mechanism, called packet leashes, for detecting and thus defending against wormhole attacks, and we present a specific protocol, called TIK, that implements leashes.
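
To make the leash idea concrete, here is a toy check in the spirit of a temporal leash: the receiver rejects a packet whose timestamps imply a travel distance beyond what light-speed propagation allows within the transmission range. This is only an illustration of the principle; the clock-error bound and thresholds are assumed parameters, and the sketch is not the TIK protocol itself.

```python
# Toy sketch of a temporal-leash style check on a received packet.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def leash_ok(send_time_s, receive_time_s, max_range_m, clock_error_s=1e-7):
    """Accept only if the packet could have travelled <= max_range_m given the timestamps."""
    elapsed = (receive_time_s - send_time_s) + clock_error_s   # worst-case clock skew
    implied_distance = elapsed * SPEED_OF_LIGHT_M_PER_S
    return implied_distance <= max_range_m

# A wormhole that tunnels the packet far away shows up as an implausibly long flight time.
print(leash_ok(0.0, 0.4e-6, max_range_m=300.0))    # nearby neighbour: accepted (~150 m implied)
print(leash_ok(0.0, 40e-6, max_range_m=300.0))     # tunnelled packet: rejected (~12 km implied)
```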

1,667 citations


Book
01 Jan 2003
TL;DR: This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping, and describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems.
Abstract: This article provides a comprehensive introduction into the field of robotic mapping, with a focus on indoor mapping. It describes and compares various probabilistic techniques, as they are presently being applied to a vast array of mobile robot mapping problems. The history of robotic mapping is also detailed, along with an extensive list of open research problems.

Book ChapterDOI
04 May 2003
TL;DR: This work introduces captcha, an automated test that humans can pass, but current computer programs can't pass; any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem; and provides several novel constructions of captchas, which imply a win-win situation.
Abstract: We introduce captcha, an automated test that humans can pass, but current computer programs can't pass: any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of captchas. Since captchas have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much like research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. We introduce two families of AI problems that can be used to construct captchas and we show that solutions to such problems can be used for steganographic communication. captchas based on these AI problem families, then, imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels.

Book ChapterDOI
08 Jul 2003
TL;DR: Counterexample-guided abstraction refinement is an automatic abstraction method where the key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.
Abstract: The main practical problem in model checking is the combinatorial explosion of system states commonly known as the state explosion problem. Abstraction methods attempt to reduce the size of the state space by employing knowledge about the system and the specification in order to model only relevant features in the Kripke structure. Counterexample-guided abstraction refinement is an automatic abstraction method where, starting with a relatively small skeletal representation of the system to be verified, increasingly precise abstract representations of the system are computed. The key step is to extract information from false negatives ("spurious counterexamples") due to over-approximation.
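
The refinement loop described here has a simple overall shape, sketched below in schematic form; model_check, is_spurious, and refine are placeholders standing in for the components of a real model checker, not an actual tool's API.

```python
# Schematic sketch of the counterexample-guided abstraction refinement (CEGAR) loop.
def cegar(concrete_system, spec, initial_abstraction, model_check, is_spurious, refine):
    abstraction = initial_abstraction
    while True:
        ok, counterexample = model_check(abstraction, spec)
        if ok:
            return True, None                      # abstract model satisfies the spec
        if not is_spurious(concrete_system, counterexample):
            return False, counterexample           # genuine violation found
        # Spurious counterexample: use it to split abstract states and try again.
        abstraction = refine(abstraction, counterexample)
```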

Journal ArticleDOI
TL;DR: This paper proposes and develops a link-layer channel model termed effective capacity (EC), in which a wireless link is first modeled by two EC functions, namely the probability of a nonempty buffer and the QoS exponent of a connection, and a simple and efficient algorithm is then proposed to estimate these EC functions.
Abstract: To facilitate the efficient support of quality of service (QoS) in next-generation wireless networks, it is essential to model a wireless channel in terms of connection-level QoS metrics such as data rate, delay, and delay-violation probability. However, the existing wireless channel models, i.e., physical-layer channel models, do not explicitly characterize a wireless channel in terms of these QoS metrics. In this paper, we propose and develop a link-layer channel model termed effective capacity (EC). In this approach, we first model a wireless link by two EC functions, namely, the probability of nonempty buffer, and the QoS exponent of a connection. Then, we propose a simple and efficient algorithm to estimate these EC functions. The physical-layer analogs of these two link-layer EC functions are the marginal distribution (e.g., Rayleigh-Ricean distribution) and the Doppler spectrum, respectively. The key advantages of the EC link-layer modeling and estimation are: 1) ease of translation into QoS guarantees, such as delay bounds; 2) simplicity of implementation; and 3) accuracy, and hence, efficiency in admission control and resource reservation. We illustrate the advantage of our approach with a set of simulation experiments, which show that the actual QoS metric is closely approximated by the QoS metric predicted by the EC link-layer model, under a wide range of conditions.
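
The effective-capacity quantity this abstract builds on can be written in the usual effective-bandwidth dual form α(u) = -(1/(u t)) log E[exp(-u S(t))], where S(t) is the cumulative service offered by the channel. The sketch below is a generic Monte Carlo estimate of that quantity from simulated fading-rate samples; it is not the estimation algorithm proposed in the paper, and the channel parameters are arbitrary.

```python
# Rough Monte Carlo sketch of an effective-capacity-style function from simulated rates.
import numpy as np

def effective_capacity(rate_paths, u):
    """rate_paths: (num_paths, T) per-slot service rates; u > 0 is the QoS exponent."""
    T = rate_paths.shape[1]
    S = rate_paths.sum(axis=1)                 # cumulative service on each sample path
    a = -u * S
    m = a.max()                                # log-mean-exp for numerical stability
    log_mgf = m + np.log(np.mean(np.exp(a - m)))
    return -log_mgf / (u * T)

rng = np.random.default_rng(1)
snr = rng.exponential(scale=10.0, size=(5000, 200))    # Rayleigh fading, mean SNR 10
rates = np.log2(1.0 + snr)                             # Shannon rate per slot (bit/s/Hz)
for u in (0.01, 0.1, 1.0):
    print(u, round(effective_capacity(rates, u), 3))   # falls from the mean rate as u grows
```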

Journal ArticleDOI
TL;DR: The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust.
Abstract: Although research on trust in an organizational context has advanced considerably in recent years, the literature has yet to produce a set of generalizable propositions that inform our understanding of the organization and coordination of work. We propose that conceptualizing trust as an organizing principle is a powerful way of integrating the diverse trust literature and distilling generalizable implications for how trust affects organizing. We develop the notion of trust as an organizing principle by specifying structuring and mobilizing as two sets of causal pathways through which trust influences several important properties of organizations. We further describe specific mechanisms within structuring and mobilizing that influence interaction patterns and organizational processes. The principal aim of the framework is to advance the literature by connecting the psychological and sociological micro-foundations of trust with the macro-bases of organizing. The paper concludes by demonstrating how the framework can be applied to yield novel insights into traditional views of organizations and to stimulate original and innovative avenues of organizational research that consider both the benefits and downsides of trust.

Book
21 Apr 2003
TL;DR: In this paper, the authors develop the mathematical foundations of partially ordered sets with completeness properties of various degrees, in particular directed complete ordered sets and complete lattices, and model the notion that one element 'finitely approximates' another, something closely related to intrinsic topologies linking order and topology.
Abstract: Information content and programming semantics are just two of the applications of the mathematical concepts of order, continuity and domains. The authors develop the mathematical foundations of partially ordered sets with completeness properties of various degrees, in particular directed complete ordered sets and complete lattices. Uniquely, they focus on partially ordered sets that have an extra order relation, modelling the notion that one element 'finitely approximates' another, something closely related to intrinsic topologies linking order and topology. Extensive use is made of topological ideas, both by defining useful topologies on the structures themselves and by developing close connections with numerous aspects of topology. The theory so developed not only has applications to computer science but also within mathematics to such areas as analysis, the spectral theory of algebras and the theory of computability. This authoritative, comprehensive account of the subject will be essential for all those working in the area.

Journal ArticleDOI
TL;DR: This work proposes a new theoretical setting based on the mathematical framework of hierarchical Bayesian inference for reasoning about the visual system, and suggests that the algorithms of particle filtering and Bayesian-belief propagation might model these interactive cortical computations.
Abstract: Traditional views of visual processing suggest that early visual neurons in areas V1 and V2 are static spatiotemporal filters that extract local features from a visual scene. The extracted information is then channeled through a feedforward chain of modules in successively higher visual areas for further analysis. Recent electrophysiological recordings from early visual neurons in awake behaving monkeys reveal that there are many levels of complexity in the information processing of the early visual cortex, as seen in the long-latency responses of its neurons. These new findings suggest that activity in the early visual cortex is tightly coupled and highly interactive with the rest of the visual system. They lead us to propose a new theoretical setting based on the mathematical framework of hierarchical Bayesian inference for reasoning about the visual system. In this framework, the recurrent feedforward/feedback loops in the cortex serve to integrate top-down contextual priors and bottom-up observations so as to implement concurrent probabilistic inference along the visual hierarchy. We suggest that the algorithms of particle filtering and Bayesian-belief propagation might model these interactive cortical computations. We review some recent neurophysiological evidence that supports the plausibility of these ideas.

Journal ArticleDOI
TL;DR: The authors show that initial valuations of familiar products and simple hedonic experiences are strongly influenced by arbitrary "anchors" (sometimes derived from a person's social security number) and that subsequent valuations are also coherent with respect to salient differences in perceived quality or quantity of these products and experiences.
Abstract: In six experiments we show that initial valuations of familiar products and simple hedonic experiences are strongly influenced by arbitrary “anchors” (sometimes derived from a person’s social security number). Because subsequent valuations are also coherent with respect to salient differences in perceived quality or quantity of these products and experiences, the entire pattern of valuations can easily create an illusion of order, as if it is being generated by stable underlying preferences. The experiments show that this combination of coherent arbitrariness (1) cannot be interpreted as a rational response to information, (2) does not decrease as a result of experience with a good, (3) is not necessarily reduced by market forces, and (4) is not unique to cash prices. The results imply that demand curves estimated from market data need not reveal true consumer preferences, in any normatively significant sense of the term.

Proceedings Article
09 Aug 2003
TL;DR: Using an open-source, Java toolkit of name-matching methods, the authors experimentally compare string distance metrics on the task of matching entity names and find that the best performing method is a hybrid scheme combining a TFIDF weighting scheme, which is widely used in information retrieval, with the Jaro-Winkler string-distance scheme.
Abstract: Using an open-source, Java toolkit of name-matching methods, we experimentally compare string distance metrics on the task of matching entity names. We investigate a number of different metrics proposed by different communities, including edit-distance metrics, fast heuristic string comparators, token-based distance metrics, and hybrid methods. Overall, the best-performing method is a hybrid scheme combining a TFIDF weighting scheme, which is widely used in information retrieval, with the Jaro-Winkler string-distance scheme, which was developed in the probabilistic record linkage community.
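
The hybrid metric singled out here combines TFIDF weighting with the Jaro-Winkler string similarity. Below is a compact sketch of the Jaro-Winkler part, written from the standard published definition rather than taken from the open-source Java toolkit the paper describes; the TFIDF/soft-TFIDF weighting layer is omitted.

```python
# Compact sketch of Jaro and Jaro-Winkler string similarity.
def jaro(s1, s2):
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(max(len(s1), len(s2)) // 2 - 1, 0)
    matched2 = [False] * len(s2)
    matches1 = []
    for i, c in enumerate(s1):                       # matching characters within the window
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == c:
                matched2[j] = True
                matches1.append(c)
                break
    if not matches1:
        return 0.0
    matches2 = [s2[j] for j in range(len(s2)) if matched2[j]]
    transpositions = sum(a != b for a, b in zip(matches1, matches2)) / 2.0
    m = float(len(matches1))
    return (m / len(s1) + m / len(s2) + (m - transpositions) / m) / 3.0

def jaro_winkler(s1, s2, prefix_scale=0.1):
    base = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):                 # common prefix, capped at 4 characters
        if a != b:
            break
        prefix += 1
    return base + prefix * prefix_scale * (1.0 - base)

print(round(jaro_winkler("MARTHA", "MARHTA"), 3))    # classic example, about 0.961
```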

Journal ArticleDOI
TL;DR: For a complete sample of 122 808 galaxies drawn from the Sloan Digital Sky Survey (SDSS), this article studied the relationship between stellar mass, star formation history, size and internal structure.
Abstract: We study the relations between stellar mass, star formation history, size and internal structure for a complete sample of 122 808 galaxies drawn from the Sloan Digital Sky Survey. We show that low-redshift galaxies divide into two distinct families at a stellar mass of 3 × 10^10 M_⊙. Lower-mass galaxies have young stellar populations, low surface mass densities and the low concentrations typical of discs. Their star formation histories are more strongly correlated with surface mass density than with stellar mass. A significant fraction of the lowest-mass galaxies in our sample have experienced recent starbursts. At given stellar mass, the sizes of low-mass galaxies are lognormally distributed with dispersion σ(ln R_50) ~ 0.5, in excellent agreement with the idea that they form with little angular momentum loss through cooling and condensation in a gravitationally dominant dark matter halo. Their median stellar surface mass density scales with stellar mass as μ* ∝ M*^0.54, suggesting that the stellar mass of a disc galaxy is proportional to the three-halves power of its halo mass. All of this suggests that the efficiency of the conversion of baryons into stars in low-mass galaxies increases in proportion to halo mass, perhaps as a result of supernova feedback processes. At stellar masses above 3 × 10^10 M_⊙, there is a rapidly increasing fraction of galaxies with old stellar populations, high surface mass densities and the high concentrations typical of bulges. In this regime, the size distribution remains lognormal, but its dispersion decreases rapidly with increasing mass and the median stellar mass surface density is approximately constant. This suggests that the star formation efficiency decreases in the highest-mass haloes, and that little star formation occurs in massive galaxies after they have assembled.

Journal ArticleDOI
TL;DR: Computational experiments with linear optimization problems involving semidefinite, quadratic, and linear cone constraints (SQLPs) are discussed and computational results on problems from the SDPLIB and DIMACS Challenge collections are reported.
Abstract: This paper discusses computational experiments with linear optimization problems involving semidefinite, quadratic, and linear cone constraints (SQLPs). Many test problems of this type are solved using a new release of SDPT3, a Matlab implementation of infeasible primal-dual path-following algorithms. The software developed by the authors uses Mehrotra-type predictor-corrector variants of interior-point methods and two types of search directions: the HKM and NT directions. A discussion of implementation details is provided and computational results on problems from the SDPLIB and DIMACS Challenge collections are reported.
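
SDPT3 itself is a MATLAB package, so as a small language-neutral illustration of the SQLP problem class it targets, here is a toy semidefinite program posed with the (unrelated) Python library cvxpy; this is only meant to show what a semidefinite-plus-linear cone problem looks like, not SDPT3's interface or algorithms.

```python
# Toy SQLP-style problem: minimize trace(C X) over the semidefinite cone with a linear constraint.
import cvxpy as cp
import numpy as np

n = 3
C = np.diag([1.0, 2.0, 3.0])
X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                 # semidefinite cone constraint
               cp.trace(X) == 1]       # linear equality constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()
print(problem.value)                   # -> 1.0: all mass on the smallest-eigenvalue direction
```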

Journal ArticleDOI
TL;DR: In this article, the authors examined the developmental course of physical aggression in childhood and analyzed its linkage to violent and nonviolent offending outcomes in adolescence and found that among boys there is continuity in problem behavior from childhood to adolescence.
Abstract: This study used data from 6 sites and 3 countries to examine the developmental course of physical aggression in childhood and to analyze its linkage to violent and nonviolent offending outcomes in adolescence. The results indicate that among boys there is continuity in problem behavior from childhood to adolescence and that such continuity is especially acute when early problem behavior takes the form of physical aggression. Chronic physical aggression during the elementary school years specifically increases the risk for continued physical violence as well as other nonviolent forms of delinquency during adolescence. However, this conclusion is reserved primarily for boys, because the results indicate no clear linkage between childhood physical aggression and adolescent offending among female samples despite notable similarities across male and female samples in the developmental course of physical aggression in childhood.

Journal ArticleDOI
TL;DR: The design procedure starts by defining a cost function, such as minimizing a combination of fuel consumption and selected emission species over a driving cycle, and dynamic programming is utilized to find the optimal control actions including the gear-shifting sequence and the power split between the engine and motor while subject to a battery SOC-sustaining constraint.
Abstract: Hybrid vehicle techniques have been widely studied recently because of their potential to significantly improve the fuel economy and drivability of future ground vehicles. Due to the dual-power-source nature of these vehicles, control strategies based on engineering intuition frequently fail to fully explore the potential of these advanced vehicles. In this paper, we present a procedure for the design of a near-optimal power management strategy. The design procedure starts by defining a cost function, such as minimizing a combination of fuel consumption and selected emission species over a driving cycle. Dynamic programming (DP) is then utilized to find the optimal control actions including the gear-shifting sequence and the power split between the engine and motor while subject to a battery SOC-sustaining constraint. Through analysis of the behavior of DP control actions, near-optimal rules are extracted, which, unlike DP control signals, are implementable. The performance of this power management control strategy is studied by using the hybrid vehicle model HE-VESIM developed at the Automotive Research Center of the University of Michigan. A tradeoff study between fuel economy and emissions was performed. It was found that significant emission reduction could be achieved at the expense of a small increase in fuel consumption.
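
The backward dynamic-programming step described here can be sketched compactly: the state is battery SOC on a grid, the action is the battery's share of the power demand, and the stage cost is fuel use plus an interpolated cost-to-go. Everything below (fuel map, battery model, demand profile, grids) is a toy placeholder, not HE-VESIM or the paper's setup.

```python
# Highly simplified backward DP over a SOC grid for an engine/battery power split.
import numpy as np

soc_grid = np.linspace(0.4, 0.8, 41)            # battery state of charge (the DP state)
actions = np.linspace(-10.0, 10.0, 21)          # battery power in kW (+ = discharge)
demand = 15.0 + 10.0 * np.sin(np.linspace(0, 6, 60))   # toy power demand per step, kW
BATTERY_KWH, DT_H = 1.5, 1.0 / 3600.0

def fuel_rate(engine_kw):                       # toy convex engine fuel map, g/s
    return 0.0 if engine_kw <= 0 else 0.2 + 0.08 * engine_kw + 0.002 * engine_kw ** 2

value = np.abs(soc_grid - 0.6) * 1e3            # terminal cost enforcing SOC sustenance
policy = []
for p_dem in reversed(demand):                  # backward pass over the driving cycle
    new_value = np.full_like(value, np.inf)
    best_action = np.zeros_like(value)
    for i, soc in enumerate(soc_grid):
        for p_batt in actions:
            soc_next = soc - p_batt * DT_H / BATTERY_KWH
            if not (soc_grid[0] <= soc_next <= soc_grid[-1]):
                continue                        # SOC window constraint
            cost = fuel_rate(p_dem - p_batt) + np.interp(soc_next, soc_grid, value)
            if cost < new_value[i]:
                new_value[i], best_action[i] = cost, p_batt
    value = new_value
    policy.append(best_action)
policy.reverse()                                # policy[t][i]: best battery power at step t, SOC soc_grid[i]
print(policy[0][20])                            # optimal first action when starting at SOC = 0.6
```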

Journal ArticleDOI
TL;DR: In this article, the cosmological evolution of the hard X-ray luminosity function (HXLF) of active galactic nuclei (AGNs) in the 2-10 keV luminosity range of 10^41.5-10^46.5 ergs s^-1 was investigated.
Abstract: We investigate the cosmological evolution of the hard X-ray luminosity function (HXLF) of active galactic nuclei (AGNs) in the 2-10 keV luminosity range of 10^41.5-10^46.5 ergs s^-1 as a function of redshift up to 3. From a combination of surveys conducted at photon energies above 2 keV with HEAO 1, ASCA, and Chandra, we construct a highly complete (>96%) sample consisting of 247 AGNs over the wide flux range of 10^-10 to 3.8 × 10^-15 ergs cm^-2 s^-1 (2-10 keV). For our purpose, we develop an extensive method of calculating the intrinsic (before absorption) HXLF and the absorption (NH) function. This utilizes the maximum likelihood method, fully correcting for observational biases with consideration of the X-ray spectrum of each source. We find that (1) the fraction of X-ray absorbed AGNs decreases with the intrinsic luminosity and (2) the evolution of the HXLF of all AGNs (including both type I and type II AGNs) is best described with a luminosity-dependent density evolution (LDDE) where the cutoff redshift increases with the luminosity. Our results directly constrain the evolution of AGNs that produce a major part of the hard X-ray background, thus solving its origin quantitatively. A combination of the HXLF and the NH function enables us to construct a purely observation-based population synthesis model. We present basic consequences of this model and discuss the contribution of Compton-thick AGNs to the rest of the hard X-ray background.

Journal ArticleDOI
TL;DR: The aftermath of September 11th highlights the need to understand how emotion affects citizens' responses to risk and provides an opportunity to test current theories of such effects; on the basis of appraisal-tendency theory, opposite effects were predicted for anger and fear on risk judgments and policy preferences.
Abstract: The aftermath of September 11th highlights the need to understand how emotion affects citizens' responses to risk. It also provides an opportunity to test current theories of such effects. On the basis of appraisal-tendency theory, we predicted opposite effects for anger and fear on risk judgments and policy preferences. In a nationally representative sample of Americans (N = 973, ages 13-88), fear increased risk estimates and plans for precautionary measures; anger did the opposite. These patterns emerged with both experimentally induced emotions and naturally occurring ones. Males had less pessimistic risk estimates than did females, emotion differences explaining 60 to 80% of the gender difference. Emotions also predicted diverging public policy preferences. Discussion focuses on theoretical, methodological, and policy implications. Terrorist attacks on the United States intensely affected many individuals and institutions, well beyond those directly harmed. Financial markets dropped, consumer spending declined, air travel plummeted, and public opinion toward government shifted. These responses reflected intense thought, and emotion. The attacks, and the prospect of sustained conflict with a diffuse, unfamiliar enemy, created anger, fear, and sadness.


Journal ArticleDOI
TL;DR: In this paper, the authors measured the galaxy luminosity density at z = 0.1 in five optical band passes corresponding to the SDSS bandpasses shifted to match their rest-frame shape.
Abstract: Using a catalog of 147,986 galaxy redshifts and fluxes from the Sloan Digital Sky Survey (SDSS), we measure the galaxy luminosity density at z = 0.1 in five optical bandpasses corresponding to the SDSS bandpasses shifted to match their rest-frame shape at z = 0.1. We denote the bands 0.1u, 0.1g, 0.1r, 0.1i, 0.1z, with λ_eff = (3216, 4240, 5595, 6792, 8111 Å), respectively. To estimate the luminosity function, we use a maximum likelihood method that allows for a general form for the shape of the luminosity function, fits for simple luminosity and number evolution, incorporates the flux uncertainties, and accounts for the flux limits of the survey. We find luminosity densities at z = 0.1, expressed in absolute AB magnitudes in a Mpc^3, to be (-14.10 ± 0.15, -15.18 ± 0.03, -15.90 ± 0.03, -16.24 ± 0.03, -16.56 ± 0.02) in (0.1u, 0.1g, 0.1r, 0.1i, 0.1z), respectively, for a cosmological model with Ω_0 = 0.3, Ω_Λ = 0.7, and h = 1, and using SDSS Petrosian magnitudes. Similar results are obtained using Sersic model magnitudes, suggesting that flux from outside the Petrosian apertures is not a major correction. In the 0.1r band, the best-fit Schechter function to our results has φ* = (1.49 ± 0.04) × 10^-2 h^3 Mpc^-3, M* - 5 log_10 h = -20.44 ± 0.01, and α = -1.05 ± 0.01. In solar luminosities, the luminosity density in 0.1r is (1.84 ± 0.04) × 10^8 h L_☉(0.1r) Mpc^-3. Our results in the 0.1g band are consistent with other estimates of the luminosity density, from the Two-Degree Field Galaxy Redshift Survey and the Millennium Galaxy Catalog. They represent a substantial change (~0.5 mag) from earlier SDSS luminosity density results based on commissioning data, almost entirely because of the inclusion of evolution in the luminosity function model.
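
For reference, the Schechter form behind the quoted 0.1r-band fit, written in absolute magnitudes, is φ(M) dM = 0.4 ln(10) φ* x^(α+1) exp(-x) dM with x = 10^(0.4 (M* - M)). The sketch below just evaluates that formula with the parameter values quoted above (taking h = 1); it is a worked illustration, not code from the paper.

```python
# Evaluate the Schechter luminosity function in absolute magnitudes.
import numpy as np

def schechter_mag(M, phi_star=1.49e-2, M_star=-20.44, alpha=-1.05):
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

M = np.arange(-23.0, -16.0, 0.5)
print(schechter_mag(M))        # number density per magnitude per Mpc^3 (for h = 1)
```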

Proceedings Article
09 Aug 2003
TL;DR: This paper introduces the Point-Based Value Iteration (PBVI) algorithm for POMDP planning, and presents results on a robotic laser tag problem as well as three test domains from the literature.
Abstract: This paper introduces the Point-Based Value Iteration (PBVI) algorithm for POMDP planning. PBVI approximates an exact value iteration solution by selecting a small set of representative belief points and then tracking the value and its derivative for those points only. By using stochastic trajectories to choose belief points, and by maintaining only one value hyper-plane per point, PBVI successfully solves large problems: we present results on a robotic laser tag problem as well as three test domains from the literature.
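
The core of the method is a point-based backup that generates one new alpha vector per belief point instead of enumerating all vectors exactly. The sketch below shows that backup on a tiny two-state POMDP; the model, belief set, and parameters are made up for illustration, and this is not the authors' implementation.

```python
# Bare-bones point-based value backup in the PBVI style.
import numpy as np

def point_based_backup(beliefs, alphas, T, O, R, gamma=0.95):
    """beliefs: (B, S); alphas: (K, S); T: (A, S, S); O: (A, S, O); R: (A, S)."""
    A = T.shape[0]
    num_obs = O.shape[2]
    # g[a][o][s, k] = sum_{s'} T[a, s, s'] * O[a, s', o] * alphas[k, s']
    g = [[(T[a] * O[a][:, o]) @ alphas.T for o in range(num_obs)] for a in range(A)]
    new_alphas = []
    for b in beliefs:
        best_val, best_vec = -np.inf, None
        for a in range(A):
            vec = R[a].copy()
            for o in range(num_obs):
                k = np.argmax(b @ g[a][o])          # best existing alpha for this (a, o) at b
                vec = vec + gamma * g[a][o][:, k]
            if b @ vec > best_val:
                best_val, best_vec = b @ vec, vec
        new_alphas.append(best_vec)                 # one hyperplane kept per belief point
    return np.array(new_alphas)

# Toy usage: 2 states, 2 actions, 2 observations, three belief points, repeated backups.
T = np.array([[[0.9, 0.1], [0.1, 0.9]], [[0.5, 0.5], [0.5, 0.5]]])
O = np.array([[[0.8, 0.2], [0.2, 0.8]], [[0.5, 0.5], [0.5, 0.5]]])
R = np.array([[1.0, 0.0], [0.0, 1.0]])
beliefs = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
alphas = np.zeros((1, 2))
for _ in range(30):
    alphas = point_based_backup(beliefs, alphas, T, O, R)
print(alphas @ beliefs.T)      # approximate value of each belief point under each alpha vector
```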

Proceedings Article
09 Aug 2003
TL;DR: This paper describes a modified version of FastSLAM which overcomes important deficiencies of the original algorithm, proves convergence of the new algorithm for linear SLAM problems, and provides real-world experimental results that illustrate an order of magnitude improvement in accuracy over the original FastSLAM algorithm.
Abstract: In [15], Montemerlo et al. proposed an algorithm called FastSLAM as an efficient and robust solution to the simultaneous localization and mapping problem. This paper describes a modified version of FastSLAM which overcomes important deficiencies of the original algorithm. We prove convergence of this new algorithm for linear SLAM problems and provide real-world experimental results that illustrate an order of magnitude improvement in accuracy over the original FastSLAM algorithm.