
Showing papers from the University of Notre Dame published in 2009


Journal ArticleDOI
Patrick S. Schnable1, Doreen Ware2, Robert S. Fulton3, Joshua C. Stein2  +156 moreInstitutions (18)
20 Nov 2009-Science
TL;DR: The sequence of the maize genome reveals it to be the most complex genome known to date; the correlation of methylation-poor regions with Mu transposon insertions and recombination is reported, as well as how uneven gene losses between duplicated regions were involved in returning an ancient allotetraploid to a genetically diploid state.
Abstract: We report an improved draft nucleotide sequence of the 2.3-gigabase genome of maize, an important crop plant and model for biological research. Over 32,000 genes were predicted, of which 99.8% were placed on reference chromosomes. Nearly 85% of the genome is composed of hundreds of families of transposable elements, dispersed nonuniformly across the genome. These were responsible for the capture and amplification of numerous gene fragments and affect the composition, sizes, and positions of centromeres. We also report on the correlation of methylation-poor regions with Mu transposon insertions and recombination, and copy number variants with insertions and/or deletions, as well as how uneven gene losses between duplicated regions were involved in returning an ancient allotetraploid to a genetically diploid state. These analyses inform and set the stage for further investigations to improve our understanding of the domestication and agricultural improvements of maize.

3,761 citations


Proceedings ArticleDOI
12 Dec 2009
TL;DR: Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account, clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account, configuring clusters with 4 cores gives the best EDA2P and EDAP.
Abstract: This paper introduces McPAT, an integrated power, area, and timing modeling framework that supports comprehensive design space exploration for multicore and manycore processor configurations ranging from 90nm to 22nm and beyond. At the microarchitectural level, McPAT includes models for the fundamental components of a chip multiprocessor, including in-order and out-of-order processor cores, networks-on-chip, shared caches, integrated memory controllers, and multiple-domain clocking. At the circuit and technology levels, McPAT supports critical-path timing modeling, area modeling, and dynamic, short-circuit, and leakage power modeling for each of the device types forecast in the ITRS roadmap including bulk CMOS, SOI, and double-gate transistors. McPAT has a flexible XML interface to facilitate its use with many performance simulators. Combined with a performance simulator, McPAT enables architects to consistently quantify the cost of new ideas and assess tradeoffs of different architectures using new metrics like energy-delay-area2 product (EDA2P) and energy-delay-area product (EDAP). This paper explores the interconnect options of future manycore processors by varying the degree of clustering over generations of process technologies. Clustering will bring interesting tradeoffs between area and performance because the interconnects needed to group cores into clusters incur area overhead, but many applications can make good use of them due to synergies of cache sharing. Combining power, area, and timing results of McPAT with performance simulation of PARSEC benchmarks at the 22nm technology node for both common in-order and out-of-order manycore designs shows that when die cost is not taken into account clustering 8 cores together gives the best energy-delay product, whereas when cost is taken into account configuring clusters with 4 cores gives the best EDA2P and EDAP.
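For concreteness, the composite metrics named above (EDP, EDAP, EDA2P) are simple products of energy, delay, and area; a minimal Python sketch, with placeholder numbers rather than McPAT output, is:

```python
# Hedged sketch: how the composite metrics named in the abstract are formed.
# The numeric values below are illustrative placeholders, not McPAT results.

def composite_metrics(energy_j: float, delay_s: float, area_mm2: float) -> dict:
    """Return the energy-delay and energy-delay-area style metrics."""
    edp = energy_j * delay_s                     # energy-delay product
    edap = energy_j * delay_s * area_mm2         # energy-delay-area product (EDAP)
    eda2p = energy_j * delay_s * area_mm2 ** 2   # energy-delay-area^2 product (EDA2P)
    return {"EDP": edp, "EDAP": edap, "EDA2P": eda2p}

# Example: compare two hypothetical cluster configurations.
cfg_4core = composite_metrics(energy_j=1.0, delay_s=1.2, area_mm2=80.0)
cfg_8core = composite_metrics(energy_j=1.1, delay_s=1.0, area_mm2=130.0)
print(cfg_4core, cfg_8core)
```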

2,487 citations


Journal ArticleDOI
TL;DR: This paper focuses on the stability analysis for switched linear systems under arbitrary switching, and highlights necessary and sufficient conditions for asymptotic stability.
Abstract: During the past several years, there have been increasing research activities in the field of stability analysis and switching stabilization for switched systems. This paper aims to briefly survey recent results in this field. First, the stability analysis for switched systems is reviewed. We focus on the stability analysis for switched linear systems under arbitrary switching, and we highlight necessary and sufficient conditions for asymptotic stability. After a brief review of the stability analysis under restricted switching and the multiple Lyapunov function theory, the switching stabilization problem is studied, and a variety of switching stabilization methods found in the literature are outlined. Then the switching stabilizability problem is investigated, that is under what condition it is possible to stabilize a switched system by properly designing switching control laws. Note that the switching stabilizability problem has been one of the most elusive problems in the switched systems literature. A necessary and sufficient condition for asymptotic stabilizability of switched linear systems is described here.
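As background, one standard sufficient condition from this literature (stated here as a reminder; it is not the necessary-and-sufficient stabilizability result the abstract refers to) is the existence of a common quadratic Lyapunov function for all subsystems:

```latex
% Common quadratic Lyapunov function: a sufficient (in general not necessary)
% condition for stability of a switched linear system under arbitrary switching.
\[
\exists\, P = P^{\top} \succ 0:\qquad
A_i^{\top} P + P A_i \prec 0 \quad \text{for all } i = 1,\dots,m,
\]
% Then V(x) = x^{\top} P x decreases along every subsystem, so
% \dot{x} = A_{\sigma(t)} x is globally uniformly asymptotically stable
% for every switching signal \sigma.
```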

2,470 citations


Journal ArticleDOI
TL;DR: This tutorial article surveys some of these techniques based on stochastic geometry and the theory of random geometric graphs, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature.
Abstract: Wireless networks are fundamentally limited by the intensity of the received signals and by their interference. Since both of these quantities depend on the spatial location of the nodes, mathematical techniques have been developed in the last decade to provide communication-theoretic results accounting for the network's geometrical configuration. Often, the location of the nodes in the network can be modeled as random, following for example a Poisson point process. In this case, different techniques based on stochastic geometry and the theory of random geometric graphs (including point process theory, percolation theory, and probabilistic combinatorics) have led to results on the connectivity, the capacity, the outage probability, and other fundamental limits of wireless networks. This tutorial article surveys some of these techniques, discusses their application to model wireless networks, and presents some of the main results that have appeared in the literature. It also serves as an introduction to the field for the other papers in this special issue.
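A minimal Monte Carlo sketch of the kind of model described above, assuming Poisson-distributed interferers, a power-law path loss, and Rayleigh fading (all parameter values are illustrative):

```python
# Hedged Monte Carlo sketch of the baseline model in the abstract: interferers
# form a Poisson point process, path loss is a power law, fading is Rayleigh.
import numpy as np

rng = np.random.default_rng(0)
lam, alpha, r, theta = 1e-3, 4.0, 10.0, 1.0   # density, path-loss exp., link length, SIR threshold
R, trials = 500.0, 20_000                     # simulation window radius, Monte Carlo runs

outages = 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R**2)            # number of interferers in the disk
    d = R * np.sqrt(rng.random(n))                 # uniform-in-disk distances from the receiver
    interference = np.sum(rng.exponential(size=n) * d**(-alpha))
    signal = rng.exponential() * r**(-alpha)       # Rayleigh fading -> exponential power
    outages += signal < theta * interference
print("estimated outage probability:", outages / trials)
```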

1,893 citations


Book
13 Nov 2009
TL;DR: For certain classes of node distributions, most notably Poisson point processes, and attenuation laws, closed-form results are available, for both the interference itself as well as the signal-to-interference ratios, which determine the network performance.
Abstract: Since interference is the main performance-limiting factor in most wireless networks, it is crucial to characterize the interference statistics. The two main determinants of the interference are the network geometry (spatial distribution of concurrently transmitting nodes) and the path loss law (signal attenuation with distance). For certain classes of node distributions, most notably Poisson point processes, and attenuation laws, closed-form results are available, for both the interference itself as well as the signal-to-interference ratios, which determine the network performance. This monograph presents an overview of these results and gives an introduction to the analytical techniques used in their derivation. The node distribution models range from lattices to homogeneous and clustered Poisson models to general motion-invariant ones. The analysis of the more general models requires the use of Palm theory, in particular conditional probability generating functionals, which are briefly introduced in the appendix.
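One representative closed-form result of the type summarized here, under the usual baseline assumptions (homogeneous Poisson interferers of density λ, path-loss exponent α > 2, Rayleigh fading on all links, interference-limited operation), is the success probability of a link of length r at SIR threshold θ:

```latex
% Success probability for the baseline Poisson/Rayleigh model
% (interference-limited, singular path-loss law with exponent \alpha > 2).
\[
\mathbb{P}(\mathrm{SIR} > \theta)
  \;=\; \exp\!\Bigl(-\lambda \pi r^{2}\,\theta^{2/\alpha}\,
        \Gamma\!\bigl(1+\tfrac{2}{\alpha}\bigr)\,
        \Gamma\!\bigl(1-\tfrac{2}{\alpha}\bigr)\Bigr).
\]
```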

976 citations


Journal ArticleDOI
TL;DR: The use of a 2D carbon nanostructure, graphene, as a support material for the dispersion of Pt nanoparticles provides new ways to develop advanced electrocatalyst materials for fuel cells as discussed by the authors.
Abstract: The use of a 2-D carbon nanostructure, graphene, as a support material for the dispersion of Pt nanoparticles provides new ways to develop advanced electrocatalyst materials for fuel cells. Platinum nanoparticles are deposited onto graphene sheets by means of borohydride reduction of H2PtCl6 in a graphene oxide (GO) suspension. The partially reduced GO-Pt catalyst is deposited as films onto glassy carbon and carbon Toray paper by drop cast or electrophoretic deposition methods. Nearly 80% enhancement in the electrochemically active surface area (ECSA) can be achieved by exposing partially reduced GO-Pt films to hydrazine followed by heat treatment (300 °C, 8 h). The electrocatalyst performance as evaluated from the hydrogen fuel cell demonstrates the role of graphene as an effective support material in the development of an electrocatalyst.

936 citations


Journal ArticleDOI
TL;DR: The results indicate that high-throughput yeast two-hybrid interactions for human proteins are more precise than literature-curated interactions supported by a single publication, suggesting that HT-Y2H is suitable to map a significant portion of the human interactome.
Abstract: Several attempts have been made to systematically map protein-protein interaction, or 'interactome', networks. However, it remains difficult to assess the quality and coverage of existing data sets. Here we describe a framework that uses an empirically-based approach to rigorously dissect quality parameters of currently available human interactome maps. Our results indicate that high-throughput yeast two-hybrid (HT-Y2H) interactions for human proteins are more precise than literature-curated interactions supported by a single publication, suggesting that HT-Y2H is suitable to map a significant portion of the human interactome. We estimate that the human interactome contains approximately 130,000 binary interactions, most of which remain to be mapped. Similar to estimates of DNA sequence data quality and genome size early in the Human Genome Project, estimates of protein interaction data quality and interactome size are crucial to establish the magnitude of the task of comprehensive human interactome mapping and to elucidate a path toward this goal.

862 citations


Journal ArticleDOI
01 Feb 2009
TL;DR: Of the four SVM variations considered in this paper, the novel granular SVMs-repetitive undersampling algorithm (GSVM-RU) is the best in terms of both effectiveness and efficiency.
Abstract: Traditional classification algorithms can be limited in their performance on highly unbalanced data sets. A popular stream of work for countering the problem of class imbalance has been the application of a variety of sampling strategies. In this paper, we focus on designing modifications to support vector machines (SVMs) to appropriately tackle the problem of class imbalance. We incorporate different "rebalance" heuristics in SVM modeling, including cost-sensitive learning, and over- and undersampling. These SVM-based strategies are compared with various state-of-the-art approaches on a variety of data sets by using various metrics, including G-mean, area under the receiver operating characteristic curve, F-measure, and area under the precision/recall curve. We show that we are able to surpass or match the previously known best algorithms on each data set. In particular, of the four SVM variations considered in this paper, the novel granular SVMs-repetitive undersampling algorithm (GSVM-RU) is the best in terms of both effectiveness and efficiency. GSVM-RU is effective, as it can minimize the negative effect of information loss while maximizing the positive effect of data cleaning in the undersampling process. GSVM-RU is efficient by extracting far fewer support vectors and, hence, greatly speeding up SVM prediction.
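A minimal scikit-learn sketch of two of the rebalance heuristics mentioned above, cost-sensitive weighting and random undersampling; this is only an illustration on synthetic data, not the GSVM-RU algorithm:

```python
# Hedged sketch of two rebalance heuristics named in the abstract
# (cost-sensitive weighting and random undersampling); NOT GSVM-RU itself.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

def g_mean(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return np.sqrt(tp / (tp + fn) * tn / (tn + fp))   # sqrt(sensitivity * specificity)

# (1) Cost-sensitive SVM: penalize minority-class errors more heavily.
cost = SVC(kernel="rbf", class_weight="balanced").fit(Xtr, ytr)

# (2) Random undersampling of the majority class, then a plain SVM.
maj, mino = np.where(ytr == 0)[0], np.where(ytr == 1)[0]
keep = np.random.default_rng(0).choice(maj, size=len(mino), replace=False)
idx = np.concatenate([keep, mino])
under = SVC(kernel="rbf").fit(Xtr[idx], ytr[idx])

for name, clf in [("cost-sensitive", cost), ("undersampled", under)]:
    pred = clf.predict(Xte)
    print(name, "G-mean:", round(g_mean(yte, pred), 3),
          "AUC:", round(roc_auc_score(yte, clf.decision_function(Xte)), 3))
```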

860 citations


Journal ArticleDOI
TL;DR: In this paper, TiO2 nanotube arrays and particulate films were modified with CdS quantum dots with the aim of tuning the response of the photoelectrochemical cell in the visible region.
Abstract: TiO2 nanotube arrays and particulate films are modified with CdS quantum dots with an aim to tune the response of the photoelectrochemical cell in the visible region. The method of successive ionic layer adsorption and reaction facilitates size control of CdS quantum dots. These CdS nanocrystals, upon excitation with visible light, inject electrons into the TiO2 nanotubes and particles and thus enable their use as photosensitive electrodes. Maximum incident photon to charge carrier efficiency (IPCE) values of 55% and 26% are observed for CdS sensitized TiO2 nanotube and nanoparticulate architectures respectively. The nearly doubling of IPCE observed with the TiO2 nanotube architecture is attributed to the increased efficiency of charge separation and transport of electrons.
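For reference, incident photon-to-charge-carrier efficiency values such as those quoted above are conventionally computed from the short-circuit photocurrent density, the excitation wavelength, and the incident light power:

```latex
% Standard IPCE definition; the factor 1240 nm·V is approximately hc/e.
\[
\mathrm{IPCE}\,(\%) \;=\;
\frac{1240 \times J_{\mathrm{sc}}\ [\mathrm{A\,cm^{-2}}]}
     {\lambda\ [\mathrm{nm}] \times P_{\mathrm{inc}}\ [\mathrm{W\,cm^{-2}}]}
\times 100 .
\]
```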

773 citations


Journal ArticleDOI
TL;DR: In this article, the authors present measurements of the Hubble diagram for 103 Type Ia supernovae with redshifts 0.04 < z < 0.42, discovered during the first season (Fall 2005) of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey.
Abstract: We present measurements of the Hubble diagram for 103 Type Ia supernovae (SNe) with redshifts 0.04 < z < 0.42, discovered during the first season (Fall 2005) of the Sloan Digital Sky Survey-II (SDSS-II) Supernova Survey. These data fill in the redshift "desert" between low- and high-redshift SN Ia surveys. Within the framework of the MLCS2K2 light-curve fitting method, we use the SDSS-II SN sample to infer the mean reddening parameter for host galaxies, R_V = 2.18 ± 0.14 (stat) ± 0.48 (syst), and find that the intrinsic distribution of host-galaxy extinction is well fitted by an exponential function, P(A_V) = exp(−A_V/τ_V), with τ_V = 0.334 ± 0.088 mag. We combine the SDSS-II measurements with new distance estimates for published SN data from the ESSENCE survey, the Supernova Legacy Survey (SNLS), the Hubble Space Telescope (HST), and a compilation of Nearby SN Ia measurements. A new feature in our analysis is the use of detailed Monte Carlo simulations of all surveys to account for selection biases, including those from spectroscopic targeting. Combining the SN Hubble diagram with measurements of baryon acoustic oscillations from the SDSS Luminous Red Galaxy sample and with cosmic microwave background temperature anisotropy measurements from the Wilkinson Microwave Anisotropy Probe, we estimate the cosmological parameters w and Ω_M, assuming a spatially flat cosmological model (FwCDM) with constant dark energy equation of state parameter, w. We also consider constraints upon Ω_M and Ω_Λ for a cosmological constant model (ΛCDM) with w = −1 and non-zero spatial curvature. For the FwCDM model and the combined sample of 288 SNe Ia, we find w = −0.76 ± 0.07 (stat) ± 0.11 (syst), Ω_M = 0.307 ± 0.019 (stat) ± 0.023 (syst) using MLCS2K2 and w = −0.96 ± 0.06 (stat) ± 0.12 (syst), Ω_M = 0.265 ± 0.016 (stat) ± 0.025 (syst) using the SALT-II fitter. We trace the discrepancy between these results to a difference in the rest-frame UV model combined with a different luminosity correction from color variations; these differences mostly affect the distance estimates for the SNLS and HST SNe. We present detailed discussions of systematic errors for both light-curve methods and find that they both show data-model discrepancies in rest-frame U band. For the SALT-II approach, we also see strong evidence for redshift-dependence of the color-luminosity parameter (β). Restricting the analysis to the 136 SNe Ia in the Nearby+SDSS-II samples, we find much better agreement between the two analysis methods but with larger uncertainties: w = −0.92 ± 0.13 (stat) +0.10/−0.33 (syst) for MLCS2K2 and w = −0.92 ± 0.11 (stat) +0.07/−0.15 (syst) for SALT-II.
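As an illustration of the kind of cosmological fit described above, a short Python sketch of the luminosity distance and distance modulus in a spatially flat, constant-w model; the parameter values are taken from the MLCS2K2 fit quoted in the abstract, and H0 = 70 km/s/Mpc is an assumption made only for this example:

```python
# Hedged sketch: distance modulus in a spatially flat, constant-w dark energy
# model (FwCDM), the class fitted in the abstract.  Not the paper's fitting code.
import numpy as np
from scipy.integrate import quad

C_KM_S, H0 = 299792.458, 70.0          # speed of light [km/s]; H0 assumed for illustration

def dist_modulus(z, omega_m=0.307, w=-0.76):
    """mu = 5*log10(d_L / 10 pc) for flat FwCDM."""
    e = lambda zp: np.sqrt(omega_m * (1 + zp)**3
                           + (1 - omega_m) * (1 + zp)**(3 * (1 + w)))
    comoving = (C_KM_S / H0) * quad(lambda zp: 1.0 / e(zp), 0.0, z)[0]   # [Mpc]
    d_l = (1 + z) * comoving                                             # luminosity distance [Mpc]
    return 5 * np.log10(d_l) + 25                                        # +25 converts Mpc to 10 pc

for z in (0.04, 0.2, 0.42):
    print(f"z = {z}: mu = {dist_modulus(z):.2f} mag")
```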

754 citations


Journal ArticleDOI
TL;DR: It is shown that the ARF6 GTP/GDP cycle regulates the release of protease-loaded plasma membrane-derived microvesicles from tumor cells into the surrounding environment, suggesting that ARF6 activation and the proteolytic activities of microvesicles could potentially serve as biomarkers for disease.

Journal ArticleDOI
TL;DR: Prior simulations demonstrated that self-triggered control systems can be remarkably robust to task delay; this paper derives bounds on a task's sampling period and deadline to quantify how robust the control system's performance will be to variations in these parameters.
Abstract: This paper examines a class of real-time control systems in which each control task triggers its next release based on the value of the last sampled state. Prior work used simulations to demonstrate that self-triggered control systems can be remarkably robust to task delay. This paper derives bounds on a task's sampling period and deadline to quantify how robust the control system's performance will be to variations in these parameters. In particular we establish inequality constraints on a control task's period and deadline whose satisfaction ensures that the closed-loop system's induced L2 gain lies below a specified performance threshold. The results apply to linear time-invariant systems driven by external disturbances whose magnitude is bounded by a linear function of the system state's norm. The plant is regulated by a full-information H∞ controller. These results can serve as the basis for the design of soft real-time systems that guarantee closed-loop control system performance at levels traditionally seen in hard real-time systems.

Journal ArticleDOI
TL;DR: Using a Taylor-series expansion, this article solves a simple reduced-form gravity equation, revealing a transparent theoretical relationship among bilateral trade flows, incomes, and trade costs, based upon the model in Anderson and van Wincoop.
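For context, the Anderson and van Wincoop gravity model referenced in the summary is usually written as follows (textbook notation, not the article's reduced form):

```latex
% Anderson and van Wincoop gravity equation (textbook form).
\[
x_{ij} \;=\; \frac{y_i\, y_j}{y^{W}}
\left(\frac{t_{ij}}{\Pi_i\, P_j}\right)^{1-\sigma},
\]
% x_{ij}: exports from region i to region j;  y_i, y_j, y^W: regional and world income;
% t_{ij}: bilateral trade cost;  \Pi_i, P_j: outward and inward multilateral resistance;
% \sigma: elasticity of substitution.
```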

Journal ArticleDOI
19 May 2009-Langmuir
TL;DR: Anchoring of ZnO nanoparticles on 2-D carbon nanostructures provides a new way to design semiconductor-carbon nanocomposites for catalytic applications.
Abstract: Graphene oxide sheets suspended in ethanol interact with excited ZnO nanoparticles and undergo photocatalytic reduction. The luminescence quenching of the green emission of ZnO serves as a probe to monitor the electron transfer from excited ZnO to graphene oxide. Anchoring of ZnO nanoparticles on 2-D carbon nanostructures provides a new way to design semiconductor−carbon nanocomposites for catalytic applications.

Journal ArticleDOI
TL;DR: A new phenotypic database summarizing correlations obtained from the disease history of more than 30 million patients in a Phenotypic Disease Network (PDN) is introduced, offering the potential to enhance the understanding of the origin and evolution of human diseases.
Abstract: The use of networks to integrate different genetic, proteomic, and metabolic datasets has been proposed as a viable path toward elucidating the origins of specific diseases. Here we introduce a new phenotypic database summarizing correlations obtained from the disease history of more than 30 million patients in a Phenotypic Disease Network (PDN). We present evidence that the structure of the PDN is relevant to the understanding of illness progression by showing that (1) patients develop diseases close in the network to those they already have; (2) the progression of disease along the links of the network is different for patients of different genders and ethnicities; (3) patients diagnosed with diseases which are more highly connected in the PDN tend to die sooner than those affected by less connected diseases; and (4) diseases that tend to be preceded by others in the PDN tend to be more connected than diseases that precede other illnesses, and are associated with higher degrees of mortality. Our findings show that disease progression can be represented and studied using network methods, offering the potential to enhance our understanding of the origin and evolution of human diseases. The dataset introduced here, released concurrently with this publication, represents the largest relational phenotypic resource publicly available to the research community.
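A toy sketch of how such a comorbidity network could be assembled; relative risk is one common comorbidity measure, and the 0/1 data layout, disease labels, and threshold below are illustrative assumptions rather than the paper's exact pipeline:

```python
# Hedged sketch: build a simple phenotypic (comorbidity) network from
# per-patient diagnosis records.  Relative risk is one common comorbidity
# measure; the toy 0/1 data and the threshold are illustrative only.
from itertools import combinations
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_patients, diseases = 5000, ["D1", "D2", "D3", "D4"]
diag = rng.random((n_patients, len(diseases))) < 0.1      # toy 0/1 diagnosis matrix

prevalence = diag.sum(axis=0)
G = nx.Graph()
for (i, a), (j, b) in combinations(enumerate(diseases), 2):
    c_ij = np.sum(diag[:, i] & diag[:, j])                    # co-occurrence count
    rr = c_ij * n_patients / (prevalence[i] * prevalence[j])  # relative risk
    if c_ij > 0 and rr > 1.0:                                 # keep "stronger than chance" links
        G.add_edge(a, b, weight=rr)

print(G.number_of_nodes(), "diseases,", G.number_of_edges(), "comorbidity links")
```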

Journal ArticleDOI
12 May 2009-ACS Nano
TL;DR: Two semiconductor nanocrystals, CdSe and CdTe, are linked to nanostructured TiO2 films using 3-mercaptopropionic acid as a linker molecule to establish the mechanistic aspects of interfacial charge transfer processes.
Abstract: CdSe and CdTe nanocrystals are linked to nanostructured TiO2 films using 3-mercaptopropionic acid as a linker molecule for establishing the mechanistic aspects of interfacial charge transfer processes. Both these quantum dots are energetically capable of sensitizing TiO2 films and generating photocurrents in quantum dot solar cells. These two semiconductor nanocrystals exhibit markedly different external quantum efficiencies (∼70% for CdSe and ∼0.1% for CdTe at 555 nm). Although CdTe with a more favorable conduction band energy (E_CB = −1.0 V vs NHE) is capable of injecting electrons into TiO2 faster than CdSe (E_CB = −0.6 V vs NHE), hole scavenging by a sulfide redox couple remains a major bottleneck. The sulfide ions dissolved in aqueous solutions are capable of scavenging photogenerated holes in photoirradiated CdSe system but not in CdTe. The anodic corrosion and exchange of Te with S dominate the charge transfer at the CdTe interface. Factors that dictate the efficiency and photostability of CdSe and C...

Journal ArticleDOI
TL;DR: Active Share as mentioned in this paper is a measure of active portfolio management, which represents the share of portfolio holdings that differ from the benchmark index holdings, and it has been shown that funds with the highest active share significantly outperform their benchmarks.
Abstract: We introduce a new measure of active portfolio management, Active Share, which represents the share of portfolio holdings that differ from the benchmark index holdings. We compute Active Share for domestic equity mutual funds from 1980 to 2003. We relate Active Share to fund characteristics such as size, expenses, and turnover in the cross-section, and we also examine its evolution over time. Active Share predicts fund performance: funds with the highest Active Share significantly outperform their benchmarks, both before and after expenses, and they exhibit strong performance persistence. Non-index funds with the lowest Active Share underperform their benchmarks.
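The measure itself has a simple closed form, Active Share = ½ Σ |w_fund,i − w_benchmark,i|; a small Python sketch with made-up weights:

```python
# Hedged sketch of the Active Share calculation: half the sum of absolute
# differences between fund and benchmark weights.  Weights are made up.
def active_share(fund: dict, benchmark: dict) -> float:
    tickers = set(fund) | set(benchmark)
    return 0.5 * sum(abs(fund.get(t, 0.0) - benchmark.get(t, 0.0)) for t in tickers)

fund      = {"AAA": 0.40, "BBB": 0.35, "CCC": 0.25}
benchmark = {"AAA": 0.50, "BBB": 0.30, "DDD": 0.20}
print(f"Active Share = {active_share(fund, benchmark):.1%}")   # 0.5*(0.10+0.05+0.25+0.20) = 30.0%
```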

Journal ArticleDOI
22 May 2009-Science
TL;DR: The mobility of mobile phone users is modeled in order to study the fundamental spreading patterns that characterize a mobile virus outbreak and it is found that although Bluetooth viruses can reach all susceptible handsets with time, they spread slowly because of human mobility, offering ample opportunities to deploy antiviral software.
Abstract: We modeled the mobility of mobile phone users in order to study the fundamental spreading patterns that characterize a mobile virus outbreak. We find that although Bluetooth viruses can reach all susceptible handsets with time, they spread slowly because of human mobility, offering ample opportunities to deploy antiviral software. In contrast, viruses using multimedia messaging services could infect all users in hours, but currently a phase transition on the underlying call graph limits them to only a small fraction of the susceptible users. These results explain the lack of a major mobile virus breakout so far and predict that once a mobile operating system's market share reaches the phase transition point, viruses will pose a serious threat to mobile communications.
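A toy sketch of the phase-transition argument: thin a call graph down to the handsets running a given operating system (market share m) and measure the largest connected cluster, which is what an MMS virus can reach. The random-graph stand-in and all numbers are illustrative assumptions, not the paper's data:

```python
# Hedged sketch of the call-graph phase transition idea: an MMS virus can only
# spread within the connected component of handsets sharing its OS.
import networkx as nx
import random

random.seed(0)
n, mean_degree = 20_000, 4.0
call_graph = nx.erdos_renyi_graph(n, mean_degree / n, seed=0)   # crude stand-in for a call graph

for share in (0.05, 0.15, 0.25, 0.50):
    keep = [v for v in call_graph if random.random() < share]   # handsets running the target OS
    sub = call_graph.subgraph(keep)
    giant = max((len(c) for c in nx.connected_components(sub)), default=0)
    print(f"market share {share:.0%}: largest reachable cluster ≈ {giant / n:.3f} of all handsets")
```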

Journal ArticleDOI
TL;DR: In this article, the authors present multiband photometry of 185 type-Ia supernovae (SNe Ia), with over 11,500 observations acquired between 2001 and 2008 at the F. L. Whipple Observatory of Harvard-Smithsonian Center for Astrophysics (CfA).
Abstract: We present multiband photometry of 185 type-Ia supernovae (SNe Ia), with over 11,500 observations. These were acquired between 2001 and 2008 at the F. L. Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics (CfA). This sample contains the largest number of homogeneously observed and reduced nearby SNe Ia (z ≲ 0.08) published to date. It more than doubles the nearby sample, bringing SN Ia cosmology to the point where systematic uncertainties dominate. Our natural system photometry has a precision of 0.02 mag in BVRIr'i' and 0.04 mag in U for points brighter than 17.5 mag. We also estimate a systematic uncertainty of 0.03 mag in our SN Ia standard system BVRIr'i' photometry and 0.07 mag for U. Comparisons of our standard system photometry with published SN Ia light curves and comparison stars, where available for the same SN, reveal agreement at the level of a few hundredths mag in most cases. We find that 1991bg-like SNe Ia are sufficiently distinct from other SNe Ia in their color and light-curve-shape/luminosity relation that they should be treated separately in light-curve/distance fitter training samples. The CfA3 sample will contribute to the development of better light-curve/distance fitters, particularly in the few dozen cases where near-infrared photometry has been obtained and, together, can help disentangle host-galaxy reddening from intrinsic supernova color, reducing the systematic uncertainty in SN Ia distances due to dust.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the relationship among purpose, hope, and life satisfaction among 153 adolescents, 237 emerging adults, and 416 adults (N = 806) and found that having identified a purpose in life was associated with greater life satisfaction at these three stages of life.
Abstract: Using the Revised Youth Purpose Survey (Bundick et al., 2006), the Trait Hope Scale (Snyder et al., 1991), and the Satisfaction with Life Scale (Diener, Emmons, Larsen, & Griffin, 1985), the present study examined the relationship among purpose, hope, and life satisfaction among 153 adolescents, 237 emerging adults, and 416 adults (N = 806). Results of this cross-sectional study revealed that having identified a purpose in life was associated with greater life satisfaction at these three stages of life. However, searching for a purpose was only associated with increased life satisfaction during adolescence and emerging adulthood. Additionally, aspects of hope mediated the relationship between purpose and life satisfaction at all three stages of life. Implications of these results for effectively fostering purpose are discussed.

Journal ArticleDOI
TL;DR: In this paper, the authors present the results of a parametric experimental investigation aimed at optimizing the body force produced by single dielectric barrier discharge plasma actuators used for aerodynamic flow control.
Abstract: This paper presents the results of a parametric experimental investigation aimed at optimizing the body force produced by single dielectric barrier discharge plasma actuators used for aerodynamic flow control. A primary goal of the study is the improvement of actuator authority for flow control applications at higher Reynolds number than previously possible. The study examines the effects of dielectric material and thickness, applied voltage amplitude and frequency, voltage waveform, exposed electrode geometry, covered electrode width, and multiple actuator arrays. The metric used to evaluate the performance of the actuator in each case is the measured actuator-induced thrust which is proportional to the total body force. It is demonstrated that actuators constructed with thick dielectric material of low dielectric constant produce a body force that is an order of magnitude larger than that obtained by the Kapton-based actuators used in many previous plasma flow control studies. These actuators allow operation at much higher applied voltages without the formation of discrete streamers which lead to body force saturation.

Journal ArticleDOI
TL;DR: In this paper, the authors argue that as originally proposed, Allison's method can have serious problems and should not be applied on a routine basis and also show that his model belongs to a larger class of models known as heterogeneous choice or location-scale models.
Abstract: Allison (1999) notes that comparisons of logit and probit coefficients across groups can be invalid and misleading, proposes a procedure by which these problems can be corrected, and argues that "routine use [of this method] seems advisable" and that "it is hard to see how [the method] can be improved." In this article, the author argues that as originally proposed, Allison's method can have serious problems and should not be applied on a routine basis. However, this study also shows that his model belongs to a larger class of models variously known as heterogeneous choice or location-scale models. Several advantages of this broader and more flexible class of models are illustrated. Dependent variables can be ordinal in addition to binary, sources of heterogeneity can be better modeled and controlled for, and insights can be gained into the effects of group characteristics on outcomes that would be missed by other methods.
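For readers unfamiliar with the model class, a binary heterogeneous choice (location-scale) model can be written, in one common notation, as:

```latex
% Heterogeneous choice / location-scale model for a binary outcome.
\[
\Pr(y_i = 1 \mid x_i, z_i) \;=\;
F\!\left(\frac{x_i \beta}{\exp(z_i \gamma)}\right),
\]
% F: logistic or standard normal CDF; x_i\beta models the location (choice)
% part and \exp(z_i\gamma) the scale (residual variation) part.  Letting z
% contain only a group indicator recovers the group-comparison setting
% discussed by Allison (1999).
```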

Journal ArticleDOI
TL;DR: The magnetization of a magnetic random access memory is usually controlled by the injection of an externally polarized spin-current as mentioned in this paper, which can be manipulated with local fields generated by spin-orbit interactions of an unpolarized current.
Abstract: The magnetization of a magnetic random-access memory is usually controlled by the injection of an externally polarized spin-current. A proof-of-principle demonstration shows that this could instead be manipulated with local fields generated by spin–orbit interactions of an unpolarized current.

Journal ArticleDOI
TL;DR: This paper derives the distributional properties of the interference and provides upper and lower bounds for its distribution, and considers the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh.
Abstract: In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson cluster process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its distribution. We consider the probability of successful transmission in an interference-limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes, and the Campbell-Mecke theorem.
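To make the node model concrete, a short sketch of sampling a Thomas-type Poisson cluster process (Poisson parents, a Poisson number of daughters per parent, Gaussian scatter around each parent); parameter values are illustrative:

```python
# Hedged sketch: sample a Thomas-type Poisson cluster process on a square,
# the kind of clustered node model analyzed above.  Parameters are illustrative
# and edge effects are ignored.
import numpy as np

rng = np.random.default_rng(0)
side, lam_parent, mean_daughters, sigma = 100.0, 0.01, 5.0, 2.0

n_parents = rng.poisson(lam_parent * side**2)
parents = rng.random((n_parents, 2)) * side
points = []
for p in parents:
    for _ in range(rng.poisson(mean_daughters)):
        points.append(p + rng.normal(scale=sigma, size=2))   # Gaussian scatter around parent
points = np.array(points)
print(f"{n_parents} clusters, {len(points)} transmitter locations")
```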

Posted Content
TL;DR: The authors analyze the role of specialized high-skilled labor in the growth of the service sector as a share of the total economy and find that the growth has been driven by the consumption of services rather than being driven by low-skill jobs.
Abstract: This paper analyzes the role of specialized high-skilled labor in the growth of the service sector as a share of the total economy. Empirically, we emphasize that the growth has been driven by the consumption of services. Rather than being driven by low-skill jobs, the importance of skill-intensive services has risen, and this has coincided with a period of rising relative wages and quantities of high-skilled labor. We develop a theory where demand shifts toward ever more skill-intensive output as income rises, and because skills are highly specialized this lowers the importance of home production relative to market services. The theory is also consistent with a rising level of skill and skill premium, a rising relative price of services that is linked to this skill premium, and rich product cycles between home and market, all of which are observed in the data.

Journal ArticleDOI
TL;DR: The security of the scheme is based on pseudorandom functions, without reliance on the Random Oracle Model, and it is shown how to handle extensions proposed by Crampton [2003] of the standard hierarchies to “limited depth” and reverse inheritance.
Abstract: Hierarchies arise in the context of access control whenever the user population can be modeled as a set of partially ordered classes (represented as a directed graph). A user with access privileges for a class obtains access to objects stored at that class and all descendant classes in the hierarchy. The problem of key management for such hierarchies then consists of assigning a key to each class in the hierarchy so that keys for descendant classes can be obtained via efficient key derivation. We propose a solution to this problem with the following properties: (1) the space complexity of the public information is the same as that of storing the hierarchy; (2) the private information at a class consists of a single key associated with that class; (3) updates (i.e., revocations and additions) are handled locally in the hierarchy; (4) the scheme is provably secure against collusion; and (5) each node can derive the key of any of its descendants with a number of symmetric-key operations bounded by the length of the path between the nodes. Whereas many previous schemes had some of these properties, ours is the first that satisfies all of them. The security of our scheme is based on pseudorandom functions, without reliance on the Random Oracle Model. Another substantial contribution of this work is that we are able to lower the key derivation time at the expense of modestly increasing the public storage associated with the hierarchy. Insertion of additional, so-called shortcut edges allows the key derivation to be lowered to a small constant number of steps for graphs that are total orders and trees by increasing the total number of edges by a small asymptotic factor such as O(log* n) for an n-node hierarchy. For more general access hierarchies of dimension d, we use a technique that consists of adding dummy nodes and dimension reduction. The key derivation work for such graphs is then linear in d and the increase in the number of edges is by the factor O(log^(d−1) n) compared to the one-dimensional case. Finally, by making simple modifications to our scheme, we show how to handle extensions proposed by Crampton [2003] of the standard hierarchies to "limited depth" and reverse inheritance.
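To illustrate the flavor of pseudorandom-function-based key derivation, here is a deliberately simplified, tree-only sketch that uses HMAC as the PRF. The paper's scheme additionally publishes per-edge values so that general partially ordered hierarchies, local updates, and collusion resistance are supported; none of that is modeled here:

```python
# Hedged, tree-only sketch of PRF-based top-down key derivation using HMAC as
# the pseudorandom function.  The actual scheme also stores public edge
# information to support general DAG hierarchies, local updates, and provable
# collusion resistance; none of that is modeled in this toy example.
import hmac, hashlib

def derive_child_key(parent_key: bytes, child_label: bytes) -> bytes:
    """PRF(parent_key, child_label) -> child key."""
    return hmac.new(parent_key, child_label, hashlib.sha256).digest()

def derive_along_path(root_key: bytes, path: list[bytes]) -> bytes:
    """Walk down a root-to-node path, one PRF call per edge."""
    key = root_key
    for label in path:
        key = derive_child_key(key, label)
    return key

root = b"\x00" * 32                       # placeholder root secret
k_admin_hr = derive_along_path(root, [b"admin", b"hr"])
print(k_admin_hr.hex())
```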

Journal ArticleDOI
TL;DR: This topical review explores some of the history of ionic liquid molecular simulations, and gives examples of the recent use of molecular dynamics and Monte Carlo simulation in understanding the structure of ions, the sorption of small molecules in ionic liquids, the nature of Ionic liquids in the vapor phase and the dynamics of ioni liquids.
Abstract: Ionic liquids are salts that are liquid near ambient conditions. Interest in these unusual compounds has exploded in the last decade, both at the academic and commercial level. Molecular simulations based on classical potentials have played an important role in helping researchers understand how condensed phase properties of these materials are linked to chemical structure and composition. Simulations have also predicted many properties and unexpected phenomena that have subsequently been confirmed experimentally. The beneficial impact molecular simulations have had on this field is due in large part to excellent timing. Just when computing power and simulation methods matured to the point where complex fluids could be studied in great detail, a new class of materials virtually unknown to experimentalists came on the scene and demanded attention. This topical review explores some of the history of ionic liquid molecular simulations, and then gives examples of the recent use of molecular dynamics and Monte Carlo simulation in understanding the structure of ionic liquids, the sorption of small molecules in ionic liquids, the nature of ionic liquids in the vapor phase and the dynamics of ionic liquids. This review concludes with a discussion of some of the outstanding problems facing the ionic liquid modeling community and how condensed phase molecular simulation experts not presently working on ionic liquids might help advance the field.

Book
01 Mar 2009
TL;DR: This book presents analyses of the diagrammatic systems Venn-I and Venn-II and compares the latter with a linguistic representation system, L0, stressing the importance of semantic consistency.
Abstract: Acknowledgements 1. Introduction 2. Preliminaries 3. Venn-I 4. Venn-II 5. Venn-II and L0 6. Diagrammatic versus linguistic representation 7. Conclusion Appendix References Index.

Journal ArticleDOI
TL;DR: In this article, two ways of modelling attribute non-attendance are proposed for estimating non-market effects of agriculture, and their implications are explored using a stated preference survey designed to value landscapes in Ireland.
Abstract: Non-market effects of agriculture are often estimated using discrete choice models from stated preference surveys. In this context we propose two ways of modelling attribute non-attendance. The first involves constraining coefficients to zero in a latent class framework, whereas the second is based on stochastic attribute selection and grounded in Bayesian estimation. Their implications are explored in the context of a stated preference survey designed to value landscapes in Ireland. Taking account of attribute non-attendance with these data improves fit and tends to involve two attributes one of which is likely to be cost, thereby leading to substantive changes in derived welfare estimates.

ReportDOI
TL;DR: In this paper, the extent of under-reporting for ten transfer programs in five major nationally representative surveys is estimated by comparing reported weighted totals for these programs with totals obtained from government agencies.
Abstract: High rates of understatement are found for many government transfer programs and in many datasets. This understatement has major implications for our understanding of economic well-being and the effects of transfer programs. We provide estimates of the extent of under-reporting for ten transfer programs in five major nationally representative surveys by comparing reported weighted totals for these programs with totals obtained from government agencies. We also examine imputation procedures and rates. We find increasing under-reporting and imputation over time and sharp differences across programs and surveys. We explore reasons for under-reporting and how under-reporting biases existing studies and suggest corrections.