
Showing papers from Carleton University published in 2010


Journal ArticleDOI
Koji Nakamura, K. Hagiwara, Ken Ichi Hikasa, Hitoshi Murayama, +180 more (92 institutions)
TL;DR: In this article, a biennial Review summarizes much of particle physics: using data from previous editions, plus 2158 new measurements from 551 papers, the authors list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons.
Abstract: This biennial Review summarizes much of particle physics. Using data from previous editions, plus 2158 new measurements from 551 papers, we list, evaluate, and average measured properties of gauge bosons, leptons, quarks, mesons, and baryons. We also summarize searches for hypothetical particles such as Higgs bosons, heavy neutrinos, and supersymmetric particles. All the particle properties and search limits are listed in Summary Tables. We also give numerous tables, figures, formulae, and reviews of topics such as the Standard Model, particle detectors, probability, and statistics. Among the 108 reviews are many that are new or heavily revised including those on neutrino mass, mixing, and oscillations, QCD, top quark, CKM quark-mixing matrix, V-ud & V-us, V-cb & V-ub, fragmentation functions, particle detectors for accelerator and non-accelerator physics, magnetic monopoles, cosmological parameters, and big bang cosmology.

2,788 citations



Journal ArticleDOI
TL;DR: The Risk-Need-Responsivity (RNR) model has been shown to reduce offender recidivism by up to 35%; it describes who should receive services (moderate and higher risk cases), the appropriate targets for rehabilitation services (criminogenic needs), and powerful influence strategies for reducing criminal behavior (cognitive social learning).
Abstract: For over 30 years, criminal justice policy has been dominated by a “get tough” approach to offenders. Increasing punitive measures have failed to reduce criminal recidivism and instead have led to a rapidly growing correctional system that has strained government budgets. The inability of reliance on official punishment to deter crime is understandable within the context of the psychology of human conduct. However, this knowledge was largely ignored in the quest for harsher punishment. A better option for dealing with crime is to place greater effort on the rehabilitation of offenders. In particular, programs that adhere to the Risk-Need-Responsivity (RNR) model have been shown to reduce offender recidivism by up to 35%. The model describes: a) who should receive services (moderate and higher risk cases), b) the appropriate targets for rehabilitation services (criminogenic needs), and c) the powerful influence strategies for reducing criminal behavior (cognitive social learning). Although the RNR model is well known in the correctional field, it is less well known, but equally relevant, for forensic, clinical, and counseling psychology. The paper summarizes the empirical base of RNR along with implications for research, policy, and practice.

994 citations


Journal ArticleDOI
TL;DR: This paper investigated the career expectations and priorities of members of the "millennial" generation (born in or after 1980) and explored differences among this cohort related to demographic factors (i.e., gender, race, and year of study) and academic performance.
Abstract: This study investigated the career expectations and priorities of members of the “millennial” generation (born in or after 1980) and explored differences among this cohort related to demographic factors (i.e., gender, race, and year of study) and academic performance. Data were obtained from a national survey of millennial undergraduate university students from across Canada (N = 23,413). Data were analyzed using various multivariate techniques to assess the impacts of demographic variables and academic achievement on career expectations and priorities. Millennials placed the greatest importance on individualistic aspects of a job. They had realistic expectations of their first job and salary but were seeking rapid advancement and the development of new skills, while also ensuring a meaningful and satisfying life outside of work. Our results suggest that Millennials’ expectations and values vary by gender, visible minority status, GPA, and year of study, but these variables explain only a small proportion of variance. Changing North American demographics have created a crisis in organizations as they strive to recruit and retain the millennial generation, who purportedly hold values, attitudes, and expectations that are significantly different from those of the generations of workers that preceded them. A better understanding of the Millennials’ career expectations and priorities helps employers to create job offerings and work environments that are more likely to engage and retain millennial workers. This is a large-sample study that provides benchmark results for the millennial generation, which can be compared to results from other generational cohorts, and to millennial cohorts in the future as they progress through their life-cycle. This is one of the few studies that examines demographic heterogeneity within the millennial cohort.

857 citations


Journal ArticleDOI
TL;DR: This review concentrates mainly on new, post-2002 organohalogen contaminant (OHC) effects data in Arctic wildlife and fish, and is largely based on recently available effects data for populations of several top trophic level species, including seabirds and Arctic charr.

739 citations


Journal ArticleDOI
TL;DR: This Focus Review describes the emerging class of near-infrared (NIR) organic compounds containing the conjugated polyene, polymethine, and donor-acceptor chromophores, and the exploration of their NIR-absorbing, NIR-fluorescence, and NIR-photosensitizing properties for potential applications in heat absorbers, solar cells, and NIR light-emitting diodes.
Abstract: This Focus Review describes the emerging class of near-infrared (NIR) organic compounds containing the conjugated polyene, polymethine, and donor-acceptor chromophores, and the exploration of their NIR-absorbing, NIR-fluorescence, and NIR-photosensitizing properties for potential applications in heat absorbers, solar cells, and NIR light-emitting diodes. Examples of NIR organic compounds are reviewed with emphasis on the molecular design, NIR absorption and fluorescence, and particular emerging applications. The donor-acceptor type of NIR chromophore is introduced in particular detail owing to some unique features, including designer-made energy gaps, facile synthesis, good processability, and controllable morphology and properties in the solid state. Future directions in the research and development of NIR organic materials and applications are then offered from a personal perspective.

637 citations


Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, A. A. Abdelalim, +3098 more (192 institutions)
TL;DR: Using the ATLAS detector, the authors observed a centrality-dependent dijet asymmetry in the collisions of lead ions at the Large Hadron Collider: the transverse energies of dijets in opposite hemispheres become systematically more unbalanced with increasing event centrality, leading to a large number of events that contain highly asymmetric dijets.
Abstract: By using the ATLAS detector, observations have been made of a centrality-dependent dijet asymmetry in the collisions of lead ions at the Large Hadron Collider. In a sample of lead-lead events with a per-nucleon center of mass energy of 2.76 TeV, selected with a minimum bias trigger, jets are reconstructed in fine-grained, longitudinally segmented electromagnetic and hadronic calorimeters. The transverse energies of dijets in opposite hemispheres are observed to become systematically more unbalanced with increasing event centrality leading to a large number of events which contain highly asymmetric dijets. This is the first observation of an enhancement of events with such large dijet asymmetries, not observed in proton-proton collisions, which may point to an interpretation in terms of strong jet energy loss in a hot, dense medium.

630 citations


Journal ArticleDOI
TL;DR: A model of the relations among cognitive precursors, early numeracy skill, and mathematical outcomes was tested for 182 children and highlighted the need to understand the fundamental underlying skills that contribute to diverse forms of mathematical competence.
Abstract: A model of the relations among cognitive precursors, early numeracy skill, and mathematical outcomes was tested for 182 children from 4.5 to 7.5 years of age. The model integrates research from neuroimaging, clinical populations, and normal development in children and adults. It includes 3 precursor pathways: quantitative, linguistic, and spatial attention. These pathways (a) contributed independently to early numeracy skills during preschool and kindergarten and (b) related differentially to performance on a variety of mathematical outcomes 2 years later. The success of the model in accounting for performance highlights the need to understand the fundamental underlying skills that contribute to diverse forms of mathematical competence.

620 citations


Journal ArticleDOI
TL;DR: Consideration of religion’s dual function as a social identity and a belief system may facilitate greater understanding of the variability in its importance across individuals and groups.
Abstract: As a social identity anchored in a system of guiding beliefs and symbols, religion ought to serve a uniquely powerful function in shaping psychological and social processes. Religious identification offers a distinctive "sacred" worldview and "eternal" group membership, unmatched by identification with other social groups. Thus, religiosity might be explained, at least partially, by the marked cognitive and emotional value that religious group membership provides. The uniqueness of a positive social group, grounded in a belief system that offers epistemological and ontological certainty, lends religious identity a twofold advantage for the promotion of well-being. However, that uniqueness may have equally negative impacts when religious identity itself is threatened through intergroup conflict. Such consequences are illustrated by an examination of identities ranging from religious fundamentalism to atheism. Consideration of religion's dual function as a social identity and a belief system may facilitate greater understanding of the variability in its importance across individuals and groups.

605 citations


Journal ArticleDOI
TL;DR: Owing to a high surface area to volume ratio, the effectiveness of nanofertilizers may surpass the most innovative polymer-coated conventional fertilizers, which have seen little improvement in the past ten years.
Abstract: To the Editor — Nitrogen, which is a key nutrient source for food, biomass, and fibre production in agriculture, is by far the most important element in fertilizers when judged in terms of the energy required for its synthesis, tonnage used, and monetary value. However, compared with the amounts of nitrogen applied to soil, the nitrogen use efficiency (NUE) of crops is very low. Between 50 and 70% of the nitrogen applied using conventional fertilizers — plant nutrient formulations with dimensions greater than 100 nm — is lost owing to leaching in the form of water-soluble nitrates, emission of gaseous ammonia and nitrogen oxides, and long-term incorporation of mineral nitrogen into soil organic matter by soil microorganisms [1]. Numerous attempts to increase the NUE have so far met with little success, and the time may have come to apply nanotechnology to solve some of these problems. Carbon nanotubes were recently shown to penetrate tomato seeds [2], and zinc oxide nanoparticles were shown to enter the root tissue of ryegrass [3] (Fig. 1). This suggests that new nutrient delivery systems that exploit the nanoscale porous domains on plant surfaces can be developed. The potential use of nanotechnology to improve fertilizer formulations, however, may have been hindered by reduced research funding and the lack of clear regulations and innovation policies. Current patent literature shows that the use of nanotechnology in fertilizer development remains relatively low (about 100 patents and patent applications between 1998 and 2008) compared with pharmaceuticals (more than 6,000 patents and patent applications over the same period) [4]. A nanofertilizer refers to a product that delivers nutrients to crops in one of three ways. The nutrient can be encapsulated inside nanomaterials such as nanotubes or nanoporous materials, coated with a thin protective polymer film, or delivered as particles or emulsions of nanoscale dimensions.
Owing to a high surface-area-to-volume ratio, the effectiveness of nanofertilizers may surpass the most innovative polymer-coated conventional fertilizers, which have seen little improvement in the past ten years. Ideally, nanotechnology could provide devices and mechanisms to synchronize the release of nitrogen (from fertilizers) with its uptake by crops; the nanofertilizers should release nutrients on demand while preventing them from prematurely converting into chemical or gaseous forms that cannot be absorbed by plants. This can be achieved by preventing nutrients from interacting with soil, water, and microorganisms, and releasing nutrients only when they can be directly internalized by the plant. Examples of these nanostrategies are beginning to emerge. Zinc–aluminium layered double-hydroxide nanocomposites have been used for the controlled release of chemical compounds that regulate plant growth [5]. Improved yields have been claimed for fertilizers that are incorporated into cochleate nanotubes (rolled-up lipid bilayer sheets) [6]. The release of nitrogen by urea hydrolysis has been controlled through the insertion of urease enzymes into nanoporous silica [7]. Although these approaches are promising, they lack mechanisms that can recognize and respond to the needs of the plant and to changes in nitrogen levels in the soil. The development of functional nanoscale films [8] and devices has the potential to produce significant gains in the NUE and crop production. In addition to increasing the NUE, nanotechnology might be able to improve the performance of fertilizers in other ways. For example, owing to its photocatalytic properties, nanosize titanium dioxide has been incorporated into fertilizers as a bactericidal additive. Moreover, titanium dioxide may also lead to improved crop yield through the photoreduction of nitrogen gas [9].
Furthermore, nanosilica particles absorbed by roots have been shown to form films at the cell walls, which can enhance the plant’s resistance to stress and lead to improved yields [10]. Clearly, there is an opportunity for nanotechnology to have a profound impact on energy, the economy, and the environment by improving fertilizer products. New prospects for integrating nanotechnologies into fertilizers should be explored, cognizant of any potential risk to the environment or to human health. With targeted efforts by governments and academics in developing such enabled agriproducts, we believe that nanotechnology will be transformative in this field.

592 citations


Proceedings ArticleDOI
29 Nov 2010
TL;DR: This paper formulates the problem of radio resource allocation to D2D communications as a mixed integer nonlinear program (MINLP) and proposes an alternative greedy heuristic algorithm that can lessen interference to the primary cellular network by utilizing channel gain information.
Abstract: Device-to-device (D2D) communication underlaying a cellular network empowers user-driven rich multimedia applications and has also proven to be network-efficient by offloading eNodeB traffic. However, D2D transmitters may cause a significant amount of interference to the primary cellular network when radio resources are shared between them. During the downlink (DL) phase, a primary cell UE (user equipment) may suffer from interference by the D2D transmitter. On the other hand, the immobile eNodeB is the victim of interference by the D2D transmitter during the uplink (UL) phase when radio resources are allocated randomly. Such interference can be avoided, or at least diminished, if radio resources are allocated intelligently with coordination from the eNodeB. In this paper, we formulate the problem of radio resource allocation to the D2D communications as a mixed integer nonlinear program (MINLP). Such an optimization problem is notoriously hard to solve within the fast scheduling period of the Long Term Evolution (LTE) network. We therefore propose an alternative greedy heuristic algorithm that can lessen interference to the primary cellular network by utilizing channel gain information. We also perform extensive simulations to demonstrate the efficacy of the proposed algorithm.
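The abstract does not reproduce the heuristic itself; as a rough illustration of the kind of greedy, channel-gain-based allocation it describes, the following is a minimal sketch. All names, the gain table, and the "least interfering resource block first" rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of greedy resource-block (RB) assignment for D2D pairs:
# each pair is given the free RB on which its transmitter has the smallest
# channel gain toward the victim receiver (the cell UE in DL, the eNodeB in
# UL), i.e. the RB whose reuse causes the least interference.

def greedy_allocate(d2d_pairs, rbs, gain):
    """gain[(pair, rb)] -> channel gain from the D2D transmitter to the
    victim receiver on that RB (higher gain means more interference)."""
    allocation = {}
    used = set()
    for pair in d2d_pairs:
        candidates = [rb for rb in rbs if rb not in used]
        if not candidates:
            break  # no RB left to share with this pair
        best = min(candidates, key=lambda rb: gain[(pair, rb)])
        allocation[pair] = best
        used.add(best)
    return allocation

# Toy example with two D2D pairs and two RBs.
gain = {("d1", "rb1"): 0.9, ("d1", "rb2"): 0.1,
        ("d2", "rb1"): 0.2, ("d2", "rb2"): 0.8}
print(greedy_allocate(["d1", "d2"], ["rb1", "rb2"], gain))
# d1 takes rb2 (gain 0.1), then d2 takes the remaining rb1 (gain 0.2)
```

A greedy pass like this runs in polynomial time, which is why such heuristics fit inside an LTE scheduling period where solving the MINLP exactly does not.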

Proceedings ArticleDOI
04 Oct 2010
TL;DR: This work presents a methodology for the empirical analysis of permission-based security models which makes novel use of the Self-Organizing Map (SOM) algorithm of Kohonen (2001) and offers some discussion identifying potential points of improvement for the Android permission model.
Abstract: Permission-based security models provide controlled access to various system resources. The expressiveness of the permission set plays an important role in providing the right level of granularity in access control. In this work, we present a methodology for the empirical analysis of permission-based security models which makes novel use of the Self-Organizing Map (SOM) algorithm of Kohonen (2001). While the proposed methodology may be applicable to a wide range of architectures, we analyze 1,100 Android applications as a case study. Our methodology is of independent interest for visualization of permission-based systems beyond our present Android-specific empirical analysis. We offer some discussion identifying potential points of improvement for the Android permission model, attempting to increase expressiveness where needed without increasing the total number of permissions or overall complexity.
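As a hedged illustration of the technique the abstract names, here is a minimal one-dimensional SOM trained on binary permission vectors. The toy data, map size, and training schedule are assumptions for illustration only; the paper's actual feature extraction and map configuration are not reproduced here.

```python
import math
import random

def train_som(data, n_nodes=4, epochs=200, lr0=0.5, seed=0):
    """Train a 1-D Self-Organizing Map on binary permission vectors.
    Each node holds a weight vector; at every step the best-matching
    unit (BMU) and its neighbours are pulled toward a random sample."""
    rng = random.Random(seed)
    dim = len(data[0])
    nodes = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        radius = max(1.0, n_nodes / 2 * (1 - t / epochs))  # shrinking neighbourhood
        x = data[rng.randrange(len(data))]
        bmu = min(range(n_nodes),
                  key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))
        for i in range(n_nodes):
            h = math.exp(-((i - bmu) ** 2) / (2 * radius ** 2))
            nodes[i] = [w + lr * h * (v - w) for w, v in zip(nodes[i], x)]
    return nodes

def bmu_index(nodes, x):
    """Index of the node closest to sample x."""
    return min(range(len(nodes)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(nodes[i], x)))

# Toy corpus: rows are apps, columns are permissions (1 = requested).
apps = [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
som = train_som(apps)
# Apps with identical permission profiles map to the same node.
print(bmu_index(som, apps[0]) == bmu_index(som, apps[1]))  # True
```

In the paper's setting, the trained map is used for visualization: apps that request similar permission sets land on nearby nodes, exposing clusters and rarely used permissions.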

Journal ArticleDOI
TL;DR: Wang et al. investigated the effects of Chinese companies' institutional environment on the development of trust and information integration between buyers and suppliers, finding that the perceived importance of guanxi has a direct, positive impact on information sharing, and that government support has a direct, positive effect on both information sharing and collaborative planning.

Journal ArticleDOI
01 Jan 2010
TL;DR: This tutorial paper aims at providing an overview of nonlinear equalization methods as a key ingredient in receivers of SCM for wideband transmission, and reviews both hybrid (with filters implemented both in time and frequency domain) and all-frequency-domain iterative structures.
Abstract: In recent years single carrier modulation (SCM) has again become an interesting and complementary alternative to multicarrier modulations such as orthogonal frequency division multiplexing (OFDM). This has been largely due to the use of nonlinear equalizer structures implemented in part in the frequency domain by means of fast Fourier transforms, bringing the complexity close to that of OFDM. Here a nonlinear equalizer is formed with a linear filter to remove part of intersymbol interference, followed by a canceler of remaining interference by using previous detected data. Moreover, the capacity of SCM is similar to that of OFDM in highly dispersive channels only if a nonlinear equalizer is adopted at the receiver. Indeed, the study of efficient nonlinear frequency domain equalization techniques has further pushed the adoption of SCM in various standards. This tutorial paper aims at providing an overview of nonlinear equalization methods as a key ingredient in receivers of SCM for wideband transmission. We review both hybrid (with filters implemented both in time and frequency domain) and all-frequency-domain iterative structures. Application of nonlinear frequency domain equalizers to a multiple input multiple output scenario is also investigated, with a comparison of two architectures for interference reduction. We also present methods for channel estimation and alternatives for pilot insertion. The impact on SCM transmission of impairments such as phase noise, frequency offset and saturation due to high power amplifiers is also assessed. The comparison among the considered frequency domain equalization techniques is based both on complexity and performance, in terms of bit error rate or throughput.
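The equalizers the paper surveys combine a frequency-domain linear filter with iterative cancellation of residual interference; as a hedged sketch of just the linear frequency-domain core, the following implements a one-tap zero-forcing equalizer (the naive DFT, block length, and two-tap channel are illustrative assumptions, and a real receiver would use an FFT and typically MMSE rather than zero-forcing coefficients).

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) DFT; stands in for the FFT used in practice."""
    n = len(x)
    s = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(s * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def fde_zf(received, channel):
    """One-tap zero-forcing frequency-domain equalizer: transform the
    received block, divide bin-by-bin by the channel frequency response,
    transform back. Assumes a cyclic prefix makes the channel circular."""
    h = list(channel) + [0.0] * (len(received) - len(channel))
    H = dft(h)
    R = dft(received)
    return dft([r / hk for r, hk in zip(R, H)], inverse=True)

# Symbols sent through a two-tap circular channel h = [1, 0.5].
symbols = [1.0, -1.0, 1.0, 1.0]
received = [1.5, -0.5, 0.5, 1.5]  # circular convolution of symbols with h
equalized = fde_zf(received, [1.0, 0.5])
print([round(s.real, 6) for s in equalized])  # [1.0, -1.0, 1.0, 1.0]
```

The nonlinear structures in the paper follow this filtering step with a canceler that subtracts interference reconstructed from previously detected data, which is what closes the capacity gap to OFDM on highly dispersive channels.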

Journal ArticleDOI
TL;DR: Impression management bias represents a significant threat to the validity of self-reported alcohol use and harms and may lead to misspecification of models and under-estimates of harmful or hazardous use.

Journal ArticleDOI
TL;DR: A snapshot of the thermal state of permafrost in northern North America during the International Polar Year (IPY) was developed using ground temperature data collected from 350 boreholes.
Abstract: A snapshot of the thermal state of permafrost in northern North America during the International Polar Year (IPY) was developed using ground temperature data collected from 350 boreholes. More than half these were established during IPY to enhance the network in sparsely monitored regions. The measurement sites span a diverse range of ecoclimatic and geological conditions across the continent and are at various elevations within the Cordillera. The ground temperatures within the discontinuous permafrost zone are generally above −3°C, and range down to −15°C in the continuous zone. Ground temperature envelopes vary according to substrate, with shallow depths of zero annual amplitude for peat and mineral soils, and much greater depths for bedrock. New monitoring sites in the mountains of southern and central Yukon suggest that permafrost may be limited in extent. In concert with regional air temperatures, permafrost has generally been warming across North America for the past several decades, as indicated by measurements from the western Arctic since the 1970s and from parts of eastern Canada since the early 1990s. The rates of ground warming have been variable, but are generally greater north of the treeline. Latent heat effects in the southern discontinuous zone dominate the permafrost thermal regime close to 0°C and allow permafrost to persist under a warming climate. Consequently, the spatial diversity of permafrost thermal conditions is decreasing over time. Copyright © 2010 Crown in the right of Canada and John Wiley & Sons, Ltd.

Journal ArticleDOI
TL;DR: Access to the P. ultimum genome has revealed not only core pathogenic mechanisms within the oomycetes but also lineage-specific genes associated with the alternative virulence and lifestyles found within the pythiaceous lineages compared to the Peronosporaceae.
Abstract: Background Pythium ultimum is a ubiquitous oomycete plant pathogen responsible for a variety of diseases on a broad range of crop and ornamental species.

Journal ArticleDOI
TL;DR: This paper presents a detailed tutorial of the core of the DSRC standards suite, known as Wireless Access in Vehicular Environments (WAVE), and describes some of the lessons learned from particular design approaches.
Abstract: The Dedicated Short-Range Communications (DSRC) standards suite is based on multiple cooperating standards mainly developed by the IEEE. In particular, we focus this paper on the core design aspects of DSRC, which is called Wireless Access in Vehicular Environments (WAVE). WAVE is specified in IEEE 1609.1/.2/.3/.4. The DSRC and WAVE standards have been the center of major attention in both the research and industrial communities. In 2008, the WAVE standard was the third best-selling standard in the history of the IEEE. This attention reflects the potential of WAVE to facilitate many vehicular safety applications. In this paper we present a fairly detailed tutorial of the WAVE standards. We extend the paper by describing some of the lessons learned from particular design approaches. We direct the reader to landmark research papers on relevant topics. We alert the reader to major open research issues that might lead to future contributions to the WAVE design.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed scheme outperforms the reference schemes, in which either coordination is not employed or employed in a static manner, in terms of cell edge throughput with a minimal impact on the network throughput and with some increase in complexity.
Abstract: Interference management has been a key concept for designing future high data-rate wireless systems that are required to employ dense reuse of spectrum. Static or semi-static interference coordination based schemes provide enhanced cell-edge performance but with severe penalty to the overall cell throughput. Furthermore, static resource planning makes these schemes unsuitable for applications in which frequency planning is difficult, such as femtocell networks. In this paper, we present a novel dynamic interference avoidance scheme that makes use of inter-cell coordination in order to prevent excessive inter-cell interference, especially for cell or sector edge users that are most affected by inter-cell interference, with minimal or no impact on the network throughput. The proposed scheme comprises a two-level algorithm - one at the base station level and the other at a central controller to which a group of neighboring base stations are connected. Simulation results show that the proposed scheme outperforms the reference schemes, in which coordination is either not employed (reuse of 1) or employed in a static manner (reuse of 3 and fractional frequency reuse), in terms of cell edge throughput with a minimal impact on the network throughput and with some increase in complexity.

Journal ArticleDOI
TL;DR: The best predictions of decisions from description were obtained with a stochastic variant of prospect theory assuming that the sensitivity to the weighted values decreases with the distance between the cumulative payoff functions.
Abstract: Erev, Ert, and Roth organized three choice prediction competitions focused on three related choice tasks: One shot decisions from description (decisions under risk), one shot decisions from experience, and repeated decisions from experience. Each competition was based on two experimental datasets: An estimation dataset, and a competition dataset. The studies that generated the two datasets used the same methods and subject pool, and examined decision problems randomly selected from the same distribution. After collecting the experimental data to be used for estimation, the organizers posted them on the Web, together with their fit with several baseline models, and challenged other researchers to compete to predict the results of the second (competition) set of experimental sessions. Fourteen teams responded to the challenge: The last seven authors of this paper are members of the winning teams. The results highlight the robustness of the difference between decisions from description and decisions from experience. The best predictions of decisions from descriptions were obtained with a stochastic variant of prospect theory assuming that the sensitivity to the weighted values decreases with the distance between the cumulative payoff functions. The best predictions of decisions from experience were obtained with models that assume reliance on small samples. Merits and limitations of the competition method are discussed.
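For readers unfamiliar with the prospect theory machinery the winning model builds on, here is a minimal sketch of the standard Tversky-Kahneman functional forms for valuing a simple two-branch gamble. The parameter values (alpha = 0.88, lambda = 2.25, gamma = 0.61) are the classic 1992 estimates, not the parameters of the competition's winning model, and the stochastic response layer described in the abstract is omitted.

```python
def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def value(x, alpha=0.88, lam=2.25):
    """S-shaped value function: concave for gains, steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def cpt_value(outcomes):
    """Prospect-theory value of [(outcome, probability), ...] for simple
    prospects with at most one gain branch and one loss branch (for which
    separable and cumulative weighting coincide)."""
    return sum(weight(p) * value(x) for x, p in outcomes)

# A 50/50 gamble between +100 and -100 is valued negatively: loss aversion.
print(cpt_value([(100, 0.5), (-100, 0.5)]) < 0)  # True
```

A deterministic valuation like this predicts a choice; the stochastic variant mentioned in the abstract instead maps the valuation difference between prospects into a choice probability.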

Journal ArticleDOI
TL;DR: A fully distributed and scalable cooperative spectrum-sensing scheme based on recent advances in consensus algorithms that not only has proven sensitivity in detecting the primary user's presence but also has robustness in choosing a desirable decision threshold.
Abstract: In cognitive radio (CR) networks, secondary users can cooperatively sense the spectrum to detect the presence of primary users. In this paper, we propose a fully distributed and scalable cooperative spectrum-sensing scheme based on recent advances in consensus algorithms. In the proposed scheme, the secondary users can maintain coordination based on only local information exchange without a centralized common receiver. Unlike most of the existing decision rules, such as the or-rule or the 1-out-of-N rule, we use the consensus of secondary users to make the final decision. Simulation results show that the proposed consensus scheme can have significant lower missing detection probabilities and false alarm probabilities in CR networks. It is also demonstrated that the proposed scheme not only has proven sensitivity in detecting the primary user's presence but also has robustness in choosing a desirable decision threshold.
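The consensus idea in the abstract can be sketched very compactly. Below is a minimal average-consensus iteration on a toy topology; the ring graph, step size, measurement values, and threshold are illustrative assumptions rather than the paper's simulation setup.

```python
def consensus(measurements, neighbors, epsilon=0.2, iters=100):
    """Distributed average consensus: each secondary user repeatedly nudges
    its local energy estimate toward its neighbours' values,
    x_i <- x_i + eps * sum_j (x_j - x_i). On a connected graph with a small
    enough eps, every state converges to the network-wide average."""
    x = list(measurements)
    for _ in range(iters):
        x = [xi + epsilon * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# Ring of four secondary users with noisy energy measurements.
meas = [1.0, 3.0, 2.0, 6.0]
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = consensus(meas, nbrs)

# Every node ends up holding (approximately) the average, 3.0, and can then
# compare it to a common detection threshold locally -- no fusion center.
threshold = 2.5
print(round(states[0], 3), states[0] > threshold)  # 3.0 True
```

Because each node only exchanges values with direct neighbours, the final decision is reached without the centralized common receiver that or-rule style fusion requires.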

Journal ArticleDOI
TL;DR: An overview of the current state of knowledge of the critical social determinants of child development and the complex ways in which these can influence health trajectories is offered.
Abstract: Aim: This paper offers an overview of the current state of knowledge of the critical social determinants of child development and the complex ways in which these can influence health trajectories. Methods: We conducted an overview of research by medical and social scientists that attempts to uncover the conditions under which children reach optimal health and development. Results: The first years of life represent a critical period during which trajectories of health vulnerability are determined by the complex interplay between biological, genetic, and environmental conditions. Conclusions: There are fundamental principles of optimal child development that apply to all human beings, regardless of language and culture.

Journal ArticleDOI
TL;DR: The analysis indicates that the partially C-SG algorithm can give more accurate parameter estimates than the standard stochastic gradient (SG) algorithm.
Abstract: This technical note addresses identification problems of non-uniformly sampled systems. For the input-output representation of non-uniform discrete-time systems, a partially coupled stochastic gradient (C-SG) algorithm is proposed to estimate the model parameters with high computational efficiency compared with the standard stochastic gradient (SG) algorithm. The analysis indicates that the partially C-SG algorithm can give more accurate parameter estimates than the SG algorithm. The parameter estimates obtained using the partially C-SG algorithm converge to their true values as the data length approaches infinity.
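To make the baseline concrete, here is a sketch of the standard stochastic gradient (SG) identification recursion the note compares against: theta is updated along the regressor direction with a gain 1/r that shrinks as data accumulates. The simulated system, its coefficients, and the noise-free setup are illustrative assumptions; the partially coupled C-SG variant itself is not reproduced.

```python
import random

def sg_identify(data):
    """Standard SG identification for y(t) = phi(t)' theta:
    theta <- theta + (phi / r) * (y - phi' theta),  r <- r + |phi|^2."""
    n = len(data[0][0])
    theta = [0.0] * n
    r = 1.0
    for phi, y in data:
        err = y - sum(p * t for p, t in zip(phi, theta))
        r += sum(p * p for p in phi)
        theta = [t + p / r * err for t, p in zip(theta, phi)]
    return theta

# Simulated noise-free system y = 2*u1 - 0.5*u2 with random inputs.
rng = random.Random(1)
true_theta = [2.0, -0.5]
data = []
for _ in range(5000):
    phi = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    y = sum(p * t for p, t in zip(phi, true_theta))
    data.append((phi, y))

theta = sg_identify(data)
print([round(t, 2) for t in theta])  # estimates approach [2.0, -0.5]
```

The slow, diminishing-gain convergence visible here is exactly the behavior the note improves on: the partially coupled variant shares information between sub-algorithms to obtain more accurate estimates at comparable cost.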

Journal ArticleDOI
TL;DR: There is increasing evidence that many carbonatites are linked both spatially and temporally with large igneous provinces (LIPs), i.e. high volume, short duration, intraplate-type, magmatic events consisting mainly of flood basalts and their plumbing systems (of dykes, sills and layered intrusions).
Abstract: There is increasing evidence that many carbonatites are linked both spatially and temporally with large igneous provinces (LIPs), i.e. high volume, short duration, intraplate-type, magmatic events consisting mainly of flood basalts and their plumbing systems (of dykes, sills and layered intrusions). Examples of LIP-carbonatite associations include: i. the 66 Ma Deccan flood basalt province associated with the Amba Dongar, Sarnu-Dandali (Barmer), and Mundwara carbonatites and associated alkali rocks, ii. the 130 Ma Parana-Etendeka (e.g. Jacupiranga, Messum); iii. the 250 Ma Siberian LIP that includes a major alkaline province, Maimecha-Kotui with numerous carbonatites, iv. the ca. 370 Ma Kola Alkaline Province coeval with basaltic magmatism widespread in parts of the East European craton, and v. the 615–555 Ma CIMP (Central Iapetus Magmatic Province) of eastern Laurentia and western Baltica. In the Superior craton, Canada, a number of carbonatites are associated with the 1114–1085 Ma Keweenawan LIP and some are coeval with the pan-Superior 1880 Ma mafic-ultramafic magmatism. In addition, the Phalaborwa and Shiel carbonatites are associated with the 2055 Ma Bushveld event of the Kaapvaal craton. The frequency of this LIP-carbonatite association suggests that LIPs and carbonatites might be considered as different evolutionary ‘pathways’ in a single magmatic process/system. The isotopic mantle components FOZO, HIMU, EM1 but not DMM, along with primitive noble gas signatures in some carbonatites, suggest a sub-lithospheric mantle source for carbonatites, consistent with a plume/asthenospheric upwelling origin proposed for many LIPs.

Journal Article
TL;DR: In this article, a case study explores the relationship between cooperative learning and academic performance in higher education, specifically in the field of communication, and finds that involvement in cooperative learning is a strong predictor of a student's academic performance.
Abstract: Cooperative learning has increasingly become a popular form of active pedagogy employed in academic institutions. This case study explores the relationship between cooperative learning and academic performance in higher education, specifically in the field of communication. Findings from a questionnaire administered to undergraduate students in a communication research course indicate that involvement in cooperative learning is a strong predictor of a student's academic performance. A significant positive relationship was found between the degree to which grades are important to a student and his or her active participation in cooperative learning. Further, the importance of grades and sense of achievement are strong predictors of performance on readiness assessment tests.

Posted Content
TL;DR: The main contribution of this paper is to review and integrate the collection of these concepts, formalisms, and related results found in the literature into a unified coherent framework, called TVG (for time-varying graphs).
Abstract: The past few years have seen intensive research efforts carried out in some apparently unrelated areas of dynamic systems -- delay-tolerant networks, opportunistic-mobility networks, social networks -- obtaining closely related insights. Indeed, the concepts discovered in these investigations can be viewed as parts of the same conceptual universe; and the formal models proposed so far to express some specific concepts are components of a larger formal description of this universe. The main contribution of this paper is to integrate the vast collection of concepts, formalisms, and results found in the literature into a unified framework, which we call TVG (for time-varying graphs). Using this framework, it is possible to express directly in the same formalism not only the concepts common to all those different areas, but also those specific to each. Based on this definitional work, employing both existing results and original observations, we present a hierarchical classification of TVGs; each class corresponds to a significant property examined in the distributed computing literature. We then examine how TVGs can be used to study the evolution of network properties, and propose different techniques, depending on whether the indicators for these properties are a-temporal (as in the majority of existing studies) or temporal. Finally, we briefly discuss the introduction of randomness in TVGs.
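As a minimal sketch of the kind of object the TVG framework formalizes (the class name, the unit traversal latency, and the interval representation are illustrative assumptions, not the paper's notation), a directed graph whose edges carry presence intervals supports journeys, i.e. time-respecting paths that may wait for an edge to appear:

```python
from heapq import heappush, heappop

class TVG:
    """Minimal time-varying graph: each directed edge carries a presence
    interval [start, end] during which it can be traversed."""

    def __init__(self):
        self.edges = {}  # node -> list of (neighbor, start, end)

    def add_edge(self, u, v, start, end):
        self.edges.setdefault(u, []).append((v, start, end))

    def earliest_arrival(self, src, dst, t0=0):
        """Earliest-arrival journey with unit latency per hop,
        Dijkstra-style; returns None if no journey exists."""
        best = {src: t0}
        heap = [(t0, src)]
        while heap:
            t, u = heappop(heap)
            if u == dst:
                return t
            if t > best.get(u, float("inf")):
                continue  # stale queue entry
            for v, start, end in self.edges.get(u, []):
                dep = max(t, start)   # wait until the edge appears
                if dep <= end:        # edge still present at departure
                    arr = dep + 1
                    if arr < best.get(v, float("inf")):
                        best[v] = arr
                        heappush(heap, (arr, v))
        return None

g = TVG()
g.add_edge("a", "b", 0, 5)    # a->b available during [0, 5]
g.add_edge("b", "c", 10, 12)  # b->c only appears later
```

A journey a->c exists here (arrive at b at t=1, wait, depart at t=10) even though no single static snapshot of the graph contains a path from a to c; distinctions of this kind are what the temporal indicators and the hierarchical classification in the paper capture.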

Journal ArticleDOI
TL;DR: In this article, it is shown that a simple yet crucial modification to core geometry can solve the two-fluid design's central problem and enable its many advantages. A subject of this presentation, and a major potential change from the traditional single-fluid MSBR design, is a return to the two-fluid mode of operation that ORNL proposed for the majority of its MSR program.

Journal ArticleDOI
B. Aharmim1, L. C. Stonehill2, L. C. Stonehill3, J. R. Leslie4  +153 moreInstitutions (30)
TL;DR: In this paper, a joint analysis of Phase I and Phase II data from the Sudbury Neutrino Observatory is reported. The total flux of active-flavor neutrinos from 8B decay in the Sun, measured using the neutral current (NC) reaction with no constraint on the 8B neutrino energy spectrum, is found to be FNC = 5.140 +0.160/-0.158 (stat) +0.132/-0.117 (syst) x 10^6 cm^-2 s^-1, at an effective electron kinetic energy threshold of Teff = 3.5 MeV, the lowest analysis threshold yet achieved with water Cherenkov detector data.
Abstract: Results are reported from a joint analysis of Phase I and Phase II data from the Sudbury Neutrino Observatory. The effective electron kinetic energy threshold used is Teff = 3.5 MeV, the lowest analysis threshold yet achieved with water Cherenkov detector data. In units of 10^6 cm^-2 s^-1, the total flux of active-flavor neutrinos from 8B decay in the Sun measured using the neutral current (NC) reaction of neutrinos on deuterons, with no constraint on the 8B neutrino energy spectrum, is found to be FNC = 5.140 +0.160/-0.158 (stat) +0.132/-0.117 (syst). These uncertainties are more than a factor of 2 smaller than previously published results. Also presented are the spectra of recoil electrons from the charged current reaction of neutrinos on deuterons and the elastic scattering of electrons. A fit to the Sudbury Neutrino Observatory data in which the free parameters directly describe the total 8B neutrino flux and the energy-dependent nu_e survival probability provides a measure of the total 8B neutrino flux F8B = 5.046 +0.159/-0.152 (stat) +0.107/-0.123 (syst). Combining these new results with results of all other solar experiments and the KamLAND reactor experiment yields best-fit values of the mixing parameters of theta_12 = 34.06 +1.16/-0.84 degrees and Delta m^2_21 = 7.59 +0.20/-0.21 x 10^-5 eV^2. The global value of F8B is extracted to a precision of +2.38/-2.95%. In a three-flavor analysis the best-fit value of sin^2 theta_13 is 2.00 +2.09/-1.63 x 10^-2. This implies an upper bound of sin^2 theta_13 < 0.057 (95% C.L.).

Journal ArticleDOI
TL;DR: Electrical impedance tomography image reconstruction algorithms with regularization based on the total variation (TV) functional are suitable for in vivo imaging of physiological data and show improved ability to reconstruct sharp contrasts compared to traditional quadratic regularization.
Abstract: We show that electrical impedance tomography (EIT) image reconstruction algorithms with regularization based on the total variation (TV) functional are suitable for in vivo imaging of physiological data. This reconstruction approach helps to preserve discontinuities in reconstructed profiles, such as step changes in electrical properties at interorgan boundaries, which are typically smoothed by traditional reconstruction algorithms. The use of the TV functional for regularization leads to the minimization of a nondifferentiable objective function in the inverse formulation. This cannot be efficiently solved with traditional optimization techniques such as the Newton method. We explore two implementation methods for regularization with the TV functional: the lagged diffusivity method and the primal-dual interior point method (PD-IPM). First we clarify the implementation details of these algorithms for EIT reconstruction. Next, we analyze the performance of these algorithms on noisy simulated data. Finally, we show reconstructed EIT images of in vivo data for ventilation and gastric emptying studies. In comparison to traditional quadratic regularization, TV regularization shows improved ability to reconstruct sharp contrasts.

Journal ArticleDOI
15 Jan 2010-Science
TL;DR: Measurements of a controlled effect of predation risk along a 3350-kilometer north-south gradient across Arctic Canada provide evidence that the risk of nest predation decreases with latitude, and that birds migrating farther north may acquire reproductive benefits in the form of reduced predation risk.
Abstract: Quantifying the costs and benefits of migration distance is critical to understanding the evolution of long-distance migration. In migratory birds, life history theory predicts that the potential survival costs of migrating longer distances should be balanced by benefits to lifetime reproductive success, yet quantification of these reproductive benefits in a controlled manner along a large geographical gradient is challenging. We measured a controlled effect of predation risk along a 3350-kilometer south-north gradient in the Arctic and found that nest predation risk declined more than twofold along the latitudinal gradient. These results provide evidence that birds migrating farther north may acquire reproductive benefits in the form of lower nest predation risk.