Institution
University of New Brunswick
Education • Fredericton, New Brunswick, Canada
About: University of New Brunswick is an education organization based in Fredericton, New Brunswick, Canada. It is known for research contributions in the topics: Population & Adsorption. The organization has 10498 authors who have published 20654 publications receiving 474448 citations.
Topics: Population, Adsorption, Poison control, Excited state, Ground state
Papers published on a yearly basis
Papers
TL;DR: In this article, the authors used meta-analytic techniques to determine which predictor domains and actuarial assessment instruments were the best predictors of adult offender recidivism, and the LSI-R was identified as the most useful actuarial measure.
Abstract: Meta-analytic techniques were used to determine which predictor domains and actuarial assessment instruments were the best predictors of adult offender recidivism. One hundred and thirty-one studies produced 1,141 correlations with recidivism. The strongest predictor domains were criminogenic needs, criminal history/history of antisocial behavior, social achievement, age/gender/race, and family factors. Less robust predictors included intellectual functioning, personal distress factors, and socioeconomic status in the family of origin. Dynamic predictor domains performed at least as well as the static domains. The LSI-R was identified as the most useful actuarial measure. Recommendations for developing sound assessment practices in corrections are provided.
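As one illustration of the kind of aggregation involved in producing 1,141 correlations with recidivism, correlations from multiple studies are commonly pooled via Fisher's r-to-z transform with sample-size weights. This is a standard meta-analytic step, not necessarily the authors' exact procedure, and the correlations and sample sizes below are invented for illustration:

```python
# Pool study-level correlations via Fisher's r-to-z transform,
# weighting each study by n - 3 (the inverse variance of z).
# Sample data here is hypothetical, not from the meta-analysis.

import math

def pooled_correlation(correlations, sample_sizes):
    """Weighted mean correlation via Fisher's r-to-z transform."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in correlations]
    weights = [n - 3 for n in sample_sizes]  # inverse-variance weights for z
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    # Back-transform the pooled z to a correlation.
    return (math.exp(2 * z_bar) - 1) / (math.exp(2 * z_bar) + 1)

# Hypothetical correlations between one risk measure and recidivism:
rs = [0.35, 0.28, 0.41]
ns = [210, 150, 480]
print(round(pooled_correlation(rs, ns), 3))
```

Larger studies dominate the pooled estimate, which is why the weighting step matters when study sizes vary widely.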
1,773 citations
TL;DR: The state of the art in the science of smart cities is defined, and seven project areas are proposed: Integrated Databases for the Smart City; Sensing, Networking and the Impact of New Social Media; Modelling Network Performance, Mobility and Travel Behaviour; Modelling Urban Land Use, Transport and Economic Interactions; Modelling Urban Transactional Activities in Labour and Housing Markets; Decision Support as Urban Intelligence; and Participatory Governance and Planning Structures for the Smart City.
Abstract: Here we sketch the rudiments of what constitutes a smart city which we define as a city in which ICT is merged with traditional infrastructures, coordinated and integrated using new digital technologies. We first sketch our vision defining seven goals which concern: developing a new understanding of urban problems; effective and feasible ways to coordinate urban technologies; models and methods for using urban data across spatial and temporal scales; developing new technologies for communication and dissemination; developing new forms of urban governance and organisation; defining critical problems relating to cities, transport, and energy; and identifying risk, uncertainty, and hazards in the smart city. To this, we add six research challenges: to relate the infrastructure of smart cities to their operational functioning and planning through management, control and optimisation; to explore the notion of the city as a laboratory for innovation; to provide portfolios of urban simulation which inform future designs; to develop technologies that ensure equity, fairness and realise a better quality of city life; to develop technologies that ensure informed participation and create shared knowledge for democratic city governance; and to ensure greater and more effective mobility and access to opportunities for urban populations. We begin by defining the state of the art, explaining the science of smart cities. We define six scenarios based on new cities badging themselves as smart, older cities regenerating themselves as smart, the development of science parks, tech cities, and technopoles focused on high technologies, the development of urban services using contemporary ICT, the use of ICT to develop new urban intelligence functions, and the development of online and mobile forms of participation. 
Seven project areas are then proposed: Integrated Databases for the Smart City; Sensing, Networking and the Impact of New Social Media; Modelling Network Performance, Mobility and Travel Behaviour; Modelling Urban Land Use, Transport and Economic Interactions; Modelling Urban Transactional Activities in Labour and Housing Markets; Decision Support as Urban Intelligence; and Participatory Governance and Planning Structures for the Smart City. Finally, we anticipate the paradigm shifts that will occur in this research and define a series of key demonstrators which we believe are important to progressing a science of smart cities.
1,616 citations
TL;DR: It is shown that, by exploiting the processing power inherent in current computing systems, substantial gains in classifier accuracy and response time are possible, and that other important characteristics for prosthetic control systems are met.
Abstract: This paper represents an ongoing investigation of dexterous and natural control of upper extremity prostheses using the myoelectric signal (MES). The scheme described within uses pattern recognition to process four channels of MES, with the task of discriminating multiple classes of limb movement. The method does not require segmentation of the MES data, allowing a continuous stream of class decisions to be delivered to a prosthetic device. It is shown in this paper that, by exploiting the processing power inherent in current computing systems, substantial gains in classifier accuracy and response time are possible. Other important characteristics for prosthetic control systems are met as well. Due to the fact that the classifier learns the muscle activation patterns for each desired class for each individual, a natural control actuation results. The continuous decision stream allows complex sequences of manipulation involving multiple joints to be performed without interruption. Finally, minimal storage capacity is required, which is an important factor in embedded control systems.
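The continuous-decision idea described above can be sketched as follows. The mean-absolute-value feature is a standard choice in myoelectric control, but the nearest-centroid classifier, window sizes, and synthetic two-channel signal here are illustrative stand-ins, not the paper's actual pipeline:

```python
# Sketch of unsegmented, continuous classification: features are taken
# from overlapping windows of a multichannel signal stream, and a class
# decision is emitted at every step.

def mav_features(window):
    """Mean absolute value per channel (a common myoelectric feature)."""
    return [sum(abs(s) for s in ch) / len(ch) for ch in window]

def classify(features, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda c: dist(features, centroids[c]))

def decision_stream(channels, centroids, win=50, step=25):
    """Slide a window along the stream, emitting one decision per step."""
    n = len(channels[0])
    decisions = []
    for start in range(0, n - win + 1, step):
        window = [ch[start:start + win] for ch in channels]
        decisions.append(classify(mav_features(window), centroids))
    return decisions

# Two synthetic channels: low activity first, then sustained high activity.
ch1 = [0.1] * 100 + [1.0] * 100
ch2 = [0.2] * 100 + [0.8] * 100
centroids = {'rest': [0.1, 0.2], 'grip': [1.0, 0.8]}
stream = decision_stream([ch1, ch2], centroids)
print(stream)  # 'rest' decisions early in the stream, 'grip' late
```

Because no segmentation boundary is needed, decisions flow at a fixed rate, which is what lets complex multi-joint sequences proceed without interruption.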
1,545 citations
01 Jan 2004
TL;DR: A sufficient condition for the optimality of naive Bayes is presented and proved, in which dependence between attributes does exist, providing evidence that dependences among attributes may cancel each other out.
Abstract: Naive Bayes is one of the most efficient and effective inductive learning algorithms for machine learning and data mining. Its competitive performance in classification is surprising, because the conditional independence assumption on which it is based is rarely true in real-world applications. An open question is: what is the true reason for the surprisingly good performance of naive Bayes in classification? In this paper, we propose a novel explanation of the superb classification performance of naive Bayes. We show that, essentially, the dependence distribution plays a crucial role; i.e., how the local dependence of a node distributes in each class, evenly or unevenly, and how the local dependencies of all nodes work together, consistently (supporting a certain classification) or inconsistently (canceling each other out). Therefore, no matter how strong the dependences among attributes are, naive Bayes can still be optimal if the dependences distribute evenly in classes, or if the dependences cancel each other out. We propose and prove a sufficient and necessary condition for the optimality of naive Bayes. Further, we investigate the optimality of naive Bayes under the Gaussian distribution. We present and prove a sufficient condition for the optimality of naive Bayes in which dependence between attributes does exist. This provides evidence that dependences among attributes may cancel each other out. In addition, we explore when naive Bayes works well.

Naive Bayes and Augmented Naive Bayes. Classification is a fundamental issue in machine learning and data mining. In classification, the goal of a learning algorithm is to construct a classifier given a set of training examples with class labels. Typically, an example E is represented by a tuple of attribute values (x1, x2, · · · , xn), where xi is the value of attribute Xi. Let C represent the classification variable, and let c be the value of C.
In this paper, we assume that there are only two classes: + (the positive class) or − (the negative class). A classifier is a function that assigns a class label to an example. From the probability perspective, according to Bayes' rule, the probability of an example E = (x1, x2, · · · , xn) being class c is p(c|E) = p(E|c)p(c) / p(E). E is classified as the class C = + if and only if fb(E) = p(C = +|E) / p(C = −|E) ≥ 1, (1) where fb(E) is called a Bayesian classifier. Assume that all attributes are independent given the value of the class variable; that is, p(E|c) = p(x1, x2, · · · , xn|c) = ∏_{i=1}^{n} p(xi|c).
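The decision rule sketched in the abstract (classify E as + iff the ratio fb(E) is at least 1, with class-conditional independence of attributes) can be written out in Python. The Laplace smoothing and the toy binary-attribute data below are illustrative additions, not part of the paper:

```python
# Minimal naive Bayes over binary attributes, implementing the ratio
# classifier fb(E) = p(C=+|E) / p(C=-|E) under the conditional
# independence assumption. Training data is a toy example.

from collections import Counter

def train(examples, labels):
    """Estimate p(c) and p(x_i | c) with Laplace smoothing."""
    n_attrs = len(examples[0])
    priors = Counter(labels)
    counts = {c: [Counter() for _ in range(n_attrs)] for c in priors}
    for x, c in zip(examples, labels):
        for i, v in enumerate(x):
            counts[c][i][v] += 1
    total = len(labels)
    def p_class(c):
        return priors[c] / total
    def p_attr(i, v, c):
        # Add-one smoothing over the two possible attribute values.
        return (counts[c][i][v] + 1) / (priors[c] + 2)
    return p_class, p_attr

def fb(example, p_class, p_attr):
    """Ratio p(C=+|E) / p(C=-|E); classify + iff fb(E) >= 1."""
    num, den = p_class('+'), p_class('-')
    for i, v in enumerate(example):
        num *= p_attr(i, v, '+')   # product form from the independence assumption
        den *= p_attr(i, v, '-')
    return num / den

# Toy data: two binary attributes correlated with the class.
X = [(1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (0, 0)]
y = ['+', '+', '+', '-', '-', '-']
p_class, p_attr = train(X, y)
print('+' if fb((1, 1), p_class, p_attr) >= 1 else '-')  # classifies as '+'
```

Note that p(E), the denominator in Bayes' rule, cancels in the ratio, which is why fb can be computed from the class prior and the per-attribute conditionals alone.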
1,536 citations
Health Canada; United States Environmental Protection Agency; Brigham Young University; University of Texas at Austin; University of British Columbia; Health Effects Institute; McGill University; University of Minnesota; Harvard University; Utrecht University; University of Washington; Fudan University; New York University; University of California, Los Angeles; University of Ottawa; American Cancer Society; University of California, Davis; Cancer Prevention Institute of California; University of New Brunswick; Dalhousie University; Carleton University; Statistics Canada; University of Toronto; Chinese Center for Disease Control and Prevention; St George's, University of London; University of Hong Kong; University of Ulm; SERC Reliability Corporation
TL;DR: PM2.5 exposure may be related to causes of death beyond the five considered by the GBD, and incorporation of risk information from other, nonoutdoor, particle sources leads to underestimation of disease burden, especially at higher concentrations.
Abstract: Exposure to ambient fine particulate matter (PM2.5) is a major global health concern. Quantitative estimates of attributable mortality are based on disease-specific hazard ratio models that incorporate risk information from multiple PM2.5 sources (outdoor and indoor air pollution from use of solid fuels and secondhand and active smoking), requiring assumptions about equivalent exposure and toxicity. We relax these contentious assumptions by constructing a PM2.5-mortality hazard ratio function based only on cohort studies of outdoor air pollution that covers the global exposure range. We modeled the shape of the association between PM2.5 and nonaccidental mortality using data from 41 cohorts from 16 countries: the Global Exposure Mortality Model (GEMM). We then constructed GEMMs for five specific causes of death examined by the global burden of disease (GBD). The GEMM predicts 8.9 million [95% confidence interval (CI): 7.5-10.3] deaths in 2015, a figure 30% larger than that predicted by the sum of deaths among the five specific causes (6.9; 95% CI: 4.9-8.5) and 120% larger than the risk function used in the GBD (4.0; 95% CI: 3.3-4.8). Differences between the GEMM and GBD risk functions are larger for a 20% reduction in concentrations, with the GEMM predicting 220% higher excess deaths. These results suggest that PM2.5 exposure may be related to causes of death beyond the five considered by the GBD, and that incorporation of risk information from other, nonoutdoor, particle sources leads to underestimation of disease burden, especially at higher concentrations.
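The step from a hazard ratio function to attributable deaths typically runs through the population attributable fraction, PAF = 1 − 1/HR, multiplied by baseline deaths. The log-linear hazard ratio and its parameters below are simplified placeholders for illustration, not the GEMM's fitted shape:

```python
# Illustrative translation of a PM2.5 hazard ratio into excess deaths.
# The counterfactual concentration and slope are assumed values.

import math

def hazard_ratio(pm25, counterfactual=2.4, beta=0.01):
    """Simplified log-linear hazard ratio above a counterfactual level."""
    excess = max(pm25 - counterfactual, 0.0)
    return math.exp(beta * excess)

def attributable_deaths(baseline_deaths, pm25):
    """Excess deaths = baseline deaths * population attributable fraction."""
    hr = hazard_ratio(pm25)
    paf = 1 - 1 / hr  # population attributable fraction
    return baseline_deaths * paf

print(attributable_deaths(1000, 50))  # excess deaths under the assumed HR
```

With this structure, a steeper or differently shaped HR function (as the GEMM provides relative to the GBD's integrated exposure-response function) directly changes the predicted burden, which is the mechanism behind the differing totals quoted above.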
1,283 citations
Authors
Showing all 10596 results
| Name | H-index | Papers | Citations |
| --- | --- | --- | --- |
| David Scott | 124 | 1561 | 82554 |
| Wei Lu | 111 | 1973 | 61911 |
| Richard J. Hobbs | 108 | 592 | 68141 |
| Wei Zhang | 104 | 2911 | 64923 |
| Chris M. Wood | 102 | 795 | 43076 |
| Mark S. Tremblay | 100 | 541 | 43843 |
| James Taylor | 95 | 1161 | 39945 |
| Johan Richard | 95 | 499 | 25915 |
| Chun Li | 93 | 517 | 41645 |
| Bin Li | 92 | 1755 | 42835 |
| Robert J. Blanchard | 83 | 241 | 22316 |
| Robie W. Macdonald | 79 | 292 | 23460 |
| Serge Kaliaguine | 76 | 465 | 21443 |
| Ravin Balakrishnan | 72 | 182 | 15970 |
| Min Wang | 72 | 716 | 19197 |