
Showing papers by "University of New Brunswick" published in 2009


Proceedings ArticleDOI
08 Jul 2009
TL;DR: A new data set, NSL-KDD, is proposed, which consists of selected records of the complete KDD data set and does not suffer from any of the mentioned shortcomings.
Abstract: During the last decade, anomaly detection has attracted the attention of many researchers to overcome the weakness of signature-based IDSs in detecting novel attacks, and KDDCUP'99 is the most widely used data set for the evaluation of these systems. Having conducted a statistical analysis on this data set, we found two important issues which highly affect the performance of evaluated systems and result in a very poor evaluation of anomaly detection approaches. To solve these issues, we have proposed a new data set, NSL-KDD, which consists of selected records of the complete KDD data set and does not suffer from any of the mentioned shortcomings.
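As a rough, hedged illustration of how the NSL-KDD records are typically consumed, the sketch below trains a baseline classifier on the training split and scores it on the test split; the file names (KDDTrain+.txt / KDDTest+.txt), the 41-feature layout and the trailing difficulty column are assumptions about the distributed format, not details taken from the paper.

    # Minimal sketch (not the authors' evaluation protocol): train a classifier on
    # NSL-KDD and report accuracy on the held-out test split. Assumes the usual
    # "KDDTrain+.txt"/"KDDTest+.txt" layout: 41 features, an attack label, and a
    # per-row difficulty score.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score

    cols = [f"f{i}" for i in range(41)] + ["label", "difficulty"]
    train = pd.read_csv("KDDTrain+.txt", names=cols)
    test = pd.read_csv("KDDTest+.txt", names=cols)

    # Collapse attack subtypes to a binary normal/attack target.
    y_train = (train["label"] != "normal").astype(int)
    y_test = (test["label"] != "normal").astype(int)

    # One-hot encode the symbolic features (protocol, service, flag).
    X = pd.get_dummies(pd.concat([train, test]).drop(columns=["label", "difficulty"]))
    X_train, X_test = X.iloc[:len(train)], X.iloc[len(train):]

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))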

3,300 citations


Journal ArticleDOI
11 Feb 2009-JAMA
TL;DR: The results suggest that reinnervated muscles can produce sufficient EMG information for real-time control of advanced artificial arms, offering a route to improved prosthetic-arm function.
Abstract: Context Improving the function of prosthetic arms remains a challenge, because access to the neural-control information for the arm is lost during amputation. A surgical technique called targeted muscle reinnervation (TMR) transfers residual arm nerves to alternative muscle sites. After reinnervation, these target muscles produce electromyogram (EMG) signals on the surface of the skin that can be measured and used to control prosthetic arms. Objective To assess the performance of patients with upper-limb amputation who had undergone TMR surgery, using a pattern-recognition algorithm to decode EMG signals and control prosthetic-arm motions. Design, Setting, and Participants Study conducted between January 2007 and January 2008 at the Rehabilitation Institute of Chicago among 5 patients with shoulder-disarticulation or transhumeral amputations who underwent TMR surgery between February 2002 and October 2006 and 5 control participants without amputation. Surface EMG signals were recorded from all participants and decoded using a pattern-recognition algorithm. The decoding program controlled the movement of a virtual prosthetic arm. All participants were instructed to perform various arm movements, and their abilities to control the virtual prosthetic arm were measured. In addition, TMR patients used the same control system to operate advanced arm prosthesis prototypes. Main Outcome Measure Performance metrics measured during virtual arm movements included motion selection time, motion completion time, and motion completion (“success”) rate. Results The TMR patients were able to repeatedly perform 10 different elbow, wrist, and hand motions with the virtual prosthetic arm. For these patients, the mean motion selection and motion completion times for elbow and wrist movements were 0.22 seconds (SD, 0.06) and 1.29 seconds (SD, 0.15), respectively. These times were 0.06 seconds and 0.21 seconds longer than the mean times for control participants. For TMR patients, the mean motion selection and motion completion times for hand-grasp patterns were 0.38 seconds (SD, 0.12) and 1.54 seconds (SD, 0.27), respectively. These patients successfully completed a mean of 96.3% (SD, 3.8) of elbow and wrist movements and 86.9% (SD, 13.9) of hand movements within 5 seconds, compared with 100% (SD, 0) and 96.7% (SD, 4.7) completed by controls. Three of the patients were able to demonstrate the use of this control system in advanced prostheses, including motorized shoulders, elbows, wrists, and hands. Conclusion These results suggest that reinnervated muscles can produce sufficient EMG information for real-time control of advanced artificial arms.
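The sketch below shows one plausible form of the EMG pattern-recognition step described above: windowed time-domain features fed to a linear discriminant classifier. It is a generic illustration with placeholder data and assumed window parameters, not the authors' implementation.

    # Sketch of a windowed EMG pattern-recognition pipeline (features + LDA).
    # This mirrors the general approach described in the abstract, not the
    # authors' exact implementation; window length and features are assumptions.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def td_features(window):                      # window: (samples, channels)
        mav = np.mean(np.abs(window), axis=0)     # mean absolute value
        wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)         # waveform length
        zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
        return np.concatenate([mav, wl, zc])

    def windows(emg, labels, win=200, step=50):   # e.g. 200-sample analysis windows
        X, y = [], []
        for start in range(0, len(emg) - win, step):
            X.append(td_features(emg[start:start + win]))
            y.append(labels[start + win - 1])     # label at window end
        return np.array(X), np.array(y)

    # emg: (samples, channels) surface recordings; motion: per-sample class codes
    emg = np.random.randn(10000, 8); motion = np.random.randint(0, 10, 10000)  # placeholder data
    X, y = windows(emg, motion)
    clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])   # odd/even split for illustration
    print("hold-out accuracy:", clf.score(X[1::2], y[1::2]))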

920 citations


Journal ArticleDOI
TL;DR: This work reviews studies of the phylogenetic structure of communities of different major taxa and trophic levels, across different spatial and phylogenetic scales, and using different metrics and null models, and discusses the relationship between metrics of phylogenetic clustering and tree balance.
Abstract: The analysis of the phylogenetic structure of communities can help reveal contemporary ecological interactions, as well as link community ecology with biogeography and the study of character evolution. The number of studies employing this broad approach has increased to the point where comparison of their results can now be used to highlight successes and deficiencies in the approach, and to detect emerging patterns in community organization. We review studies of the phylogenetic structure of communities of different major taxa and trophic levels, across different spatial and phylogenetic scales, and using different metrics and null models. Twenty-three of 39 studies (59%) find evidence for phylogenetic clustering in contemporary communities, but terrestrial and/or plant systems are heavily over-represented among published studies. Experimental investigations, although uncommon at present, hold promise for unravelling mechanisms underlying the phylogenetic community structure patterns observed in community surveys. We discuss the relationship between metrics of phylogenetic clustering and tree balance and explore the various emerging biases in taxonomy and pitfalls of scale. Finally, we look beyond one-dimensional metrics of phylogenetic structure towards multivariate descriptors that better capture the variety of ecological behaviours likely to be exhibited in communities of species with hundreds of millions of years of independent evolution.
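One widely used clustering metric in this literature is the net relatedness index (NRI): the observed mean pairwise phylogenetic distance of a community compared against a null distribution of random draws from the species pool. The sketch below uses a random placeholder distance matrix and a simple taxa-shuffling null model; it illustrates the metric only and is not tied to the studies reviewed.

    # Sketch of an NRI-style test: compare a community's mean pairwise phylogenetic
    # distance (MPD) against a null model drawing random species sets from the
    # regional pool. The distance matrix here is a random placeholder, not real data.
    import numpy as np

    rng = np.random.default_rng(0)
    n_pool = 50
    D = rng.uniform(1, 100, (n_pool, n_pool)); D = (D + D.T) / 2; np.fill_diagonal(D, 0)

    def mpd(D, idx):
        sub = D[np.ix_(idx, idx)]
        return sub[np.triu_indices(len(idx), k=1)].mean()

    community = rng.choice(n_pool, size=12, replace=False)
    obs = mpd(D, community)
    null = np.array([mpd(D, rng.choice(n_pool, size=12, replace=False)) for _ in range(999)])
    nri = -(obs - null.mean()) / null.std()      # positive NRI -> phylogenetic clustering
    print(f"observed MPD={obs:.2f}, NRI={nri:.2f}")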

580 citations


Journal ArticleDOI
TL;DR: In this article, polystyrene was grafted onto the surface of ZnO nanoparticles to improve the dispersion of the particles and to reduce their photocatalytic activity.

541 citations


Journal ArticleDOI
TL;DR: The Greenland ice core from NorthGRIP (NGRIP) contains a proxy climate record across the Pleistocene-Holocene boundary of unprecedented clarity and resolution, which, as discussed by the authors, enables the base of the Holocene, as reflected in the first signs of climatic warming at the end of the Younger Dryas/Greenland Stadial 1 cold phase, to be located with a high degree of precision.
Abstract: The Greenland ice core from NorthGRIP (NGRIP) contains a proxy climate record across the Pleistocene-Holocene boundary of unprecedented clarity and resolution. Analysis of an array of physical and chemical parameters within the ice enables the base of the Holocene, as reflected in the first signs of climatic warming at the end of the Younger Dryas/Greenland Stadial 1 cold phase, to be located with a high degree of precision. This climatic event is most clearly reflected in an abrupt shift in deuterium excess values, accompanied by more gradual changes in δ18O, dust concentration, a range of chemical species, and annual layer thickness. A timescale based on multi-parameter annual layer counting provides an age of 11,700 calendar yr b2k (before AD 2000) for the base of the Holocene, with a maximum counting error of 99 yr. A proposal that an archived core from this unique sequence should constitute the Global Stratotype Section and Point (GSSP) for the base of the Holocene Series/Epoch (Quaternary System/Period) has been ratified by the International Union of Geological Sciences. Five auxiliary stratotypes for the Pleistocene-Holocene boundary have also been recognised. Copyright © 2008 John Wiley & Sons, Ltd.

534 citations


Journal ArticleDOI
TL;DR: The development of offshore IMTA requires the identification of environmental and economic risks and benefits of such large-scale systems, compared with similarly-scaled monocultures of high trophic-level finfish in offshore systems.

488 citations


Proceedings ArticleDOI
17 Apr 2009
TL;DR: In this paper, the authors describe and classify both the types of people who volunteer geospatial information and the nature of their contributions, and offer different taxonomies that can help researchers clarify what is at stake with respect to the contributors.
Abstract: Advances in positioning, Web mapping, cellular communications and wiki technologies have surpassed the original visions of GSDI programs around the world. By tapping the distributed knowledge, personal time and energy of volunteer contributors, GI voluntarism is beginning to relocate and redistribute selected GI productive activities from mapping agencies to networks of non-state volunteer actors. Participants in the production process are both users and producers, or ‘produsers’ to use a recent neologism. Indeed, GI voluntarism ultimately has the potential to redistribute the rights to define and judge the value of the produced geographic information and of the new production system in general. The concept and its implementation present a rich collection of both opportunities and risks now being considered by leaders of public and private mapping organizations world-wide. In this paper, the authors describe and classify both the types of people who volunteer geospatial information and the nature of their contributions. Combining empirical research dealing with the Open Source software and Wikipedia communities with input from selected national mapping agencies and private companies, the authors offer different taxonomies that can help researchers clarify what is at stake with respect to geospatial information contributors. They identify early lessons which may be drawn from this research, and suggest questions which may be posed by large mapping organizations when considering the potential opportunities and risks associated with encouraging and employing Volunteered Geographic Information in their programs.

445 citations


Journal ArticleDOI
TL;DR: A novel signal processing algorithm for the surface electromyogram (EMG) is proposed to extract simultaneous and proportional control information for multiple DOFs and a DOF-wise nonnegative matrix factorization is developed to estimate neural control information from the multichannel surface EMG.
Abstract: A novel signal processing algorithm for the surface electromyogram (EMG) is proposed to extract simultaneous and proportional control information for multiple DOFs. The algorithm is based on a generative model for the surface EMG. The model assumes that synergistic muscles share spinal neural drives, which correspond to the intended activations of different DOFs of natural movements and are embedded within the surface EMG. A DOF-wise nonnegative matrix factorization (NMF) is developed to estimate neural control information from the multichannel surface EMG. It is shown, both by simulation and experimental studies, that the proposed algorithm is able to extract the multidimensional control information simultaneously. A direct application of the proposed method would be providing simultaneous and proportional control of multifunction myoelectric prostheses.
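The sketch below shows only the generic factorization step behind such synergy-extraction methods: rectified, low-pass-filtered EMG envelopes decomposed with nonnegative matrix factorization. The DOF-wise structure of the paper's algorithm is not reproduced, and the filter settings and component count are assumptions.

    # Sketch: factorize nonnegative EMG envelopes into synergies with NMF.
    # The paper's DOF-wise formulation is more structured; this shows only the
    # generic NMF step on rectified, smoothed multichannel EMG (placeholder data).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from sklearn.decomposition import NMF

    fs = 1000
    emg = np.random.randn(8, 20 * fs)                 # 8 channels, 20 s placeholder
    b, a = butter(2, 5 / (fs / 2), btype="low")       # 5 Hz envelope filter
    env = filtfilt(b, a, np.abs(emg), axis=1)         # rectify + low-pass
    env = np.clip(env, 0, None)                       # enforce nonnegativity for NMF

    model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
    W = model.fit_transform(env.T)     # (time, components): estimated neural drives
    H = model.components_              # (components, channels): synergy weights
    print(W.shape, H.shape)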

415 citations


Journal ArticleDOI
TL;DR: In this paper, a meta-analysis compared risk instruments and other psychological measures on their ability to predict general (primarily nonsexual) violence in adults, and found little variation amongst the mean effect sizes of common actuarial or structured risk instruments (i.e., Historical, Clinical, and Risk Management Violence Risk Assessment Scheme; Level of Supervision Inventory-Revised; Violence Risk Assessment Guide; Statistical Information on Recidivism scale; and Psychopathy Checklist-Revised).
Abstract: Using 88 studies from 1980 to 2006, a meta-analysis compares risk instruments and other psychological measures on their ability to predict general (primarily nonsexual) violence in adults. Little variation was found amongst the mean effect sizes of common actuarial or structured risk instruments (i.e., Historical, Clinical, and Risk Management Violence Risk Assessment Scheme; Level of Supervision Inventory‐Revised; Violence Risk Assessment Guide; Statistical Information on Recidivism scale; and Psychopathy Checklist‐Revised). Third-generation instruments, dynamic risk factors, and file review plus interview methods had the advantage in predicting violent recidivism. Second-generation instruments, static risk factors, and use of file review were the strongest predictors of institutional violence. Measures derived from criminological-related theories or research produced larger effect sizes than did those of less content relevance. Additional research on existing risk instruments is required to provide more precise point estimates, especially regarding the outcome of institutional violence.
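For readers unfamiliar with how such pooled effect sizes are produced, the sketch below computes a DerSimonian-Laird random-effects mean from per-study effect sizes and variances; the numbers are placeholders, not values from the meta-analysis.

    # Sketch of the kind of pooling used in such meta-analyses: a DerSimonian-Laird
    # random-effects mean effect size from per-study estimates and variances.
    # The numbers below are placeholders, not values from the paper.
    import numpy as np

    y = np.array([0.25, 0.31, 0.18, 0.40, 0.22])       # per-study effect sizes
    v = np.array([0.010, 0.020, 0.015, 0.030, 0.012])  # their sampling variances

    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed) ** 2)
    tau2 = max(0.0, (Q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_star = 1 / (v + tau2)                             # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    print(f"pooled effect = {y_re:.3f} (95% CI {y_re - 1.96*se:.3f} to {y_re + 1.96*se:.3f})")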

391 citations


Book
08 Oct 2009
TL;DR: In this book, the authors present a cognitive theory of anxiety and its empirical status, cover cognitive assessment, case formulation, and cognitive and behavioral intervention strategies, and apply cognitive therapy to specific anxiety disorders, including panic disorder, social phobia, generalized anxiety disorder, obsessive-compulsive disorder, and posttraumatic stress disorder.
Abstract: Part I: Cognitive Theory and Research on Anxiety. Anxiety: A Common but Multifaceted Condition. The Cognitive Model of Anxiety. Empirical Status of the Cognitive Model of Anxiety. Vulnerability to Anxiety. Part II: Cognitive Therapy of Anxiety: Assessment and Intervention Strategies. Cognitive Assessment and Case Formulation. Cognitive Interventions for Anxiety. Behavioral Interventions: A Cognitive Perspective. Part III: Cognitive Theory and Treatment of Specific Anxiety Disorders. Cognitive Therapy of Panic Disorder. Cognitive Therapy of Social Phobia. Cognitive Therapy of Generalized Anxiety Disorder. Cognitive Therapy of Obsessive-Compulsive Disorder. Cognitive Therapy of Posttraumatic Stress Disorder.

323 citations


Journal ArticleDOI
TL;DR: In this paper, polyurethane-based coatings reinforced by ZnO nanoparticles (about 27 nm) were prepared via solution blending, and a simple method of solution casting and evaporation was used.

Journal ArticleDOI
TL;DR: This paper summarizes the existing improved algorithms and proposes a novel Bayes model: hidden naive Bayes (HNB), which significantly outperforms NB, SBC, NBTree, TAN, and AODE in terms of CLL and AUC.
Abstract: Because learning an optimal Bayesian network classifier is an NP-hard problem, learning-improved naive Bayes has attracted much attention from researchers. In this paper, we summarize the existing improved algorithms and propose a novel Bayes model: hidden naive Bayes (HNB). In HNB, a hidden parent is created for each attribute which combines the influences from all other attributes. We experimentally test HNB in terms of classification accuracy, using the 36 UCI data sets selected by Weka, and compare it to naive Bayes (NB), selective Bayesian classifiers (SBC), naive Bayes tree (NBTree), tree-augmented naive Bayes (TAN), and averaged one-dependence estimators (AODE). The experimental results show that HNB significantly outperforms NB, SBC, NBTree, TAN, and AODE. In many data mining applications, an accurate class probability estimation and ranking are also desirable. We study the class probability estimation and ranking performance, measured by conditional log likelihood (CLL) and the area under the ROC curve (AUC), respectively, of naive Bayes and its improved models, such as SBC, NBTree, TAN, and AODE, and then compare HNB to them in terms of CLL and AUC. Our experiments show that HNB also significantly outperforms all of them.
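A compact sketch of the HNB classification rule described above is given below: each attribute receives a hidden parent that mixes the conditional probabilities given every other attribute, with weights proportional to conditional mutual information. The smoothing and normalization details are assumptions following the usual presentation of HNB, not code from the paper.

    # Sketch of the hidden naive Bayes (HNB) classification rule: each attribute Ai
    # gets a hidden parent that mixes P(ai | aj, c) over all other attributes Aj,
    # with weights proportional to the conditional mutual information I(Ai; Aj | C).
    # Smoothing and normalization details are assumptions, not taken from the paper.
    import numpy as np
    from itertools import product

    def fit_hnb(X, y):
        n, d = X.shape
        classes = np.unique(y)
        vals = [np.unique(X[:, i]) for i in range(d)]
        pc = {c: (np.sum(y == c) + 1) / (n + len(classes)) for c in classes}  # P(c), smoothed
        def p_ai_c(i, a, c):
            m = (y == c)
            return (np.sum(m & (X[:, i] == a)) + 1) / (np.sum(m) + len(vals[i]))
        def p_ai_ajc(i, a, j, b, c):
            m = (y == c) & (X[:, j] == b)
            return (np.sum(m & (X[:, i] == a)) + 1) / (np.sum(m) + len(vals[i]))
        def cmi(i, j):                      # empirical (smoothed) I(Ai; Aj | C)
            total = 0.0
            for c in classes:
                for a, b in product(vals[i], vals[j]):
                    pabc = (np.sum((y == c) & (X[:, i] == a) & (X[:, j] == b)) + 1) / \
                           (n + len(vals[i]) * len(vals[j]) * len(classes))
                    total += pabc * np.log(p_ai_ajc(i, a, j, b, c) / p_ai_c(i, a, c))
            return max(total, 0.0)
        W = np.zeros((d, d))
        for i in range(d):
            for j in range(d):
                if i != j:
                    W[i, j] = cmi(i, j)
            s = W[i].sum()
            W[i] = W[i] / s if s > 0 else np.full(d, 1.0 / (d - 1)) * (np.arange(d) != i)
        def predict(x):
            scores = {}
            for c in classes:
                score = np.log(pc[c])
                for i in range(d):
                    mix = sum(W[i, j] * p_ai_ajc(i, x[i], j, x[j], c) for j in range(d) if j != i)
                    score += np.log(mix)
                scores[c] = score
            return max(scores, key=scores.get)
        return predict

    # Toy usage with random categorical data
    rng = np.random.default_rng(0)
    X = rng.integers(0, 3, size=(200, 4)); y = rng.integers(0, 2, size=200)
    predict = fit_hnb(X, y)
    print(predict(X[0]))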

Journal ArticleDOI
TL;DR: In this paper, a cancrinite-type zeolite (ZFA) was synthesized from Class C fly ash via the molten-salt method; the maximum exchange level (MEL) was determined, and the uptake of heavy metals on ZFA was found to follow an ion exchange mechanism.

Journal Article
TL;DR: This paper used four data points from Canada's National Longitudinal Study of Children and Youth (NLSCY) to examine how the academic achievement gap attributed to socio-economic status changes from childhood to adolescence (ages 7 to 15).
Abstract: Although a positive relationship between socio-economic status and academic achievement is well-established, how it varies with age is not. This article uses four data points from Canada's National Longitudinal Study of Children and Youth (NLSCY) to examine how the academic achievement gap attributed to SES changes from childhood to adolescence (ages 7 to 15). Estimates of panel data and hierarchical linear models indicate that the gap remains fairly stable from the age of 7 to 11 years and widens at an increasing rate from the age of 11 to the age of 15 years. Theoretical arguments and policy implications surrounding this finding are discussed. Key words: SES, academic achievement, early adolescence, growth model.
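A hedged sketch of the kind of hierarchical growth model mentioned above follows, using simulated data: achievement regressed on age, SES and their interaction, with child-level random intercepts and age slopes. Variable names are hypothetical and are not NLSCY field names.

    # Sketch of a hierarchical (mixed-effects) growth model in the spirit of the
    # analysis described: achievement as a function of age, SES, and their
    # interaction, with child-level random intercepts and age slopes.
    # Variable names are hypothetical, not NLSCY field names; data are simulated.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n_children, waves = 300, 4
    df = pd.DataFrame({
        "child": np.repeat(np.arange(n_children), waves),
        "age": np.tile([7, 9, 11, 13], n_children).astype(float),
    })
    ses = rng.normal(size=n_children)
    df["ses"] = ses[df["child"]]
    df["score"] = 40 + 2.0 * df["age"] + 1.5 * df["ses"] \
                  + 0.3 * df["ses"] * (df["age"] - 7) \
                  + rng.normal(0, 3, len(df))     # widening SES gap built into the simulation

    model = smf.mixedlm("score ~ age * ses", df, groups=df["child"], re_formula="~age")
    result = model.fit()
    print(result.summary())                        # the age:ses term captures the widening gap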

Journal ArticleDOI
TL;DR: In this paper, the authors simulate fluid flow in three-dimensional random fibrous media using the lattice Boltzmann method, determine the permeability of the media using the Darcy law, and derive a semi-empirical constitutive model for the permeability as a function of porosity and fibre diameter.
Abstract: Fluid flow analyses for porous media are of great importance in a wide range of industrial applications including, but not limited to, resin transfer moulding, filter analysis, transport of underground water and pollutants, and hydrocarbon recovery. Permeability is perhaps the most important property that characterizes porous media; however, its determination for different types of porous media is challenging due its complex dependence on the pore-level structure of the media. In the present work, fluid flow in three-dimensional random fibrous media is simulated using the lattice Boltzmann method. We determine the permeability of the medium using the Darcy law across a wide range of void fractions (0.08 ⩽ ϕ ⩽ 0.99) and find that the values for the permeability that we obtain are consistent with available experimental data. We use our numerical data to develop a semi-empirical constitutive model for the permeability of fibrous media as a function of their porosity and of the fibre diameter. The model, which is underpinned by the theoretical analysis of flow through cylinder arrays presented by [Gebart BR. Permeability of unidirectional reinforcements for RTM. J Compos Mater 1992; 26(8): 1100–33], gives an excellent fit to these data across the range of ϕ. We perform further simulations to determine the impact of the curvature and aspect ratio of the fibres on the permeability. We find that curvature has a negligible effect, and that aspect ratio is only important for fibres with aspect ratio smaller than 6:1, in which case the permeability increases with increasing aspect ratio. Finally, we calculate the permeability tensor for the fibrous media studied and confirm numerically that, for an isotropic medium, the permeability tensor reduces to a scalar value.
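The Darcy-law step used to back permeability out of a simulated or measured flow is simple enough to show directly; the sketch below uses placeholder values rather than numbers from the paper.

    # Sketch of the Darcy-law step used to back out permeability from a simulated
    # flow: superficial velocity u = (k/mu) * (dP/L), so k = mu * u * L / dP.
    # The numbers are placeholders, not values from the paper.
    def darcy_permeability(mu, mean_velocity, pressure_drop, length):
        """Return permeability k (m^2) from Darcy's law."""
        return mu * mean_velocity * length / pressure_drop

    mu = 1.0e-3          # dynamic viscosity of water, Pa*s
    u = 2.0e-4           # superficial (volume-averaged) velocity, m/s
    dp = 500.0           # pressure drop across the sample, Pa
    L = 0.01             # sample length, m
    print(f"k = {darcy_permeability(mu, u, dp, L):.3e} m^2")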

Journal ArticleDOI
TL;DR: In this article, the authors outline some scientific issues for perfor-for performance analysis of sport performance and propose a method to improve future outcomes with a view to improving future outcomes.
Abstract: The scientific analysis of sport performance aims at advancing understanding of game behaviour with a view to improving future outcomes. In this article we outline some scientific issues for perfor...

Journal ArticleDOI
TL;DR: This study examined two proposed pathways between sexual self-disclosure (SSD) and sexual satisfaction in a sample of 104 heterosexual couples in long-term relationships and found support for the instrumental pathway for both men and women.
Abstract: This study examined two proposed pathways between sexual self-disclosure (SSD) and sexual satisfaction in a sample of 104 heterosexual couples in long-term relationships. According to the proposed instrumental pathway, disclosure of sexual preferences increases a partner's understanding of those preferences resulting in a sexual script that is more rewarding and less costly. A more favorable balance of sexual rewards to sexual costs, in turn, results in greater sexual satisfaction for the disclosing individual. According to the proposed expressive pathway, mutual self-disclosure contributes to relationship satisfaction, which in turn leads to greater sexual satisfaction. Support was found for the instrumental pathway for both men and women. Support also was found for an expressive pathway between own SSD and partner nonsexual self-disclosure (NSD) and men's sexual satisfaction, and between own NSD and women's sexual satisfaction. These results are interpreted in terms of mechanisms for establishing and maintaining sexual satisfaction in long-term relationships in men and women.

Journal ArticleDOI
TL;DR: A new network signal modelling technique for detecting network anomalies, combining wavelet approximation and system identification theory, is proposed; it achieves high detection rates in terms of both attack instances and attack types.
Abstract: Signal processing techniques have been applied recently for analyzing and detecting network anomalies due to their potential to find novel or unknown intrusions. In this paper, we propose a new network signal modelling technique for detecting network anomalies, combining the wavelet approximation and system identification theory. In order to characterize network traffic behaviors, we present fifteen features and use them as the input signals in our system. We then evaluate our approach with the 1999 DARPA intrusion detection dataset and conduct a comprehensive analysis of the intrusions in the dataset. Evaluation results show that the approach achieves high detection rates in terms of both attack instances and attack types. Furthermore, we conduct a full day's evaluation in a real large-scale WiFi ISP network where five attack types are successfully detected from over 30 million flows.
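The sketch below illustrates only the wavelet-approximation part of the idea: a coarse approximation of a per-minute traffic feature serves as the baseline, and large residuals are flagged. The wavelet family, decomposition level and threshold are assumptions, and the paper's system-identification stage is omitted.

    # Sketch of the wavelet-approximation idea: keep the coarse approximation of a
    # traffic feature (e.g., flows per minute) as the baseline and flag samples
    # whose residual from that baseline is unusually large.
    import numpy as np
    import pywt

    rng = np.random.default_rng(0)
    flows = 1000 + 50 * np.sin(np.linspace(0, 12 * np.pi, 1440)) + rng.normal(0, 20, 1440)
    flows[800:820] += 600                         # injected anomaly

    coeffs = pywt.wavedec(flows, "db4", level=5)
    coeffs_approx = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    baseline = pywt.waverec(coeffs_approx, "db4")[: len(flows)]

    residual = flows - baseline
    threshold = 3 * np.std(residual)
    anomalies = np.where(np.abs(residual) > threshold)[0]
    print("flagged minutes:", anomalies[:10], "...")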

Journal ArticleDOI
TL;DR: The measured raw MES signals are rotated by class-specific principal component matrices to spatially decorrelate the measured data prior to feature extraction, allowing a pattern recognition classifier to better discriminate the test motions.
Abstract: Information extracted from multiple channels of the surface myoelectric signal (MES) recording sites can be used as inputs to control systems for powered upper limb prostheses. For small, closely spaced muscles, such as the muscles in the forearm, the detected MES often contains contributions from more than one muscle, the contribution from each specific muscle being modified by the dispersive propagation through the volume conductor between the muscle and the detection points. In this paper, the measured raw MES signals are rotated by class-specific principal component matrices to spatially decorrelate the measured data prior to feature extraction. This "tunes" the data to allow a pattern recognition classifier to better discriminate the test motions. This processing technique was used to significantly (p<0.01) reduce pattern recognition classification error for both intact-limbed and transradial amputee subjects.
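One plausible realization of the class-specific spatial decorrelation described above is sketched below: a principal component rotation estimated from each motion class's training windows is applied to the channels before computing features. It is a simplified illustration with placeholder data, not the authors' full classification scheme.

    # Sketch of class-specific spatial decorrelation: estimate a PCA rotation from
    # each motion class's training windows and apply it to the channels before
    # feature extraction.
    import numpy as np

    def class_pca_matrices(windows, labels):
        """windows: (n_windows, samples, channels); returns {class: rotation matrix}."""
        mats = {}
        for c in np.unique(labels):
            data = windows[labels == c].reshape(-1, windows.shape[2])   # stack samples
            cov = np.cov(data, rowvar=False)
            _, vecs = np.linalg.eigh(cov)        # eigenvectors = principal directions
            mats[c] = vecs
        return mats

    def rotate_and_extract(window, rotation):
        rotated = window @ rotation              # spatially decorrelated channels
        return np.mean(np.abs(rotated), axis=0)  # MAV feature per rotated channel

    # Placeholder data: 600 windows of 200 samples x 8 channels, 10 motion classes
    rng = np.random.default_rng(0)
    W = rng.standard_normal((600, 200, 8)); y = rng.integers(0, 10, 600)
    R = class_pca_matrices(W, y)
    feat = rotate_and_extract(W[0], R[y[0]])
    print(feat.shape)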

Journal ArticleDOI
TL;DR: In this paper, the authors situate the noncrossing partitions associated with a finite Coxeter group within the representation theory of quivers, give a new proof that these noncrossing partitions form a lattice, and show that certain subcategories of quiver representations are in natural bijection with the cluster tilting objects in the associated cluster category.
Abstract: We situate the noncrossing partitions associated with a finite Coxeter group within the context of the representation theory of quivers. We describe Reading’s bijection between noncrossing partitions and clusters in this context, and show that it extends to the extended Dynkin case. Our setup also yields a new proof that the noncrossing partitions associated with a finite Coxeter group form a lattice. We also prove some new results within the theory of quiver representations. We show that the finitely generated, exact abelian, and extension-closed subcategories of the representations of a quiver Q without oriented cycles are in natural bijection with the cluster tilting objects in the associated cluster category. We also show that these subcategories are exactly the finitely generated categories that can be obtained as the semistable objects with respect to some stability condition.
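As a small aside on the objects involved (not part of the paper's arguments): in type A_{n-1}, i.e. for the symmetric group on n letters, the noncrossing partitions of {1, ..., n} are counted by the Catalan numbers, which the brute-force enumeration below checks for small n.

    # Aside illustrating the objects involved: in type A_{n-1} the lattice of
    # noncrossing partitions of {1,...,n} has Catalan(n) elements. Brute-force
    # enumeration below checks this for small n.
    from itertools import combinations
    from math import comb

    def set_partitions(elems):
        if not elems:
            yield []
            return
        first, rest = elems[0], elems[1:]
        for size in range(len(rest) + 1):
            for others in combinations(rest, size):
                block = (first,) + others
                remaining = [e for e in rest if e not in others]
                for sub in set_partitions(remaining):
                    yield [block] + sub

    def is_noncrossing(partition):
        # Crossing: a < b < c < d with a, c in one block and b, d in another.
        for B1 in partition:
            for B2 in partition:
                if B1 is B2:
                    continue
                for a in B1:
                    for c in B1:
                        if a < c and any(a < b < c for b in B2) and any(d > c for d in B2):
                            return False
        return True

    for n in range(1, 8):
        count = sum(is_noncrossing(p) for p in set_partitions(list(range(1, n + 1))))
        catalan = comb(2 * n, n) // (n + 1)
        print(n, count, catalan)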

Journal ArticleDOI
TL;DR: It is concluded that, in terms of ESB, residence and place of socialization may be less of a factor than opportunity, which highlights the importance of providing services and facilities.
Abstract: Distinctions between rural and urban populations are well documented in environmental sociology literature. Rural and urban places may exert different influences on participation in environmentally supportive behavior (ESB) as well as on other forms of environmental concern (EC). The influence of these distinct geographies may be due to present circumstances or because of childhood socialization in these places. Using data from a national survey in Canada (n = 1664), we use cognitive (basic values, environmental worldview, and environmental attitude) and behavioral indicators (public and private sphere) of EC to explore differences among rural and urban populations and we include analyses accounting for place of socialization. We extend the conventional private sphere category of ESB by including stewardship behaviors. Results showed few differences between rural and urban residents on indicators of EC. Rural residents, however, scored higher on altruistic values, placed a higher priority on the environment, and reported higher participation in recycling and stewardship behaviors. Analysis that included place of socialization showed differences on environmental worldview, basic values, and some ESB. In terms of ESB, we conclude that residence and place of socialization may be less of a factor than opportunity and highlight the importance of providing services and facilities. We recommend future research on residence and ESB include a variety of behaviors that reflect opportunities for both rural and urban residents.

Journal ArticleDOI
TL;DR: In this paper, exact algorithms for the synthesis of multiple-control Toffoli networks are presented, i.e., algorithms that guarantee to find a network with the minimal number of gates.
Abstract: Synthesis of reversible logic has become a very important research area in recent years. Applications can be found in the domain of low-power design, optical computing, and quantum computing. In the past, several approaches have been introduced that synthesize reversible networks with respect to a given function. Most of these methods only approximate a minimal network representation. In this paper, exact algorithms for the synthesis of multiple-control Toffoli networks are presented, i.e., algorithms that guarantee to find a network with the minimal number of gates. Our iterative algorithms formulate the synthesis problem as a sequence of decision problems. The decision problems are encoded as Boolean satisfiability (SAT) or SAT modulo theory (SMT) instances, respectively. As soon as one of these instances becomes satisfiable, a Toffoli network representation for the given function has been found. We show that choosing the encoding for synthesis is crucial for the resulting runtimes. Furthermore, we discuss the principal limits of the SAT and SMT approaches. To overcome these limits, we propose a method using problem-specific knowledge during synthesis. In addition, better embeddings to make irreversible functions reversible are considered. For the resulting synthesis problems, an improvement is presented that reduces the overall runtime by automatically setting the constant inputs to their optimal values. Experimental results on a large set of benchmarks demonstrate the differences between three exact synthesis algorithms. In addition, a comparison with the best-known heuristic results is provided. In summary, the results show that, for some benchmarks, the heuristic approaches have already found the minimal network, while for other benchmarks, significantly smaller networks exist.
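The iterative idea, deciding for d = 0, 1, 2, ... whether a network with d gates exists and stopping at the first yes, can be illustrated without the SAT/SMT machinery by explicit breadth-first search over small circuits. The sketch below does this for 3-line multiple-control Toffoli networks; it is a brute-force analogue offered for illustration, not the paper's encoding.

    # Brute-force analogue of the iterative approach described in the abstract:
    # ask "is there a multiple-control Toffoli network with d gates realizing f?"
    # for d = 0, 1, 2, ... and stop at the first yes. The SAT/SMT encodings of the
    # paper are replaced here by explicit breadth-first search over 3-line circuits.
    from itertools import combinations

    N = 3  # number of circuit lines

    def toffoli_gates(n):
        """All multiple-control Toffoli gates on n lines as permutations of 2^n states."""
        gates = []
        for target in range(n):
            others = [l for l in range(n) if l != target]
            for k in range(len(others) + 1):
                for controls in combinations(others, k):
                    perm = tuple(
                        x ^ (1 << target) if all(x >> c & 1 for c in controls) else x
                        for x in range(2 ** n)
                    )
                    gates.append(perm)
        return gates

    def compose(p, q):
        """Apply permutation p, then q."""
        return tuple(q[p[x]] for x in range(len(p)))

    def exact_synthesis(target):
        identity = tuple(range(2 ** N))
        gates = toffoli_gates(N)
        frontier, seen, depth = {identity}, {identity}, 0
        while target not in frontier:
            depth += 1
            frontier = {compose(p, g) for p in frontier for g in gates} - seen
            seen |= frontier
        return depth

    # Example: minimal gate count for the 3-bit incrementer (x -> x + 1 mod 8).
    target = tuple((x + 1) % 8 for x in range(8))
    print("minimal gate count:", exact_synthesis(target))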

01 Jan 2009
TL;DR: The next generation of IMTA systems will have to be designed and tested on a much larger scale than the current systems, and the number of systems will need to be expanded.
Abstract: Contents: Background and objectives; Review of current IMTA systems; Major requirements for the expansion of IMTA; Conclusions and recommendations.

Journal ArticleDOI
TL;DR: This meta-analysis represented a first attempt to consider the size of the effect of pollution on parasites, and highlighted the potential of susceptible parasite taxa, communities, and functional groups for use in the biological assessment of pollution.

Journal ArticleDOI
TL;DR: The Canadian High Arctic Ionospheric Network (CHAIN), described in this paper, is a distributed array of ground-based radio instruments in the Canadian high Arctic, designed to take advantage of Canadian geographic vantage points for a better understanding of the Sun-Earth system.
Abstract: Polar cap ionospheric measurements are important for the complete understanding of the various processes in the solar wind-magnetosphere-ionosphere system as well as for space weather applications. Currently, the polar cap region is lacking high temporal and spatial resolution ionospheric measurements because of the orbit limitations of space-based measurements and the sparse network providing ground-based measurements. Canada has a unique advantage in remedying this shortcoming because it has the most accessible landmass in the high Arctic regions, and the Canadian High Arctic Ionospheric Network (CHAIN) is designed to take advantage of Canadian geographic vantage points for a better understanding of the Sun-Earth system. CHAIN is a distributed array of ground-based radio instruments in the Canadian high Arctic. The instrument components of CHAIN are 10 high data rate Global Positioning System ionospheric scintillation and total electron content monitors and six Canadian Advanced Digital Ionosondes. Most of these instruments have been sited within the polar cap region except for two GPS reference stations at lower latitudes. This paper briefly overviews the scientific capabilities, instrument components, and deployment status of CHAIN. This paper also reports a GPS signal scintillation episode associated with a magnetospheric impulse event. More details of the CHAIN project and data can be found at http://chain.physics.unb.ca/chain.
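CHAIN's GPS monitors report amplitude scintillation through the S4 index. The sketch below computes the standard definition, the normalized standard deviation of signal intensity over a short interval, on simulated data; the interval length and the absence of detrending are simplifying assumptions, not CHAIN's exact processing.

    # Sketch of the standard amplitude-scintillation index that GPS ionospheric
    # scintillation monitors report: S4 = sqrt((<I^2> - <I>^2) / <I>^2) over a
    # short interval of signal intensity samples (here 60 s of simulated data).
    import numpy as np

    def s4_index(intensity):
        i = np.asarray(intensity, dtype=float)
        return np.sqrt((np.mean(i ** 2) - np.mean(i) ** 2) / np.mean(i) ** 2)

    rng = np.random.default_rng(0)
    fs = 50                                          # 50 Hz intensity samples
    quiet = rng.gamma(2500, 1 / 2500, 60 * fs)       # calm interval, mean intensity 1
    disturbed = rng.gamma(6, 1 / 6, 60 * fs)         # strongly scintillating interval
    print("S4 quiet:", round(s4_index(quiet), 3), " S4 disturbed:", round(s4_index(disturbed), 3))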

Journal ArticleDOI
TL;DR: The results suggest that customers are more influenced by the content of trust-assuring arguments when the price of a product is relatively high than when it is relatively low, and that when customers have more at stake, they do not necessarily have to rely only on an independent third-party source to form high trust beliefs about the store.
Abstract: The research question examined in this paper is whether or not product price can be used as a proxy to predict how customers' trust will be influenced by different trust-assuring arguments displayed on a business-to-consumer e-commerce Web site. Drawing from the elaboration likelihood model (ELM) and Toulmin's model of argumentation, we examine the effects on consumer trust of two levels of source and two levels of content factors, under two levels of product price, in a laboratory experiment with 128 subjects. Product price was predicted as a moderating factor that would influence the customer's motivation to scrutinize more closely the content of the trust-assuring arguments. The results suggest that customers are more influenced by the content of trust-assuring arguments when the price of a product is relatively high than when it is relatively low. Presumably, Internet stores employ a third party's trust-assuring arguments because customers are less likely to trust an unknown Internet store's own trust-assuring arguments. However, the results paradoxically may imply that when customers have more at stake (e.g., buying a high-price product), they do not necessarily have to rely only on an independent third-party source to form high trust beliefs about the store. When customers purchase a high-price product, they seem to form trusting beliefs by scrutinizing argument content rather than by depending on heuristic cues (e.g., an independent party's opinion) as the ELM would predict.

Journal ArticleDOI
TL;DR: In this article, the authors introduced a shape parameter to an exponential model using the idea of Azzalini, which results in a new class of weighted exponential (WE) distributions whose probability density function (PDF) is very close in shape to the PDFs of the Weibull, gamma, or generalized exponential distributions.
Abstract: Introducing a shape parameter to an exponential model is nothing new. There are many ways to introduce a shape parameter to an exponential distribution. The different methods may result in a variety of weighted exponential (WE) distributions. In this article, we have introduced a shape parameter to an exponential model using the idea of Azzalini, which results in a new class of WE distributions. This new WE model has a probability density function (PDF) whose shape is very close to the shape of the PDFs of the Weibull, gamma or generalized exponential distributions. Therefore, this model can be used as an alternative to any of these distributions. It is observed that this model can also be obtained as a hidden truncation model. Different properties of this new model have been discussed and compared with the corresponding properties of well-known distributions. Two data sets have been analysed for illustrative purposes and it is observed that in both the cases it fits better than Weibull, gamma or generalized exponential distributions.
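The density of this Azzalini-type weighted exponential family is commonly written as f(x) = ((α+1)/α) λ e^(-λx) (1 - e^(-αλx)) for x > 0; since the abstract does not display the formula, that form is stated here as an assumption, and the sketch below simply checks numerically that it integrates to one for a few parameter choices.

    # Sketch of the Azzalini-type weighted exponential (WE) density usually written
    # f(x) = ((a+1)/a) * lam * exp(-lam*x) * (1 - exp(-a*lam*x)), x > 0,
    # with shape a > 0 and rate lam > 0. The quadrature check below confirms that
    # this assumed form integrates to one.
    import numpy as np
    from scipy.integrate import quad

    def we_pdf(x, a, lam):
        return (a + 1) / a * lam * np.exp(-lam * x) * (1 - np.exp(-a * lam * x))

    for a, lam in [(0.5, 1.0), (2.0, 0.7), (5.0, 2.0)]:
        total, _ = quad(we_pdf, 0, np.inf, args=(a, lam))
        print(f"a={a}, lam={lam}: integral = {total:.6f}")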

Journal ArticleDOI
TL;DR: Most faecal settling data used in published salmonid waste dispersal models are rudimentary, and recent information suggests that such models are highly sensitive to this input; the design of open-water IMTA systems is made difficult by the limited information on particle size, digestibility, and the settleable and non-settleable mass fractions of salmonid faeces at cage environments.
Abstract: Knowledge of the quantitative and qualitative properties of salmonid faeces is necessary for aquaculture waste dispersal models, and the design of integrated multi-trophic aquaculture (IMTA) systems. The amount and proximate composition of salmonid faeces can be estimated using a mass-balance, nutritional approach. Indigestible components of salmonid diets have the potential to affect faecal 'cohesiveness' or 'stability'. Nutrient content and density of faeces can vary depending on diet and submersion time. Faecal density has a greater influence on settling velocity than faecal size. Published settling velocity data on salmonid faeces are highly variable due to differences in fish size, rearing systems, collection time, water density, methodology, the mass fraction tested and diet. Most faecal settling data used in published salmonid waste dispersal models are rudimentary and recent information suggests that such models are highly sensitive to this input. The design of open-water IMTA systems and estimation of nutrient capture and recovery from co-cultured filter feeders is difficult due to limited information on particle size, digestibility, settleable and non-settleable mass fractions of salmonid faeces at cage environments. Implications of faecal properties on the accountability for the effects of aquaculture nutrient loading are discussed.
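As a back-of-envelope illustration of how settling velocity depends on excess density and particle diameter, the sketch below evaluates the textbook Stokes-law expression for an idealized small sphere; real faecal pellets are porous aggregates that may fall outside the Stokes regime, so this is offered purely for orientation and is not the paper's measurement method.

    # Illustration (not the paper's method): in the Stokes regime the settling
    # velocity of a small sphere is v = g * d^2 * (rho_p - rho_w) / (18 * mu).
    # Faecal pellets may fall outside the Stokes regime, so treat this purely as a
    # back-of-envelope sketch with placeholder numbers.
    def stokes_settling_velocity(d, rho_p, rho_w=1025.0, mu=1.4e-3, g=9.81):
        """d in m, densities in kg/m^3, mu in Pa*s; returns velocity in m/s."""
        return g * d ** 2 * (rho_p - rho_w) / (18 * mu)

    for rho_p in (1050.0, 1100.0, 1200.0):        # increasing faecal density
        v = stokes_settling_velocity(d=1e-3, rho_p=rho_p)
        print(f"rho_p={rho_p:.0f} kg/m^3 -> v = {100 * v:.2f} cm/s")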

Journal ArticleDOI
TL;DR: In this article, an artificial neural network (ANN) model was developed to predict soil texture (sand, clay and silt contents) based on soil attributes obtained from existing coarse-resolution soil maps combined with hydrographic parameters derived from a digital elevation model.
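A hedged sketch of the modelling step described above follows: a small multi-output neural network regression from coarse-map soil attributes and DEM-derived terrain variables to sand, clay and silt percentages. The feature names, network size and data below are hypothetical placeholders.

    # Sketch of the modelling step described: a small neural network mapping
    # coarse-map soil attributes plus DEM-derived terrain variables to sand, clay
    # and silt percentages. The data here are random placeholders.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 500
    X = rng.standard_normal((n, 5))        # e.g. map unit code, slope, wetness index, ...
    sand = 40 + 10 * X[:, 0] + rng.normal(0, 5, n)
    clay = 30 - 6 * X[:, 1] + rng.normal(0, 5, n)
    silt = 100 - sand - clay               # fractions sum to 100%
    Y = np.column_stack([sand, clay, silt])

    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
    model.fit(X_tr, Y_tr)
    print("R^2 on held-out data:", round(model.score(X_te, Y_te), 3))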

Journal ArticleDOI
TL;DR: The main aim of this paper is to define a bivariate generalized exponential distribution so that the marginals have generalized exponential distributions and to use the EM algorithm to compute the maximum likelihood estimators of the unknown parameters and also obtain the observed and expected Fisher information matrices.
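The TL;DR does not spell out the construction, but one natural way to obtain generalized exponential (GE) marginals is a maximization analogue of the Marshall-Olkin scheme, sketched below as an assumption: X = max(U1, U3) and Y = max(U2, U3) with independent Ui ~ GE(αi, λ), which yields GE(α1+α3, λ) and GE(α2+α3, λ) marginals because the GE cdf is (1 - e^(-λx))^α.

    # Sketch (stated as an assumption, not the paper's definition): a bivariate
    # distribution with generalized exponential marginals via maximization.
    # X = max(U1, U3), Y = max(U2, U3) with independent Ui ~ GE(alpha_i, lam);
    # since the GE cdf is (1 - exp(-lam*x))^alpha, the marginals are
    # GE(alpha1 + alpha3, lam) and GE(alpha2 + alpha3, lam).
    import numpy as np

    def rvs_ge(alpha, lam, size, rng):
        """Inverse-cdf sampling from GE(alpha, lam): F(x) = (1 - exp(-lam*x))^alpha."""
        u = rng.uniform(size=size)
        return -np.log(1 - u ** (1 / alpha)) / lam

    rng = np.random.default_rng(0)
    a1, a2, a3, lam = 1.0, 1.5, 0.8, 2.0
    n = 100_000
    u1, u2, u3 = (rvs_ge(a, lam, n, rng) for a in (a1, a2, a3))
    x, y = np.maximum(u1, u3), np.maximum(u2, u3)

    # Empirical check of the marginal cdf of X against GE(a1 + a3, lam) at one point
    t = 0.5
    print(np.mean(x <= t), (1 - np.exp(-lam * t)) ** (a1 + a3))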