
Showing papers by "Carleton University" published in 2003


Journal ArticleDOI
TL;DR: In this article, the authors suggest that the term "fragmentation" should be reserved for the breaking apart of habitat, independent of habitat loss, and that fragmentation per se has much weaker effects on biodiversity that are at least as likely to be positive as negative.
Abstract: ■ Abstract The literature on effects of habitat fragmentation on biodiversity is huge. It is also very diverse, with different authors measuring fragmentation in different ways and, as a consequence, drawing different conclusions regarding both the magnitude and direction of its effects. Habitat fragmentation is usually defined as a landscape-scale process involving both habitat loss and the breaking apart of habitat. Results of empirical studies of habitat fragmentation are often difficult to interpret because (a) many researchers measure fragmentation at the patch scale, not the landscape scale and (b) most researchers measure fragmentation in ways that do not distinguish between habitat loss and habitat fragmentation per se, i.e., the breaking apart of habitat after controlling for habitat loss. Empirical studies to date suggest that habitat loss has large, consistently negative effects on biodiversity. Habitat fragmentation per se has much weaker effects on biodiversity that are at least as likely to be positive as negative. Therefore, to correctly interpret the influence of habitat fragmentation on biodiversity, the effects of these two components of fragmentation must be measured independently. More studies of the independent effects of habitat loss and fragmentation per se are needed to determine the factors that lead to positive versus negative effects of fragmentation per se. I suggest that the term “fragmentation” should be reserved for the breaking apart of habitat, independent of habitat loss.

6,341 citations


Book
J. N. K. Rao1
23 Jan 2003
TL;DR: In this book, the author presents model-based approaches to small area estimation and contrasts them with traditional direct and indirect estimators of domain totals.
Abstract: List of Figures. List of Tables. Foreword. Preface. 1. Introduction. What is a Small Area? Demand for Small Area Statistics. Traditional Indirect Estimators. Small Area Models. Model-Based Estimation. Some Examples. 2. Direct Domain Estimation. Introduction. Design-based Approach. Estimation of Totals. Domain Estimation. Modified Direct Estimators. Design Issues. Proofs. 3. Traditional Demographic Methods. Introduction. Symptomatic Accounting Techniques. Regression Symptomatic Procedures. Dual-system Estimation of Total Population. Derivation of Average MSEs. 4. Indirect Domain Estimation. Introduction. Synthetic Estimation. Composite Estimation. James-Stein Method. Proofs. 5. Small Area Models. Introduction. Basic Area Level (Type A) Model. Basic Unit Level (Type B) Model. Extensions: Type A Models. Extensions: Type B Models. Generalized Linear Mixed Models. 6. Empirical Best Linear Unbiased Prediction: Theory. Introduction. General Linear Mixed Model. Block Diagonal Covariance Structure. Proofs. 7. EBLUP: Basic Models. Basic Area Level Model. Basic Unit Level Model. 8. EBLUP: Extensions. Multivariate Fay-Herriot Model. Correlated Sampling Errors. Time Series and Cross-sectional Models. Spatial Models. Multivariate Nested Error Regression Model. Random Error Variances Linear Model. Two-fold Nested Error Regression Model. Two-level Model. 9. Empirical Bayes (EB) Method. Introduction. Basic Area Level Model. Linear Mixed Models. Binary Data. Disease Mapping. Triple-goal Estimation. Empirical Linear Bayes. Constrained LB. Proofs. 10. Hierarchical Bayes (HB) Method. Introduction. MCMC Methods. Basic Area Level Model. Unmatched Sampling and Linking Area Level Models. Basic Unit Level Model. General ANOVA Model. Two-level Models. Time Series and Cross-sectional Models. Multivariate Models. Disease Mapping Models. Binary Data. Exponential Family Models. Constrained HB. Proofs. References. Author Index. Subject Index.
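
For orientation, the basic area-level (Fay-Herriot) model and its EBLUP listed in the contents above can be written in their standard textbook form; the notation below is generic and is not quoted from the book.

```latex
% Basic area-level (Fay-Herriot) model in standard textbook notation
% (generic form for orientation; not quoted from the book).
\hat{\theta}_i = \theta_i + e_i, \qquad e_i \sim N(0,\psi_i)
  \quad \text{(sampling model, } \psi_i \text{ known)},
\qquad
\theta_i = \mathbf{x}_i^{\top}\boldsymbol{\beta} + v_i, \qquad v_i \sim N(0,\sigma_v^2)
  \quad \text{(linking model)}.
% The EBLUP shrinks the direct estimate toward the regression-synthetic estimate:
\tilde{\theta}_i = \gamma_i \hat{\theta}_i + (1-\gamma_i)\,\mathbf{x}_i^{\top}\hat{\boldsymbol{\beta}},
\qquad
\gamma_i = \frac{\hat{\sigma}_v^2}{\hat{\sigma}_v^2 + \psi_i}.
```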

1,359 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore how reflexivity can be operationalized and discuss reflexivity in terms of personal, interpersonal, institutional, pragmatic, emotional, theoretical, epistemological and ontological influences on our research and data analysis processes.
Abstract: While the importance of being reflexive is acknowledged within social science research, the difficulties, practicalities and methods of doing it are rarely addressed. Thus, the implications of current theoretical and philosophical discussions about reflexivity, epistemology and the construction of knowledge for empirical sociological research practice, specifically the analysis of qualitative data, remain underdeveloped. Drawing on our doctoral experiences, we reflect on the possibilities and limits of reflexivity during the interpretive stages of research. We explore how reflexivity can be operationalized and discuss reflexivity in terms of the personal, interpersonal, institutional, pragmatic, emotional, theoretical, epistemological and ontological influences on our research and data analysis processes. We argue that data analysis methods are not just neutral techniques. They reflect, and are imbued with, theoretical, epistemological and ontological assumptions, including conceptions of subjects and subjectivities, and understandings of how knowledge is constructed and produced. In suggesting how epistemological and ontological positionings can be translated into research practice, our article contributes to current debates aiming to bridge the gap between abstract epistemological discussions and the nitty-gritty of research practice.

1,298 citations


Journal ArticleDOI
TL;DR: Fundamental concepts in this emerging area of neural-network computational modules are described, aimed at teaching RF/microwave engineers what neural networks are, why they are useful, when they can be used, and how to use them.
Abstract: Neural-network computational modules have recently gained recognition as an unconventional and useful tool for RF and microwave modeling and design. Neural networks can be trained to learn the behavior of passive/active components/circuits. A trained neural network can be used for high-level design, providing fast and accurate answers to the task it has learned. Neural networks are attractive alternatives to conventional methods such as numerical modeling methods, which could be computationally expensive, or analytical methods which could be difficult to obtain for new devices, or empirical modeling solutions whose range and accuracy may be limited. This tutorial describes fundamental concepts in this emerging area aimed at teaching RF/microwave engineers what neural networks are, why they are useful, when they can be used, and how to use them. Neural-network structures and their training methods are described from the RF/microwave designer's perspective. Electromagnetics-based training for passive component models and physics-based training for active device models are illustrated. Circuit design and yield optimization using passive/active neural models are also presented. A multimedia slide presentation along with narrative audio clips is included in the electronic version of this paper. A hyperlink to the NeuroModeler demonstration software is provided to allow readers practice neural-network-based design concepts.
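
As a toy illustration of the idea the tutorial describes, the sketch below fits a small multilayer perceptron to synthetic samples of a one-port response so that the trained network can answer design-sweep queries quickly. The data, library choice (scikit-learn), and network size are assumptions for illustration only; this is not the NeuroModeler tool referenced in the paper.

```python
# Toy sketch: train a small neural network to mimic a component response,
# in the spirit of neural-network-based RF/microwave modeling.
# The synthetic "measurement" below is an assumption for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: |S21| of a hypothetical resonant component
# sampled over frequency (GHz). Stand-in for EM-simulation or measured data.
f = np.linspace(1.0, 6.0, 200).reshape(-1, 1)
s21 = 1.0 / np.sqrt(1.0 + ((f.ravel() - 3.5) / 0.4) ** 2)  # simple resonance shape
s21_noisy = s21 + 0.01 * rng.standard_normal(s21.shape)

# Train a small MLP as the fast surrogate model.
model = MLPRegressor(hidden_layer_sizes=(20, 20), activation="tanh",
                     solver="lbfgs", max_iter=5000, random_state=0)
model.fit(f, s21_noisy)

# Once trained, the surrogate gives fast answers for design sweeps.
f_test = np.array([[2.0], [3.5], [5.0]])
print(model.predict(f_test))
```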

608 citations


Journal ArticleDOI
TL;DR: In this paper, the development of coupled atomistic/continuum models is reviewed within a single coherent framework with the aim of providing both non-specialists and specialists with insight into the key ideas, features, differences and advantages of prevailing models.
Abstract: Important advances in multi-scale computer simulation techniques for computational materials science have been made in the last decade as scientists and engineers strive to imbue continuum-based models with more-realistic details at quantum and atomistic scales. One major class of multi-scale models directly couples a region described with full atomistic detail to a surrounding region modelled using continuum concepts and finite element methods. Here, the development of such coupled atomistic/continuum models is reviewed within a single coherent framework with the aim of providing both non-specialists and specialists with insight into the key ideas, features, differences and advantages of prevailing models. Some applications and very recent advances are noted, and important challenges for extending these models to their fullest potential are discussed.

532 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated critical management issues in ERP implementation projects, such as selection of the ERP vendor, project manager, and implementation partners; constitution of the project team; project planning; training; infrastructure development; ongoing project management; quality assurance; and stabilization of the ERP.

415 citations


Journal ArticleDOI
TL;DR: The results suggest that web designers may need to pay attention to both visual appeal and usability, and they raise the question of the extent to which satisfaction rating scales capture the same interface qualities as those uncovered in self-reports of interactive experiences.

355 citations


Journal ArticleDOI
D. Z. Besson1, S. Anderson2, V. V. Frolov2, D. T. Gong2, Yuichi Kubota2, Shuwang Li2, Ron Poling2, A. Smith2, C. J. Stepaniak2, J. Urheim2, Z. Metreveli3, K. K. Seth3, Amiran Tomaradze3, Peter K. Zweber3, K. E. Arms4, E. Eckhart4, K. K. Gan4, C. Gwon4, T. K. Pedlar4, E. von Toerne4, Horst Severini5, P. Skubic5, S. A. Dytman6, James Mueller6, S. Nam6, V. Savinov6, J. W. Hinson7, G. S. Huang7, Jason Sang Hun Lee7, D. H. Miller7, V. Pavlunin7, B. Sanghi7, E. I. Shibata7, I. P.J. Shipsey7, Daniel P Cronin-Hennessy8, C. S. Park8, W. Park8, J. B. Thayer8, E. H. Thorndike8, T. E. Coan9, Y. S. Gao9, F. Liu9, Ryszard Stroynowski9, Marina Artuso10, C. Boulahouache10, S. Blusk10, E. Dambasuren10, O. Dorjkhaidav10, R. Mountain10, H. Muramatsu10, R. Nandakumar10, Tomasz Skwarnicki10, Sheldon Stone10, Jing Wang10, A. H. Mahmood11, S. E. Csorna12, I. Danko12, G. Bonvicini13, D. Cinabro13, M. Dubrovin13, S. McGee13, A. Bornheim14, E. Lipeles14, S. P. Pappas14, A. Shapiro14, W. M. Sun14, A. J. Weinstein14, R. A. Briere15, G. P. Chen15, Thomas Ferguson15, G. Tatishvili15, Hans J. Vogel15, M. E. Watkins15, N. E. Adam16, J. P. Alexander16, Karl Berkelman16, V. Boisvert16, D. G. Cassel16, J. E. Duboscq16, K. M. Ecklund16, R. Ehrlich16, R. S. Galik16, L. K. Gibbons16, B. Gittelman16, S. W. Gray16, D. L. Hartill16, B. K. Heltsley16, L. Hsu16, C. D. Jones16, J. Kandaswamy16, D. L. Kreinick16, A. Magerkurth16, H. Mahlke-Krüger16, T. O. Meyer16, N. B. Mistry16, Juliet Ritchie Patterson16, D. Peterson16, J. Pivarski16, S. J. Richichi16, D. Riley16, A. J. Sadoff16, H. Schwarthoff16, M. R. Shepherd16, J. G. Thayer16, D. Urner16, T. Wilksen16, Andreas Warburton16, M. Weinberger16, S. B. Athar17, Paul Avery17, L. Breva-Newell17, V. Potlia17, H. Stoeck17, John Yelton17, B. I. Eisenstein18, G. D. Gollin18, I. Karliner18, N. Lowrey18, C. Plager18, C. Sedlack18, Mats A Selen18, J. J. Thaler18, J. Williams18, K. W. Edwards19, K. W. Edwards20 
TL;DR: Using $e^{+}e^{-}$ annihilation data collected with the CLEO II detector, a narrow resonance decaying to $D_{s}^{*+}\pi^{0}$ with a mass near $2.46~\mathrm{GeV}/c^{2}$ is observed and interpreted as a new state, the $D_{sJ}(2463)^{+}$, alongside the $D_{sJ}^{*}(2317)^{+}$ previously reported by the BaBar Collaboration.
Abstract: Using $13.5~\mathrm{fb}^{-1}$ of $e^{+}e^{-}$ annihilation data collected with the CLEO II detector, we have observed a narrow resonance decaying to $D_{s}^{*+}\pi^{0}$ with a mass near $2.46~\mathrm{GeV}/c^{2}$. The search for such a state was motivated by the recent discovery by the BaBar Collaboration of a narrow state at $2.32~\mathrm{GeV}/c^{2}$, the $D_{sJ}^{*}(2317)^{+}$, that decays to $D_{s}^{+}\pi^{0}$. Reconstructing the $D_{s}^{+}\pi^{0}$ and $D_{s}^{*+}\pi^{0}$ final states in CLEO data, we observe peaks in both of the corresponding reconstructed mass difference distributions, $\Delta M(D_{s}\pi^{0}) = M(D_{s}\pi^{0}) - M(D_{s})$ and $\Delta M(D_{s}^{*}\pi^{0}) = M(D_{s}^{*}\pi^{0}) - M(D_{s}^{*})$, both of them at values near $350~\mathrm{MeV}/c^{2}$. We interpret these peaks as signatures of two distinct states, the $D_{sJ}^{*}(2317)^{+}$ plus a new state, designated as the $D_{sJ}(2463)^{+}$. Because of the similar $\Delta M$ values, each of these states represents a source of background for the other if photons are lost, ignored or added. A quantitative accounting of these reflections confirms that both states exist. We have measured the mean mass differences $\langle\Delta M(D_{s}\pi^{0})\rangle = 350.0 \pm 1.2(\mathrm{stat}) \pm 1.0(\mathrm{syst})~\mathrm{MeV}/c^{2}$ for the $D_{sJ}^{*}(2317)^{+}$ state, and $\langle\Delta M(D_{s}^{*}\pi^{0})\rangle = 351.2 \pm 1.7(\mathrm{stat}) \pm 1.0(\mathrm{syst})~\mathrm{MeV}/c^{2}$ for the new $D_{sJ}(2463)^{+}$ state. We have also searched, but find no evidence, for decays of the two states via the channels $D_{s}^{*+}\gamma$, $D_{s}^{+}\gamma$, and $D_{s}^{+}\pi^{+}\pi^{-}$. The observations of the two states at 2.32 and $2.46~\mathrm{GeV}/c^{2}$, in the $D_{s}^{+}\pi^{0}$ and $D_{s}^{*+}\pi^{0}$ decay channels, respectively, are consistent with their interpretations as $c\overline{s}$ mesons with an orbital angular momentum $L=1$ and spin and parity $J^{P}=0^{+}$ and $1^{+}$.
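
For context, the quoted mass differences can be converted to approximate absolute masses by adding the parent $D_{s}^{(*)+}$ masses; the parent masses used below are approximate PDG values, which are assumptions not stated in the abstract.

```latex
% Converting the measured mass differences into approximate absolute masses
% (parent D_s^{(*)+} masses are approximate PDG values, not given in the abstract):
M\!\left(D_{sJ}^{*}(2317)^{+}\right) \approx M\!\left(D_{s}^{+}\right) + \langle\Delta M(D_{s}\pi^{0})\rangle
  \approx 1968 + 350 \approx 2318~\mathrm{MeV}/c^{2},
\qquad
M\!\left(D_{sJ}(2463)^{+}\right) \approx M\!\left(D_{s}^{*+}\right) + \langle\Delta M(D_{s}^{*}\pi^{0})\rangle
  \approx 2112 + 351 \approx 2463~\mathrm{MeV}/c^{2}.
```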

345 citations


Journal ArticleDOI
TL;DR: Protection against nigral neuron degeneration in PD may be sufficient to facilitate normalized locomotor activity without necessitating striatal reinnervation, according to the results of a mouse model of PD.
Abstract: The molecular mechanisms mediating degeneration of midbrain dopamine neurons in Parkinson's disease (PD) are poorly understood. Here, we provide evidence to support a role for the involvement of the calcium-dependent proteases, calpains, in the loss of dopamine neurons in a mouse model of PD. We show that administration of N-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) evokes an increase in calpain-mediated proteolysis in nigral dopamine neurons in vivo. Inhibition of calpain proteolysis using either a calpain inhibitor (MDL-28170) or adenovirus-mediated overexpression of the endogenous calpain inhibitor protein, calpastatin, significantly attenuated MPTP-induced loss of nigral dopamine neurons. Commensurate with this neuroprotection, MPTP-induced locomotor deficits were abolished, and markers of striatal postsynaptic activity were normalized in calpain inhibitor-treated mice. However, behavioral improvements in MPTP-treated, calpain inhibited mice did not correlate with restored levels of striatal dopamine. These results suggest that protection against nigral neuron degeneration in PD may be sufficient to facilitate normalized locomotor activity without necessitating striatal reinnervation. Immunohistochemical analyses of postmortem midbrain tissues from human PD cases also displayed evidence of increased calpain-related proteolytic activity that was not evident in age-matched control subjects. Taken together, our findings provide a potentially novel correlation between calpain proteolytic activity in an MPTP model of PD and the etiology of neuronal loss in PD in humans.

284 citations


Proceedings ArticleDOI
01 Dec 2003
TL;DR: This work considers a TDMA cellular multihop network where relaying - via wireless terminals that have a good communication link to the base station - is used as a coverage enhancement technique and investigates the effects of relaying node selection strategies and maximum relayer transmit power level on coverage.
Abstract: We consider a TDMA cellular multihop network where relaying - via wireless terminals that have a good communication link to the base station - is used as a coverage enhancement technique. Provided that the subscriber density is not very low, relaying via wireless terminals can have a significant impact on coverage, capacity, and throughput. This is mainly due to the fact that the signals only have to travel through shorter distances and/or improved paths. In this work, we investigated the effects of relaying node selection strategies (essentially a routing issue) and maximum relayer transmit power level on coverage. Our simulation results show that with a very modest level of relaying node transmit power and with some moderate intelligence incorporated in the relaying node selection scheme, the (high data rate) coverage can be improved significantly through two-hop relaying without consuming any additional bandwidth.
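
The abstract does not spell out the selection rule itself, so the snippet below is only a hypothetical illustration of one simple two-hop relayer-selection heuristic (pick the candidate that minimizes the worse of the two hop path losses, subject to a crude relayer power cap); the path-loss model, threshold, and all names are invented and do not reproduce the paper's strategies.

```python
# Hypothetical sketch of a two-hop relayer-selection heuristic.
# The selection rule, path-loss model, and parameters are illustrative
# assumptions; the paper's actual strategies are not reproduced here.
import math

def path_loss_db(distance_m: float, exponent: float = 3.5) -> float:
    """Simple distance-based path-loss model (illustrative only)."""
    return 10.0 * exponent * math.log10(max(distance_m, 1.0))

def select_relayer(source, base_station, candidates, max_relayer_power_db=10.0):
    """Pick the candidate relayer minimizing the worse of its two hop losses,
    treating max_relayer_power_db as a crude budget on the relayer hop."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_cost = None, float("inf")
    for c in candidates:
        hop1 = path_loss_db(dist(source, c))          # subscriber -> relayer
        hop2 = path_loss_db(dist(c, base_station))    # relayer -> base station
        if hop2 > 100.0 + max_relayer_power_db:       # crude feasibility check
            continue
        cost = max(hop1, hop2)                        # bottleneck hop dominates
        if cost < best_cost:
            best, best_cost = c, cost
    return best

# Example: a subscriber at the cell edge choosing among three candidate relayers.
print(select_relayer(source=(900.0, 0.0), base_station=(0.0, 0.0),
                     candidates=[(450.0, 50.0), (600.0, -30.0), (850.0, 10.0)]))
```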

258 citations


Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the ability of four isolation metrics to predict animal dispersal using simulated data, and found that area-informed isolation metrics, such as the amount of available habitat within a given radius of a patch, were most successful at predicting immigration.
Abstract: Habitat isolation can affect the distribution and abundance of wildlife, but it is an ambiguous attribute to measure. Presumably, isolation is a characteristic of a habitat patch that reflects how spatially inaccessible it is to dispersing organisms. We identified four isolation metrics (nearest-neighbor distance, Voronoi polygons, proximity index, and habitat buffers) that were representative of the different families of metrics that are commonly used in the literature to measure patch isolation. Using simulated data, we evaluated the ability of each isolation metric to predict animal dispersal. We examined the simulated movement of organisms in two types of landscapes: artificially generated point-pattern landscapes where patch size and shape were consistent and only the arrangement of patches varied, and realistic landscapes derived from a geographic information system (GIS) of forest-vegetation maps where patch size, shape, and isolation were variable. We tested the performance of the four isolation metrics by examining the strength of the correlation between observed immigration rate in the simulations and each patch isolation metric. We also evaluated whether each isolation metric would perform consistently under varying conditions of patch size/shape, total amount of habitat in the landscape, and proximity of the patch to the landscape edge. The results indicate that a commonly-used distance-based metric, nearest-neighbor distance, did not adequately predict immigration rate when patch size and shape were variable. Area-informed isolation metrics, such as the amount of available habitat within a given radius of a patch, were most successful at predicting immigration. Overall, the use of area-informed metrics is advocated despite the limitation that these metrics require parameterization to reflect the movement capacity of the organism studied.
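
To make the contrast between a distance-based and an area-informed isolation metric concrete, the short sketch below computes nearest-neighbour distance and a simple habitat-buffer metric for circular patches. The patch data and buffer radius are made-up values, and the calculation is only a schematic, not the authors' simulation.

```python
# Schematic comparison of a distance-based and an area-informed isolation metric
# for circular habitat patches. Patch coordinates, radii, and the buffer radius
# are invented for illustration; this is not the authors' simulation code.
import math

# (x, y, radius) of habitat patches, in arbitrary map units
patches = [(0.0, 0.0, 10.0), (40.0, 5.0, 6.0), (45.0, 12.0, 4.0), (120.0, 80.0, 8.0)]

def edge_distance(p, q):
    """Edge-to-edge distance between two circular patches (>= 0)."""
    centre_dist = math.hypot(p[0] - q[0], p[1] - q[1])
    return max(centre_dist - p[2] - q[2], 0.0)

def nearest_neighbour_distance(i):
    """Distance-based metric: distance to the closest other patch."""
    return min(edge_distance(patches[i], q) for j, q in enumerate(patches) if j != i)

def buffer_habitat_area(i, buffer_radius=50.0):
    """Area-informed metric: total area of other patches whose edge lies
    within buffer_radius of the focal patch (a simple habitat-buffer measure)."""
    return sum(math.pi * q[2] ** 2
               for j, q in enumerate(patches)
               if j != i and edge_distance(patches[i], q) <= buffer_radius)

for i in range(len(patches)):
    print(i, round(nearest_neighbour_distance(i), 1), round(buffer_habitat_area(i), 1))
```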

Journal ArticleDOI
TL;DR: In this article, the authors apply logic programming based on answer sets to the problem of retrieving consistent information from a possibly inconsistent database, since consistent information persists from the original database to every of its minimal repairs.
Abstract: A relational database is inconsistent if it does not satisfy a given set of integrity constraints. Nevertheless, it is likely that most of the data in it is consistent with the constraints. In this paper we apply logic programming based on answer sets to the problem of retrieving consistent information from a possibly inconsistent database. Since consistent information persists from the original database to each of its minimal repairs, the approach is based on a specification of database repairs using disjunctive logic programs with exceptions, whose answer set semantics can be represented and computed by systems that implement stable model semantics. These programs allow us to declare persistence by default of data from the original instance to the repairs; and changes to restore consistency, by exceptions. We concentrate mainly on logic programs for binary integrity constraints, among which we find most of the integrity constraints found in practice.
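
To make the notion of consistent answers concrete, the brute-force sketch below enumerates the minimal repairs of a tiny relation that violates a functional dependency and intersects query results across them. It illustrates the semantics only; the relation, query, and enumeration are invented for illustration and do not reproduce the disjunctive logic-program encoding developed in the paper.

```python
# Brute-force illustration of consistent query answering: an answer is
# "consistent" if it holds in every minimal repair of the database.
# This shows the semantics only; the paper encodes repairs as disjunctive
# logic programs with exceptions rather than enumerating them like this.
from itertools import combinations

# Relation Emp(name, salary) violating the functional dependency name -> salary.
emp = {("bob", 50), ("bob", 70), ("alice", 60)}

def satisfies_fd(instance):
    salaries = {}
    for name, salary in instance:
        if name in salaries and salaries[name] != salary:
            return False
        salaries[name] = salary
    return True

def minimal_repairs(instance):
    """Maximal consistent subsets of the instance (repairs under tuple deletion)."""
    consistent = [set(c) for r in range(len(instance), -1, -1)
                  for c in combinations(instance, r) if satisfies_fd(set(c))]
    return [s for s in consistent
            if not any(s < t for t in consistent)]  # keep only maximal ones

repairs = minimal_repairs(emp)

# Query: "what is bob's salary?" -- evaluated in every repair, then intersected.
answers_per_repair = [{sal for (name, sal) in rep if name == "bob"} for rep in repairs]
consistent_answers = set.intersection(*answers_per_repair)
print(repairs)             # two repairs: one keeps (bob, 50), the other (bob, 70)
print(consistent_answers)  # empty set: no salary for bob holds in every repair
```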

Journal ArticleDOI
TL;DR: In this paper, the mediational role of stress and health behaviors in the procrastination-illness relationship was examined, and it was hypothesized that in addition to stress, a behavioral pathway would be implicated, with poor wellness behaviors and delay in seeking treatment for health problems mediating the effects of procrastination on health.

Proceedings ArticleDOI
06 Jan 2003
TL;DR: Heuristics that allow OLSR to find the maximum bandwidth path are developed, and it is proved that for the ad-hoc network model, two of the heuristics are indeed optimal (i.e., guarantee that the highest-bandwidth path between any two nodes is found).
Abstract: In an ad-hoc network, all communication is done over wireless media, without the help of wired base stations. While many routing protocols have been developed to find and maintain routes based on a best-effort service model, quality-of-service (QoS) routing in an ad-hoc network is difficult because the network topology may change constantly and the available state information for routing is inherently imprecise. In this paper, we discuss how to support QoS routing in OLSR (optimized link state routing protocol, one of the routing protocols under study by the IETF MANET Working Group). We develop heuristics that allow OLSR to find the maximum bandwidth path, show through simulation that these heuristics do improve OLSR in the static network case, and finally, we prove that for our ad-hoc network model, two of the heuristics are indeed optimal (i.e., guarantee that the highest-bandwidth path between any two nodes is found).
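
For readers unfamiliar with maximum-bandwidth routing, the sketch below computes the path whose bottleneck bandwidth is largest using a Dijkstra-style relaxation. It is a generic widest-path computation over an assumed link-bandwidth graph, not the OLSR-specific heuristics proposed in the paper.

```python
# Generic widest-path (maximum bottleneck bandwidth) computation with a
# Dijkstra-style relaxation. This illustrates the routing objective only;
# the paper's heuristics work within OLSR's MPR-based link-state mechanism.
import heapq

def widest_path(graph, source, target):
    """graph: {node: {neighbour: link_bandwidth}}.
    Returns (bottleneck_bandwidth, path) maximizing the minimum link bandwidth."""
    best = {source: float("inf")}
    prev = {source: None}
    heap = [(-float("inf"), source)]  # max-heap on bottleneck bandwidth
    while heap:
        neg_bw, node = heapq.heappop(heap)
        bw = -neg_bw
        if node == target:
            path = []
            while node is not None:
                path.append(node)
                node = prev[node]
            return bw, path[::-1]
        if bw < best.get(node, -float("inf")):
            continue  # stale heap entry
        for nbr, link_bw in graph[node].items():
            candidate = min(bw, link_bw)  # bottleneck along the extended path
            if candidate > best.get(nbr, -float("inf")):
                best[nbr] = candidate
                prev[nbr] = node
                heapq.heappush(heap, (-candidate, nbr))
    return 0.0, []

# Toy ad-hoc topology with link bandwidths in Mb/s (values are illustrative).
topology = {
    "A": {"B": 2.0, "C": 11.0},
    "B": {"A": 2.0, "D": 5.5},
    "C": {"A": 11.0, "D": 5.5},
    "D": {"B": 5.5, "C": 5.5},
}
print(widest_path(topology, "A", "D"))  # -> (5.5, ['A', 'C', 'D'])
```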

Book ChapterDOI
30 Jun 2003
TL;DR: This paper presents the first algorithm that solves the GATHERING PROBLEM for any initial configuration of the robots.
Abstract: Consider a set of n > 2 simple autonomous mobile robots (decentralized, asynchronous, no common coordinate system, no identities, no central coordination, no direct communication, no memory of the past, deterministic) moving freely in the plane and able to sense the positions of the other robots. We study the primitive task of gathering them at a point not fixed in advance (GATHERING PROBLEM). In the literature, most contributions are simulation-validated heuristics. The existing algorithmic contributions for such robots are limited to solutions for n ≤ 4 or for restricted sets of initial configurations of the robots. In this paper, we present the first algorithm that solves the GATHERING PROBLEM for any initial configuration of the robots.

Journal ArticleDOI
TL;DR: In this paper, a general design methodology of a low-voltage wide-band voltage-controlled oscillator (VCO) suitable for wireless LAN (WLAN) application is described, and the applications of high-quality passives for the resonator are introduced: a single-loop horseshoe inductor with Q > 20 between 2 and 5 GHz for good phase noise performance; and accumulation MOS (AMOS) varactors with C_max/C_min ratio of 6 to provide wide-band tuning capability at low-voltage supply.
Abstract: In this paper, a general design methodology of a low-voltage wide-band voltage-controlled oscillator (VCO) suitable for wireless LAN (WLAN) application is described. The applications of high-quality passives for the resonator are introduced: 1) a single-loop horseshoe inductor with Q > 20 between 2 and 5 GHz for good phase noise performance; and 2) accumulation MOS (AMOS) varactors with C_max/C_min ratio of 6 to provide wide-band tuning capability at low-voltage supply. The adverse effect of AMOS varactors due to high sensitivity is examined. Amendment using band-switching topology is suggested, and a phase noise improvement of 7 dB is measured to prove the concept. The measured VCO operates on a 1-V supply with a wide tuning range of 58.7% between 3.0 and 5.6 GHz when tuned between ±0.7 V. The phase noise is -120 dBc/Hz at 3.0 GHz, and -114.5 dBc/Hz at 5.6 GHz, with the nominal power dissipation between 2 and 3 mW across the whole tuning range. The best phase noise at 1-MHz offset is -124 dBc/Hz at the frequency of 3 GHz, a supply voltage of 1.4 V, and power dissipation of 8.4 mW. When the supply is reduced to 0.83 V, the VCO dissipates less than 1 mW at 5.6 GHz. Using this design methodology, the feasibility of generating two local oscillator frequencies (2.4-GHz ISM and 5-GHz U-NII) for a WLAN transceiver using a single VCO with only one monolithic inductor is demonstrated. The VCO is fabricated in a 0.13-μm partially depleted silicon-on-insulator CMOS process.
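
As a rough sanity check on the quoted tuning range, the relation between the LC-tank capacitance ratio and the frequency ratio is shown below; the fixed-parasitic term is an assumption added for illustration and does not appear in the abstract.

```latex
% For an LC tank, f = 1/(2\pi\sqrt{LC}), so the ideal tuning ratio set by a
% varactor with C_max/C_min = 6 would be
\frac{f_{\max}}{f_{\min}} = \sqrt{\frac{C_{\max}}{C_{\min}}} = \sqrt{6} \approx 2.45 .
% The measured range, 5.6\,\mathrm{GHz}/3.0\,\mathrm{GHz} \approx 1.87, is smaller,
% which is consistent with a fixed parasitic capacitance C_p in parallel with the
% varactor reducing the effective ratio to (C_{\max}+C_p)/(C_{\min}+C_p).
```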

Proceedings ArticleDOI
22 Sep 2003
TL;DR: This article proposes a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process.
Abstract: The use of Unified Modeling Language (UML) analysis/design models on large projects leads to a large number of interdependent UML diagrams. As software systems evolve, those diagrams undergo changes to, for instance, correct errors or address changes in the requirements. Those changes can in turn lead to subsequent changes to other elements in the UML diagrams. Impact analysis is then defined as the process of identifying the potential consequences (side-effects) of a change, and estimating what needs to be modified to accomplish a change. In this article, we propose a UML model-based approach to impact analysis that can be applied before any implementation of the changes, thus allowing an early decision-making and change planning process. We first verify that the UML diagrams are consistent (consistency check). Then changes between two different versions of a UML model are identified according to a change taxonomy, and model elements that are directly or indirectly impacted by those changes (i.e., may undergo changes) are determined using formally defined impact analysis rules (written with Object Constraint Language). A measure of distance between a changed element and potentially impacted elements is also proposed to prioritize the results of impact analysis according to their likelihood of occurrence. We also present a prototype tool that provides automated support for our impact analysis strategy, which we then apply to a case study to validate both the implementation and the methodology.
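
To give a feel for the distance measure the article describes, the sketch below propagates a change through a hypothetical model-dependency graph with breadth-first search and reports each impacted element's distance from the changed element; the graph, element names, and propagation rule are invented and are not the article's OCL impact rules.

```python
# Hypothetical illustration of distance-based impact analysis: propagate a
# change through a model-dependency graph and rank impacted elements by their
# distance from the changed element. The graph and names are invented; the
# article defines its actual impact rules in OCL over UML models.
from collections import deque

# element -> elements that depend on it (edges point toward potential impact)
dependencies = {
    "Order.total()": ["Invoice.amount()", "OrderTests"],
    "Invoice.amount()": ["Report.generate()"],
    "Report.generate()": [],
    "OrderTests": [],
}

def impact_set(changed_element, graph):
    """Return {impacted_element: distance_from_change} via breadth-first search."""
    distances = {changed_element: 0}
    queue = deque([changed_element])
    while queue:
        current = queue.popleft()
        for dependent in graph.get(current, []):
            if dependent not in distances:
                distances[dependent] = distances[current] + 1
                queue.append(dependent)
    distances.pop(changed_element)  # report only the potentially impacted elements
    return distances

# Elements at distance 1 are more likely to actually change than those at distance 2.
print(impact_set("Order.total()", dependencies))
# -> {'Invoice.amount()': 1, 'OrderTests': 1, 'Report.generate()': 2}
```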

Journal ArticleDOI
TL;DR: In this paper, the performance of the extreme value theory in value-at-risk calculations is compared to the performances of other well-known modeling techniques, such as GARCH, variance-covariance (Var-Cov) method and historical simulation in a volatile stock market.
Abstract: In this paper, the performance of the extreme value theory in value-at-risk calculations is compared to the performances of other well-known modeling techniques, such as GARCH, variance–covariance (Var–Cov) method and historical simulation in a volatile stock market. The models studied can be classified into two groups. The first group consists of GARCH(1, 1) and GARCH(1, 1)-t models which yield highly volatile quantile forecasts. The other group, consisting of historical simulation, Var–Cov approach, adaptive generalized Pareto distribution (GPD) and nonadaptive GPD models, leads to more stable quantile forecasts. The quantile forecasts of GARCH(1, 1) models are excessively volatile relative to the GPD quantile forecasts. This makes the GPD model a robust quantile forecasting tool that is practical to implement and regulate for VaR measurements.
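
For reference, the standard peaks-over-threshold VaR estimator built from a generalized Pareto tail fit is reproduced below; this is the textbook form, and the notation is not necessarily the paper's.

```latex
% Standard peaks-over-threshold VaR estimator from a GPD fit of the tail
% (textbook form; notation may differ from the paper's).
% u: threshold, \hat{\xi}, \hat{\sigma}: fitted GPD shape and scale,
% n: sample size, N_u: number of exceedances over u, q: confidence level (e.g. 0.99).
\widehat{\mathrm{VaR}}_q
  = u + \frac{\hat{\sigma}}{\hat{\xi}}
    \left[ \left( \frac{n}{N_u}\,(1-q) \right)^{-\hat{\xi}} - 1 \right].
```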

Journal ArticleDOI
TL;DR: In this article, the authors share lessons from a workshop in which interdisciplinary teams of young scientists developed conceptual models of social-ecological systems using data sets and metadata from Long-Term Ecological Research sites across the United States.
Abstract: To better understand and manage complex social-ecological systems, social scientists and ecologists must collaborate. However, issues related to language and research approaches can make it hard for researchers in different fields to work together. This paper suggests that researchers can improve interdisciplinary science through the use of conceptual models as a communication tool. The authors share lessons from a workshop in which interdisciplinary teams of young scientists developed conceptual models of social-ecological systems using data sets and metadata from Long-Term Ecological Research sites across the United States. Both the process of model building and the models that were created are discussed. The exercise revealed that the presence of social scientists in a group influenced the place and role of people in the models. This finding suggests that the participation of both ecologists and social scientists in the early stages of project development may produce better questions and more accurate models of interactions between humans and ecosystems. Although the participants agreed that a better understanding of human intentions and behavior would advance ecosystem science, they felt that interdisciplinary research might gain more by training strong disciplinarians than by merging ecology and social sciences into a new field. It is concluded that conceptual models can provide an inspiring point of departure and a guiding principle for interdisciplinary group discussions. Jointly developing a model not only helped the participants to formulate questions, clarify system boundaries, and identify gaps in existing data, but also revealed the thoughts and assumptions of fellow scientists. Although the use of conceptual models will not serve all purposes, the process of model building can help scientists, policy makers, and resource managers discuss applied problems and theory among themselves and with those in other areas.

Journal ArticleDOI
15 Jan 2003
TL;DR: The Test of Memory Malingering (TOMM) is described in this paper, with particular emphasis on the historical setting within which the TOMM was developed.
Abstract: The Test of Memory Malingering (TOMM) is described with particular emphasis directed towards the historical setting within which the TOMM was developed. This includes a review of the criteria for developing a memory malingering test, and the use of the empirically derived decision rules, along with a discussion of the relative merits of empirically based vs. statistically based rules for detecting malingering. Data from a series of five experiments showing the sensitivity of the TOMM to feigned memory impairments, guidelines for interpretation of TOMM scores, and answers to frequently asked questions about the TOMM are provided. Finally, the ability of the TOMM to meet the Daubert guidelines is addressed.

Journal ArticleDOI
TL;DR: This is the first reported measurement of a urinary biomarker for DON in both animals and humans and should facilitate epidemiological studies of disease associations with this mycotoxin.

Journal ArticleDOI
TL;DR: A meta-analysis of 40 tests of relapse prevention treatment revealed moderate mean reductions in recidivism, and certain elements of the relapse prevention model yielded stronger effects than others.
Abstract: Although relapse prevention models have been applied within offender treatment, there has been little controlled outcome research evaluating their effectiveness. This meta-analysis of 40 tests of relapse prevention treatment revealed moderate mean reductions in recidivism (0.15), and certain elements of the relapse prevention model (i.e., training significant others in the program model and identifying the offense chain) yielded stronger effects than others (i.e., provision of booster/aftercare sessions and developing coping skills). Further analyses revealed that the clinically relevant and psychologically informed principles of risk, need, and general responsivity yielded the strongest reductions in recidivism. The implications for future research and treatment are discussed.

Journal ArticleDOI
TL;DR: A simplified version of a previously established test is introduced; it is exquisitely sensitive and reliable, and it provides an ideal preparation with which to assess anxiety and anxiety-altering manipulations.

Proceedings ArticleDOI
19 May 2003
TL;DR: This work investigates the use of identical tokens to break symmetry so that the two mobile agents can run the same deterministic algorithm and derives the lower and upper bounds for the time and memory complexity of the rendezvous search problem with various parameter sets.
Abstract: In the rendezvous search problem, two mobile agents must move along the n nodes of a network so as to minimize the time required to meet or rendezvous. When the mobile agents are identical and the network is anonymous, however, the resulting symmetry can make the problem impossible to solve. Symmetry is typically broken by having the mobile agents run either a randomized algorithm or different deterministic algorithms. We investigate the use of identical tokens to break symmetry so that the two mobile agents can run the same deterministic algorithm. After deriving the explicit conditions under which identical tokens can be used to break symmetry on the n node ring, we derive the lower and upper bounds for the time and memory complexity of the rendezvous search problem with various parameter sets. While these results suggest a possible tradeoff between the mobile agents' memory and the time complexity of the rendezvous search problem, we prove that this tradeoff is limited.

Journal ArticleDOI
TL;DR: The results indicated that area-based isolation metrics generally predict patch immigration more reliably than distance-based isolations and should be included in future patch isolation studies.
Abstract: We examined the effects of matrix structure and movement responses of organisms on the relationships between 7 patch isolation metrics and patch immigration. Our analysis was based on simulating movement behaviour of two generic disperser types (specialist and generalist) across mosaic landscapes containing three landcover types: habitat, hospitable matrix and inhospitable matrix. Movement, mortality and boundary crossing probabilities of simulated individuals were linked to the landcover and boundary types in the landscapes. The results indicated that area-based isolation metrics generally predict patch immigration more reliably than distance-based isolation metrics. Relationships between patch isolation metrics and patch immigration varied between the two generic disperser types and were affected by matrix composition or matrix fragmentation. Patch immigration was always affected by matrix composition but not by matrix fragmentation. Our results do not encourage the generic use of patch isolation metrics as a substitute for patch immigration, in particular in metapopulation models where generic use may result in wrong projections of the survival probability of metapopulations. However, our examination of the factors affecting the predictive potential of patch isolation metrics should facilitate interpretation and comparison of existing patch isolation studies. Future patch isolation studies should include information about landscape structure and the dispersal distance and dispersal behaviour of the organism of interest.

Journal ArticleDOI
TL;DR: Evidence is provided that both phonological and visual aspects of working memory are involved in mental arithmetic but that the role of each working memory component will depend on such factors as presentation format.
Abstract: The goal of the present research was to examine the role of working memory in mental arithmetic. Adults (n = 96) solved multidigit arithmetic problems (e.g., 52 + 3; 3 + 52) alone and in combination with either a phonological memory load (i.e., nonwords, such as gup) or a visual memory load (i.e., random pattern of asterisks). The participants solved problems presented in a vertical format significantly faster than problems presented in a horizontal format. They also solved double digit first problems (e.g., 52 + 3) more quickly than the reverse (e.g., 3 + 52), but only when the problems were presented horizontally. Performance was worse in the phonological load condition than in the visual load condition for the participants who solved problems presented horizontally, whereas performance was worse in the visual load condition than in the phonological load condition when problems were presented vertically. The present research provides evidence that both phonological and visual aspects of working memory are involved in mental arithmetic but that the role of each working memory component will depend on such factors as presentation format.

Journal ArticleDOI
TL;DR: A random-effects weighted mean method was used to estimate the intervention effect in articles indexed only in Embase compared with those indexed elsewhere, and whether searching Embase yields additional trials that influence a meta-analysis is explored.

Journal ArticleDOI
TL;DR: Overall, this work supports the contention that warning signals are selected for their reliability as indicators of defense rather than to capitalize on any inherent educational biases of predators.
Abstract: It is widely argued that defended prey have tended to evolve conspicuous traits because predators more readily learn to avoid defended prey when they are conspicuous. However, a rival theory proposes that defended prey have evolved such characters because it allows them to be distinguished from undefended prey. Here we investigated how the attributes of defended (unprofitable) and undefended (profitable) computer‐generated prey species tended to evolve when they were subject to selection by foraging humans. When cryptic forms of defended and undefended species were similar in appearance but their conspicuous forms were not, defended prey became conspicuous while undefended prey remained cryptic. Indeed, in all of our experiments, defended prey invariably evolved any trait that enabled them to be distinguished from undefended prey, even if such traits were cryptic. When conspicuous mutants of defended prey were extremely rare, they frequently overcame their initial disadvantage by chance. When Ba...

Journal ArticleDOI
TL;DR: This work provides a complete characterization of the computational complexity of scalar aggregation queries in databases that may violate a given set of functional dependencies and shows how tractability can be improved in several special cases.

Journal ArticleDOI
01 Sep 2003 - Stress
TL;DR: Using profile analysis and multivariate techniques, it is demonstrated that coping profiles comprising multiple strategies distinguished between various mood states (dysphoria, anxiety, major depression, dysthymia and obsessive-compulsive disorder).
Abstract: As numerous coping strategies to deal with stressors can be used concurrently or sequentially, it may be productive to consider coping from a broad, systemic perspective. Using profile analysis and multivariate techniques, we demonstrated that coping profiles comprising multiple strategies distinguished between various mood states (dysphoria, anxiety, major depression, dysthymia and obsessive-compulsive disorder (OCD)). Generally, affective disturbances were associated with increased levels of rumination, cognitive distraction and emotion-focused coping (emotional expression, other-blame, self-blame, emotional containment and passive resignation) coupled with diminished problem solving and social support seeking. These coping profiles, however, varied as a function of anxiety vs. dysphoria, and severity of dysphoric symptoms, although the profile of dysphoric individuals was similar to that of clinically diagnosed dysthymic and major depressive patients. While coping profiles were generally stable over time (6 months), improvement or deterioration of mood was accompanied by corresponding alterations of coping profiles. Importantly, coping profile was not simply a correlate of dysphoric mood, but was also found to be an antecedent condition that favored the evolution of more severe affective problems. It is suggested that a multidimensional approach may prove useful in understanding coping as a dynamic system, and may have implications for clinical intervention.