Showing papers by "University of Waterloo published in 2000"


Journal ArticleDOI
TL;DR: A review of the literature suggests there are two major aspects of responsiveness: one characterizes the ability of a measure to change over a prespecified time frame, and the other reflects the extent to which change in a measure relates to corresponding change in a reference measure of clinical or health status.

1,310 citations


OtherDOI
TL;DR: In this article, the amplitude amplification algorithm was proposed to find a good solution after an expected number of applications of the algorithm and its inverse proportional to $1/\sqrt{a}$, a quadratic improvement over the $1/a$ repetitions expected classically.
Abstract: Consider a Boolean function $\chi: X \to \{0,1\}$ that partitions set $X$ between its good and bad elements, where $x$ is good if $\chi(x)=1$ and bad otherwise. Consider also a quantum algorithm $\mathcal{A}$ such that $\mathcal{A} |0\rangle= \sum_{x\in X} \alpha_x |x\rangle$ is a quantum superposition of the elements of $X$, and let $a$ denote the probability that a good element is produced if $\mathcal{A} |0\rangle$ is measured. If we repeat the process of running $\mathcal{A}$, measuring the output, and using $\chi$ to check the validity of the result, we shall expect to repeat $1/a$ times on the average before a solution is found. *Amplitude amplification* is a process that allows one to find a good $x$ after an expected number of applications of $\mathcal{A}$ and its inverse which is proportional to $1/\sqrt{a}$, assuming algorithm $\mathcal{A}$ makes no measurements. This is a generalization of Grover's searching algorithm in which $\mathcal{A}$ was restricted to producing an equal superposition of all members of $X$ and we had a promise that a single $x$ existed such that $\chi(x)=1$. Our algorithm works whether or not the value of $a$ is known ahead of time. In case the value of $a$ is known, we can find a good $x$ after a number of applications of $\mathcal{A}$ and its inverse which is proportional to $1/\sqrt{a}$ even in the worst case. We show that this quadratic speedup can also be obtained for a large family of search problems for which good classical heuristics exist. Finally, as our main result, we combine ideas from Grover's and Shor's quantum algorithms to perform amplitude estimation, a process that allows one to estimate the value of $a$. We apply amplitude estimation to the problem of *approximate counting*, in which we wish to estimate the number of $x\in X$ such that $\chi(x)=1$. We obtain optimal quantum algorithms in a variety of settings.

1,276 citations
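
As a rough illustration of the scaling described above, the sketch below (our illustration, not code from the paper; all names and parameters are ours) compares the expected number of classical repeat-until-success runs, about $1/a$, with the ideal amplitude-amplification iteration count of roughly $\pi/(4\arcsin\sqrt{a})$, which grows like $1/\sqrt{a}$.

```python
import math
import random

def classical_expected_tries(a, trials=100_000):
    """Empirically estimate the expected number of runs before a 'good'
    outcome, when each run independently succeeds with probability a."""
    total = 0
    for _ in range(trials):
        tries = 1
        while random.random() >= a:
            tries += 1
        total += tries
    return total / trials          # converges to 1/a

def amplification_iterations(a):
    """Ideal number of amplitude-amplification iterations when a is known:
    about pi / (4 * arcsin(sqrt(a))), i.e. proportional to 1/sqrt(a)."""
    theta = math.asin(math.sqrt(a))
    return max(1, math.floor(math.pi / (4 * theta)))

for a in (0.1, 0.01, 0.001):
    print(f"a={a}: classical ~{classical_expected_tries(a):.1f} runs, "
          f"amplified ~{amplification_iterations(a)} iterations")
```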


Journal ArticleDOI
TL;DR: The SPME technique can be used routinely in combination with gas chromatography (GC), GC-mass spectrometry (GC-MS), high-performance liquid chromatography (HPLC) or LC-MS, and can improve detection limits.

1,023 citations


Journal ArticleDOI
TL;DR: The balanced scorecard, as discussed by the authors, is a new tool that complements traditional measures of business unit performance with a diverse set of measures, including financial performance, customer relations, internal business processes, and learning and growth.
Abstract: The balanced scorecard is a new tool that complements traditional measures of business unit performance. The scorecard contains a diverse set of performance measures, including financial performance, customer relations, internal business processes, and learning and growth. Advocates of the balanced scorecard suggest that each unit in the organization should develop and use its own scorecard, choosing measures that capture the unit's business strategy. Our study examines judgmental effects of the balanced scorecard—specifically, how balanced scorecards that include some measures common to multiple units and other measures that are unique to a particular unit affect superiors' evaluations of that unit's performance. Our test shows that only the common measures affect the superiors' evaluations. We discuss the implications of this result for research and practice.

913 citations


Book
04 Jun 2000
TL;DR: The results of these studies provide the basis for a revision of the CCC theory that specifies more clearly the circumstances in which children will have difficulty using rules at various levels of complexity, provides a more detailed account of how to determine the complexity of rules required in a task, takes account of both the activation and inhibition of rules as a function of experience, and highlights the importance of taking intentionality seriously in the study of executive function.
Abstract: According to the Cognitive Complexity and Control (CCC) theory, the development of executive function can be understood in terms of age-related increases in the maximum complexity of the rules children can formulate and use when solving problems. This Monograph describes four studies (9 experiments) designed to test hypotheses derived from the CCC theory and from alternative theoretical perspectives on the development of executive function (memory accounts, inhibition accounts, and redescription accounts). Each study employed a version of the Dimensional Change Card Sort (DCCS), in which children are required first to sort cards by one pair of rules (e.g., color rules: "If red then here, if blue then there"), and then sort the same cards by another, incompatible pair of rules (e.g., shape rules). Study 1 found that although most 3- to 4-year-olds failed the standard version of this task (i.e., they perseverated on the preswitch rules during the postswitch phase), they usually performed well when they were required to use four rules (including bidimensional rules) and those rules were not in conflict (i.e., they did not require children to respond in two different ways to the same test card). These findings indicate that children's perseveration cannot be attributed in a straightforward fashion to limitations in children's memory capacity. Study 2 examined the circumstances in which children can use conflicting rules. Three experiments demonstrated effects of rule dimensionality (uni- vs. bidimensional rules) but no effects of stimulus characteristics (1 vs. 2 test cards; spatially integrated vs. separated stimuli). Taken together, these studies suggest that conflict among rules is a key determinant of difficulty, but that conflict interacts with dimensionality. Study 3 examined what types of conflict pose problems for 3- to 4-year-olds by comparing performance on standard, Partial Change, and Total Change versions of the DCCS. Results revealed effects of conflict at the level of specific rules (e.g., "If red, then there"), rather than specific stimulus configurations or dimensions per se, indicating that activation of the preswitch rules persists into the postswitch phase. Study 4 examined whether negative priming also contributes to difficulty on the DCCS. Two experiments suggested that the active selection of preswitch rules against a competing alternative results in the lasting suppression of the alternative. Taken together, the results of these studies provide the basis for a revision of the CCC theory (CCC-r) that specifies more clearly the circumstances in which children will have difficulty using rules at various levels of complexity, provides a more detailed account of how to determine the complexity of rules required in a task, takes account of both the activation and inhibition of rules as a function of experience, and highlights the importance of taking intentionality seriously in the study of executive function.

851 citations


Journal ArticleDOI
TL;DR: Two bacterial strains were used to inoculate tomato, canola, and Indian mustard seeds which were then grown in soil in the presence of either nickel, lead, or zinc, and both were effective at relieving a portion of the growth inhibition caused by the metals.
Abstract: Kluyvera ascorbata SUD165 and a siderophore-overproducing mutant of this bacterium, K. ascorbata SUD165/26, were used to inoculate tomato, canola, and Indian mustard seeds which were then grown in ...

777 citations


Journal ArticleDOI
TL;DR: It is concluded that N or P limitation of algal growth is a product of the TN and TP concentrations and the TN:TP ratio rather than a product of whether the system of study is marine or freshwater.
Abstract: Total nitrogen (TN) and total phosphorus (TP) measurements and contemporaneous measurements of chlorophyll a (Chl a) and phytoplankton nutrient deficiency have been made across a broad range of lakes and ocean sites using common methods. The ocean environment was nutrient rich in terms of TN and TP when compared with most lakes in the study, although Lake Victoria had the highest values of TN and TP. TN concentrations in lakes rose rapidly with TP concentrations, from low values to TN concentrations that are similar to those associated with the ocean sites. In contrast, the TN concentrations in the oceans were relatively homogeneous and independent of TP concentrations. The hyperbolic shape of the TN:TP relationship created a broad range of TN:TP values for both lakes and oceans. The TN:TP ratios of the surface ocean sites were usually well in excess of the Redfield ratio that is noted in the deep ocean. Phytoplankton biomass, as indicated by Chl a, was strongly dependent upon TP in the lakes, and there was a weaker relationship with TN. Oceanic Chl a values showed a positive relationship with TP, but at much higher TP values than were observed in the lakes; there was no relation with TN. P-deficient phytoplankton growth was inferred using independent indicators when TN:TP was above 50 (molar). At intermediate TN:TP ratios, either N or P can become deficient. We conclude that N or P limitation of algal growth is a product of the TN and TP concentrations and the TN:TP ratio rather than a product of whether the system of study is marine or freshwater.

774 citations
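
Because the abstract reasons in terms of molar TN:TP ratios (e.g., relative to the Redfield ratio of about 16:1), a small conversion from mass concentrations to a molar ratio may be useful; the snippet below is our illustration using standard atomic masses, with made-up example concentrations rather than values from the study.

```python
# Convert mass-based TN and TP concentrations (same mass units, e.g. ug/L)
# into a molar TN:TP ratio for comparison with thresholds quoted in molar units.
N_MOLAR_MASS = 14.007   # g/mol of nitrogen
P_MOLAR_MASS = 30.974   # g/mol of phosphorus

def molar_tn_tp(tn_mass, tp_mass):
    return (tn_mass / N_MOLAR_MASS) / (tp_mass / P_MOLAR_MASS)

# Example (hypothetical): TN = 400 ug/L, TP = 20 ug/L  ->  TN:TP ~ 44 (molar)
print(round(molar_tn_tp(400, 20), 1))
```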


Journal ArticleDOI
TL;DR: The main objective of this contribution is to describe the development of the concepts, techniques and devices associated with solid-phase microextraction as a response to the evolution of understanding of the fundamental principles behind this technique.

740 citations


Journal ArticleDOI
TL;DR: Work in the cognitive neurosciences is used to explore two nonexclusive hypotheses about the putative links between naming speed and reading processes and about the sources of disruption that may cause subtypes of reading disabilities predicted by the double-deficit hypothesis.
Abstract: This article integrates the findings in the special issue with a comprehensive review of the evidence for seven central questions about the role of naming-speed deficits in developmental reading disabilities. Cross-sectional, longitudinal, and cross-linguistic research on naming-speed processes, timing processes, and reading is presented. An evolving model of visual naming illustrates areas of difference and areas of overlap between naming speed and phonology in their underlying requirements. Work in the cognitive neurosciences is used to explore two nonexclusive hypotheses about the putative links between naming speed and reading processes and about the sources of disruption that may cause subtypes of reading disabilities predicted by the double-deficit hypothesis. Finally, the implications of the work in this special issue for diagnosis and intervention are elaborated.

726 citations


Journal ArticleDOI
01 Mar 2000
TL;DR: This paper surveys the development of elliptic curve cryptosystems from their inception in 1985 by Koblitz and Miller to present day implementations.
Abstract: Since the introduction of public-key cryptography by Diffie and Hellman in 1976, the potential for the use of the discrete logarithm problem in public-key cryptosystems has been recognized. Although the discrete logarithm problem as first employed by Diffie and Hellman was defined explicitly as the problem of finding logarithms with respect to a generator in the multiplicative group of the integers modulo a prime, this idea can be extended to arbitrary groups and, in particular, to elliptic curve groups. The resulting public-key systems provide relatively small block size, high speed, and high security. This paper surveys the development of elliptic curve cryptosystems from their inception in 1985 by Koblitz and Miller to present day implementations.

640 citations
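
The basic operation in such elliptic curve systems is scalar multiplication of a point. The toy sketch below (our illustration; it uses a small textbook curve over F_17, not any standardized cryptographic curve) shows affine point addition and left-to-right double-and-add.

```python
# Toy double-and-add scalar multiplication on y^2 = x^3 + a*x + b over F_p.
# Parameters are illustrative only, far too small for real cryptography.
p, a, b = 17, 2, 2          # curve: y^2 = x^3 + 2x + 2 over F_17
O = None                    # point at infinity (group identity)

def point_add(P, Q):
    if P is O:
        return Q
    if Q is O:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right double-and-add: O(log k) point operations."""
    R = O
    for bit in bin(k)[2:]:
        R = point_add(R, R)
        if bit == "1":
            R = point_add(R, P)
    return R

G = (5, 1)                  # a point on the curve above
print(scalar_mult(7, G))
```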


Proceedings ArticleDOI
11 Oct 2000
TL;DR: The authors explore the evolution of the Linux kernel both at the system level and within the major subsystems, and they discuss why they think Linux continues to exhibit such strong growth.
Abstract: Most studies of software evolution have been performed on systems developed within a single company using traditional management techniques. With the widespread availability of several large software systems that have been developed using an "open source" development approach, we now have a chance to examine these systems in detail, and see if their evolutionary narratives are significantly different from commercially developed systems. The paper summarizes our preliminary investigations into the evolution of the best known open source system: the Linux operating system kernel. Because Linux is large (over two million lines of code in the most recent version) and because its development model is not as tightly planned and managed as most industrial software processes, we had expected to find that Linux was growing more slowly as it got bigger and more complex. Instead, we have found that Linux has been growing at a super-linear rate for several years. We explore the evolution of the Linux kernel both at the system level and within the major subsystems, and we discuss why we think Linux continues to exhibit such strong growth.
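
One simple way to probe a super-linear growth claim on release data is to compare linear and quadratic fits of size against time; the sketch below is our illustration with placeholder numbers, not the paper's measurements.

```python
import numpy as np

# Hypothetical (days since first release, total lines of code) pairs;
# placeholders for illustration only, not data from the study.
days = np.array([0, 200, 400, 600, 800, 1000, 1200], dtype=float)
loc = np.array([0.17, 0.25, 0.40, 0.60, 0.90, 1.4, 2.0]) * 1e6

def rss(degree):
    """Residual sum of squares of a least-squares polynomial fit."""
    coeffs = np.polyfit(days, loc, degree)
    return float(np.sum((np.polyval(coeffs, days) - loc) ** 2))

print("linear RSS:   ", rss(1))
print("quadratic RSS:", rss(2))   # markedly lower when growth is super-linear
```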

Journal ArticleDOI
TL;DR: Permeable reactive barriers are an emerging alternative to traditional pump-and-treat systems for groundwater remediation; as discussed by the authors, the technology has progressed rapidly from laboratory bench-scale studies to full-scale implementation.

Journal ArticleDOI
TL;DR: In this article, it is noted that the constitutive and operational definitions of group cohesion have varied across the various disciplines in group dynamics, and a recently suggested conceptualization of cohesion is discussed.
Abstract: The constitutive and operational definitions of group cohesion have varied across various disciplines in group dynamics. Recently, it has been suggested that a conceptualization of cohesion propose...

Journal ArticleDOI
TL;DR: It is shown that, subject to some mild restrictions, a grammar-based code is a universal code with respect to the family of finite-state information sources over the finite alphabet.
Abstract: We investigate a type of lossless source code called a grammar-based code, which, in response to any input data string x over a fixed finite alphabet, selects a context-free grammar G/sub x/ representing x in the sense that x is the unique string belonging to the language generated by G/sub x/. Lossless compression of x takes place indirectly via compression of the production rules of the grammar G/sub x/. It is shown that, subject to some mild restrictions, a grammar-based code is a universal code with respect to the family of finite-state information sources over the finite alphabet. Redundancy bounds for grammar-based codes are established. Reduction rules for designing grammar-based codes are presented.
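
To make the idea of representing a string by a context-free grammar concrete, here is a minimal digram-replacement sketch in the spirit of Re-Pair (our illustration, not the authors' construction): it repeatedly replaces the most frequent adjacent pair of symbols with a fresh nonterminal, yielding production rules whose expansion regenerates the input string exactly. The compressed representation is then obtained by encoding the start sequence and the production rules rather than the raw string.

```python
from collections import Counter

def build_grammar(s):
    """Greedy digram replacement. Returns (start_sequence, rules) such that
    expanding nonterminals in start_sequence via rules yields s exactly."""
    seq, rules, next_id = list(s), {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = list(pair)
        out, i = [], 0
        while i < len(seq):                       # non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(symbols, rules):
    return "".join(expand(rules[x], rules) if x in rules else x for x in symbols)

start, rules = build_grammar("abababab")
assert expand(start, rules) == "abababab"
print(start, rules)    # e.g. ['R1', 'R1'] {'R0': ['a', 'b'], 'R1': ['R0', 'R0']}
```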

Book ChapterDOI
17 Aug 2000
TL;DR: This paper presents an extensive and careful study of the software implementation on workstations of the NIST-recommended elliptic curves over binary fields and the results of the implementation in C on a Pentium II 400MHz workstation.
Abstract: This paper presents an extensive and careful study of the software implementation on workstations of the NIST-recommended elliptic curves over binary fields. We also present the results of our implementation in C on a Pentium II 400MHz workstation.
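
The arithmetic underlying such implementations is carry-free polynomial arithmetic over GF(2^m); the sketch below (our illustration, using the tiny field GF(2^4) with reduction polynomial x^4 + x + 1 rather than a NIST field) shows schoolbook multiplication with reduction.

```python
# Multiplication in a small binary field GF(2^4); field elements are stored as
# integer bit masks (bit i = coefficient of x^i). Toy parameters only; the
# NIST binary-field curves use fields such as GF(2^163) or GF(2^233).
M = 4
REDUCTION = 0b10011                     # x^4 + x + 1

def gf2m_mul(x, y, m=M, red=REDUCTION):
    """Carry-free (XOR-based) schoolbook multiply, then reduce."""
    result = 0
    for i in range(y.bit_length()):
        if (y >> i) & 1:
            result ^= x << i            # add x shifted by i; no carries in GF(2)
    for i in range(result.bit_length() - 1, m - 1, -1):
        if (result >> i) & 1:
            result ^= red << (i - m)    # cancel the degree-i term
    return result

print(bin(gf2m_mul(0b0010, 0b1000)))    # x * x^3 = x^4, reduced to x + 1 -> 0b11
```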


Journal ArticleDOI
TL;DR: This review will attempt to provide an overview as well as a theoretical and practical understanding of the use of microextraction technologies for drug analysis, with particular emphasis on the effect various sample matrices have on extraction characteristics.

Journal ArticleDOI
TL;DR: A new CUSUM procedure is described that adjusts for each patient's pre-operative risk of surgical failure through the use of a likelihood-based scoring method; the procedure is therefore ideally suited for settings where there is a variable mix of patients over time.
Abstract: The cumulative sum (CUSUM) procedure is a graphical method that is widely used for quality monitoring in industrial settings. More recently it has been used to monitor surgical outcomes whereby it 'signals' if sufficient evidence has accumulated that there has been a change in the surgical failure rate. A limitation of the standard CUSUM procedure in this context is that since it is simply based on the observed surgical outcomes, it may signal as a result of changes in the referral pattern, such as an increased proportion of high-risk patients, rather than due to a change in the actual surgical performance. We describe a new CUSUM procedure that adjusts for each patient's pre-operative risk of surgical failure through the use of a likelihood-based scoring method. The procedure is therefore ideally suited for settings where there is a variable mix of patients over time.
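
A minimal sketch of a likelihood-ratio-based, risk-adjusted CUSUM in the spirit of the procedure described above (our illustration; the odds ratio under the alternative, the control limit, and the example data are assumed placeholders, not values from the paper):

```python
import math

def risk_adjusted_cusum(outcomes, risks, odds_ratio=2.0, limit=4.5):
    """outcomes[i] is 1 for a surgical failure, 0 for a success; risks[i] is
    that patient's pre-operative failure probability. Each step adds the
    log-likelihood ratio comparing 'odds of failure multiplied by odds_ratio'
    against 'performance as predicted', truncating at zero, and the chart
    signals when the cumulative sum exceeds `limit` (a placeholder value)."""
    s, path = 0.0, []
    for y, p in zip(outcomes, risks):
        p1 = odds_ratio * p / (1 - p + odds_ratio * p)   # risk under the alternative
        w = math.log(p1 / p) if y == 1 else math.log((1 - p1) / (1 - p))
        s = max(0.0, s + w)
        path.append(round(s, 3))
        if s > limit:
            print("signal: evidence of an increased failure rate")
            s = 0.0
    return path

# Toy usage with made-up outcomes and pre-operative risks
print(risk_adjusted_cusum([0, 1, 0, 1, 1], [0.05, 0.20, 0.10, 0.30, 0.25]))
```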

Journal ArticleDOI
TL;DR: This paper develops a model of auditor-client accounting negotiation, using the elements of negotiation examined in the behavioral negotiation literature, elaborated to include accounting contextual features indicated in the accounting literature and suggested by interviews with senior practitioners; a questionnaire structured according to the model is used to describe the elements, contextual features, and associations between the two groups in a sample of real negotiations chosen by 93 experienced audit partners.
Abstract: We develop a model of auditor-client accounting negotiation, using the elements of negotiation examined in the behavioral negotiation literature, elaborated to include accounting contextual features indicated in the accounting literature and suggested by interviews with senior practitioners. We use a questionnaire structured according to the model to describe the elements, contextual features and associations between the two groups in a sample of real negotiations chosen by 93 experienced audit partners. The paper demonstrates important aspects of the sampled accounting negotiations and makes suggestions for further empirical and model development research.

Journal ArticleDOI
01 Mar 2000
TL;DR: An improved version of the Koblitz scalar multiplication algorithm, which runs 50 times faster than any previous version, is given, based on a new kind of representation of an integer, analogous to certain kinds of binary expansions.
Abstract: It has become increasingly common to implement discrete-logarithm based public-key protocols on elliptic curves over finite fields. The basic operation is scalar multiplication: taking a given integer multiple of a given point on the curve. The cost of the protocols depends on that of the elliptic scalar multiplication operation. Koblitz introduced a family of curves which admit especially fast elliptic scalar multiplication. His algorithm was later modified by Meier and Staffelbach. We give an improved version of the algorithm which runs 50 times faster than any previous version. It is based on a new kind of representation of an integer, analogous to certain kinds of binary expansions. We also outline further speedups using precomputation and storage.
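
The representation used in the paper is a curve-specific, complex-base ("τ-adic") analogue of a binary expansion. Purely as an analogy, the sketch below (our illustration) computes the ordinary non-adjacent form (NAF) of an integer, a signed binary-like recoding in which no two adjacent digits are nonzero, which reduces the number of additions needed in double-and-add scalar multiplication.

```python
def naf(k):
    """Non-adjacent form of a positive integer: digits in {-1, 0, 1},
    least significant digit first, with no two adjacent nonzero digits."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)      # +1 if k = 1 (mod 4), -1 if k = 3 (mod 4)
            k -= d
        else:
            d = 0
        digits.append(d)
        k //= 2
    return digits

d = naf(7)                               # 7 = 8 - 1 -> [-1, 0, 0, 1]
assert sum(di * 2 ** i for i, di in enumerate(d)) == 7
print(d)
```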

Journal ArticleDOI
TL;DR: In this paper, a mathematical model has been formulated for the performance and operation of a single polymer electrolyte membrane fuel cell, which incorporates all the essential fundamental physical and electrochemical processes occurring in the membrane electrolyte, cathode catalyst layer, electrode backing and flow channel.

Journal ArticleDOI
TL;DR: It is suggested that the spin-ice behavior in Ising pyrochlore systems is due to long-range dipolar interactions, and that the nearest-neighbor exchange in Dy2Ti2O7 is antiferromagnetic.
Abstract: Recent experiments suggest that the Ising pyrochlore magnets ${\mathrm{Ho}}_{2}{\mathrm{Ti}}_{2}{\mathrm{O}}_{7}$ and ${\mathrm{Dy}}_{2}{\mathrm{Ti}}_{2}{\mathrm{O}}_{7}$ display qualitative properties of the nearest-neighbor ``spin ice'' model. We discuss the dipolar energy scale present in both these materials and discuss how spin-ice behavior can occur despite the presence of long-range dipolar interactions. We present results of numerical simulations and a mean field analysis of Ising pyrochlore systems. Based on our quantitative theory, we suggest that the spin-ice behavior in these systems is due to long-range dipolar interactions, and that the nearest-neighbor exchange in ${\mathrm{Dy}}_{2}{\mathrm{Ti}}_{2}{\mathrm{O}}_{7}$ is antiferromagnetic.
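
For orientation, a dipolar spin-ice model of the kind analyzed in such work combines nearest-neighbor exchange with long-range dipole-dipole coupling between the local ⟨111⟩ Ising moments; schematically (our restatement of the generic form, not an equation quoted from the paper, with $J$ the exchange constant, $D$ the dipolar coupling, $r_{nn}$ the nearest-neighbor distance, and $\hat{\mathbf{r}}_{ij}$ the unit vector joining sites $i$ and $j$):

$$ H = -J \sum_{\langle ij \rangle} \mathbf{S}_i \cdot \mathbf{S}_j \; + \; D\, r_{nn}^{3} \sum_{j>i} \frac{\mathbf{S}_i \cdot \mathbf{S}_j - 3\,(\mathbf{S}_i \cdot \hat{\mathbf{r}}_{ij})(\mathbf{S}_j \cdot \hat{\mathbf{r}}_{ij})}{|\mathbf{r}_{ij}|^{3}} $$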

Journal ArticleDOI
TL;DR: In this paper, a survey of the central 700 arcmin² region of the ρ Ophiuchi molecular cloud at 850 μm using the Submillimeter Common-User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope is presented.
Abstract: We present results from a survey of the central 700 arcmin² region of the ρ Ophiuchi molecular cloud at 850 μm using the Submillimeter Common-User Bolometer Array (SCUBA) on the James Clerk Maxwell Telescope. Using the clump-finding procedure developed by Williams et al., we identify 55 independent objects and compute size, flux, and degree of central concentration. Comparison with isothermal, pressure-confined, self-gravitating Bonnor-Ebert spheres implies that the clumps have internal temperatures of 10-30 K and surface pressures P/k = 10⁶-10⁷ K cm⁻³, consistent with the expected average pressure in the ρ Ophiuchi central region, P/k ~ 2 × 10⁷ K cm⁻³. The clump masses span 0.02-6.3 M☉ assuming a dust temperature T_d ~ 20 K and a dust emissivity κ₈₅₀ = 0.01 cm² g⁻¹. The distribution of clump masses is well characterized by a broken power law, N(M) ∝ M⁻α, with α = 1.0-1.5 for M > 0.6 M☉ and α = 0.5 for M ≤ 0.6 M☉, although significant incompleteness may affect the slope at the lower mass end. This mass function is in general agreement with the ρ Ophiuchi clump mass function derived at 1.3 mm by Motte et al. The two-point correlation function of the clump separations is measured and reveals clustering on size scales r < 3 × 10⁴ AU with a radial power-law exponent γ = 0.75.

Journal ArticleDOI
TL;DR: Overall, using the Gabor filter magnitude response, given a frequency bandwidth and spacing of one octave and an orientation bandwidth and spacing of 30°, augmented by a measure of the texture complexity, generated preferred results.

Journal ArticleDOI
TL;DR: It is concluded that fatty acid uptake is subject to short term regulation by muscle contraction and involves the translocation of FAT/CD36 from intracellular stores to the sarcolemma, analogous to the regulation of glucose uptake by GLUT-4.

Journal ArticleDOI
TL;DR: The problem of capacitor allocation for loss reduction in electric distribution systems has been extensively researched over the past several decades; this paper describes the evolution of that research and evaluates the practicality and accuracy of the capacitor placement algorithms in the literature.
Abstract: The problem of capacitor allocation for loss reduction in electric distribution systems has been extensively researched over the past several decades. This paper describes the evolution of the research and provides an evaluation of the practicality and accuracy of the capacitor placement algorithms in the literature. The intent of this paper is not to provide a complete survey of all the literature in capacitor allocation, but to provide researchers and utility engineers further insight into the choices of available capacitor allocation techniques and their respective merits and shortcomings. Furthermore, this paper serves as a useful and practical guideline to assist in the implementation of an appropriate capacitor allocation technique.

Journal ArticleDOI
TL;DR: A two-column limestone reactor was designed to reduce fluoride concentrations in wastewaters to below the maximum contaminant level (MCL) of 4 mg/L.
Abstract: A two-column limestone reactor has been designed to reduce fluoride concentrations in wastewaters to below the maximum contaminant level (MCL) of 4 mg/L. The reactor functions by forcing calcite (CaCO3) to dissolve and fluorite (CaF2) to precipitate in the first column. The second column is not necessary to remove fluoride but is used to precipitate the calcite dissolved in the first column. This returns the treated water to its approximate initial composition. In laboratory experiments, the fluoride concentrations of effluents from feedwaters containing initial F amounts of up to 109 mg/L were brought below the MCL of 4 mg/L with a porewater residence time within the column of 2 h. Profile sampling shows that fluoride is reduced from 109 to 8 mg/L after only 35 min within the reactor. The major advantage of this potential technology over existing liming and ion exchange methods is that system monitoring is minimal, regular column regeneration is not required, and chemicals are not permane...
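
The mechanism described above can be summarized by two coupled reactions in the first column, calcite dissolution supplying dissolved calcium and fluorite precipitation consuming fluoride (our generic rendering, not equations quoted from the paper):

$$ \mathrm{CaCO_3(s) + H^+ \longrightarrow Ca^{2+} + HCO_3^-} $$
$$ \mathrm{Ca^{2+} + 2\,F^- \longrightarrow CaF_2(s)} $$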

Journal ArticleDOI
TL;DR: Radiocarbon-dated macrofossils are used to document Holocene treeline history across northern Russia (including Siberia); boreal forest development in this region commenced by 10,000 yr B.P.

Journal ArticleDOI
TL;DR: In this article, a mathematical model has been developed to understand the principal processes of SPME by applying basic fundamental principles of thermodynamics and diffusion theory, and the model assumes idealized conditions and is limited to air, liquid, or headspace above liquid sampling.
Abstract: The main objective of this contribution is to describe the fundamental concepts associated with solid-phase microextraction (SPME). Theory provides insight when developing SPME methods and identifies parameters for rigorous control and optimization. A mathematical model has been developed to understand the principal processes of SPME by applying basic fundamental principles of thermodynamics and diffusion theory. The model assumes idealized conditions and is limited to air, liquid, or headspace above liquid sampling. Theory for ideal cases can be quite accurate for trace concentrations in simple matrices such as air or drinking water at ambient conditions when secondary factors such as thermal expansion of polymers and changes in diffusion coefficients because of solutes in polymers can be neglected. When conditions are more complex, theory for ideal cases still efficiently estimates general relationships between parameters.
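
The central equilibrium relation usually derived in SPME theory of this kind relates the amount of analyte extracted to the coating/sample distribution constant and the two phase volumes. In the commonly quoted form (our restatement, with $K_{fs}$ the fiber-coating/sample distribution constant, $V_f$ and $V_s$ the coating and sample volumes, and $C_0$ the initial analyte concentration in the sample):

$$ n = \frac{K_{fs} V_f V_s C_0}{K_{fs} V_f + V_s} $$

When $V_s \gg K_{fs} V_f$, this reduces to $n \approx K_{fs} V_f C_0$, so the amount extracted becomes essentially independent of the sample volume.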

Journal ArticleDOI
TL;DR: Performing curl-ups on labile surfaces changes both the level of muscle activity and the way that the muscles coactivate to stabilize the spine and the whole body, suggesting a much higher demand on the motor control system, which may be desirable for specific stages in a rehabilitation program.
Abstract: Background and Purpose. With the current interest in stability training for the injured low back, the use of labile (movable) surfaces, underneath the subject, to challenge the motor control system is becoming more popular. Little is known about the modulating effects of these surfaces on muscle activity. The purpose of this study was to establish the degree of modulating influence of the type of surface (whether stable or labile) on the mechanics of the abdominal wall. In this study, the amplitude of muscle activity together with the way that the muscles coactivated due to the type of surface under the subject were of interest. Subjects. Eight men (mean age=23.3 years [SD=4.3], mean height=177.6 cm [SD=3.4], mean weight=72.6 kg [SD=8.7]) volunteered to participate in the study. All subjects were in good health and reported no incidence of acute or chronic low back injury or prolonged back pain prior to this experiment. Methods. All subjects were requested to perform 4 different curl-up exercises—1 on a stable surface and the other 3 on varying labile surfaces. Electromyographic signals were recorded from 4 different abdominal sites on the right and left sides of the body and normalized to maximal voluntary contraction (MVC) amplitudes. Results. Performing curl-up exercises on labile surfaces increased abdominal muscle activity (eg, for curl-up on a stable surface, rectus abdominis muscle activity was 21% of MVC and external oblique muscle activity was 5% of MVC; for curl-up with the upper torso on a labile ball, rectus abdominis muscle activity was 35% of MVC and external oblique muscle activity was 10% of MVC). Furthermore, it appears that increases in external oblique muscle activity were larger than those of other abdominal muscles. Conclusion and Discussion. Performing curl-ups on labile surfaces changes both the level of muscle activity and the way that the muscles coactivate to stabilize the spine and the whole body. This finding suggests a much higher demand on the motor control system, which may be desirable for specific stages in a rehabilitation program.
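
As a small illustration of the normalization reported above (muscle activity expressed as a percentage of the maximal voluntary contraction), the sketch below (our illustration; window lengths and sample values are arbitrary) computes the RMS amplitude of a task window and scales it to the MVC trial.

```python
import math

def rms(samples):
    """Root-mean-square amplitude of an EMG window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def percent_mvc(task_window, mvc_window):
    """Task EMG amplitude expressed as a percentage of the MVC amplitude."""
    return 100.0 * rms(task_window) / rms(mvc_window)

# Toy numbers only: a task window roughly one fifth the amplitude of the MVC trial
task = [0.02, -0.03, 0.025, -0.02, 0.03]
mvc = [0.12, -0.15, 0.13, -0.11, 0.14]
print(f"{percent_mvc(task, mvc):.1f}% MVC")
```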