
Showing papers by "University of Waterloo published in 2009"


Journal ArticleDOI
TL;DR: In this paper, the authors report the feasibility of approaching such capacities by creating highly ordered interwoven composites, in which a conductive mesoporous carbon framework precisely constrains sulphur nanofiller growth within its channels and generates essential electrical contact to the insulating sulphur.
Abstract: The Li-S battery has been under intense scrutiny for over two decades, as it offers the possibility of high gravimetric capacities and theoretical energy densities ranging up to a factor of five beyond conventional Li-ion systems. Herein, we report the feasibility to approach such capacities by creating highly ordered interwoven composites. The conductive mesoporous carbon framework precisely constrains sulphur nanofiller growth within its channels and generates essential electrical contact to the insulating sulphur. The structure provides access to Li+ ingress/egress for reactivity with the sulphur, and we speculate that the kinetic inhibition to diffusion within the framework and the sorption properties of the carbon aid in trapping the polysulphides formed during redox. Polymer modification of the carbon surface further provides a chemical gradient that retards diffusion of these large anions out of the electrode, thus facilitating more complete reaction. Reversible capacities up to 1,320 mA h g^-1 are attained. The assembly process is simple and broadly applicable, conceptually providing new opportunities for materials scientists for tailored design that can be extended to many different electrode materials.

5,151 citations


Journal ArticleDOI
TL;DR: This article reviews the reasons why people want to love or leave the venerable (but perhaps hoary) MSE, surveys emerging alternative signal fidelity measures, and discusses their potential application to a wide variety of problems.
Abstract: In this article, we have reviewed the reasons why we (collectively) want to love or leave the venerable (but perhaps hoary) MSE. We have also reviewed emerging alternative signal fidelity measures and discussed their potential application to a wide variety of problems. The message we are trying to send here is not that one should abandon use of the MSE nor to blindly switch to any other particular signal fidelity measure. Rather, we hope to make the point that there are powerful, easy-to-use, and easy-to-understand alternatives that might be deployed depending on the application environment and needs. While we expect (and indeed, hope) that the MSE will continue to be widely used as a signal fidelity measure, it is our greater desire to see more advanced signal fidelity measures being used, especially in applications where perceptual criteria might be relevant. Ideally, the performance of a new signal processing algorithm might be compared to other algorithms using several fidelity criteria. Lastly, we hope that we have given further motivation to the community to consider recent advanced signal fidelity measures as design criteria for optimizing signal processing algorithms and systems. It is in this direction that we believe that the greatest benefit eventually lies.
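The contrast the article draws is easy to reproduce numerically. Below is a minimal NumPy sketch (not from the article) comparing MSE against a simplified single-window SSIM; practical SSIM averages a locally windowed version, and the constants follow the common C1/C2 convention.

```python
import numpy as np

def mse(x, y):
    """Mean squared error between two equal-size images."""
    return np.mean((x.astype(float) - y.astype(float)) ** 2)

def global_ssim(x, y, data_range=255.0):
    """SSIM computed over the whole image as a single window
    (simplified; the practical index averages local windows)."""
    x, y = x.astype(float), y.astype(float)
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    num = (2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)
    den = (mu_x**2 + mu_y**2 + C1) * (x.var() + y.var() + C2)
    return num / den

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(img + rng.normal(0, 20, img.shape), 0, 255)   # additive noise
shifted = np.clip(img + 20.0, 0, 255)                         # brightness shift
print("noise: MSE %6.1f  SSIM %.3f" % (mse(img, noisy), global_ssim(img, noisy)))
print("shift: MSE %6.1f  SSIM %.3f" % (mse(img, shifted), global_ssim(img, shifted)))
```

Two distortions with comparable MSE (additive noise versus a brightness shift) typically receive very different SSIM scores, which is the article's core point about perceptual relevance.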

2,601 citations


Journal ArticleDOI
TL;DR: A taxonomy of research in self-adaptive software is presented, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area.
Abstract: Software systems dealing with distributed applications in changing environments normally require human supervision to continue operation in all conditions. These (re-)configuring, troubleshooting, and in general maintenance tasks lead to costly and time-consuming procedures during the operating phase. These problems are primarily due to the open-loop structure often followed in software development. Therefore, there is a high demand for management complexity reduction, management automation, robustness, and achieving all of the desired quality requirements within a reasonable cost and time range during operation. Self-adaptive software is a response to these demands; it is a closed-loop system with a feedback loop aiming to adjust itself to changes during its operation. These changes may stem from the software system's self (internal causes, e.g., failure) or context (external events, e.g., increasing requests from users). Such a system is required to monitor itself and its context, detect significant changes, decide how to react, and act to execute such decisions. These processes depend on adaptation properties (called self-a properties), domain characteristics (context information or models), and preferences of stakeholders. Noting these requirements, it is widely believed that new models and frameworks are needed to design self-adaptive software. This survey article presents a taxonomy, based on concerns of adaptation, that is, how, what, when and where, towards providing a unified view of this emerging area. Moreover, as adaptive systems are encountered in many disciplines, it is imperative to learn from the theories and models developed in these other areas. This survey article presents a landscape of research in self-adaptive software by highlighting relevant disciplines and some prominent research projects. This landscape helps to identify the underlying research gaps and elaborates on the corresponding challenges.
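The closed-loop structure described here (monitor, detect, decide, act) is commonly organized as a MAPE-K feedback loop. The sketch below is an illustrative skeleton only; the class name, thresholds, and simulated load sensor are all invented for the example and are not from the survey.

```python
import random

class SelfAdaptiveServer:
    """Toy MAPE-K closed loop: monitor/analyze/plan/execute over
    shared knowledge. All names and thresholds are illustrative."""

    def __init__(self):
        self.workers = 2          # the managed element's adaptation knob
        self.knowledge = []       # shared 'K': recent load samples

    def monitor(self):
        load = random.uniform(0, 100)   # stand-in for a real sensor
        self.knowledge.append(load)
        return load

    def analyze(self, load):
        recent = self.knowledge[-5:]
        avg = sum(recent) / len(recent)
        if avg > 70: return "overloaded"
        if avg < 20 and self.workers > 1: return "underloaded"
        return "ok"

    def plan(self, symptom):
        return {"overloaded": +1, "underloaded": -1}.get(symptom, 0)

    def execute(self, delta):
        self.workers = max(1, self.workers + delta)

    def run(self, steps=10):
        for _ in range(steps):
            load = self.monitor()          # Monitor itself and its context
            symptom = self.analyze(load)   # Detect significant changes
            delta = self.plan(symptom)     # Decide how to react
            self.execute(delta)            # Act to execute the decision
            print(f"load={load:5.1f} symptom={symptom:11s} workers={self.workers}")

SelfAdaptiveServer().run()
```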

1,349 citations


Journal ArticleDOI
TL;DR: This paper provides a review of the classification of imbalanced data regarding: the application domains; the nature of the problem; the learning difficulties with standard classifier learning algorithms; the learning objectives and evaluation measures; the reported research solutions; and the class imbalance problem in the presence of multiple classes.
Abstract: Classification of data with imbalanced class distribution has encountered a significant drawback of the performance attainable by most standard classifier learning algorithms which assume a relatively balanced class distribution and equal misclassification costs. This paper provides a review of the classification of imbalanced data regarding: the application domains; the nature of the problem; the learning difficulties with standard classifier learning algorithms; the learning objectives and evaluation measures; the reported research solutions; and the class imbalance problem in the presence of multiple classes.
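One family of reported solutions is cost-sensitive learning, which replaces the equal-misclassification-cost assumption with class-frequency-based weights. A small scikit-learn sketch on synthetic data (the dataset and model choice are illustrative assumptions, not the paper's experiments):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic binary problem with a 95:5 class imbalance.
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05],
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Baseline implicitly assumes balanced classes and equal error costs.
plain = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
# Cost-sensitive variant: errors reweighted inversely to class frequency.
weighted = LogisticRegression(max_iter=1000,
                              class_weight="balanced").fit(Xtr, ytr)

for name, model in [("plain", plain), ("class-weighted", weighted)]:
    print(name, "minority-class F1:",
          round(f1_score(yte, model.predict(Xte)), 3))
```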

1,268 citations


Journal ArticleDOI
09 Jul 2009-Nature
TL;DR: A two-qubit superconducting processor and the implementation of the Grover search and Deutsch–Jozsa quantum algorithms are demonstrated; the tunable two-qubit interaction allows the generation of highly entangled states with concurrence up to 94 per cent.
Abstract: By exploiting two key aspects of quantum mechanics — the superposition and entanglement of physical states — quantum computers may eventually outperform their classical equivalents. A team based at Yale has achieved an important step towards that goal — the demonstration of the first solid-state quantum processor, which was used to execute two quantum algorithms. Quantum processors based on a few quantum bits have been demonstrated before using nuclear magnetic resonance, cold ion traps and optical systems, all of which bear little resemblance to conventional computers. This new processor is based on superconducting quantum circuits fabricated using conventional nanofabrication technology. There is still a long way to go before quantum computers can challenge the classical type. The processor is very basic, containing just two quantum bits, and operates at a fraction of a degree above absolute zero. But the chip contains all the essential features of a miniature working quantum computer and may prove scalable to more quantum bits and more complex algorithms. Quantum computers, which harness the superposition and entanglement of physical states, could outperform their classical counterparts in solving problems with technological impact—such as factoring large numbers and searching databases1,2. A quantum processor executes algorithms by applying a programmable sequence of gates to an initialized register of qubits, which coherently evolves into a final state containing the result of the computation. Building a quantum processor is challenging because of the need to meet simultaneously requirements that are in conflict: state preparation, long coherence times, universal gate operations and qubit readout. Processors based on a few qubits have been demonstrated using nuclear magnetic resonance3,4,5, cold ion trap6,7 and optical8 systems, but a solid-state realization has remained an outstanding challenge. Here we demonstrate a two-qubit superconducting processor and the implementation of the Grover search and Deutsch–Jozsa quantum algorithms1,2. We use a two-qubit interaction, tunable in strength by two orders of magnitude on nanosecond timescales, which is mediated by a cavity bus in a circuit quantum electrodynamics architecture9,10. This interaction allows the generation of highly entangled states with concurrence up to 94 per cent. Although this processor constitutes an important step in quantum computing with integrated circuits, continuing efforts to increase qubit coherence times, gate performance and register size will be required to fulfil the promise of a scalable technology.
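The Grover search run on the processor can be checked numerically: for two qubits (N = 4), a single Grover iteration finds the marked item with certainty. The NumPy sketch below reproduces the algorithm's linear algebra only; it makes no claim about the paper's pulse-level implementation.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H2 = np.kron(H, H)                         # Hadamard on both qubits
e0 = np.eye(4)[0]                          # |00>

def grover_2qubit(marked):
    """One Grover iteration on 2 qubits; exact success for N = 4."""
    oracle = np.eye(4)
    oracle[marked, marked] = -1            # phase-flip the marked state
    # Diffusion operator: reflection about the uniform superposition.
    diffusion = H2 @ (2 * np.outer(e0, e0) - np.eye(4)) @ H2
    state = H2 @ e0                        # uniform superposition
    state = diffusion @ (oracle @ state)   # single Grover iteration
    return np.abs(state) ** 2              # measurement probabilities

for m in range(4):
    print("marked item", m, "->", np.round(grover_2qubit(m), 3))
```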

1,039 citations


Journal ArticleDOI
TL;DR: It is shown that quantum walk can be regarded as a universal computational primitive, with any quantum computation encoded in some graph, even if the Hamiltonian is restricted to be the adjacency matrix of a low-degree graph.
Abstract: In some of the earliest work on quantum computing, Feynman showed how to implement universal quantum computation with a time-independent Hamiltonian. I show that this remains possible even if the Hamiltonian is restricted to be the adjacency matrix of a low-degree graph. Thus quantum walk can be regarded as a universal computational primitive, with any quantum computation encoded in some graph. The main idea is to implement quantum gates by scattering processes.

909 citations


Journal ArticleDOI
TL;DR: The past and the state of the art in network virtualization along with the future challenges that must be addressed to realize a viable network virtualization environment are investigated.
Abstract: Recently network virtualization has been pushed forward by its proponents as a long-term solution to the gradual ossification problem faced by the existing Internet and proposed to be an integral part of the next-generation networking paradigm. By allowing multiple heterogeneous network architectures to cohabit on a shared physical substrate, network virtualization provides flexibility, promotes diversity, and promises security and increased manageability. However, many technical issues stand in the way of its successful realization. This article investigates the past and the state of the art in network virtualization along with the future challenges that must be addressed to realize a viable network virtualization environment.

880 citations


Proceedings ArticleDOI
19 Apr 2009
TL;DR: This paper formulates the VN embedding problem as a mixed integer program through substrate network augmentation, and devises two VN embedding algorithms, D-ViNE and R-ViNE, using deterministic and randomized rounding techniques, respectively.
Abstract: Recently network virtualization has been proposed as a promising way to overcome the current ossification of the Internet by allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared infrastructure. A major challenge in this respect is the VN embedding problem that deals with efficient mapping of virtual nodes and virtual links onto the substrate network resources. Since this problem is known to be NP-hard, previous research focused on designing heuristic-based algorithms which had clear separation between the node mapping and the link mapping phases. This paper proposes VN embedding algorithms with better coordination between the two phases. We formulate the VN embedding problem as a mixed integer program through substrate network augmentation. We then relax the integer constraints to obtain a linear program, and devise two VN embedding algorithms D-ViNE and R-ViNE using deterministic and randomized rounding techniques, respectively. Simulation experiments show that the proposed algorithms increase the acceptance ratio and the revenue while decreasing the cost incurred by the substrate network in the long run.
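After relaxing the integer constraints, the remaining step is rounding the fractional node-mapping variables. The toy sketch below illustrates deterministic versus randomized rounding on a made-up fractional solution; the LP solve and the substrate-augmentation formulation are omitted, and the matrix values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fractional LP solution (invented values): x[i, j] is the extent to
# which virtual node i maps to substrate node j; each row sums to 1.
x = np.array([[0.7, 0.2, 0.1],
              [0.4, 0.5, 0.1],
              [0.1, 0.1, 0.8]])

# Deterministic rounding (D-ViNE-style): pick the largest fraction.
det = x.argmax(axis=1)

# Randomized rounding (R-ViNE-style): pick node j with probability x[i, j].
rnd = np.array([rng.choice(x.shape[1], p=row) for row in x])

print("deterministic mapping:", det)   # [0 1 2] for the values above
print("randomized mapping:  ", rnd)
```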

861 citations


Journal ArticleDOI
TL;DR: If phytoremediation is to become an effective and viable remedial strategy, there is a need to mitigate plant stress in contaminated soils, and there is also a need to establish reliable monitoring methods and evaluation criteria for remediation in the field.

853 citations


Journal ArticleDOI
TL;DR: The paper presents the architecture and functionality of the principal networking agent, the SECOQC node module, which enables the authentic classical communication required for key distillation, manages the generated key material, determines a communication path between any destinations in the network, and realizes end-to-end secure transport of key material between these destinations.
Abstract: In this paper, we present the quantum key distribution (QKD) network designed and implemented by the European project SEcure COmmunication based on Quantum Cryptography (SECOQC) (2004-2008), unifying the efforts of 41 research and industrial organizations. The paper summarizes the SECOQC approach to QKD networks with a focus on the trusted repeater paradigm. It discusses the architecture and functionality of the SECOQC trusted repeater prototype, which was put into operation in Vienna in 2008 and publicly demonstrated in the framework of a SECOQC QKD conference held from October 8 to 10, 2008. The demonstration involved one-time pad encrypted telephone communication, a secure (AES encryption protected) video-conference with all deployed nodes and a number of rerouting experiments, highlighting basic mechanisms of the SECOQC network functionality. The paper gives an overview of the eight point-to-point network links in the prototype and their underlying technology: three plug and play systems by id Quantique, a one-way weak pulse system from Toshiba Research in the UK, a coherent one-way system by GAP Optique with the participation of id Quantique and the AIT Austrian Institute of Technology (formerly ARC), an entangled photons system by the University of Vienna and the AIT, a continuous-variables system by Centre National de la Recherche Scientifique (CNRS) and THALES Research and Technology with the participation of Université Libre de Bruxelles, and a free space link by the Ludwig Maximillians University in Munich connecting two nodes situated in adjacent buildings (line of sight 80 m). The average link length is between 20 and 30 km, the longest link being 83 km. The paper presents the architecture and functionality of the principal networking agent, the SECOQC node module, which enables the authentic classical communication required for key distillation, manages the generated key material, determines a communication path between any destinations in the network, and realizes end-to-end secure transport of key material between these destinations. The paper also illustrates the operation of the network in a number of typical exploitation regimes and gives an initial estimate of the network transmission capacity, defined as the maximum amount of key that can be exchanged, or alternatively the amount of information that can be transmitted with information theoretic security, between two arbitrary nodes.

816 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a general technique that harnesses multi-level information carriers to reduce the number of gates required to build quantum logic gate sets, enabling the construction of key quantum circuits with existing technology.
Abstract: Quantum computation promises to solve fundamental, yet otherwise intractable, problems across a range of active fields of research. Recently, universal quantum logic-gate sets—the elemental building blocks for a quantum computer—have been demonstrated in several physical architectures. A serious obstacle to a full-scale implementation is the large number of these gates required to build even small quantum circuits. Here, we present and demonstrate a general technique that harnesses multi-level information carriers to significantly reduce this number, enabling the construction of key quantum circuits with existing technology. We present implementations of two key quantum circuits: the three-qubit Toffoli gate and the general two-qubit controlled-unitary gate. Although our experiment is carried out in a photonic architecture, the technique is independent of the particular physical encoding of quantum information, and has the potential for wider application.

Journal ArticleDOI
TL;DR: This work provides an easy to implement analytic formula that inhibits leakage from any single-control analog or pixelated pulse, based on adding a second control that is proportional to the time derivative of the first.
Abstract: In realizations of quantum computing, a two-level system (qubit) is often singled out from the many levels of an anharmonic oscillator. In these cases, simple qubit control fails on short time scales because of coupling to leakage levels. We provide an easy to implement analytic formula that inhibits this leakage from any single-control analog or pixelated pulse. It is based on adding a second control that is proportional to the time derivative of the first. For realistic parameters of superconducting qubits, this strategy reduces the error by an order of magnitude relative to the state of the art, all based on smooth and feasible pulse shapes. These results show that even weak anharmonicity is sufficient and in general not a limiting factor for implementing quantum gates.
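The prescription is direct to implement: given a primary envelope Ω_x(t), the added quadrature is proportional to its time derivative, Ω_y(t) = -Ω̇_x(t)/Δ with Δ the anharmonicity (the overall prefactor varies by O(1) factors across derivations). A NumPy sketch with illustrative, not paper-specific, parameters:

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper).
t_gate = 20e-9                       # gate duration: 20 ns
Delta = -2 * np.pi * 300e6           # anharmonicity: -300 MHz in rad/s
sigma = t_gate / 4
t = np.linspace(0, t_gate, 401)

# Primary control: Gaussian envelope on the in-phase quadrature.
omega_x = np.exp(-((t - t_gate / 2) ** 2) / (2 * sigma ** 2))

# Derivative-based correction: second control proportional to the time
# derivative of the first, scaled by the anharmonicity (prefactor
# conventions vary between derivations).
omega_y = -np.gradient(omega_x, t) / Delta

print("peak |omega_y| / peak |omega_x| =",
      np.abs(omega_y).max() / np.abs(omega_x).max())
```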

Journal ArticleDOI
TL;DR: An exact version of nonnegative matrix factorization is defined, and it is established that it is equivalent to a problem in polyhedral combinatorics, that it is NP-hard, and that a polynomial-time local search heuristic exists.
Abstract: Nonnegative matrix factorization (NMF) has become a prominent technique for the analysis of image databases, text databases, and other information retrieval and clustering applications. The problem is most naturally posed as continuous optimization. In this report, we define an exact version of NMF. Then we establish several results about exact NMF: (i) that it is equivalent to a problem in polyhedral combinatorics; (ii) that it is NP-hard; and (iii) that a polynomial-time local search heuristic exists.
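As a concrete baseline, the Lee-Seung multiplicative updates are a widely used local-search heuristic for approximate NMF; the sketch below shows that generic heuristic, which is not necessarily the specific local-search procedure analyzed in the paper.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Approximate V (m x n, nonnegative) as W @ H with W, H >= 0
    via Lee-Seung multiplicative updates (a local-search heuristic)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    eps = 1e-12                                  # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # update W, H fixed
    return W, H

V = np.random.default_rng(1).random((20, 30))
W, H = nmf(V, r=5)
print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
```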

Journal ArticleDOI
TL;DR: In this paper, the concept of unitary 2-designs was introduced as a means of expressing operationally useful subsets of the stochastic properties of the uniform (Haar) measure on the unitary group U(2^n) on n qubits.
Abstract: We develop the concept of a unitary t-design as a means of expressing operationally useful subsets of the stochastic properties of the uniform (Haar) measure on the unitary group U(2^n) on n qubits. In particular, sets of unitaries forming 2-designs have wide applicability to quantum information protocols. We devise an O(n)-size in-place circuit construction for an approximate unitary 2-design. We then show that this can be used to construct an efficient protocol for experimentally characterizing the fidelity of a quantum process on n qubits with quantum circuits of size O(n) without requiring any ancilla qubits, thereby improving upon previous approaches.
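One numerical handle on t-designs is the frame potential F_t = E|Tr(U†V)|^(2t), whose value for a t-design matches the Haar value t! (for dimension at least t). The Monte Carlo sketch below estimates it under the Haar measure using SciPy's sampler; it illustrates the definition only and is not the paper's circuit construction.

```python
import numpy as np
from scipy.stats import unitary_group

def frame_potential(dim, t, samples=2000, seed=0):
    """Monte Carlo estimate of F_t = E|Tr(U^dag V)|^(2t) over Haar-random
    U, V; a unitary t-design reproduces the Haar value t! (for dim >= t)."""
    rs = np.random.RandomState(seed)
    total = 0.0
    for _ in range(samples):
        U = unitary_group.rvs(dim, random_state=rs)
        V = unitary_group.rvs(dim, random_state=rs)
        total += abs(np.trace(U.conj().T @ V)) ** (2 * t)
    return total / samples

# Haar values: F_1 = 1 and F_2 = 2 (t! with t = 1, 2).
print("F_1 estimate:", frame_potential(4, 1))   # expect ~1
print("F_2 estimate:", frame_potential(4, 2))   # expect ~2
```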

Journal ArticleDOI
TL;DR: A new measure of image similarity called the complex wavelet structural similarity (CW-SSIM) index is introduced and its applicability as a general purpose image similarity index is shown and it is demonstrated that it is computationally less expensive and robust to small rotations and translations.
Abstract: We introduce a new measure of image similarity called the complex wavelet structural similarity (CW-SSIM) index and show its applicability as a general purpose image similarity index. The key idea behind CW-SSIM is that certain image distortions lead to consistent phase changes in the local wavelet coefficients, and that a consistent phase shift of the coefficients does not change the structural content of the image. By conducting four case studies, we have demonstrated the superiority of the CW-SSIM index against other indices (e.g., Dice, Hausdorff distance) commonly used for assessing the similarity of a given pair of images. In addition, we show that the CW-SSIM index has a number of advantages. It is robust to small rotations and translations. It provides useful comparisons even without a preprocessing image registration step, which is essential for other indices. Moreover, it is computationally less expensive.

Journal ArticleDOI
TL;DR: It is proved that the full Han-Kobayashi achievable rate region using Gaussian codebooks is equivalent to that of the one-sided Gaussian IC for a particular range of channel parameters.
Abstract: The capacity region of the two-user Gaussian interference channel (IC) is studied. Three classes of channels are considered: weak, one-sided, and mixed Gaussian ICs. For the weak Gaussian IC, a new outer bound on the capacity region is obtained that outperforms previously known outer bounds. The sum capacity for a certain range of channel parameters is derived. For this range, it is proved that using Gaussian codebooks and treating interference as noise are optimal. It is shown that when Gaussian codebooks are used, the full Han-Kobayashi achievable rate region can be obtained by using the naive Han-Kobayashi achievable scheme over three frequency bands (equivalently, three subspaces). For the one-sided Gaussian IC, an alternative proof for Sato's outer bound is presented. We derive the full Han-Kobayashi achievable rate region when Gaussian codebooks are utilized. For the mixed Gaussian IC, a new outer bound is obtained that outperforms previously known outer bounds. For this case, the sum capacity for the entire range of channel parameters is derived. It is proved that the full Han-Kobayashi achievable rate region using Gaussian codebooks is equivalent to that of the one-sided Gaussian IC for a particular range of channel parameters.
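For concreteness, the rate pair achieved by treating interference as noise (the scheme shown to be sum-rate optimal for part of the weak-interference regime) sums to the following standard expression; the channel normalization below is an assumed standard form, not necessarily the paper's exact notation.

```latex
% Two-user Gaussian IC in standard form, unit-variance noise:
%   y_1 = x_1 + a x_2 + z_1,   y_2 = b x_1 + x_2 + z_2,
% with power constraints P_1, P_2. Treating interference as noise
% (Gaussian codebooks) achieves the sum rate
R_1 + R_2 \;=\; \frac{1}{2}\log_2\!\left(1 + \frac{P_1}{1 + a^2 P_2}\right)
          \;+\; \frac{1}{2}\log_2\!\left(1 + \frac{P_2}{1 + b^2 P_1}\right).
```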

Journal ArticleDOI
TL;DR: In this paper, the authors present radial entropy profiles of the intracluster medium (ICM) for a collection of 239 clusters taken from the Chandra X-ray Observatory's Data Archive and find that most ICM entropy profiles are well fitted by a model which is a power law at large radii and approaches a constant value at small radii.
Abstract: We present radial entropy profiles of the intracluster medium (ICM) for a collection of 239 clusters taken from the Chandra X-ray Observatory's Data Archive. Entropy is of great interest because it controls ICM global properties and records the thermal history of a cluster. Entropy is therefore a useful quantity for studying the effects of feedback on the cluster environment and investigating any breakdown of cluster self-similarity. We find that most ICM entropy profiles are well fitted by a model which is a power law at large radii and approaches a constant value at small radii: K(r) = K_0 + K_100 (r/100 kpc)^α, where K_0 quantifies the typical excess of core entropy above the best-fitting power law found at larger radii. We also show that the K_0 distributions of both the full archival sample and the primary Highest X-Ray Flux Galaxy Cluster Sample of Reiprich (2001) are bimodal, with a distinct gap between K_0 ≈ 30-50 keV cm^2 and population peaks at K_0 ≈ 15 keV cm^2 and K_0 ≈ 150 keV cm^2. The effects of point-spread function smearing and angular resolution on best-fit K_0 values are investigated using mock Chandra observations and degraded entropy profiles, respectively. We find that neither of these effects is sufficient to explain the entropy-profile flattening we measure at small radii. The influence of profile curvature and number of radial bins on best-fit K_0 is also considered, and we find no indication that K_0 is significantly impacted by either. For completeness, we include previously unpublished optical spectroscopy of Hα and [N II] emission lines discussed in Cavagnolo et al. (2008a). All data and results associated with this work are publicly available via the project Web site.
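The model is straightforward to fit in practice. The sketch below uses scipy.optimize.curve_fit on synthetic data whose parameter values loosely echo the population peaks quoted above; it is illustrative only and not the archival analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def entropy_profile(r, K0, K100, alpha):
    """K(r) = K0 + K100 * (r / 100 kpc)^alpha, K in keV cm^2, r in kpc."""
    return K0 + K100 * (r / 100.0) ** alpha

# Synthetic profile: K0 = 15, K100 = 150, alpha = 1.1, with 10% scatter.
rng = np.random.default_rng(0)
r = np.logspace(0.5, 3, 30)                       # ~3 kpc to 1000 kpc
K_obs = entropy_profile(r, 15.0, 150.0, 1.1) * rng.normal(1.0, 0.1, r.size)

popt, pcov = curve_fit(entropy_profile, r, K_obs, p0=[10.0, 100.0, 1.0],
                       sigma=0.1 * K_obs, absolute_sigma=True)
print("best-fit K0, K100, alpha:", np.round(popt, 2))
```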

Journal ArticleDOI
TL;DR: An overview of current palmprint research is provided, describing in particular capture devices, preprocessing, verification algorithms, palmprint-related fusion, algorithms especially designed for real-time palmprint identification in large databases and measures for protecting palmprint systems and users' privacy.

Journal ArticleDOI
TL;DR: The importance of the eye region and the impact of gaze on the most significant aspects of face processing are reviewed in this paper, where the existence of a neuronal eye detector mechanism is discussed as well as the links between eye gaze and social cognition impairments in autism.

Journal ArticleDOI
TL;DR: The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that has imaged the central 0.5 deg^2 area with an effective exposure of ~160 ks as discussed by the authors.
Abstract: The Chandra COSMOS Survey (C-COSMOS) is a large, 1.8 Ms, Chandra program that has imaged the central 0.5 deg^2 of the COSMOS field (centered at RA = 10 h, Dec = +02°) with an effective exposure of ~160 ks, and an outer 0.4 deg^2 area with an effective exposure of ~80 ks. The limiting source detection depths are 1.9 × 10^(–16) erg cm^(–2) s^(–1) in the soft (0.5-2 keV) band, 7.3 × 10^(–16) erg cm^(–2) s^(–1) in the hard (2-10 keV) band, and 5.7 × 10^(–16) erg cm^(–2) s^(–1) in the full (0.5-10 keV) band. Here we describe the strategy, design, and execution of the C-COSMOS survey, and present the catalog of 1761 point sources detected at a probability of being spurious of <2 × 10^(–5) (1655 in the full, 1340 in the soft, and 1017 in the hard bands). By using a grid of 36 heavily (~50%) overlapping pointing positions with the ACIS-I imager, a remarkably uniform (±12%) exposure across the inner 0.5 deg^2 field was obtained, leading to a sharply defined lower flux limit. The widely different point-spread functions obtained in each exposure at each point in the field required a novel source detection method, because of the overlapping tiling strategy, which is described in a companion paper. This method produced reliable sources down to 7-12 counts, as verified by the resulting logN-logS curve, with subarcsecond positions, enabling optical and infrared identifications of virtually all sources, as reported in a second companion paper. The full catalog is described here in detail and is available online.

Journal ArticleDOI
TL;DR: In this article, the authors present multiband photometry of 185 type-Ia supernovae (SNe Ia), with over 11,500 observations acquired between 2001 and 2008 at the F. L. Whipple Observatory of Harvard-Smithsonian Center for Astrophysics (CfA).
Abstract: We present multiband photometry of 185 type-Ia supernovae (SNe Ia), with over 11,500 observations. These were acquired between 2001 and 2008 at the F. L. Whipple Observatory of the Harvard-Smithsonian Center for Astrophysics (CfA). This sample contains the largest number of homogeneously observed and reduced nearby SNe Ia (z ≲ 0.08) published to date. It more than doubles the nearby sample, bringing SN Ia cosmology to the point where systematic uncertainties dominate. Our natural system photometry has a precision of 0.02 mag in BVRIr'i' and 0.04 mag in U for points brighter than 17.5 mag. We also estimate a systematic uncertainty of 0.03 mag in our SN Ia standard system BVRIr'i' photometry and 0.07 mag for U. Comparisons of our standard system photometry with published SN Ia light curves and comparison stars, where available for the same SN, reveal agreement at the level of a few hundredths mag in most cases. We find that 1991bg-like SNe Ia are sufficiently distinct from other SNe Ia in their color and light-curve-shape/luminosity relation that they should be treated separately in light-curve/distance fitter training samples. The CfA3 sample will contribute to the development of better light-curve/distance fitters, particularly in the few dozen cases where near-infrared photometry has been obtained and, together, can help disentangle host-galaxy reddening from intrinsic supernova color, reducing the systematic uncertainty in SN Ia distances due to dust.

Journal ArticleDOI
TL;DR: This paper investigates the error rate performance of FSO systems for K-distributed atmospheric turbulence channels and discusses potential advantages of spatial diversity deployments at the transmitter and/or receiver, and presents efficient approximated closed-form expressions for the average bit-error rate (BER) of single-input multiple-output (SIMO) FSO Systems.
Abstract: Optical wireless, also known as free-space optics, has received much attention in recent years as a cost-effective, license-free and wide-bandwidth access technique for high-data-rate applications. The performance of free-space optical (FSO) communication, however, severely suffers from turbulence-induced fading caused by atmospheric conditions. Multiple laser transmitters and/or receivers can be placed at both ends to mitigate the turbulence fading and exploit the advantages of spatial diversity. Spatial diversity is particularly crucial for strong turbulence channels in which a single-input single-output (SISO) link performs extremely poorly. Atmospheric-induced strong turbulence fading in outdoor FSO systems can be modeled as a multiplicative random process which follows the K distribution. In this paper, we investigate the error rate performance of FSO systems for K-distributed atmospheric turbulence channels and discuss potential advantages of spatial diversity deployments at the transmitter and/or receiver. We further present efficient approximated closed-form expressions for the average bit-error rate (BER) of single-input multiple-output (SIMO) FSO systems. These analytical tools are reliable alternatives to time-consuming Monte Carlo simulation of FSO systems, where BER targets as low as 10^-9 typically must be achieved.
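A K-distributed irradiance can be simulated as an exponential (speckle) term modulated by a unit-mean gamma variate, which makes the Monte Carlo baseline mentioned above easy to sketch. The conditional BER model Q(sqrt(2·snr)·I) below is a simplifying assumption standing in for the modulation-dependent expression, as are the parameter values.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    """Gaussian Q-function."""
    return 0.5 * erfc(x / np.sqrt(2))

def k_fading(alpha, n, rng):
    """K-distributed irradiance: exponential speckle modulated by a
    unit-mean gamma variate with shape alpha (the channel parameter)."""
    return rng.exponential(1.0, n) * rng.gamma(alpha, 1.0 / alpha, n)

def ber_mc(snr_db, alpha=2.0, n=1_000_000, seed=0):
    rng = np.random.default_rng(seed)
    I = k_fading(alpha, n, rng)
    snr = 10 ** (snr_db / 10)
    # Assumed conditional-BER model: Q(sqrt(2*snr) * I); the exact
    # expression depends on the modulation and detection scheme.
    return qfunc(np.sqrt(2 * snr) * I).mean()

for snr_db in (5, 10, 15, 20):
    print(f"mean SNR {snr_db:2d} dB -> BER ~ {ber_mc(snr_db):.3e}")
```

At target BERs near 10^-9 this direct approach needs enormous sample counts, which is exactly why the closed-form approximations matter.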

Proceedings ArticleDOI
12 May 2009
TL;DR: This paper extends previous work on several important aerodynamic effects impacting quadrotor flight in regimes beyond nominal hover conditions, investigates their implications for quadrotor performance, and presents control techniques that compensate for them accordingly.
Abstract: Quadrotor helicopters have become increasingly important in recent years as platforms for both research and commercial unmanned aerial vehicle applications. This paper extends previous work on several important aerodynamic effects impacting quadrotor flight in regimes beyond nominal hover conditions. The implications of these effects on quadrotor performance are investigated and control techniques are presented that compensate for them accordingly. The analysis and control systems are validated on the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control quadrotor helicopter testbed by performing the quadrotor equivalent of the stall turn aerobatic maneuver. Flight results demonstrate the accuracy of the aerodynamic models and improved control performance with the proposed control schemes.

Journal ArticleDOI
TL;DR: A comprehensive and self-contained simplified review of the quantum computing scheme of Raussendorf et al. is presented.
Abstract: We present a comprehensive and self-contained simplified review of the quantum computing scheme of Raussendorf et al. [Phys. Rev. Lett. 98, 190504 (2007); N. J. Phys. 9, 199 (2007)], which features a two-dimensional nearest-neighbor coupled lattice of qubits, a threshold error rate approaching 1%, natural asymmetric and adjustable strength error correction, and low overhead arbitrarily long-range logical gates. These features make it one of the best and most practical quantum computing schemes devised to date. We restrict the discussion to direct manipulation of the surface code using the stabilizer formalism, both of which we also briefly review, to make the scheme accessible to a broad audience.

Journal ArticleDOI
01 Aug 2009
TL;DR: This paper proposes k-automorphism to protect against multiple structural attacks, develops an algorithm (called KM) that ensures k-automorphism, and discusses an extension of KM to handle "dynamic" releases of the data.
Abstract: The growing popularity of social networks has generated interesting data management and data mining problems. An important concern in the release of these data for study is their privacy, since social networks usually contain personal information. Simply removing all identifiable personal information (such as names and social security number) before releasing the data is insufficient. It is easy for an attacker to identify the target by performing different structural queries. In this paper we propose k-automorphism to protect against multiple structural attacks and develop an algorithm (called KM) that ensures k-automorphism. We also discuss an extension of KM to handle "dynamic" releases of the data. Extensive experiments show that the algorithm performs well in terms of the protection it provides.

Proceedings ArticleDOI
25 Oct 2009
TL;DR: The protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side.
Abstract: We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory. The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. Our protocol is interactive: after the initial preparation of quantum states, the client and server use two-way classical communication which enables the client to drive the computation, giving single-qubit measurement instructions to the server, depending on previous measurement outcomes. Our protocol works for inputs and outputs that are either classical or quantum. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. Our protocol is the first universal scheme which detects a cheating server, as well as the first protocol which does not require any quantum computation whatsoever on the client's side. The novelty of our approach is in using the unique features of measurement-based quantum computing which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation.

Journal ArticleDOI
TL;DR: The authors argue that people protect the belief in a controlled, nonrandom world by imbuing their social, physical, and metaphysical environments with order and structure when their sense of personal control is threatened.
Abstract: We propose that people protect the belief in a controlled, nonrandom world by imbuing their social, physical, and metaphysical environments with order and structure when their sense of personal control is threatened. We demonstrate that when personal control is threatened, people can preserve a sense of order by (a) perceiving patterns in noise or adhering to superstitions and conspiracies, (b) defending the legitimacy of the sociopolitical institutions that offer control, or (c) believing in an interventionist God. We also present evidence that these processes of compensatory control help people cope with the anxiety and discomfort that lacking personal control fuels, that it is lack of personal control specifically and not general threat or negativity that drives these processes, and that these various forms of compensatory control are ultimately substitutable for one another. Our model of compensatory control offers insight into a wide variety of phenomena, from prejudice to the idiosyncratic rituals o...

Journal ArticleDOI
TL;DR: It is suggested that future research should refine existing tools, determine their validity and usefulness across ethnic and subethnic groups, and identify which aspects of acculturation these scales and indices reliably measure.

Proceedings ArticleDOI
17 Aug 2009
TL;DR: This work designs and implements a novel mobile social networking middleware named MobiClique, which distinguishes itself from other mobile social software by removing the need for a central server to conduct exchanges, by leveraging existing social networks to bootstrap the system, and by taking advantage of the social network overlay to disseminate content.
Abstract: We consider a mobile ad hoc network setting where Bluetooth-enabled mobile devices communicate directly with other devices as they meet opportunistically. We design and implement a novel mobile social networking middleware named MobiClique. MobiClique forms and exploits ad hoc social networks to disseminate content using a store-carry-forward technique. Our approach distinguishes itself from other mobile social software by removing the need for a central server to conduct exchanges, by leveraging existing social networks to bootstrap the system, and by taking advantage of the social network overlay to disseminate content. We also propose an open API to encourage third-party application development. We discuss the system architecture and three example applications. We show experimentally that MobiClique successfully builds and maintains an ad hoc social network leveraging contact opportunities between friends and people sharing interest(s) for content exchanges. Our experience also provides insight into some of the key challenges and shortcomings that researchers face when designing and deploying similar systems.

Journal ArticleDOI
TL;DR: In this paper, a case study of an aboriginal community in Taiwan is presented to illustrate the links between tourism and other livelihood strategies, and a sustainable livelihood approach is introduced as being more practical, especially in the common situation in which communities and individuals sustain themselves through multiple activities rather than discrete jobs.