
Showing papers from the University of Waterloo published in 2010


Journal ArticleDOI
TL;DR: A survey of cloud computing is presented, highlighting its key concepts, architectural principles, state-of-the-art implementation, and research challenges, in order to provide a better understanding of the design challenges of cloud computing and to identify important research directions in this increasingly important area.
Abstract: Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. However, despite the fact that cloud computing offers huge opportunities to the IT industry, the development of cloud computing technology is currently in its infancy, with many issues still to be addressed. In this paper, we present a survey of cloud computing, highlighting its key concepts, architectural principles, state-of-the-art implementation as well as research challenges. The aim of this paper is to provide a better understanding of the design challenges of cloud computing and identify important research directions in this increasingly important area.

3,465 citations


Journal ArticleDOI
TL;DR: A classifier-induced divergence measure is introduced that can be estimated from finite, unlabeled samples from the domains, and the authors show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class.
Abstract: Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.
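
The bound described above has the following general form (a sketch in the standard notation of this line of work, not quoted from the paper; the empirical version adds finite-sample terms):

\epsilon_T(h) \le \epsilon_S(h) + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T) + \lambda,
\qquad \lambda = \min_{h' \in \mathcal{H}} \left[ \epsilon_S(h') + \epsilon_T(h') \right]

Here d_{\mathcal{H}\Delta\mathcal{H}} is the classifier-induced divergence that can be estimated from finite, unlabeled samples, and \lambda encodes the assumption that some hypothesis performs well in both domains.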

2,921 citations


Journal ArticleDOI
TL;DR: A detailed examination of the key aspects of pilot studies for phase III trials is provided, including the general reasons for conducting a pilot study; the relationships between pilot studies, proof-of-concept studies, and adaptive designs; and some suggestions on how to report the results of pilot investigations using the CONSORT format.
Abstract: Pilot studies for phase III trials - which are comparative randomized trials designed to provide preliminary evidence on the clinical efficacy of a drug or intervention - are routinely performed in many clinical areas. Also commonly known as "feasibility" or "vanguard" studies, they are designed to assess the safety of treatment or interventions; to assess recruitment potential; to assess the feasibility of international collaboration or coordination for multicentre trials; and to increase clinical experience with the study medication or intervention for the phase III trials. They are the best way to assess the feasibility of a large, expensive full-scale study, and in fact are an almost essential prerequisite. Conducting a pilot prior to the main study can enhance the likelihood of success of the main study and potentially help to avoid doomed main studies. The objective of this paper is to provide a detailed examination of the key aspects of pilot studies for phase III trials including: 1) the general reasons for conducting a pilot study; 2) the relationships between pilot studies, proof-of-concept studies, and adaptive designs; 3) the challenges of and misconceptions about pilot studies; 4) the criteria for evaluating the success of a pilot study; 5) frequently asked questions about pilot studies; 6) some ethical aspects related to pilot studies; and 7) some suggestions on how to report the results of pilot investigations using the CONSORT format.

2,365 citations


Journal ArticleDOI
TL;DR: GelMA hydrogels could be useful for creating complex, cell-responsive microtissues, such as endothelialized microvasculature, or for other applications that require cell-responsive microengineered hydrogels.

1,871 citations


Journal ArticleDOI
TL;DR: Li-S batteries have received ever-increasing attention recently due to their high theoretical specific energy density, which is 3 to 5 times higher than that of Li ion batteries based on intercalation reactions, as discussed by the authors.
Abstract: Rechargeable Li–S batteries have received ever-increasing attention recently due to their high theoretical specific energy density, which is 3 to 5 times higher than that of Li ion batteries based on intercalation reactions. Li–S batteries may represent a next-generation energy storage system, particularly for large scale applications. The obstacles to realize this high energy density mainly include high internal resistance, self-discharge and rapid capacity fading on cycling. These challenges can be met to a large degree by designing novel sulfur electrodes with “smart” nanostructures. This highlight provides an overview of major developments of positive electrodes based on this concept.

1,731 citations


Journal ArticleDOI
TL;DR: In this article, the authors review positive electrodes for Li-ion and lithium batteries, which have been under intense scrutiny since the advent of the Li-ion cell in 1991, and discuss the growing interest in developing Li−sulfur and Li−air batteries, which have the potential for the vastly increased capacity and energy density needed to power large-scale systems.
Abstract: Positive electrodes for Li-ion and lithium batteries (also termed “cathodes”) have been under intense scrutiny since the advent of the Li-ion cell in 1991. This is especially true in the past decade. Early on, carbonaceous materials dominated the negative electrode and hence most of the possible improvements in the cell were anticipated at the positive terminal; on the other hand, major developments in negative electrode materials made in the last portion of the decade with the introduction of nanocomposite Sn/C/Co alloys and Si−C composites have demanded higher capacity positive electrodes to match. Much of this was driven by the consumer market for small portable electronic devices. More recently, there has been a growing interest in developing Li−sulfur and Li−air batteries that have the potential for vastly increased capacity and energy density, which is needed to power large-scale systems. These require even more complex assemblies at the positive electrode in order to achieve good properties. This r...

1,566 citations


Journal ArticleDOI
TL;DR: In this article, a methodology has been proposed for optimally allocating different types of renewable distributed generation (DG) units in the distribution system so as to minimize annual energy loss.
Abstract: It is widely accepted that renewable energy sources are the key to a sustainable energy supply infrastructure since they are both inexhaustible and nonpolluting. A number of renewable energy technologies are now commercially available, the most notable being wind power, photovoltaic, solar thermal systems, biomass, and various forms of hydraulic power. In this paper, a methodology has been proposed for optimally allocating different types of renewable distributed generation (DG) units in the distribution system so as to minimize annual energy loss. The methodology is based on generating a probabilistic generation-load model that combines all possible operating conditions of the renewable DG units with their probabilities, hence accommodating this model in a deterministic planning problem. The planning problem is formulated as mixed integer nonlinear programming (MINLP), with an objective function for minimizing the system's annual energy losses. The constraints include the voltage limits, the feeders' capacity, the maximum penetration limit, and the discrete size of the available DG units. This proposed technique has been applied to a typical rural distribution system with different scenarios, including all possible combinations of the renewable DG units. The results show that a significant reduction in annual energy losses is achieved for all the proposed scenarios.
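
As a rough sketch of the kind of planning model described (assembled from the abstract; the symbols below are chosen here for illustration and are not the paper's notation):

\min_{x \in X} \; \sum_{s} p_s\, E^{\mathrm{loss}}_s(x)
\quad \text{subject to} \quad
V^{\min} \le V_{i,s}(x) \le V^{\max}, \quad
|I_{f,s}(x)| \le I_f^{\max}, \quad
\sum_i P^{\mathrm{DG}}_i \le P^{\mathrm{pen}}, \quad
P^{\mathrm{DG}}_i \in \{0, \Delta, 2\Delta, \dots\}

where s indexes the combined generation-load states of the probabilistic model with probabilities p_s, x collects the discrete DG siting and sizing decisions, and the constraints correspond to the voltage limits, feeder capacities, penetration limit, and discrete unit sizes listed above.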

1,243 citations


Journal ArticleDOI
TL;DR: An overview of bacterially assisted phytoremediation is provided here for both organic and metallic contaminants, with the intent of providing some insight into how these bacteria aid phytoremediation so that future field studies might be facilitated.

969 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present six areas of tourism in which VR may prove particularly valuable: planning and management, marketing, entertainment, education, accessibility, and heritage preservation; numerous suggestions for future research are also presented.

937 citations


Journal ArticleDOI
Th. de Graauw, Frank Helmich, Thomas G. Phillips, and 176 others (20 institutions)
TL;DR: The Heterodyne Instrument for the Far-Infrared (HIFI), launched onboard ESA's Herschel Space Observatory in May 2009, is a set of 7 heterodyne receivers that are electronically tuneable, covering 480-1250 GHz with SIS mixers and the 1410-1910 GHz range with hot electron bolometer mixers.
Abstract: Aims. This paper describes the Heterodyne Instrument for the Far-Infrared (HIFI) that was launched onboard ESA's Herschel Space Observatory in May 2009. Methods. The instrument is a set of 7 heterodyne receivers that are electronically tuneable, covering 480-1250 GHz with SIS mixers and the 1410-1910 GHz range with hot electron bolometer (HEB) mixers. The local oscillator (LO) subsystem comprises a Ka-band synthesizer followed by 14 chains of frequency multipliers, 2 chains for each frequency band. A pair of auto-correlators and a pair of acousto-optical spectrometers process the two IF signals from the dual-polarization, single-pixel front-ends to provide instantaneous frequency coverage of 2 × 4 GHz, with a set of resolutions (125 kHz to 1 MHz) that are better than 0.1 km s-1. Results. After a successful qualification and a pre-launch TB/TV test program, the flight instrument is now in orbit and has successfully completed the commissioning and performance verification phase. The in-orbit performance of the receivers matches the pre-launch sensitivities. We also report on the in-orbit performance of the receivers and some first results of HIFI's operations.

828 citations


Journal ArticleDOI
TL;DR: This review concentrates mainly on post-2002, new OHC effects data in Arctic wildlife and fish, and is largely based on recently available effects data for populations of several top trophic level species, including seabirds and Arctic charr.

Journal ArticleDOI
30 Sep 2010-Nature
TL;DR: Deterministic production of three-qubit Greenberger–Horne–Zeilinger (GHZ) states with a fidelity of 88 per cent is demonstrated, along with the first step of basic quantum error correction, namely the encoding of a logical qubit into a manifold of GHZ-like states using a repetition code.
Abstract: Quantum entanglement, in which the states of two or more particles are inextricably linked, is a key requirement for quantum computation. In superconducting devices, two-qubit entangled states have been used to implement simple quantum algorithms. The availability of three-qubit states, which can be entangled in two fundamentally different ways (the GHZ and W states), would be a significant advance because they should make it possible to perform error correction and infer scalability to the higher numbers of qubits needed for a practical quantum-information-processing device. Two groups now report the generation of three-qubit entanglement. John Martinis and colleagues create and measure both GHZ and W-type states. Leonardo DiCarlo and colleagues generate the GHZ state and demonstrate the first step of basic quantum error correction by encoding a logical qubit into a manifold of GHZ-like states using a repetition code. Quantum entanglement is a key resource for technologies such as quantum communication and computation. A major question for solid-state quantum information processing is whether an engineered system can display the three-qubit entanglement necessary for quantum error correction. A positive answer to this question is now provided. A circuit quantum electrodynamics device has been used to demonstrate deterministic production of three-qubit entangled states and the first step of basic quantum error correction. Traditionally, quantum entanglement has been central to foundational discussions of quantum mechanics. The measurement of correlations between entangled particles can have results at odds with classical behaviour. These discrepancies grow exponentially with the number of entangled particles1. With the ample experimental2,3,4 confirmation of quantum mechanical predictions, entanglement has evolved from a philosophical conundrum into a key resource for technologies such as quantum communication and computation5. Although entanglement in superconducting circuits has been limited so far to two qubits6,7,8,9, the extension of entanglement to three, eight and ten qubits has been achieved among spins10, ions11 and photons12, respectively. A key question for solid-state quantum information processing is whether an engineered system could display the multi-qubit entanglement necessary for quantum error correction, which starts with tripartite entanglement. Here, using a circuit quantum electrodynamics architecture13,14, we demonstrate deterministic production of three-qubit Greenberger–Horne–Zeilinger (GHZ) states15 with fidelity of 88 per cent, measured with quantum state tomography. Several entanglement witnesses detect genuine three-qubit entanglement by violating biseparable bounds by 830 ± 80 per cent. We demonstrate the first step of basic quantum error correction, namely the encoding of a logical qubit into a manifold of GHZ-like states using a repetition code. The integration of this encoding with decoding and error-correcting steps in a feedback loop will be the next step for quantum computing with integrated circuits.
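
For reference, the GHZ state and the repetition-code encoding mentioned above take the standard forms (textbook expressions, not quoted from the paper):

|\mathrm{GHZ}\rangle = \tfrac{1}{\sqrt{2}}\left(|000\rangle + |111\rangle\right),
\qquad
\alpha|0\rangle + \beta|1\rangle \;\mapsto\; \alpha|000\rangle + \beta|111\rangle

so encoding a logical qubit with the repetition code places it in the manifold of GHZ-like states referred to in the abstract.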

Journal ArticleDOI
TL;DR: This paper gives a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature and provides recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
Abstract: Elliptic curves with small embedding degree and large prime-order subgroup are key ingredients for implementing pairing-based cryptographic systems. Such “pairing-friendly” curves are rare and thus require specific constructions. In this paper we give a single coherent framework that encompasses all of the constructions of pairing-friendly elliptic curves currently existing in the literature. We also include new constructions of pairing-friendly curves that improve on the previously known constructions for certain embedding degrees. Finally, for all embedding degrees up to 50, we provide recommendations as to which pairing-friendly curves to choose to best satisfy a variety of performance and security requirements.
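
Two standard definitions underlie the term "pairing-friendly" (included for orientation; they are standard in this literature rather than taken from the abstract): for a curve E over F_q with a subgroup of prime order r, the embedding degree k and the \rho-value are

k = \min\{\, j \ge 1 : r \mid q^j - 1 \,\}, \qquad \rho = \frac{\log q}{\log r}

A pairing-friendly curve has small k, so the pairing takes values in a manageable extension field F_{q^k}, and \rho close to 1, so the prime-order subgroup is as large as possible relative to the field size.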

Journal ArticleDOI
TL;DR: This paper proposes a novel extension of the MF approach, namely the MF-FDOG, to detect retinal blood vessels, and achieves vessel detection results competitive with state-of-the-art schemes but with much lower complexity.

Journal ArticleDOI
TL;DR: Denitrifying bioreactors, as discussed in this paper, are an approach in which solid carbon substrates are added into the flow path of contaminated water; these substrates act as a C and energy source to support denitrification.

Journal ArticleDOI
TL;DR: The overall results show that microaneurysm detection is a challenging task for both the automatic methods and the human expert, and there is room for improvement, as the best performing system does not reach the performance of the human expert.
Abstract: The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection numerous methods have been published in the past, but none of these have been compared with each other on the same data. In this work we present the results of the first international microaneurysm detection competition, organized in the context of the Retinopathy Online Challenge (ROC), a multiyear online competition for various aspects of DR detection. For this competition, we compare the results of five different methods, produced by five different teams of researchers on the same set of data. The evaluation was performed in a uniform manner using an algorithm presented in this work. The set of data used for the competition consisted of 50 training images with available reference standard and 50 test images where the reference standard was withheld by the organizers (M. Niemeijer, B. van Ginneken, and M. D. Abràmoff). The results obtained on the test data were submitted through a website, after which standardized evaluation software was used to determine the performance of each of the methods. A human expert detected microaneurysms in the test set to allow comparison with the performance of the automatic methods. The overall results show that microaneurysm detection is a challenging task for both the automatic methods and the human expert. There is room for improvement as the best performing system does not reach the performance of the human expert. The data associated with the ROC microaneurysm detection competition will remain publicly available and the website will continue accepting submissions.

Journal ArticleDOI
TL;DR: The WiggleZ Dark Energy Survey, as discussed by the authors, is a survey of 240 000 emission-line galaxies in the distant Universe, measured with the AAOmega spectrograph on the 3.9-m Anglo-Australian Telescope (AAT).
Abstract: The WiggleZ Dark Energy Survey is a survey of 240 000 emission-line galaxies in the distant Universe, measured with the AAOmega spectrograph on the 3.9-m Anglo-Australian Telescope (AAT). The primary aim of the survey is to precisely measure the scale of baryon acoustic oscillations (BAO) imprinted on the spatial distribution of these galaxies at look-back times of 4–8 Gyr. The target galaxies are selected using ultraviolet (UV) photometry from the Galaxy Evolution Explorer satellite, with a flux limit of NUV < 22.8 mag . We also require that the targets are detected at optical wavelengths, specifically in the range 20.0 < r < 22.5 mag . We use the Lyman break method applied to the UV colours, with additional optical colour limits, to select high-redshift galaxies. The galaxies generally have strong emission lines, permitting reliable redshift measurements in relatively short exposure times on the AAT. The median redshift of the galaxies is z_(med)= 0.6 . The redshift range containing 90 per cent of the galaxies is 0.2 < z < 1.0 . The survey will sample a volume of ~1 Gpc^3 over a projected area on the sky of 1000 deg^2, with an average target density of 350 deg^(−2). Detailed forecasts indicate that the survey will measure the BAO scale to better than 2 per cent and the tangential and radial acoustic wave scales to approximately 3 and 5 per cent, respectively. Combining the WiggleZ constraints with existing cosmic microwave background measurements and the latest supernova data, the marginalized uncertainties in the cosmological model are expected to be σ(Ω_m) = 0.02 and σ(w) = 0.07 (for a constant w model). The WiggleZ measurement of w will constitute a robust, precise and independent test of dark energy models. This paper provides a detailed description of the survey and its design, as well as the spectroscopic observations, data reduction and redshift measurement techniques employed. It also presents an analysis of the properties of the target galaxies, including emission-line diagnostics which show that they are mostly extreme starburst galaxies, and Hubble Space Telescope images, which show that they contain a high fraction of interacting or distorted systems. In conjunction with this paper, we make a public data release of data for the first 100 000 galaxies measured for the project.
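
A minimal sketch of the magnitude cuts quoted above (the dataframe and column names are hypothetical, and the survey's Lyman-break UV colour cuts and additional optical colour limits are not reproduced here):

import pandas as pd

def wigglez_magnitude_cuts(cat: pd.DataFrame) -> pd.DataFrame:
    """Keep targets satisfying NUV < 22.8 mag and 20.0 < r < 22.5 mag.

    'NUV' and 'r' are assumed magnitude columns; the colour-based
    selections described in the abstract are omitted.
    """
    keep = (cat["NUV"] < 22.8) & (cat["r"] > 20.0) & (cat["r"] < 22.5)
    return cat[keep]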

Journal ArticleDOI
TL;DR: In this paper, the authors propose a methodology for allocating an ESS in a distribution system with a high penetration of wind energy, aiming to maximize the benefits for both the DG owner and the utility by sizing the ESS to accommodate all spilled wind energy and then allocating it within the system so as to minimize the annual cost of electricity.
Abstract: Environmental concerns and fuel cost uncertainties associated with the use of conventional energy sources have resulted in rapid growth in the amount of wind energy connected to distribution grids. However, based on Ontario's standard offer program (SOP), the utility has the right to curtail (spill) wind energy in order to avoid any violation of the system constraints. This means that any increase in wind energy production over a specific limit might be met with an increase in the wind energy curtailed. In spite of their cost, energy storage systems (ESSs) are considered to be a viable solution to this problem. This paper proposes a methodology for allocating an ESS in a distribution system with a high penetration of wind energy. The ultimate goal is to maximize the benefits for both the DG owner and the utility by sizing the ESS to accommodate all amounts of spilled wind energy and by then allocating it within the system in order to minimize the annual cost of the electricity. In addition, a cost/benefit analysis has been conducted in order to verify the feasibility of installing an ESS from the perspective of both the utility and the DG owner.

Journal ArticleDOI
TL;DR: This work presents a simple and robust, chemically controlled process for synthesizing size-controlled noble metal or bimetallic nanocrystallites embedded within the porous structure of ordered mesoporous carbon (OMC).
Abstract: Shape- and size-controlled supported metal and intermetallic nanocrystallites are of increasing interest because of their catalytic and electrocatalytic properties. In particular, intermetallics PtX (X = Bi, Pb, Pd, Ru) are very attractive because of their high activity as fuel-cell anode catalysts for formic acid or methanol oxidation. These are normally synthesized using high-temperature techniques, but rigorous size control is very challenging. Even low-temperature techniques typically produce nanoparticles with dimensions much greater than the optimum <6 nm required for fuel cell catalysis. Here, we present a simple and robust, chemically controlled process for synthesizing size-controlled noble metal or bimetallic nanocrystallites embedded within the porous structure of ordered mesoporous carbon (OMC). By using surface-modified ordered mesoporous carbon to trap the metal precursors, nanocrystallites are formed with monodisperse sizes as low as 1.5 nm, which can be tuned up to ∼3.5 nm. To the best of our knowledge, 3-nm ordered mesoporous carbon-supported PtBi nanoparticles exhibit the highest mass activity for formic acid oxidation reported to date, and over double that of Pt–Au.

Journal ArticleDOI
TL;DR: The present study illustrates that data are needed on the distribution in the aquatic environment of both the parent compound and the biologically active metabolites of pharmaceuticals.
Abstract: Antidepressants are a widely prescribed group of pharmaceuticals that can be biotransformed in humans to biologically active metabolites. In the present study, the distribution of six antidepressants (venlafaxine, bupropion, fluoxetine, sertraline, citalopram, and paroxetine) and five of their metabolites was determined in a municipal wastewater treatment plant (WWTP) and at sites downstream of two WWTPs in the Grand River watershed in southern Ontario, Canada. Fathead minnows (Pimephales promelas) caged in the Grand River downstream of a WWTP were also evaluated for accumulated antidepressants. Finally, drinking water was analyzed from a treatment plant that takes its water from the Grand River 17 km downstream of a WWTP. In municipal wastewater, the antidepressant compounds present in the highest concentrations (i.e., >0.5 microg/L) were venlafaxine and its two demethylation products, O- and N-desmethyl venlafaxine. Removal rates of the target analytes in a WWTP were approximately 40%. These compounds persisted in river water samples collected at sites up to several kilometers downstream of discharges from WWTPs. Venlafaxine, citalopram, and sertraline, and demethylated metabolites were detected in fathead minnows caged 10 m below the discharge from a WWTP, but concentrations were all < microg/kg wet weight. Venlafaxine and bupropion were detected at very low (<0.005 microg/L) concentrations in untreated drinking water, but these compounds were not detected in treated drinking water. The present study illustrates that data are needed on the distribution in the aquatic environment of both the parent compound and the biologically active metabolites of pharmaceuticals.

Journal ArticleDOI
TL;DR: A polyacrylamide hydrogel-based sensor functionalized with a thymine-rich DNA can simultaneously detect and remove mercury from water; the sensor is resistant to nuclease and can be rehydrated from dried gels for storage and DNA protection.
Abstract: Mercury is a highly toxic environmental pollutant with bioaccumulative properties. Therefore, new materials are required to not only detect but also effectively remove mercury from environmental sources such as water. We herein describe a polyacrylamide hydrogel-based sensor functionalized with a thymine-rich DNA that can simultaneously detect and remove mercury from water. Detection is achieved by selective binding of Hg(2+) between two thymine bases, inducing a hairpin structure where, upon addition of SYBR Green I dye, green fluorescence is observed. In the absence of Hg(2+), however, addition of the dye results in yellow fluorescence. Using the naked eye, the detection limit in a 50 mL water sample is 10 nM Hg(2+). This sensor can be regenerated using a simple acid treatment and can remove Hg(2+) from water at a rate of approximately 1 h(-1). This sensor was also used to detect and remove Hg(2+) from samples of Lake Ontario water spiked with mercury. In addition, these hydrogel-based sensors are resistant to nuclease and can be rehydrated from dried gels for storage and DNA protection. Similar methods can be used to functionalize hydrogels with other nucleic acids, proteins, and small molecules for environmental and biomedical applications.

Journal ArticleDOI
TL;DR: In this article, the scaling relationship between jet power and synchrotron luminosity was investigated using Chandra X-ray and VLA radio data for 21 giant elliptical galaxies.
Abstract: Using Chandra X-ray and Very Large Array radio data, we investigate the scaling relationship between jet power, P_jet, and synchrotron luminosity, P_radio. We expand the sample presented in Bîrzan et al. to lower radio power by incorporating measurements for 21 giant elliptical galaxies (gEs) to determine if the Bîrzan et al. P_jet-P_radio scaling relations are continuous in form and scatter from gEs up to brightest cluster galaxies. We find a mean scaling relation of P_jet ~ 5.8 x 10^43 (P_radio/10^40)^0.70 erg s^-1, which is continuous over ~6-8 decades in P_jet and P_radio with a scatter of ~0.7 dex. Our mean scaling relationship is consistent with the model presented in Willott et al. if the typical fraction of lobe energy in non-radiating particles to that in relativistic electrons is ≳100. We identify several gEs whose radio luminosities are unusually large for their jet powers and have radio sources which extend well beyond the densest parts of their X-ray halos. We suggest that these radio sources are unusually luminous because they were unable to entrain appreciable amounts of gas.
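
A one-line helper makes the quoted mean relation concrete (a sketch only; the normalization, slope, and ~0.7 dex scatter come directly from the abstract, and no radio-band convention is implied):

def jet_power_erg_s(p_radio_erg_s: float) -> float:
    """Mean jet power from synchrotron radio luminosity, both in erg/s,
    using P_jet ~ 5.8e43 * (P_radio / 1e40)**0.70 (scatter ~0.7 dex)."""
    return 5.8e43 * (p_radio_erg_s / 1e40) ** 0.70

# Example: P_radio = 1e40 erg/s gives the fiducial P_jet ~ 5.8e43 erg/s.
print(f"{jet_power_erg_s(1e40):.2e}")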

Journal ArticleDOI
30 Apr 2010-Science
TL;DR: Using satellite observations of hydrogen cyanide (HCN), a tropospheric pollutant produced in biomass burning, the authors show that the monsoon circulation provides an effective pathway for pollution from Asia, India, and Indonesia to enter the global stratosphere.
Abstract: Transport of air from the troposphere to the stratosphere occurs primarily in the tropics, associated with the ascending branch of the Brewer-Dobson circulation. Here, we identify the transport of air masses from the surface, through the Asian monsoon, and deep into the stratosphere, using satellite observations of hydrogen cyanide (HCN), a tropospheric pollutant produced in biomass burning. A key factor in this identification is that HCN has a strong sink from contact with the ocean; much of the air in the tropical upper troposphere is relatively depleted in HCN, and hence, broad tropical upwelling cannot be the main source for the stratosphere. The monsoon circulation provides an effective pathway for pollution from Asia, India, and Indonesia to enter the global stratosphere.

Journal ArticleDOI
TL;DR: In this article, a solution-processable, low-bandgap, donor-acceptor polymer semiconductor (PDPP-TBT) is reported that exhibits ambipolar charge transport in OTFTs, with balanced hole and electron mobilities of 0.35 cm2 V-1 s-1 and 0.40 cm2 V-1 s-1, respectively.
Abstract: A new, solution-processable, low-bandgap, diketopyrrolopyrrole-benzothiadiazole-based, donor-acceptor polymer semiconductor (PDPP-TBT) is reported. This polymer exhibits ambipolar charge transport when used as a single-component active semiconductor in OTFTs, with balanced hole and electron mobilities of 0.35 cm2 V-1 s-1 and 0.40 cm2 V-1 s-1, respectively. This polymer has the potential for ambipolar transistor-based complementary circuits in printed electronics.

Journal ArticleDOI
TL;DR: In this article, the authors define a quantum walk, by analogy to a classical random walk, as a time-homogeneous quantum process on a graph; both can be defined in continuous or discrete time, but whereas a continuous-time random walk can be obtained as the limit of a sequence of discrete-time random walks, the two types of quantum walk appear fundamentally different.
Abstract: Quantum walk is one of the main tools for quantum algorithms. Defined by analogy to classical random walk, a quantum walk is a time-homogeneous quantum process on a graph. Both random and quantum walks can be defined either in continuous or discrete time. But whereas a continuous-time random walk can be obtained as the limit of a sequence of discrete-time random walks, the two types of quantum walk appear fundamentally different, owing to the need for extra degrees of freedom in the discrete-time case.
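
The contrast drawn above can be made concrete with the standard definitions (standard forms in this literature, not quoted from the paper): on a graph with Laplacian L and adjacency matrix A, the continuous-time walks evolve as

p(t) = e^{-Lt}\, p(0) \quad \text{(classical random walk)},
\qquad
|\psi(t)\rangle = e^{-iAt}\, |\psi(0)\rangle \quad \text{(quantum walk; some conventions use } L \text{ as the generator)}

whereas a discrete-time quantum walk needs an extra "coin" degree of freedom so that a single step can be unitary.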

Journal ArticleDOI
TL;DR: In this article, the authors study the properties of the holographic CFT dual to Gauss-Bonnet gravity in general D (≥ 5) dimensions, establishing the AdS/CFT dictionary and, in particular, relating the couplings of the gravitational theory to the universal couplings arising in correlators of the stress tensor of the dual CFT.
Abstract: We study the properties of the holographic CFT dual to Gauss-Bonnet gravity in general D(≥ 5) dimensions. We establish the AdS/CFT dictionary and in particular relate the couplings of the gravitational theory to the universal couplings arising in correlators of the stress tensor of the dual CFT. This allows us to examine constraints on the gravitational couplings by demanding consistency of the CFT. In particular, one can demand positive energy fluxes in scattering processes or the causal propagation of fluctuations. We also examine the holographic hydrodynamics, commenting on the shear viscosity as well as the relaxation time. The latter allows us to consider causality constraints arising from the second-order truncated theory of hydrodynamics.
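
For reference, the Gauss-Bonnet term added to the Einstein-Hilbert action in such theories is the standard curvature-squared density (the coupling normalization used in the paper is not reproduced here):

\mathcal{X}_4 = R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma} - 4\, R_{\mu\nu}R^{\mu\nu} + R^2

This combination is topological in four dimensions and only contributes to the equations of motion for D \ge 5, which is why the analysis is restricted to D \ge 5.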

Journal ArticleDOI
TL;DR: The impact of contact angle on the biocompatibility of tissue engineering substrates, blood-contacting devices, dental implants, intraocular lenses, and contact lens materials is reviewed.
Abstract: Biomaterials may be defined as artificial materials that can mimic, store, or come into close contact with living biological cells or fluids and are becoming increasingly popular in the medical, biomedical, optometric, dental, and pharmaceutical industries. Within the ophthalmic industry, the best example of a biomaterial is a contact lens, which is worn by approximately 125 million people worldwide. For biomaterials to be biocompatible, they cannot elicit any type of unfavorable response when exposed to the tissue they contact. A characteristic that significantly influences this response is surface wettability, which is often determined by measuring the contact angle of the material. This article reviews the impact of contact angle on the biocompatibility of tissue engineering substrates, blood-contacting devices, dental implants, intraocular lenses, and contact lens materials.
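
The contact angle discussed here is governed by the standard Young relation between the interfacial tensions (a textbook relation included for context, not taken from the review):

\gamma_{SV} = \gamma_{SL} + \gamma_{LV} \cos\theta

so a lower contact angle \theta corresponds to a more wettable (more hydrophilic) surface, which is the property linked to biocompatibility in the review.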

Book ChapterDOI
05 Dec 2010
TL;DR: Polynomial commitment schemes are introduced as useful tools to reduce the communication cost in cryptographic protocols and are applied to four problems in cryptography: verifiable secret sharing, zero-knowledge sets, credentials, and content extraction signatures.
Abstract: We introduce and formally define polynomial commitment schemes, and provide two efficient constructions. A polynomial commitment scheme allows a committer to commit to a polynomial with a short string that can be used by a verifier to confirm claimed evaluations of the committed polynomial. Although the homomorphic commitment schemes in the literature can be used to achieve this goal, the sizes of their commitments are linear in the degree of the committed polynomial. On the other hand, polynomial commitments in our schemes are of constant size (single elements). The overhead of opening a commitment is also constant; even opening multiple evaluations requires only a constant amount of communication overhead. Therefore, our schemes are useful tools to reduce the communication cost in cryptographic protocols. On that front, we apply our polynomial commitment schemes to four problems in cryptography: verifiable secret sharing, zero-knowledge sets, credentials and content extraction signatures.
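
A rough outline of how a constant-size scheme of this kind can work over a pairing group (a hedged sketch in the spirit of the construction; the notation is chosen here and is not quoted from the paper): given public parameters (g, g^{\alpha}, \dots, g^{\alpha^t}) for a secret \alpha, a polynomial \phi of degree at most t is committed to and opened at a point i as

\mathcal{C} = g^{\phi(\alpha)}, \qquad
w_i = g^{\psi_i(\alpha)} \;\text{ with }\; \psi_i(x) = \frac{\phi(x) - \phi(i)}{x - i}, \qquad
e(\mathcal{C}, g) \stackrel{?}{=} e\!\left(w_i,\, g^{\alpha}/g^{i}\right) \cdot e(g, g)^{\phi(i)}

Both the commitment and each evaluation witness are single group elements, matching the constant-size claims in the abstract.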

Journal ArticleDOI
TL;DR: In this paper, the authors present a calibration of TCCON data using WMO-scale instrumentation aboard aircraft that measured profiles over four Total Carbon Column Observing Network (TCCON) stations.
Abstract: The Total Carbon Column Observing Network (TCCON) produces precise measurements of the column average dry-air mole fractions of CO2, CO, CH4, N2O and H2O at a variety of sites worldwide. These observations rely on spectroscopic parameters that are not known with sufficient accuracy to compute total columns that can be used in combination with in situ measurements. The TCCON must therefore be calibrated to World Meteorological Organization (WMO) in situ trace gas measurement scales. We present a calibration of TCCON data using WMO-scale instrumentation aboard aircraft that measured profiles over four TCCON stations during 2008 and 2009. These calibrations are compared with similar observations made in 2004 and 2006. The results indicate that a single, global calibration factor for each gas accurately captures the TCCON total column data within error.
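
A minimal sketch of deriving a single global calibration factor per gas (an assumed approach shown for illustration: a least-squares ratio fit through the origin of TCCON column retrievals against WMO-scale aircraft-integrated values; the arrays and the exact fitting choice are hypothetical, not taken from the paper):

import numpy as np

def global_calibration_factor(tccon_xgas, aircraft_xgas) -> float:
    """Slope through the origin for tccon ~ beta * aircraft; dividing the
    TCCON columns by beta ties them to the WMO in situ scale."""
    tccon = np.asarray(tccon_xgas, dtype=float)
    aircraft = np.asarray(aircraft_xgas, dtype=float)
    return float(np.sum(tccon * aircraft) / np.sum(aircraft ** 2))

# Example with made-up CO2 columns (ppm): a ~1% high bias gives beta ~ 1.01.
beta = global_calibration_factor([389.0, 392.1, 386.5], [385.2, 388.2, 382.7])
print(round(beta, 4))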

01 Jan 2010
TL;DR: In this paper, the authors explore the strategies and dynamics of scaling up social innovations and propose a distinctive model of system transformation associated with a small but important group of social innovations, dependent on discontinuous and cross-scale change.
Abstract: This article explores the strategies and dynamics of scaling up social innovations. Social innovation is a complex process that profoundly changes the basic routines, resource and authority flows, or beliefs of the social system in which it occurs. Various applications of marketing and diffusion theory are helpful to some extent in understanding the trajectories or successful strategies associated with social innovation. It seems unwise, however, to rely solely on a market model to understand the dynamics of scaling social innovation, in view of the complex nature of the supply-demand relation with respect to the social innovation market. Instead, the authors propose a distinctive model of system transformation associated with a small but important group of social innovations and dependent on discontinuous and cross-scale change. This paper focuses on the challenge of scaling up social innovations in general and in particular the dynamics of going to scale.