
Showing papers by "Technion – Israel Institute of Technology published in 2010"


Journal ArticleDOI
TL;DR: In this paper, the authors report the first observation of the behaviour of a PT optical coupled system that judiciously involves a complex index potential, and observe both spontaneous PT symmetry breaking and power oscillations violating left-right symmetry.
Abstract: One of the fundamental axioms of quantum mechanics is associated with the Hermiticity of physical observables [1]. In the case of the Hamiltonian operator, this requirement not only implies real eigenenergies but also guarantees probability conservation. Interestingly, a wide class of non-Hermitian Hamiltonians can still show entirely real spectra. Among these are Hamiltonians respecting parity-time (PT) symmetry [2-7]. Even though the Hermiticity of quantum observables was never in doubt, such concepts have motivated discussions on several fronts in physics, including quantum field theories [8], non-Hermitian Anderson models [9] and open quantum systems [10,11], to mention a few. Although the impact of PT symmetry in these fields is still debated, it has recently been realized that optics can provide a fertile ground where PT-related notions can be implemented and experimentally investigated [12-15]. In this letter we report the first observation of the behaviour of a PT optical coupled system that judiciously involves a complex index potential. We observe both spontaneous PT symmetry breaking and power oscillations violating left-right symmetry. Our results may pave the way towards a new class of PT-synthetic materials with intriguing and unexpected properties that rely on non-reciprocal light propagation and tailored transverse energy flow. Before we introduce the concept of spacetime reflection in optics, we first briefly outline some of the basic aspects of this symmetry within the context of quantum mechanics. In general, a Hamiltonian H = p²/2m + V(x) ...
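For context, the PT-symmetry requirement on the complex potential has a compact standard statement. The following LaTeX lines are textbook background supplied here, not text from the paper:

```latex
% A Hamiltonian H = p^2/2m + V(x) commutes with the combined parity-time
% (PT) operator when the complex potential satisfies
V(x) = V^{*}(-x),
% i.e., its real (index) part is an even function of x while its imaginary
% (gain/loss) part is odd:
\operatorname{Re} V(x) = \operatorname{Re} V(-x), \qquad
\operatorname{Im} V(x) = -\operatorname{Im} V(-x).
```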

3,097 citations


Journal ArticleDOI
TL;DR: A classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains is given, and it is shown how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class.
Abstract: Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. Often, however, we have plentiful labeled training data from a source domain but wish to learn a classifier which performs well on a target domain with a different distribution and little or no labeled training data. In this work we investigate two questions. First, under what conditions can a classifier trained from source data be expected to perform well on target data? Second, given a small amount of labeled target data, how should we combine it during training with the large amount of labeled source data to achieve the lowest target error at test time? We address the first question by bounding a classifier's target error in terms of its source error and the divergence between the two domains. We give a classifier-induced divergence measure that can be estimated from finite, unlabeled samples from the domains. Under the assumption that there exists some hypothesis that performs well in both domains, we show that this quantity together with the empirical source error characterize the target error of a source-trained classifier. We answer the second question by bounding the target error of a model which minimizes a convex combination of the empirical source and target errors. Previous theoretical work has considered minimizing just the source error, just the target error, or weighting instances from the two domains equally. We show how to choose the optimal combination of source and target error as a function of the divergence, the sample sizes of both domains, and the complexity of the hypothesis class. The resulting bound generalizes the previously studied cases and is always at least as tight as a bound which considers minimizing only the target error or an equal weighting of source and target errors.
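Schematically, the two results described above can be written as follows; the notation is ours, following the usual presentation of this line of work, and is meant only as a reading aid:

```latex
% Train on a convex combination of empirical target and source errors:
\hat{\epsilon}_{\alpha}(h) \;=\; \alpha\,\hat{\epsilon}_{T}(h) \;+\; (1-\alpha)\,\hat{\epsilon}_{S}(h).
% The true target error is then bounded, up to finite-sample complexity
% terms, by
\epsilon_{T}(h) \;\lesssim\; \hat{\epsilon}_{\alpha}(h)
  \;+\; (1-\alpha)\left( \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_{S},\mathcal{D}_{T}) + \lambda \right),
% where the divergence term is the classifier-induced divergence estimable
% from unlabeled samples and \lambda is the error of the best joint hypothesis.
```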

2,921 citations


Book ChapterDOI
24 Jun 2010
TL;DR: This paper deals with the single image scale-up problem using sparse-representation modeling, and assumes a local Sparse-Land model on image patches, serving as regularization, to recover an original image from its blurred and down-scaled noisy version.
Abstract: This paper deals with the single image scale-up problem using sparse-representation modeling. The goal is to recover an original image from its blurred and down-scaled noisy version. Since this problem is highly ill-posed, a prior is needed in order to regularize it. The literature offers various ways to address this problem, ranging from simple linear space-invariant interpolation schemes (e.g., bicubic interpolation), to spatially-adaptive and non-linear filters of various sorts. We embark from a recently-proposed successful algorithm by Yang et al. [1,2], and similarly assume a local Sparse-Land model on image patches, serving as regularization. Several important modifications to the above-mentioned solution are introduced, and are shown to lead to improved results. These modifications include a major simplification of the overall process both in terms of the computational complexity and the algorithm architecture, using a different training approach for the dictionary-pair, and introducing the ability to operate without a training-set by boot-strapping the scale-up task from the given low-resolution image. We demonstrate the results on true images, showing both visual and PSNR improvements.

2,667 citations


Journal ArticleDOI
Andre Franke, Dermot P.B. McGovern, Jeffrey C. Barrett, Kai Wang, Graham L. Radford-Smith, Tariq Ahmad, Charlie W. Lees, Tobias Balschun, James Lee, Rebecca L. Roberts, Carl A. Anderson, Joshua C. Bis, Suzanne Bumpstead, David Ellinghaus, Eleonora M. Festen, Michel Georges, Todd Green, Talin Haritunians, Luke Jostins, Anna Latiano, Christopher G. Mathew, Grant W. Montgomery, Natalie J. Prescott, Soumya Raychaudhuri, Jerome I. Rotter, Philip Schumm, Yashoda Sharma, Lisa A. Simms, Kent D. Taylor, David C. Whiteman, Cisca Wijmenga, Robert N. Baldassano, Murray L. Barclay, Theodore M. Bayless, Stephan Brand, Carsten Büning, Albert Cohen, Jean Frederick Colombel, Mario Cottone, Laura Stronati, Ted Denson, Martine De Vos, Renata D'Incà, Marla Dubinsky, Cathryn Edwards, Timothy H. Florin, Denis Franchimont, Richard B. Gearry, Jürgen Glas, André Van Gossum, Stephen L. Guthery, Jonas Halfvarson, Hein W. Verspaget, Jean-Pierre Hugot, Amir Karban, Debby Laukens, Ian C. Lawrance, Marc Lémann, Arie Levine, Cécile Libioulle, Edouard Louis, Craig Mowat, William G. Newman, Julián Panés, Anne M. Phillips, Deborah D. Proctor, Miguel Regueiro, Richard K Russell, Paul Rutgeerts, Jeremy D. Sanderson, Miquel Sans, Frank Seibold, A. Hillary Steinhart, Pieter C. F. Stokkers, Leif Törkvist, Gerd A. Kullak-Ublick, David C. Wilson, Thomas D. Walters, Stephan R. Targan, Steven R. Brant, John D. Rioux, Mauro D'Amato, Rinse K. Weersma, Subra Kugathasan, Anne M. Griffiths, John C. Mansfield, Severine Vermeire, Richard H. Duerr, Mark S. Silverberg, Jack Satsangi, Stefan Schreiber, Judy H. Cho, Vito Annese, Hakon Hakonarson, Mark J. Daly, Miles Parkes
TL;DR: A meta-analysis of six Crohn's disease genome-wide association studies identified 30 new susceptibility loci, and a series of in silico analyses highlighted particular genes within these loci, implicating functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP.
Abstract: We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10⁻⁸). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.
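The computational core of such a meta-analysis is a fixed-effects, inverse-variance combination of per-study effect estimates for each SNP. Below is a minimal Python sketch of that standard technique; the function name and the example numbers are ours and purely illustrative:

```python
import numpy as np
from scipy.stats import norm

def inverse_variance_meta(betas, ses):
    """Fixed-effects inverse-variance meta-analysis for one SNP, given
    per-study effect estimates (betas) and their standard errors (ses)."""
    betas, ses = np.asarray(betas, float), np.asarray(ses, float)
    w = 1.0 / ses**2                       # precision weights
    beta = np.sum(w * betas) / np.sum(w)   # pooled effect size
    se = np.sqrt(1.0 / np.sum(w))          # pooled standard error
    z = beta / se
    p = 2.0 * norm.sf(abs(z))              # two-sided p-value (normal approx.)
    return beta, se, p

# A pooled signal reaches genome-wide significance if p < 5e-8.
print(inverse_variance_meta([0.21, 0.18, 0.25], [0.05, 0.07, 0.06]))
```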

2,482 citations


Journal ArticleDOI
TL;DR: An overview of the theory and currently known techniques for multi-cell MIMO (multiple input multiple output) cooperation in wireless networks is presented and a few promising and quite fundamental research avenues are also suggested.
Abstract: This paper presents an overview of the theory and currently known techniques for multi-cell MIMO (multiple input multiple output) cooperation in wireless networks. In dense networks where interference emerges as the key capacity-limiting factor, multi-cell cooperation can dramatically improve the system performance. Remarkably, such techniques literally exploit inter-cell interference by allowing the user data to be jointly processed by several interfering base stations, thus mimicking the benefits of a large virtual MIMO array. Multi-cell MIMO cooperation concepts are examined from different perspectives, including an examination of the fundamental information-theoretic limits, a review of the coding and signal processing algorithmic developments, and, going beyond that, consideration of very practical issues related to scalability and system-level integration. A few promising and quite fundamental research avenues are also suggested.

1,911 citations


Journal ArticleDOI
22 Apr 2010
TL;DR: From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
Abstract: Sparse and redundant representation modeling of data assumes an ability to describe signals as linear combinations of a few atoms from a pre-specified dictionary. As such, the choice of the dictionary that sparsifies the signals is crucial for the success of this model. In general, the choice of a proper dictionary can be made in one of two ways: i) building a sparsifying dictionary based on a mathematical model of the data, or ii) learning a dictionary to perform best on a training set. In this paper we describe the evolution of these two paradigms. As manifestations of the first approach, we cover topics such as wavelets, wavelet packets, contourlets, and curvelets, all aiming to exploit 1-D and 2-D mathematical models for constructing effective dictionaries for signals and images. Dictionary learning takes a different route, attaching the dictionary to a set of examples it is supposed to serve. From the seminal work of Field and Olshausen, through the MOD, the K-SVD, the Generalized PCA and others, this paper surveys the various options such training has to offer, up to the most recent contributions and structures.
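As a concrete illustration of the learning route, the MOD update mentioned above has a closed form: with the sparse codes held fixed, the dictionary minimizing the representation error is a least-squares solution. A minimal sketch under our own naming, with the sparse-coding step (e.g., OMP) assumed to be done elsewhere:

```python
import numpy as np

def mod_dictionary_update(X, A):
    """One MOD step: given training signals X (n x m) and current sparse
    codes A (k x m), solve min_D ||X - D A||_F^2 in closed form and
    renormalize the atoms to unit length."""
    D = X @ A.T @ np.linalg.pinv(A @ A.T)              # least-squares fit
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)  # unit-norm columns
    return D
```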

1,345 citations


Book
01 Jan 2010
TL;DR: Theory is made easier to understand with 200 illustrative examples, and students can test their understanding with over 350 end-of-chapter review questions.
Abstract: Understand the structure, behavior, and limitations of logic machines with this thoroughly updated third edition. Many new topics are included, such as CMOS gates, logic synthesis, logic design for emerging nanotechnologies, digital system testing, and asynchronous circuit design, to bring students up-to-speed with modern developments. The intuitive examples and minimal formalism of the previous edition are retained, giving students a text that is logical and easy to follow, yet rigorous. Kohavi and Jha begin with the basics, and then cover combinational logic design and testing, before moving on to more advanced topics in finite-state machine design and testing. Theory is made easier to understand with 200 illustrative examples, and students can test their understanding with over 350 end-of-chapter review questions.

1,315 citations


Journal ArticleDOI
TL;DR: The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
Abstract: We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block k-sparse signals in no more than k steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed ℓ2/ℓ1-optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.
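A minimal sketch of a block version of orthogonal matching pursuit, as discussed above, assuming equal-sized consecutive blocks of columns; the interface and helper names are ours:

```python
import numpy as np

def block_omp(D, y, block_size, k):
    """Greedy block-sparse recovery sketch: repeatedly pick the block of
    columns of D most correlated with the residual, then re-fit y by least
    squares over all selected blocks. Assumes D's column count is a
    multiple of block_size."""
    blocks = [np.arange(i, i + block_size)
              for i in range(0, D.shape[1], block_size)]
    chosen, r = [], y.astype(float).copy()
    for _ in range(k):
        scores = [np.linalg.norm(D[:, b].T @ r) for b in blocks]
        chosen.append(int(np.argmax(scores)))
        idx = np.concatenate([blocks[j] for j in chosen])
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```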

1,289 citations


Journal ArticleDOI
TL;DR: The exploration and exploitation framework has attracted substantial interest from scholars studying phenomena such as organizational learning, knowledge management, innovation, organizational design, and strategic alliances as discussed by the authors, and it has become an essential lens for interpreting various behaviors and outcomes within and across organizations.
Abstract: Jim March's framework of exploration and exploitation has drawn substantial interest from scholars studying phenomena such as organizational learning, knowledge management, innovation, organizational design, and strategic alliances. This framework has become an essential lens for interpreting various behaviors and outcomes within and across organizations. Despite its straightforwardness, this framework has generated debates concerning the definition of exploration and exploitation, and their measurement, antecedents, and consequences. We critically review the growing literature on exploration and exploitation, discuss various perspectives, raise conceptual and empirical concerns, underscore challenges for further development of this literature, and provide directions for future research.

1,241 citations


Journal ArticleDOI
TL;DR: This paper considers the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum, and proposes a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms.
Abstract: Conventional sub-Nyquist sampling methods for analog signals exploit prior information about the spectral support. In this paper, we consider the challenging problem of blind sub-Nyquist sampling of multiband signals, whose unknown frequency support occupies only a small portion of a wide spectrum. Our primary design goals are efficient hardware implementation and low computational load on the supporting digital processing. We propose a system, named the modulated wideband converter, which first multiplies the analog signal by a bank of periodic waveforms. The product is then low-pass filtered and sampled uniformly at a low rate, which is orders of magnitude smaller than Nyquist. Perfect recovery from the proposed samples is achieved under certain necessary and sufficient conditions. We also develop a digital architecture, which allows either reconstruction of the analog input, or processing of any band of interest at a low rate, that is, without interpolating to the high Nyquist rate. Numerical simulations demonstrate many engineering aspects: robustness to noise and mismodeling, potential hardware simplifications, real-time performance for signals with time-varying support and stability to quantization effects. We compare our system with two previous approaches: periodic nonuniform sampling, which is bandwidth limited by existing hardware devices, and the random demodulator, which is restricted to discrete multitone signals and has a high computational load. In the broader context of Nyquist sampling, our scheme has the potential to break through the bandwidth barrier of state-of-the-art analog conversion technologies such as interleaved converters.
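The front end described above is easy to mimic in a toy simulation: mix the (Nyquist-rate) signal with periodic ±1 sequences, low-pass filter, and decimate. The sketch below is illustrative only; the channel count, chip period, filter length, and cutoff are our assumptions, not the paper's design values:

```python
import numpy as np
from scipy.signal import firwin, lfilter

def mwc_frontend(x, num_channels, period, decimation, seed=0):
    """Toy modulated-wideband-converter front end: each channel multiplies
    x by a periodic +/-1 chipping sequence, low-pass filters, and keeps
    every `decimation`-th sample."""
    rng = np.random.default_rng(seed)
    lp = firwin(129, cutoff=1.0 / decimation)         # low-pass FIR prototype
    channels = []
    for _ in range(num_channels):
        chips = rng.choice([-1.0, 1.0], size=period)  # one chip period
        p = np.resize(chips, x.size)                  # periodic waveform
        mixed = x * p                                 # mixing stage
        filtered = lfilter(lp, [1.0], mixed)          # stand-in for analog LPF
        channels.append(filtered[::decimation])       # low-rate sampling
    return np.array(channels)
```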

1,186 citations


Journal ArticleDOI
01 Jan 2010 - Database
TL;DR: A key focus is on gene-set analyses, which leverage GeneCards’ unique wealth of combinatorial annotations, addressing a host of applications, including microarray data analysis, cross-database annotation mapping and gene-disorder associations for drug targeting.
Abstract: GeneCards (www.genecards.org) is a comprehensive, authoritative compendium of annotative information about human genes, widely used for nearly 15 years. Its gene-centric content is automatically mined and integrated from over 80 digital sources, resulting in a web-based deep-linked card for each of >73,000 human gene entries, encompassing the following categories: protein coding, pseudogene, RNA gene, genetic locus, cluster and uncategorized. We now introduce GeneCards Version 3, featuring a speedy and sophisticated search engine and a revamped, technologically enabling infrastructure, catering to the expanding needs of biomedical researchers. A key focus is on gene-set analyses, which leverage GeneCards' unique wealth of combinatorial annotations. These include the GeneALaCart batch query facility, which tabulates user-selected annotations for multiple genes and GeneDecks, which identifies similar genes with shared annotations, and finds set-shared annotations by descriptor enrichment analysis. Such set-centric features address a host of applications, including microarray data analysis, cross-database annotation mapping and gene-disorder associations for drug targeting. We highlight the new Version 3 database architecture, its multi-faceted search engine, and its semi-automated quality assurance system. Data enhancements include an expanded visualization of gene expression patterns in normal and cancer tissues, an integrated alternative splicing pattern display, and augmented multi-source SNPs and pathways sections. GeneCards now provides direct links to gene-related research reagents such as antibodies, recombinant proteins, DNA clones and inhibitory RNAs and features gene-related drugs and compounds lists. We also portray the GeneCards Inferred Functionality Score annotation landscape tool for scoring a gene's functional information status. Finally, we delineate examples of applications and collaborations that have benefited from the GeneCards suite. Database URL: www.genecards.org.

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A new type of saliency is proposed – context-aware saliency – which aims at detecting the image regions that represent the scene and a detection algorithm is presented which is based on four principles observed in the psychological literature.
Abstract: We propose a new type of saliency – context-aware saliency – which aims at detecting the image regions that represent the scene. This definition differs from previous definitions whose goal is to either identify fixation points or detect the dominant object. In accordance with our saliency definition, we present a detection algorithm which is based on four principles observed in the psychological literature. The benefits of the proposed approach are evaluated in two applications where the context of the dominant objects is just as essential as the objects themselves. In image retargeting we demonstrate that using our saliency prevents distortions in the important regions. In summarization we show that our saliency helps to produce compact, appealing, and informative summaries.
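The single-scale core of this style of patch-based saliency fits in a few lines: a patch is salient when even its most similar patches are dissimilar, with spatial distance attenuating similarity. A rough Python sketch under our own naming and parameter choices (multi-scale processing and the context refinements are omitted):

```python
import numpy as np

def patch_saliency(patches, positions, K=64, c=3.0):
    """patches: (n, d) vectorized patches; positions: (n, 2) normalized
    patch centers. A patch's saliency grows with the mean dissimilarity
    to its K most similar patches, where spatial distance discounts
    appearance similarity."""
    d_color = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=2)
    d_pos = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    d = d_color / (1.0 + c * d_pos)          # dissimilarity measure
    np.fill_diagonal(d, np.inf)              # ignore self-matches
    nearest = np.sort(d, axis=1)[:, :K]      # K most similar patches
    return 1.0 - np.exp(-nearest.mean(axis=1))
```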

Journal ArticleDOI
TL;DR: The updated version of the 2009 European Association of Urology guidelines on ED and PE is presented, summarising the present information on ED and PE.

Journal ArticleDOI
TL;DR: The time has therefore come to move to the next phase of scientific inquiry, in which constructs are augmented by testing their relationships with antecedents, moderators and mediators, as well as with other established constructs.

Journal ArticleDOI
TL;DR: The review highlights the main achievements reported in the last 3 years: harnessing the casein micelle, a natural nanovehicle of nutrients, for delivering hydrophobic bioactives, and discovering unique nanotubes based on enzymatic hydrolysis of α-la.
Abstract: Milk proteins are natural vehicles for bioactives. Many of their structural and physicochemical properties facilitate their functionality in delivery systems. These properties include binding of ions and small molecules; excellent surface and self-assembly properties; superb gelation properties; pH-responsive gel swelling behavior, useful for programmable release; interactions with other macromolecules to form complexes and conjugates with synergistic combinations of properties; various shielding capabilities, essential for protecting sensitive payload; and biocompatibility and biodegradability, enabling control of the bioaccessibility of the bioactive and promotion of its bioavailability. The review highlights the main achievements reported in the last 3 years: harnessing the casein micelle, a natural nanovehicle of nutrients, for delivering hydrophobic bioactives; discovering unique nanotubes based on enzymatic hydrolysis of α-la; introduction of novel encapsulation techniques based on cold-set gelation for delivering heat-sensitive bioactives including probiotics; developments and use of Maillard reaction based conjugates of milk proteins and polysaccharides for encapsulating bioactives; introduction of β-lg–pectin nanocomplexes for delivery of hydrophobic nutraceuticals in clear acid beverages; development of core-shell nanoparticles made of heat-aggregated β-lg, nanocoated by beet-pectin, for bioactive delivery; synergizing the surface properties of whey proteins with stabilization properties of polysaccharides in advanced W/O/W and O/W/O double emulsions; application of milk proteins for drug targeting, including lactoferrin or bovine serum albumin conjugated nanoparticles for effective in vivo drug delivery across the blood-brain barrier; beta casein nanoparticles for targeting gastric cancer; fatty acid-coated bovine serum albumin nanoparticles for intestinal delivery, and Maillard conjugates of casein and resistant starch for colon targeting. Major future challenges are spotlighted.

Journal ArticleDOI
TL;DR: A global, network-based method for prioritizing disease genes and inferring protein complex associations, called PRINCE, is presented and applied to study three multi-factorial diseases for which some causal genes have already been found: prostate cancer, Alzheimer's disease and type 2 diabetes mellitus.
Abstract: A fundamental challenge in human health is the identification of disease-causing genes. Recently, several studies have tackled this challenge via a network-based approach, motivated by the observation that genes causing the same or similar diseases tend to lie close to one another in a network of protein-protein or functional interactions. However, most of these approaches use only local network information in the inference process and are restricted to inferring single gene associations. Here, we provide a global, network-based method for prioritizing disease genes and inferring protein complex associations, which we call PRINCE. The method is based on formulating constraints on the prioritization function that relate to its smoothness over the network and usage of prior information. We exploit this function to predict not only genes but also protein complex associations with a disease of interest. We test our method on gene-disease association data, evaluating both the prioritization achieved and the protein complexes inferred. We show that our method outperforms extant approaches in both tasks. Using data on 1,369 diseases from the OMIM knowledgebase, our method is able (in a cross validation setting) to rank the true causal gene first for 34% of the diseases, and infer 139 disease-related complexes that are highly coherent in terms of the function, expression and conservation of their member proteins. Importantly, we apply our method to study three multi-factorial diseases for which some causal genes have been found already: prostate cancer, Alzheimer's disease and type 2 diabetes mellitus. PRINCE's predictions for these diseases highly match the known literature, suggesting several novel causal genes and protein complexes for further investigation.
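The smoothness-plus-prior formulation described above is typically solved by iterative propagation over the degree-normalized network. A schematic Python sketch (parameter names and the symmetric normalization are our choices):

```python
import numpy as np

def propagate_scores(W, prior, alpha=0.9, n_iter=100):
    """Iterate F <- alpha * W_norm @ F + (1 - alpha) * prior, where W is a
    symmetric protein-protein interaction adjacency matrix and `prior`
    holds disease-similarity scores on known causal genes."""
    deg = np.maximum(W.sum(axis=1), 1e-12)
    d = 1.0 / np.sqrt(deg)
    W_norm = W * d[:, None] * d[None, :]      # symmetric degree normalization
    F = prior.copy()
    for _ in range(n_iter):
        F = alpha * (W_norm @ F) + (1.0 - alpha) * prior
    return F                                  # ranking score per gene
```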

Journal ArticleDOI
TL;DR: Targeting LOXL2 with an inhibitory monoclonal antibody (AB0023) was efficacious in both primary and metastatic xenograft models of cancer, as well as in liver and lung fibrosis models, and outperformed the small-molecule lysyl oxidase inhibitor β-aminopropionitrile.
Abstract: We have identified a new role for the matrix enzyme lysyl oxidase-like-2 (LOXL2) in the creation and maintenance of the pathologic microenvironment of cancer and fibrotic disease. Our analysis of biopsies from human tumors and fibrotic lung and liver tissues revealed an increase in LOXL2 in disease-associated stroma and limited expression in healthy tissues. Targeting LOXL2 with an inhibitory monoclonal antibody (AB0023) was efficacious in both primary and metastatic xenograft models of cancer, as well as in liver and lung fibrosis models. Inhibition of LOXL2 resulted in a marked reduction in activated fibroblasts, desmoplasia and endothelial cells, decreased production of growth factors and cytokines and decreased transforming growth factor-beta (TGF-beta) pathway signaling. AB0023 outperformed the small-molecule lysyl oxidase inhibitor beta-aminopropionitrile. The efficacy and safety of LOXL2-specific AB0023 represents a new therapeutic approach with broad applicability in oncologic and fibrotic diseases.

Posted Content
TL;DR: In this article, a condition on the measurement/sensing matrix is introduced, which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries.
Abstract: This article presents novel results concerning the recovery of signals from undersampled data in the common situation where such signals are not sparse in an orthonormal basis or incoherent dictionary, but in a truly redundant dictionary. This work thus bridges a gap in the literature and shows not only that compressed sensing is viable in this context, but also that accurate recovery is possible via an L1-analysis optimization problem. We introduce a condition on the measurement/sensing matrix, which is a natural generalization of the now well-known restricted isometry property, and which guarantees accurate recovery of signals that are nearly sparse in (possibly) highly overcomplete and coherent dictionaries. This condition imposes no incoherence restriction on the dictionary and our results may be the first of this kind. We discuss practical examples and the implications of our results on those applications, and complement our study by demonstrating the potential of L1-analysis for such problems.
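For reference, the ℓ1-analysis recovery problem referred to above takes the following standard form (generic notation):

```latex
% Given noisy measurements y = Ax + z and an overcomplete dictionary D in
% which the signal is (nearly) sparse, solve
\hat{x} \;=\; \arg\min_{\tilde{x}} \;\lVert D^{*}\tilde{x} \rVert_{1}
\quad \text{subject to} \quad \lVert A\tilde{x} - y \rVert_{2} \le \varepsilon .
```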

Journal ArticleDOI
TL;DR: A genome-wide meta-analysis of gene sets (groups of genes that encode the same biological pathway or process) in 410 samples from patients with symptomatic and subclinical Parkinson’s disease and from healthy controls identified 10 gene sets with previously unknown associations with Parkinson’s disease.
Abstract: Parkinson’s disease affects 5 million people worldwide, but the molecular mechanisms underlying its pathogenesis are still unclear. Here, we report a genome-wide meta-analysis of gene sets (groups of genes that encode the same biological pathway or process) in 410 samples from patients with symptomatic Parkinson’s and subclinical disease and healthy controls. We analyzed 6.8 million raw data points from nine genome-wide expression studies, and 185 laser-captured human dopaminergic neuron and substantia nigra transcriptomes, followed by two-stage replication on three platforms. We found 10 gene sets with previously unknown associations with Parkinson’s disease. These gene sets pinpoint defects in mitochondrial electron transport, glucose utilization, and glucose sensing and reveal that they occur early in disease pathogenesis. Genes controlling cellular bioenergetics that are expressed in response to peroxisome proliferator–activated receptor γ coactivator-1α (PGC-1α) are underexpressed in Parkinson’s disease patients. Activation of PGC-1α results in increased expression of nuclear-encoded subunits of the mitochondrial respiratory chain and blocks the dopaminergic neuron loss induced by mutant α-synuclein or the pesticide rotenone in cellular disease models. Our systems biology analysis of Parkinson’s disease identifies PGC-1α as a potential therapeutic target for early intervention.

Journal ArticleDOI
25 Feb 2010
TL;DR: The role of this recent model in image processing, its rationale, and models related to it are reviewed; ways to employ these tools for various image-processing tasks are discussed, and several applications in which state-of-the-art results are obtained are presented.
Abstract: Much of the progress made in image processing in the past decades can be attributed to better modeling of image content and a wise deployment of these models in relevant applications. This path of models spans from the simple l2-norm smoothness, through robust, thus edge-preserving, measures of smoothness (e.g. total variation), to the very recent models that employ sparse and redundant representations. In this paper, we review the role of this recent model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. We discuss ways to employ these tools for various image-processing tasks and present several applications in which state-of-the-art results are obtained.

Journal ArticleDOI
TL;DR: The rationale for the development of reconfigurable manufacturing systems (RMS), which possess the advantages both of dedicated lines and of flexible systems, is explained in this article, and a rigorous mathematical method is introduced for designing RMS with this recommended structure.

Journal ArticleDOI
TL;DR: The results showed that the nanosensor array could differentiate between ‘healthy’ and ‘cancerous’ breath, and between the breath of patients having different cancer types; these findings could lead to the development of an inexpensive, easy-to-use, portable, non-invasive tool that overcomes many of the deficiencies associated with the currently available diagnostic methods for cancer.
Abstract: Tumour growth is accompanied by gene and/or protein changes that may lead to peroxidation of the cell membrane species and, hence, to the emission of volatile organic compounds (VOCs). In this study, we investigated the ability of a nanosensor array to discriminate between breath VOCs that characterise healthy states and the most widespread cancer states in the developed world: lung, breast, colorectal, and prostate cancers. Exhaled alveolar breath was collected from 177 volunteers aged 20–75 years (patients with lung, colon, breast, and prostate cancers and healthy controls). Breath from cancerous subjects was collected before any treatment. The healthy controls were classified as healthy on the basis of subjective patient data. The breath of volunteers was examined by a tailor-made array of cross-reactive nanosensors based on organically functionalised gold nanoparticles and gas chromatography linked to the mass spectrometry technique (GC-MS). The results showed that the nanosensor array could differentiate between ‘healthy’ and ‘cancerous’ breath, and, furthermore, between the breath of patients having different cancer types. Moreover, the nanosensor array could distinguish between the breath patterns of different cancers in the same statistical analysis, irrespective of age, gender, lifestyle, and other confounding factors. The GC-MS results showed that each cancer could have a unique pattern of VOCs, when compared with healthy states, but not when compared with other cancer types. The reported results could lead to the development of an inexpensive, easy-to-use, portable, non-invasive tool that overcomes many of the deficiencies associated with the currently available diagnostic methods for cancer.

Journal ArticleDOI
TL;DR: The advantages of sparse dictionaries are discussed, an efficient algorithm for training them is presented, and the advantages of the proposed structure are demonstrated for 3-D image denoising.
Abstract: An efficient and flexible dictionary structure is proposed for sparse and redundant signal representation. The proposed sparse dictionary is based on a sparsity model of the dictionary atoms over a base dictionary, and takes the form D = ΦA, where Φ is a fixed base dictionary and A is sparse. The sparse dictionary provides efficient forward and adjoint operators, has a compact representation, and can be effectively trained from given example data. In this way, the sparse structure bridges the gap between implicit dictionaries, which have efficient implementations yet lack adaptability, and explicit dictionaries, which are fully adaptable but non-efficient and costly to deploy. In this paper, we discuss the advantages of sparse dictionaries, and present an efficient algorithm for training them. We demonstrate the advantages of the proposed structure for 3-D image denoising.
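The practical benefit of the D = ΦA structure is that D never needs to be formed explicitly: applying it costs one sparse multiply plus one base-dictionary transform. A small Python sketch of this idea, with illustrative sizes and an explicit DCT matrix standing in for what would in practice be an implicit fast transform:

```python
import numpy as np
from scipy.fft import dct
from scipy.sparse import random as sparse_random

n, k = 256, 512                                      # illustrative sizes
Phi = dct(np.eye(n), norm="ortho")                   # stand-in base dictionary
A = sparse_random(n, k, density=0.05, format="csc")  # sparse atom representations

def apply_D(x):
    """Apply D = Phi @ A without forming the dense n-by-k matrix D."""
    return Phi @ (A @ x)

y = apply_D(np.random.randn(k))   # synthesize a signal from coefficients
```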

Journal ArticleDOI
Georges Aad, Brad Abbott, Jalal Abdallah, A. A. Abdelalim, and 3,098 more authors from 192 institutions
TL;DR: In this article, observations made with the ATLAS detector of a centrality-dependent dijet asymmetry in the collisions of lead ions at the Large Hadron Collider are reported: the transverse energies of dijets in opposite hemispheres become systematically more unbalanced with increasing event centrality, leading to a large number of events which contain highly asymmetric dijets.
Abstract: By using the ATLAS detector, observations have been made of a centrality-dependent dijet asymmetry in the collisions of lead ions at the Large Hadron Collider. In a sample of lead-lead events with a per-nucleon center of mass energy of 2.76 TeV, selected with a minimum bias trigger, jets are reconstructed in fine-grained, longitudinally segmented electromagnetic and hadronic calorimeters. The transverse energies of dijets in opposite hemispheres are observed to become systematically more unbalanced with increasing event centrality leading to a large number of events which contain highly asymmetric dijets. This is the first observation of an enhancement of events with such large dijet asymmetries, not observed in proton-proton collisions, which may point to an interpretation in terms of strong jet energy loss in a hot, dense medium.
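For reference, the dijet asymmetry used in such measurements is conventionally defined as:

```latex
A_J \;=\; \frac{E_{T1} - E_{T2}}{E_{T1} + E_{T2}},
% where E_{T1} and E_{T2} are the transverse energies of the leading and
% subleading jets; A_J is near 0 for balanced dijets and grows toward 1
% as the pair becomes more unbalanced.
```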

Proceedings ArticleDOI
13 Jun 2010
TL;DR: A scale-invariant version of the heat kernel descriptor is developed that can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling.
Abstract: One of the biggest challenges in non-rigid shape retrieval and comparison is the design of a shape descriptor that would maintain invariance under a wide class of transformations the shape can undergo. Recently, heat kernel signature was introduced as an intrinsic local shape descriptor based on diffusion scale-space analysis. In this paper, we develop a scale-invariant version of the heat kernel descriptor. Our construction is based on a logarithmically sampled scale-space in which shape scaling corresponds, up to a multiplicative constant, to a translation. This translation is undone using the magnitude of the Fourier transform. The proposed scale-invariant local descriptors can be used in the bag-of-features framework for shape retrieval in the presence of transformations such as isometric deformations, missing data, topological noise, and global and local scaling. We get significant performance improvement over state-of-the-art algorithms on recently established non-rigid shape retrieval benchmarks.
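A compact sketch of this construction, given the Laplace-Beltrami eigenvalues and the eigenfunction values at one point (the interface and sampling choices are ours):

```python
import numpy as np

def scale_invariant_hks(evals, evecs_at_x, taus, n_freq=16):
    """evals: Laplace-Beltrami eigenvalues; evecs_at_x: eigenfunction
    values at a point x; taus: log2-spaced time axis. Sample the HKS on a
    logarithmic scale, take log + discrete derivative to cancel the
    multiplicative constant induced by scaling, then keep Fourier
    magnitudes, which are invariant to the remaining translation."""
    t = 2.0 ** np.asarray(taus)
    hks = np.array([(np.exp(-evals * ti) * evecs_at_x**2).sum() for ti in t])
    d_log = np.diff(np.log(hks))            # undo scaling up to translation
    return np.abs(np.fft.fft(d_log))[:n_freq]
```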

Journal ArticleDOI
TL;DR: It is shown that graphite spontaneously exfoliates into single-layer graphene in chlorosulphonic acid, and dissolves at isotropic concentrations as high as approximately 2 mg ml⁻¹, which is an order of magnitude higher than previously reported values.
Abstract: Graphene combines unique electronic properties and surprising quantum effects with outstanding thermal and mechanical properties. Many potential applications, including electronics and nanocomposites, require that graphene be dispersed and processed in a fluid phase. Here, we show that graphite spontaneously exfoliates into single-layer graphene in chlorosulphonic acid, and dissolves at isotropic concentrations as high as approximately 2 mg ml⁻¹, which is an order of magnitude higher than previously reported values. This occurs without the need for covalent functionalization, surfactant stabilization, or sonication, which can compromise the properties of graphene or reduce flake size. We also report spontaneous formation of liquid-crystalline phases at high concentrations (approximately 20-30 mg ml⁻¹). Transparent, conducting films are produced from these dispersions at 1,000 Ω per square and approximately 80% transparency. High-concentration solutions, both isotropic and liquid crystalline, could be particularly useful for making flexible electronics as well as multifunctional fibres.

Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of state-of-the-art methods and their applications in the field of water resources planning and management.
Abstract: During the last two decades, the water resources planning and management profession has seen a dramatic increase in the development and application of various types of evolutionary algorithms (EAs). This observation is especially true for application of genetic algorithms, arguably the most popular of the several types of EAs. Generally speaking, EAs repeatedly prove to be flexible and powerful tools in solving an array of complex water resources problems. This paper provides a comprehensive review of state-of-the-art methods and their applications in the field of water resources planning and management. A primary goal in this ASCE Task Committee effort is to identify in an organized fashion some of the seminal contributions of EAs in the areas of water distribution systems, urban drainage and sewer systems, water supply and wastewater treatment, hydrologic and fluvial modeling, groundwater systems, and parameter identification. The paper also identifies major challenges and opportunities for the future, ...
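For readers new to EAs, the skeleton shared by many of the genetic algorithms surveyed (selection, crossover, mutation) fits in a short sketch. This is a didactic toy under our own parameter choices, not any specific method from the review:

```python
import numpy as np

def simple_ga(fitness, n_vars, pop_size=50, n_gen=200, sigma=0.1, seed=0):
    """Real-coded GA: tournament selection, uniform crossover, Gaussian
    mutation; returns the best individual found."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-1.0, 1.0, size=(pop_size, n_vars))

    def tournament(fit):
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if fit[i] > fit[j] else pop[j]

    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        children = []
        for _ in range(pop_size):
            a, b = tournament(fit), tournament(fit)
            mask = rng.random(n_vars) < 0.5          # uniform crossover
            child = np.where(mask, a, b) + rng.normal(0, sigma, n_vars)
            children.append(child)
        pop = np.array(children)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(fit))]
```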

Journal ArticleDOI
TL;DR: It is shown that the approximate convex problem solved at each inner iteration can be cast as a conic quadratic programming problem, hence large scale TTD problems can be efficiently solved by the proposed method.
Abstract: We describe a general scheme for solving nonconvex optimization problems, where in each iteration the nonconvex feasible set is approximated by an inner convex approximation. The latter is defined using an upper bound on the nonconvex constraint functions. Under appropriate conditions, a monotone convergence to a KKT point is established. The scheme is applied to truss topology design (TTD) problems, where the nonconvex constraints are associated with bounds on displacements and stresses. It is shown that the approximate convex problem solved at each inner iteration can be cast as a conic quadratic programming problem, hence large scale TTD problems can be efficiently solved by the proposed method.
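Schematically, the inner-approximation scheme described above iterates as follows (notation ours):

```latex
% For a nonconvex constraint g(x) <= 0, choose a convex upper bound
% \bar{g}(\cdot\,; x^k) with g(x) \le \bar{g}(x; x^k) for all x and
% \bar{g}(x^k; x^k) = g(x^k), then solve the convex inner problem
x^{k+1} \;=\; \arg\min_{x} \; f(x)
\quad \text{subject to} \quad \bar{g}\bigl(x;\,x^{k}\bigr) \le 0 .
% Every iterate remains feasible for the original problem, and under the
% paper's conditions the sequence converges monotonically to a KKT point.
```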

Journal ArticleDOI
TL;DR: This work uses recently released sequences from the 1000 Genomes Project to identify two western African-specific missense mutations in the neighboring APOL1 gene, and demonstrates that these are more strongly associated with ESKD than previously reported MYH9 variants.
Abstract: MYH9 has been proposed as a major genetic risk locus for a spectrum of nondiabetic end stage kidney disease (ESKD). We use recently released sequences from the 1000 Genomes Project to identify two western African-specific missense mutations (S342G and I384M) in the neighboring APOL1 gene, and demonstrate that these are more strongly associated with ESKD than previously reported MYH9 variants. The APOL1 gene product, apolipoprotein L-1, has been studied for its roles in trypanosomal lysis, autophagic cell death, lipid metabolism, as well as vascular and other biological activities. We also show that the distribution of these newly identified APOL1 risk variants in African populations is consistent with the pattern of African ancestry ESKD risk previously attributed to MYH9.

Mapping by admixture linkage disequilibrium (MALD) localized an interval on chromosome 22, in a region that includes the MYH9 gene, which was shown to contain African ancestry risk variants associated with certain forms of ESKD (Kao et al. 2008; Kopp et al. 2008). MYH9 encodes nonmuscle myosin heavy chain IIa, a major cytoskeletal nanomotor protein expressed in many cell types, including podocyte cells of the renal glomerulus. Moreover, 39 different coding region mutations in MYH9 have been identified in patients with a group of rare syndromes, collectively termed the Giant Platelet Syndromes, with clear autosomal dominant inheritance, and various clinical manifestations, sometimes also including glomerular pathology and chronic kidney disease (Kopp 2010; Sekine et al. 2010). Accordingly, MYH9 was further explored in these studies as the leading candidate gene responsible for the MALD signal. Dense mapping of MYH9 identified individual single nucleotide polymorphisms (SNPs) and sets of such SNPs grouped as haplotypes that were found to be highly associated with a large and important group of ESKD risk phenotypes, which as a consequence were designated as MYH9-associated nephropathies (Bostrom and Freedman 2010). These included HIV-associated nephropathy (HIVAN), primary nonmonogenic forms of focal segmental glomerulosclerosis, and hypertension affiliated chronic kidney disease not attributed to other etiologies (Bostrom and Freedman 2010). The MYH9 SNP and haplotype associations observed with these forms of ESKD yielded the largest odds ratios (OR) reported to date for the association of common variants with common disease risk (Winkler et al. 2010). Two specific MYH9 variants (rs5750250 of S-haplotype and rs11912763 of F-haplotype) were designated as most strongly predictive on the basis of Receiver Operating Characteristic analysis (Nelson et al. 2010). These MYH9 association studies were then also extended to earlier stage and related kidney disease phenotypes and to population groups with varying degrees of recent African ancestry admixture (Behar et al. 2010; Freedman et al. 2009a, b; Nelson et al. 2010), and led to the expectation of finding a functional African ancestry causative variant within MYH9. However, despite intensive efforts including re-sequencing of the MYH9 gene no suggested functional mutation has been identified (Nelson et al. 2010; Winkler et al. 2010). This led us to re-examine the interval surrounding MYH9 and to the detection of novel missense mutations with predicted functional effects in the neighboring APOL1 gene, which are significantly more associated with ESKD than all previously reported SNPs in MYH9.

Book
09 Dec 2010
TL;DR: The goal of this monograph is to survey the field of arithmetic circuit complexity, focusing mainly on what it finds to be the most interesting and accessible research directions, with an emphasis on works from the last two decades.
Abstract: A large class of problems in symbolic computation can be expressed as the task of computing some polynomials; and arithmetic circuits form the most standard model for studying the complexity of such computations. This algebraic model of computation attracted a large amount of research in the last five decades, partially due to its simplicity and elegance. Being a more structured model than Boolean circuits, one could hope that the fundamental problems of theoretical computer science, such as separating P from NP, will be easier to solve for arithmetic circuits. However, in spite of the appearing simplicity and the vast amount of mathematical tools available, no major breakthrough has been seen. In fact, all the fundamental questions are still open for this model as well. Nevertheless, there has been a lot of progress in the area and beautiful results have been found, some in the last few years. As examples we mention the connection between polynomial identity testing and lower bounds of Kabanets and Impagliazzo, the lower bounds of Raz for multilinear formulas, and two new approaches for proving lower bounds: Geometric Complexity Theory and Elusive Functions. The goal of this monograph is to survey the field of arithmetic circuit complexity, focusing mainly on what we find to be the most interesting and accessible research directions. We aim to cover the main results and techniques, with an emphasis on works from the last two decades. In particular, we discuss the recent lower bounds for multilinear circuits and formulas, the advances in the question of deterministically checking polynomial identities, and the results regarding reconstruction of arithmetic circuits. We do, however, also cover part of the classical works on arithmetic circuits. In order to keep this monograph at a reasonable length, we do not give full proofs of most theorems, but rather try to convey the main ideas behind each proof and demonstrate it, where possible, by proving some special cases.
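One fundamental question mentioned above, deterministically checking polynomial identities, contrasts with the classic randomized test (Schwartz-Zippel), which is easy to sketch. The code below treats the two polynomials as black-box callables; names and parameters are ours:

```python
import random

def probably_identical(p, q, n_vars, trials=20, field=2**61 - 1):
    """Randomized polynomial identity test: evaluate both black-box
    polynomials at uniformly random points modulo a large prime. For
    degree-d polynomials, a single trial misses a true difference with
    probability at most d/field, so the error drops exponentially in
    `trials`."""
    for _ in range(trials):
        point = [random.randrange(field) for _ in range(n_vars)]
        if p(*point) % field != q(*point) % field:
            return False      # found a witness: definitely different
    return True               # identical with high probability

# Example: (x + y)^2 == x^2 + 2xy + y^2
print(probably_identical(lambda x, y: (x + y) ** 2,
                         lambda x, y: x * x + 2 * x * y + y * y, n_vars=2))
```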