
Showing papers by "Technion – Israel Institute of Technology published in 2009"


Journal ArticleDOI
TL;DR: A new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically.
Abstract: We consider the class of iterative shrinkage-thresholding algorithms (ISTA) for solving linear inverse problems arising in signal/image processing. This class of methods, which can be viewed as an extension of the classical gradient algorithm, is attractive due to its simplicity and thus is adequate for solving large-scale problems even with dense matrix data. However, such methods are also known to converge quite slowly. In this paper we present a new fast iterative shrinkage-thresholding algorithm (FISTA) which preserves the computational simplicity of ISTA but with a global rate of convergence which is proven to be significantly better, both theoretically and practically. Initial promising numerical results for wavelet-based image deblurring demonstrate the capabilities of FISTA which is shown to be faster than ISTA by several orders of magnitude.
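
For readers who want the flavor of the method, here is a minimal numpy sketch of the FISTA iteration for the $\ell_1$-regularized least-squares instance (a proximal soft-thresholding step plus a momentum step). The function and variable names are ours, and the Lipschitz constant is computed crudely up front rather than by the backtracking rule the paper also allows.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of t * ||x||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 (a minimal sketch)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_next = soft_threshold(y - grad / L, lam / L)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)   # momentum step
        x, t = x_next, t_next
    return x
```

Dropping the momentum step (always taking y = x_next) recovers plain ISTA, which is the baseline the abstract compares against.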

11,413 citations


Journal ArticleDOI
TL;DR: GOrilla is a web-based application that identifies enriched GO terms in ranked lists of genes, without requiring the user to provide explicit target and background sets, and its unique features and advantages over other threshold free enrichment tools include rigorous statistics, fast running time and an effective graphical representation.
Abstract: Since the inception of the GO annotation project, a variety of tools have been developed that support exploring and searching the GO database. In particular, a variety of tools that perform GO enrichment analysis are currently available. Most of these tools require as input a target set of genes and a background set and seek enrichment in the target set compared to the background set. A few tools also exist that support analyzing ranked lists. The latter typically rely on simulations or on union-bound correction for assigning statistical significance to the results. GOrilla is a web-based application that identifies enriched GO terms in ranked lists of genes, without requiring the user to provide explicit target and background sets. This is particularly useful in many typical cases where genomic data may be naturally represented as a ranked list of genes (e.g. by level of expression or of differential expression). GOrilla employs a flexible threshold statistical approach to discover GO terms that are significantly enriched at the top of a ranked gene list. Building on a complete theoretical characterization of the underlying distribution, called mHG, GOrilla computes an exact p-value for the observed enrichment, taking threshold multiple testing into account without the need for simulations. This enables rigorous statistical analysis of thousands of genes and thousands of GO terms in a matter of seconds. The output of the enrichment analysis is visualized as a hierarchical structure, providing a clear view of the relations between enriched GO terms. GOrilla is an efficient GO analysis tool with unique features that make it a useful addition to the existing repertoire of GO enrichment tools. GOrilla's unique features and advantages over other threshold-free enrichment tools include rigorous statistics, fast running time and an effective graphical representation. GOrilla is publicly available at: http://cbl-gorilla.cs.technion.ac.il
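
As a rough illustration of the flexible-threshold idea, the sketch below computes the raw minimum-hypergeometric (mHG) statistic over all prefixes of a ranked 0/1 annotation vector. GOrilla's exact p-value, which corrects for scanning all thresholds, uses a dynamic program from the paper that is not reproduced here; all names in this sketch are ours.

```python
import numpy as np
from scipy.stats import hypergeom

def mhg_score(labels):
    """labels: 0/1 array over genes ranked best-first; 1 = gene carries the GO term.
    Returns the minimum hypergeometric tail probability over all prefixes."""
    N, B = len(labels), int(np.sum(labels))
    best, b = 1.0, 0
    for n in range(1, N):
        b += labels[n - 1]
        # P(X >= b) when drawing n genes from N of which B are annotated
        p = hypergeom.sf(b - 1, N, B, n)
        best = min(best, p)
    return best
```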

3,157 citations


Journal ArticleDOI
TL;DR: The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or its applications.
Abstract: A full-rank matrix ${\bf A}\in \mathbb{R}^{n\times m}$ with $n < m$ generates an underdetermined system of linear equations ${\bf A}{\bf x} = {\bf b}$ having infinitely many solutions.
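
One of the greedy pursuit techniques this line of work surveys for seeking sparse solutions of ${\bf A}{\bf x}={\bf b}$ can be sketched in a few lines. This is generic orthogonal matching pursuit under our own naming, not code from the paper.

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedy k-sparse approximation of Ax = b."""
    residual = b.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # least-squares refit on the current support
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x
```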

2,372 citations


Journal ArticleDOI
Lorenzo Galluzzi1, Lorenzo Galluzzi2, Lorenzo Galluzzi3, Stuart A. Aaronson4, John M. Abrams5, Emad S. Alnemri6, David W. Andrews7, Eric H. Baehrecke8, Nicolas G. Bazan9, Mikhail V. Blagosklonny10, Klas Blomgren11, Klas Blomgren12, Christoph Borner13, Dale E. Bredesen14, Dale E. Bredesen15, Catherine Brenner16, Maria Castedo1, Maria Castedo3, Maria Castedo2, John A. Cidlowski17, Aaron Ciechanover18, Gerald M. Cohen19, V De Laurenzi20, R De Maria21, Mohanish Deshmukh22, Brian David Dynlacht23, Wafik S. El-Deiry24, Richard A. Flavell25, Richard A. Flavell26, Simone Fulda27, Carmen Garrido28, Carmen Garrido2, Pierre Golstein29, Pierre Golstein16, Pierre Golstein2, Marie-Lise Gougeon30, Douglas R. Green, Hinrich Gronemeyer31, Hinrich Gronemeyer16, Hinrich Gronemeyer2, György Hajnóczky6, J. M. Hardwick32, Michael O. Hengartner33, Hidenori Ichijo34, Marja Jäättelä, Oliver Kepp1, Oliver Kepp2, Oliver Kepp3, Adi Kimchi35, Daniel J. Klionsky36, Richard A. Knight37, Sally Kornbluth38, Sharad Kumar, Beth Levine26, Beth Levine5, Stuart A. Lipton, Enrico Lugli17, Frank Madeo39, Walter Malorni21, Jean-Christophe Marine40, Seamus J. Martin41, Jan Paul Medema42, Patrick Mehlen43, Patrick Mehlen16, Gerry Melino19, Gerry Melino44, Ute M. Moll45, Ute M. Moll46, Eugenia Morselli1, Eugenia Morselli3, Eugenia Morselli2, Shigekazu Nagata47, Donald W. Nicholson48, Pierluigi Nicotera19, Gabriel Núñez36, Moshe Oren35, Josef M. Penninger49, Shazib Pervaiz50, Marcus E. Peter51, Mauro Piacentini44, Jochen H. M. Prehn52, Hamsa Puthalakath53, Gabriel A. Rabinovich54, Rosario Rizzuto55, Cecília M. P. Rodrigues56, David C. Rubinsztein57, Thomas Rudel58, Luca Scorrano59, Hans-Uwe Simon60, Hermann Steller26, Hermann Steller61, J. Tschopp62, Yoshihide Tsujimoto63, Peter Vandenabeele64, Ilio Vitale2, Ilio Vitale1, Ilio Vitale3, Karen H. Vousden65, Richard J. Youle17, Junying Yuan66, Boris Zhivotovsky67, Guido Kroemer3, Guido Kroemer2, Guido Kroemer1 
Institut Gustave Roussy1, French Institute of Health and Medical Research2, University of Paris-Sud3, Icahn School of Medicine at Mount Sinai4, University of Texas Southwestern Medical Center5, Thomas Jefferson University6, McMaster University7, University of Massachusetts Medical School8, LSU Health Sciences Center New Orleans9, Roswell Park Cancer Institute10, University of Gothenburg11, Boston Children's Hospital12, University of Freiburg13, University of California, San Francisco14, Buck Institute for Research on Aging15, Centre national de la recherche scientifique16, National Institutes of Health17, Technion – Israel Institute of Technology18, University of Leicester19, University of Chieti-Pescara20, Istituto Superiore di Sanità21, University of North Carolina at Chapel Hill22, New York University23, University of Pennsylvania24, Yale University25, Howard Hughes Medical Institute26, University of Ulm27, University of Burgundy28, Aix-Marseille University29, Pasteur Institute30, University of Strasbourg31, Johns Hopkins University32, University of Zurich33, University of Tokyo34, Weizmann Institute of Science35, University of Michigan36, University College London37, Duke University38, University of Graz39, Ghent University40, Trinity College, Dublin41, University of Amsterdam42, University of Lyon43, University of Rome Tor Vergata44, Stony Brook University45, University of Göttingen46, Kyoto University47, Merck & Co.48, Austrian Academy of Sciences49, National University of Singapore50, University of Chicago51, Royal College of Surgeons in Ireland52, La Trobe University53, University of Buenos Aires54, University of Padua55, University of Lisbon56, University of Cambridge57, University of Würzburg58, University of Geneva59, University of Bern60, Rockefeller University61, University of Lausanne62, Osaka University63, University of California, San Diego64, University of Glasgow65, Harvard University66, Karolinska Institutet67
TL;DR: A nonexhaustive comparison of methods to detect cell death with apoptotic or nonapoptotic morphologies, their advantages and pitfalls is provided and the importance of performing multiple, methodologically unrelated assays to quantify dying and dead cells is emphasized.
Abstract: Cell death is essential for a plethora of physiological processes, and its deregulation characterizes numerous human diseases. Thus, the in-depth investigation of cell death and its mechanisms constitutes a formidable challenge for fundamental and applied biomedical research, and has tremendous implications for the development of novel therapeutic strategies. It is, therefore, of utmost importance to standardize the experimental procedures that identify dying and dead cells in cell cultures and/or in tissues, from model organisms and/or humans, in healthy and/or pathological scenarios. Thus far, dozens of methods have been proposed to quantify cell death-related parameters. However, no guidelines exist regarding their use and interpretation, and nobody has thoroughly annotated the experimental settings for which each of these techniques is most appropriate. Here, we provide a nonexhaustive comparison of methods to detect cell death with apoptotic or nonapoptotic morphologies, their advantages and pitfalls. These guidelines are intended for investigators who study cell death, as well as for reviewers who need to constructively critique scientific reports that deal with cellular demise. Given the difficulties in determining the exact number of cells that have passed the point-of-no-return of the signaling cascades leading to cell death, we emphasize the importance of performing multiple, methodologically unrelated assays to quantify dying and dead cells.

2,218 citations


Journal ArticleDOI
TL;DR: A fast algorithm is derived for the constrained TV-based image deblurring problem with box constraints by combining an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA).
Abstract: This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deblurring problem. To achieve this task, we combine an acceleration of the well-known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient-projection-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.
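
The "monotone version" of FISTA mentioned above can be sketched as follows: the ordinary FISTA candidate is accepted only if it does not increase the objective F, and the momentum step then mixes the candidate with the accepted iterate. Here grad, prox, and F are user-supplied callables and the update rule follows our reading of the paper; treat it as a sketch to check against the original, not a definitive implementation.

```python
import numpy as np

def mfista(grad, prox, F, L, x0, n_iter=100):
    """Monotone FISTA sketch. grad: gradient of the smooth part; prox(v, step):
    proximal map of the nonsmooth part; F: full objective; L: Lipschitz constant."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        z = prox(y - grad(y) / L, 1.0 / L)        # ordinary FISTA candidate
        x_next = z if F(z) <= F(x) else x         # monotonicity safeguard
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        # momentum uses both the candidate z and the accepted iterate
        y = x_next + (t / t_next) * (z - x_next) + ((t - 1) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x
```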

1,981 citations


Journal ArticleDOI
TL;DR: Data collected demonstrate that there is a strong association between GBA mutations and Parkinson's disease; those with a GBA mutation presented earlier with the disease, were more likely to have affected relatives, and were more likely to have atypical clinical manifestations.
Abstract: Background Recent studies indicate an increased frequency of mutations in the gene encoding glucocerebrosidase (GBA), a deficiency of which causes Gaucher's disease, among patients with Parkinson's disease. We aimed to ascertain the frequency of GBA mutations in an ethnically diverse group of patients with Parkinson's disease. Methods Sixteen centers participated in our international, collaborative study: five from the Americas, six from Europe, two from Israel, and three from Asia. Each center genotyped a standard DNA panel to permit comparison of the genotyping results across centers. Genotypes and phenotypic data from a total of 5691 patients with Parkinson's disease (780 Ashkenazi Jews) and 4898 controls (387 Ashkenazi Jews) were analyzed, with multivariate logistic-regression models and the Mantel–Haenszel procedure used to estimate odds ratios across centers. Results All 16 centers could detect two GBA mutations, L444P and N370S. Among Ashkenazi Jewish subjects, either mutation was found in 15% of p...
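
For readers unfamiliar with the pooling step named in the Methods, a Mantel–Haenszel odds ratio across centers can be computed with statsmodels roughly as below. The per-center 2x2 counts here are invented purely for illustration and are not the study's data.

```python
import numpy as np
from statsmodels.stats.contingency_tables import StratifiedTable

# Hypothetical per-center 2x2 tables: rows = (mutation carrier, non-carrier),
# columns = (Parkinson's case, control). Counts are made up for illustration.
tables = [
    np.array([[30, 5], [470, 495]]),   # center 1
    np.array([[12, 2], [188, 198]]),   # center 2
]
st = StratifiedTable(tables)
print(st.oddsratio_pooled)   # Mantel-Haenszel pooled odds ratio across strata
print(st.test_null_odds())   # test of odds ratio = 1 across strata
```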

1,629 citations



Journal ArticleDOI
TL;DR: It is shown that an array of sensors based on gold nanoparticles can rapidly distinguish the breath of lung cancer patients from the breath of healthy individuals in an atmosphere of high humidity.
Abstract: Conventional diagnostic methods for lung cancer 1,2 are unsuitable for widespread screening 2,3 because they are expensive and occasionally miss tumours. Gas chromatography/mass spectrometry studies have shown that several volatile organic compounds, which normally appear at levels of 1–20 ppb in healthy human breath, are elevated to levels between 10 and 100 ppb in lung cancer patients 4–6 . Here we show that an array of sensors based on gold nanoparticles can rapidly distinguish the breath of lung cancer patients from the breath of healthy individuals in an atmosphere of high humidity. In combination with solid-phase microextraction 7 , gas chromatography/mass spectrometry was used to identify 42 volatile organic compounds that represent lung cancer biomarkers. Four of these were used to train and optimize the sensors, demonstrating good agreement between patient and simulated breath samples. Our results show that sensors based on gold nanoparticles could form the basis of an inexpensive and non-invasive diagnostic tool for lung cancer. Lung cancer accounts for 28% of cancer-related deaths.

1,088 citations


Journal ArticleDOI
TL;DR: This paper develops a general framework for robust and efficient recovery of nonlinear but structured signal models, in which x lies in a union of subspaces, and presents an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal.
Abstract: Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed $\ell_2/\ell_1$ program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
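
In symbols, with $\mathbf{x}[i]$ denoting the $i$-th block of $\mathbf{x}$, the mixed $\ell_2/\ell_1$ program has the form below (our transcription, noiseless case):

```latex
\min_{\mathbf{x}} \; \sum_{i=1}^{m} \bigl\| \mathbf{x}[i] \bigr\|_2
\quad \text{subject to} \quad \mathbf{y} = \mathbf{D}\mathbf{x}
```

The block-RIP condition correspondingly requires $(1-\delta)\|\mathbf{x}\|_2^2 \le \|\mathbf{D}\mathbf{x}\|_2^2 \le (1+\delta)\|\mathbf{x}\|_2^2$ to hold for all sufficiently block-sparse $\mathbf{x}$, rather than for all sparse $\mathbf{x}$ as in the standard RIP.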

966 citations


Book
12 Jun 2009
TL;DR: Information Theoretic Security surveys the research dating back to the 1970s which forms the basis of applying this technique in modern systems to achieve secrecy for a basic wire-tap channel model as well as for its extensions to multiuser networks.
Abstract: Security is one of the most important issues in communications. Security issues arising in communication networks include confidentiality, integrity, authentication and non-repudiation. Attacks on the security of communication networks can be divided into two basic types: passive attacks and active attacks. An active attack corresponds to the situation in which a malicious actor intentionally disrupts the system. A passive attack corresponds to the situation in which a malicious actor attempts to interpret source information without injecting or modifying any information; i.e., passive attackers listen to the transmission without modifying it. Information Theoretic Security focuses on confidentiality issues, in which passive attacks are of primary concern. The information theoretic approach to achieving secure communication opens a promising new direction toward solving wireless networking security problems. Compared to contemporary cryptosystems, information theoretic approaches offer advantages such as eliminating the key management issue; being less vulnerable to man-in-the-middle attacks; and achieving provable security that is robust to powerful eavesdroppers possessing unlimited computational resources, knowledge of the communication strategy employed, including coding and decoding algorithms, and access to communication systems either through perfect or noisy channels. Information Theoretic Security surveys the research dating back to the 1970s which forms the basis of applying this technique in modern systems. It proceeds to provide an overview of how information theoretic approaches are developed to achieve secrecy for a basic wire-tap channel model as well as for its extensions to multiuser networks. It is an invaluable resource for students and researchers working in network security, information theory and communications.
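
To fix ideas, for the basic degraded Gaussian wire-tap channel the secrecy capacity takes the classical form below (our transcription of the standard result, not a formula quoted from this book):

```latex
C_s \;=\; \Bigl[\tfrac{1}{2}\log_2\!\bigl(1+\mathrm{SNR}_B\bigr)
\;-\; \tfrac{1}{2}\log_2\!\bigl(1+\mathrm{SNR}_E\bigr)\Bigr]^{+}
```

where $\mathrm{SNR}_B$ and $\mathrm{SNR}_E$ are the legitimate receiver's and the eavesdropper's signal-to-noise ratios and $[x]^{+}=\max(x,0)$: positive secrecy rate is possible exactly when the legitimate channel is better than the eavesdropper's.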

877 citations


Journal ArticleDOI
TL;DR: This paper shows how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation, and results show that the proposed method is very successful in providing super- resolution on general sequences.
Abstract: Super-resolution reconstruction proposes a fusion of several low-quality images into one higher quality result with better optical resolution. Classic super-resolution techniques strongly rely on the availability of accurate motion estimation for this fusion task. When the motion is estimated inaccurately, as often happens for nonglobal motion fields, annoying artifacts appear in the super-resolved outcome. Encouraged by recent developments on the video denoising problem, where state-of-the-art algorithms are formed with no explicit motion estimation, we seek a super-resolution algorithm of similar nature that will allow processing sequences with general motion patterns. In this paper, we base our solution on the Nonlocal-Means (NLM) algorithm. We show how this denoising method is generalized to become a relatively simple super-resolution algorithm with no explicit motion estimation. Results on several test movies show that the proposed method is very successful in providing super-resolution on general sequences.
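
The Nonlocal-Means core that the paper generalizes can be sketched as follows: each pixel is replaced by a weighted average of other pixels, with weights decaying in the squared distance between the patches surrounding them. The flattened-patch representation and all names are ours; the paper extends this weighting across frames and scales to perform super-resolution.

```python
import numpy as np

def nlm_weights(patches, i, h):
    """NLM weights of all patches w.r.t. patch i (a sketch).
    patches: (num_patches, patch_size) array; h: filtering parameter."""
    d2 = np.sum((patches - patches[i]) ** 2, axis=1)   # squared patch distances
    w = np.exp(-d2 / (h * h))
    return w / w.sum()                                 # normalized weights

def nlm_estimate(centers, patches, i, h):
    """Denoised value at pixel i: weighted average of all patch-center values."""
    return nlm_weights(patches, i, h) @ centers
```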

Journal ArticleDOI
TL;DR: This paper describes how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples, and develops a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery.
Abstract: We address the problem of reconstructing a multiband signal from its sub-Nyquist pointwise samples, when the band locations are unknown. Our approach assumes an existing multi-coset sampling. To date, recovery methods for this sampling strategy ensure perfect reconstruction either when the band locations are known, or under strict restrictions on the possible spectral supports. In this paper, only the number of bands and their widths are assumed without any other limitations on the support. We describe how to choose the parameters of the multi-coset sampling so that a unique multiband signal matches the given samples. To recover the signal, the continuous reconstruction is replaced by a single finite-dimensional problem without the need for discretization. The resulting problem is studied within the framework of compressed sensing, and thus can be solved efficiently using known tractable algorithms from this emerging area. We also develop a theoretical lower bound on the average sampling rate required for blind signal reconstruction, which is twice the minimal rate of known-spectrum recovery. Our method ensures perfect reconstruction for a wide class of signals sampled at the minimal rate, and provides a first systematic study of compressed sensing in a truly analog setting. Numerical experiments are presented demonstrating blind sampling and reconstruction with minimal sampling rate.
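
In our notation, for a multiband signal with N bands each of width at most B, the paper's rate bound reads schematically as below; the exact statement and its conditions are in the paper.

```latex
f_{\text{blind}} \;\ge\; \min\{\, 2NB,\; f_{\mathrm{Nyq}} \,\}
```

That is, blind recovery requires twice the rate NB that suffices (in the Landau sense) when the spectral support is known, capped by the Nyquist rate.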

Journal ArticleDOI
TL;DR: Early treatment with rasagiline at a dose of 1 mg per day provided benefits that were consistent with a possible disease-modifying effect, but early treatment at a dose of 2 mg per day did not; because the two doses were associated with different outcomes, the study results must be interpreted with caution.
Abstract: In this double-blind trial, we examined the possibility that rasagiline has disease-modifying effects in Parkinson's disease. A total of 1176 subjects with untreated Parkinson's disease were randomly assigned to receive rasagiline (at a dose of either 1 mg or 2 mg per day) for 72 weeks (the early-start group) or placebo for 36 weeks followed by rasagiline (at a dose of either 1 mg or 2 mg per day) for 36 weeks (the delayed-start group). To determine a positive result with either dose, the early-start treatment group had to meet each of three hierarchical end points of the primary analysis based on the Unified Parkinson's Disease Rating Scale (UPDRS, a 176-point scale, with higher numbers indicating more severe disease): superiority to placebo in the rate of change in the UPDRS score between weeks 12 and 36, superiority to delayed-start treatment in the change in the score between baseline and week 72, and noninferiority to delayed-start treatment in the rate of change in the score between weeks 48 and 72. Results Early-start treatment with rasagiline at a dose of 1 mg per day met all end points in the primary analysis: a smaller mean (±SE) increase (rate of worsening) in the UPDRS score between weeks 12 and 36 (0.09±0.02 points per week in the early-start group vs. 0.14±0.01 points per week in the placebo group, P = 0.01), less worsening in the score between baseline and week 72 (2.82±0.53 points in the early-start group vs. 4.52±0.56 points in the delayed-start group, P = 0.02), and noninferiority between the two groups with respect to the rate of change in the UPDRS score between weeks 48 and 72 (0.085±0.02 points per week in the early-start group vs. 0.085±0.02 points per week in the delayed-start group, P<0.001). Not all three end points were met with rasagiline at a dose of 2 mg per day, since the change in the UPDRS score between baseline and week 72 was not significantly different in the two groups (3.47±0.50 points in the early-start group and 3.11±0.50 points in the delayed-start group, P = 0.60). Conclusions Early treatment with rasagiline at a dose of 1 mg per day provided benefits that were consistent with a possible disease-modifying effect, but early treatment with rasagiline at a dose of 2 mg per day did not. Because the two doses were associated with different outcomes, the study results must be interpreted with caution. (ClinicalTrials.gov number, NCT00256204.)

Journal ArticleDOI
TL;DR: Converging findings show that when people make decisions based on experience, rare events tend to have less impact than they deserve according to their objective probabilities.

Journal ArticleDOI
TL;DR: Mutations in SAMHD1 are described as the cause of Aicardi-Goutières syndrome at the AGS5 locus, and data are presented to show that SAMHD1 may act as a negative regulator of the cell-intrinsic antiviral response.
Abstract: Aicardi-Goutières syndrome is a mendelian mimic of congenital infection and also shows overlap with systemic lupus erythematosus at both a clinical and biochemical level. The recent identification of mutations in TREX1 and genes encoding the RNASEH2 complex, and studies of the function of TREX1 in DNA metabolism, have defined a previously unknown mechanism for the initiation of autoimmunity by interferon-stimulatory nucleic acid. Here we describe mutations in SAMHD1 as the cause of AGS at the AGS5 locus and present data to show that SAMHD1 may act as a negative regulator of the cell-intrinsic antiviral response.

Journal ArticleDOI
07 Aug 2009-Science
TL;DR: Both direct measurements and computer simulations showed that NMDA spikes are the dominant mechanism by which distal synaptic input leads to firing of the neuron and provide the substrate for complex parallel processing of top-down input arriving at the tuft.
Abstract: Tuft dendrites are the main target for feedback inputs innervating neocortical layer 5 pyramidal neurons, but their properties remain obscure. We report the existence of N-methyl-D-aspartate (NMDA) spikes in the fine distal tuft dendrites that otherwise did not support the initiation of calcium spikes. Both direct measurements and computer simulations showed that NMDA spikes are the dominant mechanism by which distal synaptic input leads to firing of the neuron and provide the substrate for complex parallel processing of top-down input arriving at the tuft. These data lead to a new unifying view of integration in pyramidal neurons in which all fine dendrites, basal and tuft, integrate inputs locally through the recruitment of NMDA receptor channels relative to the fixed apical calcium and axosomatic sodium integration points.

Journal ArticleDOI
TL;DR: It is established that T cells could independently express IL-22 even with low expression levels of IL-17, which argues for a functional specialization of T cells such that "T17" and "T22" T-cells may drive different features of epidermal pathology in inflammatory skin diseases.
Abstract: Background Psoriasis and atopic dermatitis (AD) are common inflammatory skin diseases. An upregulated TH17/IL-23 pathway was demonstrated in psoriasis. Although potential involvement of TH17 T cells in AD was suggested during acute disease, the role of these cells in chronic AD remains unclear. Objective To examine differences in the IL-23/TH17 signal between these diseases and establish relative frequencies of T-cell subsets in AD. Methods Skin biopsies and peripheral blood were collected from patients with chronic AD (n = 12) and psoriasis (n = 13). Relative frequencies of CD4+ and CD8+ T-cell subsets within these 2 compartments were examined by intracellular cytokine staining and flow cytometry. Results In peripheral blood, no significant difference was found in percentages of different T-cell subsets between these diseases. In contrast, psoriatic skin had significantly increased frequencies of TH1 and TH17 T cells compared with AD, whereas TH2 T cells were significantly elevated in AD. Distinct IL-22-producing CD4+ and CD8+ T-cell populations were significantly increased in AD skin compared with psoriasis. IL-22+CD8+ T-cell frequency correlated with AD disease severity. Conclusion Our data established that T cells could independently express IL-22 even with low expression levels of IL-17. This argues for a functional specialization of T cells such that "T17" and "T22" T cells may drive different features of epidermal pathology in inflammatory skin diseases, including induction of antimicrobial peptides for "T17" T cells and epidermal hyperplasia for "T22" T cells. Given the clinical correlation with disease severity, further characterization of "T22" T cells is warranted, and may have future therapeutic implications.

Journal ArticleDOI
TL;DR: This paper presents an alternative characterization of the secrecy capacity of the multiple-antenna wiretap channel under a more general matrix constraint on the channel input using a channel-enhancement argument.
Abstract: The secrecy capacity of the multiple-antenna wiretap channel under the average total power constraint was recently characterized, independently, by Khisti and Wornell and by Oggier and Hassibi using a Sato-like argument and matrix analysis tools. This paper presents an alternative characterization of the secrecy capacity of the multiple-antenna wiretap channel under a more general matrix constraint on the channel input using a channel-enhancement argument. This characterization is information-theoretic by nature and is directly built on the intuition regarding the optimal transmission strategy in this communication scenario.
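
The characterization under a matrix power constraint $\mathbf{S}$ can be stated as below (our transcription; $\mathbf{H}_r$ and $\mathbf{H}_e$ denote the legitimate receiver's and eavesdropper's channel matrices, and $\mathbf{Q}$ is the input covariance):

```latex
C_s(\mathbf{S}) \;=\; \max_{\mathbf{0} \,\preceq\, \mathbf{Q} \,\preceq\, \mathbf{S}}
\Bigl[ \log\det\!\bigl(\mathbf{I} + \mathbf{H}_r \mathbf{Q} \mathbf{H}_r^{\dagger}\bigr)
\;-\; \log\det\!\bigl(\mathbf{I} + \mathbf{H}_e \mathbf{Q} \mathbf{H}_e^{\dagger}\bigr) \Bigr]
```

The average total power constraint is recovered as a special case by optimizing this expression over all $\mathbf{S} \succeq \mathbf{0}$ with bounded trace.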

Journal ArticleDOI
TL;DR: It is shown that single-walled nanotubes form true thermodynamic solutions in superacids, and the full phase diagram is reported, allowing the rational design of fluid-phase assembly processes for bottom-up assembly of nanot tubes and nanorods into functional materials.
Abstract: Translating the unique characteristics of individual single-walled carbon nanotubes into macroscopic materials such as fibres and sheets has been hindered by ineffective assembly. Fluid-phase assembly is particularly attractive, but the ability to dissolve nanotubes in solvents has eluded researchers for over a decade. Here, we show that single-walled nanotubes form true thermodynamic solutions in superacids, and report the full phase diagram, allowing the rational design of fluid-phase assembly processes. Single-walled nanotubes dissolve spontaneously in chlorosulphonic acid at weight concentrations of up to 0.5 wt%, 1,000 times higher than previously reported in other acids. At higher concentrations, they form liquid-crystal phases that can be readily processed into fibres and sheets of controlled morphology. These results lay the foundation for bottom-up assembly of nanotubes and nanorods into functional materials.

Journal ArticleDOI
TL;DR: The background for the different methods, the use in basic pain experiments on healthy volunteers, how they can be applied in drug profiling, and the applications in clinical practice are described.

Journal ArticleDOI
TL;DR: Irrigation of soil-grown plants with nanoparticle suspensions had mostly insignificant inhibitory effects on long-term shoot production, and a possible developmental adaptation is suggested.
Abstract: A laboratory investigation was conducted to determine whether colloidal suspensions of inorganic nanoparticulate materials of natural or industrial origin in the external water supplied to the primary root of maize seedlings (Zea mays L.) could interfere with water transport and induce associated leaf responses. Water flow through excised roots was reduced, together with root hydraulic conductivity, within minutes of exposure to colloidal suspensions of naturally derived bentonite clay or industrially produced TiO2 nanoparticles. Similar nanoparticle additions to the hydroponic solution surrounding the primary root of intact seedlings rapidly inhibited leaf growth and transpiration. The reduced water availability caused by external nanoparticles and the associated leaf responses appeared to involve a rapid physical inhibition of apoplastic flow through nanosized root cell wall pores rather than toxic effects. Thus: (1) bentonite and TiO2 treatments also reduced the hydraulic conductivity of cell wall ghosts of killed roots left after hot alcohol disruption of the cell membranes; and (2) the average particle exclusion diameter of root cell wall pores was reduced from 6.6 to 3.0 nm by prior nanoparticle treatments. Irrigation of soil-grown plants with nanoparticle suspensions had mostly insignificant inhibitory effects on long-term shoot production, and a possible developmental adaptation is suggested.

Journal ArticleDOI
TL;DR: In this article, the authors apply a dialectic perspective on innovation to overcome limitations of dichotomous reasoning and to gain a more valid account of innovation, pointing out that individuals, teams and organizations need to self-regulate and manage conflicting demands of innovation and that multiple pathways can lead to idea generation and innovation.
Abstract: Innovation, the development and intentional introduction of new and useful ideas by individuals, teams, and organizations, lies at the heart of human adaptation. Decades of research in different disciplines and at different organizational levels have produced a wealth of knowledge about how innovation emerges and the factors that facilitate and inhibit innovation. We propose that this knowledge needs integration. In an initial step toward this goal, we apply a dialectic perspective on innovation to overcome limitations of dichotomous reasoning and to gain a more valid account of innovation. We point out that individuals, teams, and organizations need to self-regulate and manage conflicting demands of innovation and that multiple pathways can lead to idea generation and innovation. By scrutinizing the current use of the concept of organizational ambidexterity and extending it to individuals and teams, we develop a framework to help guide and facilitate future research and practice. Readers expecting specific and universal prescriptions of how to innovate will be disappointed as current research does not allow such inferences. Rather, we think innovation research should focus on developing and testing principles of innovation management in addition to developing decision aids for organizational practice. To this end, we put forward key propositions and action principles of innovation management.

Journal ArticleDOI
TL;DR: This work proposes a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts, which represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence.
Abstract: Adequate representation of natural language semantics requires access to vast amounts of common sense and domain-specific world knowledge. Prior work in the field was based on purely statistical techniques that did not make use of background knowledge, on limited lexicographic knowledge bases such as WordNet, or on huge manual efforts such as the CYC project. Here we propose a novel method, called Explicit Semantic Analysis (ESA), for fine-grained semantic interpretation of unrestricted natural language texts. Our method represents meaning in a high-dimensional space of concepts derived from Wikipedia, the largest encyclopedia in existence. We explicitly represent the meaning of any text in terms of Wikipedia-based concepts. We evaluate the effectiveness of our method on text categorization and on computing the degree of semantic relatedness between fragments of natural language text. Using ESA results in significant improvements over the previous state of the art in both tasks. Importantly, due to the use of natural concepts, the ESA model is easy to explain to human users.
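
A toy version of the ESA representation can be sketched with a generic TF-IDF pipeline: each text is mapped to its similarity against a set of "concept" documents standing in for Wikipedia articles, and relatedness is cosine similarity in that concept space. This is our own minimal stand-in, not the paper's inverted-index implementation, and the concept_docs corpus here is a placeholder for the full encyclopedia.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def esa_vectors(texts, concept_docs):
    """Map each text to a vector of similarities against concept documents.
    Vocabulary is taken from the concept corpus, as a crude approximation."""
    vec = TfidfVectorizer()
    C = vec.fit_transform(concept_docs)   # concepts x vocabulary
    T = vec.transform(texts)              # texts x vocabulary
    return (T @ C.T).toarray()            # texts x concepts

def relatedness(u, v):
    # cosine similarity between two texts in concept space
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```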

Journal ArticleDOI
TL;DR: This tutorial review aims to provide sufficient intuitive background to draw more researchers into the fundamental aspects of the device physics and engineering of semiconducting polymers and small molecules, which are used in a wide range of applications such as displays, radio-frequency tags, and solar cells.
Abstract: Semiconducting polymers and small molecules form an extremely flexible class of amorphous materials that can be used in a wide range of applications, such as displays, radio-frequency tags, and solar cells. The rapid progress towards functional devices is occurring despite the lack of sufficient understanding of the physical processes and very little experience in device engineering. This tutorial review aims to provide sufficient intuitive background to draw more researchers to look into the fundamental aspects of device physics and engineering.

Journal Article
TL;DR: This work considers regularized support vector machines and shows that they are precisely equivalent to a new robust optimization formulation, thus establishing robustness as the reason regularized SVMs generalize well and gives a new proof of consistency of (kernelized) SVMs.
Abstract: We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. We show that this equivalence of robust optimization and regularization has implications for both algorithms and analysis. In terms of algorithms, the equivalence suggests more general SVM-like algorithms for classification that explicitly build in protection to noise, and at the same time control overfitting. On the analysis front, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this new robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
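
Schematically, the equivalence says that the worst-case hinge loss under a suitably coupled perturbation budget c equals the hinge loss plus a norm penalty. The rough form below is our transcription; the paper specifies the exact uncertainty set and the dual-norm pairing between the perturbation norm and the regularizer.

```latex
\min_{\mathbf{w},b}\; \sup_{\sum_i \|\boldsymbol{\delta}_i\| \le c}\;
\sum_{i=1}^{n} \bigl[\, 1 - y_i \bigl(\langle \mathbf{w},\, \mathbf{x}_i - \boldsymbol{\delta}_i \rangle + b\bigr) \bigr]_{+}
\;=\;
\min_{\mathbf{w},b}\; c\,\|\mathbf{w}\|_{*} \;+\;
\sum_{i=1}^{n} \bigl[\, 1 - y_i \bigl(\langle \mathbf{w}, \mathbf{x}_i \rangle + b\bigr) \bigr]_{+}
```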

Journal ArticleDOI
TL;DR: This paper introduces a novel framework for adaptive enhancement and spatiotemporal upscaling of videos containing complex activities without explicit need for accurate motion estimation based on multidimensional kernel regression, which significantly widens the applicability of super-resolution methods to a broad variety of video sequences containing complex motions.
Abstract: The need for precise (subpixel accuracy) motion estimates in conventional super-resolution has limited its applicability to only video sequences with relatively simple motions such as global translational or affine displacements. In this paper, we introduce a novel framework for adaptive enhancement and spatiotemporal upscaling of videos containing complex activities without explicit need for accurate motion estimation. Our approach is based on multidimensional kernel regression, where each pixel in the video sequence is approximated with a 3-D local (Taylor) series, capturing the essential local behavior of its spatiotemporal neighborhood. The coefficients of this series are estimated by solving a local weighted least-squares problem, where the weights are a function of the 3-D space-time orientation in the neighborhood. As this framework is fundamentally based upon the comparison of neighboring pixels in both space and time, it implicitly contains information about the local motion of the pixels across time, therefore rendering unnecessary an explicit computation of motions of modest size. The proposed approach not only significantly widens the applicability of super-resolution methods to a broad variety of video sequences containing complex motions, but also yields improved overall performance. Using several examples, we illustrate that the developed algorithm has super-resolution capabilities that provide improved optical resolution in the output, while being able to work on general input video with essentially arbitrary motion.
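
The heart of the kernel-regression framework is a weighted local polynomial fit. The sketch below fits only a first-order 3-D Taylor series with an isotropic Gaussian kernel; the paper uses higher orders and data-adaptive (steering) weights, and all names here are ours.

```python
import numpy as np

def local_taylor_fit(coords, values, center, h):
    """Estimate the intensity at `center` from nearby space-time samples.
    coords: (n, 3) sample positions (x, y, t); values: (n,) intensities."""
    d = coords - center                                 # offsets from the estimation point
    w = np.exp(-np.sum(d ** 2, axis=1) / (2 * h * h))   # Gaussian kernel weights
    X = np.hstack([np.ones((len(d), 1)), d])            # regressors: [1, dx, dy, dt]
    sw = np.sqrt(w)
    # weighted least squares via the square-root-weight trick
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * values, rcond=None)
    return beta[0]                                      # intercept = pixel estimate
```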

Journal ArticleDOI
TL;DR: In this paper, the authors review the state of the art in the analysis of route choice behavior within the discrete choice modeling framework, and present research directions show growing interest in understanding the role of choice set size and composition on model estimation and flow prediction, while past research directions illustrate larger efforts toward the enhancement of stochastic route choice models rather than toward the development of realistic choice set generation methods.
Abstract: Modeling route choice behavior is problematic, but essential to appraise travelers' perceptions of route characteristics, to forecast travelers' behavior under hypothetical scenarios, to predict future traffic conditions on transportation networks and to understand travelers' reaction and adaptation to sources of information. This paper reviews the state of the art in the analysis of route choice behavior within the discrete choice modeling framework. The review covers both choice set generation and choice process, since present research directions show growing interest in understanding the role of choice set size and composition on model estimation and flow prediction, while past research directions illustrate larger efforts toward the enhancement of stochastic route choice models rather than toward the development of realistic choice set generation methods. This paper also envisions future research directions toward the improvement in amount and quality of collected data, the consideration of the latent nature of the set of alternatives, the definition of route relevance and choice set efficiency measures, the specification of models able to contextually account for taste heterogeneity and substitution patterns, and the adoption of random constraint approaches to represent jointly choice set formation and choice process.

Proceedings Article
19 Sep 2009
TL;DR: A new admissible heuristic called the landmark cut heuristic is introduced, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.
Abstract: Current heuristic estimators for classical domain-independent planning are usually based on one of four ideas: delete relaxations, critical paths, abstractions, and, most recently, landmarks. Previously, these different ideas for deriving heuristic functions were largely unconnected. We prove that admissible heuristics based on these ideas are in fact very closely related. Exploiting this relationship, we introduce a new admissible heuristic called the landmark cut heuristic, which compares favourably with the state of the art in terms of heuristic accuracy and overall performance.
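
LM-cut itself is intricate, but its basic ingredient, the h^max value of a delete-relaxed task, fits in a short fixpoint loop. The sketch below is that subroutine only, under our own naming and data layout, not the full landmark-cut computation (which repeatedly extracts cuts from the justification graph induced by these values).

```python
def h_max(facts, ops, init, goal):
    """h_max for a delete-free STRIPS task.
    ops: iterable of (pre, add, cost) with pre/add as sets of facts."""
    INF = float("inf")
    h = {f: (0 if f in init else INF) for f in facts}
    changed = True
    while changed:
        changed = False
        for pre, add, cost in ops:
            v = max((h[f] for f in pre), default=0)   # cost to enable the operator
            if v == INF:
                continue
            for f in add:
                if v + cost < h[f]:                   # relax each added fact
                    h[f] = v + cost
                    changed = True
    return max((h[f] for f in goal), default=0)
```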

Journal ArticleDOI
08 Apr 2009-Pain
TL;DR: DNIC is a ‘bottom-up’ activation of the pain-modulatory mechanism, as part of the descending endogenous analgesia (EA) system, and has been identified as an advanced psychophysical measure with high clinical relevance in the characterization of one’s capacity to modulate pain and, consequently, one’s susceptibility to acquiring pain disorders.
Abstract: The exploration of endogenous analgesia (EA) via descending pain-modulatory systems started about three decades ago. The generation of analgesia in the rat by periaqueductal grey (PAG) stimulation was the first evidence for the existence of endogenous analgesic capabilities as a normal function of the central nervous system, exerting both inhibitory and facilitatory effects (for review, see [5]). Consequent evidence demonstrated an important final common descending modulatory site in the brainstem, the rostral ventromedial medulla (RVM), which receives signals directly from the PAG, with both bearing opioid receptors. Subsequently, the RVM forwards signals downward to the spinal cord (for review, see [11]). This dorsolateral funiculus descending inhibitory pain pathway, consisting of serotonergic and noradrenergic neurons, is under ‘top-down’ cerebral control, mediating modulation of pain perception by emotional, motivational, and cognitive factors [5,11]. Further important evidence in this regard came in the late 1970s from Le Bars and his colleagues [21,22], who were the first to associate the effectiveness of the commonly known ‘pain-inhibits-pain’ counter-irritation phenomena with this EA mechanism. They reported that activity in the dorsal horn and trigeminal nuclei is inhibited by the application of noxious electrical stimuli to remote body areas in anaesthetized rats [21,22]. This phenomenon was termed ‘diffuse noxious inhibitory controls’ (DNICs). Both electrophysiological and anatomical data support the involvement of the subnucleus reticularis dorsalis (SRD) in the caudal medulla in spino-bulbo-spinal loops that are exclusively activated by neurons with a ‘whole-body receptive field’ [23]. Their descending projections pass through the dorsolateral funiculus and terminate in the dorsal horn at all levels of the spinal cord. Thus, DNIC is a ‘bottom-up’ activation of the pain-modulatory mechanism, as part of the descending EA system. In recent years, a DNIC-like effect, also commonly termed HNCS (heterotopic noxious conditioning stimulation), has been identified as an advanced psychophysical measure with high clinical relevance in the characterization of one’s capacity to modulate pain and consequently one’s susceptibility to acquire pain disorders.

Journal ArticleDOI
TL;DR: In this article, the authors characterized the cardiomyocyte differentiation potential of human induced pluripotent stem (hiPS) cells and studied the molecular, structural, and functional properties of the generated hiPS-derived Cardiomyocytes.
Abstract: Background— The ability to derive human induced pluripotent stem (hiPS) cell lines by reprogramming of adult fibroblasts with a set of transcription factors offers unique opportunities for basic and translational cardiovascular research. In the present study, we aimed to characterize the cardiomyocyte differentiation potential of hiPS cells and to study the molecular, structural, and functional properties of the generated hiPS-derived cardiomyocytes. Methods and Results— Cardiomyocyte differentiation of the hiPS cells was induced with the embryoid body differentiation system. Gene expression studies demonstrated that the cardiomyocyte differentiation process of the hiPS cells was characterized by an initial increase in mesoderm and cardiomesoderm markers, followed by expression of cardiac-specific transcription factors and finally by cardiac-specific structural genes. Cells in the contracting embryoid bodies were stained positively for cardiac troponin-I, sarcomeric α-actinin, and connexin-43. Reverse-tra...