Showing papers by "École Polytechnique Fédérale de Lausanne", published in 2017
••
TL;DR: The glmmTMB package fits many types of GLMMs and extensions, including models with continuously distributed responses, but here the authors focus on count responses; its ability to estimate the Conway-Maxwell-Poisson distribution parameterized by the mean is unique among packages that fit zero-inflated mixed models.
Abstract: Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmmTMB, and compare it to other R packages that fit zero-inflated mixed models. The glmmTMB package fits many types of GLMMs and extensions, including models with continuously distributed responses, but here we focus on count responses. glmmTMB is faster than glmmADMB, MCMCglmm, and brms, and more flexible than INLA and mgcv for zero-inflated modeling. One unique feature of glmmTMB (among packages that fit zero-inflated mixed models) is its ability to estimate the Conway-Maxwell-Poisson distribution parameterized by the mean. Overall, its most appealing features for new users may be the combination of speed, flexibility, and its interface’s similarity to lme4.
4,497 citations
••
University of Udine1, University of Lugano2, École Polytechnique Fédérale de Lausanne3, Leipzig University4, University of Paris5, University of North Texas6, Princeton University7, National Research Council8, International School for Advanced Studies9, Cornell University10, University of Lincoln11, University of Milan12, École Polytechnique13, International Centre for Theoretical Physics14, University of Paderborn15, University of Oxford16, Jožef Stefan Institute17, University of Padua18, Sapienza University of Rome19, Vietnam Academy of Science and Technology20, University of British Columbia21, University of Lorraine22, Centre national de la recherche scientifique23, University of Zurich24, École Normale Supérieure25, Université Paris-Saclay26, Wake Forest University27, Temple University28
TL;DR: Recent extensions and improvements are described, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
Abstract: Quantum ESPRESSO is an integrated suite of open-source computer codes for quantum simulations of materials using state-of-the-art electronic-structure techniques, based on density-functional theory, density-functional perturbation theory, and many-body perturbation theory, within the plane-wave pseudopotential and projector-augmented-wave approaches. Quantum ESPRESSO owes its popularity to the wide variety of properties and processes it can simulate, to its performance on an increasingly broad array of hardware architectures, and to a community of researchers that rely on its capabilities as a core open-source development platform to implement their ideas. In this paper we describe recent extensions and improvements, covering new methodologies and property calculators, improved parallelization, code modularization, and extended interoperability both within the distribution and with external software.
3,638 citations
••
TL;DR: In this article, the authors examine the methods used to synthesize transition metal dichalcogenides (TMDCs) and discuss their properties, with particular attention to their charge density wave, superconductive and topological phases, along with their applications in devices with enhanced mobility and the use of strain engineering to improve their properties.
Abstract: Graphene is very popular because of its many fascinating properties, but its lack of an electronic bandgap has stimulated the search for 2D materials with semiconducting character. Transition metal dichalcogenides (TMDCs), which are semiconductors of the type MX2, where M is a transition metal atom (such as Mo or W) and X is a chalcogen atom (such as S, Se or Te), provide a promising alternative. Because of its robustness, MoS2 is the most studied material in this family. TMDCs exhibit a unique combination of atomic-scale thickness, direct bandgap, strong spin–orbit coupling and favourable electronic and mechanical properties, which make them interesting for fundamental studies and for applications in high-end electronics, spintronics, optoelectronics, energy harvesting, flexible electronics, DNA sequencing and personalized medicine. In this Review, the methods used to synthesize TMDCs are examined and their properties are discussed, with particular attention to their charge density wave, superconductive and topological phases. The use of TMDCs in nanoelectronic devices is also explored, along with strategies to improve charge carrier mobility, high frequency operation and the use of strain engineering to tailor their properties. Two-dimensional transition metal dichalcogenides (TMDCs) exhibit attractive electronic and mechanical properties. In this Review, the charge density wave, superconductive and topological phases of TMDCs are discussed, along with their synthesis and applications in devices with enhanced mobility and with the use of strain engineering to improve their properties.
3,436 citations
••
TL;DR: Many scientific fields study data with underlying non-Euclidean structure; as discussed in this paper, such geometric data are often large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques.
Abstract: Many scientific fields study data with an underlying structure that is non-Euclidean. Some examples include social networks in computational social sciences, sensor networks in communications, functional networks in brain imaging, regulatory networks in genetics, and meshed surfaces in computer graphics. In many applications, such geometric data are large and complex (in the case of social networks, on the scale of billions) and are natural targets for machine-learning techniques. In particular, we would like to use deep neural networks, which have recently proven to be powerful tools for a broad range of problems from computer vision, natural-language processing, and audio analysis. However, these tools have been most successful on data with an underlying Euclidean or grid-like structure and in cases where the invariances of these structures are built into networks used to model them.
2,565 citations
••
21 Jul 2017. TL;DR: The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers and outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
Abstract: Given a state-of-the-art deep neural network classifier, we show the existence of a universal (image-agnostic) and very small perturbation vector that causes natural images to be misclassified with high probability. We propose a systematic algorithm for computing universal perturbations, and show that state-of-the-art deep neural networks are highly vulnerable to such perturbations, albeit being quasi-imperceptible to the human eye. We further empirically analyze these universal perturbations and show, in particular, that they generalize very well across neural networks. The surprising existence of universal perturbations reveals important geometric correlations among the high-dimensional decision boundary of classifiers. It further outlines potential security breaches with the existence of single directions in the input space that adversaries can possibly exploit to break a classifier on most natural images.
2,081 citations
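For an affine classifier the minimal misclassifying perturbation has a closed form, which is the building block the paper's algorithm accumulates (via DeepFool for deep networks) across images before projecting onto a small norm ball. Below is a minimal NumPy sketch of that single step; the `minimal_flip` helper and the toy numbers are illustrative assumptions, not the authors' code.

```python
import numpy as np

def minimal_flip(x, w, b, overshoot=1e-4):
    """Smallest L2 step that pushes x across the decision boundary of the
    affine binary classifier f(x) = w.x + b: r = -f(x) * w / ||w||^2,
    with a tiny overshoot so the sign actually changes."""
    f = w @ x + b
    return -(1 + overshoot) * f * w / (w @ w)

w = np.array([3.0, 4.0])   # ||w|| = 5
b = -1.0
x = np.array([2.0, 1.0])   # f(x) = 3*2 + 4*1 - 1 = 9, classified positive
r = minimal_flip(x, w, b)
# ||r|| is about |f(x)| / ||w|| = 9/5 = 1.8, and x + r is (just) misclassified
```

The universal-perturbation algorithm repeats such minimal steps over many images, adding each step to a running vector v and re-projecting v onto an epsilon-ball, which is why the resulting v stays quasi-imperceptible.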
••
TL;DR: In this paper, the authors propose a deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems, which combines multiresolution decomposition and residual learning to remove the artifacts of direct inversion while preserving image structure.
Abstract: In this paper, we propose a novel deep convolutional neural network (CNN)-based algorithm for solving ill-posed inverse problems. Regularized iterative algorithms have emerged as the standard approach to ill-posed inverse problems in the past few decades. These methods produce excellent results, but can be challenging to deploy in practice due to factors including the high computational cost of the forward and adjoint operators and the difficulty of hyperparameter selection. The starting point of this paper is the observation that unrolled iterative methods have the form of a CNN (filtering followed by pointwise non-linearity) when the normal operator ( $H^{*}H$ , where $H^{*}$ is the adjoint of the forward imaging operator, $H$ ) of the forward model is a convolution. Based on this observation, we propose using direct inversion followed by a CNN to solve normal-convolutional inverse problems. The direct inversion encapsulates the physical model of the system, but leads to artifacts when the problem is ill posed; the CNN combines multiresolution decomposition and residual learning in order to learn to remove these artifacts while preserving image structure. We demonstrate the performance of the proposed network in sparse-view reconstruction (down to 50 views) on parallel beam X-ray computed tomography in synthetic phantoms as well as in real experimental sinograms. The proposed network outperforms total variation-regularized iterative reconstruction for the more realistic phantoms and requires less than a second to reconstruct a $512\times 512$ image on the GPU.
1,757 citations
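The paper's starting observation, that the normal operator H*H of a convolutional forward model is itself a convolution (with transfer function |H(f)|^2), can be checked numerically. The sketch below assumes a 1-D circular convolution and is illustrative only, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
h = rng.normal(size=n)   # kernel of the convolutional forward operator H
x = rng.normal(size=n)   # test signal

H = np.fft.fft(h)        # transfer function of H (circular convolution)
apply_H     = lambda v: np.fft.ifft(H * np.fft.fft(v)).real           # H v
apply_H_adj = lambda v: np.fft.ifft(np.conj(H) * np.fft.fft(v)).real  # H* v

# Apply the normal operator two ways:
normal_via_ops   = apply_H_adj(apply_H(x))                            # H*(H x)
normal_as_filter = np.fft.ifft(np.abs(H) ** 2 * np.fft.fft(x)).real   # one filter
# The two agree: H*H is a single convolution, so unrolled iterations of
# "filter, then pointwise nonlinearity" have exactly the shape of a CNN.
```

This is the structural reason the authors can replace regularized iteration with direct inversion followed by a CNN.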
••
TL;DR: One-year stable perovskite devices are shown by engineering an ultra-stable 2D/3D (HOOC(CH2)4NH3)2PbI4/CH3NH3PbI3 perovskite junction, which will enable the timely commercialization of perovskite solar cells.
Abstract: Despite the impressive photovoltaic performances with power conversion efficiency beyond 22%, perovskite solar cells are poorly stable under operation, failing by far the market requirements. Various technological approaches have been proposed to overcome the instability problem, which, while delivering appreciable incremental improvements, are still far from a market-proof solution. Here we show one-year stable perovskite devices by engineering an ultra-stable 2D/3D (HOOC(CH2)4NH3)2PbI4/CH3NH3PbI3 perovskite junction. The 2D/3D forms an exceptional gradually-organized multi-dimensional interface that yields up to 12.9% efficiency in a carbon-based architecture, and 14.6% in standard mesoporous solar cells. To demonstrate the up-scale potential of our technology, we fabricate 10 × 10 cm2 solar modules by a fully printable industrial-scale process, delivering 11.2% efficiency stable for >10,000 h with zero loss in performance measured under controlled standard conditions. This innovative stable and low-cost architecture will enable the timely commercialization of perovskite solar cells. Up-scaling represents a key challenge for photovoltaics based on metal halide perovskites. Using a composite of 2D and 3D perovskites in combination with a printable carbon black/graphite counter electrode, Grancini et al. report 11.2% efficient modules stable over 10,000 hours.
1,531 citations
••
[...]
Broad Institute1, Massachusetts Institute of Technology2, Howard Hughes Medical Institute3, University of Cambridge4, European Bioinformatics Institute5, Wellcome Trust Sanger Institute6, Harvard University7, Weizmann Institute of Science8, University of Zurich9, Laboratory of Molecular Biology10, Utrecht University11, École Polytechnique Fédérale de Lausanne12, University of Pennsylvania13, Heidelberg University14, German Cancer Research Center15, Ludwig Maximilian University of Munich16, John Radcliffe Hospital17, Newcastle University18, Stanford University19, University of Oxford20, University of California, San Francisco21, Allen Institute for Brain Science22, Karolinska Institutet23, Royal Institute of Technology24, Icahn School of Medicine at Mount Sinai25, University of Cape Town26, University Medical Center Groningen27, Radboud University Nijmegen28, Kettering University29, University of Edinburgh30, Babraham Institute31, New York University32, Netherlands Cancer Institute33, Ragon Institute of MGH, MIT and Harvard34, University of Texas Health Science Center at Houston35, Technische Universität München36, Technical University of Denmark37, University of California, Berkeley38, King's College London39, California Institute of Technology40
TL;DR: An open comprehensive reference map of the molecular state of cells in healthy human tissues would propel the systematic study of physiological states, developmental trajectories, regulatory circuitry and interactions of cells, and also provide a framework for understanding cellular dysregulation in human disease.
Abstract: The recent advent of methods for high-throughput single-cell molecular profiling has catalyzed a growing sense in the scientific community that the time is ripe to complete the 150-year-old effort to identify all cell types in the human body. The Human Cell Atlas Project is an international collaborative effort that aims to define all human cell types in terms of distinctive molecular profiles (such as gene expression profiles) and to connect this information with classical cellular descriptions (such as location and morphology). An open comprehensive reference map of the molecular state of cells in healthy human tissues would propel the systematic study of physiological states, developmental trajectories, regulatory circuitry and interactions of cells, and also provide a framework for understanding cellular dysregulation in human disease. Here we describe the idea, its potential utility, early proofs-of-concept, and some design considerations for the Human Cell Atlas, including a commitment to open data, code, and community.
1,391 citations
••
University of Natural Resources and Life Sciences, Vienna1, Karlsruhe Institute of Technology2, École Polytechnique Fédérale de Lausanne3, Center for International Forestry Research4, University of Turin5, Czech University of Life Sciences Prague6, Academy of Sciences of the Czech Republic7, University of Naples Federico II8, Forestry Commission9, University of Bari10, University of Ljubljana11, Potsdam Institute for Climate Impact Research12
TL;DR: It is concluded that both ecosystems and society should be prepared for an increasingly disturbed future of forests, as warmer and drier conditions particularly facilitate fire, drought and insect disturbances, while warmer and wetter conditions increase disturbances from wind and pathogens.
Abstract: Forest disturbances are sensitive to climate. However, our understanding of disturbance dynamics in response to climatic changes remains incomplete, particularly regarding large-scale patterns, interaction effects and dampening feedbacks. Here we provide a global synthesis of climate change effects on important abiotic (fire, drought, wind, snow and ice) and biotic (insects and pathogens) disturbance agents. Warmer and drier conditions particularly facilitate fire, drought and insect disturbances, while warmer and wetter conditions increase disturbances from wind and pathogens. Widespread interactions between agents are likely to amplify disturbances, while indirect climate effects such as vegetation changes can dampen long-term disturbance sensitivities to climate. Future changes in disturbance are likely to be most pronounced in coniferous forests and the boreal biome. We conclude that both ecosystems and society should be prepared for an increasingly disturbed future of forests.
1,388 citations
••
TL;DR: Because photocurrents are near the theoretical maximum, the focus is on efforts to increase open-circuit voltage by means of improving charge-selective contacts and charge carrier lifetimes in perovskites via processes such as ion tailoring.
Abstract: The efficiencies of perovskite solar cells have gone from single digits to a certified 22.1% in a few years' time. At this stage of their development, the key issues concern how to achieve further improvements in efficiency and long-term stability. We review recent developments in the quest to improve the current state of the art. Because photocurrents are near the theoretical maximum, our focus is on efforts to increase open-circuit voltage by means of improving charge-selective contacts and charge carrier lifetimes in perovskites via processes such as ion tailoring. The challenges associated with long-term perovskite solar cell device stability include the role of testing protocols, ionic movement affecting performance metrics over extended periods of time, and determination of the best ways to counteract degradation mechanisms.
••
TL;DR: The InTBIR Participants and Investigators have provided informed consent for the study to take place in Poland.
Abstract: Additional co-authors: Endre Czeiter, Marek Czosnyka, Ramon Diaz-Arrastia, Jens P Dreier, Ann-Christine Duhaime, Ari Ercole, Thomas A van Essen, Valery L Feigin, Guoyi Gao, Joseph Giacino, Laura E Gonzalez-Lara, Russell L Gruen, Deepak Gupta, Jed A Hartings, Sean Hill, Ji-yao Jiang, Naomi Ketharanathan, Erwin J O Kompanje, Linda Lanyon, Steven Laureys, Fiona Lecky, Harvey Levin, Hester F Lingsma, Marc Maegele, Marek Majdan, Geoffrey Manley, Jill Marsteller, Luciana Mascia, Charles McFadyen, Stefania Mondello, Virginia Newcombe, Aarno Palotie, Paul M Parizel, Wilco Peul, James Piercy, Suzanne Polinder, Louis Puybasset, Todd E Rasmussen, Rolf Rossaint, Peter Smielewski, Jeannette Soderberg, Simon J Stanworth, Murray B Stein, Nicole von Steinbuchel, William Stewart, Ewout W Steyerberg, Nino Stocchetti, Anneliese Synnot, Braden Te Ao, Olli Tenovuo, Alice Theadom, Dick Tibboel, Walter Videtta, Kevin K W Wang, W Huw Williams, Kristine Yaffe for the InTBIR Participants and Investigators
••
ETH Zurich1, University of California, Merced2, University of Hong Kong3, Seoul National University4, The Chinese University of Hong Kong5, Chinese Academy of Sciences6, KAIST7, University of Illinois at Urbana–Champaign8, Harbin Institute of Technology9, Xiamen University10, Peking University11, University of Missouri12, University of Sydney13, Beijing University of Posts and Telecommunications14, Shandong University15, Australian National University16, Sejong University17, Pennsylvania State University18, Tampere University of Technology19, Indian Institute of Technology Kharagpur20, École Polytechnique Fédérale de Lausanne21, University of Electronic Science and Technology of China22
TL;DR: This paper reviews the first challenge on single-image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results, and gauges the state-of-the-art in single-image super-resolution.
Abstract: This paper reviews the first challenge on single-image super-resolution (restoration of rich details in a low-resolution image) with a focus on the proposed solutions and results. A new DIVerse 2K resolution image dataset (DIV2K) was employed. The challenge had 6 competitions divided into 2 tracks with 3 magnification factors each. Track 1 employed the standard bicubic downscaling setup, while Track 2 had unknown downscaling operators (blur kernel and decimation) that were learnable from low- and high-resolution training images. Each competition had ~100 registered participants and 20 teams competed in the final testing phase. The results gauge the state-of-the-art in single-image super-resolution.
••
TL;DR: Thin CuSCN films can replace the organic hole-transporting layers that limit the thermal stability of devices; the authors demonstrate PSCs that achieve stabilized efficiencies exceeding 20% with copper(I) thiocyanate (CuSCN) as the hole extraction layer.
Abstract: Perovskite solar cells (PSCs) with efficiencies greater than 20% have been realized only with expensive organic hole-transporting materials. We demonstrate PSCs that achieve stabilized efficiencies exceeding 20% with copper(I) thiocyanate (CuSCN) as the hole extraction layer. A fast solvent removal method enabled the creation of compact, highly conformal CuSCN layers that facilitate rapid carrier extraction and collection. The PSCs showed high thermal stability under long-term heating, although their operational stability was poor. This instability originated from potential-induced degradation of the CuSCN/Au contact. The addition of a conductive reduced graphene oxide spacer layer between CuSCN and gold allowed PSCs to retain >95% of their initial efficiency after aging at a maximum power point for 1000 hours under full solar intensity at 60°C. Under both continuous full-sun illumination and thermal stress, CuSCN-based devices surpassed the stability of spiro-OMeTAD–based PSCs.
••
Michael R. Blanton1, Matthew A. Bershady2, Bela Abolfathi3, Franco D. Albareti4 +412 more • Institutions (91)
TL;DR: SDSS-IV as described in this article is a project encompassing three major spectroscopic programs: the Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2), the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, and the Extended Baryon Oscillation Spectroscopic Survey (eBOSS), which includes the Time Domain Spectroscopic Survey (TDSS) as a subprogram.
Abstract: We describe the Sloan Digital Sky Survey IV (SDSS-IV), a project encompassing three major spectroscopic programs. The Apache Point Observatory Galactic Evolution Experiment 2 (APOGEE-2) is observing hundreds of thousands of Milky Way stars at high resolution and high signal-to-noise ratios in the near-infrared. The Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey is obtaining spatially resolved spectroscopy for thousands of nearby galaxies (median $z\sim 0.03$). The extended Baryon Oscillation Spectroscopic Survey (eBOSS) is mapping the galaxy, quasar, and neutral gas distributions between $z\sim 0.6$ and 3.5 to constrain cosmology using baryon acoustic oscillations, redshift space distortions, and the shape of the power spectrum. Within eBOSS, we are conducting two major subprograms: the SPectroscopic IDentification of eROSITA Sources (SPIDERS), investigating X-ray AGNs and galaxies in X-ray clusters, and the Time Domain Spectroscopic Survey (TDSS), obtaining spectra of variable sources. All programs use the 2.5 m Sloan Foundation Telescope at the Apache Point Observatory; observations there began in Summer 2014. APOGEE-2 also operates a second near-infrared spectrograph at the 2.5 m du Pont Telescope at Las Campanas Observatory, with observations beginning in early 2017. Observations at both facilities are scheduled to continue through 2020. In keeping with previous SDSS policy, SDSS-IV provides regularly scheduled public data releases; the first one, Data Release 13, was made available in 2016 July.
••
TL;DR: The extrinsic regulation of angiogenesis by the tumour microenvironment is discussed, highlighting potential vulnerabilities that could be targeted to improve the applicability and reach of anti-angiogenic cancer therapies.
Abstract: Tumours display considerable variation in the patterning and properties of angiogenic blood vessels, as well as in their responses to anti-angiogenic therapy. Angiogenic programming of neoplastic tissue is a multidimensional process regulated by cancer cells in concert with a variety of tumour-associated stromal cells and their bioactive products, which encompass cytokines and growth factors, the extracellular matrix and secreted microvesicles. In this Review, we discuss the extrinsic regulation of angiogenesis by the tumour microenvironment, highlighting potential vulnerabilities that could be targeted to improve the applicability and reach of anti-angiogenic cancer therapies.
••
TL;DR: This review aims to provide a comprehensive description of the dFC approaches proposed so far, and points to the directions that the authors see as most promising for future developments of the field.
••
German Cancer Research Center1, Université de Sherbrooke2, University Health Network3, University of Pittsburgh4, IMT Institute for Advanced Studies Lucca5, St. Jude Children's Research Hospital6, University of Toronto7, Zhejiang University of Technology8, Harvard University9, Utrecht University10, Université de Montréal11, National Research Council12, University of Washington13, University of Western Ontario14, École Polytechnique Fédérale de Lausanne15, ETSI16, Siemens17, University of Southern California18, King's College London19, University of Bordeaux20, Centre national de la recherche scientifique21, Copenhagen University Hospital22, University of Hamburg23, University of Basel24
TL;DR: The encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground-truth bundles (to at least some extent) is reported; however, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups.
Abstract: Tractography based on non-invasive diffusion imaging is central to the study of human brain connectivity. To date, the approach has not been systematically validated in ground truth studies. Based on a simulated human brain data set with ground truth tracts, we organized an open international tractography challenge, which resulted in 96 distinct submissions from 20 research groups. Here, we report the encouraging finding that most state-of-the-art algorithms produce tractograms containing 90% of the ground truth bundles (to at least some extent). However, the same tractograms contain many more invalid than valid bundles, and half of these invalid bundles occur systematically across research groups. Taken together, our results demonstrate and confirm fundamental ambiguities inherent in tract reconstruction based on orientation information alone, which need to be considered when interpreting tractography and connectivity results. Our approach provides a novel framework for estimating reliability of tractography and encourages innovation to address its current limitations.
•
01 May 2017. TL;DR: It is argued that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient.
Abstract: The ability to understand and manage social signals of a person we are communicating with is the core of social intelligence. Social intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for success in life. This paper argues that next-generation computing needs to include the essence of social intelligence - the ability to recognize human social signals and social behaviours like turn taking, politeness, and disagreement - in order to become more effective and more efficient. Although each one of us understands the importance of social signals in everyday life situations, and in spite of recent advances in machine analysis of relevant behavioural cues like blinks, smiles, crossed arms, laughter, and similar, design and development of automated systems for social signal processing (SSP) are rather difficult. This paper surveys the past efforts in solving these problems by a computer, it summarizes the relevant findings in social psychology, and it proposes a set of recommendations for enabling the development of the next generation of socially aware computing.
••
TL;DR: Perovskite solar cells (PSCs) have attracted much attention because of their rapid rise to 22% efficiency; as discussed by the authors, they are entering a new phase that could revolutionize the photovoltaic industry.
Abstract: Perovskite solar cells (PSCs) have attracted much attention because of their rapid rise to 22% efficiencies. Here, we review the rapid evolution of PSCs as they enter a new phase that could revolutionize the photovoltaic industry. In particular, we describe the properties that make perovskites so remarkable, and the current understanding of the PSC device physics, including the operation of state-of-the-art solar cells with efficiencies above 20%. The extraordinary progress of long-term stability is discussed and we provide an outlook on what the future of PSCs might soon bring the photovoltaic community. Some challenges remain in terms of reducing non-radiative recombination and increasing conductivity of the different device layers, and these will be discussed in depth in this review.
••
TL;DR: This work exploits the scalability of microresonator-based DKS frequency comb sources for massively parallel optical communications at both the transmitter and the receiver, and demonstrates the potential of these sources to replace the arrays of continuous-wave lasers that are currently used in high-speed communications.
Abstract: Solitons are waveforms that preserve their shape while propagating, as a result of a balance of dispersion and nonlinearity. Soliton-based data transmission schemes were investigated in the 1980s and showed promise as a way of overcoming the limitations imposed by dispersion of optical fibres. However, these approaches were later abandoned in favour of wavelength-division multiplexing schemes, which are easier to implement and offer improved scalability to higher data rates. Here we show that solitons could make a comeback in optical communications, not as a competitor but as a key element of massively parallel wavelength-division multiplexing. Instead of encoding data on the soliton pulse train itself, we use continuous-wave tones of the associated frequency comb as carriers for communication. Dissipative Kerr solitons (DKSs) (solitons that rely on a double balance of parametric gain and cavity loss, as well as dispersion and nonlinearity) are generated as continuously circulating pulses in an integrated silicon nitride microresonator via four-photon interactions mediated by the Kerr nonlinearity, leading to low-noise, spectrally smooth, broadband optical frequency combs. We use two interleaved DKS frequency combs to transmit a data stream of more than 50 terabits per second on 179 individual optical carriers that span the entire telecommunication C and L bands (centred around infrared telecommunication wavelengths of 1.55 micrometres). We also demonstrate coherent detection of a wavelength-division multiplexing data stream by using a pair of DKS frequency combs-one as a multi-wavelength light source at the transmitter and the other as the corresponding local oscillator at the receiver. This approach exploits the scalability of microresonator-based DKS frequency comb sources for massively parallel optical communications at both the transmitter and the receiver. 
Our results demonstrate the potential of these sources to replace the arrays of continuous-wave lasers that are currently used in high-speed communications. In combination with advanced spatial multiplexing schemes and highly integrated silicon photonic circuits, DKS frequency combs could bring chip-scale petabit-per-second transceivers into reach.
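As a quick sanity check on the headline figures (a back-of-envelope calculation of ours, not a number stated in the paper): spreading more than 50 terabits per second over 179 optical carriers implies an average per-carrier rate of roughly

```python
# Back-of-envelope per-carrier rate implied by the abstract's figures.
total_rate = 50e12      # bit/s: "more than 50 terabits per second" (a lower bound)
carriers = 179          # individual optical carriers across the C and L bands
per_carrier = total_rate / carriers
print(round(per_carrier / 1e9))  # ≈ 279 Gbit/s per carrier, on average
```

i.e. each comb line carries on the order of a few hundred gigabits per second, consistent with modern coherent transceiver rates.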
••
04 Dec 2017
TL;DR: Krum, an aggregation rule that satisfies a resilience property capturing the basic requirements for guaranteeing convergence despite f Byzantine workers, is proposed and argued to be the first provably Byzantine-resilient algorithm for distributed SGD.
Abstract: We study the resilience to Byzantine failures of distributed implementations of Stochastic Gradient Descent (SGD). So far, distributed machine learning frameworks have largely ignored the possibility of failures, especially arbitrary (i.e., Byzantine) ones. Causes of failures include software bugs, network asynchrony, biases in local datasets, as well as attackers trying to compromise the entire system. Assuming a set of n workers, up to f of them Byzantine, we ask how resilient SGD can be without limiting the dimension or the size of the parameter space. We first show that no gradient aggregation rule based on a linear combination of the vectors proposed by the workers (i.e., current approaches) tolerates a single Byzantine failure. We then formulate a resilience property of the aggregation rule capturing the basic requirements to guarantee convergence despite f Byzantine workers. We propose Krum, an aggregation rule that satisfies our resilience property, which we argue is the first provably Byzantine-resilient algorithm for distributed SGD. We also report on experimental evaluations of Krum.
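The selection rule can be sketched in a few lines (a minimal NumPy illustration of the basic single-vector variant: each worker's gradient is scored by the summed squared distances to its n − f − 2 closest peers, and the lowest-scoring gradient is selected; the function name and implementation details are ours, not the paper's):

```python
import numpy as np

def krum(gradients, f):
    """Sketch of the Krum aggregation rule: score each worker's gradient
    by the sum of squared distances to its n - f - 2 nearest peers, then
    return the gradient with the lowest score."""
    n = len(gradients)
    assert n > 2 * f + 2, "Krum requires n > 2f + 2"
    g = np.stack(gradients)
    # pairwise squared Euclidean distances between all proposed gradients
    d = ((g[:, None, :] - g[None, :, :]) ** 2).sum(axis=-1)
    scores = []
    for i in range(n):
        others = np.delete(d[i], i)             # distances to the n-1 other workers
        closest = np.sort(others)[: n - f - 2]  # its n - f - 2 nearest peers
        scores.append(closest.sum())
    return g[int(np.argmin(scores))]
```

With, say, n = 6 workers of which f = 1 is Byzantine and submits an arbitrary vector, the rule discards the outlier: its summed distance to its closest peers dominates its score, so an honest gradient is selected.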
••
TL;DR: A critical review of the business model literature is offered in this article, with the goal of organizing the literature and achieving greater understanding of the larger picture in this increasingly important research area.
Abstract: Ever since the Internet boom of the mid-1990s, firms have been experimenting with new ways of doing business and achieving their goals, which has led to a branching of the scholarly literature on business models. Three interpretations of the meaning and function of “business models” have emerged from the management literature: (1) business models as attributes of real firms, (2) business models as cognitive/linguistic schemas, and (3) business models as formal conceptual representations of how a business functions. Relatedly, a provocative debate about the relationship between business models and strategy has fascinated many scholars. We offer a critical review of this now vast business model literature with the goal of organizing the literature and achieving greater understanding of the larger picture in this increasingly important research area. In addition to complementing and extending prior reviews, we also aim at a second and more important contribution: We aim at identifying the reasons behind the apparent lack of agreement in the interpretation of business models, and the relationship between business models and strategy. Whether strategy scholars consider business model research a new field may be due to the fact that the business model perspective may be challenging the assumptions of traditional theories of value creation and capture by focusing on value creation on the demand side and supply side, rather than focusing on value creation on the supply side only as these theories have done. We conclude by discussing how the business model perspective can contribute to research in different fields, offering future research directions.
••
TL;DR: In this article, a dye-sensitized solar cell (DSC) that achieves very high power-conversion efficiencies (PCEs) under ambient light conditions is presented.
Abstract: Solar cells that operate efficiently under indoor lighting are of great practical interest as they can serve as electric power sources for portable electronics and devices for wireless sensor networks or the Internet of Things. Here, we demonstrate a dye-sensitized solar cell (DSC) that achieves very high power-conversion efficiencies (PCEs) under ambient light conditions. Our photosystem combines two judiciously designed sensitizers, coded D35 and XY1, with the copper complex Cu(II/I)(tmby) as a redox shuttle (tmby, 4,4′,6,6′-tetramethyl-2,2′-bipyridine), and features a high open-circuit photovoltage of 1.1 V. The DSC achieves an external quantum efficiency for photocurrent generation that exceeds 90% across the whole visible domain from 400 to 650 nm, and achieves power outputs of 15.6 and 88.5 μW cm–2 at 200 and 1,000 lux, respectively, under illumination from a model Osram 930 warm-white fluorescent light tube. This translates into a PCE of 28.9%. A dye-sensitized solar cell that has been designed for efficient operation under indoor lighting could offer a convenient means for powering the Internet of Things.
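The efficiency figure follows from the ratio of electrical output to incident optical power. Taking the paper's numbers at 1,000 lux, and assuming (as the abstract's phrasing suggests) that the 28.9% PCE refers to that condition, the implied incident irradiance can be back-calculated (this inference is ours; the paper does not state it in the abstract):

```python
# Back-of-envelope check of the reported figures. p_out and pce are taken
# from the abstract; the incident irradiance p_in is inferred, not stated.
p_out = 88.5e-6              # W cm^-2 electrical output at 1,000 lux
pce = 0.289                  # reported power-conversion efficiency
p_in = p_out / pce           # implied incident optical power density
print(round(p_in * 1e6, 1))  # ≈ 306.2 uW cm^-2, i.e. roughly 3 W m^-2
```

That is, the warm-white fluorescent tube delivers on the order of 3 W m⁻² of optical power at 1,000 lux, far below the 1,000 W m⁻² of standard one-sun conditions, which is why indoor PCEs can greatly exceed outdoor ones.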
••
TL;DR: This paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
Abstract: Historically, centrally computed algorithms have been the primary means of power system optimization and control. With increasing penetrations of distributed energy resources requiring optimization and control of power systems with many controllable devices, distributed algorithms have been the subject of significant research interest. This paper surveys the literature of distributed algorithms with applications to optimization and control of power systems. In particular, this paper reviews distributed algorithms for offline solution of optimal power flow (OPF) problems as well as online algorithms for real-time solution of OPF, optimal frequency control, optimal voltage control, and optimal wide-area control problems.
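The flavor of the distributed schemes such surveys cover can be conveyed with a toy example (a generic consensus-ADMM sketch of ours, not any specific algorithm from the paper): several areas, each holding a private cost, agree on a shared setpoint by alternating purely local minimizations with a lightweight averaging step, so no area needs to reveal its cost function to a central coordinator.

```python
import numpy as np

# Toy consensus problem: each "area" i holds a private quadratic cost
# f_i(x) = (x - a_i)^2 and all areas must agree on one scalar setpoint x.
# Consensus ADMM alternates local minimisation, averaging, and dual updates.
def consensus_admm(a, rho=1.0, iters=200):
    n = len(a)
    x = np.zeros(n)   # local copies of the shared variable
    u = np.zeros(n)   # scaled dual variables
    z = 0.0           # consensus variable
    for _ in range(iters):
        # local step: argmin_x (x - a_i)^2 + (rho/2) * (x - z + u_i)^2
        x = (2 * a + rho * (z - u)) / (2 + rho)
        z = np.mean(x + u)   # coordination (consensus/averaging) step
        u = u + x - z        # dual update, kept local to each area
    return z
```

For these quadratic costs the consensus solution is the mean of the a_i, which the iteration recovers while each area only ever exchanges its current iterate with the coordinator.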
••
02 Jan 2017
TL;DR: The relevant virtues and limitations of these devices are assessed in terms of properties such as conductance dynamic range, (non)linearity and (a)symmetry of conductance response, retention, endurance, required switching power, and device variability.
Abstract: Dense crossbar arrays of non-volatile memory (NVM) devices represent one possible path for implementing massively-parallel and highly energy-efficient neuromorphic computing systems. We first review...
••
TL;DR: This Review discusses how neurofeedback is being used in novel experimental and clinical paradigms from a multidisciplinary perspective, encompassing neuroscientific, neuroengineering and learning-science viewpoints.
Abstract: Neurofeedback is a psychophysiological procedure in which online feedback of neural activation is provided to the participant for the purpose of self-regulation. Learning control over specific neural substrates has been shown to change specific behaviours. As a progenitor of brain-machine interfaces, neurofeedback has provided a novel way to investigate brain function and neuroplasticity. In this Review, we examine the mechanisms underlying neurofeedback, which have started to be uncovered. We also discuss how neurofeedback is being used in novel experimental and clinical paradigms from a multidisciplinary perspective, encompassing neuroscientific, neuroengineering and learning-science viewpoints.
••
TL;DR: The data indicate that an increase in the abundance of a pro-inflammatory GMB taxon, Escherichia/Shigella, and a reduction in the abundance of an anti-inflammatory taxon, E. rectale, are possibly associated with a peripheral inflammatory state in patients with cognitive impairment and brain amyloidosis.
••
University of Göttingen1, City College of New York2, University of São Paulo3, University of Toronto4, University of Erlangen-Nuremberg5, Aalborg University6, Greifswald University Hospital7, Spaulding Rehabilitation Hospital8, Medical University of South Carolina9, University of Pennsylvania10, Technische Universität Ilmenau11, University of Oldenburg12, École Polytechnique Fédérale de Lausanne13, Paris 12 Val de Marne University14, University of New South Wales15, University of Aberdeen16, University of Trento17, University of Lisbon18, University of Kiel19, Ruhr University Bochum20, Technical University of Dortmund21, Ludwig Maximilian University of Munich22, Beth Israel Deaconess Medical Center23, Mannheim University of Applied Sciences24, University of Siena25, The Catholic University of America26, University College London27, University of Copenhagen28, Fukushima Medical University29, Massachusetts Institute of Technology30, University of Tübingen31
TL;DR: Structured guidelines are provided and their use recommended in future controlled studies, in particular when trying to extend the parameters applied; recent regulatory issues, reporting practices and ethical issues are also discussed.
••
TL;DR: The generation of polymer brushes by surface-initiated controlled radical polymerization (SI-CRP) techniques has become a powerful approach to tailor the chemical and physical properties of interfaces and has given rise to great advances in surface and interface engineering as mentioned in this paper.
Abstract: The generation of polymer brushes by surface-initiated controlled radical polymerization (SI-CRP) techniques has become a powerful approach to tailor the chemical and physical properties of interfaces and has given rise to great advances in surface and interface engineering. Polymer brushes are defined as thin polymer films in which the individual polymer chains are tethered by one chain end to a solid interface. Significant advances have been made over the past years in the field of polymer brushes. This includes novel developments in SI-CRP, as well as the emergence of novel applications such as catalysis, electronics, nanomaterial synthesis and biosensing. Additionally, polymer brushes prepared via SI-CRP have been utilized to modify the surface of novel substrates such as natural fibers, polymer nanofibers, mesoporous materials, graphene, viruses and protein nanoparticles. The last years have also seen exciting advances in the chemical and physical characterization of polymer brushes, as well as an ev...