Showing papers by "Linköping University" published in 2016
••
TL;DR: In this paper, the authors present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes.
Abstract: In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes.
For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy.
Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to target by gene knockout or RNA interference more than one autophagy-related protein. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways implying that not all Atg proteins can be used as a specific marker for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular assays, we hope to encourage technical innovation in the field.
5,187 citations
••
TL;DR: It is found that the most common software packages for fMRI analysis (SPM, FSL, AFNI) can result in false-positive rates of up to 70%.
Abstract: The most widely used task functional magnetic resonance imaging (fMRI) analyses use parametric statistical methods that depend on a variety of assumptions. In this work, we use real resting-state data and a total of 3 million random task group analyses to compute empirical familywise error rates for the fMRI software packages SPM, FSL, and AFNI, as well as a nonparametric permutation method. For a nominal familywise error rate of 5%, the parametric statistical methods are shown to be conservative for voxelwise inference and invalid for clusterwise inference. Our results suggest that the principal cause of the invalid cluster inferences is spatial autocorrelation functions that do not follow the assumed Gaussian shape. By comparison, the nonparametric permutation test is found to produce nominal results for voxelwise as well as clusterwise inference. These findings speak to the need for validating the statistical methods being used in the field of neuroimaging.
2,946 citations
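The nonparametric alternative referred to above can be illustrated with a small sign-flipping, max-statistic permutation test. This is only a toy sketch of the general idea, not the study's actual pipeline (which ran SPM/FSL/AFNI and a permutation toolbox on real resting-state data); the function name and array shapes are invented for the example.

```python
import numpy as np

def sign_flip_permutation_test(contrasts, n_perm=1000, seed=0):
    """FWE-corrected voxelwise p-values via a one-sample sign-flipping test.

    contrasts: (n_subjects, n_voxels) array of per-subject contrast values.
    Under the null, each subject's contrast map is symmetric around zero,
    so flipping its sign gives an equally likely dataset.
    """
    rng = np.random.default_rng(seed)
    n_sub = contrasts.shape[0]

    def t_stat(x):
        return x.mean(0) / (x.std(0, ddof=1) / np.sqrt(n_sub))

    t_obs = t_stat(contrasts)
    # Null distribution of the maximum t-statistic over all voxels
    max_null = np.empty(n_perm)
    for i in range(n_perm):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        max_null[i] = t_stat(contrasts * signs).max()
    # FWE-corrected p-value: how often the permutation maximum beats each voxel
    p_fwe = (1 + (max_null[None, :] >= t_obs[:, None]).sum(axis=1)) / (n_perm + 1)
    return t_obs, p_fwe

# Pure-noise data: a significant voxel should appear in roughly 5% of such runs
rng = np.random.default_rng(1)
t_obs, p_fwe = sign_flip_permutation_test(rng.standard_normal((20, 500)), n_perm=500)
print(p_fwe.min())
```

Because the null distribution is built from the data themselves, the familywise error rate stays nominal even when the spatial autocorrelation is non-Gaussian, which is why the permutation approach behaves well in the comparison above.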
••
University of Arizona1, Technische Universität München2, Ludwig Maximilian University of Munich3, Hospital Universitario La Paz4, Katholieke Universiteit Leuven5, Hebrew University of Jerusalem6, Innsbruck Medical University7, Poznan University of Medical Sciences8, Stanford University9, Oslo University Hospital10, University of Oslo11, BC Cancer Agency12, University of Texas MD Anderson Cancer Center13, Linköping University14, McGill University15, Cedars-Sinai Medical Center16, VA Boston Healthcare System17, Harvard University18
TL;DR: Among patients with platinum-sensitive, recurrent ovarian cancer, the median duration of progression-free survival was significantly longer among those receiving niraparib than among those receiving placebo, regardless of the presence or absence of gBRCA mutations or HRD status, with moderate bone marrow toxicity.
1,686 citations
••
TL;DR: A nonfullerene-based polymer solar cell (PSC) that significantly outperforms fullerene-based PSCs in power-conversion efficiency, while showing excellent thermal stability, is demonstrated for the first time.
Abstract: A nonfullerene-based polymer solar cell (PSC) that significantly outperforms fullerene-based PSCs with respect to the power-conversion efficiency is demonstrated for the first time. An efficiency of >11%, which is among the top values in the PSC field, and excellent thermal stability are obtained using PBDB-T and ITIC as donor and acceptor, respectively.
1,662 citations
••
TL;DR: The proposed SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples, and an optimization strategy is proposed, based on the iterative Gauss-Seidel method, for efficient online learning.
Abstract: Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model.
We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.
1,616 citations
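As a rough illustration of the spatial regularization idea (not the paper's Fourier-domain Gauss-Seidel solver), the sketch below solves a tiny ridge-regression analogue in which each filter coefficient is penalized by a weight that grows towards the patch border; the sizes and penalty shape are made up for the example.

```python
import numpy as np

def spatially_regularized_filter(X, y, w):
    """Closed-form solution of  min_f ||X f - y||^2 + ||w * f||^2.

    X: (n_samples, n_coeffs) training samples (e.g. shifted patches).
    y: (n_samples,) desired correlation outputs.
    w: (n_coeffs,) spatial penalty, small near the target centre and large
       towards the border, so background coefficients are suppressed even
       though the samples cover a region much larger than the target.
    """
    A = X.T @ X + np.diag(w ** 2)
    return np.linalg.solve(A, X.T @ y)

# Toy 1-D example: 64 filter coefficients, quadratic penalty away from the centre
rng = np.random.default_rng(0)
n = 64
X = rng.standard_normal((200, n))
y = rng.standard_normal(200)
dist = np.abs(np.arange(n) - n / 2) / (n / 2)
w = 1.0 + 30.0 * dist ** 2                        # penalty grows towards the border
f = spatially_regularized_filter(X, y, w)
# Border coefficients are typically shrunk much more strongly than central ones
print(np.abs(f[:8]).mean(), np.abs(f[n // 2 - 4:n // 2 + 4]).mean())
```

In the actual SRDCF the same kind of location-dependent penalty is applied to a multi-channel filter and the resulting normal equations are solved iteratively with the Gauss-Seidel method in the Fourier domain, as described in the abstract.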
••
TL;DR: Perovskite quantum wells yield highly efficient LEDs spanning the visible and near-infrared as discussed by the authors. But their performance is not as good as that of traditional LEDs, and their lifetime is shorter.
Abstract: Perovskite quantum wells yield highly efficient LEDs spanning the visible and near-infrared.
1,419 citations
••
TL;DR: Discriminative Correlation Filters have demonstrated excellent performance for visual object tracking and the key to their success is the ability to efficiently exploit available negative data.
Abstract: Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data by including all shifted versions of a training sample. However, the underlying DCF formulation is restricted to single-resolution feature maps, significantly limiting its potential. In this paper, we go beyond the conventional DCF framework and introduce a novel formulation for training continuous convolution filters. We employ an implicit interpolation model to pose the learning problem in the continuous spatial domain. Our proposed formulation enables efficient integration of multi-resolution deep feature maps, leading to superior results on three object tracking benchmarks: OTB-2015 (+5.1% in mean OP), Temple-Color (+4.6% in mean OP), and VOT2015 (20% relative reduction in failure rate). Additionally, our approach is capable of sub-pixel localization, crucial for the task of accurate feature point tracking. We also demonstrate the effectiveness of our learning formulation in extensive feature point tracking experiments. Code and supplementary material are available at this http URL.
1,324 citations
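The continuous-domain formulation can be summarised with the following equations; the notation is paraphrased rather than quoted from the paper, so treat it as a sketch. Each feature channel $x_d$, sampled at $N_d$ points, is interpolated onto the continuous interval $[0, T)$ with a kernel $b_d$, and one continuous filter $f_d$ per channel produces the confidence function.

```latex
% Sketch of the continuous convolution operator formulation (paraphrased notation)
\begin{align}
  J_d\{x_d\}(t) &= \sum_{n=0}^{N_d - 1} x_d[n]\, b_d\!\Big(t - \frac{T}{N_d}\,n\Big)
      && \text{(implicit interpolation of channel } d\text{)} \\
  S_f\{x\}(t)   &= \sum_{d=1}^{D} \big(f_d * J_d\{x_d\}\big)(t)
      && \text{(continuous confidence score)} \\
  E(f)          &= \sum_{k=1}^{m} \alpha_k \big\lVert S_f\{x_k\} - y_k \big\rVert^2
                   + \sum_{d=1}^{D} \lVert w\, f_d \rVert^2
      && \text{(training objective with sample weights } \alpha_k\text{)}
\end{align}
```

Because each channel keeps its own resolution $N_d$ before interpolation, multi-resolution deep feature maps integrate naturally, and because the confidence is defined for every $t$, its maximiser can be located with sub-grid precision, which is what enables the feature point tracking application mentioned in the abstract.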
••
08 Oct 2016
TL;DR: In this article, discriminative correlation filters (DCF) have demonstrated excellent performance for visual object tracking, and the key to their success is the ability to efficiently exploit available negative data.
Abstract: Discriminative Correlation Filters (DCF) have demonstrated excellent performance for visual object tracking. The key to their success is the ability to efficiently exploit available negative data by including all shifted versions of a training sample. ...
1,301 citations
•
17 Nov 2016
TL;DR: This is the first complete guide to the physical and engineering principles of Massive MIMO and will guide readers through key topics in multi-cell systems such as propagation modeling, multiplexing and de-multiplexing, channel estimation, power control, and performance evaluation.
Abstract: "Written by the pioneers of the concept, this is the first complete guide to the physical and engineering principles of Massive MIMO. Assuming only a basic background in communications and statisti ...
1,115 citations
••
TL;DR: In this article, fast and efficient charge separation is essential to achieve high power conversion efficiency in organic solar cells (OSCs), and in state-of-the-art OSCs, this is usually achieved by a significant driving force ...
Abstract: Fast and efficient charge separation is essential to achieve high power conversion efficiency in organic solar cells (OSCs). In state-of-the-art OSCs, this is usually achieved by a significant driving force ...
•
TL;DR: In this paper, a factorized convolution operator is introduced to reduce the number of parameters in the discriminative correlation filter (DCF) model, together with a compact generative model of the training sample distribution that significantly reduces memory and time complexity while providing better diversity of samples.
Abstract: In recent years, Discriminative Correlation Filter (DCF) based methods have significantly advanced the state-of-the-art in tracking. However, in the pursuit of ever increasing tracking performance, their characteristic speed and real-time capability have gradually faded. Further, the increasingly complex models, with a massive number of trainable parameters, have introduced the risk of severe over-fitting. In this work, we tackle the key causes behind the problems of computational complexity and over-fitting, with the aim of simultaneously improving both speed and performance.
We revisit the core DCF formulation and introduce: (i) a factorized convolution operator, which drastically reduces the number of parameters in the model; (ii) a compact generative model of the training sample distribution, that significantly reduces memory and time complexity, while providing better diversity of samples; (iii) a conservative model update strategy with improved robustness and reduced complexity. We perform comprehensive experiments on four benchmarks: VOT2016, UAV123, OTB-2015, and TempleColor. When using expensive deep features, our tracker provides a 20-fold speedup and achieves a 13.0% relative gain in Expected Average Overlap compared to the top ranked method in the VOT2016 challenge. Moreover, our fast variant, using hand-crafted features, operates at 60 Hz on a single CPU, while obtaining 65.0% AUC on OTB-2015.
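A minimal sketch of the factorized convolution idea, the first of the three contributions: instead of learning one filter per feature channel, a projection matrix P maps the D input channels onto C << D channels that share a small filter bank. The sizes, function name and correlate2d backend are illustrative rather than the paper's Fourier-domain implementation, and the compact sample model and conservative update are omitted.

```python
import numpy as np
from scipy.signal import correlate2d

def factorized_response(feat, P, filters):
    """Detection scores from a factorized correlation filter.

    feat:    (H, W, D) feature map extracted from the search region.
    P:       (D, C) learned projection matrix, with C << D.
    filters: (h, w, C) filter bank defined on the projected channels.
    """
    H, W, D = feat.shape
    C = P.shape[1]
    proj = (feat.reshape(-1, D) @ P).reshape(H, W, C)    # project the channels
    resp = np.zeros((H, W))
    for c in range(C):                                   # correlate and sum channels
        resp += correlate2d(proj[:, :, c], filters[:, :, c], mode="same")
    return resp

rng = np.random.default_rng(0)
feat = rng.standard_normal((32, 32, 16))     # D = 16 raw channels
P = rng.standard_normal((16, 4))             # C = 4 projected channels
filters = rng.standard_normal((7, 7, 4))
print(factorized_response(feat, P, filters).shape)   # (32, 32)
# Parameters: 4 * 7 * 7 + 16 * 4 = 260 versus 16 * 7 * 7 = 784 for a full filter bank
```

The parameter count in the comment illustrates why the factorization both speeds up learning and reduces the risk of over-fitting discussed in the abstract.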
••
TL;DR: This overview article identifies 10 myths of Massive MIMO, explains why they are not true, and asks a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly.
Abstract: Wireless communications is one of the most successful technologies in modern times, given that an exponential growth rate in wireless traffic has been sustained for over a century (known as Cooper’s law). This trend will certainly continue, driven by new innovative applications; for example, augmented reality and the Internet of Things. Massive MIMO has been identified as a key technology to handle orders of magnitude more data traffic. Despite the attention it is receiving from the communication community, we have personally witnessed that Massive MIMO is subject to several widespread misunderstandings, as epitomized by the following (fictional) abstract: “The Massive MIMO technology uses a nearly infinite number of high-quality antennas at the base stations. By having at least an order of magnitude more antennas than active terminals, one can exploit asymptotic behaviors that some special kinds of wireless channels have. This technology looks great at first sight, but unfortunately the signal processing complexity is off the charts and the antenna arrays would be so huge that it can only be implemented in millimeter-wave bands.” These statements are, in fact, completely false. In this overview article, we identify 10 myths and explain why they are not true. We also ask a question that is critical for the practical adoption of the technology and which will require intense future research activities to answer properly. We provide references to key technical papers that support our claims, while a further list of related overview and technical papers can be found at the Massive MIMO Info Point: http://massivemimo.eu
••
Norfolk and Norwich University Hospital1, University of East Anglia2, Leiden University Medical Center3, University of Barcelona4, Claude Bernard University Lyon 15, University of Kiel6, Tallaght Hospital7, University of Oxford8, University of Paris9, University of Pennsylvania10, Linköping University11, Charles University in Prague12, Lund University13, Ankara University14
TL;DR: The 2009 European League Against Rheumatism recommendations for the management of antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) have been updated and 15 recommendations were developed, covering general aspects, such as attaining remission.
Abstract: In this article, the 2009 European League Against Rheumatism (EULAR) recommendations for the management of antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) have been updated. The 2009 recommendations were on the management of primary small and medium vessel vasculitis. The 2015 update has been developed by an international task force representing EULAR, the European Renal Association and the European Vasculitis Society (EUVAS). The recommendations are based upon evidence from systematic literature reviews, as well as expert opinion where appropriate. The evidence presented was discussed and summarised by the experts in the course of a consensus-finding and voting process. Levels of evidence and grades of recommendations were derived and levels of agreement (strengths of recommendations) determined. In addition to the voting by the task force members, the relevance of the recommendations was assessed by an online voting survey among members of EUVAS. Fifteen recommendations were developed, covering general aspects, such as attaining remission and the need for shared decision making between clinicians and patients. More specific items relate to starting immunosuppressive therapy in combination with glucocorticoids to induce remission, followed by a period of remission maintenance; for remission induction in life-threatening or organ-threatening AAV, cyclophosphamide and rituximab are considered to have similar efficacy; and plasma exchange, which is recommended, where licensed, in the setting of rapidly progressive renal failure or severe diffuse pulmonary haemorrhage. These recommendations are intended for use by healthcare professionals, doctors in specialist training, medical students, pharmaceutical industries and drug regulatory organisations.
•
TL;DR: Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly fivefold improvement in 95%-likely per-user throughput over the small-cell scheme, and tenfold improvement when shadow fading is correlated.
Abstract: A Cell-Free Massive MIMO (multiple-input multiple-output) system comprises a very large number of distributed access points (APs) which simultaneously serve a much smaller number of users over the same time/frequency resources based on directly measured channel characteristics. The APs and users have only one antenna each. The APs acquire channel state information through time-division duplex operation and the reception of uplink pilot signals transmitted by the users. The APs perform multiplexing/de-multiplexing through conjugate beamforming on the downlink and matched filtering on the uplink. Closed-form expressions for individual user uplink and downlink throughputs lead to max-min power control algorithms. Max-min power control ensures uniformly good service throughout the area of coverage. A pilot assignment algorithm helps to mitigate the effects of pilot contamination, but power control is far more important in that regard.
Cell-Free Massive MIMO has considerably improved performance with respect to a conventional small-cell scheme, whereby each user is served by a dedicated AP, in terms of both 95%-likely per-user throughput and immunity to shadow fading spatial correlation. Under uncorrelated shadow fading conditions, the cell-free scheme provides nearly 5-fold improvement in 95%-likely per-user throughput over the small-cell scheme, and 10-fold improvement when shadow fading is correlated.
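The downlink conjugate beamforming step can be illustrated with a few lines of toy simulation; pilot-based channel estimation, max-min power control and the uplink matched filter are omitted, and the sizes and the equal-power normalisation below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 128, 8                                   # distributed single-antenna APs, users
# i.i.d. Rayleigh channel coefficients between every AP and every user
G = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

q = np.exp(2j * np.pi * rng.random(K))          # unit-modulus user symbols
# Conjugate beamforming: every AP weights each user's symbol by the conjugate
# of its own channel to that user, so the M contributions add coherently.
x = (np.conj(G) * q[None, :]).sum(axis=1) / np.sqrt(M * K)

noise = 0.1 * (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
r = G.T @ x + noise                             # signal received by each user

# The desired term grows like M while interference grows like sqrt(M), so the
# received samples end up (almost) in phase with the transmitted symbols.
print(np.round(np.angle(r * np.conj(q)), 2))
```

Each AP only needs its own locally measured channels to form its transmit signal, which is what makes the fully distributed operation described in the abstract possible.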
••
TL;DR: In this article, large scale synthesis and delamination of 2D Mo2CTx (where T is a surface termination group) has been achieved by selectively etching gallium from the recently discovered nanolaminated, ternary tra...
Abstract: Large scale synthesis and delamination of 2D Mo2CTx (where T is a surface termination group) has been achieved by selectively etching gallium from the recently discovered nanolaminated, ternary tra ...
•
TL;DR: This paper proposes a novel scale adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation in a tracking-by-detection framework, obtaining the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014.
Abstract: Accurate scale estimation of a target is a challenging research problem in visual object tracking. Most state-of-the-art methods employ an exhaustive scale search to estimate the target size. The exhaustive search strategy is computationally expensive and struggles when confronted with large scale variations. This paper investigates the problem of accurate and robust scale estimation in a tracking-by-detection framework. We propose a novel scale adaptive tracking approach by learning separate discriminative correlation filters for translation and scale estimation. The explicit scale filter is learned online using the target appearance sampled at a set of different scales. Contrary to standard approaches, our method directly learns the appearance change induced by variations in the target scale. Additionally, we investigate strategies to reduce the computational cost of our approach.
Extensive experiments are performed on the OTB and the VOT2014 datasets. Compared to the standard exhaustive scale search, our approach achieves a gain of 2.5% in average overlap precision on the OTB dataset. Additionally, our method is computationally efficient, operating at a 50% higher frame rate compared to the exhaustive scale search. Our method obtains the top rank in performance by outperforming 19 state-of-the-art trackers on OTB and 37 state-of-the-art trackers on VOT2014.
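The explicit scale filter can be sketched as a one-dimensional correlation filter over a pyramid of scale samples. The sketch below uses the standard closed-form MOSSE/DSST-style solution with made-up feature vectors; the translation filter, the feature extraction and the online update from the paper are not reproduced.

```python
import numpy as np

def learn_scale_filter(samples, y, lam=1e-2):
    """samples: (S, d) features extracted at S scales; y: (S,) desired output."""
    F = np.fft.fft(samples, axis=0)              # FFT along the scale dimension
    Y = np.fft.fft(y)
    num = Y[:, None] * np.conj(F)                # per-feature-dimension numerator
    den = (F * np.conj(F)).sum(axis=1) + lam     # shared denominator + regulariser
    return num, den

def scale_response(num, den, samples):
    Z = np.fft.fft(samples, axis=0)
    return np.real(np.fft.ifft((num * Z).sum(axis=1) / den))

S, d = 17, 64                                    # 17 scale samples, toy feature length
rng = np.random.default_rng(0)
samples = rng.standard_normal((S, d))            # stand-in for features at each scale
y = np.exp(-0.5 * ((np.arange(S) - S // 2) / 1.5) ** 2)   # Gaussian peaked at the true scale
num, den = learn_scale_filter(samples, y)
print(int(np.argmax(scale_response(num, den, samples))), S // 2)  # peak should match
```

Because the scale search is reduced to a single one-dimensional correlation, estimating the target size adds very little cost compared with an exhaustive multi-scale translation search.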
••
Université Paris-Saclay1, Goddard Space Flight Center2, Commonwealth Scientific and Industrial Research Organisation3, National Oceanic and Atmospheric Administration4, National Institute of Geophysics and Volcanology5, Linköping University6, Netherlands Institute for Space Research7, Food and Agriculture Organization8, Stanford University9, University of Sheffield10, University of California, Irvine11, National Institute of Water and Atmospheric Research12, Max Planck Society13, École Polytechnique14, Yale University15, University of Victoria16, Jet Propulsion Laboratory17, Met Office18, International Institute for Applied Systems Analysis19, National Institute for Environmental Studies20, Oeschger Centre for Climate Change Research21, National Center for Atmospheric Research22, City University of New York23, Princeton University24, University of Bristol25, Lund University26, Japan Agency for Marine-Earth Science and Technology27, Université du Québec à Montréal28, University of Oslo29, Centre national de la recherche scientifique30, Massachusetts Institute of Technology31, Lawrence Berkeley National Laboratory32, University of Hohenheim33, Japan Meteorological Agency34, Auburn University35, Imperial College London36, Royal Netherlands Meteorological Institute37, VU University Amsterdam38, University of California, San Diego39, Environment Canada40, University of Toronto41, Northwest A&F University42
TL;DR: A consortium of multi-disciplinary scientists, including atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions, has been established under the umbrella of the Global Carbon Project (GCP) to synthesize and stimulate research on the methane cycle.
Abstract: The global methane (CH4) budget is becoming an increasingly important component for managing realistic pathways to mitigate climate change. This relevance, due to a shorter atmospheric lifetime and a stronger warming potential than carbon dioxide, is challenged by the still unexplained changes of atmospheric CH4 over the past decade. Emissions and concentrations of CH4 are continuing to increase, making CH4 the second most important human-induced greenhouse gas after carbon dioxide. Two major difficulties in reducing uncertainties come from the large variety of diffusive CH4 sources that overlap geographically, and from the destruction of CH4 by the very short-lived hydroxyl radical (OH). To address these difficulties, we have established a consortium of multi-disciplinary scientists under the umbrella of the Global Carbon Project to synthesize and stimulate research on the methane cycle, and to produce regular (∼ biennial) updates of the global methane budget. This consortium includes atmospheric physicists and chemists, biogeochemists of surface and marine emissions, and socio-economists who study anthropogenic emissions. Following Kirschke et al. (2013), we propose here the first version of a living review paper that integrates results of top-down studies (exploiting atmospheric observations within an atmospheric inverse-modelling framework) and bottom-up models, inventories and data-driven approaches (including process-based models for estimating land surface emissions and atmospheric chemistry, and inventories for anthropogenic emissions, data-driven extrapolations). For the 2003–2012 decade, global methane emissions are estimated by top-down inversions at 558 Tg CH4 yr−1, range 540–568. About 60 % of global emissions are anthropogenic (range 50–65 %). Since 2010, the bottom-up global emission inventories have been closer to methane emissions in the most carbon-intensive Representative Concentration Pathway (RCP8.5) and higher than all other RCP scenarios. Bottom-up approaches suggest larger global emissions (736 Tg CH4 yr−1, range 596–884) mostly because of larger natural emissions from individual sources such as inland waters, natural wetlands and geological sources. Considering the atmospheric constraints on the top-down budget, it is likely that some of the individual emissions reported by the bottom-up approaches are overestimated, leading to global emission estimates that are too large. Latitudinal data from top-down emissions indicate a predominance of tropical emissions (∼ 64 % of the global budget). The most important source of uncertainty in the methane budget is attributable to emissions from wetlands and other inland waters. We show that the wetland extent could contribute 30–40 % of the estimated range for wetland emissions. Other priorities for improving the methane budget include the following: (i) the development of process-based models for inland-water emissions, (ii) the intensification of methane observations at local scale (flux measurements) to constrain bottom-up land surface models, and at regional scale (surface networks and satellites) to constrain top-down inversions, (iii) improvements in the estimation of atmospheric loss by OH, and (iv) improvements of the transport models integrated in top-down inversions. The data presented here can be downloaded from the Carbon Dioxide Information Analysis Center (http://doi.org/10.3334/CDIAC/GLOBAL_METHANE_BUDGET_2016_V1.1) and the Global Carbon Project.
••
University of Pretoria1, International Olympic Committee2, Qatar Airways3, Norwegian School of Sport Sciences4, University of Queensland5, Loughborough University6, Linköping University7, University of Illinois at Chicago8, Vrije Universiteit Brussel9, University of Sydney10, Vanderbilt University Medical Center11, University of Oslo12
TL;DR: An expert group convened to review the scientific evidence on the relationship between load and health outcomes in sport provides athletes, coaches and support staff with practical guidelines for managing load in sport.
Abstract: Athletes participating in elite sports are exposed to high training loads and increasingly saturated competition calendars. Emerging evidence indicates that poor load management is a major risk factor for injury. The International Olympic Committee convened an expert group to review the scientific evidence for the relationship of load (defined broadly to include rapid changes in training and competition load, competition calendar congestion, psychological load and travel) and health outcomes in sport. We summarise the results linking load to risk of injury in athletes, and provide athletes, coaches and support staff with practical guidelines to manage load in sport. This consensus statement includes guidelines for (1) prescription of training and competition load, as well as for (2) monitoring of training, competition and psychological load, athlete well-being and injury. In the process, we identified research priorities.
••
University of Ljubljana1, University of Birmingham2, Czech Technical University in Prague3, Linköping University4, Austrian Institute of Technology5, Carnegie Mellon University6, Parthenope University of Naples7, University of Isfahan8, Autonomous University of Madrid9, University of Ottawa10, University of Oxford11, Hong Kong Baptist University12, Kyiv Polytechnic Institute13, Middle East Technical University14, Hacettepe University15, King Abdullah University of Science and Technology16, Pohang University of Science and Technology17, University of Nottingham18, University at Albany, SUNY19, Chinese Academy of Sciences20, Dalian University of Technology21, Xi'an Jiaotong University22, Indian Institute of Space Science and Technology23, Hong Kong University of Science and Technology24, ASELSAN25, Commonwealth Scientific and Industrial Research Organisation26, Australian National University27, University of Missouri28, University of Verona29, Universidade Federal de Itajubá30, United States Naval Research Laboratory31, Marquette University32, Graz University of Technology33, Naver Corporation34, Imperial College London35, Electronics and Telecommunications Research Institute36, Zhejiang University37, University of Surrey38, Harbin Institute of Technology39, Lehigh University40
TL;DR: The Visual Object Tracking challenge VOT2016 goes beyond its predecessors by introducing a new semi-automatic ground truth bounding box annotation methodology and extending the evaluation system with the no-reset experiment.
Abstract: The Visual Object Tracking challenge VOT2016 aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 70 trackers are presented, with a large number of trackers being published at major computer vision conferences and journals in recent years. The number of tested state-of-the-art trackers makes VOT2016 the largest and most challenging benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the Appendix. The VOT2016 goes beyond its predecessors by (i) introducing a new semi-automatic ground truth bounding box annotation methodology and (ii) extending the evaluation system with the no-reset experiment. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).
••
TL;DR: A new trimethylaluminum vapor-based crosslinking method is applied to render the nanocrystal films insoluble; the resulting film coverage, coupled with the natural confinement of injected charges within the perovskite crystals, facilitates electron-hole capture and gives rise to a remarkable electroluminescence yield.
Abstract: The preparation of highly efficient perovskite nanocrystal light-emitting diodes is shown. A new trimethylaluminum vapor-based crosslinking method to render the nanocrystal films insoluble is applied. The resulting near-complete nanocrystal film coverage, coupled with the natural confinement of injected charges within the perovskite crystals, facilitates electron-hole capture and gives rise to a remarkable electroluminescence yield of 5.7%.
••
TL;DR: This work adapted Kahneman's seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL), which incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention.
Abstract: The Fifth Eriksholm Workshop on “Hearing Impairment and Cognitive Energy” was convened to develop a consensus among interdisciplinary experts about what is known on the topic, gaps in knowledge, the use of terminology, priorities for future research, and implications for practice. The general term cognitive energy was chosen to facilitate the broadest possible discussion of the topic. It goes back to Titchener (1908) who described the effects of attention on perception; he used the term psychic energy for the notion that limited mental resources can be flexibly allocated among perceptual and mental activities. The workshop focused on three main areas: (1) theories, models, concepts, definitions, and frameworks; (2) methods and measures; and (3) knowledge translation. We defined effort as the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a task, with listening effort applying more specifically when tasks involve listening. We adapted Kahneman’s seminal (1973) Capacity Model of Attention to listening and proposed a heuristically useful Framework for Understanding Effortful Listening (FUEL). Our FUEL incorporates the well-known relationship between cognitive demand and the supply of cognitive capacity that is the foundation of cognitive theories of attention. Our FUEL also incorporates a motivation dimension based on complementary theories of motivational intensity, adaptive gain control, and optimal performance, fatigue, and pleasure. Using a three-dimensional illustration, we highlight how listening effort depends not only on hearing difficulties and task demands but also on the listener’s motivation to expend mental effort in the challenging situations of everyday life.
••
University of Göttingen1, European Society of Cardiology2, University of Warwick3, Athens State University4, University of Ferrara5, Academy for Urban School Leadership6, University of Brescia7, Universidade Nova de Lisboa8, Charles University in Prague9, Bar-Ilan University10, Paris Diderot University11, Linköping University12, Semmelweis University13, Medical University of Łódź14, Cardiovascular Institute of the South15, Alexandria University16, University of Belgrade17, Lithuanian University of Health Sciences18, University of Graz19, University Clinical Hospital Mostar20
TL;DR: The European Society of Cardiology Heart Failure Long‐Term Registry (ESC‐HF‐LT‐R) was set up with the aim of describing the clinical epidemiology and the 1‐year outcomes of patients with heart failure with the added intention of comparing differences between countries.
Abstract: Aims
The European Society of Cardiology Heart Failure Long-Term Registry (ESC-HF-LT-R) was set up with the aim of describing the clinical epidemiology and the 1-year outcomes of patients with heart failure (HF) with the added intention of comparing differences between participating countries.
Methods and results
The ESC-HF-LT-R is a prospective, observational registry contributed to by 211 cardiology centres in 21 European and/or Mediterranean countries, all being member countries of the ESC. Between May 2011 and April 2013 it collected data on 12 440 patients, 40.5% of them hospitalized with acute HF (AHF) and 59.5% outpatients with chronic HF (CHF). The all-cause 1-year mortality rate was 23.6% for AHF and 6.4% for CHF. The combined endpoint of mortality or HF hospitalization within 1 year had a rate of 36% for AHF and 14.5% for CHF. All-cause mortality rates in the different regions ranged from 21.6% to 36.5% in patients with AHF, and from 6.9% to 15.6% in those with CHF. These differences in mortality between regions are thought to reflect differences in the characteristics and/or management of these patients.
Conclusion
The ESC-HF-LT-R shows that 1-year all-cause mortality of patients with AHF is still high while the mortality of CHF is lower. This registry provides the opportunity to evaluate the management and outcomes of patients with HF and identify areas for improvement.
••
TL;DR: In this article, the optimal number of scheduled users in a massive MIMO system with arbitrary pilot reuse and random user locations is analyzed in a closed form, while simulations are used to show what happens at finite $M$, in different interference scenarios, with different pilot reuse factors, and for different processing schemes.
Abstract: Massive MIMO is a promising technique for increasing the spectral efficiency (SE) of cellular networks, by deploying antenna arrays with hundreds or thousands of active elements at the base stations and performing coherent transceiver processing. A common rule-of-thumb is that these systems should have an order of magnitude more antennas $M$ than scheduled users $K$ because the users’ channels are likely to be near-orthogonal when $M/K > 10$. However, it has not been proved that this rule-of-thumb actually maximizes the SE. In this paper, we analyze how the optimal number of scheduled users $K^\star$ depends on $M$ and other system parameters. To this end, new SE expressions are derived to enable efficient system-level analysis with power control, arbitrary pilot reuse, and random user locations. The value of $K^\star$ in the large-$M$ regime is derived in closed form, while simulations are used to show what happens at finite $M$, in different interference scenarios, with different pilot reuse factors, and for different processing schemes. Up to half the coherence block should be dedicated to pilots and the optimal $M/K$ is less than 10 in many cases of practical relevance. Interestingly, $K^\star$ depends strongly on the processing scheme and hence it is unfair to compare different schemes using the same $K$.
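The trade-off behind the optimal number of scheduled users can be illustrated with a deliberately simplified sum-SE model (not the paper's closed-form expressions): scheduling $K$ users gives a multiplexing gain of $K$, but spends $K$ symbols of the coherence block on pilots and, under zero-forcing-style processing, splits an array gain of roughly $M - K$ among the users. The constants and the function below are made up for the illustration.

```python
import numpy as np

def sum_se(K, M, tau_c, snr):
    """Toy sum spectral efficiency [bit/s/Hz] when K users are scheduled."""
    if K >= M or K >= tau_c:
        return 0.0
    pilot_overhead = 1.0 - K / tau_c            # fraction of the block left for data
    per_user_sinr = snr * (M - K) / K           # ZF-style array gain shared by K users
    return K * pilot_overhead * np.log2(1.0 + per_user_sinr)

M, tau_c, snr = 100, 200, 1.0                   # antennas, coherence block, nominal SNR
K_values = np.arange(1, min(M, tau_c))
se = np.array([sum_se(K, M, tau_c, snr) for K in K_values])
K_star = K_values[np.argmax(se)]
print(K_star, M / K_star)                       # optimal K and the resulting M/K ratio
```

Even this crude model shows the qualitative behaviour stated in the abstract: the maximising $K$ can consume a large share of the coherence block for pilots, and the resulting $M/K$ ratio is typically well below the order-of-magnitude rule-of-thumb.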
••
TL;DR: A unified model is suggested in which immune tolerance to β cells can be broken by several environmental exposures that induce the generation of hybrid peptides acting as neoautoantigens.
••
TL;DR: Training-related hamstring injury rates in male professional footballers over 13 consecutive seasons have increased substantially since 2001 but match-related injury rates have remained stable, and the challenge is for clubs to reduce training-related hamstring injury rates without impairing match performance.
Abstract: Background There are limited data on hamstring injury rates over time in football. Aim To analyse time trends in hamstring injury rates in male professional footballers over 13 consecutive seasons and to distinguish the relative contribution of training and match injuries. Methods 36 clubs from 12 European countries were followed between 2001 and 2014. Team medical staff recorded individual player exposure and time-loss injuries. Injuries per 1000 h were compared as a rate ratio (RR) with 95% CI. Injury burden was the number of lay-off days per 1000 h. Seasonal trend for injury was analysed using linear regression. Results A total of 1614 hamstring injuries were recorded; 22% of players sustained at least one hamstring injury during a season. The overall hamstring injury rate over the 13-year period was 1.20 injuries per 1000 h; the match injury rate (4.77) was 9 times higher than the training injury rate (0.51; RR 9.4; 95% CI 8.5 to 10.4). The time-trend analysis showed an average year-on-year increase of 2.3% in the total hamstring injury rate over the 13-year period (R²=0.431, b=0.023, 95% CI 0.006 to 0.041, p=0.015). This increase over time was most pronounced for training injuries, which increased by 4.0% per year (R²=0.450, b=0.040, 95% CI 0.011 to 0.070, p=0.012). The average hamstring injury burden was 19.7 days per 1000 h (annual average increase 4.1%) (R²=0.437, b=0.041, 95% CI 0.010 to 0.072, p=0.014). Conclusions Training-related hamstring injury rates have increased substantially since 2001 but match-related injury rates have remained stable. The challenge is for clubs to reduce training-related hamstring injury rates without impairing match performance.
••
TL;DR: In this article, the authors investigated the effects of the presence of LiCl during the chemical etching of the MAX phase Ti3AlC2 into MXene Ti3C2Tx (T stands for surface termination) and found that the resulting MXene has Li+ cations in the interlayer space.
Abstract: Ti3C2 and other two-dimensional transition metal carbides known as MXenes are currently being explored for many applications involving intercalated ions, from electrochemical energy storage, to contaminant sorption from water, to selected ion sieving. We report here a systematic investigation of ion exchange in Ti3C2 MXene and its hydration/dehydration behavior. We have investigated the effects of the presence of LiCl during the chemical etching of the MAX phase Ti3AlC2 into MXene Ti3C2Tx (T stands for surface termination) and found that the resulting MXene has Li+ cations in the interlayer space. We successfully exchanged the Li+ cations with K+, Na+, Rb+, Mg2+, and Ca2+ (supported by X-ray photoelectron and energy-dispersive spectroscopy) and found that the exchanged material expands on the unit-cell level in response to changes in humidity, with the nature of expansion dependent on the intercalated cation, similar to behavior of clay minerals; stepwise expansions of the basal spacing were observed, wit...
••
TL;DR: Clinical psychologists should consider using modern information technology and evidence-based treatment programs as a complement to their other services, even though there will always be clients for whom face-to-face treatment is the best option.
Abstract: During the past 15 years, much progress has been made in developing and testing Internet-delivered psychological treatments. In particular, therapist-guided Internet treatments have been found to be effective for a wide range of psychiatric and somatic conditions in well over 100 controlled trials. These treatments require (a) a secure web platform, (b) robust assessment procedures, (c) treatment contents that can be text based or offered in other formats, and (d) a therapist role that differs from that in face-to-face therapy. Studies suggest that guided Internet treatments can be as effective as face-to-face treatments, lead to sustained improvements, work in clinically representative conditions, and probably are cost-effective. Despite these research findings, Internet treatment is not yet disseminated in most places, and clinical psychologists should consider using modern information technology and evidence-based treatment programs as a complement to their other services, even though there will always be clients for whom face-to-face treatment is the best option.
••
01 Jun 2016
TL;DR: In this article, the authors propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights, which enables corrupted samples to be downweighted while increasing the impact of correct ones.
Abstract: Tracking-by-detection methods have demonstrated competitive performance in recent years. In these approaches, the tracking model heavily relies on the quality of the training set. Due to the limited amount of labeled training data, additional samples need to be extracted and labeled by the tracker itself. This often leads to the inclusion of corrupted training samples, due to occlusions, misalignments and other perturbations. Existing tracking-by-detection methods either ignore this problem, or employ a separate component for managing the training set. We propose a novel generic approach for alleviating the problem of corrupted training samples in tracking-by-detection frameworks. Our approach dynamically manages the training set by estimating the quality of the samples. Contrary to existing approaches, we propose a unified formulation by minimizing a single loss over both the target appearance model and the sample quality weights. The joint formulation enables corrupted samples to be downweighted while increasing the impact of correct ones. Experiments are performed on three benchmarks: OTB-2015 with 100 videos, VOT-2015 with 60 videos, and Temple-Color with 128 videos. On the OTB-2015, our unified formulation significantly improves the baseline, with a gain of 3.8% in mean overlap precision. Finally, our method achieves state-of-the-art results on all three datasets.
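The single joint loss can be written roughly as below; the notation is an approximation rather than a quotation from the paper. Here θ is the appearance model, α_k ≥ 0 is the quality weight of training sample (x_k, y_k) with the weights summing to one, ρ_k is a prior that favours recent samples, and μ controls how aggressively high-loss samples are down-weighted.

```latex
% Approximate form of the joint objective over the model and the sample weights
\begin{equation}
  \min_{\theta,\; \alpha \ge 0,\; \sum_k \alpha_k = 1} \quad
  \sum_{k=1}^{K} \alpha_k\, L(\theta; x_k, y_k)
  \;+\; \frac{1}{\mu} \sum_{k=1}^{K} \frac{\alpha_k^{2}}{\rho_k}
\end{equation}
```

Alternating between the two blocks gives the behaviour described in the abstract: with θ fixed, samples with large residual loss receive small weights, and with α fixed, the model is refit mostly on the samples judged to be clean.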
••
01 May 2016
TL;DR: In this paper, a chemical etching method was developed to produce porous two-dimensional (2D) Ti3C2Tx MXenes at room temperature in aqueous solutions.
Abstract: Herein we develop a chemical etching method to produce porous two-dimensional (2D) Ti3C2Tx MXenes at room temperature in aqueous solutions. The as-produced porous Ti3C2Tx (p-Ti3C2Tx) have larger sp ...