
Showing papers by "Polytechnic University of Catalonia" published in 2018


Proceedings ArticleDOI
15 Feb 2018
TL;DR: Graph Attention Networks (GATs) as mentioned in this paper leverage masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
Abstract: We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).
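To make the masked attention concrete, here is a minimal NumPy sketch of a single attention head as described in the abstract; the shapes, the LeakyReLU slope, and the split of the attention vector are standard choices for illustration, not the authors' reference implementation.

```python
import numpy as np

def gat_head(H, A, W, a, slope=0.2):
    """One masked self-attention head over a graph (illustrative sketch).

    H: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, Fp) shared weight matrix; a: (2*Fp,) attention parameters.
    """
    Wh = H @ W                                      # shared linear transform
    Fp = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), computed via the usual split of a
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, slope * e)               # LeakyReLU
    e = np.where(A > 0, e, -1e9)                    # mask: attend to neighbours only
    att = np.exp(e - e.max(axis=1, keepdims=True))  # row-wise softmax
    att /= att.sum(axis=1, keepdims=True)
    return att @ Wh                                 # attention-weighted aggregation

rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))                         # 4 nodes, 3 features
A = np.eye(4) + np.roll(np.eye(4), 1, 0) + np.roll(np.eye(4), -1, 0)  # ring graph
print(gat_head(H, A, rng.normal(size=(3, 2)), rng.normal(size=(4,))).shape)  # (4, 2)
```

Because the softmax is masked by the adjacency matrix rather than by any spectral decomposition, the same head applies unchanged to graphs unseen at training time, which is what makes the model inductive.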

7,904 citations


Posted ContentDOI
Spyridon Bakas1, Mauricio Reyes, Andras Jakab2, Stefan Bauer3  +435 more · Institutions (111)
TL;DR: This study assesses the state-of-the-art machine learning methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018, and investigates the challenge of identifying the best ML algorithms for each of these tasks.
Abstract: Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.

1,165 citations


Journal ArticleDOI
TL;DR: A mathematical expression is derived to compute PrediXcan results using summary data, and the effects of gene expression variation on human phenotypes in 44 GTEx tissues and >100 phenotypes are investigated.
Abstract: Scalable, integrative methods to understand mechanisms that link genetic variants with phenotypes are needed. Here we derive a mathematical expression to compute PrediXcan (a gene mapping approach) results using summary data (S-PrediXcan) and show its accuracy and general robustness to misspecified reference sets. We apply this framework to 44 GTEx tissues and 100+ phenotypes from GWAS and meta-analysis studies, creating a growing public catalog of associations that seeks to capture the effects of gene expression variation on human phenotypes. Replication in an independent cohort is shown. Most of the associations are tissue specific, suggesting context specificity of the trait etiology. Colocalized significant associations in unexpected tissues underscore the need for an agnostic scanning of multiple contexts to improve our ability to detect causal regulatory mechanisms. Monogenic disease genes are enriched among significant associations for related traits, suggesting that smaller alterations of these genes may cause a spectrum of milder phenotypes.
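As a rough illustration of how such a summary-based computation can look, here is a hedged Python sketch of a gene-level Z-score built from GWAS summary statistics; the exact published expression and variable conventions should be checked against the paper, and all names below are ours.

```python
import numpy as np

def gene_zscore(w, z_gwas, sd_snp, sd_gene):
    """Hedged sketch of a summary-statistics gene association score:

        Z_g ~= sum_l  w_l * (sigma_l / sigma_g) * z_l

    w:       prediction-model weights of the SNPs in the gene's model
    z_gwas:  GWAS z-scores (beta_l / se_l) for the same SNPs
    sd_snp:  SNP standard deviations in a reference panel
    sd_gene: std. dev. of the predicted expression in the same panel
    """
    return float(np.sum(w * (sd_snp / sd_gene) * z_gwas))

# sd_gene would itself come from the reference panel, e.g. sqrt(w' S w)
# with S the SNP covariance (LD) matrix, which is why the method is
# robust only up to the quality of that reference set.
```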

657 citations


Posted Content
TL;DR: A task-based hard attention mechanism that preserves previous tasks' information without affecting the current task's learning, and features the possibility to control both the stability and compactness of the learned knowledge, which makes it also attractive for online learning or network compression applications.
Abstract: Catastrophic forgetting occurs when a neural network loses the information learned in a previous task after training on subsequent tasks. This problem remains a hurdle for artificial intelligence systems with sequential learning capabilities. In this paper, we propose a task-based hard attention mechanism that preserves previous tasks' information without affecting the current task's learning. A hard attention mask is learned concurrently to every task, through stochastic gradient descent, and previous masks are exploited to condition such learning. We show that the proposed mechanism is effective for reducing catastrophic forgetting, cutting current rates by 45 to 80%. We also show that it is robust to different hyperparameter choices, and that it offers a number of monitoring capabilities. The approach features the possibility to control both the stability and compactness of the learned knowledge, which we believe makes it also attractive for online learning or network compression applications.
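A minimal PyTorch sketch of the gating idea, under our own simplifications (a single linear layer and no mask annealing schedule); it is not the authors' code:

```python
import torch

class HardAttentionLinear(torch.nn.Module):
    """Linear layer whose units are gated by a learned per-task mask."""
    def __init__(self, n_in, n_out, n_tasks):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out)
        self.emb = torch.nn.Embedding(n_tasks, n_out)   # one embedding per task

    def forward(self, x, task, s):
        # Mask a^t = sigmoid(s * e^t); annealing s toward a large value
        # during training pushes the mask toward a binary gate.
        mask = torch.sigmoid(s * self.emb(torch.tensor([task])))
        return self.fc(x) * mask

layer = HardAttentionLinear(8, 16, n_tasks=3)
y = layer(torch.randn(4, 8), task=1, s=50.0)
# After backward(), the gradients of weights feeding units that previous
# tasks marked as important would be scaled by (1 - cumulative past mask),
# which is what preserves the earlier tasks' information.
```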

379 citations


Journal ArticleDOI
TL;DR: The analysis of a year of continuous data from three passive acoustic monitoring devices revealed species-dependent seasonal and spatial variation of a large variety of marine mammals in the Greenland and Barents Seas, showing the importance of monitoring Arctic underwater biodiversity for assessing the ecological changes under the scope of climate change.
Abstract: While the Greenland and Barents Seas are known habitats for several cetacean and pinniped species there is a lack of long-term monitoring data in this rapidly changing environment. Moreover, little is known of the ambient soundscapes, and increasing off-shore anthropogenic activities can influence the ecosystem and marine life. Baseline acoustic data is needed to better assess current and future soundscape and ecosystem conditions. The analysis of a year of continuous data from three passive acoustic monitoring devices revealed species-dependent seasonal and spatial variation of a large variety of marine mammals in the Greenland and Barents Seas. Sampling rates were 39 and 78 kHz in the respective locations, and all systems were operational at a duty cycle of 2 min on, 30 min off. The research presents a description of cetacean and pinniped acoustic detections along with a variety of unknown low-frequency tonal sounds, and ambient sound level measurements that fall within the scope of the European Marine Strategy Framework (MSFD). The presented data shows the importance of monitoring Arctic underwater biodiversity for assessing the ecological changes under the scope of climate change.

285 citations


Journal ArticleDOI
TL;DR: In this paper, a set of qualitative case studies was used at higher education institutions across seven countries (Brazil, Serbia, Latvia, South Africa, Spain, Syria, UK) to examine the extent to which transformation and learning on matters related to sustainable development may be integrated.

275 citations


Journal ArticleDOI
10 May 2018
TL;DR: The Tropospheric Ozone Assessment Report (TOAR) is an activity of the International Global Atmospheric Chemistry Project, as mentioned in this paper; among other results, two new satellite products provide a detailed view of ozone in the lower troposphere across East Asia and Europe.
Abstract: The Tropospheric Ozone Assessment Report (TOAR) is an activity of the International Global Atmospheric Chemistry Project. This paper is a component of the report, focusing on the present-day distribution and trends of tropospheric ozone relevant to climate and global atmospheric chemistry model evaluation. Utilizing the TOAR surface ozone database, several figures present the global distribution and trends of daytime average ozone at 2702 non-urban monitoring sites, highlighting the regions and seasons of the world with the greatest ozone levels. Similarly, ozonesonde and commercial aircraft observations reveal ozone’s distribution throughout the depth of the free troposphere. Long-term surface observations are limited in their global spatial coverage, but data from remote locations indicate that ozone in the 21st century is greater than during the 1970s and 1980s. While some remote sites and many sites in the heavily polluted regions of East Asia show ozone increases since 2000, many others show decreases and there is no clear global pattern for surface ozone changes since 2000. Two new satellite products provide detailed views of ozone in the lower troposphere across East Asia and Europe, revealing the full spatial extent of the spring and summer ozone enhancements across eastern China that cannot be assessed from limited surface observations. Sufficient data are now available (ozonesondes, satellite, aircraft) across the tropics from South America eastwards to the western Pacific Ocean, to indicate a likely tropospheric column ozone increase since the 1990s. The 2014–2016 mean tropospheric ozone burden (TOB) between 60˚N–60˚S from five satellite products is 300 Tg ± 4%. While this agreement is excellent, the products differ in their quantification of TOB trends and further work is required to reconcile the differences. Satellites can now estimate ozone’s global long-wave radiative effect, but evaluation is difficult due to limited in situ observations where the radiative effect is greatest.

274 citations


Journal ArticleDOI
TL;DR: This performance provides an experimental benchmark demonstrating the ability to realize the low-frequency science potential of the LISA mission, recently selected by the European Space Agency.
Abstract: In the months since the publication of the first results, the noise performance of LISA Pathfinder has improved because of reduced Brownian noise due to the continued decrease in pressure around the test masses, from a better correction of noninertial effects, and from a better calibration of the electrostatic force actuation. In addition, the availability of numerous long noise measurement runs, during which no perturbation is purposely applied to the test masses, has allowed the measurement of noise with good statistics down to 20 μHz. The Letter presents the measured differential acceleration noise figure, which is at (1.74 ± 0.01) fm s⁻²/√Hz above 2 mHz and (6 ± 1) × 10 fm s⁻²/√Hz at 20 μHz, and discusses the physical sources for the measured noise. This performance provides an experimental benchmark demonstrating the ability to realize the low-frequency science potential of the LISA mission, recently selected by the European Space Agency.

271 citations


Journal ArticleDOI
TL;DR: The design of a low-complexity fuzzy logic controller of only 25 rules to be embedded in an energy management system for a residential grid-connected microgrid including renewable energy sources and storage capability is presented.
Abstract: This paper presents the design of a low-complexity fuzzy logic controller of only 25 rules to be embedded in an energy management system for a residential grid-connected microgrid including renewable energy sources and storage capability. The system assumes that neither the renewable generation nor the load demand is controllable. The main goal of the design is to minimize the grid power profile fluctuations while keeping the battery state of charge within secure limits. Instead of using forecasting-based methods, the proposed approach uses both the microgrid energy rate-of-change and the battery state of charge to increase, decrease, or maintain the power delivered/absorbed by the mains. The controller design parameters (membership functions and rule base) are adjusted to optimize a pre-defined set of quality criteria of the microgrid behavior. A comparison with other proposals seeking the same goal is presented at simulation level, whereas the features of the proposed design are experimentally tested on a real residential microgrid implemented at the Public University of Navarre.
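To illustrate the shape of such a controller, here is a toy two-input rule table in Python; the linguistic levels and output increments are invented for illustration and are not the paper's tuned rule base or membership functions:

```python
import numpy as np

# Toy 5x5 rule table: rows = battery SOC level, columns = microgrid energy
# rate-of-change level; entries = grid power increment in kW (invented values).
RULES = np.array([
    [ 2.0,  1.5,  1.0,  0.5,  0.0],   # SOC very low  -> draw more from grid
    [ 1.5,  1.0,  0.5,  0.0, -0.5],
    [ 1.0,  0.5,  0.0, -0.5, -1.0],
    [ 0.5,  0.0, -0.5, -1.0, -1.5],
    [ 0.0, -0.5, -1.0, -1.5, -2.0],   # SOC very high -> push more to grid
])

def grid_power_increment(soc, rate):
    """Crisp table lookup standing in for the full fuzzy inference:
    soc in [0, 1]; rate (normalized energy rate-of-change) in [-1, 1]."""
    i = min(int(soc * 5), 4)
    j = min(int((rate + 1) / 2 * 5), 4)
    return RULES[i, j]

print(grid_power_increment(soc=0.2, rate=-0.8))  # low SOC, falling balance -> 1.5
```

A real fuzzy controller would replace the crisp lookup with membership functions and defuzzification, but the 5 × 5 = 25-entry structure is the same.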

240 citations


Journal ArticleDOI
TL;DR: In this article, a series of optical spectroscopic and photometric observations of PSR J2215+5135, a "redback" binary MSP in a 4.14 hr orbit, and measure a drastic temperature contrast between the dark/cold (T N = 5660 K) and bright/hot (T D = 8080 K) sides of the companion star.
Abstract: New millisecond pulsars (MSPs) in compact binaries provide a good opportunity to search for the most massive neutron stars. Their main-sequence companion stars are often strongly irradiated by the pulsar, displacing the effective center of light from their barycenter and making mass measurements uncertain. We present a series of optical spectroscopic and photometric observations of PSR J2215+5135, a "redback" binary MSP in a 4.14 hr orbit, and measure a drastic temperature contrast between the dark/cold (T_N = 5660 K) and bright/hot (T_D = 8080 K) sides of the companion star. We find that the radial velocities depend systematically on the atmospheric absorption lines used to measure them. Namely, the semi-amplitude of the radial velocity curve (RVC) of J2215 measured with magnesium triplet lines is systematically higher than that measured with hydrogen Balmer lines, by 10%. We interpret this as a consequence of strong irradiation, whereby metallic lines dominate the dark side of the companion (which moves faster) and Balmer lines trace its bright (slower) side. Further, using a physical model of an irradiated star to fit simultaneously the two-species RVCs and the three-band light curves, we find a center-of-mass velocity of K_2 = 412.3 ± 5.0 km s⁻¹ and an orbital inclination i = 63.9°. Our model is able to reproduce the observed fluxes and velocities without invoking irradiation by an extended source. We measure masses of M_1 = 2.27 M_⊙ and M_2 = 0.33 M_⊙ for the neutron star and the companion star, respectively. If confirmed, such a massive pulsar would rule out some of the proposed equations of state for the neutron star interior.

235 citations


BookDOI
11 Nov 2018
TL;DR: Providing the core building blocks of conformance checking and describing its main applications, this book mainly addresses students specializing in business process management, researchers entering process mining and conformance Checking for the first time, and advanced professionals whose work involves process evaluation, modelling and optimization.
Abstract: This book introduces readers to the field of conformance checking as a whole and outlines the fundamental relation between modelled and recorded behaviour. Conformance checking interrelates the modelled and recorded behaviour of a given process and provides techniques and methods for comparing and analysing observed instances of a process in the presence of a model, independent of the model's origin. Its goal is to provide an overview of the essential techniques and methods in this field at an intuitive level, together with precise formalisations of its underlying principles. The book is divided into three parts that cover different perspectives of the field of conformance checking. Part I presents a comprehensive yet accessible overview of the essential concepts used to interrelate modelled and recorded behaviour. It also serves as a reference for assessing how conformance checking efforts could be applied in specific domains. Next, Part II provides readers with detailed insights into algorithms for conformance checking, including the most commonly used formal notions and their instantiation for specific analysis questions. Lastly, Part III highlights applications that help to make sense of conformance checking results, thereby providing a necessary next step to increase the value of a given process model. They help to interpret the outcomes of conformance checking and incorporate them by means of enhancement and repair techniques. Providing the core building blocks of conformance checking and describing its main applications, this book mainly addresses students specializing in business process management, researchers entering process mining and conformance checking for the first time, and advanced professionals whose work involves process evaluation, modelling and optimization.
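As a toy illustration of interrelating modelled and recorded behaviour (far simpler than the alignment techniques the book formalizes), consider checking a recorded trace against a hypothetical directly-follows model:

```python
# Hypothetical model: the pairs of activities allowed to follow each other.
MODEL = {("register", "check"), ("check", "decide"),
         ("decide", "pay"), ("decide", "reject")}

def naive_fitness(trace):
    """Fraction of a recorded trace's steps that conform to MODEL,
    plus the deviating steps (a footprint-style toy, not an alignment)."""
    moves = list(zip(trace, trace[1:]))
    deviations = [m for m in moves if m not in MODEL]
    return 1 - len(deviations) / len(moves), deviations

print(naive_fitness(["register", "check", "pay"]))
# (0.5, [('check', 'pay')]) -> the recorded behaviour skipped 'decide'
```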

Journal ArticleDOI
TL;DR: The benefits and limitations of 3D food printing were critically reviewed from a different perspective while providing ample mechanisms to overcome those barriers.
Abstract: Background Digitalizing food using 3-Dimensional (3D) printing is an incipient sector that has a great potential of producing customized food with complex geometries, tailored texture and nutritional content. Yet, its application is still limited and the process utility is under the investigation of many researchers. Scope and approach The main objective of this review was to analyze and compare published articles pertaining to 3D food printing, to determine how to reach compatibility between the huge variety of food ingredients and their corresponding best printing parameters. Different from previously published reviews in the same journal by Lipton et al. (2015) and Liu et al. (2017), this review focuses in depth on optimizing extrusion-based food printing, which supports the widest array of foods and maintains numerous shapes and textures. The benefits and limitations of 3D food printing were critically reviewed from a different perspective while providing ample mechanisms to overcome those barriers. Key findings and conclusions Four main obstacles hamper the printing process: ordinance and guidelines, food shelf life, ingredient restrictions and post-processing. Unity and integrity between material properties and process parameters are the key to the best end product. For each group, specific criteria should be monitored: the rheological, textural, physicochemical and sensorial properties of the material itself, in accordance with the process parameters of nozzle diameter, nozzle height, printing speed and temperature of printing. It is hoped that this paper will unlock further research on investigating a wider range of food printing ingredients and their influence on customer acceptability.

Journal ArticleDOI
31 Oct 2018-Nature
TL;DR: This study reveals a type of mechanical behaviour that enables epithelial sheets to sustain extreme stretching under constant tension, and shows that in epithelial cells this instability is triggered by a stretch-induced dilution of the actin cortex, and is rescued by the intermediate filament network.
Abstract: Fundamental biological processes are carried out by curved epithelial sheets that enclose a pressurized lumen. How these sheets develop and withstand three-dimensional deformations has remained unclear. Here we combine measurements of epithelial tension and shape with theoretical modelling to show that epithelial sheets are active superelastic materials. We produce arrays of epithelial domes with controlled geometry. Quantification of luminal pressure and epithelial tension reveals a tensional plateau over several-fold areal strains. These extreme strains in the tissue are accommodated by highly heterogeneous strains at a cellular level, in seeming contradiction to the measured tensional uniformity. This phenomenon is reminiscent of superelasticity, a behaviour that is generally attributed to microscopic material instabilities in metal alloys. We show that in epithelial cells this instability is triggered by a stretch-induced dilution of the actin cortex, and is rescued by the intermediate filament network. Our study reveals a type of mechanical behaviour—which we term active superelasticity—that enables epithelial sheets to sustain extreme stretching under constant tension.

Proceedings Article
03 Jul 2018
TL;DR: The 35th International Conference on Machine Learning (ICML) was held at Stockholmsmässan, Sweden, from 10 to 15 July 2018, as discussed by the authors.
Abstract: Paper presented at the 35th International Conference on Machine Learning, held at Stockholmsmässan, Sweden, from 10 to 15 July 2018.

Journal ArticleDOI
Shivani Bhandari1,2,3, Evan Keane3  +188 more · Institutions (36)
TL;DR: In this article, the authors report the discovery of four fast radio bursts (FRBs) in the ongoing SUrvey for Pulsars and Extragalactic Radio Bursts at the Parkes Radio Telescope.
Abstract: We report the discovery of four Fast Radio Bursts (FRBs) in the ongoing SUrvey for Pulsars and Extragalactic Radio Bursts at the Parkes Radio Telescope: FRBs 150610, 151206, 151230 and 160102. Our real-time discoveries have enabled us to conduct extensive, rapid multimessenger follow-up at 12 major facilities sensitive to radio, optical, X-ray, gamma-ray photons and neutrinos on time-scales ranging from an hour to a few months post-burst. No counterparts to the FRBs were found and we provide upper limits on afterglow luminosities. None of the FRBs were seen to repeat. Formal fits to all FRBs show hints of scattering while their intrinsic widths are unresolved in time. FRB 151206 is at low Galactic latitude, FRB 151230 shows a sharp spectral cut-off, and FRB 160102 has the highest dispersion measure (DM = 2596.1 ± 0.3 pc cm⁻³) detected to date. Three of the FRBs have high dispersion measures (DM > 1500 pc cm⁻³), favouring a scenario where the DM is dominated by contributions from the intergalactic medium. The slope of the Parkes FRB source counts distribution with fluences >2 Jy ms is α = −2.2^{+0.6}_{−1.2} and still consistent with a Euclidean distribution (α = −3/2). We also find that the all-sky rate is 1.7^{+1.5}_{−0.9} × 10³ FRBs/(4π sr)/day above ∼2 Jy ms and there is currently no strong evidence for a latitude-dependent FRB sky rate.

Journal ArticleDOI
TL;DR: MCCNN as mentioned in this paper represents the convolution kernel itself as a multilayer perceptron, phrasing convolution as a Monte Carlo integration problem, using this notion to combine information from multiple samplings at different levels, and using Poisson disk sampling as a scalable means of hierarchical point cloud learning.
Abstract: Deep learning systems extensively use convolution operations to process input data. Though convolution is clearly defined for structured data such as 2D images or 3D volumes, this is not true for other data types such as sparse point clouds. Previous techniques have developed approximations to convolutions for restricted conditions. Unfortunately, their applicability is limited and they cannot be used for general point clouds. We propose an efficient and effective method to learn convolutions for non-uniformly sampled point clouds, as they are obtained with modern acquisition techniques. Learning is enabled by four key novelties: first, representing the convolution kernel itself as a multilayer perceptron; second, phrasing convolution as a Monte Carlo integration problem; third, using this notion to combine information from multiple samplings at different levels; and fourth, using Poisson disk sampling as a scalable means of hierarchical point cloud learning. The key idea across all these contributions is to guarantee adequate consideration of the underlying non-uniform sample distribution function from a Monte Carlo perspective. To make the proposed concepts applicable to real-world tasks, we furthermore propose an efficient implementation which significantly reduces the GPU memory required during the training process. By employing our method in hierarchical network architectures we can outperform most of the state-of-the-art networks on established point cloud segmentation, classification and normal estimation benchmarks. Furthermore, in contrast to most existing approaches, we also demonstrate the robustness of our method with respect to sampling variations, even when training with uniformly sampled data only. To support the direct application of these concepts, we provide a ready-to-use TensorFlow implementation of these layers at https://github.com/viscom-ulm/MCCNN.
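A minimal NumPy sketch of the Monte Carlo view of one such convolution, evaluated at a single output point; the neighbourhood rule, the normalisation, and the stand-in kernel function are our illustrative choices, not the paper's implementation:

```python
import numpy as np

def mc_point_conv(points, feats, centre, radius, kernel_mlp, density):
    """Monte Carlo estimate of a point-cloud convolution at one centre.

    The kernel is a learned function of the relative offset (here a toy
    stand-in for the MLP), and each neighbour is weighted by 1/density
    to compensate for non-uniform sampling.
    """
    d = np.linalg.norm(points - centre, axis=1)
    nbrs = np.where(d <= radius)[0]
    if len(nbrs) == 0:
        return np.zeros(feats.shape[1])
    offsets = (points[nbrs] - centre) / radius           # normalised offsets
    w = kernel_mlp(offsets)                              # (k,) kernel values
    return (w / density[nbrs]) @ feats[nbrs] / len(nbrs)

# Toy "MLP": any function of the offset works for the sketch.
kernel_mlp = lambda off: np.maximum(0, 1 - np.linalg.norm(off, axis=1))
pts = np.random.default_rng(1).uniform(size=(100, 3))
out = mc_point_conv(pts, np.ones((100, 4)), pts[0], 0.2, kernel_mlp, np.full(100, 1.0))
print(out.shape)  # (4,)
```

The 1/density weighting is the Monte Carlo importance correction: without it, densely scanned regions would dominate the integral purely because they contribute more samples.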

Journal ArticleDOI
TL;DR: This tutorial paper reviews several machine learning concepts tailored to the optical networking industry and discusses algorithm choices, data and model management strategies, and integration into existing network control and management tools.
Abstract: Networks are complex interacting systems involving cloud operations, core and metro transport, and mobile connectivity all the way to video streaming and similar user applications. With localized and highly engineered operational tools, it is typical of these networks to take days to weeks for any changes, upgrades, or service deployments to take effect. Machine learning, a sub-domain of artificial intelligence, is highly suitable for complex system representation. In this tutorial paper, we review several machine learning concepts tailored to the optical networking industry and discuss algorithm choices, data and model management strategies, and integration into existing network control and management tools. We then describe four networking case studies in detail, covering predictive maintenance, virtual network topology management, capacity optimization, and optical spectral analysis.

Journal ArticleDOI
TL;DR: It is shown that higher cubic phase content leads to better translucency and stability in water steam, but at the expense of strength and toughness.

Journal ArticleDOI
17 Jul 2018
TL;DR: In this paper, a greedy stepwise algorithm for selection of balances or microbial signatures is presented. The authors acknowledge the compositional nature of the microbiome and the fact that it carries relative information: instead of defining a microbial signature as a linear combination in real space corresponding to the abundances of a group of taxa, they consider microbial signatures given by the geometric means of data from two groups of taxa whose relative abundances, or balance, are associated with the response variable of interest.
Abstract: High-throughput sequencing technologies have revolutionized microbiome research by allowing the relative quantification of microbiome composition and function in different environments. In this work we focus on the identification of microbial signatures, groups of microbial taxa that are predictive of a phenotype of interest. We do this by acknowledging the compositional nature of the microbiome and the fact that it carries relative information. Thus, instead of defining a microbial signature as a linear combination in real space corresponding to the abundances of a group of taxa, we consider microbial signatures given by the geometric means of data from two groups of taxa whose relative abundances, or balance, are associated with the response variable of interest. In this work we present selbal, a greedy stepwise algorithm for selection of balances or microbial signatures that preserves the principles of compositional data analysis. We illustrate the algorithm with 16S rRNA abundance data from a Crohn's microbiome study and an HIV microbiome study. IMPORTANCE We propose a new algorithm for the identification of microbial signatures. These microbial signatures can be used for diagnosis, prognosis, or prediction of therapeutic response based on an individual's specific microbiota.
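A small sketch of the balance score such signatures are built from, in our own notation; the published selbal procedure adds a cross-validated greedy search on top of this:

```python
import numpy as np

def balance(X, num, den):
    """Balance between two groups of taxa (columns of X, strictly positive):
    a scaled log of the ratio of the geometric means of the `num` and
    `den` groups, evaluated per sample (row)."""
    p, q = len(num), len(den)
    logX = np.log(X)
    return np.sqrt(p * q / (p + q)) * (
        logX[:, num].mean(axis=1) - logX[:, den].mean(axis=1))

# Greedy flavour (illustrative): start from the single best pair of taxa,
# then repeatedly try adding one taxon to either side of the balance,
# keeping the addition that most improves association with the response.
```

Because the score depends only on log-ratios of abundances, it is invariant to the per-sample scaling that makes raw microbiome counts purely relative.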

Journal ArticleDOI
TL;DR: In this paper, the structure and dynamics of one-dimensional binary Bose gases forming quantum droplets are studied by solving the corresponding amended Gross-Pitaevskii equation, and two physically different regimes are identified, corresponding to small droplets of an approximately Gaussian shape and large ''puddles'' with a broad flat-top plateau.
Abstract: The structure and dynamics of one-dimensional binary Bose gases forming quantum droplets are studied by solving the corresponding amended Gross-Pitaevskii equation. Two physically different regimes are identified, corresponding to small droplets of an approximately Gaussian shape and large "puddles" with a broad flat-top plateau. Small droplets collide quasielastically, featuring soliton-like behavior. On the other hand, large colliding droplets may merge or suffer fragmentation, depending on their relative velocity. The frequency of a breathing excited state of droplets, as predicted by the dynamical variational approximation based on the Gaussian ansatz, is found to be in good agreement with numerical results. Finally, the stability diagram for a single droplet with respect to shape excitations with a given wave number is drawn, being consistent with preservation of the Weber number for large droplets.
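For reference, the dimensionless 1D amended Gross-Pitaevskii equation commonly used for such droplets combines cubic mean-field repulsion with a quadratic, attractive Lee-Huang-Yang correction; this is our reconstruction of the standard rescaled form and should be checked against the paper:

```latex
% 1D amended GPE in rescaled units (hedged reconstruction, our notation):
i\,\partial_t \psi = -\tfrac{1}{2}\,\partial_x^2 \psi + |\psi|^2\psi - |\psi|\psi
```

The competition between the two nonlinear terms is what sets the flat-top "puddle" density: beyond a certain amplitude, adding atoms widens the droplet instead of raising its peak.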

Journal ArticleDOI
TL;DR: A comparison of the performance of all the GIMs created in the frame of the IGS is presented; the main conclusion is the consistency of the results across so many different GIM techniques and implementations.
Abstract: In the context of the International GNSS Service (IGS), several IGS Ionosphere Associated Analysis Centers have developed different techniques to provide global ionospheric maps (GIMs) of vertical total electron content (VTEC) since 1998. In this paper we present a comparison of the performance of all the GIMs created in the frame of the IGS. Indeed, we compare the classical ones (from the ionospheric analysis centers CODE, ESA/ESOC, JPL and UPC) with the new ones (NRCAN, CAS, WHU). To assess their quality in fair and completely independent ways, two assessment methods are used: a direct comparison to altimeter data (VTEC-altimeter) and to the difference of slant total electron content (STEC) observed in independent ground reference stations (dSTEC-GPS). The main conclusion of this study, performed during one solar cycle, is the consistency of the results across so many different GIM techniques and implementations.
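A sketch of the dSTEC-GPS idea in Python; the convention of differencing against the highest-elevation epoch of a continuous arc is our reading of the method, and the code is illustrative rather than the IGS assessment software:

```python
import numpy as np

def dstec_rms(stec_obs, stec_gim, elevation):
    """RMS of dSTEC error over one continuous satellite arc (TECU).

    Both series are differenced against the epoch of highest elevation
    (assumed reference convention), so constant arc biases cancel, and
    the observed and GIM-derived differences are then compared.
    """
    ref = int(np.argmax(elevation))
    err = (stec_obs - stec_obs[ref]) - (stec_gim - stec_gim[ref])
    return float(np.sqrt(np.mean(err ** 2)))
```

Together with the VTEC-altimeter comparison over the oceans, this gives a quality measure that does not depend on any of the GIMs under test.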

Journal ArticleDOI
TL;DR: Vibrational spectra revealed that the amylose-amylopectin skeleton present in the raw potato starch was missing in the potato powder but could be fully recovered upon water addition when the potato puree was prepared, indicating the important structural role of water molecules in the recovery of the initial molecular conformation.

Journal ArticleDOI
TL;DR: An analysis of self-organized network management, with an end-to-end perspective of the network, to survey how network management can significantly benefit from ML solutions.

Journal ArticleDOI
TL;DR: In this paper, a decision-making problem for a new aggregator type called Smart Energy Service Provider (SESP) to schedule flexible energy resources is formulated as an MILP problem, and its performance is tested by simulating test cases in a local market.

Journal ArticleDOI
TL;DR: In this paper, the authors examined the possible multi-microgrid architectures to form a grid of microgrids and compared the different architectures in terms of cost, scalability, protection, reliability, stability, communications and business models.

Proceedings ArticleDOI
01 Jun 2018
TL;DR: In this article, the authors propose a novel convolutional neural network (CNN) architecture which automatically discovers latent domains in visual datasets and exploits this information to learn robust target classifiers.
Abstract: Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.
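A hedged PyTorch sketch of the second component in our own simplified form: each sample is normalised with statistics mixed according to its predicted latent-domain probabilities (the paper's actual layers and their placement in the network may differ):

```python
import torch

def weighted_domain_norm(x, probs, mu, var, eps=1e-5):
    """Normalise each sample with statistics mixed by its latent-domain
    probabilities (simplified sketch of a domain-alignment layer).

    x: (B, C) features; probs: (B, D) soft domain assignments from the
    side branch; mu, var: (D, C) per-domain feature statistics.
    """
    m = probs @ mu                    # (B, C) expected mean for each sample
    v = probs @ var                   # (B, C) expected variance
    return (x - m) / torch.sqrt(v + eps)

x = torch.randn(8, 16)
probs = torch.softmax(torch.randn(8, 3), dim=1)   # 3 latent domains
out = weighted_domain_norm(x, probs, torch.zeros(3, 16), torch.ones(3, 16))
```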

Journal ArticleDOI
TL;DR: The American Association of Physicists in Medicine Task Group #268 developed guidelines to improve reporting of Monte Carlo studies in medical physics research, which include a checklist of the items that should be included in the Methods, Results, and Discussion sections of manuscripts submitted for peer-review.
Abstract: Studies involving Monte Carlo simulations are common in both diagnostic and therapy medical physics research, as well as other fields of basic and applied science. As with all experimental studies, the conditions and parameters used for Monte Carlo simulations impact their scope, validity, limitations, and generalizability. Unfortunately, many published peer-reviewed articles involving Monte Carlo simulations do not provide the level of detail needed for the reader to be able to properly assess the quality of the simulations. The American Association of Physicists in Medicine Task Group #268 developed guidelines to improve reporting of Monte Carlo studies in medical physics research. By following these guidelines, manuscripts submitted for peer-review will include a level of relevant detail that will increase the transparency, the ability to reproduce results, and the overall scientific value of these studies. The guidelines include a checklist of the items that should be included in the Methods, Results, and Discussion sections of manuscripts submitted for peer-review. These guidelines do not attempt to replace the journal reviewer, but rather to be a tool during the writing and review process. Given the varied nature of Monte Carlo studies, it is up to the authors and the reviewers to use this checklist appropriately, being conscious of how the different items apply to each particular scenario. It is envisioned that this list will be useful both for authors and for reviewers, to help ensure the adequate description of Monte Carlo studies in the medical physics literature.

Journal ArticleDOI
TL;DR: In this paper, the fatigue response of PLA parts manufactured through fused filament fabrication (FFF) was analyzed through an L27 Taguchi experimental design, run for two different infill patterns: linear and honeycomb.

Journal ArticleDOI
TL;DR: This paper is, to the best of the authors' knowledge, the first study to propose a deep learning method for detecting FOG episodes in PD patients, using a novel spectral data representation strategy which considers information from both the previous and current signal windows.
Abstract: Among Parkinson's disease (PD) motor symptoms, freezing of gait (FOG) may be the most incapacitating. FOG episodes may result in falls and reduce patients' quality of life. Accurate assessment of FOG would provide objective information to neurologists about the patient's condition and the symptoms' characteristics, while it could enable non-pharmacologic support based on rhythmic cues. This paper is, to the best of our knowledge, the first study to propose a deep learning method for detecting FOG episodes in PD patients. This model is trained using a novel spectral data representation strategy which considers information from both the previous and current signal windows. Our approach was evaluated using data collected by a waist-placed inertial measurement unit from 21 PD patients who manifested FOG episodes. These data were also employed to reproduce the state-of-the-art methodologies, which served to perform a comparative study with our FOG monitoring system. The results of this study demonstrate that our approach successfully outperforms the state-of-the-art methods for automatic FOG detection. Precisely, the deep learning model achieved 90% for the geometric mean between sensitivity and specificity, whereas the state-of-the-art methods were unable to surpass 83% for the same metric.
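A small sketch of that data representation under our own assumptions (the window length, Hann taper, and single-axis input are illustrative choices, not the paper's exact pipeline):

```python
import numpy as np

def spectral_input(signal, t, win=128):
    """Stack the spectra of the previous and current windows ending at
    sample t, giving a 2-channel input for a convolutional classifier."""
    taper = np.hanning(win)
    prev = signal[t - 2 * win : t - win]
    curr = signal[t - win : t]
    spec = lambda w: np.abs(np.fft.rfft(w * taper))
    return np.stack([spec(prev), spec(curr)])   # shape (2, win//2 + 1)

x = np.random.default_rng(0).normal(size=4096)  # one IMU axis (toy data)
print(spectral_input(x, t=1024).shape)          # (2, 65)
```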

Journal ArticleDOI
TL;DR: In this article, the authors explored societies' abilities to adapt to twenty-first-century sea-level rise by integrating perspectives from coastal engineering, economics, finance and social sciences, and provided a comparative analysis of a set of cases that vary in terms of technological limits, economic and financial barriers to adaptation and social conflicts.
Abstract: Against the background of potentially substantial sea-level rise, one important question is to what extent are coastal societies able to adapt? This question is often answered in the negative by referring to sinking islands and submerged megacities. Although these risks are real, the picture is incomplete because it lacks consideration of adaptation. This Perspective explores societies’ abilities to adapt to twenty-first-century sea-level rise by integrating perspectives from coastal engineering, economics, finance and social sciences, and provides a comparative analysis of a set of cases that vary in terms of technological limits, economic and financial barriers to adaptation and social conflicts.