Showing papers by "Polytechnic University of Catalonia" published in 2017


Journal ArticleDOI
TL;DR: The purpose of this review is to alert investigators to the dangers inherent in ignoring the compositional nature of the data, and point out that HTS datasets derived from microbiome studies can and should be treated as compositions at all stages of analysis.
Abstract: Datasets collected by high-throughput sequencing (HTS) of 16S rRNA gene amplimers, metagenomes or metatranscriptomes are commonplace and being used to study human disease states, ecological differences between sites, and the built environment. There is increasing awareness that microbiome datasets generated by HTS are compositional because they have an arbitrary total imposed by the instrument. However, many investigators are either unaware of this or assume specific properties of the compositional data. The purpose of this review is to alert investigators to the dangers inherent in ignoring the compositional nature of the data, and point out that HTS datasets derived from microbiome studies can and should be treated as compositions at all stages of analysis. We briefly introduce compositional data, illustrate the pathologies that occur when compositional data are analyzed inappropriately, and finally give guidance and point to resources and examples for the analysis of microbiome datasets using compositional data analysis.

1,511 citations
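
As a minimal illustration of what treating HTS counts as compositions means in practice (a sketch of mine, not code from the review), the centered log-ratio (CLR) transform, a standard compositional data analysis tool, re-expresses each sample relative to its geometric mean after a simple pseudocount for zeros:

```python
import numpy as np

def clr_transform(counts, pseudocount=0.5):
    """Centered log-ratio transform of a samples-by-features count matrix.

    Each row is closed to proportions, zeros are handled with a simple
    pseudocount, and values are expressed relative to the per-sample
    geometric mean, as is standard in compositional data analysis.
    """
    x = np.asarray(counts, dtype=float) + pseudocount    # avoid log(0)
    props = x / x.sum(axis=1, keepdims=True)              # close to the simplex
    log_props = np.log(props)
    geometric_mean = log_props.mean(axis=1, keepdims=True)
    return log_props - geometric_mean                      # CLR coordinates

# Example: three samples, four taxa, arbitrary sequencing depths.
counts = np.array([[120, 30, 0, 50],
                   [800, 200, 10, 400],
                   [ 60, 15, 1, 25]])
print(clr_transform(counts))
```

Distances, ordinations and differential-abundance tests can then be run on the CLR coordinates instead of on raw or rarefied counts.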


Posted Content
TL;DR: Graph Attention Networks (GATs) as discussed by the authors leverage masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
Abstract: We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighborhoods' features, we enable (implicitly) specifying different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation (such as inversion) or depending on knowing the graph structure upfront. In this way, we address several key challenges of spectral-based graph neural networks simultaneously, and make our model readily applicable to inductive as well as transductive problems. Our GAT models have achieved or matched state-of-the-art results across four established transductive and inductive graph benchmarks: the Cora, Citeseer and Pubmed citation network datasets, as well as a protein-protein interaction dataset (wherein test graphs remain unseen during training).

1,016 citations
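
For context, the masked self-attention the abstract refers to is usually written as follows; this is the commonly cited GAT formulation rather than an equation reproduced from this listing:

$$
e_{ij} = \mathrm{LeakyReLU}\!\left(\mathbf{a}^{\top}\!\left[\mathbf{W}\mathbf{h}_i \,\|\, \mathbf{W}\mathbf{h}_j\right]\right), \qquad
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k \in \mathcal{N}_i} \exp(e_{ik})}, \qquad
\mathbf{h}_i' = \sigma\!\Big(\sum_{j \in \mathcal{N}_i} \alpha_{ij}\,\mathbf{W}\mathbf{h}_j\Big),
$$

where the softmax runs only over the neighborhood $\mathcal{N}_i$ of node $i$ (the "mask"), so no costly global matrix operation or prior knowledge of the full graph structure is required.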


Proceedings ArticleDOI
28 Mar 2017
TL;DR: This work proposes the use of generative adversarial networks for speech enhancement, operating at the waveform level, training the model end-to-end, and incorporating 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them.
Abstract: Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm the effectiveness of it. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance.

1,001 citations


Journal ArticleDOI
TL;DR: In this paper, the key fields within structured light are reviewed from the perspective of experts in those areas, providing insight into the current state and the challenges their respective fields face, as well as the exciting prospects for the future that are yet to be realized.
Abstract: Structured light refers to the generation and application of custom light fields. As the tools and technology to create and detect structured light have evolved, steadily the applications have begun to emerge. This roadmap touches on the key fields within structured light from the perspective of experts in those areas, providing insight into the current state and the challenges their respective fields face. Collectively the roadmap outlines the venerable nature of structured light research and the exciting prospects for the future that are yet to be realized.

639 citations


Journal ArticleDOI
TL;DR: This paper proposes a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG), and develops a fast normalized cuts algorithm and proposes a high-performance hierarchical segmenter that makes effective use of multiscale information.
Abstract: We propose a unified approach for bottom-up hierarchical image segmentation and object proposal generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object proposals by exploring efficiently their combinatorial space. We also present Single-scale Combinatorial Grouping (SCG), a faster version of MCG that produces competitive proposals in under five seconds per image. We conduct an extensive and comprehensive empirical validation on the BSDS500, SegVOC12, SBD, and COCO datasets, showing that MCG produces state-of-the-art contours, hierarchical regions, and object proposals.

597 citations


Journal ArticleDOI
TL;DR: A review of current studies and research works in agriculture which employ the recent practice of big data analysis, showing that the availability of hardware and software, techniques and methods for big data analysis, as well as the increasing openness of big data sources, shall encourage more academic research, public sector initiatives and business ventures in the agricultural sector.

547 citations


Journal ArticleDOI
TL;DR: In this article, the shape and size of catalyst particles and the interface between different components of heterogeneous catalysts at the nanometer level can radically alter their performances, particularly for CeO2-based catalysts, where the precise control of surface atomic arrangements can modify the reactivity of Ce4+/Ce3+ ions, changing the oxygen release/uptake characteristics of ceria.
Abstract: Engineering the shape and size of catalyst particles and the interface between different components of heterogeneous catalysts at the nanometer level can radically alter their performances. This is particularly true with CeO2-based catalysts, where the precise control of surface atomic arrangements can modify the reactivity of Ce4+/Ce3+ ions, changing the oxygen release/uptake characteristics of ceria, which, in turn, strongly affects catalytic performance in several reactions like CO, soot, and VOC oxidation, WGS, hydrogenation, acid–base reactions, and so on. Despite the fact that many of these catalysts are polycrystalline with rather ill-defined morphologies, experimental and theoretical studies on well-defined nanocrystals have clearly established that the exposure of specific facets can increase/decrease surface oxygen reactivity and metal–support interaction (for supported metal nanoparticles), consequently affecting catalytic reactions. Here, we want to address the most recent developments in this...

497 citations


Journal ArticleDOI
TL;DR: A wide range of ligands are evaluated for their binding affinity towards the RGD-binding integrins αvβ3, αvβ5, αvβ6, αvβ8, α5β1 and αIIbβ3, using a homogenous ELISA-like solid-phase binding assay.
Abstract: Integrins, a diverse class of heterodimeric cell surface receptors, are key regulators of cell structure and behaviour, affecting cell morphology, proliferation, survival and differentiation. Consequently, mutations in specific integrins, or their deregulated expression, are associated with a variety of diseases. In the last decades, many integrin-specific ligands have been developed and used for modulation of integrin function in medical as well as biophysical studies. The IC50-values reported for these ligands strongly vary and are measured using different cell-based and cell-free systems. A systematic comparison of these values is of high importance for selecting the optimal ligands for given applications. In this study, we evaluate a wide range of ligands for their binding affinity towards the RGD-binding integrins αvβ3, αvβ5, αvβ6, αvβ8, α5β1, αIIbβ3, using homogenous ELISA-like solid phase binding assay.

396 citations


Proceedings ArticleDOI
21 Jul 2017
TL;DR: This paper introduces Recipe1M, a new large-scale, structured corpus of over 1m cooking recipes and 800k food images, and demonstrates that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic.
Abstract: In this paper, we introduce Recipe1M, a new large-scale, structured corpus of over 1m cooking recipes and 800k food images. As the largest publicly available collection of recipe data, Recipe1M affords the ability to train high-capacity models on aligned, multi-modal data. Using these data, we train a neural network to find a joint embedding of recipes and images that yields impressive results on an image-recipe retrieval task. Additionally, we demonstrate that regularization via the addition of a high-level classification objective both improves retrieval performance to rival that of humans and enables semantic vector arithmetic. We postulate that these embeddings will provide a basis for further exploration of the Recipe1M dataset and food and cooking in general. Code, data and models are publicly available.

346 citations


Proceedings ArticleDOI
01 Oct 2017
TL;DR: The authors align the learned representations by embedding, in any given network, specific Domain Alignment Layers designed to match the source and target feature distributions to a reference one; the method automatically learns the degree of feature alignment required at different levels of the deep network.
Abstract: Classifiers trained on given databases perform poorly when tested on data acquired in different settings. This is explained in domain adaptation through a shift among distributions of the source and target domains. Attempts to align them have traditionally resulted in works reducing the domain shift by introducing appropriate loss terms, measuring the discrepancies between source and target distributions, in the objective function. Here we take a different route, proposing to align the learned representations by embedding in any given network specific Domain Alignment Layers, designed to match the source and target feature distributions to a reference one. Opposite to previous works which define a priori in which layers adaptation should be performed, our method is able to automatically learn the degree of feature alignment required at different levels of the deep network. Thorough experiments on different public benchmarks, in the unsupervised setting, confirm the power of our approach.

284 citations


Journal ArticleDOI
06 Sep 2017
TL;DR: In this paper, the authors explore the reasons for the lack of adoption and posit that the rise of two recent paradigms: Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of AI techniques in the context of network operation and control.
Abstract: The research community has considered in the past the application of Artificial Intelligence (AI) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this paper, we explore the reasons for the lack of adoption and posit that the rise of two recent paradigms: Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of AI techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA and AI, and provide use-cases that illustrate its applicability and benefits. We also present simple experimental results that support, for some relevant use-cases, its feasibility. We refer to this new paradigm as Knowledge-Defined Networking (KDN).

Journal ArticleDOI
04 Oct 2017-Gels
TL;DR: The innate ability of poly(N-isopropylacrylamide) thermo-responsive hydrogels to copolymerize and to graft synthetic polymers and biomolecules has expedited the large number of papers published in the last decade, especially in the biomedical field.
Abstract: The innate ability of poly(N-isopropylacrylamide) (PNIPAAm) thermo-responsive hydrogel to copolymerize and to graft synthetic polymers and biomolecules, in conjunction with the highly controlled methods of radical polymerization which are now available, has expedited the large number of papers published in the last decade, especially in the biomedical field. Therefore, PNIPAAm-based hydrogels are extensively investigated for applications in the controlled delivery of active molecules, in self-healing materials, tissue engineering, regenerative medicine, or in the smart encapsulation of cells. The most promising polymers for biodegradability enhancement of PNIPAAm hydrogels are probably poly(ethylene glycol) (PEG) and/or poly(ε-caprolactone) (PCL), whereas the biocompatibility is mostly achieved with biopolymers. Ultimately, advances in three-dimensional bioprinting technology would contribute to the design of new devices and medical tools with thermal stimuli response needs, fabricated with PNIPAAm hydrogels.

Journal ArticleDOI
TL;DR: The BIG IoT (Bridging the Interoperability Gap of the IoT) project aims to ignite an IoT ecosystem as part of the European Platforms Initiative and employs five interoperability patterns that enable cross-platform interoperability and can help establish successful IoT ecosystems.
Abstract: Today, the Internet of Things (IoT) comprises vertically oriented platforms for things. Developers who want to use them need to negotiate access individually and adapt to the platform-specific API and information models. Having to perform these actions for each platform often outweighs the possible gains from adapting applications to multiple platforms. This fragmentation of the IoT and the missing interoperability result in high entry barriers for developers and prevent the emergence of broadly accepted IoT ecosystems. The BIG IoT (Bridging the Interoperability Gap of the IoT) project aims to ignite an IoT ecosystem as part of the European Platforms Initiative. As part of the project, researchers have devised an IoT ecosystem architecture. It employs five interoperability patterns that enable cross-platform interoperability and can help establish successful IoT ecosystems.

Proceedings ArticleDOI
01 Jan 2017
TL;DR: This paper presents a deep-learning based approach to solve the problem of classifying a dermoscopic image containing a skin lesion as malignant or benign, built around the VGGNet convolutional neural network architecture and uses the transfer learning paradigm.
Abstract: The recent emergence of deep learning methods for medical image analysis has enabled the development of intelligent medical imaging-based diagnosis systems that can assist the human expert in making better decisions about a patient's health. In this paper we focus on the problem of skin lesion classification, particularly early melanoma detection, and present a deep-learning based approach to solve the problem of classifying a dermoscopic image containing a skin lesion as malignant or benign. The proposed solution is built around the VGGNet convolutional neural network architecture and uses the transfer learning paradigm. Experimental results are encouraging: on the ISIC Archive dataset, the proposed method achieves a sensitivity value of 78.66%, which is significantly higher than the current state of the art on that dataset.
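
A minimal sketch of the transfer-learning setup described above, written in Keras with an ImageNet-pretrained VGG16 backbone; the input size, classification head, optimizer and directory layout are placeholder choices of mine, not details taken from the paper.

```python
import tensorflow as tf

# Frozen VGG16 backbone pretrained on ImageNet (transfer learning).
backbone = tf.keras.applications.VGG16(weights="imagenet",
                                        include_top=False,
                                        input_shape=(224, 224, 3))
backbone.trainable = False

# Small binary head: malignant vs. benign.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.Recall(name="sensitivity")])

# Hypothetical directory layout: dermoscopy/{train,val}/{benign,malignant}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dermoscopy/train", image_size=(224, 224), batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dermoscopy/val", image_size=(224, 224), batch_size=32, label_mode="binary")

model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Only the added head is trained here; unfreezing the last convolutional block for fine-tuning is a common follow-up step once the head has converged.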

Journal ArticleDOI
TL;DR: In this article, the authors proposed a new methodology to calculate damage variables evolution; the proposed approach is based in the Lubliner/Lee/Fenves formulation and provides closed-form expressions of the compressive and tensile damage variables in terms of the corresponding strains.

Journal ArticleDOI
TL;DR: In this paper, a modular energy management system and its integration into a grid-connected battery-based microgrid are presented, where the power generation-side strategy is formulated as a general mixed-integer linear program by taking into account two stages for proper charging of the storage units.
Abstract: Microgrids are energy systems that aggregate distributed energy resources, loads, and power electronics devices in a stable and balanced way. They rely on energy management systems to schedule the distributed energy resources optimally. Conventionally, many scheduling problems have been solved by using complex algorithms that, even so, do not consider the operation of the distributed energy resources. This paper presents the modeling and design of a modular energy management system and its integration into a grid-connected battery-based microgrid. The scheduling model is a power generation-side strategy, formulated as a general mixed-integer linear program by taking into account two stages for proper charging of the storage units. This model is considered as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour-ahead forecast data. The operation of the microgrid is complemented with a supervisory control stage that compensates any mismatch between the offline scheduling process and the real-time microgrid operation. The proposal has been tested experimentally in a hybrid microgrid at the Microgrid Research Laboratory, Aalborg University.
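
A toy day-ahead scheduling problem in the spirit of the mixed-integer linear program described above, written with PuLP; prices, forecasts, battery parameters and variable names are invented for illustration and do not come from the paper.

```python
import pulp

T = 24                                    # hourly steps, day-ahead horizon
price = [0.10]*7 + [0.20]*12 + [0.10]*5   # grid price per kWh (assumed)
load  = [2.0]*T                           # kW demand forecast (assumed)
pv    = [0.0]*6 + [1.0, 2.0, 3.0, 4.0, 4.5, 5.0,
                   5.0, 4.5, 4.0, 3.0, 2.0, 1.0] + [0.0]*6  # kW PV forecast

cap, p_max, eff, soc0 = 10.0, 3.0, 0.95, 5.0   # battery kWh, kW, efficiency, initial SOC

prob = pulp.LpProblem("day_ahead_schedule", pulp.LpMinimize)
grid = pulp.LpVariable.dicts("grid_import", range(T), lowBound=0)
ch   = pulp.LpVariable.dicts("charge",      range(T), lowBound=0, upBound=p_max)
dis  = pulp.LpVariable.dicts("discharge",   range(T), lowBound=0, upBound=p_max)
soc  = pulp.LpVariable.dicts("soc",         range(T), lowBound=0, upBound=cap)
mode = pulp.LpVariable.dicts("charging",    range(T), cat="Binary")

prob += pulp.lpSum(price[t] * grid[t] for t in range(T))   # minimise energy cost

for t in range(T):
    # Power balance: PV + grid import + discharge covers load + charging.
    prob += pv[t] + grid[t] + dis[t] == load[t] + ch[t]
    # Binary mode variable forbids simultaneous charge and discharge.
    prob += ch[t]  <= p_max * mode[t]
    prob += dis[t] <= p_max * (1 - mode[t])
    # State-of-charge dynamics.
    prev = soc0 if t == 0 else soc[t - 1]
    prob += soc[t] == prev + eff * ch[t] - dis[t] / eff

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("daily cost:", pulp.value(prob.objective))
```

The paper's model additionally covers two-stage charging of the storage units and is paired with a supervisory controller that corrects forecast mismatches in real time; none of that is reflected in this sketch.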

Journal ArticleDOI
16 Oct 2017-Sensors
TL;DR: Analytical models are presented that allow the characterization of LoRaWAN end-device current consumption, lifetime and energy cost of data delivery, and the impact of relevant physical and Medium Access Control layer LoRaWAN parameters and mechanisms on energy performance.
Abstract: LoRaWAN is a flagship Low-Power Wide Area Network (LPWAN) technology that has attracted much attention from the community in recent years. Many LoRaWAN end-devices, such as sensors or actuators, are expected not to be powered by the electricity grid; therefore, it is crucial to investigate the energy consumption of LoRaWAN. However, published works have only focused on this topic to a limited extent. In this paper, we present analytical models that allow the characterization of LoRaWAN end-device current consumption, lifetime and energy cost of data delivery. The models, which have been derived based on measurements on a currently prevalent LoRaWAN hardware platform, allow us to quantify the impact of relevant physical and Medium Access Control (MAC) layer LoRaWAN parameters and mechanisms, as well as Bit Error Rate (BER) and collisions, on energy performance. Among others, evaluation results show that an appropriately configured LoRaWAN end-device platform powered by a battery of 2400 mAh can achieve a 1-year lifetime while sending one message every 5 min, and an asymptotic theoretical lifetime of 6 years for infrequent communication.
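
Lifetime figures of this kind follow from simple average-current arithmetic; the sketch below reproduces only the generic calculation, with placeholder current and timing values chosen by me rather than measured values from the paper.

```python
def average_current_ma(i_tx_ma, t_tx_s, i_rx_ma, t_rx_s, i_sleep_ma, period_s):
    """Average current of a duty-cycled end-device over one reporting period."""
    active_s = t_tx_s + t_rx_s
    charge_mas = (i_tx_ma * t_tx_s + i_rx_ma * t_rx_s
                  + i_sleep_ma * (period_s - active_s))
    return charge_mas / period_s

def lifetime_years(battery_mah, avg_ma):
    """Ideal battery lifetime, ignoring self-discharge and capacity derating."""
    return battery_mah / avg_ma / (24 * 365)

# Placeholder figures (typical orders of magnitude, not the paper's measurements):
# 40 mA during a 1.5 s transmission, 12 mA during 2 s of receive windows,
# 2 uA sleep current, one uplink every 5 minutes, 2400 mAh battery.
avg = average_current_ma(i_tx_ma=40.0, t_tx_s=1.5,
                         i_rx_ma=12.0, t_rx_s=2.0,
                         i_sleep_ma=0.002, period_s=300.0)
print(f"average current: {avg:.3f} mA, lifetime: {lifetime_years(2400, avg):.1f} years")
```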

Journal ArticleDOI
TL;DR: This approach combines the flexibility and simplicity of a one-diode model with the extended capacity of an exponentially weighted moving average (EWMA) control chart to detect incipient changes in a PV system and shows that the proposed approach successfully monitors the DC side of PV systems and detects temporary shading.

Journal ArticleDOI
TL;DR: A review of statistical and machine-learning data-based predictive models applied to dam safety analysis is presented in this article, and aspects to take into account when developing analyses of this kind, such as the selection of the input variables, their division into training and validation sets, and the error analysis, are discussed.
Abstract: Predictive models are an important element in dam safety analysis. They provide an estimate of the dam response faced with a given load combination, which can be compared with the actual measurements to draw conclusions about dam safety. In addition to numerical finite element models, statistical models based on monitoring data have been used for decades for this purpose. In particular, the hydrostatic-season-time method is fully implemented in engineering practice, although some limitations have been pointed out. In other fields of science, powerful tools such as neural networks and support vector machines have been developed, which make use of observed data for interpreting complex systems. This paper contains a review of statistical and machine-learning data-based predictive models which have been applied to dam safety analysis. Some aspects to take into account when developing analyses of this kind, such as the selection of the input variables, their division into training and validation sets, and the error analysis, are discussed. Most of the papers reviewed deal with one specific output variable of a given dam typology and the majority also lack enough validation data. As a consequence, although results are promising, there is a need for further validation and assessment of generalisation capability. Future research should also focus on the development of criteria for data pre-processing and model application.
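
As context for the hydrostatic-season-time (HST) method mentioned above, the sketch below fits a simple HST-style regression; the basis functions (a polynomial in reservoir level, annual harmonics and slow time terms) follow the common textbook form rather than any particular model in the reviewed papers, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
t_days = np.arange(n)                                   # time since first reading (days)
h = 0.7 + 0.2 * np.sin(2 * np.pi * t_days / 365) \
        + 0.02 * rng.standard_normal(n)                  # normalised reservoir level
season = 2 * np.pi * t_days / 365

# HST-style design matrix: hydrostatic polynomial + seasonal harmonics + time drift.
X = np.column_stack([
    h, h**2, h**3, h**4,                                 # hydrostatic component
    np.sin(season), np.cos(season),
    np.sin(2 * season), np.cos(2 * season),              # seasonal component
    t_days, np.log1p(t_days),                            # irreversible time effects
])

# Synthetic "displacement" measurements for the demo.
y = 3 * h + 1.5 * np.sin(season) + 0.001 * t_days + 0.05 * rng.standard_normal(n)

# Chronological split: older data for fitting, newest data for validation.
split = int(0.8 * n)
model = LinearRegression().fit(X[:split], y[:split])
residuals = y[split:] - model.predict(X[split:])
print("validation RMSE:", np.sqrt(np.mean(residuals**2)))
```

Monitoring large residuals on the validation window is the usual way such data-based models are turned into a dam-safety warning criterion.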

Journal ArticleDOI
TL;DR: In this paper, a Life Cycle Assessment (LCA) was carried out comparing a conventional wastewater treatment plant with two nature-based technologies (i.e. hybrid constructed wetland and high rate algal pond systems).

Journal ArticleDOI
Arnauld Albert, Michel André, M. Anghinolfi, Miguel Ardid, and 1,987 more authors from 227 institutions
TL;DR: In this paper, the authors search for high-energy neutrinos from the binary neutron star merger in the GeV-EeV energy range using the Antares, IceCube, and Pierre Auger Observatories.
Abstract: The Advanced LIGO and Advanced Virgo observatories recently discovered gravitational waves from a binary neutron star inspiral. A short gamma-ray burst (GRB) that followed the merger of this binary was also recorded by the Fermi Gamma-ray Burst Monitor (Fermi-GBM), and the Anti-Coincidence Shield for the Spectrometer for the International Gamma-Ray Astrophysics Laboratory (INTEGRAL), indicating particle acceleration by the source. The precise location of the event was determined by optical detections of emission following the merger. We searched for high-energy neutrinos from the merger in the GeV–EeV energy range using the Antares, IceCube, and Pierre Auger Observatories. No neutrinos directionally coincident with the source were detected within ±500 s around the merger time. Additionally, no MeV neutrino burst signal was detected coincident with the merger. We further carried out an extended search in the direction of the source for high-energy neutrinos within the 14 day period following the merger, but found no evidence of emission. We used these results to probe dissipation mechanisms in relativistic outflows driven by the binary neutron star merger. The non-detection is consistent with model predictions of short GRBs observed at a large off-axis angle.

Journal ArticleDOI
TL;DR: The measure proposed here can identify and quantify structural topological differences that have a practical impact on the information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components.
Abstract: Identifying and quantifying dissimilarities among graphs is a fundamental and challenging problem of practical importance in many fields of science. Current methods of network comparison are limited to extract only partial information or are computationally very demanding. Here we propose an efficient and precise measure for network comparison, which is based on quantifying differences among distance probability distributions extracted from the networks. Extensive experiments on synthetic and real-world networks show that this measure returns non-zero values only when the graphs are non-isomorphic. Most importantly, the measure proposed here can identify and quantify structural topological differences that have a practical impact on the information flow through the network, such as the presence or absence of critical links that connect or disconnect connected components. Identifying and quantifying dissimilarities among graphs is a problem of practical importance, but current approaches are either limited or computationally demanding. Here, the authors propose an efficiently computable measure for network comparison that can identify structural topological differences.
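
The paper's measure is built from distance probability distributions; as a much-simplified illustration (not the authors' exact definition, which combines further terms), one can compare two graphs through the Jensen-Shannon distance between their shortest-path-length distributions.

```python
import networkx as nx
import numpy as np
from scipy.spatial.distance import jensenshannon

def distance_distribution(G):
    """Probability distribution of shortest-path lengths in a graph."""
    n = G.number_of_nodes()
    counts = np.zeros(n, dtype=float)          # index d = path length
    for _, lengths in nx.shortest_path_length(G):
        for d in lengths.values():
            if d > 0:
                counts[d] += 1
    return counts / counts.sum()

def graph_dissimilarity(G1, G2):
    """Jensen-Shannon distance between the two distance distributions."""
    p, q = distance_distribution(G1), distance_distribution(G2)
    m = max(len(p), len(q))
    p = np.pad(p, (0, m - len(p)))
    q = np.pad(q, (0, m - len(q)))
    return jensenshannon(p, q, base=2)

ring = nx.cycle_graph(50)
small_world = nx.watts_strogatz_graph(50, 4, 0.3, seed=1)
print(graph_dissimilarity(ring, small_world))
```

Rewiring a few links of the ring shortens many paths at once, which is exactly the kind of structural change a distance-distribution comparison picks up.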

Journal ArticleDOI
TL;DR: In this paper, the authors propose an approach for analyzing the dynamic effects of virtual inertia in two-area AC/DC interconnected AGC power systems, and also account for the effects of frequency measurement delay and the phase-locked loop by introducing a second-order function.
Abstract: Virtual inertia is known as an inevitable part of the modern power systems with high penetration of renewable energy. Recent trend of research is oriented in different methods of emulating the inertia to increase the sustainability of the system. In the case of dynamic performance of power systems especially in Automatic Generation Control (AGC) issue, there are concerns considering the matter of virtual inertia. This paper proposes an approach for analyzing the dynamic effects of virtual inertia in two-area AC/DC interconnected AGC power systems. Derivative control technique is used for higher level control application of inertia emulation. This method of inertia emulation is developed for two-area AGC system, which is connected by parallel AC/DC transmission systems. Based on the proposed technique, the dynamic effect of inertia emulated by storage devices for frequency and active power control are evaluated. The effects of frequency measurement delay and phase-locked loop effect are also considered by introducing a second-order function. Simulations performed by MATLAB software demonstrate how virtual inertia emulation can effectively improve the performance of the power system. A detailed eigenvalue analysis is also performed to support the positive effects of the proposed method.
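
As background for the derivative control technique mentioned above, inertia emulation is commonly expressed as an extra power command proportional to the rate of change of frequency; this is a generic textbook form, not an equation quoted from the paper:

$$
\Delta P_{\mathrm{VI}}(s) = -\,K_{\mathrm{VI}}\,\frac{s}{1 + s\,T_{\mathrm{VI}}}\,\Delta f(s),
$$

where $K_{\mathrm{VI}}$ plays the role of the emulated inertia constant and the first-order filter with time constant $T_{\mathrm{VI}}$ limits noise amplification; the measurement delay and PLL dynamics discussed in the abstract can be modelled as an additional second-order transfer function placed in series with this block.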

Journal ArticleDOI
TL;DR: A review of methods to reduce phase separation of calcium phosphate cement (CPC) and the associated constraints, and a review of phase separation mechanisms observed during extrusion of other pastes and the theoretical models used to describe these mechanisms, are presented to benefit future attempts to develop injectable calcium phosphate based systems.

Journal ArticleDOI
TL;DR: The paper emphasizes the role played by three main technologies, namely SDN, NFV and MEC, and analyzes the main open issues of these technologies in relation to 5G.

Journal ArticleDOI
15 Feb 2017-PLOS ONE
TL;DR: The results show that a machine learning approach can be used to monitor FoG during the daily life of PD patients and, furthermore, personalised models for FoG detection can be use to improve monitoring accuracy.
Abstract: Among Parkinson’s disease (PD) symptoms, freezing of gait (FoG) is one of the most debilitating. To assess FoG, current clinical practice mostly employs repeated evaluations over weeks and months based on questionnaires, which may not accurately map the severity of this symptom. The use of a non-invasive system to monitor the activities of daily living (ADL) and the PD symptoms experienced by patients throughout the day could provide a more accurate and objective evaluation of FoG in order to better understand the evolution of the disease and allow for a more informed decision-making process in making adjustments to the patient’s treatment plan. This paper presents a new algorithm to detect FoG with a machine learning approach based on Support Vector Machines (SVM) and a single tri-axial accelerometer worn at the waist. The method is evaluated through the acceleration signals in an outpatient setting gathered from 21 PD patients at their home and evaluated under two different conditions: first, a generic model is tested by using a leave-one-out approach and, second, a personalised model that also uses part of the dataset from each patient. Results show a significant improvement in the accuracy of the personalised model compared to the generic model, showing enhancement in the specificity and sensitivity geometric mean (GM) of 7.2%. Furthermore, the SVM approach adopted has been compared to the most comprehensive FoG detection method currently in use (referred to as MBFA in this paper). Results of our novel generic method provide an enhancement of 11.2% in the GM compared to the MBFA generic model and, in the case of the personalised model, a 10% improvement with respect to the MBFA personalised model. Thus, our results show that a machine learning approach can be used to monitor FoG during the daily life of PD patients and, furthermore, personalised models for FoG detection can be used to improve monitoring accuracy.
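
A schematic version of the detection pipeline described above (simple statistical window features, an SVM classifier, and leave-one-subject-out evaluation for the generic model); the feature set, window length and labels are illustrative stand-ins, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def window_features(window):
    """Simple per-axis statistics for one accelerometer window (n_samples x 3)."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

# Synthetic stand-in data: 21 "patients", 40 windows each, 3-axis acceleration.
rng = np.random.default_rng(0)
X, y, groups = [], [], []
for patient in range(21):
    for _ in range(40):
        fog = int(rng.integers(0, 2))                     # fake FoG / no-FoG label
        window = rng.standard_normal((100, 3)) * (1.5 if fog else 1.0)
        X.append(window_features(window))
        y.append(fog)
        groups.append(patient)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Generic model: leave-one-patient-out cross-validation (subject-independent test).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print("mean leave-one-subject-out accuracy:", scores.mean())
```

A personalised model, as described in the abstract, would additionally include part of the held-out patient's own windows in the training fold.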

Journal ArticleDOI
TL;DR: It is found that the rod-shaped NPs actually restructure and expose {111} nanofacets, which has important consequences for understanding the controversial surface chemistry of these catalytically highly active ceria NPs and paves the way for the predictive, rational design of catalytic materials at the nanoscale.
Abstract: The surface atomic arrangement of metal oxides determines their physical and chemical properties, and the ability to control and optimize structural parameters is of crucial importance for many applications, in particular in heterogeneous catalysis and photocatalysis. Whereas the structures of macroscopic single crystals can be determined with established methods, for nanoparticles (NPs), this is a challenging task. Herein, we describe the use of CO as a probe molecule to determine the structure of the surfaces exposed by rod-shaped ceria NPs. After calibrating the CO stretching frequencies using results obtained for different ceria single-crystal surfaces, we found that the rod-shaped NPs actually restructure and expose {111} nanofacets. This finding has important consequences for understanding the controversial surface chemistry of these catalytically highly active ceria NPs and paves the way for the predictive, rational design of catalytic materials at the nanoscale.

Proceedings ArticleDOI
01 May 2017
TL;DR: A taxonomy that summarizes important aspects of deep learning for approaching both action and gesture recognition in image sequences is introduced, and the main works proposed so far are summarized.
Abstract: The interest in action and gesture recognition has grown considerably in the last years. In this paper, we present a survey on current deep learning methodologies for action and gesture recognition in image sequences. We introduce a taxonomy that summarizes important aspects of deep learning for approaching both tasks. We review the details of the proposed architectures, fusion strategies, main datasets, and competitions. We summarize and discuss the main works proposed so far with particular interest in how they treat the temporal dimension of data, discussing their main features and identifying opportunities and challenges for future research.

Journal ArticleDOI
TL;DR: In this article, a new algebraic framework is proposed to generalize and analyze Diffie-Hellman-like decisional assumptions, which allows security and applications to be argued by considering only algebraic properties.
Abstract: We put forward a new algebraic framework to generalize and analyze Diffie-Hellman-like decisional assumptions which allows us to argue about security and applications by considering only algebraic properties. Our $\mathcal{D}_{\ell,k}$-MDDH Assumption states that it is hard to decide whether a vector in $\mathbb{G}^{\ell}$ is linearly dependent of the columns of some matrix in $\mathbb{G}^{\ell \times k}$ sampled according to distribution $\mathcal{D}_{\ell,k}$. It covers known assumptions such as DDH, 2-Lin (the Linear Assumption) and k-Lin (the k-Linear Assumption). Using our algebraic viewpoint, we can relate the generic hardness of our assumptions in $m$-linear groups to the irreducibility of certain polynomials which describe the output of $\mathcal{D}_{\ell,k}$. We use the hardness results to find new distributions for which the $\mathcal{D}_{\ell,k}$-MDDH Assumption holds generically in $m$-linear groups. In particular, our new assumptions 2-SCasc and 2-ILin are generically hard in bilinear groups and, compared to 2-Lin, have shorter description size, which is a relevant parameter for efficiency in many applications. These results support using our new assumptions as natural replacements for the 2-Lin assumption, which was already used in a large number of applications. To illustrate the conceptual advantages of our algebraic framework, we construct several fundamental primitives based on any MDDH Assumption. In particular, we can give many instantiations of a primitive in a compact way, including public-key encryption, hash proof systems, pseudo-random functions, and Groth-Sahai NIZK and NIWI proofs. As an independent contribution, we give more efficient NIZK and NIWI proofs for membership in a subgroup of $\mathbb{G}^{\ell}$. The results imply very significant efficiency improvements for a large number of schemes.

Posted Content
TL;DR: In this paper, a generative adversarial network (GAN) is proposed for speech enhancement; the model operates at the waveform level, is trained end-to-end, and incorporates 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them.
Abstract: Current speech enhancement techniques operate on the spectral domain and/or exploit some higher-level feature. The majority of them tackle a limited number of noise conditions and rely on first-order statistics. To circumvent these issues, deep networks are being increasingly used, thanks to their ability to learn complex functions from large example sets. In this work, we propose the use of generative adversarial networks for speech enhancement. In contrast to current techniques, we operate at the waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same model, such that model parameters are shared across them. We evaluate the proposed model using an independent, unseen test set with two speakers and 20 alternative noise conditions. The enhanced samples confirm the viability of the proposed model, and both objective and subjective evaluations confirm the effectiveness of it. With that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to improve their performance.