Showing papers by École Polytechnique de Montréal, published in 2018


Journal ArticleDOI
TL;DR: The results show that SMEs do not exploit all the resources available for implementing Industry 4.0, often limiting themselves to the adoption of Cloud Computing and the Internet of Things, and that real applications in the field of production planning are still absent.
Abstract: Industry 4.0 provides new paradigms for the industrial management of SMEs. Supported by a growing number of new technologies, this concept appears more flexible and less expensive than traditional ...

673 citations


Posted Content
TL;DR: A central point of the paper is to view generic optimization problems as data points and to ask which distribution of problems is relevant for learning on a given task.
Abstract: This paper surveys the recent attempts, both from the machine learning and operations research communities, at leveraging machine learning to solve combinatorial optimization problems. Given the hard nature of these problems, state-of-the-art algorithms rely on handcrafted heuristics for making decisions that are otherwise too expensive to compute or mathematically not well defined. Thus, machine learning looks like a natural candidate to make such decisions in a more principled and optimized way. We advocate for pushing further the integration of machine learning and combinatorial optimization and detail a methodology to do so. A main point of the paper is seeing generic optimization problems as data points and inquiring what is the relevant distribution of problems to use for learning on a given task.

557 citations


Journal ArticleDOI
TL;DR: This method represents the state of the art of the current knowledge on how to assess potential impacts from water use in LCA, assessing both human and ecosystem users’ potential deprivation, at the midpoint level, and provides a consensus-based methodology for the calculation of a water scarcity footprint as per ISO 14046.
Abstract: Life cycle assessment (LCA) has been used to assess freshwater-related impacts according to a new water footprint framework formalized in the ISO 14046 standard. To date, no consensus-based approach exists for applying this standard and results are not always comparable when different scarcity or stress indicators are used for characterization of impacts. This paper presents the outcome of a 2-year consensus building process by the Water Use in Life Cycle Assessment (WULCA), a working group of the UNEP-SETAC Life Cycle Initiative, on a water scarcity midpoint method for use in LCA and for water scarcity footprint assessments. In the previous work, the question to be answered was identified and different expert workshops around the world led to three different proposals. After eliminating one proposal showing low relevance for the question to be answered, the remaining two were evaluated against four criteria: stakeholder acceptance, robustness with closed basins, main normative choice, and physical meaning. The recommended method, AWARE, is based on the quantification of the relative available water remaining per area once the demand of humans and aquatic ecosystems has been met, answering the question “What is the potential to deprive another user (human or ecosystem) when consuming water in this area?” The resulting characterization factor (CF) ranges between 0.1 and 100 and can be used to calculate water scarcity footprints as defined in the ISO standard. After 8 years of development on water use impact assessment methods, and 2 years of consensus building, this method represents the state of the art of the current knowledge on how to assess potential impacts from water use in LCA, assessing both human and ecosystem users’ potential deprivation, at the midpoint level, and provides a consensus-based methodology for the calculation of a water scarcity footprint as per ISO 14046.
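For illustration, the AWARE characterization factor can be sketched as the world-average availability-minus-demand (AMD) divided by the local AMD, clamped to the [0.1, 100] range quoted above. The following minimal Python sketch uses made-up basin numbers and a hypothetical world-average AMD; it is not the method's reference implementation.

```python
# Illustrative sketch of an AWARE-style water scarcity characterization
# factor (CF). Assumption: CF_i = AMD_world_avg / AMD_i, clamped to
# [0.1, 100], where AMD = (availability - human demand - ecosystem
# demand) / area. All numbers below are made up for illustration.

def amd(availability_m3, human_demand_m3, ecosystem_demand_m3, area_m2):
    """Available water remaining per unit area (m3 / m2 / month)."""
    return (availability_m3 - human_demand_m3 - ecosystem_demand_m3) / area_m2

def aware_cf(amd_region, amd_world_avg):
    """Characterization factor, dimensionless, clamped to [0.1, 100]."""
    if amd_region <= 0:            # demand meets or exceeds availability
        return 100.0
    return min(max(amd_world_avg / amd_region, 0.1), 100.0)

# Hypothetical basin: 1e9 m3 available, 6e8 m3 total demand, 1e10 m2 area.
region_amd = amd(1.0e9, 4.0e8, 2.0e8, 1.0e10)
cf = aware_cf(region_amd, amd_world_avg=0.0136)   # world average: assumed value
print(f"AMD = {region_amd:.4f} m3/m2/month, CF = {cf:.2f}")

# A water scarcity footprint (ISO 14046) is then consumption times CF:
footprint = 1000 * cf   # m3 world-eq for 1000 m3 consumed in this basin
```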

455 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarized the recent advances, challenges, and prospects of both fundamental and applied aspects of stress in thin films and engineering coatings and systems, based on recent achievements presented during the 2016 Stress Workshop entitled “Stress Evolution in Thin Films and Coatings: from Fundamental Understanding to Control.”
Abstract: The issue of stress in thin films and functional coatings is a persistent problem in materials science and technology that has congregated many efforts, both from experimental and fundamental points of view, to get a better understanding on how to deal with, how to tailor, and how to manage stress in many areas of applications. With the miniaturization of device components, the quest for increasingly complex film architectures and multiphase systems and the continuous demands for enhanced performance, there is a need toward the reliable assessment of stress on a submicron scale from spatially resolved techniques. Also, the stress evolution during film and coating synthesis using physical vapor deposition (PVD), chemical vapor deposition, plasma enhanced chemical vapor deposition (PECVD), and related processes is the result of many interrelated factors and competing stress sources so that the task to provide a unified picture and a comprehensive model from the vast amount of stress data remains very challenging. This article summarizes the recent advances, challenges, and prospects of both fundamental and applied aspects of stress in thin films and engineering coatings and systems, based on recent achievements presented during the 2016 Stress Workshop entitled “Stress Evolution in Thin Films and Coatings: from Fundamental Understanding to Control.” Evaluation methods, implying wafer curvature, x-ray diffraction, or focused ion beam removal techniques, are reviewed. Selected examples of stress evolution in elemental and alloyed systems, graded layers, and multilayer-stacks as well as amorphous films deposited using a variety of PVD and PECVD techniques are highlighted. Based on mechanisms uncovered by in situ and real-time diagnostics, a kinetic model is outlined that is capable of reproducing the dependence of intrinsic (growth) stress on the grain size, growth rate, and deposited energy. The problems and solutions related to stress in the context of optical coatings, inorganic coatings on plastic substrates, and tribological coatings for aerospace applications are critically examined. This review also suggests strategies to mitigate excessive stress levels from novel coating synthesis perspectives to microstructural design approaches, including the ability to empower crack-based fabrication processes, pathways leading to stress relaxation and compensation, as well as management of the film and coating growth conditions with respect to energetic ion bombardment. Future opportunities and challenges for stress engineering and stress modeling are considered and outlined.

448 citations


Journal ArticleDOI
TL;DR: This critical and comprehensive review of enabling hardware, instrumentation, algorithms, and potential applications in real-time high-resolution THz imaging can serve a diverse community of fundamental and applied scientists.
Abstract: Terahertz (THz) science and technology have greatly progressed over the past two decades to a point where the THz region of the electromagnetic spectrum is now a mature research area with many fundamental and practical applications. Furthermore, THz imaging is positioned to play a key role in many industrial applications, as THz technology is steadily shifting from university-grade instrumentation to commercial systems. In this context, the objective of this review is to discuss recent advances in THz imaging with an emphasis on the modalities that could enable real-time high-resolution imaging. To this end, we first discuss several key imaging modalities developed over the years: THz transmission, reflection, and conductivity imaging; THz pulsed imaging; THz computed tomography; and THz near-field imaging. Then, we discuss several enabling technologies for real-time THz imaging within the time-domain spectroscopy paradigm: fast optical delay lines, photoconductive antenna arrays, and electro-optic sampling with cameras. Next, we discuss the advances in THz cameras, particularly THz thermal cameras and THz field-effect transistor cameras. Finally, we overview the most recent techniques that enable fast THz imaging with single-pixel detectors: mechanical beam-steering, compressive sensing, spectral encoding, and fast Fourier optics. We believe that this critical and comprehensive review of enabling hardware, instrumentation, algorithms, and potential applications in real-time high-resolution THz imaging can serve a diverse community of fundamental and applied scientists.

284 citations


Proceedings Article
15 Feb 2018
TL;DR: The authors proposed a multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model, and trained this model on several data sources with multiple training objectives on over 100 million sentences.
Abstract: A lot of the recent success in natural language processing (NLP) has been driven by distributed vector representations of words trained on large amounts of text in an unsupervised manner. These representations are typically used as general purpose features for words across a range of NLP problems. However, extending this success to learning representations of sequences of words, such as sentences, remains an open problem. Recent work has explored unsupervised as well as supervised learning techniques with different training objectives to learn general purpose fixed-length sentence representations. In this work, we present a simple, effective multi-task learning framework for sentence representations that combines the inductive biases of diverse training objectives in a single model. We train this model on several data sources with multiple training objectives on over 100 million sentences. Extensive experiments demonstrate that sharing a single recurrent sentence encoder across weakly related tasks leads to consistent improvements over previous methods. We present substantial improvements in the context of transfer learning and low-resource settings using our learned general-purpose representations.
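For illustration, the core idea, a single recurrent sentence encoder shared across several training objectives, can be sketched as follows in PyTorch. Layer sizes and the task names are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class SharedSentenceEncoder(nn.Module):
    """One recurrent encoder shared across all training tasks."""
    def __init__(self, vocab_size, emb_dim=300, hid_dim=1024):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        _, h = self.gru(self.embed(token_ids))   # h: (2, B, hid_dim)
        return torch.cat([h[0], h[1]], dim=-1)   # fixed-length sentence vector

class MultiTaskModel(nn.Module):
    """Shared encoder plus one small head per objective (names are placeholders)."""
    def __init__(self, vocab_size, task_output_dims):
        super().__init__()
        self.encoder = SharedSentenceEncoder(vocab_size)
        self.heads = nn.ModuleDict({
            task: nn.Linear(2048, dim)           # 2048 = 2 * hid_dim
            for task, dim in task_output_dims.items()
        })

    def forward(self, token_ids, task):
        return self.heads[task](self.encoder(token_ids))

model = MultiTaskModel(50000, {"nli": 3, "skip_thought": 50000})
# Training alternates mini-batches across tasks; gradients from every
# objective update the single shared encoder.
```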

272 citations


Book ChapterDOI
26 Jun 2018
TL;DR: The neural combinatorial optimization framework is extended to solve the traveling salesman problem (TSP) and the performance of the proposed framework alone is generally as good as high performance heuristics (OR-Tools).
Abstract: The aim of the study is to provide interesting insights on how efficient machine learning algorithms could be adapted to solve combinatorial optimization problems in conjunction with existing heuristic procedures. More specifically, we extend the neural combinatorial optimization framework to solve the traveling salesman problem (TSP). In this framework, the city coordinates are used as inputs and the neural network is trained using reinforcement learning to predict a distribution over city permutations. Our proposed framework differs from the one in [1] since we do not make use of the Long Short-Term Memory (LSTM) architecture and we opted to design our own critic to compute a baseline for the tour length which results in more efficient learning. More importantly, we further enhance the solution approach with the well-known 2-opt heuristic. The results show that the performance of the proposed framework alone is generally as good as high performance heuristics (OR-Tools). When the framework is equipped with a simple 2-opt procedure, it could outperform such heuristics and achieve close to optimal results on 2D Euclidean graphs. This demonstrates that our approach based on machine learning techniques could learn good heuristics which, once being enhanced with a simple local search, yield promising results.
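The 2-opt procedure used to refine the learned tours is a standard local search; a minimal sketch (not the authors' code) on 2D Euclidean points follows. It repeatedly reverses a tour segment whenever doing so shortens the tour.

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, pts):
    """Reverse segments while an improving move exists (first-improvement)."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                a, b = pts[tour[i - 1]], pts[tour[i]]
                c, d = pts[tour[j]], pts[tour[(j + 1) % len(tour)]]
                # Gain from replacing edges (a,b),(c,d) with (a,c),(b,d)
                if (math.dist(a, c) + math.dist(b, d)
                        < math.dist(a, b) + math.dist(c, d) - 1e-12):
                    tour[i:j + 1] = reversed(tour[i:j + 1])
                    improved = True
    return tour

pts = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]
print(tour_length(two_opt(list(range(len(pts))), pts), pts))
```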

242 citations


Journal ArticleDOI
TL;DR: This paper proposes and examines a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets, and shows that a low-capacity FCN model can serve as a pre-processor to normalize medical input data.

184 citations


Journal ArticleDOI
TL;DR: The salt addition method has been found appropriate and convenient for determining the PZC of natural organic substrates, whereas both the ion adsorption and the zeta potential methods failed to give points of zero charge for these substrates.
Abstract: This study evaluates different methods to determine points of zero charge (PZCs) on five organic materials, namely maple sawdust, wood ash, peat moss, compost, and brown algae, used for the passive treatment of contaminated neutral drainage effluents. The PZC provides important information about metal sorption mechanisms. Three methods were used: (1) the salt addition method, measuring the PZC; (2) the zeta potential method, measuring the isoelectric point (IEP); (3) the ion adsorption method, measuring the point of zero net charge (PZNC). Natural kaolinite and synthetic goethite were also tested with both the salt addition and the ion adsorption methods in order to validate experimental protocols. Results obtained from the salt addition method in 0.05 M NaNO3 were the following: 4.72 ± 0.06 (maple sawdust), 9.50 ± 0.07 (wood ash), 3.42 ± 0.03 (peat moss), 7.68 ± 0.01 (green compost), and 6.06 ± 0.11 (brown algae). Both the ion adsorption and the zeta potential methods failed to give points of zero charge for these substrates. The PZC of kaolinite (3.01 ± 0.03) was similar to the PZNC (2.9-3.4) and fell within the range of values reported in the literature (2.7-4.1). As for the goethite, the PZC (10.9 ± 0.05) was slightly higher than the PZNC (9.0-9.4). The salt addition method has been found appropriate and convenient to determine the PZC of natural organic substrates.
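In the salt addition method, ΔpH (final minus initial pH) is measured against the initial pH, and the PZC is the initial pH at which ΔpH crosses zero. A small sketch with made-up measurements, locating that crossing by linear interpolation:

```python
# Salt addition method: the PZC is the initial pH at which
# delta_pH = pH_final - pH_initial crosses zero. Data are illustrative.

initial_ph = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
delta_ph   = [1.1, 0.9, 0.4, -0.2, -0.6, -0.9]   # hypothetical measurements

def pzc(initial_ph, delta_ph):
    """Linear interpolation of the first sign change of delta_pH."""
    for (x0, y0), (x1, y1) in zip(zip(initial_ph, delta_ph),
                                  zip(initial_ph[1:], delta_ph[1:])):
        if y0 == 0:
            return x0
        if y0 * y1 < 0:                 # sign change between x0 and x1
            return x0 + (x1 - x0) * y0 / (y0 - y1)
    return None                         # no crossing in the measured range

print(f"PZC ~ {pzc(initial_ph, delta_ph):.2f}")   # ~4.67 for these data
```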

174 citations


Posted Content
TL;DR: The authors collected ReDial, a dataset consisting of over 10,000 conversations centered around the theme of providing movie recommendations, and used this dataset to explore new neural architectures, mechanisms, and methods suitable for composing conversational recommendation systems.
Abstract: There has been growing interest in using neural networks and deep learning techniques to create dialogue systems. Conversational recommendation is an interesting setting for the scientific exploration of dialogue with natural language as the associated discourse involves goal-driven dialogue that often transforms naturally into more free-form chat. This paper provides two contributions. First, until now there has been no publicly available large-scale dataset consisting of real-world dialogues centered around recommendations. To address this issue and to facilitate our exploration here, we have collected ReDial, a dataset consisting of over 10,000 conversations centered around the theme of providing movie recommendations. We make this data available to the community for further research. Second, we use this dataset to explore multiple facets of conversational recommendations. In particular we explore new neural architectures, mechanisms, and methods suitable for composing conversational recommendation systems. Our dataset allows us to systematically probe model sub-components addressing different parts of the overall problem domain ranging from: sentiment analysis and cold-start recommendation generation to detailed aspects of how natural language is used in this setting in the real world. We combine such sub-components into a full-blown dialogue system and examine its behavior.

166 citations


Journal ArticleDOI
TL;DR: In this article, the authors propose a transfer-pathway mechanism for the photogenerated charge carriers in the hybrid samples and show that the TW0.075 hybrid powder (titanium dioxide/tungsten oxide with a 0.075 molar ratio of tungsten precursor) outperforms the rest of the samples in the UV degradation of the model pollutant methylene blue (MB).
Abstract: Synthesizing titanium dioxide with energy storage ability represents a paradigm shift for photocatalytic applications. We prepared titania tungstated photocatalysts (TiO2/WO3) by sol–gel and crash precipitation methods followed by spray drying to produce a micro-sized hybrid material. X-ray diffraction confirmed the tetragonal and monoclinic crystalline structures of TiO2 and WO3 in the hybrid material calcined at 600 °C. Spray drying a suspension of titanium hydroxide alone creates spherical 20 μm TiO2 particles, whereas a suspension of ammonium paratungstate dissolved in hydrochloric acid produces 7 μm median size needle-like WO3 particles. According to SEM-EDX images, spray drying both semiconductors together produces a homogeneously distributed mixture of the powders. The TW0.075 (titanium dioxide/tungsten oxide having 0.075 molar ratio of tungsten precursor) hybrid powder, with a surface area of 221 m2/g, 2.88 eV band gap energy, and 21.5% of anatase [001] facets (Raman analysis), decreased the electron-hole recombination with 1594 ns of carrier lifetime (PL-TRPL analysis). TW0.075 outperforms the rest of the samples in the UV degradation of the model pollutant methylene blue (MB), converting 90% MB in 100 min (30 min dark + 40 min light + 30 min dark), thus demonstrating energy storage ability in the absence of UV irradiation. Hydroxyl radicals (•OH) and superoxide anions (O2•−) are the species mainly involved in the pollutant degradation. We propose a transfer-pathway mechanism of the photogenerated charge carriers in the hybrid samples.

Journal ArticleDOI
TL;DR: In this article, a medium-based approach based on generalized sheet transition conditions and surface susceptibility tensors is proposed to obtain diffraction-free refractive metasurfaces that are essentially lossless, passive, bianisotropic, and reciprocal.
Abstract: Refraction represents one of the most fundamental operations that may be performed by a metasurface. However, simple phase-gradient metasurface designs suffer from restricted angular deflection due to spurious diffraction orders. It has been recently shown, using a circuit-based approach, that refraction without spurious diffraction, or diffraction-free, can fortunately be achieved by a transverse (or in-plane polarizable) metasurface exhibiting either loss–gain, nonreciprocity, or bianisotropy. Here, we re-derive these conditions using a medium-based—and hence, more insightful—approach based on generalized sheet transition conditions and surface susceptibility tensors, and experimentally demonstrate for the first time, beyond any doubt, two diffraction-free refractive metasurfaces that are essentially lossless, passive, bianisotropic, and reciprocal.

Journal ArticleDOI
TL;DR: The design, development, and validation of an in situ intraoperative, label-free, cancer detection system based on high wavenumber Raman spectroscopy, engineered into a commercially available biopsy system allowing tumor analysis prior to tissue harvesting without disrupting workflow are reported on.
Abstract: Modern cancer diagnosis requires histological, molecular, and genomic tumor analyses. Tumor sampling is often achieved using a targeted needle biopsy approach. Targeting errors and cancer heterogeneity causing inaccurate sampling are important limitations of this blind technique leading to non-diagnostic or poor quality samples, and the need for repeated biopsies pose elevated patient risk. An optical technology that can analyze the molecular nature of the tissue prior to harvesting could improve cancer targeting and mitigate patient risk. Here we report on the design, development, and validation of an in situ intraoperative, label-free, cancer detection system based on high wavenumber Raman spectroscopy. This optical detection device was engineered into a commercially available biopsy system allowing tumor analysis prior to tissue harvesting without disrupting workflow. Using a dual validation approach we show that high wavenumber Raman spectroscopy can detect human dense cancer with >60% cancer cells in situ during surgery with a sensitivity and specificity of 80% and 90%, respectively. We also demonstrate for the first time the use of this system in a swine brain biopsy model. These studies set the stage for the clinical translation of this optical molecular imaging method for high yield and safe targeted biopsy.

Journal ArticleDOI
TL;DR: In this article, generalized sheet transition conditions (GSTCs) with a bianisotropic surface susceptibility tensor model of the metasurface structure are presented.
Abstract: The paper overviews our recent work on the synthesis of metasurfaces and related concepts and applications. The synthesis is based on generalized sheet transition conditions (GSTCs) with a bianisotropic surface susceptibility tensor model of the metasurface structure. We first place metasurfaces in a proper historical context and describe the GSTC technique with some fundamental susceptibility tensor considerations. On this basis, we next provide an in-depth development of our susceptibility-GSTC synthesis technique. Finally, we present five recent metasurface concepts and applications, which cover the topics of birefringent transformations, bianisotropic refraction, light emission enhancement, remote spatial processing, and nonlinear second-harmonic generation.
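For reference, one common tangential form of the GSTCs with bianisotropic surface susceptibilities (for purely tangential polarizations and an e^{jωt} time convention; sign and normalization conventions vary across the literature, and the paper's exact convention may differ) is:

```latex
% Tangential GSTCs relating the field jumps across the metasurface to its
% surface polarizations (one common convention, tangential polarizations only):
\hat{z}\times\Delta\mathbf{H} = j\omega\,\mathbf{P}_{\parallel},
\qquad
\Delta\mathbf{E}\times\hat{z} = j\omega\mu\,\mathbf{M}_{\parallel},
% with the polarizations expressed through the bianisotropic susceptibility
% tensors and the average fields on the two sides of the sheet:
\mathbf{P} = \epsilon\,\bar{\bar{\chi}}_{\mathrm{ee}}\,\mathbf{E}_{\mathrm{av}}
           + \sqrt{\mu\epsilon}\;\bar{\bar{\chi}}_{\mathrm{em}}\,\mathbf{H}_{\mathrm{av}},
\qquad
\mathbf{M} = \bar{\bar{\chi}}_{\mathrm{mm}}\,\mathbf{H}_{\mathrm{av}}
           + \sqrt{\epsilon/\mu}\;\bar{\bar{\chi}}_{\mathrm{me}}\,\mathbf{E}_{\mathrm{av}}.
```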

Proceedings ArticleDOI
01 Jun 2018
TL;DR: Two techniques to improve landmark localization in images from partially annotated datasets are presented, and it is shown that these techniques improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels.
Abstract: We present two techniques to improve landmark localization in images from partially annotated datasets. Our primary goal is to leverage the common situation where precise landmark locations are only provided for a small data subset, but where class labels for classification or regression tasks related to the landmarks are more abundantly available. First, we propose the framework of sequential multitasking and explore it here through an architecture for landmark localization where training with class labels acts as an auxiliary signal to guide the landmark localization on unlabeled data. A key aspect of our approach is that errors can be backpropagated through a complete landmark localization model. Second, we propose and explore an unsupervised learning technique for landmark localization based on having a model predict equivariant landmarks with respect to transformations applied to the image. We show that these techniques improve landmark prediction considerably and can learn effective detectors even when only a small fraction of the dataset has landmark labels. We present results on two toy datasets and four real datasets, with hands and faces, and report new state-of-the-art on two datasets in the wild, e.g. with only 5% of labeled images we outperform previous state-of-the-art trained on the AFLW dataset.
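The unsupervised component can be sketched as an equivariance loss: landmarks predicted on a transformed image should match the transformed landmarks of the original image. A minimal PyTorch-style sketch using a rotation (the paper considers more general transformations; the model interface and coordinate convention here are illustrative assumptions):

```python
import math
import torch
from torchvision.transforms.functional import rotate

def equivariance_loss(model, images, angle_deg=15.0):
    """Penalize disagreement between f(T(I)) and T(f(I)) for a rotation T.

    Assumes model(images) returns landmarks of shape (B, K, 2) in
    coordinates centered on the image; the sign convention relating the
    image rotation to the coordinate rotation is also an assumption.
    """
    theta = math.radians(angle_deg)
    c, s = math.cos(theta), math.sin(theta)
    rot = torch.tensor([[c, -s], [s, c]])                # 2x2 rotation matrix

    pred = model(images)                                 # f(I)
    pred_on_rotated = model(rotate(images, angle_deg))   # f(T(I))
    rotated_pred = pred @ rot.T                          # T(f(I))
    return ((pred_on_rotated - rotated_pred) ** 2).mean()
```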

Journal ArticleDOI
TL;DR: This research suggests a potential approach using heat-moisture treatment (HMT) to produce starch products with desired digestibility.

Journal ArticleDOI
TL;DR: The fusion of the PAM50 and ICBM152 templates will facilitate group and multi‐center studies of combined brain and spinal cord MRI, and enable the use of existing atlases of the brainstem compatible with the ICBM space.

Journal ArticleDOI
TL;DR: In this paper, the effect of different shot peening conditions on Inconel 718 tested in low and high cycle fatigue is presented. In low cycle fatigue, the roughness resulting from shot peening is the determining factor, while in high cycle fatigue it is the presence of significant residual stresses.

Proceedings ArticleDOI
27 Dec 2018
TL;DR: Potential ethical issues that arise in dialogue systems research are highlighted, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns.
Abstract: The use of dialogue systems as a medium for human-machine interaction is an increasingly prevalent paradigm. A growing number of dialogue systems use conversation strategies that are learned from large datasets. There are well-documented instances where interactions with these systems have resulted in biased or even offensive conversations due to the data-driven training process. Here, we highlight potential ethical issues that arise in dialogue systems research, including: implicit biases in data-driven systems, the rise of adversarial examples, potential sources of privacy violations, safety concerns, special considerations for reinforcement learning systems, and reproducibility concerns. We also suggest areas stemming from these issues that deserve further investigation. Through this initial survey, we hope to spur research leading to robust, safe, and ethically sound dialogue systems.

Journal ArticleDOI
TL;DR: In this paper, an innovative finishing technique combining chemical and abrasive flow polishing of interior surfaces of tubular IN625 components designed for the aerospace industry was designed and validated, and the synergistic effect stemming from a combined use of the chemical-abrasive flows was investigated by studying: a) the flow of abrasive particles suspended in water, b) a flow of a chemical solution without abrasives, and c) the surface roughness and texture.

Journal ArticleDOI
TL;DR: This model not only significantly increased predictive power by combining all datasets, but also revealed novel interactions between different biological modalities, and it provides the framework for future studies examining deviations implicated in pregnancy-related pathologies including preterm birth and preeclampsia.
Abstract: Motivation: Multiple biological clocks govern a healthy pregnancy. These biological mechanisms produce immunologic, metabolomic, proteomic, genomic and microbiomic adaptations during the course of pregnancy. Modeling the chronology of these adaptations during full-term pregnancy provides the framework for future studies examining deviations implicated in pregnancy-related pathologies including preterm birth and preeclampsia.

Journal ArticleDOI
TL;DR: Experimental results suggest that the system proposed is capable of achieving a similar performance to standard verifiers trained with up to five signature specimens, and a challenging benchmark, assessed with multiple state-of-the-art automatic signature verifiers and multiple databases, proves the robustness of the system.
Abstract: The dynamic signature is a biometric trait widely used and accepted for verifying a person's identity. Current automatic signature-based biometric systems typically require five, ten, or even more specimens of a person's signature to learn intrapersonal variability sufficient to provide an accurate verification of the individual's identity. To mitigate this drawback, this paper proposes a procedure for training with only a single reference signature. Our strategy consists of duplicating the given signature a number of times and training an automatic signature verifier with each of the resulting signatures. The duplication scheme is based on a sigma lognormal decomposition of the reference signature. Two methods are presented to create human-like duplicated signatures: the first varies the strokes' lognormal parameters (stroke-wise) whereas the second modifies their virtual target points (target-wise). A challenging benchmark, assessed with multiple state-of-the-art automatic signature verifiers and multiple databases, proves the robustness of the system. Experimental results suggest that our system, with a single reference signature, is capable of achieving a similar performance to standard verifiers trained with up to five signature specimens.
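In the sigma-lognormal model, each stroke has speed profile v(t) = D exp(−(ln(t−t0)−μ)²/(2σ²)) / (σ√(2π)(t−t0)), so stroke-wise duplication amounts to jittering the per-stroke parameters (D, t0, μ, σ). A toy Python sketch; the ±5% jitter is an arbitrary illustrative choice, not the paper's tuned perturbation ranges:

```python
import math
import random

def lognormal_speed(t, D, t0, mu, sigma):
    """Sigma-lognormal speed profile of a single stroke (Plamondon model)."""
    if t <= t0:
        return 0.0
    x = math.log(t - t0)
    return (D / (sigma * math.sqrt(2 * math.pi) * (t - t0))
            * math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)))

def duplicate_strokes(strokes, jitter=0.05):
    """Stroke-wise duplication: perturb each lognormal parameter slightly.

    strokes: list of dicts with keys D, t0, mu, sigma (one per stroke).
    The +/-5% jitter is an illustrative assumption, not the paper's ranges.
    """
    def wiggle(v):
        return v * (1 + random.uniform(-jitter, jitter))
    return [{k: wiggle(v) for k, v in s.items()} for s in strokes]

# Hypothetical reference signature decomposed into two strokes:
reference = [{"D": 5.0, "t0": 0.05, "mu": -1.6, "sigma": 0.30},
             {"D": 3.2, "t0": 0.22, "mu": -1.8, "sigma": 0.25}]
training_set = [duplicate_strokes(reference) for _ in range(20)]
```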

Journal ArticleDOI
TL;DR: In this article, the authors review catalysts and their role in the development of the methyl methacrylate production processes described herein.
Abstract: Methyl methacrylate (MMA) is a specialty monomer for poly methyl methacrylate (PMMA) and the increasing demand for this monomer has motivated industry to develop clean technologies based on renewable resources. The dominant commercial process reacts acetone and hydrogen cyanide to MMA (ACH route) but the intermediates (hydrogen cyanide, and acetone cyanohydrin) are toxic and represent an environmental hazard. Esterification of methacrylic acid (MAA) to MMA is a compelling alternative together with ethylene, propylene, and isobutene/t-butanol as feedstocks. Partially oxidizing isobutane or 2-methyl-1,3-propanediol (2MPDO) over heteropolycompounds to MAA in a single-step is nascent technology to replace current processes. The focus of this review is on catalysts and their role in the development of processes herein described. Indeed, in some cases remarkable catalysts were studied that enabled considerable steps forward in both the advancement of catalysis science and establishing the basis for new technologies. An emblematic example is represented by Keggin-type heteropolycompounds with cesium and vanadium, which are promising catalysts to convert isobutane and 2MPDO to MAA. Renewable sources for the MMA or MAA route include acetone, isobutanol, ethanol, lactic, itaconic, and citric acids. End-of-life PMMA is expected to grow as a future source of MMA.

Journal ArticleDOI
TL;DR: A kinetic model for the time evolution of the species of interest in the presence of singlet quenchers is built, and it is found that hexacene, under the conditions of the model, can feature a higher yield than cavity-free pentacene when assisted by polaritonic effects.
Abstract: Singlet fission is an important candidate to increase energy conversion efficiency in organic photovoltaics by providing a pathway to increase the quantum yield of excitons per photon absorbed in select materials. We investigate the dependence of exciton quantum yield for acenes in the strong light-matter interaction (polariton) regime, where the materials are embedded in optical microcavities. Starting from an open-quantum-systems approach, we build a kinetic model for time-evolution of species of interest in the presence of singlet quenchers and show that polaritons can decrease or increase exciton quantum yields compared to the cavity-free case. In particular, we find that hexacene, under the conditions of our model, can feature a higher yield than cavity-free pentacene when assisted by polaritonic effects. Similarly, we show that pentacene yield can be increased when assisted by polariton states. Finally, we address how various relaxation processes between bright and dark states in lossy microcavities...
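A toy version of such a kinetic model: coupled rate equations for the singlet and triplet populations with fission, intrinsic decay, and quencher channels, integrated numerically. The two-species reduction and all rate constants below are illustrative assumptions, not the paper's polaritonic model:

```python
from scipy.integrate import solve_ivp

# Toy kinetic model (illustrative rates, 1/ns):
#   d[S]/dt = -(k_fiss + k_S + k_q) [S]    singlet: fission, decay, quenching
#   d[T]/dt = 2 k_fiss [S] - k_T [T]       one singlet fissions into two triplets
k_fiss, k_S, k_q, k_T = 10.0, 0.1, 0.5, 0.01

def rhs(t, y):
    S, T = y
    return [-(k_fiss + k_S + k_q) * S, 2.0 * k_fiss * S - k_T * T]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], max_step=0.01)
print(f"[S](5 ns) = {sol.y[0, -1]:.2e}, [T](5 ns) = {sol.y[1, -1]:.2f}")

# The branching ratio gives the triplet (exciton) yield per absorbed
# photon (maximum 2); quenchers lower it by competing with fission:
print(f"triplet yield ~ {2.0 * k_fiss / (k_fiss + k_S + k_q):.2f}")
```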

Journal ArticleDOI
01 May 2018 - Pain
TL;DR: This work shows that patients suffering from a common chronic pain disorder, compared with healthy volunteers, exhibit elevated levels of the neuroinflammation marker 18 kDa translocator protein, in both the neuroforamina and spinal cord, and suggests that therapies targeting immune cell activation may be beneficial for chronic pain patients.
Abstract: Numerous preclinical studies support the role of spinal neuroimmune activation in the pathogenesis of chronic pain, and targeting glia (eg, microglia/astrocyte)- or macrophage-mediated neuroinflammatory responses effectively prevents or reverses the establishment of persistent nocifensive behaviors in laboratory animals. However, thus far, the translation of those findings into novel treatments for clinical use has been hindered by the scarcity of data supporting the role of neuroinflammation in human pain. Here, we show that patients suffering from a common chronic pain disorder (lumbar radiculopathy), compared with healthy volunteers, exhibit elevated levels of the neuroinflammation marker 18 kDa translocator protein, in both the neuroforamina (containing dorsal root ganglion and nerve roots) and spinal cord. These elevations demonstrated a pattern of spatial specificity correlating with the patients' clinical presentation, as they were observed in the neuroforamen ipsilateral to the symptomatic leg (compared with both contralateral neuroforamen in the same patients as well as to healthy controls) and in the most caudal spinal cord segments, which are known to process sensory information from the lumbosacral nerve roots affected in these patients (compared with more superior segments). Furthermore, the neuroforaminal translocator protein signal was associated with responses to fluoroscopy-guided epidural steroid injections, supporting its role as an imaging marker of neuroinflammation, and highlighting the clinical significance of these observations. These results implicate immunoactivation at multiple levels of the nervous system as a potentially important and clinically relevant mechanism in human radicular pain, and suggest that therapies targeting immune cell activation may be beneficial for chronic pain patients.

Journal ArticleDOI
TL;DR: This paper aims at providing a comprehensive survey of open source publications related to APT actors and their activities, focusing on the APT activities, rather than research on defensive or detective measures.

Journal ArticleDOI
TL;DR: In this article, a global consensus process was initiated to agree on an updated overall life cycle impact assessment (LCIA) framework and to recommend a non-comprehensive list of environmental indicators and LCIA characterization factors for (1) climate change, (2) fine particulate matter impacts on human health, (3) water consumption impacts (both scarcity and human health) and (4) land use impacts on biodiversity.
Abstract: Guidance is needed on best-suited indicators to quantify and monitor the man-made impacts on human health, biodiversity and resources. Therefore, the UNEP-SETAC Life Cycle Initiative initiated a global consensus process to agree on an updated overall life cycle impact assessment (LCIA) framework and to recommend a non-comprehensive list of environmental indicators and LCIA characterization factors for (1) climate change, (2) fine particulate matter impacts on human health, (3) water consumption impacts (both scarcity and human health) and (4) land use impacts on biodiversity. The consensus building process involved more than 100 world-leading scientists in task forces via multiple workshops. Results were consolidated during a 1-week Pellston Workshop™ in January 2016 leading to the following recommendations. LCIA framework: The updated LCIA framework now distinguishes between intrinsic, instrumental and cultural values, with disability-adjusted life years (DALY) to characterize damages on human health and with measures of vulnerability included to assess biodiversity loss. Climate change impacts: Two complementary climate change impact categories are recommended: (a) The global warming potential 100 years (GWP 100) represents shorter term impacts associated with rate of change and adaptation capacity, and (b) the global temperature change potential 100 years (GTP 100) characterizes the century-scale long term impacts, both including climate-carbon cycle feedbacks for all climate forcers. Fine particulate matter (PM2.5) health impacts: Recommended characterization factors (CFs) for primary and secondary (interim) PM2.5 are established, distinguishing between indoor, urban and rural archetypes. Water consumption impacts: CFs are recommended, preferably on monthly and watershed levels, for two categories: (a) The water scarcity indicator “AWARE” characterizes the potential to deprive human and ecosystems users and quantifies the relative Available WAter REmaining per area once the demand of humans and aquatic ecosystems has been met, and (b) the impact of water consumption on human health assesses the DALYs from malnutrition caused by lack of water for irrigated food production. Land use impacts: CFs representing global potential species loss from land use are proposed as interim recommendation suitable to assess biodiversity loss due to land use and land use change in LCA hotspot analyses. The recommended environmental indicators may be used to support the UN Sustainable Development Goals in order to quantify and monitor progress towards sustainable production and consumption. These indicators will be periodically updated, establishing a process for their stewardship.
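For context, the two recommended climate metrics are both defined relative to CO2 over the horizon H = 100 years; in their standard (IPCC-style) form:

```latex
% Standard definitions of the two recommended climate metrics for a
% forcer x over horizon H = 100 yr (RF: radiative forcing following a
% pulse emission; \Delta T: induced global mean temperature change).
\mathrm{GWP}_H(x) \;=\; \frac{\int_0^{H} \mathrm{RF}_x(t)\,\mathrm{d}t}
                             {\int_0^{H} \mathrm{RF}_{\mathrm{CO_2}}(t)\,\mathrm{d}t},
\qquad
\mathrm{GTP}_H(x) \;=\; \frac{\Delta T_x(H)}{\Delta T_{\mathrm{CO_2}}(H)}.
```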

Journal ArticleDOI
TL;DR: By analyzing hundreds of specifications, it appears that the range of C-factors of the grippers built by one company can often be consistently different from those of competitors, which seems at odds with the requirements of modern robotic systems.
Abstract: With the recent introduction of ambitious industrial strategies such as Horizon 2020 and Industry 4.0, a massive focus has been placed on the development of an efficient robotic workforce. Amongst all the operations robotic systems can take care of, handling remains a preferred choice due to a combination of factors including its often repetitive nature and low skill requirement. The associated demand for grasping tools has led to an ever increasing market for manipulation end-of-arm tooling from which a handful of industry giants have emerged. Based on data publicly accessible from the catalogs of several well-known companies, this paper aims at presenting a review on the characteristics of pneumatic, parallel, two-finger, industrial grippers. Included in the specifications under scrutiny in this paper are: stroke, force, weight, as well as a performance index referred to as the C-factor. This last index is a combination of three of the aforementioned characteristics and has been proposed in the literature as a measure of the efficiency that a gripper is capable of reaching. As will be shown, by analyzing hundreds of specifications it appears that, indeed, the range of C-factors of the grippers built by one company can often be consistently different from those of competitors. Furthermore, an important bias for certain typical specifications can be observed in most of the grippers, which seems at odds with the requirements of modern robotic systems. This latter remark will open up a closing discussion proposed in the last part of this paper on the future evolution of grippers based on emerging new products.
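The listing does not spell out the C-factor formula beyond calling it a combination of three of the characteristics (stroke, force, weight); one plausible reading, grip force times stroke divided by weight, is used purely for illustration below, with made-up catalog numbers:

```python
# Illustrative comparison of grippers by a force*stroke/weight index.
# The exact C-factor formula is not given in this listing, so the
# F*s/m form below is an assumption for illustration; the catalog
# numbers are made up, not from any manufacturer.
grippers = {
    "gripper_A": {"force_N": 140.0, "stroke_mm": 10.0, "mass_kg": 0.30},
    "gripper_B": {"force_N": 500.0, "stroke_mm": 6.0,  "mass_kg": 0.90},
    "gripper_C": {"force_N": 80.0,  "stroke_mm": 16.0, "mass_kg": 0.20},
}

def c_factor(g):
    # (N * mm) / kg; consistent units matter more than the absolute scale
    return g["force_N"] * g["stroke_mm"] / g["mass_kg"]

for name, spec in sorted(grippers.items(), key=lambda kv: -c_factor(kv[1])):
    print(f"{name}: C = {c_factor(spec):,.0f} N*mm/kg")
```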