
Showing papers by "Chalmers University of Technology" published in 2017


Journal ArticleDOI
TL;DR: The Global Burden of Diseases, Injuries, and Risk Factors Study 2016 (GBD 2016) provides a comprehensive assessment of prevalence, incidence, and years lived with disability (YLDs) for 328 causes in 195 countries and territories from 1990 to 2016.

10,401 citations


Journal ArticleDOI
18 Aug 2017-Science
TL;DR: A Human Pathology Atlas has been created as part of the Human Protein Atlas program to explore the prognostic role of each protein-coding gene in 17 different cancers, and reveals that gene expression of individual tumors within a particular cancer varied considerably and could exceed the variation observed between distinct cancer types.
Abstract: Cancer is one of the leading causes of death, and there is great interest in understanding the underlying molecular mechanisms involved in the pathogenesis and progression of individual tumors. We used systems-level approaches to analyze the genome-wide transcriptome of the protein-coding genes of 17 major cancer types with respect to clinical outcome. A general pattern emerged: Shorter patient survival was associated with up-regulation of genes involved in cell growth and with down-regulation of genes involved in cellular differentiation. Using genome-scale metabolic models, we show that cancer patients have widespread metabolic heterogeneity, highlighting the need for precise and personalized medicine for cancer treatment. All data are presented in an interactive open-access database (www.proteinatlas.org/pathology) to allow genome-wide exploration of the impact of individual proteins on clinical outcomes.

2,276 citations


Journal ArticleDOI
TL;DR: There remains growing interest in magnesium (Mg) and its alloys, as they are the lightest structural metallic materials. As discussed in this paper, Mg alloys have the potential to enable the design of lighter engineered systems, with positive implications for reduced energy consumption.

1,173 citations


Journal ArticleDOI
TL;DR: The time is ripe for describing some of the recent developments in superconducting devices, systems and applications, as well as practical applications of QIP such as computation and simulation in Physics and Chemistry.
Abstract: During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of quantum supremacy with fifty qubits is anticipated in just a few years. Quantum supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.

809 citations


Journal ArticleDOI
TL;DR: The analysis of the theoretical approaches can serve as an introduction to the CE concept, while the developed tools can be instrumental for designing new CE cases and guiding CE implementation.
Abstract: The paper provides an overview of the literature on Circular Economy (CE) theoretical approaches, strategies and implementation cases. After analyzing different CE approaches and the underlying principles, the paper proceeds with the main goal of developing tools for CE implementation. Two tools are presented. The first is a CE Strategies Database, which includes 45 CE strategies that are applicable to different parts of the value chain. The second is a CE Implementation Database, which includes over 100 case studies categorized by Scope, Parts of the Value Chain that are involved, as well as by the used Strategy and Implementation Level. An analysis of the state of the art in CE implementation is also included in the paper. One of the observations from the analysis is that while such Parts of the Value Chain as Recovery/Recycling and Consumption/Use are prominently featured, others, including Manufacturing and Distribution, are rarely involved in CE. On the other hand, the Implementation Levels of the used Strategies indicate that many market-ready solutions exist already. The Scope of current CE implementation considers selected products, materials and sectors, while system changes to the economy are rarely suggested. Finally, the CE monitoring methods and suggestions for future development are also discussed in this paper. The analysis of the theoretical approaches can serve as an introduction to the CE concept, while the developed tools can be instrumental for designing new CE cases.

772 citations


Journal ArticleDOI
TL;DR: In this paper, the deformation behavior and microstructure evolution of the eutectic and near-eutectic high entropy alloys were thoroughly studied using a combination of techniques, including strain measurement by digital image correlation, in-situ synchrotron X-ray diffraction, and transmission electron microscopy.

681 citations


Proceedings ArticleDOI
03 Apr 2017
TL;DR: This taxonomy captures major architectural characteristics of blockchains and the impact of their principal design decisions and is intended to help with important architectural considerations about the performance and quality attributes of blockchain-based systems.
Abstract: Blockchain is an emerging technology for decentralised and transactional data sharing across a large network of untrusted participants. It enables new forms of distributed software architectures, where agreement on shared states can be established without trusting a central integration point. A major difficulty for architects designing applications based on blockchain is that the technology has many configurations and variants. Since blockchains are at an early stage, there is little product data or reliable technology evaluation available to compare different blockchains. In this paper, we propose how to classify and compare blockchains and blockchain-based systems to assist with the design and assessment of their impact on software architectures. Our taxonomy captures major architectural characteristics of blockchains and the impact of their principal design decisions. This taxonomy is intended to help with important architectural considerations about the performance and quality attributes of blockchain-based systems.

579 citations


Journal ArticleDOI
Timothy W. Shimwell1, Huub Röttgering1, Philip Best2, Wendy L. Williams3, T. J. Dijkema4, F. de Gasperin1, Martin J. Hardcastle3, George Heald5, D. N. Hoang1, A. Horneffer6, Huib Intema1, Elizabeth K. Mahony7, Elizabeth K. Mahony4, Subhash C. Mandal1, A. P. Mechev1, Leah K. Morabito1, J. B. R. Oonk4, J. B. R. Oonk1, D. A. Rafferty8, E. Retana-Montenegro1, J. Sabater2, Cyril Tasse9, Cyril Tasse10, R. J. van Weeren11, Marcus Brüggen8, Gianfranco Brunetti12, Krzysztof T. Chyzy13, John Conway14, Marijke Haverkorn15, Neal Jackson16, Matt J. Jarvis17, Matt J. Jarvis18, John McKean4, George K. Miley1, Raffaella Morganti19, Raffaella Morganti4, Glenn J. White20, Glenn J. White21, Michael W. Wise22, Michael W. Wise4, I. van Bemmel23, Rainer Beck6, Marisa Brienza4, Annalisa Bonafede8, G. Calistro Rivera1, Rossella Cassano12, A. O. Clarke16, D. Cseh15, Adam Deller4, A. Drabent, W. van Driel9, W. van Driel24, D. Engels8, Heino Falcke15, Heino Falcke4, Chiara Ferrari25, S. Fröhlich26, M. A. Garrett4, Jeremy J. Harwood4, Volker Heesen27, Matthias Hoeft23, Cathy Horellou14, Frank P. Israel1, Anna D. Kapińska28, Anna D. Kapińska29, Magdalena Kunert-Bajraszewska, D. J. McKay30, D. J. McKay21, N. R. Mohan31, Emanuela Orru4, R. Pizzo19, R. Pizzo4, Isabella Prandoni12, Dominik J. Schwarz32, Aleksandar Shulevski4, M. Sipior4, Daniel J. Smith3, S. S. Sridhar19, S. S. Sridhar4, Matthias Steinmetz33, Andra Stroe34, Eskil Varenius14, P. van der Werf1, J. A. Zensus6, Jonathan T. L. Zwart17, Jonathan T. L. Zwart35 
TL;DR: The LOFAR Two-metre Sky Survey (LoTSS) as mentioned in this paper is a deep 120-168 MHz imaging survey that will eventually cover the entire northern sky, where each of the 3170 pointings will be observed for 8 h, which, at most declinations, is sufficient to produce ~5″ resolution images with a sensitivity of ~100 μJy/beam and accomplish the main scientific aims of the survey, which are to explore the formation and evolution of massive black holes, galaxies, clusters of galaxies and large-scale structure.
Abstract: The LOFAR Two-metre Sky Survey (LoTSS) is a deep 120-168 MHz imaging survey that will eventually cover the entire northern sky. Each of the 3170 pointings will be observed for 8 h, which, at most declinations, is sufficient to produce ~5″ resolution images with a sensitivity of ~100 μJy/beam and accomplish the main scientific aims of the survey, which are to explore the formation and evolution of massive black holes, galaxies, clusters of galaxies and large-scale structure. Owing to the compact core and long baselines of LOFAR, the images provide excellent sensitivity to both highly extended and compact emission. For legacy value, the data are archived at high spectral and time resolution to facilitate subarcsecond imaging and spectral line studies. In this paper we provide an overview of the LoTSS. We outline the survey strategy, the observational status, the current calibration techniques, a preliminary data release, and the anticipated scientific impact. The preliminary images that we have released were created using a fully automated but direction-independent calibration strategy and are significantly more sensitive than those produced by any existing large-area low-frequency survey. In excess of 44 000 sources are detected in the images that have a resolution of 25″, typical noise levels of less than 0.5 mJy/beam, and cover an area of over 350 square degrees in the region of the HETDEX Spring Field (right ascension 10h45m00s to 15h30m00s and declination 45°00′00″ to 57°00′00″).

447 citations


Journal ArticleDOI
TL;DR: This paper proposes a common evaluation framework for automatic stroke lesion segmentation from MRI, describes the publicly available datasets, and presents the results of the two sub‐challenges: Sub‐Acute Stroke Lesion Segmentation (SISS) and Stroke Perfusion Estimation (SPES).

417 citations


Journal ArticleDOI
TL;DR: The authors discuss the importance of battery energy storage in densely populated urban areas, where traditional storage techniques such as pumped hydroelectric energy storage and compressed-air energy storage are often not feasible.
Abstract: Battery energy storage effectively stabilizes the electric grid and aids renewable integration by balancing supply and demand in real time. Such storage is especially crucial in densely populated urban areas, where traditional storage techniques such as pumped hydroelectric energy storage and compressed-air energy storage are often not feasible.

394 citations


Journal ArticleDOI
TL;DR: In this article, the authors highlight the functionality and data models necessary for real-time geometry assurance, a concept often referred to as a Digital Twin, and show how it allows moving from mass production to more individualized production.
Abstract: Simulations of products and production processes are extensively used in the engineering phase. To secure good geometrical quality in the final product, tolerances, locator positions, clamping strategies, welding sequence, etc. are optimized during design and pre-production. Faster optimization algorithms, increased computer power and amount of available data, can leverage the area of simulation toward real-time control and optimization of products and production systems - a concept often referred to as a Digital Twin. This paper specifies and highlights functionality and data models necessary for real-time geometry assurance and how this concept allows moving from mass production to more individualized production.

Journal ArticleDOI
TL;DR: It is illustrated that, for the 1-bit quantized case, pilot-based channel estimation together with maximal-ratio combining or zero-forcing detection enables reliable multi-user communication with high-order constellations, in spite of the severe nonlinearity introduced by the ADCs.
Abstract: We investigate the uplink throughput achievable by a multiple-user (MU) massive multiple-input multiple-output (MIMO) system, in which the base station is equipped with a large number of low-resolution analog-to-digital converters (ADCs). Our focus is on the case where neither the transmitter nor the receiver have any a priori channel state information. This implies that the fading realizations have to be learned through pilot transmission followed by channel estimation at the receiver, based on coarsely quantized observations. We propose a novel channel estimator, based on Bussgang’s decomposition, and a novel approximation to the rate achievable with finite-resolution ADCs, both for the case of finite-cardinality constellations and of Gaussian inputs, that is accurate for a broad range of system parameters. Through numerical results, we illustrate that, for the 1-bit quantized case, pilot-based channel estimation together with maximal-ratio combining or zero-forcing detection enables reliable multi-user communication with high-order constellations, in spite of the severe nonlinearity introduced by the ADCs. Furthermore, we show that the rate achievable in the infinite-resolution (no quantization) case can be approached using ADCs with only a few bits of resolution. We finally investigate the robustness of low-ADC-resolution MU-MIMO uplink against receive power imbalances between the different users, caused for example by imperfect power control.
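
The main effect reported above, that reliable detection survives 1-bit quantization when the array is large, can be illustrated with a small numerical sketch. This is not the authors' Bussgang-based estimator; the array size, SNR, pilot length and QPSK constellation below are arbitrary assumptions, and the channel estimate is a plain least-squares fit on the quantized pilots.

```python
# Illustrative sketch: 1-bit quantized massive MIMO uplink with MRC detection.
# All parameters (antennas, users, SNR, pilot length, QPSK) are arbitrary
# assumptions, and the estimator is crude least-squares on quantized pilots,
# not the Bussgang-based estimator proposed in the paper.
import numpy as np

rng = np.random.default_rng(0)
N, K, snr_db, n_symbols = 128, 4, 0.0, 2000        # antennas, users, SNR, data length
sigma2 = 10 ** (-snr_db / 10)

def quantize_1bit(x):
    """Per-antenna 1-bit ADCs on the in-phase and quadrature branches."""
    return (np.sign(x.real) + 1j * np.sign(x.imag)) / np.sqrt(2)

H = (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))) / np.sqrt(2)

# Pilot phase: random QPSK pilots, least-squares estimate from quantized data.
P = np.exp(1j * np.pi / 2 * rng.integers(0, 4, (K, 4 * K)))
Yp = quantize_1bit(H @ P + np.sqrt(sigma2 / 2) *
                   (rng.standard_normal((N, P.shape[1])) +
                    1j * rng.standard_normal((N, P.shape[1]))))
H_hat = Yp @ P.conj().T @ np.linalg.inv(P @ P.conj().T)

# Data phase: QPSK symbols, 1-bit quantization, maximal-ratio combining.
S = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, (K, n_symbols))))
Y = quantize_1bit(H @ S + np.sqrt(sigma2 / 2) *
                  (rng.standard_normal((N, n_symbols)) +
                   1j * rng.standard_normal((N, n_symbols))))
S_mrc = H_hat.conj().T @ Y
S_det = (np.sign(S_mrc.real) + 1j * np.sign(S_mrc.imag)) / np.sqrt(2)
ser = np.mean(S_det != (np.sign(S.real) + 1j * np.sign(S.imag)) / np.sqrt(2))
print(f"symbol error rate with 1-bit ADCs and MRC: {ser:.4f}")
```

Even with this crude estimator, the large number of antennas should average out most of the quantization distortion in this toy setup, which is the qualitative point the paper makes rigorous.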

Journal ArticleDOI
TL;DR: NFC/A bioink is suitable for bioprinting iPSCs to support cartilage production in co-cultures with irradiated chondrocytes. A marked increase in cell number within the cartilaginous tissue was detected by 2-photon fluorescence microscopy, indicating the importance of high cell density in the pursuit of achieving good survival after printing.
Abstract: Cartilage lesions can progress into secondary osteoarthritis and cause severe clinical problems in numerous patients. As a prospective treatment of such lesions, human-derived induced pluripotent stem cells (iPSCs) were shown to be 3D bioprinted into cartilage mimics using a nanofibrillated cellulose (NFC) composite bioink when co-printed with irradiated human chondrocytes. Two bioinks were investigated: NFC with alginate (NFC/A) or hyaluronic acid (NFC/HA). Low proliferation and phenotypic changes away from pluripotency were seen in the case of NFC/HA. However, in the case of the 3D-bioprinted NFC/A (60/40, dry weight % ratio) constructs, pluripotency was initially maintained, and after five weeks, hyaline-like cartilaginous tissue with collagen type II expression and lacking tumorigenic Oct4 expression was observed. Moreover, a marked increase in cell number within the cartilaginous tissue was detected by 2-photon fluorescence microscopy, indicating the importance of high cell densities in the pursuit of achieving good survival after printing. We conclude that NFC/A bioink is suitable for bioprinting iPSCs to support cartilage production in co-cultures with irradiated chondrocytes.

Journal ArticleDOI
20 Jul 2017
TL;DR: In this article, the authors provide an overview of available high-index materials and existing fabrication techniques for the realization of all-dielectric nanostructures and compare performance of the chosen materials in terms of scattering efficiencies and Q factors of the magnetic Mie resonance.
Abstract: All-dielectric nanophotonics is an exciting and rapidly developing area of nano-optics that utilizes the resonant behavior of high-index low-loss dielectric nanoparticles to enhance light–matter interaction at the nanoscale. When experimental implementation of a specific all-dielectric nanostructure is desired, two crucial factors have to be considered: the choice of a high-index material and a fabrication method. The degree to which various effects can be enhanced relies on the dielectric response of the chosen material as well as the fabrication accuracy. Here, we provide an overview of available high-index materials and existing fabrication techniques for the realization of all-dielectric nanostructures. We compare performance of the chosen materials in the visible and IR spectral ranges in terms of scattering efficiencies and Q factors of the magnetic Mie resonance. Methods for all-dielectric nanostructure fabrication are discussed and their advantages and disadvantages are highlighted. We also present an outlook for the search for better materials with higher refractive indices and novel fabrication methods that will enable low-cost manufacturing of optically resonant high-index nanoparticles. We believe that this information will be valuable across the field of nanophotonics and particularly for the design of resonant all-dielectric nanostructures.
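
For orientation, the scattering efficiency and quality factor used to compare materials above are standard Mie-theory quantities; the expressions below are textbook definitions rather than results from the paper.

```latex
% Standard Mie-theory expressions (textbook definitions, not from the paper):
% scattering efficiency of a sphere with size parameter x = 2\pi R/\lambda in
% the surrounding medium, with a_n, b_n the electric and magnetic Mie
% coefficients; the magnetic dipole resonance discussed in the text is
% governed by b_1, and its Q factor can be estimated from the resonance width.
\[
  Q_{\mathrm{sca}} = \frac{2}{x^{2}} \sum_{n=1}^{\infty} (2n+1)
  \left( |a_n|^{2} + |b_n|^{2} \right),
  \qquad
  Q \approx \frac{\omega_0}{\Delta\omega}.
\]
```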

Journal ArticleDOI
TL;DR: GECKO is presented, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance and turnover number.
Abstract: Genome-scale metabolic models (GEMs) are widely used to calculate metabolic phenotypes. They rely on defining a set of constraints, the most common of which is that the production of metabolites and/or growth are limited by the carbon source uptake rate. However, enzyme abundances and kinetics, which act as limitations on metabolic fluxes, are not taken into account. Here, we present GECKO, a method that enhances a GEM to account for enzymes as part of reactions, thereby ensuring that each metabolic flux does not exceed its maximum capacity, equal to the product of the enzyme's abundance and turnover number. We applied GECKO to a Saccharomyces cerevisiae GEM and demonstrated that the new model could correctly describe phenotypes that the previous model could not, particularly under high enzymatic pressure conditions, such as yeast growing on different carbon sources in excess, coping with stress, or overexpressing a specific pathway. GECKO also allows direct integration of quantitative proteomics data; by doing so, we significantly reduced the flux variability of the model in over 60% of metabolic reactions. Additionally, the model gives insight into the distribution of enzyme usage between and within metabolic pathways. The developed method and model are expected to increase the use of model-based design in metabolic engineering.
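
The capacity constraint described above can be written compactly in standard flux-balance notation; the following is a generic restatement of the idea with illustrative symbols, not the exact GECKO formulation.

```latex
% Enzyme-constrained flux balance (generic restatement of the idea in the
% abstract; symbol names are illustrative): each flux v_i is capped by the
% abundance [E_i] of its catalysing enzyme times its turnover number k_cat,i.
% Reversible reactions are typically split so that all fluxes are non-negative.
\[
  \max_{v}\; c^{\top} v
  \quad \text{s.t.} \quad
  S\,v = 0,
  \qquad
  0 \le v_i \le k_{\mathrm{cat},i}\,[E_i]
  \quad \text{for each enzyme-catalysed reaction } i.
\]
```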

Journal ArticleDOI
TL;DR: In this paper, the authors give an overview of the sources of NdFeB permanent magnets related to their applications, followed by a summary of various available technologies to recover the rare-earth elements (REEs) from these magnets, including physical processing and separation, direct alloy production, and metallurgical extraction and recovery.
Abstract: NdFeB permanent magnets have different life cycles, depending on the applications: from as short as 2–3 years in consumer electronics to 20–30 years in wind turbines. The size of the magnets ranges from less than 1 g in small consumer electronics to about 1 kg in electric vehicles (EVs) and hybrid electric vehicles (HEVs), and can be as large as 1000–2000 kg in the generators of modern wind turbines. NdFeB permanent magnets contain about 31–32 wt% of rare-earth elements (REEs). Recycling of REEs contained in this type of magnets from the End-of-Life (EOL) products will play an important and complementary role in the total supply of REEs in the future. However, collection and recovery of the magnets from small consumer electronics imposes great social and technological challenges. This paper gives an overview of the sources of NdFeB permanent magnets related to their applications, followed by a summary of the various available technologies to recover the REEs from these magnets, including physical processing and separation, direct alloy production, and metallurgical extraction and recovery. At present, no commercial operation has been identified for recycling the EOL NdFeB permanent magnets and the recovery of the associated REE content. Most of the processing methods are still at various research and development stages. It is estimated that in the coming 10–15 years, the recycled REEs from EOL permanent magnets will play a significant role in the total REE supply in the magnet sector, provided that efficient technologies are developed and implemented in practice.

Journal ArticleDOI
TL;DR: In this paper, the authors adopt a theory-based approach to synthesize research on the effectiveness of payments for environmental services in achieving environmental objectives and socio-economic co-benefits in varying contexts.

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the performance of linear precoders, such as maximal-ratio transmission and zero-forcing, subject to coarse quantization and derived a closed-form approximation of the rate achievable under such quantization.
Abstract: Massive multiuser (MU) multiple-input multiple-output (MIMO) is foreseen to be one of the key technologies in fifth-generation wireless communication systems. In this paper, we investigate the problem of downlink precoding for a narrowband massive MU-MIMO system with low-resolution digital-to-analog converters (DACs) at the base station (BS). We analyze the performance of linear precoders, such as maximal-ratio transmission and zero-forcing, subject to coarse quantization. Using Bussgang’s theorem, we derive a closed-form approximation of the rate achievable under such coarse quantization. Our results reveal that the performance attainable with infinite-resolution DACs can be approached using DACs having only 3–4 bits of resolution, depending on the number of BS antennas and the number of user equipments (UEs). For the case of 1-bit DACs, we also propose novel nonlinear precoding algorithms that significantly outperform linear precoders at the cost of an increased computational complexity. Specifically, we show that nonlinear precoding incurs only a 3 dB penalty compared with the infinite-resolution case for an uncoded bit-error rate of 10^-3, in a system with 128 BS antennas that uses 1-bit DACs and serves 16 single-antenna UEs. In contrast, the penalty for linear precoders is about 8 dB.
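
The Bussgang step used in this kind of analysis can be summarised as follows; this is the standard identity for a 1-bit quantizer driven by a Gaussian input, stated for orientation rather than as a reproduction of the paper's derivation.

```latex
% Bussgang decomposition for a 1-bit quantizer (standard identity, stated for
% orientation; the paper's analysis builds on this but is more general).
% For a circularly-symmetric Gaussian input x with variance \sigma_x^2 and
% Q(x) = (\mathrm{sgn}(\Re x) + j\,\mathrm{sgn}(\Im x))/\sqrt{2}:
\[
  Q(x) = G\,x + d, \qquad
  G = \sqrt{\frac{2}{\pi}}\,\frac{1}{\sigma_x}, \qquad
  \mathbb{E}\!\left[x^{*} d\right] = 0,
\]
% i.e. the quantizer acts as a linear gain plus a distortion term uncorrelated
% with the input, which is what makes closed-form rate approximations tractable.
```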

Proceedings Article
17 Jul 2017
TL;DR: In this paper, a family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability, was proposed, where the algorithms learn a "balanced" representation such that the induced treated and control distributions look similar.
Abstract: There is intense interest in applying machine learning to problems of causal inference in fields such as healthcare, economics and education. In particular, individual-level causal inference has important applications such as precision medicine. We give a new theoretical analysis and family of algorithms for predicting individual treatment effect (ITE) from observational data, under the assumption known as strong ignorability. The algorithms learn a "balanced" representation such that the induced treated and control distributions look similar. We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation. We use Integral Probability Metrics to measure distances between distributions, deriving explicit bounds for the Wasserstein and Maximum Mean Discrepancy (MMD) distances. Experiments on real and simulated data show the new algorithms match or outperform the state-of-the-art.
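
As a concrete illustration of the distributional term in the bound, the following is a minimal RBF-kernel estimator of the squared MMD between treated and control representations. It is a generic sketch, not the authors' code, and the bandwidth and data below are placeholders.

```python
# Minimal squared-MMD estimator between treated and control representations.
# Generic sketch with an RBF kernel; not the authors' implementation, and the
# bandwidth and sample data are placeholders.
import numpy as np

def mmd2_rbf(X, Y, bandwidth=1.0):
    """Biased estimate of squared MMD between samples X (n,d) and Y (m,d)."""
    def gram(A, B):
        sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * bandwidth**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

rng = np.random.default_rng(0)
phi_treated = rng.normal(0.5, 1.0, size=(200, 8))   # representation of treated units
phi_control = rng.normal(0.0, 1.0, size=(300, 8))   # representation of control units
print("squared MMD:", mmd2_rbf(phi_treated, phi_control))
```

In the paper's setting, this quantity (or a Wasserstein distance) is what penalizes imbalance between the induced treated and control distributions during representation learning.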

Journal ArticleDOI
TL;DR: A mitogenic hydrogel system based on alginate sulfate is identified which potently supports the chondrocyte phenotype but is not printable due to its rheological properties (no yield point).
Abstract: One of the challenges of bioprinting is to identify bioinks which support cell growth, tissue maturation, and ultimately the formation of functional grafts for use in regenerative medicine. The influence of this new biofabrication technology on biology of living cells, however, is still being evaluated. Recently we have identified a mitogenic hydrogel system based on alginate sulfate which potently supports chondrocyte phenotype, but is not printable due to its rheological properties (no yield point). To convert alginate sulfate to a printable bioink, it was combined with nanocellulose, which has been shown to possess very good printability. The alginate sulfate/nanocellulose ink showed good printing properties and the non-printed bioink material promoted cell spreading, proliferation, and collagen II synthesis by the encapsulated cells. When the bioink was printed, the biological performance of the cells was highly dependent on the nozzle geometry. Cell spreading properties were maintained with the lowest extrusion pressure and shear stress. However, extruding the alginate sulfate/nanocellulose bioink and chondrocytes significantly compromised cell proliferation, particularly when using small diameter nozzles and valves.

Journal ArticleDOI
TL;DR: This work proposes a direct localization approach in which the position of a user is localized by jointly processing the observations obtained at distributed massive MIMO base stations, and leads to improved performance results compared to previous existing methods.
Abstract: Large-scale MIMO systems are well known for their advantages in communications, but they also have the potential for providing very accurate localization, thanks to their high angular resolution. A difficult problem arising indoors and outdoors is localizing users over multipath channels. Localization based on angle of arrival (AOA) generally involves a two-step procedure, where signals are first processed to obtain a user's AOA at different base stations, followed by triangulation to determine the user's position. In the presence of multipath, the performance of these methods is greatly degraded due to the inability to correctly detect and/or estimate the AOA of the line-of-sight (LOS) paths. To counter the limitations of this two-step procedure which is inherently suboptimal, we propose a direct localization approach in which the position of a user is localized by jointly processing the observations obtained at distributed massive MIMO base stations. Our approach is based on a novel compressed sensing framework that exploits channel properties to distinguish LOS from non-LOS signal paths, and leads to improved performance results compared to previous existing methods.

Journal ArticleDOI
TL;DR: In this paper, the authors presented a catalogue of ~3000 submillimetre sources detected at 850 μm over ~5 deg² surveyed as part of the SCUBA-2 Cosmology Legacy Survey (S2CLS).
Abstract: We present a catalogue of ~3000 submillimetre sources detected (≥3.5σ) at 850 μm over ~5 deg² surveyed as part of the James Clerk Maxwell Telescope (JCMT) SCUBA-2 Cosmology Legacy Survey (S2CLS). This is the largest survey of its kind at 850 μm, increasing the sample size of 850 μm selected submillimetre galaxies by an order of magnitude. The wide 850 μm survey component of S2CLS covers the extragalactic fields: UKIDSS-UDS, COSMOS, Akari-NEP, Extended Groth Strip, Lockman Hole North, SSA22 and GOODS-North. The average 1σ depth of S2CLS is 1.2 mJy beam⁻¹, approaching the SCUBA-2 850 μm confusion limit, which we determine to be σ_c ≈ 0.8 mJy beam⁻¹. We measure the 850 μm number counts, reducing the Poisson errors on the differential counts to approximately 4 per cent at S_850 ≈ 3 mJy. With several independent fields, we investigate field-to-field variance, finding that the number counts on 0.5°-1° scales are generally within 50 per cent of the S2CLS mean for S_850 > 3 mJy, with scatter consistent with the Poisson and estimated cosmic variance uncertainties, although there is a marginal (2σ) density enhancement in GOODS-North. The observed counts are in reasonable agreement with recent phenomenological and semi-analytic models, although determining the shape of the faint-end slope requires deeper data. At S_850 > 10 mJy there are approximately 10 sources per square degree, and we detect the distinctive up-turn in the number counts indicative of the detection of local sources of 850 μm emission.

Journal ArticleDOI
TL;DR: These findings have identified matrix remodeling, in the absence of cytoskeletal tension generation, as a previously unknown strategy to maintain stemness in 3D.
Abstract: Neural progenitor cell (NPC) culture within three-dimensional (3D) hydrogels is an attractive strategy for expanding a therapeutically relevant number of stem cells. However, relatively little is known about how 3D material properties such as stiffness and degradability affect the maintenance of NPC stemness in the absence of differentiation factors. Over a physiologically relevant range of stiffness from ~0.5 to 50 kPa, stemness maintenance did not correlate with initial hydrogel stiffness. In contrast, hydrogel degradation was both correlated with, and necessary for, maintenance of NPC stemness. This requirement for degradation was independent of cytoskeletal tension generation and presentation of engineered adhesive ligands, instead relying on matrix remodelling to facilitate cadherin-mediated cell-cell contact and promote β-catenin signalling. In two additional hydrogel systems, permitting NPC-mediated matrix remodelling proved to be a generalizable strategy for stemness maintenance in 3D. Our findings have identified matrix remodelling, in the absence of cytoskeletal tension generation, as a previously unknown strategy to maintain stemness in 3D.

Journal ArticleDOI
01 Dec 2017
TL;DR: This work model the service placement problem for IoT applications over fog resources as an optimization problem, which explicitly considers the heterogeneity of applications and resources in terms of Quality of Service attributes, and proposes a genetic algorithm as a problem resolution heuristic.
Abstract: The Internet of Things (IoT) leads to an ever-growing presence of ubiquitous networked computing devices in public, business, and private spaces. These devices do not simply act as sensors, but feature computational, storage, and networking resources. Being located at the edge of the network, these resources can be exploited to execute IoT applications in a distributed manner. This concept is known as fog computing. While the theoretical foundations of fog computing are already established, there is a lack of resource provisioning approaches to enable the exploitation of fog-based computational resources. To resolve this shortcoming, we present a conceptual fog computing framework. Then, we model the service placement problem for IoT applications over fog resources as an optimization problem, which explicitly considers the heterogeneity of applications and resources in terms of Quality of Service attributes. Finally, we propose a genetic algorithm as a problem resolution heuristic and show, through experiments, that the service execution can achieve a reduction of network communication delays when the genetic algorithm is used, and a better utilization of fog resources when the exact optimization method is applied.
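
To make the placement problem concrete, here is a minimal genetic-algorithm sketch that assigns services to fog nodes so as to minimise total delay under capacity constraints. All numbers are invented for illustration, and this is not the framework or heuristic proposed in the paper.

```python
# Minimal GA sketch for fog service placement: assign each service to a fog
# node so that total network delay is minimised without exceeding node
# capacity. All numbers are invented for illustration; this is not the
# paper's framework or resolution heuristic.
import numpy as np

rng = np.random.default_rng(1)
n_services, n_nodes = 12, 4
delay = rng.uniform(1, 10, size=(n_services, n_nodes))   # delay of service i on node j
demand = rng.integers(1, 4, size=n_services)             # resource demand per service
capacity = np.full(n_nodes, 12)                          # resource capacity per node

def fitness(assign):
    load = np.bincount(assign, weights=demand, minlength=n_nodes)
    penalty = 1000 * np.maximum(load - capacity, 0).sum()   # punish overloaded nodes
    return delay[np.arange(n_services), assign].sum() + penalty

pop = rng.integers(0, n_nodes, size=(40, n_services))       # random initial population
for _ in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[:20]]                   # truncation selection
    cut = rng.integers(1, n_services, size=20)
    children = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % 20][c:]))
                         for i, c in enumerate(cut)])        # one-point crossover
    mutate = rng.random(children.shape) < 0.05               # random reassignment
    children[mutate] = rng.integers(0, n_nodes, size=mutate.sum())
    pop = np.vstack((parents, children))

best = pop[np.argmin([fitness(ind) for ind in pop])]
print("best placement:", best, "total delay:", fitness(best))
```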

Journal ArticleDOI
TL;DR: The Line Information System Architecture (LISA) is presented, an event-driven architecture featuring loose coupling, a prototype-oriented information model and formalised transformation services, designed to enable flexible factory integration and data utilisation.
Abstract: Future manufacturing systems need to be more flexible, to embrace tougher and constantly changing market demands. They need to make better use of plant data, ideally utilising all data from the entire plant. Low-level data should be refined to real-time information for decision-making, to facilitate competitiveness through informed and timely decisions. The Line Information System Architecture (LISA), is presented in this paper. It is an event-driven architecture featuring loose coupling, a prototype-oriented information model and formalised transformation services. LISA is designed to enable flexible factory integration and data utilisation. The focus of LISA is on integration of devices and services on all levels, simplifying hardware changes and integration of new smart services as well as supporting continuous improvements on information visualisation and control. The architecture has been evaluated on both real industrial data and industrial demonstrators and it is also being installed at a large automotive company. This article is an extended and revised version of the paper presented at the 2015 IFAC Symposium on Information Control in Manufacturing (INCOM 2015). The paper has been restructured in regards to the order and title of the chapters, and additional information about the integration between devices and services aspects have been added. The introduction and the general structure of the paper now better highlight the contributions of the paper and the uniqueness of the framework.
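
A toy sketch of the event-driven pattern described above may help: devices publish raw events to a bus, and a transformation service refines them into decision-level information. This is purely illustrative and does not reflect LISA's actual information model or service interfaces.

```python
# Toy sketch of an event-driven, loosely coupled integration pattern: devices
# publish raw events to a bus, a transformation service refines them and
# republishes higher-level information. Illustrative only; not LISA's actual
# information model or services.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)
    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)
    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()

def transform_raw_signal(event):
    # Formalised transformation: refine low-level data into decision information.
    status = "alarm" if event["value"] > 80 else "normal"
    bus.publish("line/status", {"machine": event["machine"], "status": status})

bus.subscribe("plc/raw", transform_raw_signal)
bus.subscribe("line/status", lambda e: print("dashboard:", e))

bus.publish("plc/raw", {"machine": "robot-7", "value": 93})   # -> dashboard: alarm
```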

01 Jan 2017
TL;DR: In this article, the authors reviewed the economics of global warming with special emphasis on how the cost depends on the discount rate and on how costs in poor and rich regions are aggregated into a global cost estimate.
Abstract: The economics of global warming is reviewed with special emphasis on how the cost depends on the discount rate and on how costs in poor and rich regions are aggregated into a global cost estimate. Both of these factors depend on the assumptions made concerning the underlying utility and welfare functions. It is common to aggregate welfare gains and losses across generations and countries as if the utility of money were constant, but it is not. If we assume that a CO2-equivalent doubling implies costs equal to 1.5% of the income in both high and low income countries, a pure rate of time preference equal to zero, and a utility function which is logarithmic in income, then the marginal cost of CO2 emissions is estimated at 260-590 USD/ton C for a time horizon in the range 300-1000 years, an estimate which is large enough to justify significant reductions of CO2 emissions on purely economic grounds. The estimate is approximately 50-100 times larger than the estimate made by Nordhaus in his DICE model, and the difference is almost completely due to the choice of discount rate and the weight given to the costs in the developing world, as well as a more accurate model of the carbon cycle. Finally, the sensitivity of the marginal cost estimate with respect to several parameters is analyzed.
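
The role of the discount rate and the utility function can be made explicit with a standard welfare formulation consistent with the stated assumptions (zero pure rate of time preference, logarithmic utility); this is a generic restatement, not the paper's exact model.

```latex
% Standard welfare formulation behind the discussion above (generic
% restatement, not the paper's exact model): utility of consumption is
% aggregated over regions r and time t with pure rate of time preference \rho.
\[
  W = \sum_{t} \frac{1}{(1+\rho)^{t}} \sum_{r} N_{r,t}\, u\!\left(c_{r,t}\right),
  \qquad u(c) = \ln c .
\]
% With \rho = 0 and logarithmic utility (elasticity of marginal utility
% \eta = 1), the Ramsey rule gives a consumption discount rate equal to the
% per-capita growth rate, r = \rho + \eta g = g, which is why the marginal
% cost estimate is so sensitive to these choices.
```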

Journal ArticleDOI
TL;DR: The knowledge-based view of the firm is drawn upon to investigate how search in external knowledge sources and information technology for knowledge absorption jointly influence process innovation performance, and how firms should coordinate strategies for sourcing external knowledge with specific IT investments in order to improve their innovation performance.
Abstract: Prior information systems research highlights the vital role of information technology (IT) for innovation in firms. At the same time, innovation literature has shown that accessing and integrating knowledge from sources that reside outside the firm, such as customers, competitors, universities, or consultants, is critical to firms' innovative success. In this paper, we draw on the knowledge-based view of the firm to investigate how search in external knowledge sources and information technology for knowledge absorption jointly influence process innovation performance. Our model is tested on a nine-year panel (2003-2011) of Swiss firms from a wide range of manufacturing industries. Using instrumental variables, and disaggregating by type of IT, we find that data access systems and network connectivity hold very different potential for the effective absorption of external knowledge, and the subsequent realized economic gains from process innovation. Against the backdrop of today's digital transformation, our findings demonstrate how firms should coordinate strategies for sourcing external knowledge with specific IT investments in order to improve their innovation performance.

Journal ArticleDOI
TL;DR: A coherent perfect absorber is a system in which complete absorption of electromagnetic radiation is achieved by controlling the interference of multiple incident waves, as discussed by the authors; by exploiting such interference, absorption can be made much more efficient than with a single incident channel.
Abstract: Absorption of electromagnetic energy by a material is a phenomenon that underlies many applied problems, including molecular sensing, photocurrent generation and photodetection. Commonly, the incident energy is delivered to the system through a single channel, for example by a plane wave incident on one side of an absorber. However, absorption can be made much more efficient by exploiting wave interference. A coherent perfect absorber is a system in which complete absorption of electromagnetic radiation is achieved by controlling the interference of multiple incident waves. Here, we review recent advances in the design and applications of such devices. We present the theoretical principles underlying the phenomenon of coherent perfect absorption and give an overview of the photonic structures in which it can be realized, including planar and guided-mode structures, graphene-based systems, parity- and time-symmetric structures, 3D structures and quantum-mechanical systems. We then discuss possible applications of coherent perfect absorption in nanophotonics and, finally, we survey the perspectives for the future of this field.
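
The interference condition behind coherent perfect absorption is conveniently summarised in the scattering-matrix picture; the following is a textbook-level statement included for orientation, not a result specific to this review.

```latex
% Scattering-matrix statement of coherent perfect absorption (textbook-level
% summary): incoming amplitudes a and outgoing amplitudes b of a linear
% multi-port absorber are related by b = S a.
\[
  b = S(\omega)\, a, \qquad
  \text{CPA:}\quad S(\omega_0)\, a_{\mathrm{CPA}} = 0
  \;\Longleftrightarrow\; \det S(\omega_0) = 0 \ \text{at a real frequency } \omega_0,
\]
% i.e. perfect absorption occurs when the incident waves form an eigenvector
% of S with zero eigenvalue, the time-reversed counterpart of lasing at
% threshold.
```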

Journal ArticleDOI
TL;DR: In this paper, the authors present a set of guiding principles for energy system optimization models (ESOMs) that can be used to guide ESOM-based analysis, including how to formulate research questions, set spatio-temporal boundaries, consider appropriate model features, conduct and refine the analysis, quantify uncertainty, and communicate insights.

Journal ArticleDOI
TL;DR: This paper presents a comprehensive analysis of the various methodologies developed for the optimal design of a hybrid electric vehicle, a problem that spreads over multiple levels (technology, topology, size, and control), and identifies challenges for future research.
Abstract: The optimal design of a hybrid electric vehicle (HEV) can be formulated as a multiobjective optimization problem that spreads over multiple levels (technology, topology, size, and control). In the last decade, studies have shown that by integrating these optimization levels, fuel benefits are obtained, which go beyond the results achieved with solely optimal control for a given topology. Due to the large number of variables for optimization, their diversity, and the nonlinear and multiobjective nature of the problem, a variety of methodologies have been developed. This paper presents a comprehensive analysis of the various methodologies developed and identifies challenges for future research. Starting from a general description of the problem, with examples found in the literature, we categorize the types of optimization problems and methods used. To offer a complete analysis, we broaden the scope of the search to several sectors of transport, such as naval or ground.
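
The nesting of design and control levels mentioned above is often written as a bi-level problem; the following generic formulation uses illustrative notation and is not a specific method from the survey.

```latex
% Generic bi-level formulation of combined plant and controller design for an
% HEV (illustrative notation; the survey covers many concrete variants):
% x collects topology and component sizes, u(t) the power-split control,
% \dot{m}_f the fuel mass flow, and SOC the battery state of charge.
\[
  \min_{x \in \mathcal{X}} \;
  \left[
    \min_{u(\cdot) \in \mathcal{U}(x)}
    \int_{0}^{T} \dot{m}_f\bigl(x, u(t), t\bigr)\, dt
  \right]
  \quad \text{s.t.} \quad
  \mathrm{SOC}(T) = \mathrm{SOC}(0).
\]
% The outer loop searches over design variables while the inner loop solves
% the optimal energy-management problem for each candidate design.
```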