
Showing papers by "Polytechnic University of Milan published in 2018"


Journal ArticleDOI
01 Jun 2018
TL;DR: This Review Article examines the development of in-memory computing using resistive switching devices, where the two-terminal structure of the devices, their resistive switching properties, and direct data processing in the memory can enable area- and energy-efficient computation.
Abstract: Modern computers are based on the von Neumann architecture in which computation and storage are physically separated: data are fetched from the memory unit, shuttled to the processing unit (where computation takes place) and then shuttled back to the memory unit to be stored. The rate at which data can be transferred between the processing unit and the memory unit represents a fundamental limitation of modern computers, known as the memory wall. In-memory computing is an approach that attempts to address this issue by designing systems that compute within the memory, thus eliminating the energy-intensive and time-consuming data movement that plagues current designs. Here we review the development of in-memory computing using resistive switching devices, where the two-terminal structure of the devices, their resistive switching properties, and direct data processing in the memory can enable area- and energy-efficient computation. We examine the different digital, analogue, and stochastic computing schemes that have been proposed, and explore the microscopic physical mechanisms involved. Finally, we discuss the challenges in-memory computing faces, including the required scaling characteristics, in delivering next-generation computing. This Review Article examines the development of in-memory computing using resistive switching devices.

1,193 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe scenarios that limit end-of-century radiative forcing to 1.9 W m−2, and consequently restrict median warming in the year 2100 to below 1.5 °C.
Abstract: The 2015 Paris Agreement calls for countries to pursue efforts to limit global-mean temperature rise to 1.5 °C. The transition pathways that can meet such a target have not, however, been extensively explored. Here we describe scenarios that limit end-of-century radiative forcing to 1.9 W m−2, and consequently restrict median warming in the year 2100 to below 1.5 °C. We use six integrated assessment models and a simple climate model, under different socio-economic, technological and resource assumptions from five Shared Socio-economic Pathways (SSPs). Some, but not all, SSPs are amenable to pathways to 1.5 °C. Successful 1.9 W m−2 scenarios are characterized by a rapid shift away from traditional fossil-fuel use towards large-scale low-carbon energy supplies, reduced energy use, and carbon-dioxide removal. However, 1.9 W m−2 scenarios could not be achieved in several models under SSPs with strong inequalities, high baseline fossil-fuel use, or scattered short-term climate policy. Further research can help policy-makers to understand the real-world implications of these scenarios.

733 citations


Journal ArticleDOI
TL;DR: The optical characterization of tyrosine, thyroglobulin and iodine using a time domain broadband diffuse optical spectrometer in the 550–1350 nm range is presented and a brief comparison with other known tissue constituents is presented, which reveals key spectral regions for the quantification of the thyroid absorbers in an in vivo scenario.
Abstract: The thyroid plays an important role in the endocrine system of the human body. Its characterization by diffuse optics can open new pathways in the non-invasive diagnosis of thyroid pathologies. Yet, the absorption spectra of tyrosine and thyroglobulin – key tissue constituents specific to the thyroid organ – in the visible to near-infrared range are not fully available. Here, we present the optical characterization of tyrosine (powder), thyroglobulin (granular form) and iodine (aqueous solution) using a time-domain broadband diffuse optical spectrometer in the 550–1350 nm range. Various systematic errors caused by the physics of photon migration and sample-inherent properties were effectively suppressed by means of advanced time-domain diffuse optical methods. A brief comparison with various other known tissue constituents is presented, which reveals key spectral regions for the quantification of the thyroid absorbers in an in vivo scenario.

543 citations


Journal ArticleDOI
Craig E. Aalseth1, Fabio Acerbi2, P. Agnes3, Ivone F. M. Albuquerque4 +297 more, Institutions (48)
TL;DR: The DarkSide-20k detector as discussed by the authors is a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t).
Abstract: Building on the successful experience in operating the DarkSide-50 detector, the DarkSide Collaboration is going to construct DarkSide-20k, a direct WIMP search detector using a two-phase Liquid Argon Time Projection Chamber (LAr TPC) with an active (fiducial) mass of 23 t (20 t). This paper describes a preliminary design for the experiment, in which the DarkSide-20k LAr TPC is deployed within a shield/veto with a spherical Liquid Scintillator Veto (LSV) inside a cylindrical Water Cherenkov Veto (WCV). This preliminary design provides a baseline for the experiment to achieve its physics goals, while further development work will lead to the final optimization of the detector parameters and an eventual technical design. Operation of DarkSide-50 demonstrated a major reduction in the dominant 39Ar background when using argon extracted from an underground source, before applying pulse shape analysis. Data from DarkSide-50, in combination with MC simulation and analytical modeling, shows that a rejection factor for discrimination between electron and nuclear recoils of $>3 \times 10^{9}$ is achievable. This, along with the use of the veto system and utilizing silicon photomultipliers in the LAr TPC, are the keys to unlocking the path to large LAr TPC detector masses, while maintaining an experiment in which fewer than 0.1 events (other than ν-induced nuclear recoils) are expected to occur within the WIMP search region during the planned exposure. DarkSide-20k will have ultra-low backgrounds that can be measured in situ, giving sensitivity to WIMP-nucleon cross sections of $1.2 \times 10^{-47}$ cm² ($1.1 \times 10^{-46}$ cm²) for WIMPs of 1 TeV/c2 (10 TeV/c2) mass, to be achieved during a 5 yr run producing an exposure of 100 t yr free from any instrumental background.

534 citations


Journal ArticleDOI
TL;DR: In this article, the authors estimate country-level contributions to the social cost of carbon using recent climate model projections, empirical climate-driven economic damage estimations and socio-economic projections.
Abstract: The social cost of carbon (SCC) is a commonly employed metric of the expected economic damages from carbon dioxide (CO2) emissions. Although useful in an optimal policy context, a world-level approach obscures the heterogeneous geography of climate damage and vast differences in country-level contributions to the global SCC, as well as climate and socio-economic uncertainties, which are larger at the regional level. Here we estimate country-level contributions to the SCC using recent climate model projections, empirical climate-driven economic damage estimations and socio-economic projections. Central specifications show high global SCC values (median, US$417 per tonne of CO2 (tCO2); 66% confidence intervals, US$177–805 per tCO2) and a country-level SCC that is unequally distributed. However, the relative ranking of countries is robust to different specifications: countries that incur large fractions of the global cost consistently include India, China, Saudi Arabia and the United States. Global estimates of the economic impacts of CO2 emissions may obscure regional heterogeneities. A modular framework for estimating the country-level social cost of carbon shows consistently unequal country-level costs.

473 citations


Journal ArticleDOI
26 Feb 2018
TL;DR: In this paper, the challenges and opportunities of blockchain for business process management (BPM) are outlined and a summary of seven research directions for investigating the application of blockchain technology in the context of BPM are presented.
Abstract: Blockchain technology offers a sizable promise to rethink the way interorganizational business processes are managed because of its potential to realize execution without a central party serving as a single point of trust (and failure). To stimulate research on this promise and the limits thereof, in this article, we outline the challenges and opportunities of blockchain for business process management (BPM). We first reflect how blockchains could be used in the context of the established BPM lifecycle and second how they might become relevant beyond. We conclude our discourse with a summary of seven research directions for investigating the application of blockchain technology in the context of BPM.

456 citations


Journal ArticleDOI
TL;DR: The main purpose of this paper is to review the state-of-the-art on intermediate human–robot interfaces (bi-directional), robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the required future developments in the realm of human-robot collaboration.
Abstract: Recent technological advances in the hardware design of robotic platforms have enabled the implementation of various control modalities for improved interactions with humans and unstructured environments. An important application area for the integration of robots with such advanced interaction capabilities is human–robot collaboration. This area has high socio-economic impact and maintains the sense of purpose of the people involved, as the robots do not completely replace humans in the work process. The research community's recent surge of interest in this area has been devoted to the implementation of various methodologies to achieve intuitive and seamless human–robot–environment interactions by incorporating the collaborative partners' superior capabilities, e.g. the human's cognitive abilities and the robot's physical power generation capacity. In fact, the main purpose of this paper is to review the state of the art on intermediate human–robot interfaces (bi-directional), robot control modalities, system stability, benchmarking and relevant use cases, and to extend views on the required future developments in the realm of human–robot collaboration.

452 citations


Journal ArticleDOI
TL;DR: In this paper, the authors identify less abundant iodine defects as the source of photochemically active deep electron and hole traps in MAPbI3 and explain the defect tolerance of mixed-halide perovskites.
Abstract: Metal-halide perovskites are outstanding materials for photovoltaics. Their long carrier lifetimes and diffusion lengths favor efficient charge collection, leading to efficiencies competing with established photovoltaics. These observations suggest an apparently low density of traps in the prototype methylammonium lead iodide (MAPbI3), contrary to the expected high defect density of a low-temperature, solution-processed material. Combining first-principles calculations and spectroscopic measurements, we identify less abundant iodine defects as the source of photochemically active deep electron and hole traps in MAPbI3. The peculiar iodine redox chemistry leads, however, to kinetic deactivation of filled electron traps, leaving only short-lived hole traps as potentially harmful defects. Under mild oxidizing conditions the amphoteric hole traps can be converted into kinetically inactive electron traps, providing a rationale for the defect tolerance of metal-halide perovskites. Bromine and chlorine doping of MAPbI3 also inactivates hole traps, possibly explaining the superior optoelectronic properties of mixed-halide perovskites.

442 citations


Journal ArticleDOI
TL;DR: This review explores multiple components of the food‐energy‐water nexus and highlights possible approaches that could be used to meet food and energy security with the limited renewable water resources of the planet.
Abstract: Water availability is a major factor constraining humanity's ability to meet the future food and energy needs of a growing and increasingly affluent human population. Water plays an important role ...

392 citations


Journal ArticleDOI
21 Nov 2018 - Sensors
TL;DR: An overview of five Co-CPS use cases, as introduced in the SafeCOP EU project, and a comprehensive analysis of the main existing wireless communication technologies giving details about the protocols developed within particular standardization bodies are provided.
Abstract: Cooperative Cyber-Physical Systems (Co-CPSs) can be enabled using wireless communication technologies, which in principle should address reliability and safety challenges. Safety for Co-CPSs enabled by wireless communication technologies is a crucial aspect and requires new dedicated design approaches. In this paper, we provide an overview of five Co-CPS use cases, as introduced in our SafeCOP EU project, and analyze their safety design requirements. Next, we provide a comprehensive analysis of the main existing wireless communication technologies, giving details about the protocols developed within particular standardization bodies. We also investigate to what extent they address the non-functional requirements in terms of safety, security and real time, in the different application domains of each use case. Finally, we discuss general recommendations about the use of different wireless communication technologies, showing their potential in the selected real-world use cases. The discussion also considers the ongoing 5G standardization process within 3GPP, whose current efforts address existing gaps in wireless communication protocols for Co-CPSs, including many future use cases.

387 citations


Journal ArticleDOI
TL;DR: No single segmentation approach is suitable for all the different anatomical regions or imaging modalities; thus, the primary goal of this review was to provide an up-to-date source of information about the state of the art of vessel segmentation algorithms, so that the most suitable methods can be chosen according to the specific task.

Journal ArticleDOI
TL;DR: The landscape for entrepreneurial finance has changed considerably in recent years, as discussed by the authors; many new players have entered the arena, and they can be classified along four dimensions: debt or equity, investment goal, investment approach, and investment target.
Abstract: The landscape for entrepreneurial finance has changed considerably in recent years. Many new players have entered the arena. This editorial introduces and describes the new players and compares them along four dimensions: debt or equity, investment goal, investment approach, and investment target. Following this, we discuss the factors explaining the emergence of the new players and group them into supply- and demand-side factors. The editorial gives researchers and practitioners orientation about recent developments in entrepreneurial finance and provides avenues for relevant and fruitful further research.

Journal ArticleDOI
TL;DR: In this paper, the state-of-the-art of self-healing concrete is provided, covering autogenous or intrinsic healing of traditional concrete followed by stimulated autogenous healing via use of mineral additives, crystalline admixtures or (superabsorbent) polymers.
Abstract: The increasing concern for the safety and sustainability of structures is calling for the development of smart self-healing materials and preventive repair methods. The appearance of small cracks (<300 µm in width) in concrete is almost unavoidable, not necessarily causing a risk of collapse for the structure, but surely impairing its functionality, accelerating its degradation, and diminishing its service life and sustainability. This review provides the state of the art of recent developments in self-healing concrete, covering autogenous or intrinsic healing of traditional concrete, followed by stimulated autogenous healing via use of mineral additives, crystalline admixtures or (superabsorbent) polymers, and subsequently autonomous self-healing mechanisms, i.e., via application of micro-, macro-, or vascular encapsulated polymers, minerals, or bacteria. The (stimulated) autogenous mechanisms are generally limited to healing crack widths of about 100–150 µm. In contrast, most autonomous self-healing mechanisms can heal cracks of 300 µm, sometimes even more than 1 mm, and usually act faster. After explaining the basic concept for each self-healing technique, the most recent advances are collected, explaining the progress and current limitations, to provide insights toward future developments. This review addresses the research needs required to remove hindrances that limit market penetration of self-healing concrete technologies.

Journal ArticleDOI
TL;DR: This paper is concerned with dissipativity-based fuzzy integral sliding mode control (FISMC) of continuous-time Takagi-Sugeno (T-S) fuzzy systems with matched/unmatched uncertainties and external disturbance, and an appropriate integral-type fuzzy switching surface is put forward.
Abstract: This paper is concerned with dissipativity-based fuzzy integral sliding mode control (FISMC) of continuous-time Takagi-Sugeno (T-S) fuzzy systems with matched/unmatched uncertainties and external disturbance. To better accommodate the characteristics of T-S fuzzy models, an appropriate integral-type fuzzy switching surface is put forward by taking the state-dependent input matrix into account, which is the key contribution of the paper. Based on the utilization of a Lyapunov function and a property of the transition matrix for unmatched uncertainties, sufficient conditions are presented to guarantee the asymptotic stability of the corresponding sliding mode dynamics with a strictly dissipative performance. A FISMC law is synthesized to drive system trajectories onto the fuzzy switching surface despite matched/unmatched uncertainties and external disturbance. A modified adaptive FISMC law is further designed to adapt to the unknown upper bound of the matched uncertainty. Two practical examples are provided to illustrate the effectiveness and advantages of the developed FISMC scheme.

Journal ArticleDOI
TL;DR: The aim of this paper is to synthesize a controller via an event-triggered communication scheme such that not only the resulting closed-loop system is finite-time bounded and satisfies a prescribed performance level, but also the communication burden is reduced.
Abstract: This paper investigates the finite-time event-triggered $\mathcal{H}_{\infty }$ control problem for Takagi–Sugeno Markov jump fuzzy systems. Because of the sampling behaviors and the effect of network environment, the premise variables considered in this paper are subject to asynchronous constraints. The aim of this paper is to synthesize a controller via an event-triggered communication scheme such that not only the resulting closed-loop system is finite-time bounded and satisfies a prescribed $\mathcal{H}_{\infty }$ performance level, but also the communication burden is reduced. First, a sufficient condition is established for the finite-time bounded $\mathcal{H} _{\infty }$ performance analysis of the closed-loop fuzzy system with fully considering the asynchronous premises. Then, based on the derived condition, the method of the desired controller design is presented. Two illustrative examples are finally presented to demonstrate the practicability and efficacy of the proposed method.

Journal ArticleDOI
TL;DR: In this paper, the authors explore the determinants of these residual emissions, focusing on sector-level contributions, and show that even when strengthened pre-2030 mitigation action is combined with very stringent long-term policies, cumulative residual CO2 emissions from fossil fuels remain at 850-1,150 GtCO2 during 2016-2100, despite carbon prices of US$130-420 per tCO2 by 2030.
Abstract: The Paris Agreement—which is aimed at holding global warming well below 2 °C while pursuing efforts to limit it below 1.5 °C—has initiated a bottom-up process of iteratively updating nationally determined contributions to reach these long-term goals. Achieving these goals implies a tight limit on cumulative net CO2 emissions, of which residual CO2 emissions from fossil fuels are the greatest impediment. Here, using an ensemble of seven integrated assessment models (IAMs), we explore the determinants of these residual emissions, focusing on sector-level contributions. Even when strengthened pre-2030 mitigation action is combined with very stringent long-term policies, cumulative residual CO2 emissions from fossil fuels remain at 850–1,150 GtCO2 during 2016–2100, despite carbon prices of US$130–420 per tCO2 by 2030. Thus, 640–950 GtCO2 removal is required for a likely chance of limiting end-of-century warming to 1.5 °C. In the absence of strengthened pre-2030 pledges, long-term CO2 commitments are increased by 160–330 GtCO2, further jeopardizing achievement of the 1.5 °C goal and increasing dependence on CO2 removal.

Journal ArticleDOI
TL;DR: The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs) to accurately approximate the coefficients of the reduced model.
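The offline/online split described in this TL;DR can be sketched in a few lines. Below, a made-up parametrized function sin(μx) stands in for the high-fidelity solutions, and scikit-learn's `MLPRegressor` stands in for the ANN that maps the parameter to the POD coefficients; all sizes and names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Offline stage: collect "high-fidelity" snapshots for training parameters.
# Hypothetical parametrized problem: u(x; mu) = sin(mu * x) on a 200-point grid.
x = np.linspace(0.0, np.pi, 200)
mus_train = np.linspace(1.0, 3.0, 40)
snapshots = np.stack([np.sin(mu * x) for mu in mus_train], axis=1)  # (200, 40)

# POD: the leading left singular vectors of the snapshot matrix form
# the reduced basis; r is chosen from the singular-value decay.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 5
V = U[:, :r]                                   # (200, r) reduced basis

# Project the snapshots to get the training coefficients, then fit an
# ANN mapping the scalar parameter mu to the r POD coefficients.
coeffs = V.T @ snapshots                       # (r, 40)
ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=8000, random_state=0)
ann.fit(mus_train.reshape(-1, 1), coeffs.T)

# Online stage: for a new mu, predict coefficients and reconstruct u.
mu_new = 2.35
u_rb = V @ ann.predict([[mu_new]])[0]
u_ref = np.sin(mu_new * x)
err = np.linalg.norm(u_rb - u_ref) / np.linalg.norm(u_ref)
```

The point of the construction is that the online stage never touches the full-order model: one small network evaluation plus a matrix-vector product with `V` replaces a full high-fidelity solve.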

Journal ArticleDOI
TL;DR: In this article, the authors present a review of existing works that consider information from such sequentially ordered user-item interaction logs in the recommendation process and propose a categorization of the corresponding recommendation tasks and goals, summarize existing algorithmic solutions, discuss methodological approaches when benchmarking what they call sequence-aware recommender systems, and outline open challenges in the area.
Abstract: Recommender systems are one of the most successful applications of data mining and machine-learning technology in practice. Academic research in the field is historically often based on the matrix completion problem formulation, where for each user-item-pair only one interaction (e.g., a rating) is considered. In many application domains, however, multiple user-item interactions of different types can be recorded over time. And, a number of recent works have shown that this information can be used to build richer individual user models and to discover additional behavioral patterns that can be leveraged in the recommendation process. In this work, we review existing works that consider information from such sequentially ordered user-item interaction logs in the recommendation process. Based on this review, we propose a categorization of the corresponding recommendation tasks and goals, summarize existing algorithmic solutions, discuss methodological approaches when benchmarking what we call sequence-aware recommender systems, and outline open challenges in the area.
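The shift this survey describes, from one-rating-per-user-item-pair to time-ordered interaction logs, can be illustrated with a minimal sequence-aware baseline. The sketch below uses a first-order Markov model over toy session data (the sessions and item names are invented for illustration; this is one simple instance of the family of methods the survey categorizes, not a method from the paper itself).

```python
from collections import defaultdict, Counter

# Toy interaction logs: time-ordered item sequences, one per session
# (hypothetical data; real logs would carry user IDs, timestamps, types).
sessions = [
    ["a", "b", "c"],
    ["a", "b", "d"],
    ["b", "c", "a"],
    ["a", "b", "c"],
]

# First-order Markov model: count observed transitions item -> next item.
transitions = defaultdict(Counter)
for seq in sessions:
    for cur, nxt in zip(seq, seq[1:]):
        transitions[cur][nxt] += 1

def recommend(last_item, k=2):
    """Rank candidate next items by transition frequency from last_item."""
    return [item for item, _ in transitions[last_item].most_common(k)]

print(recommend("a"))  # -> ['b']  ("b" follows "a" in every session above)
```

A matrix-completion model would treat these same events as an unordered set of ratings and could never express "b tends to follow a", which is exactly the sequential signal the survey's task categorization is built around.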

Journal ArticleDOI
TL;DR: A formalization of the fraud-detection problem is proposed that realistically describes the operating conditions of FDSs that analyze massive streams of credit card transactions every day, and a novel learning strategy is designed and assessed that effectively addresses class imbalance, concept drift, and verification latency.
Abstract: Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that analyze massive streams of credit card transactions every day. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class imbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.
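The class-imbalance challenge the abstract names can be made concrete with a small synthetic experiment. The sketch below is not the paper's learning strategy; it only shows, on made-up data, why a classifier trained naively on a ~1%-fraud stream misses frauds, and how reweighting the minority class (here via scikit-learn's `class_weight="balanced"`) recovers recall.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Synthetic "transactions": 4 features, roughly 1% labeled as fraud.
n = 5000
X = rng.normal(size=(n, 4))
is_fraud = rng.random(n) < 0.01
X[is_fraud] += 1.5          # frauds are shifted in feature space
y = is_fraud.astype(int)

Xtr, Xte, ytr, yte = X[:4000], X[4000:], y[:4000], y[4000:]

# Naive fit: the 99% majority class dominates the loss.
plain = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
# Reweighted fit: minority-class errors are penalized proportionally more.
weighted = LogisticRegression(class_weight="balanced", max_iter=1000).fit(Xtr, ytr)

rec_plain = recall_score(yte, plain.predict(Xte))
rec_weighted = recall_score(yte, weighted.predict(Xte))
```

On data like this, `rec_weighted` is substantially higher than `rec_plain`: exactly the kind of gap that motivates imbalance-aware learning in a real FDS, where the stream additionally drifts over time and labels arrive late.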

Posted Content
TL;DR: A categorization of the corresponding recommendation tasks and goals is proposed, existing algorithmic solutions are summarized, methodological approaches when benchmarking what the authors call sequence-aware recommender systems are discussed, and open challenges in the area are outlined.
Abstract: Recommender systems are one of the most successful applications of data mining and machine learning technology in practice. Academic research in the field is historically often based on the matrix completion problem formulation, where for each user-item-pair only one interaction (e.g., a rating) is considered. In many application domains, however, multiple user-item interactions of different types can be recorded over time. And, a number of recent works have shown that this information can be used to build richer individual user models and to discover additional behavioral patterns that can be leveraged in the recommendation process. In this work we review existing works that consider information from such sequentially ordered user-item interaction logs in the recommendation process. Based on this review, we propose a categorization of the corresponding recommendation tasks and goals, summarize existing algorithmic solutions, discuss methodological approaches when benchmarking what we call sequence-aware recommender systems, and outline open challenges in the area.

Journal ArticleDOI
TL;DR: This review summarizes recent progress made in developing Zn alloys for vascular stenting application and critically surveys the zinc alloys developed since 2013 from metallurgical and biodegradation points of view.

Journal ArticleDOI
TL;DR: This article analyzed the determinants of the success of these token offerings by considering a sample of 253 campaigns and found that the probability of an ICO's success is higher if the source code is available, when a token presale is organized, and when tokens allow contributors to access a specific service (or to share profits).

Journal ArticleDOI
TL;DR: The ASHRAE Global Thermal Comfort Database II (Comfort Database II) as discussed by the authors is an open-source thermal comfort database that includes 81,846 complete sets of objective indoor climatic observations with accompanying subjective evaluations by the building occupants who were exposed to them.

Journal ArticleDOI
TL;DR: In this paper, a functional hygroscopic polymer, poly(ethylene oxide), was applied to perovskite solar cells to make them more stable in a humid environment.
Abstract: Long-term device stability is one of the most critical issues that impede perovskite solar cell commercialization. Here we show that a thin layer of a functional hygroscopic polymer, poly(ethylene oxide), PEO, on top of the perovskite thin film, can make perovskite-based solar cells highly stable during operation and in a humid atmosphere. We prove that PEO chemically interacts with lead ions on the perovskite surface, and thus passivates undercoordinated defect sites. Importantly, defect healing by PEO not only results in an improvement of the photovoltage but also makes the perovskite thin film stable. We demonstrate that the hygroscopic PEO thin film can prevent water inclusion into the perovskite film by screening water molecules, thus having a multi-functional role. Overall, such interface engineering leads to highly durable perovskite solar cells, which, in the presence of PEO passivation, retained more than 95% of their initial power conversion efficiency over 15 h of illumination, under load, in ambient atmosphere without encapsulation. Our findings experimentally reveal the role of interface engineering in mastering the instability of perovskite materials and propose a general approach for improving the reliability of perovskite-based optoelectronic devices.

Journal ArticleDOI
A. Aab1, P. Abreu2, Marco Aglietta, Ivone F. M. Albuquerque3 +391 more, Institutions (64)
TL;DR: In this paper, a new analysis of the data set from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources.
Abstract: A new analysis of the data set from the Pierre Auger Observatory provides evidence for anisotropy in the arrival directions of ultra-high-energy cosmic rays on an intermediate angular scale, which is indicative of excess arrivals from strong, nearby sources. The data consist of 5514 events above 20 EeV with zenith angles up to 80° recorded before 2017 April 30. Sky models have been created for two distinct populations of extragalactic gamma-ray emitters: active galactic nuclei from the second catalog of hard Fermi-LAT sources (2FHL) and starburst galaxies from a sample that was examined with Fermi-LAT. Flux-limited samples, which include all types of galaxies from the Swift-BAT and 2MASS surveys, have been investigated for comparison. The sky model of cosmic-ray density constructed using each catalog has two free parameters: the fraction of events correlating with astrophysical objects, and an angular scale characterizing the clustering of cosmic rays around extragalactic sources. A maximum-likelihood ratio test is used to evaluate the best values of these parameters and to quantify the strength of each model by contrast with isotropy. It is found that the starburst model fits the data better than the hypothesis of isotropy with a statistical significance of 4.0σ, the highest value of the test statistic being for energies above 39 EeV. The three alternative models are favored against isotropy with 2.7σ-3.2σ significance. The origin of the indicated deviation from isotropy is examined and prospects for more sensitive future studies are discussed.

Journal ArticleDOI
TL;DR: This paper examined the roles of different sources of family firm heterogeneity and the context in shaping the determinants, processes, and outcomes of business internationalization, and set out an agenda for further research aimed at advancing a more fine-grained and contextualized understanding of internationalization in family firms.
Abstract: Research on the internationalization of family firms has flourished in recent years, yet the mechanisms through which family involvement shapes the determinants, processes, and outcomes of internationalization remain little understood and largely undertheorized. We contribute to research at the intersection of international business and family business by examining the roles of different sources of family firm heterogeneity and the context in shaping the determinants, processes, and outcomes of business internationalization. Drawing on this analysis, we summarize the articles published in this special issue and set out an agenda for further research aimed at advancing a more fine-grained and contextualized understanding of internationalization in family firms.

Journal ArticleDOI
TL;DR: In this article, the authors investigate the existing body of knowledge on crowdsourcing systematically through a penetrating review in which the strengths and weakness of this literature stream are presented clearly and then future avenues of research are set out.
Abstract: As academic and practitioner studies on crowdsourcing have been building up since 2006, the subject itself has progressively gained in importance within the broad field of management. No systematic review on the topic has so far appeared in management journals, however; moreover, the field suffers from ambiguity in the topic's definition, which in turn has led to its largely unstructured evolution. The authors therefore investigate the existing body of knowledge on crowdsourcing systematically through a penetrating review in which the strengths and weaknesses of this literature stream are presented clearly and future avenues of research are set out. The review is based on 121 scientific articles published between January 2006 and January 2015. The review recognizes that crowdsourcing is ingrained in two mainstream disciplines within the broader subject matter of innovation and management: (1) open innovation; and (2) co-creation. The review, in addition, also touches on several issues covered in other theoretical streams: (3) information systems management; (4) organizational theory and design; (5) marketing; and (6) strategy. The authors adopt a process perspective, applying the ‘Input–Process–Output’ framework to interpret research on crowdsourcing within the broad lines of: (1) Input (problem/task); (2) Process (session management; problem management; knowledge management; technology); and (3) Output (solution/completed task; seekers’ benefits; solvers’ benefits). This framework provides a detailed description of how the topic has evolved over time, and suggestions concerning the future direction of research are proposed in the form of research questions that are valuable for both academics and managers.

Journal ArticleDOI
TL;DR: A novel integral-type fuzzy switching surface function is put forward, which contains the singular perturbation matrix and the state-dependent input matrix simultaneously; the corresponding sliding mode dynamics is a transformed fuzzy SPS in which the matched uncertainty/perturbation is completely compensated without amplifying the unmatched one.
Abstract: This paper presents a new sliding mode control (SMC) design methodology for fuzzy singularly perturbed systems (SPSs) subject to matched/unmatched uncertainties. To fully accommodate the model characteristics of the systems, a novel integral-type fuzzy switching surface function is put forward, which contains the singular perturbation matrix and the state-dependent input matrix simultaneously. Its corresponding sliding mode dynamics is a transformed fuzzy SPS in which the matched uncertainty/perturbation is completely compensated without amplifying the unmatched one. By adopting an ε-dependent Lyapunov function, sufficient conditions are presented to guarantee the asymptotic stability of the sliding mode dynamics, and a simple search algorithm is provided to find the stability bound. Then, a fuzzy SMC law is synthesized to ensure the reaching condition despite matched/unmatched uncertainties. A modified adaptive fuzzy SMC law is further constructed to adapt to the unknown upper bound of the matched uncertainty. The applicability and superiority of the obtained fuzzy SMC methodology are verified by a controller design for an electric circuit system.
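The reaching condition that the abstract's SMC law enforces can be illustrated on a much simpler plant than a fuzzy SPS. The sketch below is a generic first-order sliding mode controller for a double integrator with a bounded matched disturbance; the plant, surface, gains, and disturbance are illustrative assumptions, not the paper's design.

```python
import numpy as np

def simulate_smc(x0, v0, c=2.0, eta=1.5, dt=1e-3, steps=20000):
    """Double integrator x'' = u + d(t) with matched disturbance |d| <= 0.5.
    Sliding surface s = c*x + x'; the control u = -c*x' - eta*sign(s) gives
    s' = -eta*sign(s) + d, so s*s' <= -(eta - 0.5)*|s| whenever eta > 0.5:
    the reaching condition holds despite the matched disturbance."""
    x, v = x0, v0
    for k in range(steps):
        t = k * dt
        s = c * x + v
        u = -c * v - eta * np.sign(s)       # equivalent + switching control
        d = 0.5 * np.sin(3.0 * t)           # matched, bounded disturbance
        v += (u + d) * dt                   # semi-implicit Euler step
        x += v * dt
    return x, v
```

Once the state reaches the surface, x obeys x' = -c*x and decays regardless of d; the chattering visible in discrete time is why practical designs, including adaptive ones like the paper's, often smooth or adapt the switching gain.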

Journal ArticleDOI
TL;DR: Several test cases intended to be benchmarks for numerical schemes for single-phase fluid flow in fractured porous media are presented, including a vertex-centred and two cell-centred finite volume methods, a non-conforming embedded discrete fracture model, a primal and a dual extended finite element formulation, and a mortar discrete fracture model.
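The cell-centred finite volume approach named in the TL;DR can be sketched in its simplest setting: single-phase, incompressible Darcy flow in 1D with harmonic face transmissibilities. This is a minimal illustration under those stated assumptions, not one of the benchmark schemes compared in the paper (which are 2D/3D and treat fractures explicitly); a fracture can only be mimicked here by a high- or low-permeability cell.

```python
import numpy as np

def solve_darcy_1d(perm, p_left, p_right, length=1.0):
    """Cell-centred finite volume solve of -(k p')' = 0 on [0, length]
    with Dirichlet pressures at both ends. perm holds one permeability
    per cell; face transmissibilities use the harmonic mean, the standard
    two-point flux approximation."""
    n = perm.size
    h = length / n
    # Interior face transmissibilities (harmonic mean over distance h).
    t_int = 2.0 * perm[:-1] * perm[1:] / (perm[:-1] + perm[1:]) / h
    # Boundary faces: half-cell distance h/2 to the Dirichlet value.
    t_l = 2.0 * perm[0] / h
    t_r = 2.0 * perm[-1] / h
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n - 1):                  # assemble flux balance per cell
        A[i, i] += t_int[i]; A[i + 1, i + 1] += t_int[i]
        A[i, i + 1] -= t_int[i]; A[i + 1, i] -= t_int[i]
    A[0, 0] += t_l;  b[0] += t_l * p_left
    A[-1, -1] += t_r; b[-1] += t_r * p_right
    return np.linalg.solve(A, b)            # cell-centre pressures
```

With uniform permeability this scheme reproduces the linear pressure profile exactly at cell centres, which is the kind of consistency check the benchmark cases generalize to fractured 2D and 3D geometries.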

Journal ArticleDOI
TL;DR: First, RRAM devices with improved window and reliability thanks to a SiOx dielectric layer are discussed; then, the application of RRAM to neuromorphic computing is addressed, presenting hybrid synapses capable of spike-timing-dependent plasticity (STDP).
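The STDP rule mentioned in the TL;DR is commonly modeled, in software, with a pair-based exponential window. The sketch below uses that standard form; the amplitudes and time constants are illustrative textbook-style values, not parameters calibrated to the paper's SiOx RRAM synapses.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight update for a spike pair with
    dt_ms = t_post - t_pre (milliseconds): potentiate when the
    presynaptic spike precedes the postsynaptic one (dt > 0),
    depress when it follows (dt < 0), with exponential decay
    of the effect as |dt| grows."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

In hybrid RRAM synapses the sign and magnitude of the conductance change are obtained from the relative timing of pre- and post-spike pulses across the device, so a rule of this shape emerges from the device physics rather than being computed explicitly.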