
Showing papers by "Worcester Polytechnic Institute published in 2020"


Journal ArticleDOI
TL;DR: In this article, the authors introduce a framework of sustainability measures based on the United Nations Sustainable Development Goals (SDGs), incorporating various economic, environmental, and social attributes, and develop a hybrid multi-situation decision method integrating hesitant fuzzy sets, cumulative prospect theory, and VIKOR.

485 citations


Journal ArticleDOI
TL;DR: In the context of the COVID-19 pandemic, social distancing has become a common practice in daily life as individuals, governments, communities, industrial firms, and academic institutions come to grips with the challenge of minimizing the loss of human life in the face of an invisible contagion.
Abstract: As members of the Future Earth Knowledge-Action Network on Systems of Sustainable Consumption and Production we have – as virtually everyone else – paid close attention to the COVID-19 pandemic which is one of the most comprehensive and tragic public health crises in a century. As we write this perspective article, the situation is still in its early stages in many regions of the world and is continually evolving. The practice of social distancing has entered daily lifestyles as individuals, governments, communities, industrial firms, and academic institutions come to grips with the challenges of minimizing the loss of human life in the face of an invisible contagion. We have all seen figures on “flattening the curve” to help spread out the impact on medical facilities. The coronavirus outbreak will diffuse, but behavioral actions are needed to mitigate the number of contractions, illnesses, and deaths. Some of the actions of social distancing include self-quarantining, avoiding large gatherings, working from home where possible, sending students back to their residences, providing online education, reducing travel (especially in confined and mass transportation modes), limiting visits to stores, and many other everyday activities. Many of these adjustments are in contradistinction to “normal” routines. At a time when we are being prevailed upon to come together and to support one another in society, we must learn to do so from a distance. But the behavior changes are necessary and some of them may provide useful insight for how we can facilitate transformations toward more sustainable supply and production.

363 citations


Journal ArticleDOI
TL;DR: Blockchain technology capabilities for contributing to social and environmental sustainability, research gaps, adverse effects of Blockchain, and future research directions are discussed.
Abstract: The objective of this study is to provide an overview of Blockchain technology and Industry 4.0 for advancing supply chains towards sustainability. First, drawing on the existing literature, we evaluate the capabilities of Industry 4.0 for sustainability under three main topics: (1) Internet of things (IoT)-enabled energy management in smart factories; (2) smart logistics and transportation; and (3) smart business models. We then expand beyond Industry 4.0 by unfolding the capabilities that Blockchain offers for increasing sustainability, under four main areas: (1) design of incentive mechanisms and tokenization to promote consumer green behavior; (2) enhanced visibility across the entire product lifecycle; (3) increased systems efficiency with decreased development and operational costs; and (4) improved sustainability monitoring and reporting performance across supply chain networks. Furthermore, Blockchain technology capabilities for contributing to social and environmental sustainability, research gaps, adverse effects of Blockchain, and future research directions are discussed.

330 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide guidance for investigating sustainability in supply chains in a post-COVID-19 environment, with a special focus on environmental sustainability, and set the stage for research requiring rethinking of some previous tenets and ontologies.
Abstract: This paper, a pathway, aims to provide research guidance for investigating sustainability in supply chains in a post-COVID-19 environment. Published literature, personal research experience, and insights from virtual open forums and practitioner interviews inform this study. COVID-19 pandemic events and responses are unprecedented to modern operations and supply chains. Scholars and practitioners seek to make sense of how this event will make us revisit basic scholarly notions and ontology. Sustainability implications exist. Short-term environmental sustainability gains occur, while long-term effects are still uncertain and require research. Sustainability and resilience are complements and jointly require investigation. The COVID-19 crisis is emerging and evolving. It is not clear whether short-term changes and responses will result in a new “normal.” Adjustment to current theories or new theoretical developments may be necessary. This pathway article only starts the conversation – many additional sustainability issues do arise and cannot be covered in one essay. Organizations have faced a major shock during this crisis. Environmental sustainability practices can help organizations manage in this and future competitive contexts. Broad economic, operational, social and ecological-environmental sustainability implications are included – although the focus is on environmental sustainability. Emergent organizational, consumer, policy and supply chain behaviors are identified. The authors take an operations and supply chain environmental sustainability perspective to COVID-19 pandemic implications, with sustainability representing the triple bottom-line dimensions of environmental, social and economic sustainability, and with a special focus on environmental sustainability. Substantial open questions for investigation are identified. This paper sets the stage for research requiring rethinking of some previous tenets and ontologies.

314 citations


Journal ArticleDOI
TL;DR: It is concluded that detection methods targeting antibodies are not suitable for screening early and asymptomatic cases, since most patients develop an antibody response only about 10 days after onset of symptoms; however, antibody detection methods can be combined with quantitative real-time reverse transcriptase-polymerase chain reaction (RT-qPCR) to significantly improve the sensitivity and specificity of diagnosis, and boost vaccine research.

282 citations


Posted Content
TL;DR: The current state of the art in the design and optimization of low-latency cyberphysical systems and applications, in which sources send time-stamped status updates to interested recipients, is described, and AoI timeliness metrics are presented.
Abstract: We summarize recent contributions in the broad area of age of information (AoI). In particular, we describe the current state of the art in the design and optimization of low-latency cyberphysical systems and applications in which sources send time-stamped status updates to interested recipients. These applications desire status updates at the recipients to be as timely as possible; however, this is typically constrained by limited system resources. We describe AoI timeliness metrics and present general methods of AoI evaluation analysis that are applicable to a wide variety of sources and systems. Starting from elementary single-server queues, we apply these AoI methods to a range of increasingly complex systems, including energy harvesting sensors transmitting over noisy channels, parallel server systems, queueing networks, and various single-hop and multi-hop wireless networks. We also explore how update age is related to MMSE methods of sampling, estimation and control of stochastic processes. The paper concludes with a review of efforts to employ age optimization in cyberphysical applications.
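The sawtooth age process behind these metrics is easy to compute directly. As an illustrative sketch (not code from the paper; the function name and interface are ours), the function below integrates the age curve for a list of timestamped deliveries, where the age at time t is t minus the generation time of the freshest update received so far:

```python
def time_average_age(updates, horizon):
    """Time-average age of information over [0, horizon].

    updates: list of (receive_time, generation_time) pairs, sorted by
    receive_time. Age starts at 0 at t = 0 (a fresh update at t = 0).
    Between deliveries age grows at unit rate; a delivery resets age to
    receive_time - generation_time if that update is fresher.
    """
    area = 0.0                 # integral of the sawtooth age curve
    t_prev, age = 0.0, 0.0
    for r, g in updates:
        if r > horizon:
            break
        dt = r - t_prev
        area += dt * (age + dt / 2.0)   # trapezoid: age grows linearly
        age = min(age + dt, r - g)      # keep whichever update is fresher
        t_prev = r
    dt = horizon - t_prev
    area += dt * (age + dt / 2.0)       # tail segment up to the horizon
    return area / horizon
```

Averaging this quantity over long simulated runs of, say, an elementary single-server queue reproduces the standard time-average AoI the survey analyzes.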

265 citations


Journal ArticleDOI
TL;DR: In this article, the authors argue that blockchain technology and the circular economy (CE) are two emergent concepts that can change the way we live for decades, and that the arrival of Industry 4.0 is set to transform organisational activities.
Abstract: Blockchain technology and the circular economy (CE) are two emergent concepts that can change the way we live for decades. Arrival of Industry 4.0 is set to transform organisational activities thro...

232 citations


Journal ArticleDOI
TL;DR: This review will nicely bridge ML with biosensors, and greatly expand chemometrics for detection, analysis, and diagnosis.
Abstract: Chemometrics play a critical role in biosensors-based detection, analysis, and diagnosis. Nowadays, as a branch of artificial intelligence (AI), machine learning (ML) have achieved impressive advances. However, novel advanced ML methods, especially deep learning, which is famous for image analysis, facial recognition, and speech recognition, has remained relatively elusive to the biosensor community. Herein, how ML can be beneficial to biosensors is systematically discussed. The advantages and drawbacks of most popular ML algorithms are summarized on the basis of sensing data analysis. Specially, deep learning methods such as convolutional neural network (CNN) and recurrent neural network (RNN) are emphasized. Diverse ML-assisted electrochemical biosensors, wearable electronics, SERS and other spectra-based biosensors, fluorescence biosensors and colorimetric biosensors are comprehensively discussed. Furthermore, biosensor networks and multibiosensor data fusion are introduced. This review will nicely bridge ML with biosensors, and greatly expand chemometrics for detection, analysis, and diagnosis.

214 citations


Journal ArticleDOI
25 Mar 2020-Viruses
TL;DR: This work provides comprehensive structural genomics and interactomics roadmaps of SARS-CoV-2 and uses this information to infer the possible functional differences and similarities with the related SARS coronavirus.
Abstract: During its first two and a half months, the recently emerged 2019 novel coronavirus, SARS-CoV-2, has already infected over one-hundred thousand people worldwide and has taken more than four thousand lives. However, the swiftly spreading virus also caused an unprecedentedly rapid response from the research community facing the unknown health challenge of potentially enormous proportions. Unfortunately, the experimental research to understand the molecular mechanisms behind the viral infection and to design a vaccine or antivirals is costly and takes months to develop. To expedite the advancement of our knowledge, we leveraged data about the related coronaviruses that is readily available in public databases and integrated these data into a single computational pipeline. As a result, we provide comprehensive structural genomics and interactomics roadmaps of SARS-CoV-2 and use this information to infer the possible functional differences and similarities with the related SARS coronavirus. All data are made publicly available to the research community.

207 citations


Proceedings ArticleDOI
18 May 2020
TL;DR: This paper proposes Load Value Injection (LVI) as an innovative technique to reversely exploit Meltdown-type microarchitectural data leakage by directly injecting incorrect, attacker-controlled values into a victim’s transient execution.
Abstract: The recent Spectre attack first showed how to inject incorrect branch targets into a victim domain by poisoning microarchitectural branch prediction history. In this paper, we generalize injection-based methodologies to the memory hierarchy by directly injecting incorrect, attacker-controlled values into a victim’s transient execution. We propose Load Value Injection (LVI) as an innovative technique to reversely exploit Meltdown-type microarchitectural data leakage. LVI abuses the fact that faulting or assisted loads, executed by a legitimate victim program, may transiently use dummy values or poisoned data from various microarchitectural buffers, before eventually being re-issued by the processor. We show how LVI gadgets allow an attacker to expose victim secrets and hijack transient control flow. We practically demonstrate LVI in several proof-of-concept attacks against Intel SGX enclaves, and we discuss implications for traditional user process and kernel isolation. State-of-the-art Meltdown and Spectre defenses, including widespread silicon-level and microcode mitigations, are orthogonal to our novel LVI techniques. LVI drastically widens the spectrum of incorrect transient paths. Fully mitigating our attacks requires serializing the processor pipeline with lfence instructions after possibly every memory load. Additionally, and even worse, due to implicit loads, certain instructions have to be blacklisted, including the ubiquitous x86 ret instruction. Intel plans compiler- and assembler-based full mitigations that will allow at least SGX enclave programs to remain secure on LVI-vulnerable systems. Depending on the application and optimization strategy, we observe extensive overheads of factor 2 to 19 for prototype implementations of the full mitigation.

159 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator and empirically compares the proposed approach with several state-of-the-art outlier detectors on both synthetic and real-world datasets.
Abstract: Outlier detection is an important topic in machine learning and has been used in a wide range of applications. In this paper, we approach outlier detection as a binary-classification issue by sampling potential outliers from a uniform reference distribution. However, due to the sparsity of data in high-dimensional space, a limited number of potential outliers may fail to provide sufficient information to assist the classifier in describing a boundary that can separate outliers from normal data effectively. To address this, we propose a novel Single-Objective Generative Adversarial Active Learning (SO-GAAL) method for outlier detection, which can directly generate informative potential outliers based on the mini-max game between a generator and a discriminator. Moreover, to prevent the generator from falling into the mode-collapse problem, training should be stopped once SO-GAAL is able to provide sufficient information; however, without any prior information, determining this stopping point is extremely difficult. Therefore, we expand the network structure of SO-GAAL from a single generator to multiple generators with different objectives (MO-GAAL), which can generate a reasonable reference distribution for the whole dataset. We empirically compare the proposed approach with several state-of-the-art outlier detection methods on both synthetic and real-world datasets. The results show that MO-GAAL outperforms its competitors in the majority of cases, especially for datasets with various cluster types or a high ratio of irrelevant variables. The experiment code is available at: https://github.com/leibinghe/GAAL-based-outlier-detection
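The binary-classification framing in the abstract, before the adversarial extension, can be sketched with plain uniform reference sampling. The toy below is our illustration of the baseline idea that SO-GAAL and MO-GAAL improve on by generating informative reference points instead of sampling them uniformly; all names and parameters here are ours:

```python
import random

def outlier_scores(data, n_ref=200, k=10, seed=0):
    """Score 2-D points against a uniform reference sample.

    A point whose k nearest neighbours are mostly uniform reference points
    lies in a sparse region and gets a score near 1 (outlier-like); a point
    surrounded by real data gets a score near 0.
    """
    rng = random.Random(seed)
    xs = [p[0] for p in data]
    ys = [p[1] for p in data]
    # Uniform reference sample over the data's bounding box
    ref = [(rng.uniform(min(xs), max(xs)), rng.uniform(min(ys), max(ys)))
           for _ in range(n_ref)]
    labeled = [(q, 0) for q in data] + [(q, 1) for q in ref]
    scores = []
    for p in data:
        # Squared distances to every other labeled point, nearest first
        neighbours = sorted(
            ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2, label)
            for q, label in labeled if q is not p)
        scores.append(sum(label for _, label in neighbours[:k]) / k)
    return scores
```

As the abstract notes, this uniform sampling becomes uninformative in high dimensions, which is precisely the gap the generator-produced reference points are meant to close.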

Journal ArticleDOI
TL;DR: This matrix, developed by the Consensus for Experimental Design in Electromyography (CEDE) project, presents six approaches to EMG normalization, along with general considerations for normalization, features that should be reported, definitions, and the "pros and cons" of each normalization approach.


Journal ArticleDOI
TL;DR: A novel approach that utilizes deep two-dimensional (2-D) Convolutional Neural Networks to extract features from 2-D scalograms generated from PV system data in order to effectively detect and classify PV system faults is presented.
Abstract: Fault diagnosis in photovoltaic (PV) arrays is essential in enhancing power output as well as the useful life span of a PV system. Severe faults such as Partial Shading (PS) and high impedance faults, low location mismatch, and the presence of Maximum Power Point Tracking (MPPT) make fault detection challenging in harsh environmental conditions. In this regard, there have been several attempts made by various researchers to identify PV array faults. However, most of the previous work has focused on fault detection and classification in only a few faulty scenarios. This paper presents a novel approach that utilizes deep two-dimensional (2-D) Convolutional Neural Networks (CNN) to extract features from 2-D scalograms generated from PV system data in order to effectively detect and classify PV system faults. An in-depth quantitative evaluation of the proposed approach is presented and compared with previous classification methods for PV array faults - both classical machine learning based and deep learning based. Unlike contemporary work, five different faulty cases (including faults in PS - on which no work has been done before in the machine learning domain) have been considered in our study, along with the incorporation of MPPT. We generate a consistent dataset over which to compare ours and previous approaches, to make for the first (to the best of our knowledge) comprehensive and meaningful comparative evaluation of fault diagnosis. It is observed that the proposed method involving fine-tuned pre-trained CNN outperforms existing techniques, achieving a high fault detection accuracy of 73.53%. Our study also highlights the importance of representative and discriminative features to classify faults (as opposed to the use of raw data), especially in the noisy scenario, where our method achieves the best performance of 70.45%. We believe that our work will serve to guide future research in PV system fault diagnosis.

Journal ArticleDOI
TL;DR: In this article, a cascade of DL using a combination of convolutional neural networks (CNNs) and RNNs was proposed to improve the performance of affective EEG-based person identification.
Abstract: Electroencephalography (EEG) is another method for performing person identification (PI). Due to the nature of EEG signals, EEG-based PI is typically done while a person is performing a mental task such as motor control. However, few studies have used EEG-based PI while the person is in different mental states (affective EEG). The aim of this paper is to improve the performance of affective EEG-based PI using a deep learning (DL) approach. We propose a cascade of DL using a combination of convolutional neural networks (CNNs) and recurrent neural networks (RNNs). CNNs are used to handle the spatial information from the EEG while RNNs extract the temporal information. We evaluated two types of RNNs, namely long short-term memory (LSTM) and gated recurrent unit (GRU). The proposed method is evaluated on the state-of-the-art affective data set DEAP. The results indicate that CNN-GRU and CNN-LSTM can perform PI from different affective states and reach up to a 99.90%–100% mean correct recognition rate (CRR). This significantly outperformed a support vector machine baseline system that used power spectral density features. Notably, the 100% mean CRR came from 32 subjects in the DEAP data set. Even after the reduction of the number of EEG electrodes from 32 to 5 for more practical applications, the model could still maintain an optimal result obtained from the frontal region, reaching up to 99.17%. Between the two DL models, we found that CNN-GRU and CNN-LSTM performed similarly while CNN-GRU trained faster. In conclusion, the studied DL approaches overcame the influence of affective states in EEG-based PI reported in previous works.

Journal ArticleDOI
13 Feb 2020-Sensors
TL;DR: Mechanisms of recognition events between imprinted polymers with different biomarkers, such as signaling molecules, microbial toxins, viruses, and bacterial and fungal cells are discussed.
Abstract: Owing to their merits of simple, fast, sensitive, and low cost, electrochemical biosensors have been widely used for the diagnosis of infectious diseases. As a critical element, the receptor determines the selectivity, stability, and accuracy of the electrochemical biosensors. Molecularly imprinted polymers (MIPs) and surface imprinted polymers (SIPs) have great potential to be robust artificial receptors. Therefore, extensive studies have been reported to develop MIPs/SIPs for the detection of infectious diseases with high selectivity and reliability. In this review, we discuss mechanisms of recognition events between imprinted polymers with different biomarkers, such as signaling molecules, microbial toxins, viruses, and bacterial and fungal cells. Then, various preparation methods of MIPs/SIPs for electrochemical biosensors are summarized. Especially, the methods of electropolymerization and micro-contact imprinting are emphasized. Furthermore, applications of MIPs/SIPs based electrochemical biosensors for infectious disease detection are highlighted. At last, challenges and perspectives are discussed.

Proceedings ArticleDOI
23 Aug 2020
TL;DR: Attentive knowledge tracing is proposed, which couples flexible attention-based neural network models with a series of novel, interpretable model components inspired by cognitive and psychometric models and exhibits excellent interpretability and thus has potential for automated feedback and personalization in real-world educational settings.
Abstract: Knowledge tracing (KT) refers to the problem of predicting future learner performance given their past performance in educational applications. Recent developments in KT using flexible deep neural network-based models excel at this task. However, these models often offer limited interpretability, thus making them insufficient for personalized learning, which requires using interpretable feedback and actionable recommendations to help learners achieve better learning outcomes. In this paper, we propose attentive knowledge tracing (AKT), which couples flexible attention-based neural network models with a series of novel, interpretable model components inspired by cognitive and psychometric models. AKT uses a novel monotonic attention mechanism that relates a learner's future responses to assessment questions to their past responses; attention weights are computed using exponential decay and a context-aware relative distance measure, in addition to the similarity between questions. Moreover, we use the Rasch model to regularize the concept and question embeddings; these embeddings are able to capture individual differences among questions on the same concept without using an excessive number of parameters. We conduct experiments on several real-world benchmark datasets and show that AKT outperforms existing KT methods (by up to $6%$ in AUC in some cases) on predicting future learner responses. We also conduct several case studies and show that AKT exhibits excellent interpretability and thus has potential for automated feedback and personalization in real-world educational settings.
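The monotonic attention mechanism described above can be sketched numerically. The function below is our simplified illustration: AKT's distance measure is context-aware and its decay rate is learned, whereas here the distance is just how many interactions ago a response occurred and the decay rate is a fixed parameter. Multiplying the exponentiated score by exp(-theta * distance) is equivalent to subtracting theta * distance before the softmax, which is what the code does:

```python
import math

def monotonic_attention_weights(query, keys, distances, theta=0.5):
    """Softmax attention with exponential temporal decay (AKT-style sketch).

    query: query vector; keys: list of key vectors for past responses;
    distances: temporal distance of each past response from the current
    question; theta: decay rate (learned in AKT, fixed here).
    """
    d = len(query)
    # Scaled dot-product similarity, with the decay applied in log-space
    scores = [sum(qi * ki for qi, ki in zip(query, key)) / math.sqrt(d)
              - theta * dist
              for key, dist in zip(keys, distances)]
    m = max(scores)                          # numerically stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

With identical keys, a response 3 steps further in the past receives exp(theta * 3) times less weight, which is the monotonic down-weighting of stale practice the paper motivates from forgetting models.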

Journal ArticleDOI
TL;DR: In this article, the authors develop Defect Structure Process Maps (DSPMs) to quantify the role of porosity as an exemplary defect structure in powder bed printed materials, and demonstrate that large-scale defects in LPBF materials can be successfully predicted, and thus mitigated or minimized, via appropriate selection of processing parameters.
Abstract: Accurate detection, characterization, and prediction of defects has great potential for immediate impact in the production of fully-dense and defect free metal additive manufacturing (AM) builds. Accordingly, this paper presents Defect Structure Process Maps (DSPMs) as a means of quantifying the role of porosity as an exemplary defect structure in powder bed printed materials. Synchrotron-based micro-computed tomography (μSXCT) was used to demonstrate that metal AM defects follow predictable trends within processing parameter space for laser powder bed fusion (LPBF) materials. Ti-6Al-4 V test blocks were fabricated on an EOS M290 utilizing variations in laser power, scan velocity, and hatch spacing. In general, characteristic under-melting or lack-of-fusion defects were discovered in the low laser power, high scan velocity region of process space via μSXCT. These defects were associated with insufficient overlap between adjacent melt tracks and can be avoided through the application of a lack-of-fusion criterion using melt pool geometric modeling. Large-scale keyhole defects were also successfully mitigated for estimated melt pool morphologies associated with shallow keyhole front wall angles. Process variable selections resulting in deep keyholes, i.e., high laser power and low scan velocity, exhibit a substantial increase of spherical porosity as compared to the nominal (manufacturer recommended) processing parameters for Ti-6Al-4 V. Defects within fully-dense process space were also discovered, and are associated with gas porosity transfer to the AM test blocks during the laser-powder interaction. Overall, this work points to the fact that large-scale defects in LPBF materials can be successfully predicted and thus mitigated/minimized via appropriate selection of processing parameters.
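The lack-of-fusion criterion applied via melt pool geometric modeling is typically a cross-sectional overlap check. One common form, which may differ from the exact model used in the paper, flags risk when (h/W)^2 + (t/D)^2 > 1 for hatch spacing h, melt pool width W, layer thickness t, and melt pool depth D:

```python
def lack_of_fusion_risk(hatch, width, layer, depth):
    """True if adjacent melt tracks are predicted not to fuse fully.

    hatch: hatch spacing h; width: melt pool width W;
    layer: layer thickness t; depth: melt pool depth D (same units).
    Geometric criterion: (h/W)^2 + (t/D)^2 > 1 indicates insufficient
    overlap between neighbouring melt tracks.
    """
    return (hatch / width) ** 2 + (layer / depth) ** 2 > 1.0
```

Low laser power and high scan velocity shrink W and D, pushing the left-hand side above 1, which matches the lack-of-fusion region of process space reported in the abstract.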

Journal ArticleDOI
TL;DR: In this article, the authors explored how firms develop localization, agility and digitization (L-A-D) capabilities by applying (or not applying) their critical circular economy (CE) and blockchain technology (BCT)-related resources and capabilities that they either already possess or acquire from external agents.
Abstract: Using the resource-based and the resource dependence theoretical approaches of the firm, the paper explores firm responses to supply chain disruptions during COVID-19. The paper explores how firms develop localization, agility and digitization (L-A-D) capabilities by applying (or not applying) their critical circular economy (CE) and blockchain technology (BCT)-related resources and capabilities that they either already possess or acquire from external agents. An abductive approach, applying exploratory qualitative research, was conducted over a sample of 24 firms. The sample represented different industries to study their critical BCT and CE resources and capabilities and the L-A-D capabilities. Firm resources and capabilities were classified using the technology, organization and environment (TOE) framework. Findings show significant patterns in adoption levels of the blockchain-enabled circular economy system (BCES) and L-A-D capability development. The greater the BCES adoption capabilities, the greater the L-A-D capabilities. Organizational size and industry both influence the relationship between BCES and L-A-D. Accordingly, research propositions and a research framework are proposed. Given the limited sample size, the generalizability of the findings is limited. Our findings extend supply chain resiliency research. A series of propositions provide opportunities for future research. The resource-based view and resource-dependency theories are useful frameworks for better understanding the relationship between firm resources and supply chain resilience. The results and discussion of this study serve as useful guidance for practitioners to create CE and BCT resources and capabilities for improving supply chain resiliency. The study shows the socio-economic and socio-environmental importance of BCES in the COVID-19 or similar crises. The study is one of the initial attempts to highlight the possibilities of BCES across multiple industries and their value during pandemics and disruptions.


Journal ArticleDOI
TL;DR: The failure mechanism of Li6PS5Cl at high voltage is studied through in situ Raman spectroscopy, its stability with a high-voltage LiNi1/3Mn1/3Co1/3O2 (NMC) cathode is investigated, and the electrochemical stability window of Li-In is greatly improved.
Abstract: All-solid-state lithium batteries (ASLBs) are promising for the next generation energy storage system with critical safety. Among various candidates, thiophosphate-based electrolytes have shown gre...

Journal ArticleDOI
TL;DR: In this paper, tin-based halide perovskite solar cells, which hold the most promise among lead-free PSCs, are studied; they are plagued with inadequate environmental stability and power-conversion efficiency (PCE).
Abstract: Tin-based halide perovskite solar cells (PSCs) hold the most promise among lead-free PSCs, but they are plagued with inadequate environmental stability and power-conversion efficiency (PCE). Here w...

Journal ArticleDOI
TL;DR: Recent trends and research directions are identified, noting where successful methodology has been established, for example in computer generation of packings of complex particles, and where more work is needed, for example in the meshing of nonsphere packings and the simulation of industrial-size packed tubes.
Abstract: Flow, heat, and mass transfer in fixed beds of catalyst particles are complex phenomena and, when combined with catalytic reactions, are multiscale in both time and space; therefore, advanced computational techniques are being applied to fixed bed modeling to an ever-greater extent. The fast-growing literature on the use of computational fluid dynamics (CFD) in fixed bed design reflects the rapid development of this subfield of reactor modeling. We identify recent trends and research directions in which successful methodology has been established, for example, in computer generation of packings of complex particles, and where more work is needed, for example, in the meshing of nonsphere packings and the simulation of industrial-size packed tubes. Development of fixed bed reactor models, by either using CFD directly or obtaining insight, closures, and parameters for engineering models from simulations, will increase confidence in using these methods for design along with, or instead of, expensive pilot-scale experiments.

Journal ArticleDOI
TL;DR: In this paper, a computational framework which weakly couples a finite element thermal model to a non-equilibrium PF model was developed to investigate the rapid solidification microstructure of a Ni-Nb alloy during L-PBF.

Journal ArticleDOI
TL;DR: The authors propose the use of magnesia (MgO) in ambient looping processes to remove CO2 from the air and find that the proposed approach would cost $46–159 tCO2−1 net removed from the atmosphere, considering both grid and solar electricity resources, without including post-processing costs.
Abstract: To avoid dangerous climate change, new technologies must remove billions of tonnes of CO2 from the atmosphere every year by mid-century. Here we detail a land-based enhanced weathering cycle utilizing magnesite (MgCO3) feedstock to repeatedly capture CO2 from the atmosphere. In this process, MgCO3 is calcined, producing caustic magnesia (MgO) and high-purity CO2. This MgO is spread over land to carbonate for a year by reacting with atmospheric CO2. The carbonate minerals are then recollected and re-calcined, and the reproduced MgO is spread over land to carbonate again. We show this process could cost approximately $46–159 tCO2−1 net removed from the atmosphere, considering grid and solar electricity without post-processing costs. This technology may achieve lower costs than projections for more extensively engineered Direct Air Capture methods. It has the scalable potential to remove at least 2–3 GtCO2 year−1, and may make a meaningful contribution to mitigating climate change. Removing billions of tonnes of CO2 from the atmosphere every year by mid-century will require new technologies; here the authors propose the use of magnesia (MgO) in an ambient looping process and find that it would cost $46–159 tCO2−1 net removed, considering both grid and solar electricity resources, without including post-processing costs.
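As a back-of-envelope illustration of the looping cycle's material demand, the stoichiometry of MgO + CO2 → MgCO3 fixes how much magnesia must carbonate per tonne of CO2 captured. This is a generic mass-balance sketch, not a calculation from the paper; the `capture_fraction` parameter is an assumed illustrative quantity.

```python
# Stoichiometric sketch (not from the study): tonnes of MgO that must
# carbonate to capture one tonne of CO2, assuming MgO + CO2 -> MgCO3.
M_MGO = 24.305 + 15.999          # g/mol, magnesia (Mg + O)
M_CO2 = 12.011 + 2 * 15.999      # g/mol, carbon dioxide (C + 2 O)

def mgo_per_tonne_co2(capture_fraction=1.0):
    """Tonnes of MgO required per tonne of CO2 captured.

    capture_fraction: assumed fraction of the spread MgO that actually
    carbonates during the year-long exposure (an illustrative parameter,
    not a figure from the study).
    """
    return (M_MGO / M_CO2) / capture_fraction

ideal = mgo_per_tonne_co2()        # ~0.92 t MgO per t CO2 at full conversion
partial = mgo_per_tonne_co2(0.75)  # more MgO is needed if only 75% carbonates
```

At full conversion, slightly less than one tonne of MgO suffices per tonne of CO2, which is why incomplete carbonation, recollection losses, and calcination energy, rather than raw stoichiometry, dominate the cost estimates discussed in the abstract.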

Journal ArticleDOI
TL;DR: In this paper, the authors propose to use carbon mineralization in ultramafic rocks to remove carbon dioxide from air (carbon dioxide removal, CDR) combined with permanent solid storage (PSS) in at least four ways: 1. Surficial CDR: CO2-bearing air and surface waters are reacted with crushed and/or ground mine tailings.

Journal ArticleDOI
TL;DR: It is demonstrated that ultrafast optical pulses with wavelengths straddling the visible range induce transient broadband THz transparency in the MXene that persists for nanoseconds and is independent of temperature from 95 K to 290 K.
Abstract: High electrical conductivity and strong absorption of electromagnetic radiation in the terahertz (THz) frequency range by metallic 2D MXene Ti3C2Tx make it a promising material for electromagnetic interference shielding, THz detectors, and transparent conducting electrodes. Here, we demonstrate that ultrafast optical pulses with wavelengths straddling the visible range (400 and 800 nm) induce transient broadband THz transparency in the MXene that persists for nanoseconds. We demonstrate that optically induced transient THz transparency is independent of temperature from 95 to 290 K. This discovery opens new possibilities for development of switchable electromagnetic interference shielding materials and devices that can be rendered partially transparent on demand for transmitting THz signals, or for designing new THz devices such as sensitive optically gated detectors.

Journal ArticleDOI
TL;DR: This study reviews and analyzes 132 data envelopment analysis (DEA) application studies in the insurance industry published from 1993 through July 2018, covering both applications and methodologies, and highlights the existing gaps in DEA applications in the industry.

Journal ArticleDOI
TL;DR: The authors survey a set of empirically validated frameworks and principles for enhancing mathematics teaching and learning as dialogic multimodal activity, and synthesize the set of principles for educational practice.
Abstract: A rising epistemological paradigm in the cognitive sciences—embodied cognition—has been stimulating innovative approaches, among educational researchers, to the design and analysis of STEM teaching and learning. The paradigm promotes theorizations of cognitive activity as grounded, or even constituted, in goal-oriented multimodal sensorimotor phenomenology. Conceptual learning, per these theories, could emanate from, or be triggered by, experiences of enacting or witnessing particular movement forms, even before these movements are explicitly signified as illustrating target content. Putting these theories to practice, new types of learning environments are being explored that utilize interactive technologies to initially foster student enactment of conceptually oriented movement forms and only then formalize these gestures and actions in disciplinary formats and language. In turn, new research instruments, such as multimodal learning analytics, now enable researchers to aggregate, integrate, model, and represent students’ physical movements, eye-gaze paths, and verbal–gestural utterances so as to track and evaluate emerging conceptual capacity. We—a cohort of cognitive scientists and design-based researchers of embodied mathematics—survey a set of empirically validated frameworks and principles for enhancing mathematics teaching and learning as dialogic multimodal activity, and we synthesize a set of principles for educational practice.

Journal ArticleDOI
TL;DR: The objective-DEMATEL method is used to evaluate the relationship between a new SSCF measures framework and CE-targeted performance.
Abstract: The circular economy (CE) is an evolving economic and sustainable development model. In this new environment, companies face a more dynamic, uncertain, and complex market environment. These challen...