
Showing papers in "AIChE Journal in 2021"

Journal ArticleDOI
TL;DR: This work uses long short-term memory (LSTM) networks, trained on data corrupted by two types of noise, Gaussian and non-Gaussian, to build the process model used in a model predictive controller (MPC).
Abstract: Author: Junwei Luo; Advisor: Panagiotis D. Christofides. This work focuses on applying machine learning modeling to predictive control of nonlinear processes with noisy data. We use long short-term memory (LSTM) networks, trained on sensor measurements corrupted by two types of noise, Gaussian and non-Gaussian, to build the process model used in a model predictive controller (MPC). We first discuss LSTM training with noisy data following a Gaussian distribution and demonstrate that the standard LSTM network is capable of capturing the underlying process dynamic behavior by reducing the impact of noise. Subsequently, given that the standard LSTM performs poorly on a noisy dataset from industrial operation (i.e., non-Gaussian noisy data), we propose an LSTM network using the Monte Carlo dropout method to reduce overfitting to noisy data. Furthermore, an LSTM network using the co-teaching training method is proposed to further improve approximation performance when noise-free data from a process model capturing the nominal process state evolution is available. A chemical process example is used throughout the study to illustrate the application of the proposed modeling approaches and demonstrate their open- and closed-loop performance under a Lyapunov-based model predictive controller with state measurements corrupted by industrial noise.
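The Monte Carlo dropout idea used in this abstract can be illustrated with a minimal sketch: dropout is kept active at prediction time, and many stochastic forward passes are averaged so that the prediction does not overfit any single noisy realization. This is a hypothetical single-layer illustration with NumPy, not the paper's LSTM architecture; all names and shapes here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W, drop_rate, rng):
    """One stochastic forward pass: dropout mask applied to the hidden layer."""
    h = np.tanh(W @ x)
    mask = rng.random(h.shape) > drop_rate
    h = h * mask / (1.0 - drop_rate)  # inverted-dropout scaling keeps E[h] unchanged
    return h.sum()

def mc_dropout_predict(x, W, drop_rate=0.2, n_samples=200, rng=rng):
    """Average many dropout-perturbed passes; the spread indicates uncertainty."""
    preds = np.array([forward(x, W, drop_rate, rng) for _ in range(n_samples)])
    return preds.mean(), preds.std()

x = np.array([0.5, -1.0, 2.0])
W = rng.standard_normal((4, 3))
mean, std = mc_dropout_predict(x, W)
```

In the paper's setting, the averaged prediction would feed the MPC's process model in place of a single deterministic LSTM output.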

28 citations

Journal ArticleDOI
TL;DR: This paper uses context-free grammar (CFG) based representations of molecules in a neural machine translation framework to predict chemical reaction outcomes, achieving an accuracy of 80.1% on a standard reaction dataset with a model that has only a fraction of the training parameters of other sequence-to-sequence-based works in this area.
Abstract: Discovering and designing novel materials is a challenging problem, as it often requires searching a combinatorially large space of potential candidates. Evaluating all candidates experimentally is typically infeasible, as it requires great amounts of effort, time, expertise, and money. The ability to predict reaction outcomes without performing extensive experiments is, therefore, important. Towards that goal, we report an approach that uses context-free grammar (CFG) based representations of molecules in a neural machine translation framework. We formulate the reaction-prediction task as a machine translation problem that involves discovering the transformations from the source sequence (comprising the reactants and agents) to the target sequence (comprising the major product) in the reaction. The grammar ontology-based representation of molecules hierarchically incorporates rich molecular structure information that, in principle, should be valuable for modeling chemical reactions. We achieve an accuracy of 80.1% on a standard reaction dataset using a model characterized by only a fraction of the number of training parameters of other sequence-to-sequence-based works in this area. Moreover, 99% of the predictions made on the same reaction dataset were found to be syntactically valid. We conclude that CFG-based ontological representations could be an efficient way of incorporating structural information, ensuring chemically valid predictions, and overcoming overfitting in complex machine learning architectures employed in reaction prediction tasks.
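The key property of a CFG-based molecular representation, that decoding a sequence of production rules yields a syntactically valid string by construction, can be sketched with a toy grammar. This is a hypothetical illustration, not the grammar used in the paper (which represents real molecular strings): the nonterminals, rules, and "molecule-like" output alphabet below are all assumptions.

```python
# Toy CFG: rule indices are the model's output tokens; decoding expands
# a leftmost derivation, so every complete rule sequence is well-formed.
NONTERMINALS = {"S", "atom"}
RULES = {
    0: ("S", ["atom"]),              # S    -> atom
    1: ("S", ["(", "S", ")", "S"]),  # S    -> ( S ) S   (a branch)
    2: ("atom", ["C"]),              # atom -> C
    3: ("atom", ["O"]),              # atom -> O
}

def decode(rule_seq):
    """Expand a leftmost derivation of S into a terminal string."""
    stack = ["S"]
    out = []
    rules = iter(rule_seq)
    while stack:
        sym = stack.pop(0)          # leftmost symbol
        if sym in NONTERMINALS:
            lhs, rhs = RULES[next(rules)]
            if lhs != sym:
                raise ValueError("rule does not match leftmost nonterminal")
            stack = rhs + stack     # substitute the rule's right-hand side
        else:
            out.append(sym)         # terminal: emit directly
    return "".join(out)
```

For example, the rule sequence `[1, 0, 2, 0, 3]` decodes to the balanced string `(C)O`; a sequence-to-sequence model that emits rule indices (masked to match the current leftmost nonterminal) therefore cannot produce an unbalanced or malformed string, which is the intuition behind the 99% syntactic-validity figure.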

Journal ArticleDOI
TL;DR: Lively discusses the future of the refining industry and the role separations systems will play in it, in addition to specific research challenges facing membrane and adsorption technologies.
Abstract: The hydrocarbon processing industry is in the midst of a major shift in feedstocks, structure, and products. Aggressive carbon abatement targets and intrinsic efficiency advantages from electric vehicles strongly undercut the advantages of fossil fuels, which are the majority product of this industry. However, the immense value of the existing hydrocarbon infrastructure suggests that fossil feedstocks, processing, and products will remain the dominant form for quite some time. Existing fossil-based plants with compatible equipment (e.g., hydrocrackers) will begin the externality-induced transition over to bio- and e-refinery formats to leverage this valuable existing infrastructure and its logistical connections. Advanced separations play a role in this transition in several ways. First, advanced separations can partner with existing separation units (e.g., distillation) to extend the time in which fossil-based processing remains competitive under modern externalities (e.g., CO2). Moreover, energy- and capital-efficient separation technologies can mitigate the decrease in returns on energy invested in fossil-based refining due to greenhouse gas emission mandates. While bio- and e-refineries are often thought of as a greenfield for advanced separations technologies (thus bypassing the problem of working, amortized capital in existing plants), in fact, the adaptation of existing fossil-based refineries to renewable feedstocks suggests that the "hybrid" separation system paradigm is likely to be the standard for years to come. Nevertheless, these "green refineries" introduce many new separations challenges that are likely to be poorly addressed by conventional technologies. Finally, decades-old regulatory definitions of fuels will continue to promote distillation-centric refinery designs; flexibility not only in these regulations, but also in end use, will pave the way for low-energy, low-carbon separation techniques.
In this talk, Lively comments on the future of the refining industry and the role of separations systems, in addition to specific research challenges facing membrane and adsorption technologies.

Bio: Ryan P. Lively received his B.S. and Ph.D. degrees in Chemical Engineering from the Georgia Institute of Technology, working with Prof. William J. Koros. This was followed by a postdoctoral research position at Algenol Biofuels under the guidance of Dr. Ronald R. Chance. He joined the faculty of Chemical and Biomolecular Engineering at the Georgia Institute of Technology in 2013 and was promoted to Associate Professor in 2018. His current research seeks to advance fluid separation processes critical to the global energy infrastructure.

Journal ArticleDOI
TL;DR: The potential to blend MSM, HPC, and ML presents opportunities for unbounded innovation and promises to represent the future of MSM and explainable ML, likely defining both fields in the 21st century.
Abstract: Research problems in the physical, engineering, and biological sciences often span multiple time and length scales, owing to the complexity of the underlying information-transfer mechanisms. Multiscale modeling (MSM) and high-performance computing (HPC) have emerged as indispensable tools for tackling such complex problems. We review the foundations, historical developments, and current paradigms in MSM. A paradigm shift in MSM implementations is being fueled by the rapid advances and emerging paradigms in HPC at the dawn of exascale computing. Moreover, amidst the explosion of data in science, engineering, and medicine, machine learning (ML) integrated with MSM is poised to significantly enhance the capabilities of standard MSM approaches, particularly in the face of increasing problem complexity. The potential to blend MSM, HPC, and ML presents opportunities for unbounded innovation and promises to represent the future of MSM and explainable ML, likely defining both fields in the 21st century.
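One common pattern for integrating ML with MSM, which this review discusses in general terms, is to replace an expensive fine-scale computation with a cheap data-driven surrogate inside a coarse-scale solver. The sketch below is a hypothetical illustration of that pattern only: the stand-in "fine-scale" closure, the cubic polynomial surrogate, and the explicit-Euler stepper are all assumptions, not models from the review.

```python
import numpy as np

def fine_scale_flux(u):
    """Stand-in for an expensive microscale computation (e.g., a closure term)."""
    return -u**3 + 0.5 * u

# Offline: sample the fine-scale model and fit a cheap surrogate to it.
u_train = np.linspace(-1.5, 1.5, 50)
coeffs = np.polyfit(u_train, fine_scale_flux(u_train), deg=3)
surrogate = np.poly1d(coeffs)

def integrate(u0, dt=0.01, steps=100, flux=surrogate):
    """Coarse explicit-Euler time stepper that queries the supplied closure."""
    u = u0
    for _ in range(steps):
        u = u + dt * flux(u)
    return u

# Online: the coarse solver calls the surrogate instead of the fine model.
u_surrogate = integrate(0.8)
u_reference = integrate(0.8, flux=fine_scale_flux)
```

Because the surrogate is queried thousands of times per coarse step in realistic MSM workflows, even a modest per-call speedup translates into large overall savings, which is one driver behind the MSM-ML integration the review anticipates.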