
Showing papers in "AIChE Journal in 2011"


Journal ArticleDOI
TL;DR: This work develops a superstructure-based strategy in which complex unit models are replaced with surrogate models built from data generated via commercial process simulators, and shows how these models can be reformulated and incorporated into mathematical programming superstructure formulations.
Abstract: In principle, optimization-based “superstructure” methods for process synthesis can be more powerful than sequential-conceptual methods as they account for all complex interactions between design decisions. However, these methods have not been widely adopted because they lead to mixed-integer nonlinear programs that are hard to solve, especially when realistic unit operation models are used. To address this challenge, we develop a superstructure-based strategy where complex unit models are replaced with surrogate models built from data generated via commercial process simulators. In developing this strategy, we study aspects such as the systematic design of process unit surrogate models, the generation of simulation data, the selection of the surrogate's structure, and the required model fitting. We also present how these models can be reformulated and incorporated into mathematical programming superstructure formulations. Finally, we discuss a number of applications of the proposed strategy. © 2010 American Institute of Chemical Engineers AIChE J, 2011
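As a rough, illustrative sketch of the surrogate-building step (not the authors' actual procedure), the snippet below fits a simple quadratic response surface to sampled input/output data; in practice the samples would come from a commercial flowsheet simulator, and the sample function, variable names, and model form here are placeholders.

```python
# Hedged sketch: build an algebraic surrogate of a "unit model" by least-squares
# fitting a quadratic basis to sampled input/output data. The simulator() below
# is a stand-in for a rigorous flowsheet calculation. Requires NumPy only.
import numpy as np

def simulator(x):                       # placeholder, not a real process model
    return 3.0 * x[:, 0] - 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(50, 2))     # sampled unit-operation inputs
y = simulator(X)                            # "simulation" outputs

def basis(X):                               # [1, x1, x2, x1^2, x2^2, x1*x2]
    return np.column_stack([np.ones(len(X)), X, X**2, X[:, 0] * X[:, 1]])

w, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# The fitted polynomial can now replace the unit model as algebraic constraints
# inside a superstructure MINLP.
x_new = np.array([[0.3, 0.7]])
print((basis(x_new) @ w)[0])
```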

192 citations



Journal ArticleDOI
TL;DR: In this paper, the authors proposed a general superstructure and a model for the global optimization of integrated process water networks, which consists of multiple sources of water, water-using processes, wastewater treatment, and pre-treatment operations.
Abstract: We propose a general superstructure and a model for the global optimization of integrated process water networks. The superstructure consists of multiple sources of water, water-using processes, wastewater treatment, and pre-treatment operations. Unique features are that all feasible interconnections are considered between them and multiple sources of water can be used. The proposed model is formulated as a nonlinear programming (NLP) and as a mixed-integer nonlinear programming (MINLP) problem for the case when 0–1 variables are included for the cost of piping and to establish optimal trade-offs between cost and network complexity. To effectively solve the NLP and MINLP models to global optimality we propose tight bounds on the variables, which are expressed as general equations. We also incorporate the cut proposed by Karuppiah and Grossmann to significantly improve the strength of the lower bound for the global optimum. The proposed model is tested on several examples. © 2010 American Institute of Chemical Engineers AIChE J, 2011
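For illustration only (a toy problem, not the paper's network model), the sketch below shows the kind of bilinear flow-times-concentration constraint that makes these water-network NLPs nonconvex, written in Pyomo; Pyomo and an NLP solver such as IPOPT are assumed to be installed, and all parameter values are invented.

```python
# Hedged, illustrative toy (not the paper's model): one water-using process fed
# by freshwater and recycled treated water.
from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           NonNegativeReals, SolverFactory, minimize)

m = ConcreteModel()
m.fresh = Var(domain=NonNegativeReals)       # freshwater fed to the process, t/h
m.recycle = Var(domain=NonNegativeReals)     # treated water recycled, t/h
m.c_out = Var(bounds=(0, 500))               # process outlet concentration, ppm

LOAD = 2.0       # contaminant picked up in the process, kg/h
C_TREAT = 20.0   # treated-water concentration, ppm
C_MAX_IN = 50.0  # maximum inlet concentration to the process, ppm

# Inlet quality limit (linear here because the treated-water quality is fixed)
m.inlet = Constraint(expr=m.recycle * C_TREAT
                     <= C_MAX_IN * (m.fresh + m.recycle))
# Contaminant balance over the process (bilinear: total flow x outlet conc.)
m.balance = Constraint(expr=(m.fresh + m.recycle) * m.c_out
                       == m.recycle * C_TREAT + 1000.0 * LOAD)
m.obj = Objective(expr=m.fresh, sense=minimize)

# A local NLP solve; the tight variable bounds and cuts discussed in the
# abstract are what make a rigorous global search tractable.
SolverFactory('ipopt').solve(m)
print(m.fresh(), m.recycle(), m.c_out())
```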

181 citations


Journal ArticleDOI
TL;DR: In this article, the authors used a 6.5-m-tall CFB carbonator connected to a twin CFB calciner to investigate the effect of these variables on CO2 capture efficiency.
Abstract: Calcium looping processes for capturing CO2 from large emissions sources are based on the use of CaO particles as sorbent in circulating fluidized-bed (CFB) reactors. A continuous flow of CaO from an oxyfired calciner is fed into the carbonator and a certain inventory of active CaO is expected to capture the CO2 in the flue gas. The circulation rate and the inventory of CaO determine the CO2 capture efficiency. Other parameters such as the average carrying capacity of the CaO circulating particles, the temperature, and the gas velocity must be taken into account. To investigate the effect of these variables on CO2 capture efficiency, we used a 6.5-m-tall CFB carbonator connected to a twin CFB calciner. Many stationary operating states were achieved using different operating conditions. The trends of CO2 capture efficiency measured are compared with those from a simple reactor model. This information may contribute to the future scaling up of the technology. © 2010 American Institute of Chemical Engineers AIChE J, 57: 000–000, 2011
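As a point of reference, simple carbonator models of this kind are usually anchored on a sorbent-side balance that bounds the capture efficiency (generic form; the notation is not taken from the paper):

$$ E_{\mathrm{carb}} \le \min\!\left( \frac{F_{\mathrm{Ca}}\,X_{\mathrm{ave}}}{F_{\mathrm{CO_2}}},\; E_{\mathrm{eq}}(T) \right) $$

where $F_{\mathrm{Ca}}$ is the molar circulation rate of CaO, $X_{\mathrm{ave}}$ the average CO2 carrying capacity of the circulating particles, $F_{\mathrm{CO_2}}$ the molar flow of CO2 in the flue gas, and $E_{\mathrm{eq}}(T)$ the equilibrium-limited efficiency at the carbonator temperature.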

152 citations


Journal ArticleDOI
TL;DR: In this paper, a variety of metal-organic frameworks (MOFs) with varying linkers, topologies, pore sizes, and metal atoms were screened for xenon/krypton separation using GCMC simulations.
Abstract: A variety of metal-organic frameworks (MOFs) with varying linkers, topologies, pore sizes, and metal atoms were screened for xenon/krypton separation using grand canonical Monte Carlo (GCMC) simulations. The results indicate that small pores with strong adsorption sites are desired to preferentially adsorb xenon over krypton in multicomponent adsorption. However, if the pore size is too small, it can significantly limit overall gas uptake, which is undesirable. Based on our simulations, MOF-505 was identified as a promising material due to its increased xenon selectivity over a wider pressure range compared with other MOFs investigated. © 2010 American Institute of Chemical Engineers AIChE J, 2011
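For reference, the selectivity typically reported from such mixture GCMC screens is the standard adsorption selectivity (generic definition, not quoted from the paper):

$$ S_{\mathrm{Xe/Kr}} = \frac{x_{\mathrm{Xe}}/x_{\mathrm{Kr}}}{y_{\mathrm{Xe}}/y_{\mathrm{Kr}}} $$

where $x_i$ are the adsorbed-phase mole fractions from the simulation and $y_i$ the bulk gas-phase mole fractions.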

152 citations



Journal ArticleDOI
TL;DR: The results show that the traditional recursive partial least squares algorithm struggles to deliver accurate predictions, and by exploiting the two-level adaptation scheme, the proposed algorithm delivers more accurate results.
Abstract: This work presents an algorithm for the development of adaptive soft sensors. The method is based on the local learning framework, where locally valid models are built and maintained. In this framework, it is possible to model nonlinear relationships between the input and output data by means of a combination of linear models. The method allows adaptation at two levels: (i) recursive adaptation of the local models and (ii) adaptation of the combination weights. The algorithm is evaluated on a dataset describing a polymerization reactor, where the target value is a simulated catalyst activity in the reactor. The results show that the traditional recursive partial least squares algorithm struggles to deliver accurate predictions. In contrast, by exploiting the two-level adaptation scheme, the proposed algorithm delivers more accurate results. © 2010 American Institute of Chemical Engineers AIChE J, 57, 2011
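A bare-bones sketch of the two-level adaptation idea follows (an illustrative reimplementation under simplifying assumptions, not the authors' algorithm; the class name and the inverse-error weighting rule are mine, and the level-1 recursive update of each local model is omitted).

```python
# Hedged sketch: predictions from several local linear models are combined
# with weights that adapt to each model's recent squared error.
import numpy as np

class LocalModelEnsemble:
    def __init__(self, coefs, intercepts, forgetting=0.9):
        self.coefs = np.asarray(coefs)          # one row of coefficients per local model
        self.intercepts = np.asarray(intercepts)
        self.err = np.ones(len(self.coefs))     # running squared-error estimate
        self.lam = forgetting

    def predict(self, x):
        preds = self.coefs @ x + self.intercepts
        w = 1.0 / (self.err + 1e-8)             # level-2: weight by inverse recent error
        w /= w.sum()
        return float(w @ preds), preds

    def update(self, x, y):
        _, preds = self.predict(x)
        # level-2 adaptation: refresh each local model's error estimate
        self.err = self.lam * self.err + (1 - self.lam) * (preds - y) ** 2
        # (level-1 recursive update of each local model, e.g. RPLS, omitted)

ens = LocalModelEnsemble(coefs=[[1.0, 0.2], [0.4, 0.9]], intercepts=[0.0, 0.5])
x, y = np.array([1.0, 2.0]), 1.6
print(ens.predict(x)[0])
ens.update(x, y)
```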

150 citations


Journal ArticleDOI
TL;DR: In this article, a mathematical model has been developed to predict the increase in both the deposit thickness and the wax fraction of the deposit using a fundamental analysis of the heat and mass transfer for laminar and turbulent flow conditions.
Abstract: Wax deposition in subsea pipelines is a significant economic issue in the petroleum industry. A mathematical model has been developed to predict the increase in both the deposit thickness and the wax fraction of the deposit using a fundamental analysis of the heat and mass transfer for laminar and turbulent flow conditions. It was found that the precipitation of wax in the oil is a competing phenomenon with deposition. Two existing approaches consider either no precipitation (the independent heat and mass transfer model) or instantaneous precipitation (the solubility model) and result in either an overprediction or an underprediction of deposit thickness. By accounting for the kinetics of wax precipitation in the oil (the kinetic model), accurate predictions for wax deposition for both lab-scale and pilot-scale flow-loop experiments with three different oils were achieved. Furthermore, this kinetic model for wax precipitation in the oil was used to compare field-scale deposition predictions for different oils. © 2011 American Institute of Chemical Engineers AIChE J, 57: 2955–2964, 2011

146 citations


Journal ArticleDOI
TL;DR: In this article, the conceptual design of the bioethanol process from switchgrass via gasification is addressed, and a superstructure is postulated for optimizing energy use that embeds direct or indirect gasification, followed by steam reforming or partial oxidation.
Abstract: In this article, we address the conceptual design of the bioethanol process from switchgrass via gasification. A superstructure is postulated for optimizing energy use that embeds direct or indirect gasification, followed by steam reforming or partial oxidation. Next, the gas composition is adjusted with membrane-PSA or water gas shift. Membrane separation, absorption with ethanol-amines, and PSA are considered for the removal of sour gases. Finally, two synthetic paths are considered: the high-alcohols catalytic process with two possible distillation sequences, and syngas fermentation with distillation, corn grits, molecular sieves, and pervaporation as alternative dehydration processes. The optimization of the superstructure is formulated as a mixed-integer nonlinear programming problem using short-cut models, and solved through a special decomposition scheme that is followed by heat integration. The optimal process consists of direct gasification followed by steam reforming, removal of excess hydrogen, and catalytic synthesis, yielding a potential operating cost of $0.41/gal. © 2011 American Institute of Chemical Engineers AIChE J, 2011

144 citations


Journal ArticleDOI
TL;DR: In this article, a model-based computer-aided methodology for design and verification of a class of chemical-based products (liquid formulations) is presented, where stage-1 generates a list of feasible product candidates and/or verifies a specified set through a sequence of predefined activities (work-flow).
Abstract: In chemical product design one tries to find a product which exhibits the desired (target) behavior specified a priori. The identity of the ingredients of chemical-based products may be unknown at the start, but some of their desired qualities and functions are usually known. A systematic model-based computer-aided methodology for design and verification of a class of chemical-based products (liquid formulations) is presented. This methodology is part of an integrated three-stage approach for design/verification of liquid formulations where stage-1 generates a list of feasible product candidates and/or verifies a specified set through a sequence of predefined activities (work-flow). Stage-2 and stage-3 (not presented here) deal with the planning and execution of experiments for product validation. Four case studies have been developed to test the methodology. The computer-aided design (stage-1) of a paint formulation and an insect repellent lotion is presented. © 2011 American Institute of Chemical Engineers AIChE J, 2011

138 citations


Journal ArticleDOI
TL;DR: The mass transfer area of nine structured packings was measured in a 0.427 m ID column via absorption of CO2 from air into 0.1 kmol/m3 NaOH as discussed by the authors.
Abstract: The mass-transfer area of nine structured packings was measured in a 0.427 m ID column via absorption of CO2 from air into 0.1 kmol/m3 NaOH. The mass-transfer area was most strongly related to the specific area (125–500 m2/m3) and liquid load (2.5–75 m3/m2·h). Surface tension (30–72 mN/m) had a weaker but significant effect. Gas velocity (0.6–2.3 m/s), liquid viscosity (1–15 mPa·s), and flow channel configuration had essentially no impact on the mass-transfer area. Surface texture (embossing) increased the effective area by 10% at most. The ratio of mass-transfer area to specific area (ae/ap) was correlated within the limits of ±13% for the entire experimental database. © 2010 American Institute of Chemical Engineers AIChE J, 2010
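For context, effective areas in CO2–NaOH absorption experiments of this type are commonly backed out from the gas-side material balance assuming a pseudo-first-order fast reaction with hydroxide (a standard relation stated generically here, not quoted from the paper):

$$ a_e = \frac{u_G}{R\,T\,Z\,k_g'}\,\ln\!\frac{y_{\mathrm{CO_2,in}}}{y_{\mathrm{CO_2,out}}}, \qquad k_g' = \frac{\sqrt{k_{\mathrm{OH^-}}[\mathrm{OH^-}]\,D_{\mathrm{CO_2}}}}{H_{\mathrm{CO_2}}} $$

with $u_G$ the superficial gas velocity, $Z$ the packed height, and $H_{\mathrm{CO_2}}$ the Henry's constant of CO2 in the solution.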

Journal ArticleDOI
TL;DR: By monitoring batch statistics, the proposed SPA framework not only eliminates all data preprocessing steps but also provides superior fault detection performance; the article also examines the fundamental reasons for the improved performance of SPA.
Abstract: In the semiconductor industry, process monitoring has been recognized as a critical component of the manufacturing system. Multivariate statistical process monitoring (SPM) techniques, such as multiway principal component analysis and multiway partial least squares, have been extended to monitor semiconductor processes. These SPM methods require extensive, often off-line data preprocessing such as data unfolding, trajectory mean shift, and trajectory alignment. This requirement is probably not an issue for the traditional chemical batch processes, but it poses a significant challenge for semiconductor batch processes. This is because data preprocessing makes model building and maintenance extremely labor intensive due to the large number of models in a typical semiconductor fab. In addition, semiconductor process data often show more severe nonnormality compared to those of the traditional chemical process under closed-loop control, which results in suboptimal performance in many applications. To address these challenges, several pattern classification based monitoring (PCM) methods have been developed recently, but some limitations remain and trajectory alignment is still required. In this article, we analyze the fundamental reasons for the limitations of the SPM and PCM methods when applied to monitor semiconductor processes. In addition, we propose a new statistics pattern analysis (SPA) framework to address the challenges associated with semiconductor processes. By monitoring batch statistics, the proposed SPA framework not only eliminates all data preprocessing steps but also provides superior fault detection performance. Finally, we use an industrial example to demonstrate the advantages of the proposed SPA framework, and examine the fundamental reasons for the improved performance of SPA. © 2010 American Institute of Chemical Engineers AIChE J, 2011
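A compressed sketch of the batch-statistics idea follows (illustrative only; the actual SPA framework uses a richer set of statistics and a different monitoring index, the data here are synthetic, and NumPy/SciPy are assumed available).

```python
# Hedged sketch: summarize each batch by simple statistics of its variable
# trajectories, then monitor those statistics with PCA instead of the raw,
# unaligned trajectories (so batches of different length need no alignment).
import numpy as np
from scipy.stats import skew, kurtosis

def batch_statistics(batch):            # batch: (time, variables) array
    return np.concatenate([batch.mean(0), batch.std(0),
                           skew(batch, axis=0), kurtosis(batch, axis=0)])

rng = np.random.default_rng(1)
train = [rng.normal(size=(rng.integers(80, 120), 4)) for _ in range(30)]
S = np.array([batch_statistics(b) for b in train])   # one statistics row per batch

mu, sd = S.mean(0), S.std(0) + 1e-12
Z = (S - mu) / sd
_, sing, Vt = np.linalg.svd(Z, full_matrices=False)
P = Vt[:3].T                                          # retain 3 components

def hotelling_T2(batch):
    z = (batch_statistics(batch) - mu) / sd
    t = z @ P / (sing[:3] / np.sqrt(len(S) - 1))      # scaled scores
    return float(t @ t)

print(hotelling_T2(rng.normal(size=(95, 4))))
```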

Journal ArticleDOI
TL;DR: The TEDMDS-derived hybrid silica membranes showed high H2 permeance (0.3–1.1 × 10−6 mol m−2 s−1 Pa−1) with low H2/N2 (∼10) and high H2/SF6 (∼1200) perm-selectivity, confirming successful tuning of micropore sizes larger than TEOS-derived silica membranes.
Abstract: Organic/inorganic hybrid silica membranes were prepared from 1,1,3,3-tetraethoxy-1,3-dimethyl disiloxane (TEDMDS) by the sol-gel technique with firing at 300–550°C in N2. TEDMDS-derived silica membranes showed high H2 permeance (0.3–1.1 × 10−6 mol m−2 s−1 Pa−1) with low H2/N2 (∼10) and high H2/SF6 (∼1200) perm-selectivity, confirming successful tuning of micropore sizes larger than TEOS-derived silica membranes. TEDMDS-derived silica membranes prepared at 550°C in N2 showed increased gas permeances as well as pore sizes after air exposure at 450°C. TEDMDS had an advantage in tuning pore size by the “template” and “spacer” techniques, due to the pyrolysis of methyl groups in air and Si–O–Si bonding, respectively. For pore size evaluation of microporous membranes, normalized Knudsen-based permeance, which was proposed based on the gas translation model and verified with permeance of zeolite membranes, reveals that pore sizes of TEDMDS membranes were successfully tuned in the range of 0.6–1.0 nm. © 2011 American Institute of Chemical Engineers AIChE J, 2011
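The normalized Knudsen-based permeance mentioned above is commonly defined by normalizing the measured permeance of gas $i$ with the value expected from Knudsen scaling of a small reference gas such as He (generic form; notation mine):

$$ f_{\mathrm{NKP},i} = \frac{P_i}{P_{\mathrm{He}}\sqrt{M_{\mathrm{He}}/M_i}} $$

so $f_{\mathrm{NKP}} \approx 1$ indicates Knudsen-like transport, while values well below 1 indicate pores approaching the size of molecule $i$.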

Journal ArticleDOI
TL;DR: In this paper, a localized Fisher discriminant analysis (LFDA) based process monitoring approach is proposed to monitor the processes containing multiple types of steady-state or dynamic faults, which can not only separate the normal and faulty data with maximized margin but also preserve the multimodality within the multiple faulty clusters.
Abstract: Complex chemical processes are often corrupted with various types of faults, and fault-free training data may not be available to build the normal operation model. Therefore, the supervised monitoring methods such as principal component analysis (PCA), partial least squares (PLS), and independent component analysis (ICA) are not applicable in such situations. On the other hand, the traditional unsupervised algorithms like Fisher discriminant analysis (FDA) may not take into account the multimodality within the abnormal data, and thus their capability of fault detection and classification can be significantly degraded. In this study, a novel localized Fisher discriminant analysis (LFDA) based process monitoring approach is proposed to monitor processes containing multiple types of steady-state or dynamic faults. Stationarity testing and a Gaussian mixture model are integrated with LFDA to remove any nonstationarity and isolate the normal and multiple faulty clusters during the preprocessing steps. Then the localized between-class and within-class scatter matrices are computed for the generalized eigenvalue decomposition to extract the localized Fisher discriminant directions that can not only separate the normal and faulty data with maximized margin but also preserve the multimodality within the multiple faulty clusters. In this way, different types of process faults can be well classified using the discriminant function index. The proposed LFDA monitoring approach is applied to the Tennessee Eastman process and compared with the traditional FDA method. The monitoring results in three different test scenarios demonstrate the superiority of the LFDA approach in detecting and classifying multiple types of faults with high accuracy and sensitivity. © 2010 American Institute of Chemical Engineers AIChE J, 2011
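For orientation, FDA-type discriminant directions (localized or not) are obtained from a generalized eigenvalue problem on between-class and within-class scatter matrices; in the localized variant the scatters are built with locality-preserving weights (schematic form, not the paper's exact definitions):

$$ S_b^{(\mathrm{local})}\,w = \lambda\,S_w^{(\mathrm{local})}\,w $$

with the leading eigenvectors $w$ used as the monitoring/classification directions.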

Journal ArticleDOI
TL;DR: In this paper, the effect of solid boundaries on the closure relationships for filtered two-fluid models for riser flows was probed by filtering the results obtained through highly resolved kinetic theory-based two-fluid model simulations.
Abstract: The effect of solid boundaries on the closure relationships for filtered two-fluid models for riser flows was probed by filtering the results obtained through highly resolved kinetic theory-based two-fluid model simulations. The closures for the filtered drag coefficient and particle phase stress depended not only on particle volume fraction and the filter length but also on the distance from the wall. The wall corrections to the filtered closures are nearly independent of the filter length and particle volume fraction. Simulations of filtered model equations yielded grid-length-independent solutions when the grid length is about half the filter length or smaller. Coarse statistical results obtained by solving the filtered models with different filter lengths were the same and corresponded to those from highly resolved simulations of the kinetic theory model, which was used to construct the filtered models, thus verifying the fidelity of the filtered modeling approach. © 2010 American Institute of Chemical Engineers AIChE J, 57: 2691–2707, 2011

Journal ArticleDOI
TL;DR: In this paper, a cost-efficient desalination technology was developed by integrating a countercurrent cascade of the novel cross-flow direct contact membrane distillation (DCMD) devices and solid polymeric hollow fiber-based heat exchange devices. Simulations have been carried out for the whole DCMD cascade to project values of gained output ratio (GOR) as a function of the number of DCMD stages as well as other important factors in the cascade, namely the temperatures and flow rates of the incoming hot brine and cold distillate streams.
Abstract: Cost-efficient desalination technology was successfully developed by integrating a countercurrent cascade of the novel cross-flow direct contact membrane distillation (DCMD) devices and solid polymeric hollow fiber-based heat exchange devices. Simulations have been carried out for the whole DCMD cascade to project values of gained output ratio (GOR) as a function of the number of DCMD stages as well as other important factors in the cascade, namely the temperatures and flow rates of the incoming hot brine and cold distillate streams. The simulation results were verified with experimental results from cascades consisting of two to eight stages. The numerical simulator predicts a GOR of 12 when unequal flow rates of the incoming brine and distillate streams are used. An artificial sea water was concentrated eight times successfully when a countercurrent cascade composed of four stages of the DCMD modules and a heat exchanger was used during the DCMD process. © 2010 American Institute of Chemical Engineers AIChE J, 2011
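Gained output ratio is the usual thermal-efficiency metric in membrane distillation; a generic definition (not quoted from the paper) is

$$ \mathrm{GOR} = \frac{\dot m_d\,\Delta H_{\mathrm{vap}}}{Q_{\mathrm{in}}} $$

i.e., the latent heat of the distillate produced per unit of external heat supplied, so a GOR above 1 implies internal heat recovery across the stages of the cascade.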

Journal ArticleDOI
TL;DR: In this article, a stochastic pooling problem optimization formulation is proposed to address product quality and uncertainty in natural gas production networks, where the qualities of the flows in the system are described with a pooling model and the uncertainty of the system is handled with a multiscenario, two-stage stochastic recourse approach; in addition, multi-objective problems are handled via a hierarchical optimization approach.
Abstract: Product quality and uncertainty are two important issues in the design and operation of natural gas production networks. This paper presents a stochastic pooling problem optimization formulation to address these two issues, where the qualities of the flows in the system are described with a pooling model and the uncertainty in the system is handled with a multiscenario, two-stage stochastic recourse approach. In addition, multi-objective problems are handled via a hierarchical optimization approach. The advantages of the proposed formulation are demonstrated with case studies involving an example system based on Haverly’s pooling problem and a real industrial system. The stochastic pooling problem is a potentially large-scale nonconvex mixed-integer nonlinear program (MINLP), and a rigorous decomposition method developed recently is used to solve this problem. A computational study demonstrates the advantage of the decomposition method over a state-of-the-art branch-and-reduce global optimizer, BARON. © 2010 American Institute of Chemical Engineers AIChE J, 00: 000–000, 2010
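The "pooling" part of such formulations refers to the bilinear quality balances around each pool; in generic form (notation is illustrative, not the paper's):

$$ \sum_{i} q_{ik}\, f_{i\ell} \;=\; p_{\ell k} \sum_{j} f_{\ell j} \qquad \forall\, \ell, k $$

where $f_{i\ell}$ are flows from sources $i$ into pool $\ell$, $f_{\ell j}$ flows from the pool to demands $j$, $q_{ik}$ the known source qualities, and $p_{\ell k}$ the unknown pool quality for component $k$; the products $p_{\ell k} f_{\ell j}$ are what make the problem nonconvex.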

Journal ArticleDOI
TL;DR: In this article, the authors address the mid-term planning of chemical complexes with integration of stochastic inventory management under supply and demand uncertainty by using the guaranteed service approach to model time delays in the flows inside the network, and develop an equivalent deterministic optimization model to minimize the production, feedstock purchase, cycle inventory, and safety stock costs.
Abstract: We address in this article the mid-term planning of chemical complexes with integration of stochastic inventory management under supply and demand uncertainty. By using the guaranteed service approach to model time delays in the flows inside the network, we capture the stochastic nature of the supply and demand variations, and develop an equivalent deterministic optimization model to minimize the production, feedstock purchase, cycle inventory, and safety stock costs. The model determines the optimal purchases of the feedstocks, production levels of the processes, sales of final products, and safety stock levels of all the chemicals. We formulate the model as a mixed-integer nonlinear program with a nonconvex objective function and nonconvex constraints. To solve the global optimization problem with modest computational times, we exploit some model properties and develop a tailored branch-and-refine algorithm based on successive piecewise linear approximation. Five industrial-scale examples with up to 38 processes and 28 chemicals are presented to illustrate the application of the model and the performance of the proposed algorithm. © 2010 American Institute of Chemical Engineers AIChE J, 2011
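In the guaranteed-service approach, the safety stock held at a node is typically expressed in terms of its net replenishment lead time (standard form, not the paper's exact equation):

$$ SS = \lambda\,\sigma\,\sqrt{T^{\mathrm{in}} + T^{\mathrm{proc}} - T^{\mathrm{out}}} $$

where $\sigma$ is the standard deviation of demand per period, $\lambda$ a safety factor set by the target service level, $T^{\mathrm{in}}$ the service time quoted by upstream nodes, $T^{\mathrm{proc}}$ the processing time, and $T^{\mathrm{out}}$ the service time promised downstream; square-root terms of this kind are a typical source of the nonconvexity that the piecewise-linear branch-and-refine scheme addresses.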

Journal ArticleDOI
TL;DR: In this paper, a specific reaction pathway is suggested to describe the metallic phase formation during solution combustion synthesis (SCS), and a methodology for SCS of pure metal and metal alloy nanoparticles can be inferred from the results presented.
Abstract: Nanopowders of pure nickel were directly synthesized for the first time by the conventional solution combustion synthesis (SCS) method. In this article, a specific reaction pathway is suggested to describe the metallic phase formation during SCS. It is proposed that the exothermic reaction between NH3 and HNO3 species formed during the decomposition of glycine and nickel nitrate acts as the source of energy required to achieve the self-sustained reaction regime. A thermodynamic analysis of the combustion synthesis reaction indicates that increasing glycine concentration leads to establishing a hydrogen-rich reducing environment in the combustion wave that in turn results in the formation of pure metals and metal alloys. TGA of reaction systems and XRD analysis of products in the quenched combustion wave show that the formation of oxide phases occurs in the reaction front, followed by gradual reduction of oxide to pure metallic phases in the postcombustion zone. A methodology for SCS of pure metal and metal alloy nanoparticles can be inferred from the results presented. © 2010 American Institute of Chemical Engineers AIChE J, 2011

Journal ArticleDOI
TL;DR: In this paper, an optimization formulation for the synthesis of heat exchanger networks where pressure levels of process streams can be adjusted to improve heat integration is presented, which allows for the interconversion of work, temperature, and pressure-based exergy and leads to reduced usage of expensive cold utility.
Abstract: This article presents an optimization formulation for the synthesis of heat exchanger networks where pressure levels of process streams can be adjusted to improve heat integration. Especially important at subambient conditions, this allows for the interconversion of work, temperature, and pressure-based exergy and leads to reduced usage of expensive cold utility. Furthermore, stream temperatures and pressures are tuned for close tracking of the composite curves, yielding increased exergy efficiency. The formulation is showcased on a simple example and applied to a case study drawn from the design of an offshore natural gas liquefaction process. Aided by the optimization, it is demonstrated how the process can extract exergy from liquid nitrogen and carbon dioxide streams to support the liquefaction of a natural gas stream without additional utilities. This process is part of a liquefied energy chain, which supplies natural gas for power generation while facilitating carbon dioxide sequestration. © 2010 American Institute of Chemical Engineers AIChE J, 2011
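The work/temperature/pressure "exergy" bookkeeping referred to above rests on the standard flow-exergy expression (general thermodynamics, not specific to this paper, neglecting kinetic and potential terms):

$$ e = (h - h_0) - T_0\,(s - s_0) $$

so compressing, expanding, heating, or cooling a stream shifts exergy between its pressure-based and temperature-based contributions relative to the ambient state $(T_0, p_0)$.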

Journal ArticleDOI
TL;DR: In this article, the authors investigated the activation energy of H2 permeation and the selectivity of gaseous molecules, focusing particularly on hydrogen and water vapor, and found that H2 was always more permeable than water.
Abstract: Silica and cobalt-doped silica membranes that showed a high permeance of 1.8 × 10−7 mol m−2 s−1 Pa−1 and a H2/N2 permeance ratio of ∼730, with excellent hydrothermal stability under steam pressure of 300 kPa, were successfully prepared. The permeation mechanism of gas molecules, focusing particularly on hydrogen and water vapor, was investigated in the 300–500°C range and is discussed based on the activation energy of permeation and the selectivity of gaseous molecules. The activation energy of H2 permeation correlated well with the permeance ratio of He/H2 for porous silica membranes prepared by sol–gel processing, chemical vapor deposition (CVD), and vitreous glasses, indicating that similar amorphous silica network structures were formed. The permeance ratios of H2/H2O were found to range from 5 to 40, that is, hydrogen (kinetic diameter: 0.289 nm) was always more permeable than water (0.265 nm). © 2010 American Institute of Chemical Engineers AIChE J, 2011
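The activation-energy analysis mentioned here usually starts from an Arrhenius-type expression for activated permeation through a microporous silica network (generic form, consistent with but simpler than the gas-translation model):

$$ P_i = P_{0,i}\,\exp\!\left(-\frac{E_{a,i}}{R\,T}\right) $$

with $E_{a,i}$ extracted from the slope of $\ln P_i$ versus $1/T$ over the 300–500°C range.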

Journal ArticleDOI
TL;DR: In this article, a three-tier approach comprising partial charges, interaction energies, and sigma profile generation using the conductor-like screening model for real solvents (COSMO-RS) was chosen to study the simultaneous interaction of thiophene and pyridine with different ionic liquids.
Abstract: The simultaneous interaction of thiophene and pyridine with different ionic liquids: 1-butyl-1-methylpyrrolidinium tetrafluoroborate ([BPYRO][BF4]), 1-butyl-1-methylpyrrolidinium hexafluorophosphate ([BPYRO][PF6]), 1-butyl-4-methylpyridinium tetrafluoroborate ([BPY][BF4]), 1-butyl-4-methylpyridinium hexafluorophosphate ([BPY][PF6]), and 1-benzyl-3-methylimidazolium tetrafluoroborate ([BeMIM][BF4]) was investigated using quantum chemical calculations. A three-tier approach comprising partial charges, interaction energies, and sigma profile generation using the conductor-like screening model for real solvents (COSMO-RS) was chosen to study the systems. A quantitative attempt based on the CH–π interaction in ionic liquid–thiophene–pyridine complexes gave the interaction energies of the ILs in the order: [BPY][BF4] > [BPYRO][PF6] > [BeMIM][BF4] > [BPY][PF6] > [BPYRO][BF4]. An inverse relation was observed between the activity coefficient at infinite dilution predicted via the COSMO-RS-based model and the interaction energies. The dominance of the CH–π interaction was evident from the sigma profiles of the ionic liquids together with thiophene and pyridine. © 2010 American Institute of Chemical Engineers AIChE J, 2011

Journal ArticleDOI
TL;DR: In this article, small batch autoclaves have been used to study the competition between hydrotreating and polymerization reactions in pyrolysis oil, and the results of these experiments (carried out at 300°C) showed that in the first 5 min of HDO, gas-liquid mass transfer appears to limit the overall rate of hydrotreating reactions, leading to undesired polymerization reactions and product deterioration.
Abstract: Hydrodeoxygenation (HDO) of pyrolysis oil is an upgrading step that allows further coprocessing of the oil product in (laboratory-scale) standard refinery units to produce advanced biofuels. During HDO, desired hydrotreating reactions are in competition with polymerization reactions that can lead to unwanted product properties. To suppress this polymerization, a low-temperature HDO step, referred to as stabilization, is typically used. Small batch autoclaves have been used to study, at near-isothermal conditions, the competition between hydrotreating and polymerization reactions. Although fast polymerization reactions take place above 200°C, hydrogen consumption was already observed for temperatures as low as 80°C. Hydrogen consumption increased with temperature and reaction time; however, when the end temperature exceeded 250°C, hydrogen consumption reached a plateau. This was thought to be caused by the occurrence of fast polymerization reactions and the refractoriness of the products toward further hydrotreating reactions. The effect of gas-liquid mass transfer was evaluated by using different stirring speeds. The results of these experiments (carried out at 300°C) showed that in the first 5 min of HDO, gas-liquid mass transfer appears to limit the overall rate of hydrotreating reactions, leading to undesired polymerization reactions and product deterioration. Afterward, intraparticle mass transfer/kinetics seems to govern the hydrogen consumption rate. Estimates of the degree of utilization (effectiveness factor) for industrially sized catalysts show that it is expected to be much lower than 1, at least in the early stage of HDO (first 30 min). Catalyst particle size should, thus, be carefully considered when designing industrial processes, not only to minimize reactor volume but also to improve the ratio of hydrotreating to polymerization reactions. © 2011 American Institute of Chemical Engineers AIChE J, 57: 3160-3170, 2011
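The "degree of utilization" estimate is the classical effectiveness factor; for a first-order reaction in a spherical catalyst particle it takes the textbook form (used here only to illustrate why large industrial particles give utilization well below 1; the parameters are generic, not the paper's):

$$ \phi = \frac{d_p}{6}\sqrt{\frac{k}{D_{\mathrm{eff}}}}, \qquad \eta = \frac{1}{\phi}\left(\frac{1}{\tanh 3\phi} - \frac{1}{3\phi}\right) $$

so $\eta$ falls roughly as $1/\phi$ once the particle diameter $d_p$ or the intrinsic rate constant $k$ becomes large relative to the effective diffusivity $D_{\mathrm{eff}}$.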


Journal ArticleDOI
TL;DR: In this article, a series of high-performance carbonaceous mesoporous materials, activated carbon beads (ACBs), have been prepared; they are good candidates not only for CO2 and CH4 storage but also for the capture of carbon dioxide in pre- and postcombustion processes.
Abstract: A series of high-performance carbonaceous mesoporous materials, activated carbon beads (ACBs), have been prepared in this work. Among the samples, ACB-5 possesses a BET specific surface area of 3537 m2 g−1 and ACB-2 has a pore volume of 3.18 cm3 g−1. Experimental measurements were carried out on an intelligent gravimetric analyzer (IGA-003, Hiden). A carbon dioxide adsorption capacity of 909 mg g−1 has been achieved in ACB-5 at 298 K and 18 bar, which is superior to existing carbonaceous porous materials and comparable to metal-organic framework (MOF)-177 (1232 mg g−1 at 298 K and 20 bar) and covalent-organic framework (COF)-102 (1050 mg g−1 at 298 K and 20 bar) reported in the literature. Moreover, methane uptake reaches 15.23 wt % in ACB-5 at 298 K and 18 bar, which is better than MOF-5. To predict the performance of samples ACB-2 and ACB-5 at high pressures, modeling of the samples and grand canonical Monte Carlo simulations have been conducted, as presented in our previous work. The adsorption isotherms of CO2/N2 and CO2/CH4 in samples ACB-2 and ACB-5 have been measured at 298 and 348 K and at different compositions, corresponding to the pre- and postcombustion conditions for CO2 capture. Ideal adsorbed solution theory (IAST) based on the dual-site Langmuir–Freundlich (DSLF) model was also used to estimate the selectivity of CO2 over N2 and CH4. The selectivities of the ACBs for CO2/CH4 are in the range of 2–2.5, while they remain in the range of 6.0–8.0 for CO2/N2 at T = 298 K. In summary, this work presents a new type of adsorbent, ACBs, which are good candidates not only for CO2 and CH4 storage but also for the capture of carbon dioxide in pre- and postcombustion processes. © 2011 American Institute of Chemical Engineers AIChE J, 2011
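For reference, the dual-site Langmuir–Freundlich isotherm used as IAST input has the generic form (parameters fitted to the measured pure-component data; notation mine):

$$ q(p) = \frac{q_{1}\,b_{1}\,p^{n_{1}}}{1 + b_{1}\,p^{n_{1}}} + \frac{q_{2}\,b_{2}\,p^{n_{2}}}{1 + b_{2}\,p^{n_{2}}} $$

and the reported selectivity of CO2 over component $j$ is the usual adsorption selectivity $S = (x_{\mathrm{CO_2}}/x_{j})/(y_{\mathrm{CO_2}}/y_{j})$ evaluated from the IAST mixture loadings.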

Journal ArticleDOI
TL;DR: In this paper, a Ca looping system that uses CaO as regenerable sorbent to capture CO2 from the flue gases generated in power plants is analyzed, where the CO2 is captured by CaO in a CFB carbonator while coal oxycombustion provides the energy required to regenerate the sorbent.
Abstract: This work analyses a Ca looping system that uses CaO as a regenerable sorbent to capture CO2 from the flue gases generated in power plants. The CO2 is captured by CaO in a CFB carbonator while coal oxycombustion provides the energy required to regenerate the sorbent. Part of the energy introduced into the calciner can be transferred to a new supercritical steam cycle to generate additional power. Several case studies have been integrated with this steam cycle. Efficiency penalties, mainly associated with the energy consumption of the ASU, CO2 compressor, and auxiliaries, can be as low as 7.5 percentage points of net efficiency when working with low-CaCO3 make-up flows and integrating the Ca looping with a cement plant that makes use of the spent sorbent. The penalties increase to 8.3 percentage points when this possibility is not available. Operating conditions aiming at minimum calciner size result in slightly higher efficiency penalties. © 2010 American Institute of Chemical Engineers AIChE J, 2011

Journal ArticleDOI
TL;DR: In this paper, the reaction pathways for solution combustion synthesis of pure copper and copper-nickel alloy nanopowders are investigated, and it is confirmed that the necessary condition for SCS of metals in a metal-nitrate oxidizer-glycine system is the property of the oxidizer to decompose with formation of HNO3 species.
Abstract: Based on a general methodology for the preparation of metal–nanopowders by solution combustion synthesis (SCS), the reaction pathways for SCS of pure copper and copper–nickel alloy nanopowders are investigated. It is confirmed that the necessary condition for SCS of metals in a metal-nitrate oxidizer–glycine system is the property of the oxidizer to decompose with formation of HNO3 species. In this case, for compositions with excess of glycine, a hydrogen reducing atmosphere develops in the reaction front, leading to the formation of reduced metals. The proposed reaction pathways are supported by X-ray diffraction analysis of the quenched samples and DTA–TGA studies of the Cu(NO3)2·6H2O–glycine and Ni(NO3)2·6H2O/Cu(NO3)2·6H2O–glycine systems. The results show that the formation of Cu2O and CuO oxide phases takes place at early stages in the reaction front followed by their reduction to pure Cu phase in the postcombustion zones. However, in a Cu–Ni alloy, a fraction of intermetallic Cu–Ni phase appeared directly in the combustion front, whereas the rest of the oxygen-free alloy formed through reduction of oxide phases. © 2011 American Institute of Chemical Engineers AIChE J, 2011

Journal ArticleDOI
Yun Yu, Hongwei Wu
TL;DR: Ball milling leads to a considerable reduction in cellulose particle size and crystallinity, as well as a significant increase in the specific reactivity of cellulose during hydrolysis in hot-compressed water (HCW) as discussed by the authors.
Abstract: Ball milling leads to a considerable reduction in cellulose particle size and crystallinity, as well as a significant increase in the specific reactivity of cellulose during hydrolysis in hot-compressed water (HCW). Cryogenic ball milling for 2 min also results in a significant size reduction but little change in cellulose crystallinity and specific reactivity during hydrolysis. Therefore, crystallinity is the dominant factor in determining the hydrolysis reactivity of cellulose in HCW, while particle size plays only a minor role. Ball milling also significantly influences the distribution of glucose oligomers in the primary liquid products of cellulose hydrolysis. It increases the selectivities of glucose oligomers at low conversions. At high conversions, the reduction in chain length plays an important role in glucose oligomer formation as cellulose samples become more crystalline. Extensive ball milling completely converts the crystalline cellulose into amorphous cellulose, substantially enhancing the formation of glucose oligomers with high degrees of polymerization. © 2010 American Institute of Chemical Engineers AIChE J, 2011

Journal ArticleDOI
TL;DR: In this article, a semisupervised method is proposed for soft sensor modeling that can successfully incorporate unlabeled data information; to determine the effective dimensionality of the latent space, the Bayesian regularization method is introduced into the semisupervised model structure.
Abstract: Most traditional soft sensors are built upon labeled datasets that contain equal numbers of input and output data samples. However, the output variables that correspond to quality variables and other important controlled variables are always difficult to obtain in chemical processes. Therefore, we may only obtain the output data for a small portion of the whole dataset and have many more input data samples. In this article, a semisupervised method is proposed for soft sensor modeling, which can successfully incorporate the unlabeled data information. To determine the effective dimensionality of the latent space, the Bayesian regularization method is introduced into the semisupervised model structure. Two industrial application case studies are provided to evaluate the feasibility and efficiency of the newly developed probabilistic soft sensor. © 2010 American Institute of Chemical Engineers AIChE J, 2011

Journal ArticleDOI
Kai Wang, Yangcheng Lu, Jianhong Xu, J. Tan, Guangsheng Luo
TL;DR: In this article, the effects of the microchannel structure, operating conditions, and physical properties on the dispersion rules were carefully investigated, and it was found that the extended capillary could greatly affect the dispersion rules, which was favorable for reducing the dispersed size.
Abstract: This work focuses on the dispersion of micromonodispersed droplets and bubbles in capillary-embedded T-junction microfluidic devices. The effects of the microchannel structure, operating conditions, and physical properties on the dispersion rules were carefully investigated. It was found that the extended capillary could greatly affect the dispersion rules, which was favorable for reducing the dispersed size. The dispersed size was mainly dominated by the Ca number, and the effects of the dispersed-phase flow rate and the viscosity ratio of the two phases were also very important. The dispersion mechanism and size rules in the capillary-embedded microfluidic devices were discussed in detail by comparing the similarities and differences of the liquid/liquid and gas/liquid dispersion processes. © 2010 American Institute of Chemical Engineers AIChE J, 2011
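For reference, the Ca number governing the dispersed size is the continuous-phase capillary number (standard definition):

$$ Ca = \frac{\mu_c\,u_c}{\sigma} $$

the ratio of viscous to interfacial forces, with $\mu_c$ and $u_c$ the continuous-phase viscosity and velocity and $\sigma$ the interfacial tension.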