
Showing papers by "ExxonMobil" published in 2022


Journal ArticleDOI
15 Jan 2022-Fuel
TL;DR: In this article, Pd-metal oxide (ZrO2, WOx, MoO3) catalysts supported on activated biochar (ABC) were developed for hydrogenolysis of lignin.

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated potential effects of effluent composition, particularly hydrocarbons, on aquatic toxicity; the work was a component of a larger study assessing contaminant removal during refinery wastewater treatment.

7 citations


Book ChapterDOI
V. Yusim
03 May 2022
TL;DR: In this paper, the authors present evidence that challenges all the assumptions that lead to the long-held notion that gravitational collapse of thickened (55-65-km-thick) continental crust was a major driver of Basin and Range extension.
Abstract: The Basin and Range Province is considered to be one of the most iconic continental rift provinces that postdates a prolonged orogeny. Here, I present evidence that challenges all the assumptions that lead to the long-held notion that gravitational collapse of thickened (55–65-km-thick) continental crust was a major driver of Basin and Range extension. This study focused on integrating the regional tectonic and magmatic history of the northeastern region of the Basin and Range (centered on the Albion–Raft River–Grouse Creek metamorphic core complex) and combines insights from a compilation of data from metamorphic core complexes worldwide to illustrate the effect of accounting for the magmatic histories when estimating pre-extensional crustal thickness. In the region of the Albion–Raft River–Grouse Creek metamorphic core complex, there is evidence of three Cenozoic extensional events and three coeval magmatic events. By taking into account the regional magmatic activity during the Cenozoic (Paleogene, Neogene, and Quaternary magmatism), and the inferred mantle-derived magmatic volume added to the crust during the process of extension, it is shown that the pre-extensional crustal thickness cannot have been more than ~53 km, and it was more likely close to ~46 km. This estimate is consistent with Eocene igneous geochemistry estimates of crustal thickness and with crustal thickness estimates from shortening of ~30-km-thick mid-Jurassic crust. During the Cenozoic evolution of the northeastern Basin and Range, the crust in the area of study thinned from ~46 km to ~32 km, and the elevation of the pre-extensional plateau collapsed from ~2.5 km to its present-day average of ~1.8 km. This study concludes that an alternative mechanism to predominantly gravitational crustal collapse is required to explain the extension in the region of the Albion–Raft River–Grouse Creek metamorphic core complex. I support recent interpretations that this mechanism involved the complex interaction of the removal of the Farallon flat slab (by slab roll-back or delamination of the slab) with the impingement of the Snake River Plain–Yellowstone mantle anomaly. The switch in the stress regime from compression (during the slab subduction) to a complex regime during slab roll-back, followed by extension (in the middle Miocene), and the associated mantle-derived magmatism, led to the thinning of the subcontinental lithospheric mantle, thermal weakening of the crust, and the thinning of the crust during the Cenozoic. This crustal extension is expressed as regional Basin and Range normal faulting and local vertical flow and exhumation of the mobilized middle crust at metamorphic core complexes like the Albion–Raft River–Grouse Creek complex.

5 citations


Journal ArticleDOI
Mutiara
TL;DR: In this article, the generalized Darboux transformation was used to derive degenerate breather solutions of the Boussinesq equation, and the dynamics of these breathers were discussed.

3 citations


Journal ArticleDOI
Bryan A. Patel
TL;DR: In this article, the industrial constraints for implementing PI technology at scale, the potential applications underserved by current technology, and the methodologies needed to effectively screen and design PI processes are discussed.
Abstract: Process intensification (PI) is a class of approaches designed to transform traditional chemical equipment into smaller, more selective, and more energy efficient processes. The field has found success with new unit operations in a number of applications, but more can be done to broaden the implementation of PI solutions in chemical plants. To address the full scope of chemical process industry needs, process intensification must deliver solutions that achieve scale by using the underlying guiding principles of PI to expand the space beyond the initial focus of the field. This paper outlines these needs: the industrial constraints for implementing PI technology at scale, the potential applications underserved by current technology, and the methodologies needed to effectively screen and design PI processes.

3 citations


Journal ArticleDOI
TL;DR: In this article, outcrop mapping, lithofacies and geochemical analyses of sediments of the Ikom-Mamfe Embayment across the Nigeria-Cameroon border were carried out to determine the depositional environments and hydrocarbon-generating potentials.

3 citations


Journal ArticleDOI
01 Jan 2022-Carbon
TL;DR: In this article, a suite of porous carbons with variable surface areas and nitrogen contents was synthesized to address the lack of a detailed understanding of the influence of precursor properties on the resulting physicochemical properties.

3 citations


Journal ArticleDOI
Matvey Novikov
TL;DR: In this article, a unique combination of inverse gas chromatography (IGC) and a statistical thermodynamic model is used to accurately quantify the energetic driving force for polymer infiltration into CNT articles for the first time.
Abstract: A key step towards realizing the promise of macroscopic carbon nanotube (CNT) articles in high-performance structural applications is polymer infiltration into the porous CNT structure, which improves stress transfer and promotes long-term material integrity. However, infiltration is often found to be sub-optimal - a significant impediment to the scaling and adoption of CNT-article composites that has thus far received little systematic attention. In this study, a unique combination of inverse gas chromatography (IGC) and a statistical thermodynamic model is used to accurately quantify the energetic driving force for infiltration into CNT articles for the first time. This is measured to have a very low value. Using gas chromatography-mass spectrometry, IGC analysis and electron microscopy, this low energy is found to result from near-complete surface coverage by non-graphitic pyrolysis byproducts. The surface energy is improved by plasma treatment as confirmed by IGC analysis. The effects of the surface treatment and modifications to the porous structure on the infiltration rate of model liquids are studied by optical imaging. Accurate surface energy measurements and image analysis are coupled to shed light on the critical parameters in these CNT articles that dictate the infiltration physics, which can be tuned to maximize the rate and extent of space filling.

2 citations


Journal ArticleDOI
TL;DR: In this article, the authors examined the LDI of coronene and two petroleum pitch samples, an M-50 isotropic pitch and a thermally treated M-50 pitch that contained a mesophase.
Abstract: Laser desorption ionization (LDI) mass spectrometry has been widely applied for the analysis of pitch-related materials. LDI is particularly useful when samples have very low solubility. However, LDI conditions, such as laser power output, can have significant impact on the resulting mass spectra. In this work, we examined the LDI of coronene and two petroleum pitch samples, an M-50 isotropic pitch and a thermally treated M-50 pitch that contained a mesophase. LDI at varying laser power is coupled to ultrahigh-resolution Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) to determine the impact of laser power on the elemental compositions of the pitch samples. Coronene is shown to form large polycyclic aromatic hydrocarbon (PAH) oligomers at high laser powers. Variations in laser power clearly affect mass distributions and compound-type distributions of the pitch samples. The impact of laser power is more significant and visible for the thermally treated pitch sample, where increased laser power generated high levels of fully dealkylated (or denuded) polycyclic aromatic hydrocarbons (dPAHs) and fullerenes. Two types of PAH ions, containing even and odd numbers of hydrogen atoms, were observed. Even-hydrogen-number PAHs are molecular ions produced by direct laser ionization. The origins of odd-hydrogen-number PAHs are more complicated. They can result from dealkylation of larger PAHs, protonation of the parent molecule, and/or ionization of neutral PAH radicals. The latter can be a significant contributor to the odd-hydrogen-number PAHs. For analytical applications, a balance in laser power is needed to vaporize the non-volatile pitch molecules while also minimizing potential secondary thermal reactions during the LDI process. When laser power is controlled at a similar level, LDI–MS provides useful information to understand pitch compositional change from thermal treatment.

2 citations


Journal ArticleDOI
TL;DR: The nucleation potential model (NPM), as discussed by the authors, is a novel nucleation model that dramatically simplifies the diverse reactions between sulfuric acid and any combination of precursor gases; it is applied to experimental and field observations of sulfuric acid nucleation to demonstrate how the effective base concentration varies for different stabilizing compounds, mixtures, and sampling locations.
Abstract: Observations over the last decade have demonstrated that the atmosphere contains potentially hundreds of compounds that can react with sulfuric acid to nucleate stable aerosol particles. Consequently, modeling atmospheric nucleation requires detailed knowledge of nucleation reaction kinetics and spatially and temporally resolved measurements of numerous precursor compounds. This study introduces the Nucleation Potential Model (NPM), a novel nucleation model that dramatically simplifies the diverse reactions between sulfuric acid and any combination of precursor gases. The NPM predicts 1 nm nucleation rates from only two measurable gas concentrations, regardless of whether all precursor gases are known. The NPM describes sulfuric acid nucleating with a parameterized base compound at an effective base concentration, [Beff]. [Beff] captures the ability of a compound or mixture to form stable clusters with sulfuric acid and is estimated from measured 1 nm particle concentrations. The NPM is applied to experimental and field observations of sulfuric acid nucleation to demonstrate how [Beff] varies for different stabilizing compounds, mixtures, and sampling locations. Analysis of previous field observations shows distinct differences in [Beff] between locations that follow the emission sources and stabilizing compound concentrations for that region. Overall, the NPM allows researchers to easily model nucleation across diverse environments and estimate the concentration of non-sulfuric acid precursors using a condensation particle counter.

1 citation
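
A conceptual Python sketch of the NPM workflow described above follows. The paper's actual rate parameterization is not reproduced here; the power-law form, exponents, and rate constant are illustrative assumptions only, meant to show how a rate prediction from two concentrations can be inverted to estimate [Beff] from measured 1 nm particle formation.

```python
# Conceptual sketch of the NPM idea, NOT the paper's parameterization:
# the rate law J = k * [H2SO4]^a * [Beff]^b and the values of k, a, b
# below are assumed for illustration.

def nucleation_rate(h2so4, b_eff, k=1e-14, a=2.0, b=1.0):
    """Predict a 1 nm nucleation rate (cm^-3 s^-1) from two concentrations."""
    return k * h2so4**a * b_eff**b

def effective_base(j_measured, h2so4, k=1e-14, a=2.0, b=1.0):
    """Invert the assumed rate law to back out the effective base
    concentration [Beff] from a measured 1 nm particle formation rate."""
    return (j_measured / (k * h2so4**a)) ** (1.0 / b)

# Hypothetical observations: a CPC-derived 1 nm formation rate plus a
# measured sulfuric acid concentration yield an estimate of [Beff].
print(effective_base(j_measured=1.0, h2so4=1e7))  # cm^-3
```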


Journal ArticleDOI
TL;DR: In this article, the authors report an aerobic partial oxidation of isobutylene into isoprene, acetone, and para-xylene using a mesoporous Ni TiOx catalytic material.
Abstract: We report an aerobic partial oxidation of isobutylene into isoprene, acetone, and para-xylene using a mesoporous Ni TiOx catalytic material. In this work, two catalysts were found to synthesize two of these three valuable products with high selectivity, with p-xylene being synthesized with a selectivity of 46.0% and isoprene being synthesized with a selectivity of 64.7%, at overall isobutylene conversions of 34.0% and 11.9%, respectively. These reactions were conducted at relatively low temperatures (300 °C or below) and at flow rates of 10 sccm of oxygen and isobutylene. The nickel titania catalysts were studied extensively using various characterization methodologies such as TEM, Raman, XRD, and XRF.

Book ChapterDOI
Bianca Dettino
25 May 2022
TL;DR: In this article, the authors report on the results of a survey carried out with interns during the COVID-19 pandemic and examine the advantages and disadvantages of remote work that were identified by the interns.
Abstract: The COVID-19 pandemic led to the rapid spread of remote work around the world. This chapter focuses on the question of student internships. It reports on the results of a survey carried out with interns during the pandemic. Internships constitute a crucial time for interns, who hope to develop skills linked to their education programme, gain experience and perhaps be recruited for a permanent position. The remote working conditions imposed during the COVID-19 pandemic represented a potential challenge which could well compromise such hopes. The chapter examines the advantages and disadvantages of remote work that were identified by the interns. It links their underlying ambivalent feelings with attitudes to work generally, and to a discussion of the future of work which, due to the pandemic, is undergoing profound changes.

DissertationDOI
Qi Guo
13 Jun 2022
TL;DR: In this paper, the authors evaluated a method of introducing obstructive sleep apnea patients to CPAP prior to administering CPAP titration in the laboratory, hypothesizing that participants who experienced CPAP habituation would have better sleep quality during CPAP, would be more likely to accept CPAP, and would use CPAP more on a nightly basis than control participants.
Abstract: Obstructive sleep apnea syndrome (OSAS) is a serious medical condition that occurs during sleep and consists of episodes of complete (respiratory pauses) or partial obstruction (hypoventilation) of the upper airway. Approximately 80% of persons diagnosed with OSAS are prescribed nasal Continuous Positive Airway Pressure (CPAP) treatment, which has proven to be the treatment of choice for OSAS. However, noncompliance with CPAP treatment in OSAS patients is a widely recognized problem, and many persons refuse CPAP as a treatment option or fail to use it reliably. Investigations of CPAP use in OSAS patients have generally found that nightly use averages less than five hours. Few interventions have been scientifically evaluated for improving CPAP compliance. The current study evaluated a method of introducing OSAS patients to CPAP prior to administering CPAP titration in the laboratory. Participants in the treatment groups underwent a 30-minute CPAP habituation trial, with a range of pressures, prior to the polysomnography with CPAP. It was hypothesized that the participants who experienced CPAP habituation would have better sleep quality during CPAP, would be more likely to accept CPAP, and would use CPAP more on a nightly basis than control participants who experienced the usual laboratory procedures for introducing CPAP (CPAP education) to OSAS patients. There were no statistically significant differences for any of the dependent variables between participants who experienced CPAP habituation and participants who experienced CPAP education. Men were found to use CPAP 1.61 hours more on a nightly basis than women (p = .03). This difference is most likely attributable to severity, as men were observed to have an A+HI that was twice the observed A+HI of women participants. Overall, CPAP acceptance and compliance for the complete sample was comparable to what has been reported in the CPAP treatment literature.

Proceedings ArticleDOI
Laurent White
01 May 2022
TL;DR: In this paper, the authors discuss the challenges of co-designing the application, which requires domain experts to collaborate with other experts across the stack for workload mapping and data orchestration, and of adopting a decentralized strategy that embeds processing units where the data need them.
Abstract: More than ever, the semiconductor industry is asked to answer society's call for more computing capacity and capability, which are driven by rapid digitalization, the widespread adoption of artificial intelligence, and the ever-increasing need for high-fidelity scientific simulations. While facing high demand, the supply of computing capability is being technically challenged by the slowdown of Moore's law and the need for high energy efficiency. This tug-of-war has now pushed the industry towards domain-specific accelerators, likely past the point of no return. The mix of general-purpose CPUs and high-end GPGPUs, which has pervaded data centers over the past few years, is likely to be expanded to a much richer set of application-specific accelerators, including AI engines, reconfigurable hardware, and perhaps even quantum, annealing, and neuromorphic devices. While acceleration and better efficiency may be enabled by using domain-specific accelerators for selected workloads, a much more holistic (i.e., system-wide) approach will have to be adopted to achieve significant performance gains for complex applications that consist of a variety of workloads where each could benefit from a specific accelerator. As an important example, scientific computing, which increasingly incorporates AI training and inference kernels in a tightly-integrated fashion, provides a rich and exciting laboratory for addressing the challenges of efficiently using highly-heterogeneous systems and for ultimately realizing their promises. Those challenges include co-designing the application, which requires domain experts to collaborate with other experts across the stack for workload mapping and data orchestration, and also adopting a decentralized strategy that embeds processing units where the data need them. Finally, the early experience of those co-design efforts should help the industry devise a longer-term strategy for developing programming models that would relieve application experts from what is often perceived as the burden of hardware-aware development and code optimization.

Book ChapterDOI
08 Sep 2022
TL;DR: In this article, the authors analyzed the overall picture of coal structure that emerges from a variety of probe techniques, including heat capacity studies of water molecules in coal pores, nuclear magnetic resonance (NMR) measurements on the pore water, and structural information inferred from small-angle X-ray scattering and small-angle neutron scattering.
Abstract: Coal is an extremely heterogeneous material. It contains organic matter, mineral matter, and an extensive pore network. This chapter analyzes the overall picture of coal structure which emerges from this variety of probe techniques. In principle, the goal of these physical characterization techniques is to determine the volume of pores in a coal, the distribution in pore sizes and shapes, and the extent and physical characteristics of the coal surface. A complementary concern is to be able to describe the area and nature of the resulting coal surfaces. The chapter describes three experimental techniques which have been applied to coal: heat capacity studies of water molecules in coal pores, nuclear magnetic resonance (NMR) measurements on the pore water, and structural information inferred from small-angle X-ray scattering and small-angle neutron scattering. Naturally occurring water molecules in coal have been surveyed by NMR primarily with the intent of establishing a noninvasive measure of moisture content.

Posted ContentDOI
Benjamin Marschke
15 Feb 2022
TL;DR: In this paper, the authors present databases and models used to estimate crustal thickness and elevations during the evolution of the Sevier-Laramide orogen and Basin and Range extension.
Abstract: Appendices S1–S7: Databases and models used to estimate crustal thickness and elevations during the evolution of the Sevier-Laramide orogen and Basin and Range extension


Posted ContentDOI
Peter C. Rowe
27 Jan 2022
TL;DR: In this article, shape metrics are used to describe 2D data to help make analyses more explainable and interpretable, which is particularly important in applications in the medical community where the right to explainability is crucial.
Abstract: Traditional machine learning (ML) algorithms, such as multiple regression, require human analysts to make decisions on how to treat the data. These decisions can make the model building process subjective and difficult to replicate for those who did not build the model. Deep learning approaches benefit by allowing the model to learn what features are important once the human analyst builds the architecture. Thus, a method for automating certain human decisions for traditional ML modeling would help to improve the reproducibility and remove subjective aspects of the model building process. To that end, we propose to use shape metrics to describe 2D data to help make analyses more explainable and interpretable. The proposed approach provides a foundation to help automate various aspects of model building in an interpretable and explainable fashion. This is particularly important in applications in the medical community where the 'right to explainability' is crucial. We provide various simulated data sets, including probability distributions, functions, and model quality control checks (such as QQ-plots and residual analyses from ordinary least squares), to showcase the breadth of this approach.
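
The abstract does not specify which shape metrics the authors use, so the Python sketch below is purely illustrative: it condenses a 2D data set into a few generic scalar shape descriptors of the kind a downstream automated model-building step might consume.

```python
# Illustrative only: hypothetical 2D shape descriptors, not the paper's metrics.
import numpy as np
from scipy import stats

def shape_metrics(x, y):
    """Condense a 2D point cloud into scalar descriptors of its shape."""
    slope, intercept, r, _, _ = stats.linregress(x, y)
    residuals = y - (slope * x + intercept)
    return {
        "linearity_r2": r**2,                    # how line-like the data are
        "residual_skew": stats.skew(residuals),  # asymmetry about the fit
        "residual_kurtosis": stats.kurtosis(residuals),
    }

# Example: a noisy line scores high on linearity; a parabola does not.
x = np.linspace(0.0, 1.0, 200)
print(shape_metrics(x, 2.0 * x + np.random.normal(0.0, 0.05, x.size)))
print(shape_metrics(x, x**2))
```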

Book ChapterDOI
01 Jan 2022


Book ChapterDOI
Mostafa El-Feky
14 Jun 2022
TL;DR: In this paper, CNNs were used to classify and predict bulk mechanical properties of a series of polymer blends based on their microstructure, as measured by atomic force microscopy (AFM) using both quantitative and qualitative imaging modes.
Abstract: Convolutional neural nets (CNNs) are used to classify and predict bulk mechanical properties of a series of polymer blends based on their microstructure, as measured by atomic force microscopy (AFM) using both quantitative and qualitative imaging modes. The polymer blends were 3-component impact copolymers (ICPs) comprising a polypropylene matrix and various distributions, densities, morphologies, and sizes of ethylene-propylene rubbers and associated ethylene inclusions. The models successfully classified the ICPs and predicted the Young's modulus, flexural modulus, and yield strength. The model was unsuccessful at predicting the impact toughness of the material. The success or failure of the deep learning models provides insight into whether morphological or mechanical properties of the microstructure have a stronger influence on the various bulk mechanical properties.
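
As a rough illustration of this approach (the chapter's actual architecture and training details are not given in the abstract), a minimal PyTorch sketch of a CNN regressing one bulk property from a single-channel AFM image might look like this:

```python
# Minimal illustrative sketch, not the authors' model: a small CNN that
# maps a single-channel AFM image to one regressed bulk property.
import torch
import torch.nn as nn

class PropertyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> 32 features
        )
        self.head = nn.Linear(32, 1)   # e.g., Young's modulus (hypothetical)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One 128x128 AFM image in, one predicted scalar property out.
model = PropertyCNN()
print(model(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 1])
```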

Journal ArticleDOI
Rene Jonk
TL;DR: In this paper, the authors apply the sequence-stratigraphic method to assess the containment potential and risk for storage of immiscible supercritical carbon dioxide at basin-to-prospect scales.

Journal ArticleDOI
Mark S. Rzepczynski
TL;DR: In this article, a qualitative investment and operational due-diligence review is based on a manager's story of current and future behavior, and the selection process balances competing stories and fund perceptions drawn from narratives between the manager (sender) and investor (receiver) of information.
Abstract: In Narrative, Storytelling and Qualitative Due Diligence, from the Winter 2022 Due Diligence Special Issue of The Journal of Alternative Investments, Mark Rzepczynski of AMPHI Research and Trading describes how the manager selection process employs narratives to impart non-quantitative information and solve a significant asymmetric information problem. Narratives, or stories, are an integral part of the due diligence process, with selection often centered around the manager’s description of his investment strategy and clarification of noisy signals from past performance. Similarly, investors form manager narratives to clarify their assessment of skill and performance. After a quantitative performance filter, a qualitative investment and operational due diligence review is based on a manager’s story of current and future behavior. The selection process balances competing stories and fund perceptions drawn from narratives between the manager (sender) and investor (receiver) of information. Narratives, the qualitative description of skill, are often used to explain the manager’s distinct edge for risk-taking and return generation. A classic storyline where the manager is a hero overcoming an investment obstacle is often employed to demonstrate qualities not shown in track records, such as the manager’s character and ability to deal with adversity. An effective narrative closes the information gap between the manager as agent and investor as principal.

Book ChapterDOI
Marcus Diekmann
01 Jan 2022




Posted ContentDOI
Lauren Mike Ansell
19 May 2022
TL;DR: In this paper, the authors used historic port call data to predict the increase in energy demand required for battery-powered vessels through a period of 24 hours as a greater proportion of the fleet moves to battery-powered propulsion.
Abstract: The International Maritime Organization (IMO) has set the target of reducing the emissions from the shipping sector to at least 50% of the 2008 levels. One potential method to cut emissions is to convert vessels to battery-powered propulsion in a similar manner to that which has been adopted for motor vehicles. Although battery-powered propulsion will not be suitable for all vessels, the conversion of those that are will lead to an increase in the energy demand from the national grid. This study uses historic port call data to model the timings of arrivals and the number of vessels in the port of Plymouth, and to predict the increase in energy demand required for battery-powered vessels through a period of 24 hours as a greater proportion of the fleet moves to battery-powered propulsion.
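
The paper's data and model are not reproduced here, but the core bookkeeping can be sketched in a few lines of Python: bin historic arrivals by hour, then scale by an assumed battery-adoption fraction and an assumed recharge energy per call (both numbers below are hypothetical placeholders).

```python
# Illustrative sketch with hypothetical numbers, not the paper's data:
# turn historic arrival times into an hourly extra-demand profile.
from collections import Counter

arrival_hours = [0, 5, 5, 6, 8, 8, 8, 13, 17, 17, 21]  # hypothetical port calls
ENERGY_PER_CALL_MWH = 2.0   # assumed recharge energy per battery-vessel call
ADOPTION = 0.25             # assumed share of the fleet that is battery powered

calls_per_hour = Counter(arrival_hours)
extra_demand_mwh = {h: calls_per_hour.get(h, 0) * ADOPTION * ENERGY_PER_CALL_MWH
                    for h in range(24)}
print(extra_demand_mwh)  # additional grid demand in each hour of the day
```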

DissertationDOI
Rohan Panchadhara
10 Jun 2022
TL;DR: In this article, an explicit, two-dimensional, Lagrangian finite and discrete element technique is formulated and used to computationally characterize meso-scale fluctuations in thermomechanical fields induced by low-pressure deformation waves propagating through particulate energetic solids.
Abstract: An explicit, two-dimensional, Lagrangian finite and discrete element technique is formulated and used to computationally characterize meso-scale fluctuations in thermomechanical fields induced by low-pressure deformation waves propagating through particulate energetic solids. Emphasis is placed on characterizing the relative importance of plastic and friction work as meso-scale heating mechanisms which may cause bulk ignition of these materials and their dependence on piston speed (vp ~ 50-500 m/s). The numerical technique combines conservation principles with a plane strain, thermoelastic-viscoplastic constitutive theory to describe deformation within the material meso-structure. An energy consistent, penalty based, distributed potential force method, coupled to a penalty regularized Amontons Coulomb law, is used to enforce kinematic and thermal contact constraints between particles. The technique is shown to be convergent, and its spatial (~ 2.0) and temporal (~ 1.5) convergence rates are established. Predictions show that although plastic work far exceeds friction work, considerably higher local temperatures result from friction work. Most mass within the deformation wave (~ 99.9%) is heated to approximately 330, 400, and 500 K, for vp = 50, 250, and 500 m/s, respectively, due to plastic work, whereas only a small fraction of mass (~ 0.001%) is respectively heated to temperatures in excess of 600, 1100 and 1400 K due to friction work. In addition to low speed impact, and contrary to conventional belief, friction work is shown to also be an important heating mechanism at higher impact speeds. The variation in spatial partitioning of bulk energy within the deformation wave structure with particle morphology and material properties is demonstrated.
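
To make the contact treatment concrete, here is a toy Python sketch of a penalty normal force combined with a regularized Amontons-Coulomb friction law for a single contact point; the stiffness, friction coefficient, and regularization velocity are illustrative values, not those calibrated in the thesis.

```python
import math

# Toy single-contact sketch; kn, mu, and v_reg are illustrative values,
# not the thesis' calibrated parameters.
def contact_forces(gap, slip_rate, kn=1.0e9, mu=0.5, v_reg=1.0e-3):
    """gap < 0 means the surfaces overlap by |gap| (m); slip_rate is the
    tangential relative velocity (m/s)."""
    fn = kn * max(-gap, 0.0)                      # penalty normal force (N)
    # tanh regularization of Coulomb friction: smooth near sticking,
    # saturating at the limit mu * fn for finite slip rates
    ft = -mu * fn * math.tanh(slip_rate / v_reg)
    return fn, ft

# Example: 1 micron of overlap sliding at 10 m/s.
print(contact_forces(gap=-1.0e-6, slip_rate=10.0))
```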

Journal ArticleDOI
D.E. Nierode
TL;DR: The global warming/climate change underway on earth today is a totally natural occurrence caused by solar cycles with solid scientific and historical support, as mentioned in this paper, and the earth is currently in the upswing part of its normal temperature cycle.
Abstract: The global warming/climate change underway on earth today is a totally natural occurrence caused by solar cycles with solid scientific and historical support. Earth temperatures are controlled by three solar cycles of nominally 1,000, 70, and 11 years. A supporting 73-year cycle within measured earth temperatures is documented in this work. The earth is currently in the upswing part of its normal temperature cycle. Very warm (Medieval Warming) and very cold (Little Ice Age) temperature epochs have been historically documented on earth for at least the last 3,000 years. The primary 1,000-year solar cyclicity was first estimated to be approximately every 1,500 ± 500 (1,000 - 2,000) years from many, diverse scientific studies [1]. The explanation for the earth's temperature increases since 1850 is captured in a mathematical model called the Cyclical Sine Model. This model fits measured temperatures since 1850 and past climate epochs, and correlates closely with the thousand-year cyclicity of solar activity from 14C/12C ratio studies [2], Bond Atlantic drift ice cycles [3,4], sunspot history [5], the Atlantic Multidecadal Oscillation [6], and the Pacific Decadal Oscillation [7]. In addition, this model quantitatively presents an explanation for the time span 1945-1975 when an impending new ice age was feared [8].
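
The abstract names the Cyclical Sine Model but not its coefficients, so the following generic Python sketch simply fits a sum of fixed-period sinusoids (the 1,000-, 73-, and 11-year cycles stated above) plus an offset to a temperature series; amplitudes and phases are free parameters, and the data here are synthetic placeholders, not the paper's series.

```python
# Generic sum-of-sinusoids fit; the periods come from the abstract, but
# everything else (amplitudes, phases, data) is a placeholder.
import numpy as np
from scipy.optimize import curve_fit

PERIODS = (1000.0, 73.0, 11.0)  # years, as stated in the abstract

def cyclical_sine(t, *p):
    """Offset p[-1] plus one sinusoid (amplitude, phase) per fixed period."""
    out = np.full_like(t, p[-1], dtype=float)
    for i, period in enumerate(PERIODS):
        out += p[2 * i] * np.sin(2.0 * np.pi * t / period + p[2 * i + 1])
    return out

# Synthetic stand-in for a measured temperature series since 1850.
t = np.arange(1850.0, 2021.0)
y = cyclical_sine(t, 0.4, 0.0, 0.2, 1.0, 0.05, 2.0, 14.0)

p0 = [0.1, 0.0, 0.1, 0.0, 0.1, 0.0, 14.0]  # initial parameter guesses
params, _ = curve_fit(cyclical_sine, t, y, p0=p0)
print(np.round(params, 2))
```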