Author

A F Rogachev

Bio: A F Rogachev is an academic researcher whose work focuses on artificial neural networks. The author has an h-index of 1 and has co-authored 1 publication, which has received 3 citations.

Papers

Cited by
DOI
10 Mar 2022
TL;DR: A thermospheric neutral mass density model with robust and reliable uncertainty estimates is developed based on the Space Environment Technologies (SET) High Accuracy Satellite Drag Model (HASDM) density database, and a storm‐time comparison shows that HASDM‐ML also supplies meaningful uncertainty estimates during extreme geomagnetic events.
Abstract: A thermospheric neutral mass density model with robust and reliable uncertainty estimates is developed based on the Space Environment Technologies (SET) High Accuracy Satellite Drag Model (HASDM) density database. This database, created by SET, contains 20 years of outputs from the U.S. Space Force's HASDM, which currently represents the state of the art for density and drag modeling. We utilize principal component analysis for dimensionality reduction, which creates the coefficients upon which nonlinear machine‐learned (ML) regression models are trained. These models use three unique loss functions: Mean square error (MSE), negative logarithm of predictive density (NLPD), and continuous ranked probability score. Three input sets are also tested, showing improved performance when introducing time histories for geomagnetic indices. These models leverage Monte Carlo dropout to provide uncertainty estimates, and the use of the NLPD loss function results in well‐calibrated uncertainty estimates while only increasing error by 0.25% (<10% mean absolute error) relative to MSE. By comparing the best HASDM‐ML model to the HASDM database along satellite orbits, we found that the model provides robust and reliable density uncertainties over diverse space weather conditions. A storm‐time comparison shows that HASDM‐ML also supplies meaningful uncertainty estimates during extreme geomagnetic events.

13 citations
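
For readers curious how such a pipeline fits together, the sketch below pairs PCA dimensionality reduction with a small dropout network and uses Monte Carlo dropout to produce uncertainty estimates, in the spirit of the abstract above. All array shapes, layer sizes, driver features, and hyperparameters are illustrative assumptions rather than values from the HASDM-ML paper.

```python
# Minimal sketch: PCA coefficients regressed by a dropout network, with
# Monte Carlo dropout supplying uncertainty estimates. Shapes, layer sizes,
# and the number of retained components are assumptions, not paper values.
import numpy as np
from sklearn.decomposition import PCA
import tensorflow as tf

# Toy stand-ins for the density database: 1000 snapshots on a 500-point grid,
# driven by 8 space-weather inputs (e.g. F10.7 and geomagnetic index histories).
rng = np.random.default_rng(0)
density_grid = rng.random((1000, 500))
drivers = rng.random((1000, 8))

# Reduce each density snapshot to 10 PCA coefficients; the regression model
# is trained to predict these coefficients rather than the full grid.
pca = PCA(n_components=10)
coeffs = pca.fit_transform(density_grid)

# Small feed-forward regressor with dropout layers.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])
model.compile(optimizer="adam", loss="mse")  # the paper also trains with NLPD and CRPS losses
model.fit(drivers, coeffs, epochs=5, verbose=0)

# Monte Carlo dropout: keep dropout active at inference and average many
# stochastic forward passes; the spread gives an uncertainty estimate.
samples = np.stack([model(drivers[:5], training=True).numpy() for _ in range(100)])
mean_coeffs, std_coeffs = samples.mean(axis=0), samples.std(axis=0)

# Map predicted coefficients back to density space.
predicted_density = pca.inverse_transform(mean_coeffs)
```

The number of stochastic forward passes trades compute for smoother uncertainty estimates; the well-calibrated uncertainties reported in the paper come from the NLPD loss, which this sketch only notes in a comment.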

Journal ArticleDOI
TL;DR: In this paper, the authors used principal component analysis (PCA) for dimensionality reduction, which creates the coefficients upon which nonlinear machine-learned (ML) regression models are trained.

7 citations

Journal ArticleDOI
TL;DR: This study applied seven popular machine learning and deep learning algorithms, including Naïve Bayes, Support Vector Machine, Random Forest, XGBoost, Multilayer Perceptron, Transformer Neural Network, and stacking and voting ensemble models, to build a customized classification model for vaping-related tweets.
Abstract: There are increasingly strict regulations surrounding the purchase and use of combustible tobacco products (i.e., cigarettes); simultaneously, the use of other tobacco products, including e-cigarettes (i.e., vaping products), has dramatically increased. However, public attitudes toward vaping vary widely, and the health effects of vaping are still largely unknown. As a popular social media platform, Twitter contains rich information shared by users about their behaviors and experiences, including opinions on vaping. Manually identifying vaping-related tweets that contain useful information is very challenging. In the current study, we developed a detection model to accurately identify vaping-related tweets using machine learning and deep learning methods. Specifically, we applied seven popular machine learning and deep learning algorithms, including Naïve Bayes, Support Vector Machine, Random Forest, XGBoost, Multilayer Perceptron, Transformer Neural Network, and stacking and voting ensemble models, to build our customized classification model. We extracted a set of sample tweets during an outbreak of e-cigarette or vaping-related lung injury (EVALI) in 2019 and created an annotated corpus to train and evaluate these models. After comparing the performance of each model, we found that stacking ensemble learning achieved the highest performance, with an F1-score of 0.97. All models achieved scores of 0.90 or higher after hyperparameter tuning, and the ensemble learning model had the best average performance. Our findings provide informative guidelines and practical implications for the automated detection of themed social media data for public opinion and health surveillance purposes.

2 citations
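
As a rough illustration of the ensemble approach described above, the sketch below stacks a few classical text classifiers over TF-IDF features with scikit-learn. The toy tweets, labels, and choice of base learners are assumptions for demonstration; the study's full lineup (XGBoost, multilayer perceptron, Transformer network) is omitted for brevity.

```python
# Hedged sketch of a stacking ensemble for tweet classification.
# The tiny corpus and the particular base learners are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotated tweets: 1 = vaping-related, 0 = unrelated.
texts = [
    "trying a new vape juice flavor today",
    "my e-cigarette battery died again",
    "anyone know a good vape shop downtown",
    "great weather for a morning run",
    "watching the game with friends tonight",
    "finally finished reading that novel",
]
labels = [1, 1, 1, 0, 0, 0]

# Base learners are stacked; a logistic-regression meta-learner combines
# their outputs, mirroring the ensemble that performed best in the study.
stack = StackingClassifier(
    estimators=[
        ("nb", MultinomialNB()),
        ("svm", LinearSVC()),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    final_estimator=LogisticRegression(),
    cv=3,
)
clf = make_pipeline(TfidfVectorizer(), stack)
clf.fit(texts, labels)
print(clf.predict(["anyone else switch from cigarettes to vaping?"]))
```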

Journal ArticleDOI
01 Jun 2021
TL;DR: This study uses 2D core CT scan image slices to train a convolutional neural network that automatically predicts the lithology of a well on the Norwegian continental shelf; similar lithofacies classes are then identified and merged through ad hoc analysis that considers the degree of confusion in the prediction confusion matrix, aided by porosity–permeability cross-plot relationships.
Abstract: X-ray computerized tomography (CT) images as digital representations of whole cores can provide valuable information on the composition and internal structure of cores extracted from wells. Incorporating millimeter-scale core CT data into lithology classification workflows can result in high-resolution lithology description. In this study, we use 2D core CT scan image slices to train a convolutional neural network (CNN) whose purpose is to automatically predict the lithology of a well on the Norwegian continental shelf. The images are preprocessed prior to training, i.e., undesired artefacts are automatically flagged and removed from further analysis. The training data include expert-derived lithofacies classes obtained by manual core description. The trained classifier is used to predict lithofacies on a set of test images that are unseen by the classifier. The prediction results reveal that distinct classes are predicted with high recall (up to 92%). However, misclassifications occur between classes with similar gray-scale values and transport properties. To postprocess the acquired results, we identified and merged similar lithofacies classes through ad hoc analysis that considers the degree of confusion in the prediction confusion matrix, aided by porosity–permeability cross-plot relationships. Based on this analysis, the lithofacies classes are merged into four rock classes. Another CNN classifier trained on the resulting rock classes generalizes well, with higher pixel-wise precision when detecting thin layers and bed boundaries compared to the manual core description. Thus, the classifier provides additional and complementary information to the already existing rock type description.

2 citations
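
A minimal sketch of a CNN classifier for 2D CT slices, loosely following the workflow above: the 64×64 grayscale input size, the four output rock classes, and the layer configuration are illustrative assumptions, not the architecture from the paper.

```python
# Illustrative CNN for classifying 2D core CT slices into rock classes.
# Image size, class count, and layers are assumptions for demonstration.
import numpy as np
import tensorflow as tf

NUM_CLASSES = 4          # e.g. the merged rock classes from the study
IMG_SHAPE = (64, 64, 1)  # grayscale CT slice

# Toy stand-ins for preprocessed slices and expert-derived labels.
rng = np.random.default_rng(0)
images = rng.random((200, *IMG_SHAPE), dtype=np.float32)
labels = rng.integers(0, NUM_CLASSES, size=200)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=IMG_SHAPE),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(images, labels, epochs=3, verbose=0)

# Per-class recall (the paper reports up to 92% for distinct classes) can
# then be read off a confusion matrix computed on held-out slices.
preds = model.predict(images[:10]).argmax(axis=1)
```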

DOI
TL;DR: This work uses a combination of environmental and experimental data, such as atmospheric pressure, gas temperature, and the flux of incident particles, as inputs to a sequential neural network that recommends a high-voltage setting and the corresponding calibration constants in order to maintain consistent gain and optimal resolution throughout the experiment.
Abstract: The AI for Experimental Controls project is developing an AI system to control and calibrate detector systems located at Jefferson Laboratory. Currently, calibrations are performed offline and require significant time and attention from experts. This work would reduce the amount of data and the amount of time spent calibrating in an offline setting. The first use case involves the Central Drift Chamber (CDC) located inside the GlueX spectrometer in Hall D. We use a combination of environmental and experimental data, such as atmospheric pressure, gas temperature, and the flux of incident particles, as inputs to a sequential neural network (NN) to recommend a high-voltage setting and the corresponding calibration constants in order to maintain consistent gain and optimal resolution throughout the experiment. Utilizing AI in this manner represents an initial shift from offline calibration towards near-real-time calibrations performed at Jefferson Laboratory.
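
To make the setup concrete, the sketch below maps environmental inputs (atmospheric pressure, gas temperature, incident-particle flux) to a recommended high-voltage setting and a gain calibration constant with a small sequential network. The feature set, targets, and network size are illustrative assumptions, not the actual GlueX CDC model.

```python
# Hedged sketch of a sequential network recommending detector settings from
# environmental conditions. Inputs, targets, and scaling are toy assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Columns: atmospheric pressure, gas temperature, incident-particle flux.
conditions = rng.random((500, 3), dtype=np.float32)
# Targets: [recommended HV setting, gain calibration constant] (toy values).
targets = rng.random((500, 2), dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),  # HV setting and calibration constant
])
model.compile(optimizer="adam", loss="mse")
model.fit(conditions, targets, epochs=5, verbose=0)

# Given current chamber conditions, recommend settings that keep gain consistent.
current = np.array([[0.52, 0.61, 0.33]], dtype=np.float32)
hv_setting, calib_constant = model.predict(current)[0]
```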