Vadim Sokolov
Researcher at George Mason University
Publications - 70
Citations - 1906
Vadim Sokolov is an academic researcher at George Mason University. He has contributed to research topics including deep learning and stochastic gradient descent. He has an h-index of 15 and has co-authored 60 publications receiving 1,349 citations. Previous affiliations of Vadim Sokolov include Argonne National Laboratory and Northern Illinois University.
Papers
Posted Content
Solving large-scale 0-1 knapsack problems and its application to point cloud resampling
Duanshun Li, Jing Liu, Noseong Park, Dongeun Lee, Giridhar Kaushik Ramachandran, Ali Seyedmazloom, Kookjin Lee, Chen Feng, Vadim Sokolov, Rajesh Ganesan +9 more
TL;DR: This paper presents a deep-learning-based method for solving large-scale 0-1 knapsack problems in which the number of products is large and/or product values are not necessarily predetermined but are decided by an external value-assignment function during the optimization process.
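For context, the classical 0-1 knapsack problem the paper scales up can be stated in a few lines. Below is a minimal dynamic-programming sketch of the standard small-scale problem, not the paper's deep-learning method; the item values and capacity are illustrative.

```python
def knapsack(values, weights, capacity):
    """Classical 0-1 knapsack: maximize total value subject to a weight budget,
    each item used at most once. O(n * capacity) dynamic program."""
    dp = [0] * (capacity + 1)  # dp[c] = best value achievable with capacity c
    for v, w in zip(values, weights):
        # iterate capacities downward so each item is taken at most once
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# illustrative instance: picking items 2 and 3 yields the optimum
best = knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
```

The exact DP becomes infeasible when the number of products is large or values are assigned dynamically, which is the regime the paper's learning-based approach targets.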
Housing Market Forecasting using Home Showing Events
TL;DR: In this paper, the authors develop a housing demand index based on microscopic home-showing event data that can provide decision-making support for buyers and sellers at a very granular temporal and spatial scale.
Journal ArticleDOI
Quantum Bayes AI
TL;DR: This work provides a duality between classical and quantum probability for calculating posterior quantities of interest in statistical and machine learning problems, and illustrates the behaviour of quantum algorithms on two simple classification problems.
Deep Learning Gaussian Processes For Computer Models with Heteroskedastic and High-Dimensional Outputs
TL;DR: Deep Learning Gaussian Processes (DL-GP) are proposed as a methodology for analyzing (approximating) computer models that produce heteroskedastic and high-dimensional output.
Posted Content
Merging Two Cultures: Deep and Statistical Learning
TL;DR: In this paper, a general framework for machine learning arises that first generates nonlinear features (a.k.a. factors) via sparse regularization and stochastic gradient optimisation, and then uses a probabilistic output layer for predictive uncertainty.
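The two-stage idea in that summary can be sketched as follows: a hidden layer generates nonlinear features, trained by per-sample stochastic gradient descent with an L1 (sparsity-inducing) penalty, feeding a logistic output layer that returns predictive probabilities. This is a toy illustration under assumed data and hyperparameters, not the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy binary classification data: label depends nonlinearly on two inputs
X = rng.normal(size=(200, 5))
y = (X[:, 0] - X[:, 1] > 0).astype(float)

# stage 1: nonlinear feature map (one tanh hidden layer)
W, b = rng.normal(scale=0.1, size=(5, 8)), np.zeros(8)
# stage 2: probabilistic (logistic) output layer
w_out, b_out = np.zeros(8), 0.0

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr, lam = 0.1, 1e-3  # SGD step size and L1 penalty weight

for epoch in range(100):
    for i in rng.permutation(len(X)):
        h = np.tanh(X[i] @ W + b)           # nonlinear features
        p = sigmoid(h @ w_out + b_out)      # predictive probability
        g = p - y[i]                        # gradient of the log-loss
        # L1 subgradient encourages sparse weights in both stages
        w_out -= lr * (g * h + lam * np.sign(w_out))
        b_out -= lr * g
        gh = g * w_out * (1.0 - h**2)       # backprop through tanh
        W -= lr * (np.outer(X[i], gh) + lam * np.sign(W))
        b -= lr * gh

probs = sigmoid(np.tanh(X @ W + b) @ w_out + b_out)
acc = ((probs > 0.5) == y).mean()
```

The probabilistic output layer is what supplies predictive uncertainty: `probs` are calibrated class probabilities rather than hard labels.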