Ali Nouri
Researcher at Shahrekord University of Medical Sciences
Publications - 65
Citations - 1215
Ali Nouri is an academic researcher from Shahrekord University of Medical Sciences. The author has contributed to research in the topics of Medicine and Chemistry. The author has an h-index of 15, co-authored 45 publications receiving 1001 citations. Previous affiliations of Ali Nouri include Islamic Azad University and Rutgers University.
Papers
Proceedings Article
A Bayesian sampling approach to exploration in reinforcement learning
TL;DR: In this paper, the authors present a modular approach to reinforcement learning that uses a Bayesian representation of the uncertainty over models, and drive exploration by sampling multiple models from the posterior and selecting actions optimistically.
Posted Content
A Bayesian Sampling Approach to Exploration in Reinforcement Learning
TL;DR: This work presents a modular approach to reinforcement learning that uses a Bayesian representation of the uncertainty over models and achieves near-optimal reward with high probability with a sample complexity that is low relative to the speed at which the posterior distribution converges during learning.
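The posterior-sampling idea summarized above can be illustrated with a small sketch: maintain Dirichlet posteriors over transition probabilities, sample several candidate models, solve each, and act greedily with respect to the per-state-action maximum over the sampled models, which drives optimistic exploration. This is a minimal illustration of the general technique, not the paper's exact algorithm; all function names and the uniform Dirichlet prior are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_models(counts, k):
    """Draw k transition models from Dirichlet posteriors.
    counts: array (S, A, S) of observed transition counts;
    the +1 below encodes an assumed uniform prior."""
    S, A, _ = counts.shape
    models = np.empty((k, S, A, S))
    for i in range(k):
        for s in range(S):
            for a in range(A):
                models[i, s, a] = rng.dirichlet(counts[s, a] + 1)
    return models

def optimistic_q(models, rewards, gamma=0.9, iters=200):
    """Value-iterate each sampled model, then take the per-(s, a)
    maximum over models; acting greedily on the result explores
    optimistically where the posterior is still uncertain."""
    k, S, A, _ = models.shape
    qs = np.zeros((k, S, A))
    for i in range(k):
        q = np.zeros((S, A))
        for _ in range(iters):
            v = q.max(axis=1)               # state values under model i
            q = rewards + gamma * models[i] @ v
        qs[i] = q
    return qs.max(axis=0)
```

With few observed transitions the Dirichlet posteriors are wide, so some sampled model usually assigns high value to under-visited states; as counts accumulate, the sampled models agree and the policy converges to exploitation.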
Journal ArticleDOI
Wave propagation of embedded viscoelastic FG-CNT-reinforced sandwich plates integrated with sensor and actuator based on refined zigzag theory
TL;DR: In this paper, a piezoelectric sandwich plate embedded in an orthotropic visco-Pasternak medium is modeled, and a proportional-derivative (PD) controller is employed to control the phase velocity of the structure.
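The proportional-derivative control law mentioned in the summary has a standard generic form, sketched below. The gains and signature are illustrative assumptions, not values from the paper.

```python
def pd_control(error, prev_error, kp, kd, dt):
    """Generic PD law: u = kp * e + kd * de/dt, with the derivative
    approximated by a backward difference over one time step dt."""
    return kp * error + kd * (error - prev_error) / dt
```

In the paper's setting the error would be the deviation of the measured response from its target, with the actuator voltage as the control input.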
Journal ArticleDOI
Learning and planning in environments with delayed feedback
TL;DR: An algorithm is presented, Model Based Simulation, for planning in Markovian environments with constant observation and reward delays and model-based reinforcement learning is used to extend this approach to the learning setting in both finite and continuous environments.
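The core idea of planning under constant observation delay can be sketched in a few lines: roll a learned model forward through the actions that have been issued but whose outcomes have not yet been observed, then apply the undelayed policy at the predicted current state. This is a minimal sketch under a deterministic-model assumption; the function and argument names are illustrative, not the paper's notation.

```python
def plan_with_delay(model, last_state, pending_actions, policy):
    """Simulate forward through the d pending (not yet observed)
    actions using the learned model, then act as if undelayed.
    model(s, a) -> predicted next state (assumed deterministic here)."""
    s = last_state
    for a in pending_actions:
        s = model(s, a)
    return policy(s)
```

For stochastic environments the same idea applies with expected or sampled next states in place of the deterministic prediction.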
Proceedings Article
Multi-resolution Exploration in Continuous Spaces
Ali Nouri, Michael L. Littman +1 more
TL;DR: This work proposes a new methodology for representing uncertainty in continuous-state control problems by using a hierarchical mapping to identify regions of the state space that would benefit from additional samples and demonstrates MRE's broad utility by using it to speed up learning in a prototypical model-based and value-based reinforcement-learning method.
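The hierarchical mapping described above can be sketched as a variable-resolution partition: a cell splits once it accumulates enough samples, so well-visited regions of the state space get finer resolution, and a local "knownness" score identifies regions that would benefit from additional samples. The sketch below is a 1-D toy under assumed names and a simplified knownness formula, not the paper's exact construction.

```python
class MRENode:
    """One cell of a variable-resolution partition of [lo, hi).
    A cell splits at the midpoint once it holds `cap` samples,
    redistributing them to its two children."""
    def __init__(self, lo, hi, cap=4):
        self.lo, self.hi, self.cap = lo, hi, cap
        self.points, self.children = [], None

    def add(self, x):
        if self.children:
            self.children[x >= (self.lo + self.hi) / 2].add(x)
            return
        self.points.append(x)
        if len(self.points) >= self.cap:
            mid = (self.lo + self.hi) / 2
            self.children = (MRENode(self.lo, mid, self.cap),
                             MRENode(mid, self.hi, self.cap))
            for p in self.points:
                self.children[p >= mid].add(p)
            self.points = []

    def knownness(self, x):
        """Local sample density proxy in [0, 1]; low values flag
        regions where exploration should add samples."""
        if self.children:
            return self.children[x >= (self.lo + self.hi) / 2].knownness(x)
        return min(1.0, len(self.points) / self.cap)
```

A learner can then add an exploration bonus wherever `knownness` is low, which is how the hierarchical map speeds up both model-based and value-based methods.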