#9281. Hierarchical Bayesian models of reinforcement learning: Introduction and comparison to alternative methods
Publication date: August 2026
Proposal available till: 29-05-2025
Total number of authors per manuscript: 4
Price: 3510 $
The journal's title is available only to authors who have already paid.
Journal’s subject area:
Applied Mathematics;
Psychology (all)
Places in the authors’ list:
1st place - vacant (for sale)
2nd place - vacant (for sale)
3rd place - vacant (for sale)
4th place - vacant (for sale)
Abstract:
Reinforcement learning models have been used extensively to capture learning and decision-making processes in humans and other organisms. One essential goal of these computational models is generalization to new sets of observations. However, extracting parameters that reliably predict out-of-sample data can be difficult. The use of prior distributions to regularize parameter estimates has been shown to help remedy this issue. While previous research has suggested that empirical priors estimated from a separate dataset improve predictive accuracy, this paper outlines an alternative method for deriving empirical priors: hierarchical Bayesian modeling. We provide a detailed introduction to this method and show that using hierarchical models to simultaneously extract and impose empirical priors leads to better out-of-sample prediction while being more data efficient.
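To make the hierarchical idea concrete, below is a minimal sketch of the kind of model the abstract describes: subject-level reinforcement-learning parameters (a learning rate and an inverse temperature) are drawn from group-level distributions, so the group simultaneously supplies and learns an empirical prior that shrinks each subject's estimates. This is an illustrative assumption-laden example, not the paper's code; the function names (`q_learning_loglik`, `hierarchical_log_posterior`) and the two-armed-bandit setup are hypothetical.

```python
# A minimal sketch (assumed, not the paper's implementation) of a
# hierarchical Bayesian Q-learning model: each subject's parameters are
# drawn from group-level distributions, which act as empirical priors.

import numpy as np
from scipy.special import expit   # inverse-logit, maps R -> (0, 1)
from scipy.stats import norm

def q_learning_loglik(alpha, beta, choices, rewards):
    """Log-likelihood of one subject's choices under a softmax Q-learner.

    alpha: learning rate in (0, 1); beta: inverse temperature > 0.
    choices: chosen arms (0 or 1); rewards: observed rewards, same length.
    """
    q = np.zeros(2)                  # Q-values for a two-armed bandit
    ll = 0.0
    for c, r in zip(choices, rewards):
        logits = beta * q
        log_p = logits - np.logaddexp(logits[0], logits[1])  # log softmax
        ll += log_p[c]
        q[c] += alpha * (r - q[c])   # delta-rule update
    return ll

def hierarchical_log_posterior(theta, data):
    """Unnormalized log posterior for the hierarchical model.

    theta packs group-level means/log-sds followed by per-subject
    parameters on an unconstrained scale (logit(alpha), log(beta)).
    data is a list of (choices, rewards) tuples, one per subject.
    """
    n = len(data)
    mu_a, log_sd_a, mu_b, log_sd_b = theta[:4]
    a_raw = theta[4:4 + n]           # per-subject logit(alpha)
    b_raw = theta[4 + n:4 + 2 * n]   # per-subject log(beta)

    lp = 0.0
    # Weakly informative hyperpriors on the group-level parameters.
    lp += norm.logpdf(mu_a, 0.0, 2.0) + norm.logpdf(log_sd_a, 0.0, 1.0)
    lp += norm.logpdf(mu_b, 0.0, 2.0) + norm.logpdf(log_sd_b, 0.0, 1.0)
    # Subject-level parameters are shrunk toward the group distribution:
    # the "simultaneously extract and impose empirical priors" step.
    lp += norm.logpdf(a_raw, mu_a, np.exp(log_sd_a)).sum()
    lp += norm.logpdf(b_raw, mu_b, np.exp(log_sd_b)).sum()
    for (choices, rewards), ar, br in zip(data, a_raw, b_raw):
        lp += q_learning_loglik(expit(ar), np.exp(br), choices, rewards)
    return lp

# Tiny synthetic check: two subjects, ten random trials each.
rng = np.random.default_rng(0)
data = [(rng.integers(0, 2, 10), rng.integers(0, 2, 10).astype(float))
        for _ in range(2)]
theta0 = np.zeros(4 + 2 * 2)
print(hierarchical_log_posterior(theta0, data))
```

In practice this log posterior would be sampled with MCMC (e.g., via Stan or PyMC) rather than evaluated at a point; the shrinkage of subject-level parameters toward the group mean is what produces the regularization and out-of-sample gains the abstract refers to.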
Keywords:
Bayesian statistics; Model comparison; Reinforcement learning
Contacts: