Asymptotic Bayes analysis for the finite-horizon one-armed-bandit problem

Citation:

Burnetas, A.N. & Katehakis, M.N., 2003. Asymptotic Bayes analysis for the finite-horizon one-armed-bandit problem. Probability in the Engineering and Informational Sciences, 17, pp. 53–82.

Abstract:

The multiarmed-bandit problem is often taken as a basic model for the trade-off between the exploration and exploitation required for efficient optimization under uncertainty. In this article, we study the situation in which the unknown performance of a new bandit is to be evaluated and compared with that of a known one over a finite horizon. We assume that the bandits represent random variables with distributions from the one-parameter exponential family. When the objective is to maximize the Bayes expected sum of outcomes over a finite horizon, it is shown that optimal policies tend to simple limits when the length of the horizon is large.
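To make the setup concrete, the following is a minimal sketch (not taken from the paper) of the finite-horizon one-armed-bandit problem it studies, specialized to Bernoulli rewards with a Beta prior, one member of the one-parameter exponential family. The unknown arm pays 1 with probability theta, where theta has a Beta(a0, b0) prior; the known arm pays lam per pull. Because this is a one-armed bandit, once it is optimal to switch to the known arm it is optimal to stay with it, so the known arm's continuation value is simply lam times the remaining horizon. The Bayes-optimal value is computed by backward induction; the parameter names lam, a0, b0 and the function bayes_value are illustrative choices, not the paper's notation.

from typing import List

def bayes_value(N: int, lam: float = 0.5, a0: float = 1.0, b0: float = 1.0) -> float:
    """Bayes-optimal expected total reward over a horizon of N pulls.

    State: (s, f) = observed successes/failures on the unknown arm.
    V(s, f, n) = max( lam * n,                       # retire to the known arm
                      p * (1 + V(s+1, f, n-1))       # pull the unknown arm once
                      + (1 - p) * V(s, f+1, n-1) )   # then continue optimally
    where p = (a0 + s) / (a0 + s + b0 + f) is the posterior mean of theta.
    """
    # After k pulls of the unknown arm, reachable states have s + f = k,
    # so each backward-induction layer is indexed by s alone.
    V_next: List[float] = [0.0] * (N + 1)  # layer k = N, i.e. n = 0: value 0
    for k in range(N - 1, -1, -1):         # k pulls taken so far, n = N - k left
        n = N - k
        V = [0.0] * (k + 1)
        for s in range(k + 1):
            f = k - s
            p = (a0 + s) / (a0 + s + b0 + f)   # posterior mean of theta
            retire = lam * n                    # stay with the known arm forever
            explore = p * (1.0 + V_next[s + 1]) + (1.0 - p) * V_next[s]
            V[s] = max(retire, explore)
        V_next = V
    return V_next[0]

if __name__ == "__main__":
    for n in (10, 100, 1000):
        print(n, bayes_value(n) / n)

With lam = 0.5 and a uniform prior, the printed per-pull averages increase with the horizon toward E[max(theta, lam)] = 0.625, reflecting the value of exploration; the long-horizon simplification of the optimal policy that this illustrates is the kind of limiting behavior the paper analyzes.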

Notes:

Cited by 3.
