Causal Reinforcement Learning: An Instrumental Variable Approach
96 Pages
Posted: 25 Feb 2021
Date Written: February 25, 2021
Abstract
In the standard data analysis framework, data are first collected (once and for all) and then analyzed, and the data-generating process is typically assumed to be exogenous. This approach is natural when the data analyst has no influence on how the data are generated. The advancement of digital technology, however, has enabled firms to learn from data and make decisions at the same time. Because these decisions generate new data, the data analyst, whether a business manager or an algorithm, also becomes the data generator. In this article, we formulate the problem as a Markov decision process (MDP) and show that this interaction generates a new type of bias, which we call reinforcement bias, that exacerbates the endogeneity problem of static data analysis. When the data are independent and identically distributed, we embed the instrumental variable (IV) approach in the stochastic gradient descent algorithm to correct for the bias. For general MDP problems, we propose a class of IV-based reinforcement learning (RL) algorithms to correct for the bias. We establish asymptotic properties of these algorithms by casting them as two-timescale stochastic approximation (SA) procedures. Our formulation requires an unbounded state space and, more importantly, Markovian noise. Standard techniques in the RL and SA literature, which rely on the boundedness of the state space and a martingale-difference noise structure, therefore do not apply. We develop new techniques to establish finite-time risk bounds, finite-time bounds for trajectory stability, and the asymptotic distribution of a class of IV-RL algorithms.
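To make the i.i.d. case concrete, the sketch below embeds an IV moment condition in a stochastic gradient update for a linear model with endogenous regressors: each step uses the instrument z_t, rather than the regressor x_t, to weight the residual, so the update targets the root of E[z(x'theta - y)] = 0. This is a minimal illustration under assumed simulated data and an assumed step-size schedule, not the paper's implementation.

```python
import numpy as np

def iv_sgd(y, x, z, theta0, step=lambda t: 1.0 / (t + 100)):
    """IV-corrected stochastic approximation for y = x'theta + error.

    x may be endogenous (correlated with the error); z is assumed to be a
    valid instrument (uncorrelated with the error, correlated with x).
    The usual least-squares gradient x_t * (x_t'theta - y_t) is replaced by
    the IV moment z_t * (x_t'theta - y_t), whose mean is zero at the true theta.
    """
    theta = np.asarray(theta0, dtype=float)
    for t, (yt, xt, zt) in enumerate(zip(y, x, z)):
        residual = xt @ theta - yt                  # current model residual
        theta = theta - step(t) * zt * residual     # IV moment replaces the OLS gradient
    return theta

if __name__ == "__main__":
    # Hypothetical simulated data: regressors share the error term u with y,
    # so plain SGD on squared loss would be biased, while the IV update is not.
    rng = np.random.default_rng(0)
    n, theta_true = 200_000, np.array([1.0, -2.0])
    z = rng.normal(size=(n, 2))                          # instruments
    u = rng.normal(size=n)                               # structural error
    x = z + 0.8 * u[:, None] + rng.normal(size=(n, 2))   # endogenous regressors
    y = x @ theta_true + u
    print(iv_sgd(y, x, z, theta0=np.zeros(2)))           # should approach theta_true up to noise
```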
Keywords: Endogeneity, Markov Decision Process, Instrumental Variable, Reinforcement Bias, Reinforcement Learning, Q-Learning, Stochastic Approximation