
Separating value functions across time-scales. (arXiv:1902.01883v1 [cs.LG])


In many finite-horizon episodic reinforcement learning (RL) settings, it is
desirable to optimize for the undiscounted return – in settings like Atari, for
instance, the goal is to collect the most points while staying alive in the
long run. Yet it may be mathematically difficult (or even intractable) to
learn with this target. As such, temporal discounting is often applied to
optimize over a shorter effective planning horizon. This comes at the cost of
potentially biasing the optimization target away from the undiscounted goal. In
settings where this bias is unacceptable – where the system must optimize for
longer horizons at higher discount factors – the target of the value function
approximator may increase in variance, leading to difficulties in learning. We
present an extension of temporal difference (TD) learning, which we call
TD($\Delta$), that breaks down a value function into a series of components
based on the differences between value functions with smaller discount factors.
Separating a longer-horizon value function into these components has
useful properties for scalability and performance. We discuss these properties
and show theoretical and empirical improvements over standard TD learning in
certain settings.
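
To make the decomposition concrete, here is a minimal tabular sketch of the idea the abstract describes: components W_z approximate the differences V_{gamma_z} - V_{gamma_{z-1}} between value functions at an increasing sequence of discount factors, and the long-horizon value is their sum. The function name, update form, and hyperparameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def td_delta_update(W, s, r, s_next, gammas, alpha=0.1):
    """One tabular TD-style update for a delta decomposition of the value function.

    W       : array of shape (Z+1, n_states); W[0] ~ V_{gamma_0},
              W[z] ~ V_{gamma_z} - V_{gamma_{z-1}} for z >= 1 (assumed form)
    gammas  : increasing discount factors gamma_0 < ... < gamma_Z
    """
    # Component 0 is an ordinary TD(0) update at the shortest horizon.
    td0 = r + gammas[0] * W[0, s_next] - W[0, s]
    W[0, s] += alpha * td0

    for z in range(1, len(gammas)):
        # Value at the previous (smaller) discount, reconstructed from the
        # components already learned.
        v_prev_next = W[:z, s_next].sum()
        # Bootstrapped target for the difference V_{gamma_z} - V_{gamma_{z-1}}:
        # the reward cancels, and only the gap between discounts propagates.
        target = (gammas[z] - gammas[z - 1]) * v_prev_next + gammas[z] * W[z, s_next]
        W[z, s] += alpha * (target - W[z, s])

    # The long-horizon value estimate is the sum of all components.
    return W[:, s].sum()
```

Under these assumptions, each component bootstraps at its own (shorter) effective horizon, which is one way the separation could help with the variance of long-horizon targets.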

