Offline RL Simulation
Need to simulate an RL agent on logged states.
The policy is known, but there is no feedback mechanism: no live environment to return next states or rewards.
Option 1 - Learn the world
- Can learn a world model from the logged states and use it to generate new states
as the agent acts (see the sketch after this list).
- Pros - Easy.
- Cons - Hard to reason about, prone to bias. The model can veer off into unseen territory,
potentially making it useless in the real world.
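A minimal sketch of what Option 1 could look like, assuming logged transitions are available as (state, action, next_state) tensors; the class and function names here are placeholders, not from the referenced papers. The rollout loop makes the compounding-error problem visible: the agent never touches real data after the first step.

```python
import torch
import torch.nn as nn

class DynamicsModel(nn.Module):
    """One-step world model: predicts next state from (state, action)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def fit(model, states, actions, next_states, epochs=50, lr=1e-3):
    # Supervised regression on logged transitions.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        loss = nn.functional.mse_loss(model(states, actions), next_states)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def rollout(model, policy, start_state, horizon=100):
    # The agent only ever sees model outputs here, so errors compound:
    # this is exactly the "veer off into unseen territory" failure mode.
    state, trajectory = start_state, []
    with torch.no_grad():
        for _ in range(horizon):
            action = policy(state)
            state = model(state, action)
            trajectory.append(state)
    return trajectory
```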
Option 2 - Cluster the observations, use the nearest one
- Learn a state abstraction so the logged observations can be clustered and looked up efficiently;
the agent then receives the real logged state closest to the state it would reach in theory (sketch after this list).
- Pros - Model still uses logged observations, no stray behaviour.
- Cons - Higher chance of behaviour that doesn't correspond to what would happen online (I think).
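A minimal sketch of Option 2, assuming a learned encoder is already available (e.g. one trained along the lines of HOMER) and that logged observations fit in memory; `LoggedStateSimulator`, `encoder`, and `predicted_next` are placeholder names, not from the referenced papers.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class LoggedStateSimulator:
    def __init__(self, logged_states, encoder):
        # logged_states: (N, obs_dim) array of raw logged observations.
        # encoder: maps observations to their abstract/latent representation.
        self.logged_states = logged_states
        self.encoder = encoder
        embeddings = encoder(logged_states)          # (N, latent_dim)
        self.index = NearestNeighbors(n_neighbors=1).fit(embeddings)

    def snap(self, predicted_next):
        """Return the real logged state closest (in latent space) to a
        predicted/theoretical next state."""
        query = self.encoder(predicted_next[None, :])
        _, idx = self.index.kneighbors(query)
        return self.logged_states[idx[0, 0]]
```

Because every state handed back to the agent is a genuine logged observation, the model can't stray into states that never occurred, which is the whole appeal of this option.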
Bib
Currently exploring offline RL for work, it's pretty interesting.
Models for Option 2 involve some crazy math: learning "kinematic inseparability" to build a state encoder
for clustering. Learning a representation of states in a general RL problem is hard because it is inextricably
tied to exploration.
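A very rough sketch of the contrastive flavour of these methods, not the HOMER algorithm itself: a classifier learns to tell real (obs, action, next_obs) transitions from ones where next_obs has been shuffled within the batch, and the encoder it applies to next_obs becomes the state abstraction used for clustering. All module and function names are my own placeholders.

```python
import torch
import torch.nn as nn

class AbstractionLearner(nn.Module):
    def __init__(self, obs_dim, action_dim, latent_dim=32):
        super().__init__()
        # Encoder whose output will be used as the state abstraction.
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        # Classifier: real transition vs. shuffled transition.
        self.classifier = nn.Sequential(
            nn.Linear(obs_dim + action_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, obs, action, next_obs):
        z = self.encoder(next_obs)
        return self.classifier(torch.cat([obs, action, z], dim=-1))

def train_step(model, opt, obs, action, next_obs):
    # Real transitions get label 1; shuffling next_obs within the batch
    # produces "fake" transitions with label 0.
    fake_next = next_obs[torch.randperm(next_obs.shape[0])]
    logits = torch.cat([model(obs, action, next_obs),
                        model(obs, action, fake_next)])
    labels = torch.cat([torch.ones(obs.shape[0], 1),
                        torch.zeros(obs.shape[0], 1)])
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```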
The papers below are a really good read, and also a good intro to RL
research for newbies.
[1] MSFT offline-rl-sim
[2] HOMER