Dynamic Experience Replay

Jieliang Luo, Hui Li

Conference on Robot Learning
2019

Video: Dynamic Experience Replay (2:14 min.)

Abstract

We present a novel technique called Dynamic Experience Replay (DER) that allows Reinforcement Learning (RL) algorithms to use experience replay samples not only from human demonstrations but also from successful transitions generated by RL agents during training, thereby improving training efficiency. It can be combined with an arbitrary off-policy RL algorithm, such as DDPG or DQN, and their distributed versions. We build upon Ape-X DDPG and demonstrate our approach on robotic tight-fitting joint assembly tasks, based on force/torque and Cartesian pose observations. In particular, we run experiments on two different tasks: peg-in-hole and lap-joint. In each case, we compare different replay buffer structures and how DER affects them. Our ablation studies show that Dynamic Experience Replay is a crucial ingredient that either substantially shortens the training time in these challenging environments or solves tasks that the vanilla Ape-X DDPG cannot solve. We also show that our policies, learned purely in simulation, can be deployed successfully on the real robot.
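To make the idea concrete, the sketch below shows one way a replay buffer could mix human demonstrations with successful transitions collected by the agent during training. The class name, the demo_fraction parameter, and the eviction policy are illustrative assumptions for this sketch, not the authors' implementation or the Ape-X DDPG buffer structure used in the paper.

```python
import random
from collections import deque

# A minimal sketch of a replay buffer that draws mini-batches from two sources:
# (1) human demonstration transitions, kept permanently, and
# (2) transitions from successful episodes produced by the RL agent,
#     added dynamically during training.
# All names and default values here are assumptions for illustration.
class DynamicReplayBuffer:
    def __init__(self, capacity=100_000, demo_transitions=None, demo_fraction=0.25):
        # Demonstration transitions are never evicted.
        self.demos = list(demo_transitions or [])
        # Successful agent transitions are evicted oldest-first once full.
        self.agent_success = deque(maxlen=capacity)
        self.demo_fraction = demo_fraction

    def add_successful_episode(self, transitions):
        """Store transitions (obs, action, reward, next_obs, done) from an
        episode that reached the goal."""
        self.agent_success.extend(transitions)

    def sample(self, batch_size):
        """Draw a mini-batch mixing demonstrations and successful agent data."""
        n_demo = min(int(batch_size * self.demo_fraction), len(self.demos))
        n_agent = min(batch_size - n_demo, len(self.agent_success))
        batch = random.sample(self.demos, n_demo) if n_demo else []
        batch += random.sample(list(self.agent_success), n_agent) if n_agent else []
        return batch
```

In use, an off-policy learner such as DDPG would call sample() for each update, while the actor loop calls add_successful_episode() whenever an episode succeeds; the demo_fraction knob controls how strongly demonstrations are weighted in each mini-batch.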
