My name is Adrián Prados and I am doing my PhD in imitation learning focused on manipulation tasks.

We are testing the "imitation" framework (https://github.com/HumanCompatibleAI/imitation), associated with the paper "imitation: Clean Imitation Learning Implementation", which provides a clean PyTorch implementation and a quick way to set up imitation learning tasks on top of Gym and Stable Baselines 3. The documentation explains the logic of the framework well, but it does not clearly describe how to add demonstration data collected from humans. We have tried to contact the developers through GitHub but have not received a response. Does anyone know how to perform this process? It seems to us a highly interesting framework for working on imitation learning.
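In case it helps others with the same question: as far as I can tell, the imitation library represents a demonstration episode as an `imitation.data.types.Trajectory` dataclass, whose `obs` array has one more entry than `acts` (it also stores the observation after the last action). Below is a minimal sketch of converting human-recorded numpy logs into that layout. The `Trajectory` stand-in here only mirrors the real dataclass for illustration (in real code you would import it from `imitation.data.types`), and `human_log_to_trajectory` is a hypothetical helper name, not part of the library:

```python
import numpy as np
from dataclasses import dataclass

# Minimal stand-in mirroring imitation.data.types.Trajectory for illustration;
# in practice: from imitation.data.types import Trajectory
@dataclass(frozen=True)
class Trajectory:
    obs: np.ndarray   # shape (T + 1, *obs_shape): includes the final observation
    acts: np.ndarray  # shape (T, *act_shape)
    infos: object     # optional per-step info dicts, or None
    terminal: bool    # True if the episode genuinely ended (not truncated)

def human_log_to_trajectory(observations, actions):
    """Convert one recorded human episode into the Trajectory layout.

    `observations` must have exactly one more entry than `actions`,
    because the observation reached after the last action is stored too.
    """
    obs = np.asarray(observations, dtype=np.float32)
    acts = np.asarray(actions, dtype=np.float32)
    assert len(obs) == len(acts) + 1, "need the final observation as well"
    return Trajectory(obs=obs, acts=acts, infos=None, terminal=True)

# Example: a 3-step human episode with 4-dim observations and 1-dim actions.
traj = human_log_to_trajectory(np.zeros((4, 4)), np.zeros((3, 1)))
```

A list of such trajectories can then, if I read the docs correctly, be passed via the `demonstrations=` argument of algorithms such as `imitation.algorithms.bc.BC`.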

Additionally, we have some doubts about adding experts. The documentation clearly shows how to generate experts from reinforcement learning algorithms based on Stable Baselines 3 (for example, an expert trained with PPO), but it does not discuss how one could add one's own expert, or even use the human's own policy as the expert policy. We are also open to working with other frameworks that simplify the imitation learning process, or even to implementing our own imitation learning algorithms. Any help would be very useful for finding a better path forward.
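One workaround we have considered, which sidesteps the SB3-expert question entirely: roll out a hand-coded (or human-driven) policy with a plain environment loop yourself, and store the result in the usual demonstration layout (observations have one more entry than actions). The sketch below uses a toy stub environment and a scripted expert purely for illustration; `ToyEnv`, `scripted_expert`, and `rollout_episode` are all assumptions standing in for your real environment, controller, or replayed human inputs:

```python
import numpy as np

class ToyEnv:
    """Stub with a gym-like reset/step API; stands in for your real environment."""
    def __init__(self):
        self.state = 0.0

    def reset(self):
        self.state = 0.0
        return np.array([self.state], dtype=np.float32)

    def step(self, action):
        self.state += float(action)
        obs = np.array([self.state], dtype=np.float32)
        done = self.state >= 3.0
        return obs, 0.0, done, {}

def scripted_expert(obs):
    """Hand-coded 'expert': always step +1. Replace with your own controller,
    a teleoperation interface, or a replay of recorded human actions."""
    return np.float32(1.0)

def rollout_episode(env, policy, max_steps=100):
    """Roll out one episode and return (obs, acts) arrays, where obs keeps
    one more entry than acts -- the layout behaviour-cloning code expects."""
    obs_list = [env.reset()]
    acts = []
    for _ in range(max_steps):
        action = policy(obs_list[-1])
        next_obs, _, done, _ = env.step(action)
        obs_list.append(next_obs)
        acts.append(action)
        if done:
            break
    return np.stack(obs_list), np.stack(acts)

obs, acts = rollout_episode(ToyEnv(), scripted_expert)
```

The resulting arrays can be wrapped into whatever trajectory/transition container the downstream library requires, so the "expert" never has to be an SB3 model at all.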

Thank you very much for your help.
