GenSim: Generating Robotic Simulation Tasks via Large Language Models

Lirui Wang1, Yiyang Ling*2,3, Zhecheng Yuan*4,
Mohit Shridhar5, Chen Bao6, Yuzhe Qin3, Bailin Wang2, Huazhe Xu4, Xiaolong Wang3
MIT CSAIL1, Shanghai Jiao Tong University2, UCSD3, Tsinghua University4, UW5, CMU6

Workshop on Language Grounding and Robot Learning (Workshop Best Paper), CoRL 2023
International Conference on Learning Representations (Spotlight), ICLR 2024


GenSim uses LLMs to automatically generate diverse robotic simulation tasks at scale.


We train a single multi-task policy on 100 GPT-generated simulation tasks; it generalizes zero-shot to new tasks and adapts to 10 real-world tasks.

Abstract

Collecting large amounts of real-world interaction data to train general robotic policies is often prohibitively expensive, thus motivating the use of simulation data. However, existing methods for data generation have generally focused on scene-level diversity (e.g., object instances and poses) rather than task-level diversity, due to the human effort required to come up with and verify novel tasks. This has made it challenging for policies trained on simulation data to demonstrate significant task-level generalization.

In this paper, we propose to automatically generate rich simulation environments and expert demonstrations by exploiting the grounding and coding abilities of large language models (LLMs). Our approach, dubbed GenSim, has two modes: goal-directed generation, wherein a target task is given to the LLM and the LLM proposes a task curriculum to solve the target task, and exploratory generation, wherein the LLM bootstraps from previous tasks and iteratively proposes novel tasks that would be helpful in solving more complex tasks.
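At a high level, both modes share the same propose-verify-accumulate loop over a growing task library; only the prompting differs. Below is a minimal sketch of that loop, assuming a hypothetical `llm.generate` interface and a `verify` hook that runs generated code in simulation; neither is GenSim's actual API.

```python
# Minimal sketch of GenSim's two generation modes. The `llm` interface,
# prompts, and verification hook are illustrative assumptions.
from typing import List


def propose_task(llm, prompt: str) -> dict:
    """Ask the LLM for a new task: a name, a description, and simulation code."""
    # Assumed to return a dict with "name", "description", and "code" keys.
    return llm.generate(prompt)


def verify(task: dict) -> bool:
    """Run the generated code in simulation and check that a scripted expert
    can collect successful demonstrations (hypothetical placeholder)."""
    raise NotImplementedError


def goal_directed_generation(llm, target_task: str, library: List[dict]) -> None:
    # Mode 1: given a target task, the LLM proposes a curriculum of
    # intermediate tasks; each verified task joins the library.
    subtask_prompts = llm.generate(f"Propose a task curriculum for: {target_task}")
    for prompt in subtask_prompts:
        task = propose_task(llm, prompt)
        if verify(task):
            library.append(task)


def exploratory_generation(llm, library: List[dict], n_rounds: int) -> None:
    # Mode 2: the LLM bootstraps from previous tasks in the library and
    # iteratively proposes novel tasks.
    for _ in range(n_rounds):
        examples = "\n\n".join(t["code"] for t in library[-5:])
        task = propose_task(
            llm, f"Here are existing tasks:\n{examples}\nPropose a novel task."
        )
        if verify(task):
            library.append(task)
```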

We use GPT-4 to expand the existing benchmark by ten times to over 100 tasks, on which we conduct supervised finetuning and evaluate several LLMs, including finetuned GPTs and Code Llama, on code generation for robotic simulation tasks. Furthermore, we observe that LLM-generated simulation programs can significantly enhance task-level generalization when used for multitask policy training. We further find that, with minimal sim-to-real adaptation, multitask policies pretrained on GPT-4-generated simulation tasks exhibit stronger transfer to unseen long-horizon tasks in the real world and outperform baselines by 25%.
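To make the finetuning step concrete, here is a sketch of how verified (description, code) pairs from the task library could be converted into supervised finetuning examples; the file names, JSON fields, and prompt wording are assumptions, not the released data format.

```python
# Sketch: turn verified tasks from the library into instruction-tuning pairs.
# File names and field names are illustrative assumptions.
import json


def to_finetuning_example(task: dict) -> dict:
    """Map one task to a (prompt, completion) pair for supervised finetuning."""
    return {
        "prompt": (
            "Write the simulation code for the following robotic task.\n"
            f"Task: {task['name']}\n"
            f"Description: {task['description']}\n"
        ),
        "completion": task["code"],
    }


with open("task_library.json") as f:
    library = json.load(f)

with open("finetune_data.jsonl", "w") as f:
    for task in library:
        f.write(json.dumps(to_finetuning_example(task)) + "\n")
```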

Generated Task Library

[Videos: task instances generated by GPT-4 and by a finetuned Code Llama 13B Instruct model.]
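Generated tasks follow the Ravens/CLIPort task format that GenSim builds on: a short class with a language template and a `reset` method that spawns objects and registers goals. The sketch below shows the general shape; the class name, asset paths, and exact `add_goal` signature are illustrative, not copied from the repository.

```python
# Illustrative shape of a generated task in the Ravens/CLIPort style.
# Class name, asset paths, and the add_goal signature are assumptions.
import numpy as np
from cliport.tasks.task import Task  # base class in the CLIPort codebase


class PlaceBlocksInBowl(Task):
    """Example generated task: place colored blocks into a bowl."""

    def __init__(self):
        super().__init__()
        self.max_steps = 10
        self.lang_template = "put the {color} block in the bowl"

    def reset(self, env):
        super().reset(env)
        # Spawn a bowl at a random collision-free pose.
        bowl_pose = self.get_random_pose(env, (0.12, 0.12, 0))
        env.add_object("bowl/bowl.urdf", bowl_pose, category="fixed")

        # Spawn blocks and register one goal per block, each paired with
        # a language instruction rendered from the template.
        for color in ["red", "green", "blue"]:
            block_pose = self.get_random_pose(env, (0.04, 0.04, 0.04))
            block_id = env.add_object("block/block.urdf", block_pose)
            self.add_goal(
                objs=[block_id],
                matches=np.ones((1, 1)),
                targ_poses=[bowl_pose],
                replace=False,
                rotations=True,
                metric="pose",
                params=None,
                step_max_reward=1 / 3,
                language_goal=self.lang_template.format(color=color),
            )
```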



We strongly recommend checking out the Gradio demo to generate your own tasks live!

GenSim

Overview

[Video: overview of the GenSim pipeline.]

Tasks

[Videos: rollouts of generated simulation tasks.]

Real-Robot Experiments (4x speed)

[Videos: side-by-side real-robot task instances from the CLIPort baseline and the GenSim-trained policy.]