
[FEATURE] Add Unitree G1 Walking Demo #359

Open · wants to merge 15 commits into base: main

Conversation

@0nhc commented Dec 27, 2024

I think it would be cool to have another humanoid RL demo, so I added an RL demo for Unitree G1 walking in this PR. Here are my modifications:

  • Added Unitree G1's URDF under assets/urdf, with its original BSD-3 license.

  • Added 3 Python scripts (g1_env.py, g1_train.py, g1_eval.py) under examples/locomotion.

Demo video after training for 69.5 seconds on my PC:

file_v3_00i0_64cebb24-16ec-4d67-970d-02ea2dd42dcg.mp4

@zhouxian (Collaborator)

Cool! @ziyanx02 can you review this?

@0nhc (Author) commented Dec 27, 2024

Actually, I borrowed some reward functions from https://github.com/unitreerobotics/unitree_rl_gym, which is under a BSD-3 license, so I added another commit to credit the source in each function's docstring.
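For context, reward functions in unitree_rl_gym follow the legged_gym pattern of small per-term functions combined with scales. Below is a minimal NumPy sketch of one such term, a linear-velocity tracking reward; the function name, signature, and `sigma` value here are illustrative, not the exact code borrowed in this PR:

```python
import numpy as np

def reward_tracking_lin_vel(commands, base_lin_vel, sigma=0.25):
    """Exponential reward for tracking the commanded planar base velocity.

    Sketch in the style of legged_gym-derived codebases such as
    unitree_rl_gym; the actual scales and sigmas there may differ.
    commands:     (num_envs, 3) commanded [vx, vy, yaw_rate]
    base_lin_vel: (num_envs, 3) measured base linear velocity
    """
    # Squared error between commanded and actual x/y base velocity
    lin_vel_error = np.sum((commands[:, :2] - base_lin_vel[:, :2]) ** 2, axis=1)
    # Perfect tracking -> exp(0) = 1.0; large errors decay toward 0
    return np.exp(-lin_vel_error / sigma)

cmd = np.array([[0.5, 0.0, 0.0]])
vel = np.array([[0.5, 0.0, 0.0]])
print(reward_tracking_lin_vel(cmd, vel))  # → [1.]
```

Each such term is typically multiplied by a tuned scale and summed into the total step reward.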

@ziyanx02 (Collaborator)

Looks impressive!
I have two questions before merging:

  • Have you tried sim-to-sim transfer with the policy you trained?
  • The motion appears slightly different from Unitree RL Gym. Could you try to make the upper body more stable?

@ziyanx02 ziyanx02 changed the title Add Unitree G1 Walking Demo [Feature] Add Unitree G1 Walking Demo Dec 27, 2024
@ziyanx02 ziyanx02 changed the title [Feature] Add Unitree G1 Walking Demo [FEATURE] Add Unitree G1 Walking Demo Dec 27, 2024
@0nhc (Author) commented Dec 27, 2024

Thanks!

Regarding the two questions:

  • I haven't tried Sim2Sim yet; I only got this demo working a few hours ago.
  • I have added more reward functions, following unitree_rl_gym, to make the upper body more stable, and they do help. However, the original unitree_rl_gym code trains for 10,000 iterations, while this demo trains for only 100, so I think training time is also an important factor.

I have pushed the latest code with the additional reward functions.
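Upper-body stabilization terms in legged_gym-style codebases are usually penalties on base orientation. A minimal sketch, assuming a projected-gravity observation (the function name and shapes are hypothetical, not this PR's exact code):

```python
import numpy as np

def reward_orientation(projected_gravity):
    """Penalize non-flat base orientation.

    Sketch of the kind of penalty used to keep the torso upright in
    legged_gym-style codebases; the actual terms in this PR may differ.
    projected_gravity: (num_envs, 3) gravity vector in the base frame.
    """
    # When the base is perfectly upright, gravity in the base frame is
    # [0, 0, -1], so the x/y components are zero and the penalty vanishes.
    return -np.sum(projected_gravity[:, :2] ** 2, axis=1)

upright = np.array([[0.0, 0.0, -1.0]])
tilted = np.array([[0.3, 0.0, -0.95]])
print(reward_orientation(upright), reward_orientation(tilted))
```

With a negative weight like this, the optimizer trades some velocity-tracking reward for a steadier torso, which is why adding such terms visibly stabilizes the upper body.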

@0nhc (Author) commented Dec 27, 2024

Latest video after training for 64 seconds:

file_v3_00i0_34a51ae6-ceb3-4127-875a-1896355191ag.mp4

@ziyanx02 (Collaborator)

The training code does not include domain randomization or observation noise, which might explain why it requires much less training time. However, the current results show no sign that deployment would succeed. Improving the motion and trying sim-to-sim transfer, or directly deploying to the real world, would be better.
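The two missing ingredients mentioned above can be sketched as follows. This is a minimal illustration of the idea only: the parameter names and ranges are hypothetical, not values from this PR or from unitree_rl_gym:

```python
import numpy as np

def randomize_domain(rng):
    """Sample per-episode physics parameters (illustrative ranges only).

    Randomizing physics each episode forces the policy to be robust to
    model mismatch, which is what makes Sim2Sim/Sim2Real transfer possible.
    """
    return {
        "friction": rng.uniform(0.5, 1.25),          # ground friction scale
        "base_mass_offset": rng.uniform(-1.0, 1.0),  # kg added to the torso
        "kp_scale": rng.uniform(0.9, 1.1),           # motor stiffness scale
    }

def add_observation_noise(obs, rng, std=0.01):
    """Corrupt observations with Gaussian noise before feeding the policy,
    mimicking sensor noise the real robot (or another simulator) will have."""
    return obs + rng.normal(0.0, std, size=obs.shape)

rng = np.random.default_rng(0)
params = randomize_domain(rng)       # resample at every episode reset
noisy = add_observation_noise(np.zeros(48), rng)  # apply at every step
```

Without these, the policy can overfit to one simulator's exact dynamics and noiseless observations, training fast but transferring poorly.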

@0nhc (Author) commented Dec 28, 2024

I totally understand your concern. Since it's hard for me to get access to G1 hardware, the most feasible way for me to verify the policy is Sim2Sim. I'll also see how to make it walk better. Thanks.

@0nhc 0nhc closed this Dec 28, 2024
@erwincoumans commented Jan 10, 2025

I wonder why this was closed? I have the G1 hardware and am willing to try it out after sim-to-sim validation and additional randomization.
@0nhc, do you mind re-opening? Then someone can report sim-to-sim results with Isaac Gym or MuJoCo (or another simulator) as a first step.

@0nhc (Author) commented Jan 10, 2025

Hi @erwincoumans! I closed this PR because I recently returned to university and was worried I wouldn't have enough time to continue working on it. However, I’ve already added domain randomization and observation noise to my forked repo, as well as another MuJoCo environment to test the trained policy. While the policy works well in Genesis after training, it crashes in MuJoCo for reasons I haven’t been able to identify. The Sim2Sim transfer isn’t functioning as expected.

I’m happy to reopen the issue, and if you’re able to test this pipeline on a real robot, that would be amazing!
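A frequent cause of a policy that works in the training simulator but immediately crashes in another one is mismatched conventions (joint ordering, default joint angles, action scaling). A hypothetical pre-flight check along these lines, with made-up joint names, could rule out the simplest failure mode before blaming the policy itself:

```python
import numpy as np

def check_sim2sim_conventions(src_joints, dst_joints,
                              src_defaults, dst_defaults, tol=1e-3):
    """Verify joint ordering and default poses match between two simulators.

    Hypothetical helper: mismatched joint order or default angles between
    the training simulator and the target one (e.g. Genesis vs. MuJoCo)
    makes the policy's actions meaningless and the robot fall instantly.
    """
    if src_joints != dst_joints:
        raise ValueError(f"Joint order differs: {src_joints} vs {dst_joints}")
    diff = np.abs(np.asarray(src_defaults) - np.asarray(dst_defaults))
    if np.any(diff > tol):
        bad = [src_joints[i] for i in np.where(diff > tol)[0]]
        raise ValueError(f"Default angles differ for joints: {bad}")
    return True

# Example with made-up joint names and angles:
check_sim2sim_conventions(["left_hip", "left_knee"],
                          ["left_hip", "left_knee"],
                          [0.1, -0.3], [0.1, -0.3])
```

PD gains and control frequency are worth the same kind of side-by-side comparison before concluding the transfer itself failed.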

@0nhc 0nhc reopened this Jan 10, 2025
@miguelalonsojr

I developed a similar version and can open a PR if anyone is interested. It uses the full-DoF G1; I also did the same with the H1. Here's a video: https://bsky.app/profile/miguelalonsojr.bsky.social/post/3lez5qcpe5k26

@0nhc (Author) commented Jan 13, 2025

Hi @miguelalonsojr, I think we share the same problem: it is not enough to make the policy work only in Genesis. We have to show it can transfer to another simulator (Sim2Sim) or to the real robot (Sim2Real), which usually requires techniques such as domain randomization and observation noise.
