Berkeley Humanoid: A Research Platform for Learning-based Control
Qiayuan Liao, Bike Zhang, Xuanyu Huang, Xiaoyu Huang, Zhongyu Li, Koushil Sreenath
2024-08-01

Summary
This paper introduces Berkeley Humanoid, a new, affordable robot designed for research in learning-based control. It is built to deliver agile, reliable locomotion across a range of tasks while remaining easy to operate and maintain.
What's the problem?
Many existing humanoid robots used for research are either too expensive or too fragile, making them difficult to work with. Researchers need a reliable platform that can be tested and modified easily, without high costs or complex maintenance. Current robots also often struggle to adapt effectively to different environments and tasks.
What's the solution?
To address these issues, the authors developed Berkeley Humanoid, a lightweight, cost-effective robot built from custom, in-house-designed parts. The robot can be operated by a single person and handles a range of movements, such as walking over different terrains and hopping. Its controller is a simple reinforcement learning policy trained in simulation with light domain randomization, and the deliberately low mechanical complexity keeps simulation fast and the gap between simulation and the real robot small.
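
To give a concrete sense of what "light domain randomization" can look like in practice, here is a minimal Python sketch. It is not the authors' code and is not tied to their simulator or training stack: it only illustrates the general idea of sampling a small set of physical parameters from narrow ranges each time the simulated robot resets, so the trained policy does not overfit to one exact physics configuration. All parameter names and ranges below are illustrative assumptions.

    # Illustrative sketch (not the authors' implementation): "light" domain
    # randomization draws a few physical parameters from narrow ranges at
    # every episode reset. Parameter names and ranges are hypothetical.

    import random
    from dataclasses import dataclass


    @dataclass
    class SimParams:
        ground_friction: float   # friction coefficient of the terrain
        base_mass_scale: float   # multiplier on the torso mass
        motor_strength: float    # multiplier on commanded joint torques
        push_force: float        # magnitude of random external pushes (N)


    def sample_light_randomization(rng: random.Random) -> SimParams:
        """Draw one set of simulation parameters from narrow ranges."""
        return SimParams(
            ground_friction=rng.uniform(0.5, 1.25),
            base_mass_scale=rng.uniform(0.9, 1.1),
            motor_strength=rng.uniform(0.9, 1.1),
            push_force=rng.uniform(0.0, 50.0),
        )


    def train(num_episodes: int = 3, seed: int = 0) -> None:
        """Skeleton training loop: re-randomize the simulator at each reset."""
        rng = random.Random(seed)
        for episode in range(num_episodes):
            params = sample_light_randomization(rng)
            # In a real pipeline these parameters would configure the physics
            # engine before rolling out and updating the policy with RL.
            print(f"episode {episode}: {params}")


    if __name__ == "__main__":
        train()

Keeping the randomized set small and the ranges narrow is what makes the scheme "light"; per the abstract, the robot's narrow sim-to-real gap means the policy does not need aggressive randomization to transfer to hardware.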
Why it matters?
This research is important because it provides a valuable tool for advancing the field of robotics. By making humanoid robots more accessible and functional for academic research, Berkeley Humanoid can help researchers develop better algorithms for robot control. This could lead to significant improvements in how robots interact with their environments, paving the way for more advanced robotic applications in the future.
Abstract
We introduce Berkeley Humanoid, a reliable and low-cost mid-scale humanoid research platform for learning-based control. Our lightweight, in-house-built robot is designed specifically for learning algorithms with low simulation complexity, anthropomorphic motion, and high reliability against falls. The robot's narrow sim-to-real gap enables agile and robust locomotion across various terrains in outdoor environments, achieved with a simple reinforcement learning controller using light domain randomization. Furthermore, we demonstrate the robot traversing for hundreds of meters, walking on a steep unpaved trail, and hopping with single and double legs as a testimony to its high performance in dynamical walking. Capable of omnidirectional locomotion and withstanding large perturbations with a compact setup, our system aims for scalable, sim-to-real deployment of learning-based humanoid systems. Please check http://berkeley-humanoid.com for more details.