Luyang Hu (胡路杨)
huluyang@seas.upenn.edu / hulu@umich.edu
Philadelphia, PA 19104
I’m an incoming PhD student at Oregon State University, advised by Prof. Alan Fern. I’m currently a second-year Robotics M.S.E. student in the GRASP Lab at the University of Pennsylvania, advised by Prof. Antonio Loquercio and Prof. Dinesh Jayaraman.
I earned my bachelor’s degree in Computer Science, Data Science, and Linguistics from the University of Michigan, where I worked with Prof. Joyce Chai on how semantic knowledge can guide robot planning and policy learning.
At Penn, my research focuses on scalable and physics-grounded robot learning for manipulation and humanoid control. I build systems that leverage foundation models, large-scale simulation, and human motion capture to teach robots generalizable behaviors across sensing modalities and embodiments. Looking ahead, I'm particularly interested in how robots can move beyond passive data scaling: learning efficiently through predictive world models and information-aware representations that enable adaptive, resource-efficient embodied intelligence.
My broader interests lie in embodied intelligence, robot learning, and data-driven methods that connect human and robotic understanding. Outside the lab, it’s sim2real for me: ⛰️ trails, 🎾 courts, and 🎞️ 35 mm frames.
Link to my CV (last update: Nov 2025).
News
| Date | News |
|---|---|
| Apr 2025 | Our new paper RoSHI is now on arXiv! RoSHI is a versatile robot-oriented suit for capturing human motion in the wild. |
| Mar 2025 | Decided to join the DRAIL Lab at Oregon State University for my PhD! |
| Aug 2024 | Joined the GRASP Lab at the University of Pennsylvania. |
| May 2024 | Graduated from the University of Michigan. Forever Go Blue! |
Selected Publications
- **RoSHI: A Versatile Robot-oriented Suit for Human Data In-the-Wild** (2025)
  We present RoSHI, a low-cost wearable that fuses 9 IMUs with Aria glasses to capture full-body motion, articulated hands, and egocentric video. Our design emphasizes long-horizon stability and occlusion robustness for humanoid learning. We introduce a simulation-in-the-loop retargeting framework that converts human data into physically feasible robot actions; 61.9% of captured sequences deploy successfully on a Unitree G1 humanoid, providing a scalable foundation for human-to-humanoid imitation learning.
- **EUREKAWORLD: Scalable Real-World Manipulation via LLM-Automated RL** (2025)
  We present Eureka for Manipulation, a large-scale RL framework that uses LLMs to automate environment setup, reward shaping, and curriculum design for dexterous manipulation. Leveraging multi-GPU compute, our system couples LLM-guided simulation construction with massive RL optimization to generate diverse digital twins and achieve zero-shot sim-to-real transfer. We demonstrate robust transfer across manipulation tasks, from single-arm tool use to bimanual coordination, and propose a paradigm for scalable reproducibility.
- **Flash Parking: Consumer Sentiment Analysis**