Luyang Hu (胡路杨)

huluyang@seas.upenn.edu / hulu@umich.edu


Philadelphia, PA 19104

I’m a second-year Robotics M.S.E. student in the GRASP Lab at the University of Pennsylvania, advised by Prof. Antonio Loquercio and Prof. Dinesh Jayaraman.

I earned my bachelor’s degree in Computer Science, Data Science, and Linguistics from the University of Michigan, where I worked with Prof. Joyce Chai on how semantic knowledge can guide robot planning and policy learning.

At Penn, my research focuses on scalable and physics-grounded robot learning for manipulation and humanoid control. I build systems that leverage foundation models, large-scale simulation, and human motion capture to teach robots generalizable behaviors across sensing modalities and embodiments. Looking ahead, I’m particularly interested in how robots can move beyond passive data scaling: learning efficiently through predictive world models and information-aware representations that enable adaptive, resource-efficient embodied intelligence.

My broader interests lie in embodied intelligence, robot learning, and data-driven methods that connect human and robotic understanding. As I approach the end of my master’s training, I’m eager to continue pursuing research and am actively seeking PhD opportunities.

Outside the lab, it’s sim2real for me: ⛰️ trails, 🎾 courts, and 🎞️ 35 mm frames.

Link to my CV (last update: Nov 2025).

News

Jan 15, 2025 Our paper RoSHI: A Versatile Robot-oriented Suit for Human Data In-the-Wild has been submitted to ICRA 2026!
Aug 27, 2024 Joined the GRASP Lab at the University of Pennsylvania 🤖
May 08, 2024 Graduated from the University of Michigan. Forever go blue 🎓

Selected Publications

  1. RoSHI: A Versatile Robot-oriented Suit for Human Data In-the-Wild
    Wenjing Mao, Jefferson Ng, Luyang Hu, Daniel Gehrig, and Antonio Loquercio
    ICRA 2026 (under review), 2025

    We present RoSHI, a low-cost wearable that fuses 9 IMUs with Aria glasses to capture full-body motion, articulated hands, and egocentric video. Our design emphasizes long-horizon stability and occlusion robustness for humanoid learning. We introduce a simulation-in-the-loop retargeting framework that converts human data into physically feasible robot actions; 61.9% of captured sequences deploy successfully on a Unitree G1 humanoid, providing a scalable foundation for human-to-humanoid imitation learning.

  2. EUREKAWORLD: Scalable Real-World Manipulation via LLM-Automated RL
    Luyang Hu, others, Dinesh Jayaraman, and Osbert Bastani
    RSS 2026 (expected submission), 2025

    We present Eureka for Manipulation, a large-scale RL framework that integrates LLMs to automate environment setup, reward shaping, and curriculum design for dexterous manipulation. Leveraging multi-GPU compute, our system couples LLM-guided simulation construction with massively parallel RL optimization to generate diverse digital twins and achieve zero-shot sim-to-real transfer. We demonstrate robust transfer across manipulation tasks, from single-arm tool use to bimanual coordination, and propose a paradigm for scalable, reproducible real-world manipulation.

  3. Persona-Nav: A Large-Scale Personalized Household Navigation Dataset Generated by LLMs
    Luyang Hu, Yichi Zhang, and Joyce Chai
    ICML 2026 (expected submission), 2024

    We present a large-scale household navigation dataset of 2,700+ houses with personalized semantics reflecting resident identities, preferences, and relationships. Using LLM-driven top-down generation, we create environments where robots navigate via user-centric queries (e.g., "the owner’s favorite mug") rather than generic spatial commands. This work bridges spatial navigation and context-aware behavior, enabling human-aligned embodied intelligence in personalized home environments.