Full-stack robotics developer — perception to actuation:
arm manipulation · autonomous navigation · animatronic HRI · reinforcement learning
LinkedIn • Medium • Portfolio • RoboCloud Dashboard
I believe robotics is most compelling when simulation fidelity, learned behaviour, and real hardware form a tight deployable loop — from servo-level lip sync to fleet-scale coordination.
- Focus: robot manipulation, reinforcement learning, autonomous navigation, animatronic HRI, multi-robot coordination
- Current direction: RL policies on UR arms and quadrupeds; multimodal emotion transformers and phoneme-level face animation on a 25-servo animatronic head
- Open to: robotics developers, RL practitioners, HRI researchers, ROS 2 peers, open-source contributors
Manipulation & Locomotion — UR arms + Robotiq grippers, CHAMP quadruped, RL policies in MuJoCo 3 & Isaac Sim, deployed via ROS 2 policy nodes.
Animatronics & HRI — 25-servo Dynamixel face, multimodal emotion recognition (audio + text + vision + prosody), lip sync transformer.
Navigation & Fleet — Nav2 + SLAM Toolbox, Open-RMF multi-robot coordination, MoveIt 2.
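The "deployed via ROS 2 policy nodes" line above boils down to an observe → infer → clip → publish loop. A minimal, dependency-free sketch of that shape (the class, joint limits, and "policy" here are all hypothetical stand-ins; a real node would wrap this in `rclpy` subscribers and a timer):

```python
# Sketch of the control-loop core of a policy node:
# observe -> infer action -> clip to joint limits -> (publish).
# The "policy" is a stub proportional pull toward a target pose.
from dataclasses import dataclass

# Hypothetical per-joint position limits (radians) for a 6-DoF arm.
JOINT_LIMITS = [(-3.14, 3.14)] * 6

@dataclass
class PolicySketch:
    gain: float = 0.1  # stub policy: move 10% toward the target each step

    def act(self, obs: list[float], target: list[float]) -> list[float]:
        """Map joint observations to a clipped joint-position command."""
        raw = [q + self.gain * (t - q) for q, t in zip(obs, target)]
        return [min(max(a, lo), hi) for a, (lo, hi) in zip(raw, JOINT_LIMITS)]

policy = PolicySketch()
command = policy.act(obs=[0.0] * 6, target=[1.0] * 6)
```

Keeping inference and limit-clipping in one pure function like this makes the policy testable off-robot before it ever touches hardware.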
- care about real, deployable robotics and HRI, not just simulation demos
- work on manipulation, locomotion, navigation, or expressive robot faces
- want to collaborate on open-source ROS 2, RL tooling, or multimodal AI
- enjoy discussing engineering trade-offs, from servo kinematics to transformer attention
If you're working on robotics, reinforcement learning, HRI, or expressive animatronics, feel free to reach out.
