
Hi, I'm Darsh Menon 👋

Full-stack robotics developer — perception to actuation:
arm manipulation · autonomous navigation · animatronic HRI · reinforcement learning

LinkedIn · Medium · Portfolio · RoboCloud Dashboard


I believe robotics is most compelling when simulation fidelity, learned behaviour, and real hardware form a tight deployable loop — from servo-level lip sync to fleet-scale coordination.

At a Glance

  • Focus: robot manipulation, reinforcement learning, autonomous navigation, animatronic HRI, multi-robot coordination
  • Current direction: RL policies on UR arms and quadrupeds; multimodal emotion transformers and phoneme-level face animation on a 25-servo animatronic head
  • Open to: robotics developers, RL practitioners, HRI researchers, ROS 2 peers, open-source contributors

What I Build

Manipulation & Locomotion — UR arms + Robotiq grippers, CHAMP quadruped, RL policies in MuJoCo 3 & Isaac Sim, deployed via ROS 2 policy nodes.

Animatronics & HRI — 25-servo Dynamixel face, multimodal emotion recognition (audio + text + vision + prosody), lip sync transformer.

Navigation & Fleet — Nav2 + SLAM Toolbox, Open-RMF multi-robot coordination, MoveIt 2.
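The "ROS 2 policy node" idea above — subscribe to observations, run a trained policy, publish actions — can be sketched in a few lines. This is a minimal, ROS-free illustration of the pattern, not code from these repos: `PolicyNode` and `toy_policy` are hypothetical stand-ins, and a real node would use rclpy subscriptions/publishers and a loaded SAC/PPO checkpoint in place of the toy function.

```python
# Sketch of the observation -> policy -> action loop behind a policy node.
# A real ROS 2 node would receive observations via a subscription callback
# and publish the resulting action as a command message.

class PolicyNode:
    def __init__(self, policy):
        # policy: callable mapping an observation vector to an action vector
        self.policy = policy

    def on_observation(self, obs):
        # In rclpy this would be the subscription callback; here we just
        # run inference and return the action instead of publishing it.
        return self.policy(obs)

def toy_policy(obs):
    # Stand-in for a trained network: scale each observation component.
    return [0.5 * x for x in obs]

node = PolicyNode(toy_policy)
print(node.on_observation([1.0, -2.0]))  # → [0.5, -1.0]
```

The useful property of this split is that the policy callable is swappable: the same node wrapper can serve a MuJoCo-trained checkpoint in simulation and on hardware.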

Looking for People Who...

  • care about real deployable robotics and HRI, not just simulation demos
  • work on manipulation, locomotion, navigation, or expressive robot faces
  • want to collaborate on open-source ROS 2, RL tooling, or multimodal AI
  • enjoy discussing engineering trade-offs, from servo kinematics to transformer attention


Let's Connect

If you're working on robotics, reinforcement learning, HRI, or expressive animatronics, feel free to reach out.

📧 darshmenon02@gmail.com · LinkedIn · Medium

Pinned Repositories

  1. UR3_ROS2_PICK_AND_PLACE (C++)

    UR robotic arm with Robotiq 2-finger gripper for ROS 2

  2. pickplace-rl-mobile-manipulator (C++)

    Mobile manipulator pick-and-place via reinforcement learning — UR3 arm on a diff-drive base, trained end-to-end with TQC in Gazebo/ROS 2

  3. rosnav (Python)

    Full-stack ROS 2 autonomous navigation: Nav2, SLAM Toolbox, Gazebo Harmonic, multi-robot fleet coordination, coordinated frontier exploration, MPPI controller, behavior trees & waypoint following o…

  4. ur-arm-rl (Python)

    Reinforcement learning environment for UR5e arms with MuJoCo 3 — SAC training for reach, pick-and-place, and symmetric multi-arm cooperative tasks. Includes a ROS 2 policy node for Gazebo deployment.

  5. quadruped-dog-rl (Python)

    Unitree Go2 quadruped robot dog — RL locomotion training (MuJoCo + Gazebo Harmonic), ROS 2 CHAMP walking controller, PPO policy, keyboard teleop, and multi-terrain simulation

  6. multi-robot-fleet-ros2 (Python)

    ROS 2 monorepo for multi-robot fleet management — AMRs, UR3 mobile manipulators, Open-RMF fleet coordination, MoveIt 2, Nav2, and SmolVLA AI inference