Research Engineer - Autonomy
Summary
Singapore
Full-time
About this Job
About Us
We are building the next generation of embodied intelligence: humanoid and mobile robots capable of performing complex, long-horizon tasks in unstructured real-world environments. Our mission is to enable scalable general-purpose autonomy through large-scale learning, multi-modal data, and robust control.
We are looking for passionate engineers and scientists who thrive at the intersection of machine learning, robotics, and systems engineering, and want to see their research come alive in real robots.
Role Overview
You will lead development of the algorithms and architectures that give our robots the ability to walk, balance, grasp, manipulate, and reason in the physical world. This role bridges foundational model research and real-time robotics. You will design learning systems that power whole-body locomotion, dexterous manipulation, and embodied understanding.
Responsibilities
- Train and adapt large-scale vision-language-action (VLA) and vision-language models (VLMs) that predict multi-modal futures (video, proprioception, audio, actions)
- Design and build reinforcement and imitation learning agents for whole-body locomotion, manipulation, and long-horizon tasks
- Integrate learned policies into real-time control loops for humanoid and mobile robots
- Build closed-loop evaluation and scaling pipelines to measure generalization and safety
- Collaborate with simulation and hardware teams (Isaac Sim, MuJoCo) to bridge sim-to-real transfer
- Translate research results into robust autonomy deployed across robot fleets
Preferred Qualifications
- BS/MS/PhD in Robotics, AI/Computer Science, or a related field
- Proficiency in Python and C++ and in deep learning frameworks (PyTorch / JAX)
- Deep experience in RL/IL, control, or multimodal learning
- Understanding of scaling laws, evaluation metrics, and training large models at scale
- Familiarity with real-robot systems, sensing, and embedded control integration
- Familiarity with the industry state of the art and latest research (e.g., Gr00t, Pi0)
Bonus Skills
- Experience with transformer-based control policies or diffusion policy learning.
- Work on humanoid locomotion, manipulation, or whole-body coordination.
- Prior open-source or research contributions in robotics, control, or deep learning.