Computer Vision Researcher (Simulation and Embodied AI)
Summary
New York, United States
Full-time
About this Job
About Mecka AI
Mecka AI is building the data infrastructure layer for robotics and embodied AI.
We design and operate global systems for data capture, data labeling, and hardware-enabled workflows used by leading AI labs and robotics companies. Our datasets power models that learn from the physical world — enabling robots to understand, reason, and act in real environments.
As robotics systems evolve, combining real-world data with simulation-driven learning is critical to unlocking robust, generalizable behavior.
The Role
We’re hiring a Computer Vision Researcher with a focus on simulation-driven learning for robotics.
This role sits at the intersection of vision, simulation, and control. You’ll work on using real-world data to inform simulated environments, and apply techniques like reinforcement learning and contact modeling to improve motion — particularly for hands, manipulation, and lower-body movement.
You’ll work closely with data, engineering, and customer teams to bridge the path from captured data to simulation to deployable behavior.
Responsibilities
Simulation & Learning Systems
Build and iterate on simulation environments for robotic learning
Use real-world datasets to inform and improve simulated environments
Apply reinforcement learning (RL) to learn contact-rich behaviors and motion policies
Focus on improving dexterous manipulation and lower-body motion
Vision & Data Integration
Develop pipelines that translate captured video and sensor data into usable simulation inputs
Work on perception systems that support simulation fidelity (pose, state estimation, object understanding)
Align real-world data distributions with simulation environments
Contact Modeling & Motion
Model physical interactions (contact, force, constraints) in simulation
Improve smoothness, stability, and realism of learned motion
Help bridge sim-to-real gaps for manipulation and locomotion
Experimentation & Evaluation
Design experiments to evaluate model performance in simulation and real-world settings
Analyze failure modes and iterate on data, models, and environments
Work with customers to validate whether data and simulation outputs meet their needs
Cross-Functional Collaboration
Work closely with:
Data teams (capture + labeling pipelines)
Engineering teams (infrastructure + deployment)
External customers (robotics / AI labs)
Translate research ideas into practical, usable systems
Who You Are
Required Experience
MSc or PhD in robotics, computer vision, machine learning, or a related field
Strong experience with simulation environments (e.g., Isaac Gym, MuJoCo, or similar)
Experience applying reinforcement learning to control or robotics problems
Strong programming skills in Python (C++ is a plus)
Solid understanding of vision, state estimation, and/or perception systems
Strong Signals
Experience working on dexterous manipulation, hands, or locomotion
Experience modeling contact-rich interactions in simulation
Experience working on sim-to-real transfer
Familiarity with vision-language-action (VLA) or multimodal systems
Experience working with large-scale real-world datasets
You Are
Deeply curious about how robots learn and move
Comfortable working across research and engineering boundaries
Able to move from idea → experiment → iteration quickly
Excited by messy, real-world problems — not just clean benchmarks
Motivated to build systems that actually get used
Why This Role
Work on core problems in simulation-driven robotics learning
Help define how real-world data and simulation interact at scale
Partner with leading AI labs and robotics companies
High ownership and direct impact on product and research direction
Opportunity to push forward how robots learn manipulation and movement