Senior Perception Fusion and Deep Learning Engineer - Autonomous Vehicles
Summary
Location: Beijing / Shanghai
Employment type: Full-time
Experience: 3+ years
About this Job
NVIDIA has continuously reinvented itself over the past two decades. The invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU‑accelerated deep learning ignited modern AI — the next era of computing — with GPUs serving as the brains of computers, robots, and self‑driving cars that can perceive and understand the world. This is our life’s work: to amplify human imagination and intelligence.
We are seeking a systems software expert with deep expertise in perception fusion and deep learning for autonomous vehicles, based in Beijing or Shanghai. Leveraging your background in computer vision, multimodal sensor fusion, and deep learning, you will design, adapt, and scale state‑of‑the‑art multimodal perception solutions that enable NDAS features across multiple OEM partners, carlines, and regions. Your work will play a pivotal role in enabling vehicles to understand and interpret their surroundings in real time. This is an exciting opportunity to contribute to cutting‑edge autonomous‑driving technology, collaborate with cross‑functional teams, and shape the future of mobility.
What you will be doing
Evaluate perception‑fusion algorithms and KPIs across multiple OEM carlines for both driving and parking functions, including near‑field scene understanding for parking and full‑field scene understanding for active safety and driving.
Triage and diagnose perception‑fusion issues, identifying root causes behind KPI variations across carlines, regions, ODDs, and operating conditions.
Propose and prototype innovative perception‑fusion solutions to support new sensor configurations and platform requirements.
Collaborate cross‑functionally with perception, planning, controls, systems, and platform teams to drive the development, optimization, and evolution of perception‑fusion algorithms.
What we need to see
MS or PhD (or equivalent industry experience) in Computer Science, Computer Engineering, Mathematics, Physics, or a related field, with 3+ years of industry experience in perception, computer vision, or multimodal sensor fusion for autonomous driving or robotics.
Experience with advanced perception‑fusion components such as multi‑object tracking, data association, sensor modeling and calibration, or BEV / transformer‑based fusion architectures.
Strong algorithmic fundamentals in digital image processing, multi‑view 3D geometry, nonlinear optimization, and classical fusion frameworks (KF/EKF), supported by a solid foundation in linear algebra and uncertainty modeling.
Hands‑on experience with deep learning, including developing, training, and deploying perception or fusion models; strong Python skills for prototyping, analysis, and evaluation.
Proficiency in C/C++ and Linux, with deep knowledge of modern C++ features, data structures, algorithms, and performance‑oriented system design.
Excellent communication and collaboration skills, with the ability to work effectively across cultures, nationalities, and time zones.
Ways to stand out from the crowd
Experience working with diverse sensor modalities, including camera, lidar, radar, IMU, GNSS, and CAN‑based odometry.
Extensive deep‑learning experience applied to autonomous driving or robotics.
A strong track record of designing perception‑fusion algorithms that have shipped in production ADAS or autonomous‑driving programs.