Torc Robotics - ML Engineer, II - Birds Eye View (BEV)
Requirements
• PhD in machine learning or data science.
• Experience working in autonomous driving, robotics, or simulation-based training environments.
• Experience with distributed training frameworks or large-scale ML infrastructure (e.g., Ray, Anyscale).
• Experience working with simulation environments or large-scale behavior datasets.
• Familiarity with vehicle dynamics, motion planning, or multi-agent decision-making systems.
• Experience deploying ML models into production or real-world robotics systems.
• Experience with multi-modal sensor fusion, including LiDAR, cameras, radar, or map inputs.
• Experience working with BEV representations, occupancy grids, or 3D scene representations.
Responsibilities
• Develop and train machine learning models for scene understanding, including tasks such as object detection, road and lane prediction, semantic voxel grid classification, occupancy prediction, and map understanding in bird’s-eye-view (BEV) space.
• Implement production-quality ML code to support model training, evaluation, and inference within the perception stack.
• Analyze model performance, identify failure modes, and propose improvements to increase robustness across diverse driving environments and conditions.
• Identify and interpret objects, lanes, obstacles, and weather conditions in the driving environment.
• Contribute to multi-modal perception systems, combining signals from LiDAR, cameras, radar, and map sources into unified scene representations.
• Work with large-scale datasets from simulation, fleet logs, and on-vehicle data to curate training data and improve model performance.
• Collaborate with data, deployment, and infrastructure teams to evaluate perception models and ensure reliable performance in real-world driving scenarios.
• Help integrate perception models into the autonomy stack and testing pipelines, enabling faster experimentation and iteration.
• Contribute to tooling and infrastructure that improves training efficiency, experiment tracking, and reproducibility.
• Participate in technical discussions around model architectures, sensor fusion strategies, and training approaches within the team.
Benefits
• Salary range: $153,200 - $183,300 USD