SpAItial - Research Scientist - 3D Diffusion
Requirements
• PhD in computer science, computer vision, graphics, machine learning, or a related field.
• Top-tier publication record at venues such as CVPR, ECCV/ICCV, NeurIPS, and SIGGRAPH.
• Strong fundamentals in deep learning and generative modeling, in particular diffusion models and large transformer models.
• Hands-on experience training diffusion models and working with cutting-edge image and video model stacks (e.g., Stable Diffusion, FLUX, WAN, or similar).
• Solid understanding of 3D processing concepts such as camera geometry, depth, reconstruction, point clouds, meshes, or Gaussian splats.
• Proficiency in Python and deep learning frameworks such as PyTorch, with experience in large-scale model training and optimization.
• Ability to implement research ideas, run rigorous experiments, and ship reliable ML code.
Responsibilities
• Design and develop diffusion-based methods for 3D generation from images, video, and other inputs.
• Build, train, optimize, and evaluate 3D diffusion models, including research on architectures, losses, and sampling strategies.
• Apply and adapt cutting-edge image and video diffusion backbones (e.g., Stable Diffusion, FLUX, WAN, or comparable systems) to 3D generation.
• Implement and experiment with state-of-the-art 3D representations including point clouds, meshes, and 3D Gaussian Splatting.
• Develop training pipelines and loss functions that improve geometry accuracy, visual fidelity, and spatiotemporal consistency.
• Collaborate with researchers to integrate physics-aware priors and world model capabilities into diffusion systems.
• Analyze model performance, debug failure cases, and iterate rapidly to improve quality and robustness.