Sr Data Infrastructure Engineer
Requirements
• 5+ years of professional experience in data engineering or data infrastructure roles.
• Strong proficiency in Python and SQL, with the ability to write production-quality, scalable, and well-tested code.
• Proven experience designing and operating ingestion pipelines and staging layers (streaming and batch) that support downstream analytics and product use cases.
• Experience deploying and managing cloud data infrastructure in AWS using infrastructure-as-code (e.g., Terraform, Kubernetes, Docker).
• Hands-on experience with cloud-based data platforms, storage systems, and infrastructure.
• Familiarity with data quality practices, testing frameworks, and CI/CD for data pipelines.
• Highly motivated teammate with excellent oral and written communication skills.
• Enjoys working in a fast-paced, collaborative, and dynamic start-up environment as part of a small team.
• Willingness to travel occasionally for on-site support or testing, as needed.
• Must have and maintain US work authorization.
• Proven experience as the technical owner or lead for a data ingestion platform or foundational data system.
• Experience with Databricks (Delta Live Tables, SQL Warehouse) and familiarity with dbt or similar tools to support analytics engineers.
• Strong understanding of multi-tenant architectures, with a track record of balancing cost-efficiency, performance, and reliability at scale.
• Background in streaming systems (Kafka, Flink, Kinesis, or Spark Structured Streaming).
• Familiarity with data quality and observability tools (e.g., Great Expectations, Monte Carlo).
• Exposure to IoT/robotics telemetry or 3D/spatial data processing (e.g., point clouds, LiDAR, time-series).
• Experience working in a product-facing data role, collaborating directly with product, engineering, and AI/ML teams to deliver data systems that enable new customer-facing features.
• Demonstrated ability to translate product requirements into scalable data solutions with measurable business impact.
Compensation
The base salary range for this position is $180,000-$215,000, plus equity and comprehensive benefits. Our salary ranges are determined by role and experience level. The range reflects the minimum and maximum target for new hire salaries for the position in the noted geographic area. Within the range, individual pay is determined by additional factors, including job-related skills, experience, and relevant education or training.
Responsibilities
• Own the full ingestion path from edge to cloud, ensuring robot telemetry, sensor data, and warehouse events are reliably captured, transported, and made available for downstream systems.
• Design, build, and operate scalable pipelines and foundational data layers (streaming and batch) that deliver low-latency, reliable data for analytics, AI/ML, and product features.
• Build and maintain ingestion pipelines from object storage (e.g., S3) into Databricks, including raw → staged → analytics-ready layers, supporting both streaming and batch workloads.
• Own the reliability and CI/CD of the data warehouse and foundational data layers, enabling safe, repeatable deployment of schema changes, transformations, and infrastructure that analytics engineers depend on.
• Implement observability, monitoring, and data quality checks to ensure pipeline correctness, detect failures or drift, and maintain trust in data used by Vista, Portal, and Scoutmap.
• Scale and optimize multi-tenant data infrastructure, balancing performance, reliability, and cost-efficiency as Cobot’s customer base and data volume grow.
• Collaborate directly with robotics, AI/ML, product, and analytics teams to translate product requirements into resilient data systems that unlock customer-facing features.
• Establish and enforce best practices for data engineering, reliability, security, and CI/CD across ingestion, staging, and warehouse layers, owning the foundations while enabling analytics engineers to ship metrics, marts, and dashboards efficiently.
Benefits
• Equity as part of the compensation package.
• Paid time off (PTO).
• Flexible work hours and remote work options from multiple locations, including the San Francisco Bay Area, Seattle, Boston, Denver/Boulder, and Pittsburgh.