Terawatt Infrastructure - Senior Data Engineer
Requirements
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
• 6+ years in data engineering, platform development, or large-scale data systems.
• Hands-on experience with Databricks or modern lakehouse platforms and cloud platforms (AWS, GCP, or Azure).
• Experience building scalable ETL/ELT pipelines using Spark and SQL.
• Proficiency in SQL and experience with NoSQL databases (e.g., MongoDB, Cassandra, DynamoDB).
• Strong understanding of data modeling, schema design, and performance optimization.
• Experience building reliable, production-grade data pipelines with a focus on data quality and observability.
• Experience supporting analytics and/or ML workflows, including preparing ML-ready datasets.
• Working knowledge of data governance, security, and access control frameworks.
• Familiarity with Infrastructure as Code (IaC) and automated deployment workflows (e.g., Terraform).
• Proven ability to influence technical direction and collaborate across teams.
• Experience working with time-series, IoT, or high-volume telemetry data systems.
• Familiarity with EV charging ecosystems, including OCPP (Open Charge Point Protocol).
• Domain experience in electric vehicles (EV), energy systems, or distributed energy resources (DERs).
• Experience building ML feature pipelines, training datasets, or batch inference workflows.
• Experience designing self-service data platforms for analysts and data scientists.
• Background in event-driven or real-time data architectures.
• Solid software engineering experience, including writing maintainable production code, testing, and applying engineering best practices to data systems.
Compensation
• $110,000 - $135,000 a year
• Compensation for this role is determined by several factors, including the cost of labor in specific geographic markets, and these ranges are intended to provide a helpful reference. The actual compensation offer will be based on the candidate’s location, skills, level of expertise and experience, and internal equity considerations. In addition to base salary, we offer a comprehensive benefits package and, where applicable, performance-based incentives.
• We are building a team that represents a variety of backgrounds, perspectives, and skills. At Terawatt, we continuously strive to foster inclusion, humility, energizing relationships, and belonging, and we welcome new ideas. We're growing and want you to grow with us. We encourage people from all backgrounds to apply.
• If a reasonable accommodation is required to fully participate in the job application or interview process, or to perform the essential functions of the position, please contact [email protected].
• Terawatt Infrastructure is an equal-opportunity employer.
Responsibilities
• Turn messy operational data into reliable signals by building pipelines that transform noisy, incomplete, and high-volume time-series data into trusted datasets for analytics, product features, and ML workflows.
• Design a resilient lakehouse platform by architecting a scalable Databricks-based platform that supports both streaming and batch workloads while ensuring governance, observability, and reliability.
• Enable production-ready ML pipelines by creating reproducible workflows, reliable feature datasets, and batch prediction pipelines that downstream systems can depend on.
• Enable self-service analytics and ML by building infrastructure and abstractions that allow analysts, engineers, and data scientists to independently explore and use data.
• Scale a platform for product and analytics by designing systems that support operational product features, internal reporting, and ML use cases without compromising performance or data quality.
• Architect and evolve a Databricks-based data platform that serves as the scalable foundation for product features, internal reporting, and ML workflows.
• Set technical standards for modeling raw data into clean, reliable datasets, ensuring high integrity and point-in-time accuracy for both BI and ML applications.
• Build and maintain self-service tooling and infrastructure abstractions that improve the developer experience for data producers, analysts, and data scientists.
• Design and optimize high-performance ETL/ELT pipelines using Delta Live Tables and Structured Streaming to handle seamless ingestion from diverse data sources.
• Own platform observability, testing, and proactive monitoring to ensure the performance and reliability of critical data delivery and pipeline health.
• Architect and enforce data security, compliance, and access controls by implementing Unity Catalog and IAM (Identity and Access Management) best practices across the enterprise.
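To make the first responsibility concrete, here is a minimal, hedged sketch of turning a noisy telemetry feed into a trusted dataset: deduplicating repeated timestamps, dropping physically implausible readings, and emitting a sorted result. The `Reading` record and its field names are illustrative assumptions, not Terawatt's actual schema; a production version would run on Spark rather than plain Python.

```python
from dataclasses import dataclass

# Hypothetical telemetry record; field names are illustrative only.
@dataclass(frozen=True)
class Reading:
    station_id: str
    ts: int          # epoch seconds
    kw: float        # measured charging power

def clean_readings(raw, max_kw=350.0):
    """Deduplicate, drop out-of-range values, and sort a noisy feed."""
    latest = {}
    for r in raw:
        if r.kw < 0 or r.kw > max_kw:      # sensor glitch: discard
            continue
        latest[(r.station_id, r.ts)] = r   # last write wins on duplicates
    return sorted(latest.values(), key=lambda r: (r.station_id, r.ts))

raw = [
    Reading("A", 100, 50.0),
    Reading("A", 100, 52.0),   # duplicate timestamp: later record kept
    Reading("A", 160, -3.0),   # negative power: dropped
    Reading("B", 100, 120.0),
]
print(clean_readings(raw))
```

The same shape (filter, dedupe on a natural key, order) maps directly onto a Spark DataFrame job; the stdlib version just makes the quality rules easy to read.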
• Build and maintain production-grade pipelines that transform raw data into ML-ready features, training datasets, and reliable batch predictions.
• Lead Infrastructure as Code (IaC) initiatives using Terraform, and improve team productivity by identifying technical debt and automating complex deployment workflows.
• Partner with Engineering, Product, and Business teams to resolve ambiguities and ensure shipped data features are impactful, reliable, and aligned with business outcomes.
• Build and maintain a self-service data lake environment, empowering non-data engineers and stakeholders to discover, explore, and analyze data independently.
• Promote engineering excellence through code reviews, documentation, and technical standards for orchestration and testing.
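The "point-in-time accuracy" requirement for training datasets usually comes down to an as-of join: for each label timestamp, use only the latest feature value known at or before that moment, so no future information leaks into training data. A minimal sketch under those assumptions (the timestamps and values here are invented for illustration):

```python
import bisect

def as_of_join(feature_rows, label_rows):
    """Point-in-time join to avoid leakage.

    feature_rows: [(ts, value)] sorted ascending by ts.
    label_rows:   [(ts, label)] in any order.
    Returns [(label_ts, feature_value_or_None, label)].
    """
    feature_ts = [ts for ts, _ in feature_rows]
    out = []
    for ts, label in label_rows:
        i = bisect.bisect_right(feature_ts, ts) - 1  # last feature ts <= label ts
        value = feature_rows[i][1] if i >= 0 else None
        out.append((ts, value, label))
    return out

features = [(100, 0.2), (200, 0.5), (300, 0.9)]
labels = [(150, 1), (250, 0), (50, 1)]
print(as_of_join(features, labels))
```

A label at ts=50 gets `None` because no feature was known yet; at ts=150 it sees the ts=100 value, never the ts=200 one. At Spark scale the same semantics come from time-bounded windowed joins, but the leakage rule is identical.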