Google Cloud Data Engineer
Requirements
• 3–5 years of experience in data engineering or a closely related data infrastructure role.
• Proven experience designing and implementing scalable data pipelines and warehouse architectures.
• Strong expertise in Google Cloud Platform (BigQuery, Cloud Storage, Cloud Composer, Pub/Sub, Dataflow).
• Hands-on experience with dbt (data build tool) at production scale, including models, tests, macros, sources, and documentation.
• Experience building and maintaining data pipelines with Apache Airflow or a comparable workflow orchestration tool.
• Strong proficiency in SQL, including advanced BigQuery SQL (window functions, partitioning, clustering, query optimization); see the Python sketch after this list.
• Proficiency in Python for data engineering tasks, including API integrations, data processing scripts, and custom operators.
• Familiarity with data modeling concepts: star schema, dimensional modeling, slowly changing dimensions (SCD).
• Experience with version control (Git) and collaborative development workflows (pull requests, code review).
• Understanding of data quality, lineage, and observability best practices.
• Startup or growth-stage mindset: comfortable with ambiguity, rapid iteration, and evolving priorities.
• Excellent communication skills, with the ability to collaborate effectively across technical and non-technical teams.

Preferred:
• Experience with Terraform or similar infrastructure-as-code tools for managing cloud data infrastructure.
• Familiarity with streaming technologies such as GCP Pub/Sub, Dataflow, or Apache Kafka.
• Knowledge of Looker, Tableau, or other BI tools and how data models power them.
• Google Cloud Professional Data Engineer certification.

Compensation:
• $90,000–$120,000 USD
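To give a concrete flavor of the SQL and Python proficiency listed in the requirements above, here is a minimal, hypothetical sketch: it runs an advanced BigQuery SQL query (a window function over a date-filtered scan) from Python using the google-cloud-bigquery client. The project, dataset, table, and column names (my-project.analytics.events, user_id, event_ts, event_name) are illustrative assumptions, not details from this posting.

    # Hypothetical sketch only: names below are assumptions, not from this posting.
    # Runs a BigQuery window-function query from Python via google-cloud-bigquery.
    from google.cloud import bigquery

    client = bigquery.Client()  # authenticates via application-default credentials

    # Latest event per user over the past 7 days, using ROW_NUMBER() as a window
    # function; the date filter limits how much of the table is scanned.
    query = """
    SELECT user_id, event_ts, event_name
    FROM (
      SELECT
        user_id,
        event_ts,
        event_name,
        ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY event_ts DESC) AS rn
      FROM `my-project.analytics.events`
      WHERE DATE(event_ts) >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    )
    WHERE rn = 1
    """

    for row in client.query(query).result():
        print(row.user_id, row.event_ts, row.event_name)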
Responsibilities
• Build and maintain production-grade ELT pipelines that ingest data from internal applications, third-party SaaS tools, and event streams into our BigQuery data warehouse.
• Own specific data domains end to end, from raw ingestion through to marts, ensuring your areas of the warehouse are accurate, tested, and well documented.
• Write and maintain dbt models, tests, macros, and documentation within our established dbt project conventions and code review process.
• Develop and manage Airflow DAGs on Cloud Composer or similar tools to orchestrate data workflows, following patterns and standards set by the team (see the sketch after this list).
• Implement data quality checks and monitoring to catch anomalies before they reach downstream consumers.
• Optimize BigQuery queries and models for cost and performance within your domain, escalating architectural tradeoffs to senior engineers when appropriate.
• Collaborate with analysts and stakeholders to translate business data needs into well-scoped pipeline and modeling tasks.
• Participate in on-call rotations, respond to pipeline incidents, and write clear postmortems.
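As a sketch of the orchestration work described above, here is a minimal, hypothetical Airflow DAG of the kind this role would develop and manage. The DAG id, schedule, retry settings, and stubbed task logic are illustrative assumptions, not patterns taken from this team.

    # Hypothetical sketch only: DAG id, schedule, and task bodies are assumptions.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_orders(**context):
        """Pull raw records from a source API into staging (stubbed)."""


    def load_to_bigquery(**context):
        """Load staged records into a raw BigQuery dataset (stubbed)."""


    with DAG(
        dag_id="orders_elt",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",  # one run per day
        catchup=False,               # skip backfilling past runs
        default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)

        extract >> load  # load runs only after extract succeeds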
Benefits
• Work on a modern, best-in-class GCP and BigQuery data stack with a high-performing team.
• Influence data platform architecture decisions and grow into a senior or staff engineering role.
• Competitive compensation, equity, and benefits, with a culture that values engineering craft and continuous learning.

As well as being part of something exciting every day, you will also receive the following benefits:
• Annual salary + bonus
• A remote-first culture!
• Health, Dental, and Vision Insurance
• 13 Paid Holidays
• Company volunteer days