BigQuery: Expert-level SQL and performance tuning, including partitioning and clustering strategy and production-grade curated marts and feature tables; working knowledge of BQML for scoring and evaluation in analytics workflows.
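As a hedged illustration of the partitioning and clustering strategy named above (the project, dataset, table, and column names are hypothetical, not from this posting), a daily-partitioned, clustered campaign mart could be declared with DDL like the following, rendered here as a Python-built string so the pattern is easy to adapt:

```python
def campaign_mart_ddl(project: str, dataset: str) -> str:
    """Render BigQuery DDL for a date-partitioned, clustered mart.

    Table and column names are illustrative assumptions only.
    """
    return f"""
CREATE TABLE IF NOT EXISTS `{project}.{dataset}.campaign_daily_mart`
(
  event_date  DATE,
  campaign_id STRING,
  ad_group_id STRING,
  cost        NUMERIC,
  conversions INT64
)
PARTITION BY event_date                    -- prune scans to the queried dates
CLUSTER BY campaign_id, ad_group_id        -- co-locate rows for common filters
OPTIONS (partition_expiration_days = 730)  -- bound long-term storage cost
""".strip()

ddl = campaign_mart_ddl("my-project", "marketing")
```

Partitioning on the date column lets queries with an `event_date` filter scan only the relevant partitions, and clustering on the high-cardinality filter columns reduces bytes read within each partition.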
Dataflow (Apache Beam): Strong proficiency in building reliable batch and incremental pipelines for GA4/Google Ads data into BigQuery, with streaming patterns (windowing/watermarks/triggers) used when near-real-time alerting is required.
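The streaming concepts mentioned here (windowing, watermarks, late-data handling) can be sketched without Beam's API at all; the toy below assigns events to fixed tumbling windows and drops events that arrive too far behind a simple max-event-time watermark. The window size, event shape, and lateness bound are assumptions for illustration, not Beam semantics:

```python
from collections import defaultdict

WINDOW_SECS = 60  # assumed tumbling-window size

def windowed_counts(events, allowed_lateness_secs=30):
    """Count (event_time_secs, key) events per fixed window.

    The watermark is the max event time seen so far; events arriving more
    than `allowed_lateness_secs` behind it are discarded as too late.
    """
    counts = defaultdict(int)
    watermark = float("-inf")
    for ts, key in events:
        watermark = max(watermark, ts)
        if ts < watermark - allowed_lateness_secs:
            continue  # late beyond the allowed lateness: drop
        window_start = (ts // WINDOW_SECS) * WINDOW_SECS
        counts[(window_start, key)] += 1
    return dict(counts)
```

In a real Beam pipeline the same roles are played by `FixedWindows`, the runner-managed watermark, and `allowed_lateness`, but the accounting above captures the core idea.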
Pub/Sub: Experience with event-driven architecture and message queuing.
Looker Core: Advanced proficiency in LookML (not just drag-and-drop), including derived tables, explores, and Liquid syntax.
Semantic Modeling: Develop robust LookML models to create a trusted layer of data governance, ensuring metrics are consistent across the organization.
Dashboard Creation: Design intuitive, high-impact dashboards in Looker for two distinct audiences: Operational teams (paid search performance monitoring, anomaly triage, and optimization workflows) and Executives (ROAS, conversion efficiency, budget pacing, and channel contribution reporting).
Ability to independently own workstreams while collaborating closely with data science, analytics, and engineering peers in agile delivery.
Advanced English skills
Google Cloud Professional Data Engineer certification
Google Professional Machine Learning Engineer certification
Google Cloud Professional Cloud Architect certification
Bachelor’s or Master’s degree in a quantitative or technical field (e.g., Computer Science, Engineering, Statistics)
Working knowledge of cloud architecture components in GCP
Proficiency in Big Data environments and tools such as Spark, Hive, Impala, Pig, etc.
Proficiency in Terraform
Familiarity with front-end and back-end web application stacks and frameworks, and with API design and usage (REST/GraphQL)
Experience leading and managing technical data/analytics/machine learning projects
Experience supporting data products consumed by conversational analytics surfaces (BigQuery/Looker query routing, audit logging, and row-level access patterns).
If you do not meet all the listed qualifications or have gaps in your experience, we still encourage you to apply. At Valtech, we recognize that talent comes in many forms, and we value diverse perspectives and a willingness to learn.
Commitment to reaching all kinds of people
We design experiences that work for all kinds of people - and that starts with our own teams. At Valtech, we’re intentional about building an inclusive culture where everyone feels supported to grow, thrive and achieve their goals. No matter your background, you belong here. Explore our Diversity & Inclusion site to see how we’re creating a more equitable Valtech for all.
Your application process
⚠️ Beware of recruitment fraud: Only engage with official Valtech email addresses.
We are committed to inclusion and accessibility. If you need reasonable accommodations during the interview process, please either indicate it in your application or let your Talent Partner know.
Valtech is the experience innovation company that exists to unlock a better way to experience the world. By blending crafts, categories, and cultures, we help brands unlock new value in an increasingly digital world.
At the intersection of data, AI, creativity, and technology, we drive transformation for leading organizations, including L’Oréal, Mars, Audi, P&G, Volkswagen, Dolby, and more.
At Valtech, we don’t just talk about transformation. We make it happen. Our people are the heart of our success, and we foster a workplace where everyone has the support to thrive, grow and innovate.
Are you ready to create what’s next? Join us.
Responsibilities
Demonstrate deep knowledge of the data engineering domain to build and support batch (non-interactive, distributed) and real-time, highly available data pipelines
Build fault-tolerant, self-healing, adaptive, and highly accurate data computational pipelines
Provide consultation and lead the implementation of complex programs
Develop and maintain documentation relating to all assigned systems and projects
Tune queries over billions of rows of data in a distributed query engine
Perform root cause analysis to identify permanent resolutions to software or business process issues
Implement and maintain dbt transformation models, CI pipelines, and data contracts for curated campaign, ad group, keyword, audience, and landing-page marts.
Build and monitor data quality gates (Great Expectations and reconciliation checks) and freshness SLOs.
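A reconciliation check of the kind listed above can be sketched in a few lines of plain Python; the metric names and relative tolerance below are assumptions for illustration, not a Great Expectations API:

```python
def reconcile(source_totals: dict, mart_totals: dict, rel_tol: float = 0.005):
    """Compare source-system totals to curated-mart totals.

    Returns a list of (metric, source_value, mart_value) tuples for metrics
    that differ by more than `rel_tol` relative error; empty list means pass.
    """
    failures = []
    for metric, src in source_totals.items():
        dst = mart_totals.get(metric, 0.0)
        denom = max(abs(src), 1e-9)  # guard against division by zero
        if abs(src - dst) / denom > rel_tol:
            failures.append((metric, src, dst))
    return failures
```

In practice a gate like this would run after each load and fail the pipeline (or page the on-call) when the failure list is non-empty.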
Optimize BigQuery cost and performance using query tuning, storage design, and reservation strategy.
Implement platform hardening controls including retries, dead-letter queues, DR runbooks, and support for VPC-SC and DLP validation.
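The retry-plus-dead-letter pattern named above can be sketched as follows; the backoff parameters, handler signature, and in-memory dead-letter list are assumptions standing in for a real queue such as a Pub/Sub dead-letter topic:

```python
import time

def process_with_retries(message, handler, max_attempts=3,
                         base_delay=0.0, dead_letters=None):
    """Invoke `handler(message)` with retries and exponential backoff.

    If every attempt raises, the message and its last error are routed to
    `dead_letters` (a stand-in for a dead-letter topic) and None is returned.
    """
    if dead_letters is None:
        dead_letters = []
    last_exc = None
    for attempt in range(max_attempts):
        try:
            return handler(message)
        except Exception as exc:
            last_exc = exc
            time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    dead_letters.append((message, repr(last_exc)))
    return None
```

Keeping poison messages out of the hot path this way lets the pipeline stay healthy while failed payloads are inspected and replayed separately.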