wagey.gg
Graphcore

Graphcore - Lead Analytics Engineer

Bristol, UK · 2d ago
In Office · Staff · EMEA · Data Analytics · Oil & Gas · Analytics Engineer · dbt · Reporting · SQL · Documentation · Data Quality · PostgreSQL · Redshift · Metabase · Python · Git · Data Governance


Responsibilities

• Own the dbt transformation layer, building, maintaining and evolving data models that support reliable self-service analytics across Graphcore.
• Build strong working relationships with stakeholders across business and technical functions to understand priorities, processes, definitions and decision-making needs.
• Work closely with stakeholders to discover, clarify and challenge requirements, turning ambiguous questions into well-structured analytical datasets and trusted metrics.
• Translate business processes and raw datasets into intuitive, flexible and governed analytical models that support reporting, planning and operational decision-making.
• Design clear, maintainable SQL models with a well-structured approach to naming, layering, reuse and long-term sustainability.
• Partner with stakeholders to define, document and maintain trusted metric and KPI logic, ensuring consistency as requirements evolve.
• Implement robust testing, validation and documentation practices in dbt to improve data quality, trust and discoverability.
• Work closely with Data Engineering to align on source data structures, manage upstream schema changes and support reliable downstream consumption.
• Establish and maintain CI/CD practices for analytics engineering, including automated checks, review workflows and safe release processes.
• Optimise model performance and warehouse efficiency through pragmatic design choices, including incremental approaches, efficient joins and platform-aware tuning.
• Support self-service analytics by creating datasets that are easy to understand and consume, with clear documentation and guidance for common use cases.
• Contribute to the effective use of visualisation and reporting tools by modelling data for dashboard performance, usability and consistency.
• Help shape analytics engineering standards and day-to-day practices within the wider Data & Analytics function through collaboration, review and continuous improvement.
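For a sense of what the modelling practices above (explicit layering, incremental approaches, governed naming) look like in dbt, here is a minimal sketch; the model, table and column names are illustrative, not taken from the posting:

```sql
-- models/marts/fct_orders.sql  (hypothetical model name)
-- Incremental materialisation: only new rows are processed on each run.
{{ config(
    materialized='incremental',
    unique_key='order_id'
) }}

with staged_orders as (
    -- Reference a staging-layer model rather than a raw source,
    -- keeping the layering (staging -> marts) explicit.
    select * from {{ ref('stg_orders') }}
)

select
    order_id,
    customer_id,
    order_date,
    order_total
from staged_orders
{% if is_incremental() %}
  -- On incremental runs, restrict to rows newer than the latest
  -- already present in the target table.
  where order_date > (select max(order_date) from {{ this }})
{% endif %}
```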
Candidate Profile

Essential

• Demonstrable experience building production-quality dbt models that enable reliable self-service analytics.
• Strong SQL skills and experience designing maintainable transformation layers within a modern data platform.
• Proven ability to build strong relationships with stakeholders and work closely with business users to understand requirements, processes and data needs.
• Proven ability to translate business requirements and raw datasets into flexible, intuitive data models that stakeholders can use confidently.
• Strong grasp of analytics engineering best practices, including model layering, documentation, testing and semantic consistency.
• Experience defining and maintaining trusted metrics, KPIs and curated datasets for business use.
• Strong understanding of data quality, change management and the practices needed to maintain trust in analytical outputs.
• Experience applying CI/CD practices to analytics workflows, including automated testing, deployment discipline and review processes.
• Experience working with relational databases and analytical warehouse technologies.
• Strong communication skills, including the ability to influence decisions, challenge assumptions constructively and work effectively with both technical and non-technical stakeholders.
• A practical, delivery-focused approach to problem solving.

Desirable

• Experience with data warehouse technologies such as Redshift, PostgreSQL or ClickHouse.
• Experience supporting self-service visualisation and reporting tools such as Superset, Metabase or similar platforms.
• Familiarity with semantic or metrics-layer tooling.
• Python experience, including building lightweight data applications or utilities.
• Experience improving dataset discoverability, documentation and adoption across an organisation.
• Familiarity with data governance practices, including access control and sensitive data handling.
• Experience working in a Git and pull-request based development workflow.
• Experience working in a fast-moving product, technology or engineering-led environment.
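The testing and documentation practices the role calls for are typically expressed in dbt as YAML alongside the models; a minimal sketch, with hypothetical model and column names:

```yaml
# models/marts/schema.yml  (hypothetical file)
version: 2

models:
  - name: fct_orders
    description: "One row per order; grain and metric logic documented here."
    columns:
      - name: order_id
        description: "Primary key of the order."
        tests:
          - unique
          - not_null
      - name: customer_id
        tests:
          - not_null
          # Referential-integrity check against a dimension model.
          - relationships:
              to: ref('dim_customers')
              field: customer_id
```

In a CI/CD setup like the one described, `dbt build` would run these tests on every pull request before changes are merged.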
