wagey.gg

Senior Data Engineer

Precision Medicine Group · Remote - Europe · 1mo ago
Remote · Senior · EMEA · Pharmaceuticals · Life Sciences · Senior Data Engineer · Director of Engineering · SQL · Python · Data Quality · Documentation · Snowflake


Requirements

• Education: Bachelor's degree in computer science, information systems, engineering, or a related technical field. An advanced degree is a plus but not required.
• 5+ years of professional data engineering experience, with at least 3 years working in modern cloud data environments
• 2+ years of hands-on experience with Snowflake, including performance tuning, Snowpipe, and Snowflake-specific optimization patterns
• 2+ years working with dbt (Core or Cloud) in production environments, including incremental models, testing, documentation, and package management
• Demonstrated experience with healthcare data. You understand ICD-10, NPI, and NDC codes, and the messy realities of claims data quality
• Production experience with orchestration tools such as Airflow, Dagster, or similar. You've built and maintained DAGs that run reliably at scale
• Strong SQL skills. You write efficient, readable, well-structured SQL and can debug complex query performance issues
• Python proficiency for data processing, scripting, and pipeline automation

Preferred:

• Experience with major claims data aggregators (e.g., McKesson, Symphony Health, IQVIA, Komodo)
• Familiarity with formulary and payer data
• Experience implementing medallion architecture or similar layered data patterns
• Background in life sciences, pharma services, or healthcare analytics organizations
• Experience with data observability tools
• Exposure to Snowpipe, Matillion, Fivetran, or other managed ingestion solutions
• Familiarity with Git-based workflows, CI/CD for data pipelines, and infrastructure-as-code

Technical Stack and Environment:

• Data platforms: Snowflake, AWS Redshift, S3
• Transformation: dbt, Matillion
• Data architecture
• Orchestration: Airflow, Dagster
• Ingestion: Snowpipe, Matillion, Fivetran, custom Python-based pipelines
• Quality monitoring and observability: dbt tests, Elementary, Monte Carlo
• Version control & CI/CD: Git, GitHub Actions or similar, PR-based workflows
• Languages: SQL, Python
• Tooling: Jira, Confluence

Ways of Working and Collaboration:

• Reporting structure: You report directly to the Engineering Director.
• Architecture-guided, not architecture-dependent. You take direction from the Architect on patterns and standards, but you don't need handholding on implementation. You can turn a design decision into production code.
• Ownership mentality. You own your pipelines end-to-end, from ingestion to quality monitoring to incident response. "It's not my job" isn't in your vocabulary.
• Clear communication. You can explain technical trade-offs to non-technical stakeholders and write documentation that others can use. You participate constructively in code reviews.
• Collaborative, not siloed. You work closely with other members of the engineering team. You understand that your Bronze and Silver layers exist to serve downstream consumers, and you build with their needs in mind.
• Quality-first mindset. You believe in testing, documentation, and observability. You're uncomfortable shipping code that isn't tested and pipelines that aren't monitored.

Location and Travel: This is a remote-friendly role with flexibility for EU-based or India-based candidates. Occasional travel for stakeholder meetings and strategic planning sessions.

Who This Role Is For

This role is for you if:

• You want to build foundational data infrastructure at a growing healthcare and life sciences company, not maintain legacy systems.
• You're energized by healthcare data complexity. Claims data is messy, and you find that interesting rather than frustrating.
• You thrive with autonomy and accountability. You want ownership of outcomes, not just tasks.
• You care about craft. Clean code, clear documentation, and maintainable systems matter to you.
• You want to see the impact of your work. The pipelines you build will power teams across the organization.

This role is not for you if:

• You're looking for a pure greenfield, no-constraints environment. We have an architecture, standards, and patterns. You will contribute to them, but you won't ignore them.
• You prefer a highly specialized, narrow scope. This role spans ingestion, transformation, quality, and collaboration. You'll wear multiple hats.
• You want to work in isolation. This is a collaborative team, and you'll interact regularly within the engineering team and cross-functionally.
• You seek a management track. This is a hands-on IC role. Technical leadership is absolutely a yes, but this is not a people management role.

Any data provided as part of this application will be stored in accordance with our Privacy Policy. For CA applicants, please also refer to our CA Privacy Notice.
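The healthcare-data requirement is concrete: an NPI, for instance, is a 10-digit identifier whose last digit is a Luhn check digit computed over the 9-digit base prefixed with the constant 80840, so structural validation takes only a few lines of Python. A minimal sketch (the function name is illustrative, not from this posting):

```python
def npi_check_digit_valid(npi: str) -> bool:
    """Validate a 10-digit NPI via the Luhn algorithm.

    Per the NPI spec, the check digit is computed over the 9-digit
    base identifier prefixed with the constant '80840'.
    """
    if len(npi) != 10 or not npi.isdigit():
        return False
    digits = [int(d) for d in "80840" + npi]
    total = 0
    # Luhn: from the right, double every second digit and sum the
    # digits of each product (equivalent to subtracting 9 when > 9)
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0
```

A check like this catches transcription errors early in the Bronze layer, before bad identifiers propagate downstream.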

Responsibilities

• Build and Maintain Bronze/Silver Layer Pipelines: You will ensure core data sources land accurately, on time, and with full lineage.
• Lead Data Ingestion, Transformation, and Enrichment: You will own the end-to-end pipeline from raw file landing through cleansed, conformed staging tables, including deduplication, standardization, code mapping, and entity resolution.
• Develop Automated Ingestion Pipelines: You will use Snowpipe, Matillion, or custom solutions with reliability, observability, and minimal manual intervention in mind.
• Implement dbt Models: You will write clean, tested, documented SQL that follows medallion architecture patterns and supports incremental processing at scale.
• Establish Scalable Engineering Patterns: You will create reusable templates, macros, and testing frameworks that accelerate onboarding of new data sources without sacrificing quality.
• Ensure Data Quality and Observability: You will implement dbt tests, freshness checks, and anomaly detection. You will own pipeline reliability and respond to data quality incidents.
• Collaborate with the Architect and Stakeholders: You will translate business requirements into technical specifications, participate in design reviews, contribute to ADRs, and ensure pipelines align with enterprise standards.
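The standardization and code-mapping work described above often includes normalizing drug codes: FDA-assigned 10-digit NDCs arrive hyphenated in 4-4-2, 5-3-2, or 5-4-1 segment layouts, and billing formats zero-pad the short segment to a uniform 11-digit 5-4-2 shape. A minimal sketch, with an illustrative function name:

```python
def normalize_ndc(ndc: str) -> str:
    """Normalize a hyphenated 10-digit NDC to the 11-digit 5-4-2 form.

    Zero-pads the labeler segment to 5 digits, the product segment
    to 4, and the package segment to 2, then concatenates them.
    """
    parts = ndc.split("-")
    if len(parts) != 3:
        raise ValueError(f"expected a hyphenated NDC, got {ndc!r}")
    labeler, product, package = parts
    return labeler.zfill(5) + product.zfill(4) + package.zfill(2)
```

For example, `normalize_ndc("1234-5678-90")` yields `"01234567890"`. Applying this once in the Silver layer gives downstream consumers a single join key across sources that hyphenate differently.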
