Resilient Co - Principal Data Engineer
Requirements
• Bachelor’s or Master’s degree in Computer Science, Information Systems, or equivalent experience.
• 10+ years of experience in data engineering or a related technical field.
• Expert proficiency in SQL, Python, and Spark for large‑scale data processing.
• Extensive experience designing and building cloud‑native data pipelines, data models, and distributed data systems (Delta Lake, Spark, Unity Catalog, Jobs, Workflows).
• Strong experience designing and tuning distributed data processing systems at scale.
• Deep knowledge of data engineering best practices, including version control, CI/CD, automated testing, DevOps/DataOps, and observability.
• Proven ability to lead cross‑functional technical initiatives and influence architectural direction.
• Strong problem‑solving, debugging, analytical, and collaboration skills; ability to thrive in agile, dynamic teams.
• Experience with Databricks Unity Catalog, Delta Live Tables, or Databricks Workflows.
• Advanced data modeling skills (dimensional, data vault, semantic layers).
• DataOps experience, including pipeline observability, monitoring, and automated quality checks.
• Experience with metadata management and governance platforms (Unity Catalog, Purview, Collibra, Alation).
• Experience with streaming frameworks used with Spark Structured Streaming (Kafka, Event Hubs, Kinesis).
• Experience contributing to architecture review boards, technical councils, or data governance forums.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment; final hiring decisions are made by humans. If you would like more information about how your data is processed, please contact us.
Responsibilities
• Collaborate with Data Architects and business partners to design and evolve enterprise data architecture and platform capabilities.
• Translate architectural strategy into technical designs and delivery plans across teams.
• Design, code, and optimize complex distributed data processing systems using Spark, Databricks, and cloud‑native data services.
• Develop canonical data models, semantic structures, and reusable datasets to support reporting and machine learning.
• Drive platform modernization initiatives such as Delta Lake adoption and metadata‑driven design.
• Create reusable frameworks and platform capabilities to accelerate analytics, ML, and governed self‑service data access.
• Lead root‑cause analysis for major data issues and implement long‑term improvements in data quality, lineage, and observability.
• Provide technical leadership, guidance, and mentorship to Staff, Senior, and mid‑level data engineers.
• Influence cross‑organizational roadmaps and engineering investments; participate in architecture reviews and governance forums.