Level Access - Senior Data Engineer
Requirements
• 8+ years of experience in data engineering, building production-grade ETL/ELT pipelines at scale.
• Strong proficiency in Python (PySpark, scripting, API integrations, pipeline automation).
• Hands-on experience with Databricks (notebooks, jobs, workflows, Unity Catalog) and AWS (Lambda, ECS/Fargate, S3, Glue, Athena, DynamoDB, EventBridge, Step Functions, IAM).
• Expertise in data modeling and lakehouse architectures (Medallion, Delta Lake, Parquet, schema evolution, incremental upserts).
• Proficiency in SQL and dbt for building modular, tested, and documented models.
• Experience with REST API integration, entity resolution, and record matching strategies.
• Familiarity with Git-based version control and CI/CD for data pipelines.
Preferred
• Advanced Databricks features (Delta Live Tables, MLflow integration).
• Salesforce API experience (Bulk API 2.0, REST API, SOQL).
Application Process
If you thrive in a fast-paced environment and are eager to make a significant impact in the field of accessibility, we would love to hear from you! This is a full-time, salaried position offering a competitive benefits package, including bonus opportunities, generous paid time off, paid holidays, and a range of programs designed to support employee well-being and work-life balance. Interested in immediate consideration? Please submit your resume to be considered for this role.
Responsibilities
• Design, build, and maintain a scalable data platform on Databricks and AWS, following Medallion (Bronze/Silver/Gold) architecture principles.
• Develop and manage data ingestion and ETL/ELT pipelines from core business systems, ensuring data quality, reliability, and performance.
• Implement data quality monitoring (freshness, completeness, validity, anomaly detection) with automated alerting.
• Establish and maintain Golden Record identity resolution and reverse sync across source systems.
• Govern metadata, lineage, and permissions using Databricks Unity Catalog and platform standards.
• Collaborate with cross-functional teams to deliver analytics- and AI-ready datasets that support business objectives.
• Communicate effectively with stakeholders, manage ambiguity, and drive results in a fast-paced environment.