Data Engineer
Requirements
• An engineering school degree or equivalent, or a Master's degree in Computer Science, Data Science, or Applied Mathematics.
• 7–12 years in data engineering or backend data platforms.
• Strong Python and SQL; experience with Spark and modern ELT/orchestration tools (e.g., dbt, Airflow).
• Hands-on experience with Databricks and/or Snowflake in production.
• Experience on AWS (S3, Glue/Athena, Redshift, Lambda) and with lakehouse formats (Iceberg or Delta Lake).
• Familiarity with data security, governance, and compliance.
• Knowledge of cost management principles.
• Salesforce data knowledge is a plus, not mandatory.
• Foundational AI/ML understanding and motivation to contribute to early use cases.
• Fluency in English and French, clear communication, an ownership mindset, and a collaborative approach.
• Excellent interpersonal skills and the ability to interact with diverse business stakeholders.
Where you'll be
• Based in Paris (75002), France.
• Hybrid: 3 days in the office / 2 days remote.
Responsibilities
• Data Pipeline Development: Design, build, and maintain large-scale ingestion and processing pipelines using Python/SQL and Spark; operate within a lakehouse stack (Apache Iceberg or Delta Lake).
• Databricks & Snowflake Engineering: Implement and optimize workflows on Databricks (Jobs, Workflows, Delta) and/or Snowflake (Warehouses, Tasks, Streams) and/or native cloud provider solutions (AWS/Azure).
• Platform Optimization: Improve performance, reliability, and cost on AWS (S3, Redshift, Athena/Glue, Lambda), with strong observability and IaC practices.
• Secure Data Management: Apply security-by-design, data governance, and compliance best practices across the storage, compute, and sharing layers.
• AI Use Case Enablement: Partner with the Product and R&D teams to prepare data for initial AI/ML use cases (feature pipelines, data quality, lineage).
• Data Sharing & Integration: Enable secure, efficient data access for customers via connectors, APIs, and lakehouse sharing patterns (e.g., Delta Sharing, Snowflake data sharing).
Benefits
• Salary
• Paid time off (PTO)
• Insurance
• Remote work options (hybrid schedule)