DataOps Engineer
Requirements
• Senior Data Engineering Foundations (5+ Years): Strong experience in Python and deep mastery of SQL and Postgres.
• Modern Data Stack: Hands-on experience with Airflow, dbt, and Airbyte.
• Analytical Data Systems: Experience building analytical data systems over Data Lakes (e.g., AWS S3, Athena, EMR, Glue, Iceberg/Delta).
• Cloud Mastery: Solid understanding of AWS services (S3, EC2, RDS, IAM, etc.).
• Infrastructure & Containers: Proficiency with Terraform, Docker, and Kubernetes.
• Operations Mindset: A strong understanding of GitOps and CI/CD principles, and a passion for automation.
• Teamwork: Great capacity for collaborative, async, and written communication. You take ownership and follow through on your commitments.
• Observability: Experience setting up monitoring and alerting systems to ensure the health of data pipelines and infrastructure.
• Code Hygiene: A sharp sense of code hygiene, including code review, documentation, testing, and CI/CD (Continuous Integration/Continuous Delivery).
• Stream Processing Knowledge: Experience with stream processing tools such as Kafka Streams, Kinesis, or Spark.
• Python's Scientific Stack: Proficiency with numpy, pandas, jupyter, matplotlib, scikit-learn, and related tools.
• English Proficiency: Solid command of written English for technical documents, such as Design Documents.
Responsibilities
• Develop and implement strategies for efficient data collection, storage, processing, analysis, and reporting to support business decisions.
• Design, develop, deploy, monitor, troubleshoot, optimize, maintain, document, test, secure, audit, migrate, back up, restore, archive, and dispose of databases or other information systems as required by the organization's needs.
• Collaborate with cross-functional teams to identify and prioritize data requirements for new products and services; develop solutions that meet these requirements while ensuring compliance with relevant regulations (e.g., GDPR, CCPA).
• Monitor system performance using tools like SQL Server Management Studio or Oracle Enterprise Manager, identifying bottlenecks and optimizing queries to improve efficiency.
• Implement data quality controls through automated scripts that identify incomplete, inconsistent, inaccurate, or duplicate records; develop procedures for resolving these issues while maintaining compliance with relevant regulations (e.g., GDPR).
• Develop custom reports and dashboards using tools like Power BI to give business users actionable insights into data trends and patterns that inform decision making.
• Collaborate with stakeholders across the organization, including product managers, marketing teams, and customer service representatives, to understand their information needs; develop solutions tailored to these requirements while ensuring compliance with relevant regulations (e.g., GDPR).
• Stay up to date on emerging technologies and best practices in data management by attending industry conferences, webinars, and workshops; share knowledge gained with cross-functional teams to improve the overall efficiency and effectiveness of the organization's information systems.
Benefits
• Remote-first culture: work from anywhere!
• AWS, dbt, Google Cloud, Azure & Databricks certifications fully covered.
• In-company English lessons.
• Birthday off + an extra vacation week (Mutt Week!)
• Referral bonuses: help us grow the team & get rewarded!
• Maslow: monthly credits to spend in our benefits marketplace.
• Annual Mutters' Trip: an unforgettable getaway with the team!