Data Engineer
Requirements
• Airflow for workflow management (a minimal DAG sketch follows this list)
• OLAP databases, distributed query engines, and data processing frameworks (ClickHouse, Trino, Spark, etc.) for analytical workloads
• Solid understanding of data modeling and data engineering best practices
• Familiarity with event-based data architectures and streaming (Spark, Flink)
• IaC: Terraform and Terragrunt for cloud infrastructure deployments
• IaC: Helm, Kustomize, and Argo CD for Kubernetes (k8s) deployments
• Experience with building and owning cloud infrastructure

What Else Matters?
• Proactivity – We love team members who take initiative and provide feedback
• Critical thinking – We value problem-solvers who think beyond just writing code
• Adaptability – Our industry is evolving fast, and we need people who thrive in change
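To make the Airflow requirement concrete, here is a minimal sketch of the kind of DAG this role would own, assuming Airflow 2.x; the DAG id, task names, and extract/load logic are illustrative placeholders rather than details of the actual stack.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder extract step: pull one day's worth of data from a source system.
    print(f"extracting raw orders for {context['ds']}")


def load_to_warehouse(**context):
    # Placeholder load step: write the extracted batch into the warehouse.
    print("loading raw orders into the analytics warehouse")


with DAG(
    dag_id="orders_daily",              # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                  # Airflow >= 2.4 parameter name
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load                     # run extract, then load
```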
Responsibilities
• Design, build, and maintain data pipelines using Airflow and Trino for ingesting and processing data from multiple systems.
• Implement monitoring, validation, and alerting mechanisms to ensure the accuracy and consistency of pipeline outputs (a small validation sketch follows this list).
• Deploy, monitor, and manage platform infrastructure as needed.
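As a hedged illustration of the monitoring and validation responsibility, the sketch below shows a simple row-count check with an alert hook; the webhook URL, threshold, and table name are assumptions made for the example, not details from this posting.

```python
import requests

# Hypothetical alerting endpoint; a real pipeline would read this from config.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/EXAMPLE"


def send_alert(message: str) -> None:
    # Post a short message so on-call engineers see failed checks quickly.
    requests.post(SLACK_WEBHOOK_URL, json={"text": message}, timeout=10)


def check_row_count(actual: int, expected_min: int, table: str) -> None:
    # Fail loudly if a pipeline output looks incomplete or inconsistent.
    if actual < expected_min:
        send_alert(
            f"Data quality check failed: {table} has {actual} rows, "
            f"expected at least {expected_min}"
        )
        raise ValueError(f"Row count check failed for table {table}")


if __name__ == "__main__":
    # Example values only; in Airflow this would run as a validation task.
    check_row_count(actual=10_500, expected_min=10_000, table="daily_sales")
```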
About the Role
Our customers — cannabis retailers — rely on accurate, timely, and trustworthy data to run their business. From sales analytics to inventory tracking and operational reporting, our data pipelines are mission-critical.
As a Data Engineer, you’ll be at the center of our data ecosystem: ensuring that data flows from our product, integrations, and third-party systems into our analytics and operational platforms with speed, accuracy, and reliability. Your work will enable analysts, product teams, and leadership to make better decisions — and will directly impact the features our customers use every day.

What to do in the project?
• Design, build, and maintain data pipelines using Airflow and Trino to ingest and process data from multiple systems (databases, APIs, event streams, third-party integrations).
• Improve data quality and reliability — implement monitoring, validation, and alerting for pipelines to ensure accuracy and consistency.
• Develop and maintain the platform — deploy, monitor, and maintain services in our AWS cloud.
• Collaborate with cross-functional teams — work closely with Product Analytics, Data Architecture, and Engineering teams to define data requirements and deliver them effectively.
• Optimize performance and cost — fine-tune queries, pipelines, and storage for speed and efficiency (see the Trino query sketch at the end of this posting).
• Document data assets — maintain clear documentation for data sources, pipelines, and models.

What professional skills are important to us?
• 3+ years of experience in data engineering or a related field.
• Strong proficiency in SQL and Python.

Benefits
• Salary in USD (B2B contract with the US company)
• 100% remote – We’re a remote-first company, no offices needed!
• Flexible working hours – Core team time: 09:00–15:00 GMT (flexible per team)
• 20 paid vacation days per year
• 12 holidays per year
• 3 sick leave days
• Medical insurance after probation
• Equipment reimbursement (laptops, monitors, etc.)

Hiring Process
• Recruiter Call (1 hour) – Includes a short English check
• Live-coding session (1 hour)
• System Design (1 hour)
• Final Interview (up to 45 minutes)
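To illustrate the analytical workloads and query tuning mentioned above, here is a minimal sketch of running a query against Trino from Python using the trino client library; the host, catalog, schema, and table names are hypothetical placeholders.

```python
import trino

# Hypothetical connection details; real values would come from platform config.
conn = trino.dbapi.connect(
    host="trino.example.internal",
    port=8080,
    user="data-engineer",
    catalog="hive",
    schema="analytics",
)

cur = conn.cursor()
# A typical OLAP aggregation: revenue per day over the last 30 reported days.
cur.execute(
    """
    SELECT order_date, SUM(total_amount) AS daily_revenue
    FROM sales
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 30
    """
)
for order_date, daily_revenue in cur.fetchall():
    print(order_date, daily_revenue)
```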