
applike group - Senior Data Engineer (f/m/d)

Hamburg · Hybrid · $206k · posted 5 days ago
In Office · Senior · EMEA · Mental Health · Data Analytics · Cloud Computing · Senior Data Engineer · Senior Data Scientist · Java · Go · Python · AWS · Kafka

Requirements

We strongly believe in self-hosted open-source technologies backed by battle-proven AWS services:

• Apache Kafka as our event streaming system
• Apache Flink (using Java) for stateful stream processing and real-time feature engineering (a minimal sketch follows this list)
• Go as our primary backend language
• Kubernetes and Terraform to manage our infrastructure
• S3 and Druid for long-term data storage
• DynamoDB and Redis to store per-user data and power low-latency real-time aggregations
• TensorFlow and PyTorch for training ML models; TensorFlow Serving and Triton for inference
• Prometheus, Grafana, the ELK stack, and OpenObserve for logging and monitoring
• Airflow for orchestration
• … and we’re always open to trying new technologies that suit each case best

Scale at a glance:

• Thousands of requests for distributions every second
• Our ML models handle those at a p99 latency of 100 ms
• 100k+ ML predictions per second
• 2 TB of data ingested in real time every day
• 100+ Airflow jobs and other data pipelines

• You have 5+ years of software development experience working on a modern data engineering stack.
• You have proven experience with Apache Flink for stateful stream processing and real-time feature computation.
• You have worked extensively with real-time data streaming systems such as Kafka, Kinesis, or Pub/Sub.
• You have experience with systems handling several TB of data per day and several thousand events per second.
• You know how to identify bottlenecks in data pipelines and have experience optimizing and scaling them.
• You have strong Java knowledge; Go or Python knowledge is a plus.
• You have worked closely with Data Scientists on low-latency online ML systems.
• You know how to move beyond "raw data" to design robust, multi-layered data architectures. You have hands-on experience using dbt to build these layers and can guide us on the best tools and formats to manage them at scale.
• You know scheduling frameworks such as Airflow or Kubeflow.
• You know the concepts of data quality and how to apply them in production.
• You are familiar with relational and NoSQL databases.
• You are open to relocating to Hamburg, Germany.
• You have strong problem-solving skills and the ability to tackle complex technical challenges.
• Plus: You have hands-on experience working with AWS, Terraform, and Kubernetes.
• Plus: You are familiar with the Medallion Architecture and have experience building Semantic Layers for downstream data consumption.

Fuel for the Journey: Benefits to Support Your Ambitions

• Invest in Your Future: Regular feedback and our development program support your growth, helping you expand your skill set and achieve your career goals.
• Easy Arrival to adjoe: From signing to settling in Hamburg, we’ve got you covered. Need a visa? No problem. Ready to build your new life and career at adjoe in Hamburg? We support every ambition, from learning German to a relocation bonus that helps you settle in and make Hamburg feel like home.
• Live Your Best Life, at Work and Beyond: We work in a hybrid setup with 3 core office days, plus flexible working hours. Enjoy 30 vacation days, 3 weeks of remote work per year, free access to an in-house gym with a wide variety of fitness classes, and mental health support through our Employee Assistance Program (EAP).
• Thrive Where You Work: Enjoy the Alster lake view from our central office, with top-notch equipment, fun open spaces, and a large variety of snacks and drinks.
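To make the Flink bullet concrete, here is a minimal sketch of the kind of stateful streaming job the stack implies, written in Java: it reads events from Kafka and computes per-user counts over one-minute windows, a toy stand-in for real-time feature computation. The broker address, topic name, and "userId,count" record format are illustrative assumptions, not details from the posting.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class UserFeatureJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Broker address and topic name are hypothetical placeholders.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("kafka:9092")
                .setTopics("ad-events")
                .setGroupId("feature-store")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "ad-events")
            // Assume a simple "userId,count" CSV payload for illustration.
            .map(line -> {
                String[] parts = line.split(",");
                return Tuple2.of(parts[0], Long.parseLong(parts[1]));
            })
            .returns(Types.TUPLE(Types.STRING, Types.LONG))
            .keyBy(t -> t.f0)                                  // partition state per user
            .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
            .reduce((a, b) -> Tuple2.of(a.f0, a.f1 + b.f1))    // per-user count per window
            // A real job would sink to DynamoDB/Redis; print() keeps the sketch runnable.
            .print();

        env.execute("user-feature-job");
    }
}
```

In production, the windowed aggregates would be written to the per-user stores named above (DynamoDB and Redis) rather than printed, so downstream models can read features with low latency.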
We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment. Final hiring decisions are ultimately made by humans. If you would like more information about how your data is processed, please contact us.

Responsibilities

• Scale the Feature Store: Leverage Apache Flink to translate Data Science requirements into high-performance, real-time streams. Enable Data Scientists to contribute new features, and implement them yourself when needed.
• Ensure Data Quality, Observability & Integrity: Implement data validation, monitoring, and governance processes to maintain accuracy, consistency, and reliability across all datasets and features in the Feature Store (a sketch of one validation pattern follows this list).
• Optimize Pipeline Performance: Identify and eliminate bottlenecks in complex ETL jobs, transforming long-running processes into streamlined, rapid-iteration cycles.
• Bridge Data Science & Backend Teams: Act as the key link between data science and backend engineering, ensuring seamless data integration and usage across the organization.
• Explore New Data Sources: Partner with Data Scientists to build custom ingestion logic for unstructured or atypical data sources, handling the heavy preprocessing needed for experimental research.
• Evolve the Data Architecture: Maintain and optimize our Data Lake. You’ll help us decide on the future of our storage (e.g., moving toward a Data Lakehouse model) and implement best practices.
• Work in an International Environment: Join an international, English-speaking team focused on scaling our adtech platform to new heights.
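As one illustration of the data-quality responsibility, the sketch below uses Flink's side-output mechanism to split a stream into valid and invalid records, so bad data can be routed to a dead-letter path and counted for alerting. The record format and the validation rule are assumptions made for the example, not part of the posting.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class ValidationJob {
    // Side output for records that fail validation; tagged so they can be
    // routed to a dead-letter topic and surfaced in monitoring.
    private static final OutputTag<String> INVALID =
            new OutputTag<String>("invalid-records") {};

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Stand-in input; in practice this would be a Kafka source as above.
        DataStream<String> raw = env.fromElements("u1,3", "bad-record", "u2,7");

        SingleOutputStreamOperator<String> valid = raw.process(
            new ProcessFunction<String, String>() {
                @Override
                public void processElement(String value, Context ctx, Collector<String> out) {
                    // Minimal schema check: "userId,count" with a numeric count.
                    String[] parts = value.split(",");
                    if (parts.length == 2 && parts[1].matches("\\d+")) {
                        out.collect(value);
                    } else {
                        ctx.output(INVALID, value);
                    }
                }
            });

        valid.print();                         // healthy records continue downstream
        valid.getSideOutput(INVALID).print();  // invalid records surface for alerting

        env.execute("validation-job");
    }
}
```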
