
Data Engineer

Kyivstar · Kyiv, Ukraine - Hybrid · 1mo ago
In Office · Mid · EMEA · Cloud Computing · Artificial Intelligence · Data Engineer · Python · SQL · Documentation · Airflow · Data Quality


Requirements

• Education & Experience: 3+ years of experience as a Data Engineer or in a similar role, building data-intensive pipelines or platforms. A Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field is preferred. Experience supporting machine learning or analytics teams with data pipelines is a strong advantage.
• Data Pipeline Expertise: Hands-on experience designing ETL/ELT processes, including extracting data from various sources, using transformation tools, and loading into storage systems. Proficiency with orchestration frameworks like Apache Airflow for scheduling workflows. Familiarity with building pipelines for unstructured data (text, logs) as well as structured data.
• Programming & Scripting: Strong programming skills in Python for data manipulation and pipeline development. Experience with NLP packages (spaCy, NLTK, langdetect, fastText, etc.). Experience with SQL for querying and transforming data in relational databases. Knowledge of Bash or other scripting for automation tasks. Writing clean, maintainable code and using version control (Git) for collaborative development.
• Databases & Storage: Experience working with relational databases (e.g., PostgreSQL, MySQL), including schema design and query optimization. Familiarity with NoSQL or document stores (e.g., MongoDB) and big data technologies (HDFS, Hive, Spark) for large-scale data is a plus. Understanding of or experience with vector databases (e.g., Pinecone, FAISS) is beneficial, as our NLP applications may require embedding storage and fast similarity search.
• Cloud Infrastructure: Practical experience with cloud platforms (AWS, GCP, or Azure) for data storage and processing. Ability to set up services such as S3/Cloud Storage, data warehouses (e.g., BigQuery, Redshift), and use cloud-based ETL tools or serverless functions. Understanding of infrastructure-as-code (Terraform, CloudFormation) to manage resources is a plus.
• Data Quality & Monitoring: Knowledge of data quality assurance practices. Experience implementing monitoring for data pipelines (logs, alerts) and using CI/CD tools to automate pipeline deployment and testing. An analytical mindset to troubleshoot data discrepancies and optimize performance bottlenecks.
• Collaboration & Domain Knowledge: Ability to work closely with data scientists and understand the requirements of machine learning projects. Basic understanding of NLP concepts and the data needs for training language models, so you can anticipate and accommodate the specific forms of text data and preprocessing they require. Good communication skills to document data workflows and to coordinate with team members across different functions.

A plus would be:

• NLP Domain Experience: Prior experience handling linguistic data or supporting NLP projects (e.g., text normalization, handling different encodings, tokenization strategies). Knowledge of Ukrainian text sources and data sets, or experience with multilingual data processing, can be an advantage given our project’s focus. Understanding of FineWeb2 or a similar processing pipeline approach.
• Advanced Tools & Frameworks: Experience with distributed data processing frameworks (such as Apache Spark or Databricks) for large-scale data transformation, and with message streaming systems (Kafka, Pub/Sub) for real-time data pipelines. Familiarity with data serialization formats (JSON, Parquet) and handling of large text corpora.
• Web Scraping Expertise: Deep experience in web scraping, using tools like Scrapy, Selenium, or Beautiful Soup, and handling anti-scraping challenges (rotating proxies, rate limiting). Ability to parse and clean raw text data from HTML, PDFs, or scanned documents.
• CI/CD & DevOps: Knowledge of setting up CI/CD pipelines for data engineering (using GitHub Actions, Jenkins, or GitLab CI) to test and deploy changes to data workflows. Experience with containerization (Docker) to package data jobs and with Kubernetes for scaling them is a plus.
• Big Data & Analytics: Experience with analytics platforms and BI tools (e.g., Tableau, Looker) used to examine the data prepared by the pipelines. Understanding of how to create and manage data warehouses or data marts for analytical consumption.
• Problem-Solving: Demonstrated ability to work independently in solving complex data engineering problems, optimizing existing pipelines, and implementing new ones under time constraints. A proactive attitude to explore new data tools or techniques that could improve our workflows.
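The text-normalization and de-duplication skills mentioned above can be illustrated with a short, stdlib-only Python sketch: exact de-duplication keyed on a hash of normalized text. The normalization rules and sample corpus here are illustrative assumptions, not Kyivstar's actual pipeline.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivially different copies hash alike."""
    return re.sub(r"\s+", " ", text.strip().lower())

def dedupe(docs):
    """Keep the first occurrence of each normalized document."""
    seen = set()
    out = []
    for doc in docs:
        key = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            out.append(doc)
    return out

corpus = ["Привіт, світе!", "привіт,   світе!", "Інший документ."]
unique = dedupe(corpus)
print(len(unique))  # the second string normalizes to the same key as the first
```

Web-scale pipelines (e.g., a FineWeb2-style approach) typically layer near-duplicate detection such as MinHash on top of exact hashing like this.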

Responsibilities

• Design, develop, and maintain ETL/ELT pipelines for gathering, transforming, and storing large volumes of text data and related information. Ensure pipelines are efficient and can handle data from diverse sources (e.g., web crawls, public datasets, internal databases) while maintaining data integrity.
• Implement web scraping and data collection services to automate the ingestion of text and linguistic data from the web and other external sources. This includes writing crawlers or using APIs to continuously collect data relevant to our language modeling efforts.
• Implement NLP/LLM-specific data processing: text cleaning and normalization, including filtering of toxic content, de-duplication, de-noising, and detection and removal of personal data.
• Build task-specific SFT/RLHF datasets from existing data, including data augmentation and labeling with an LLM as teacher.
• Set up and manage cloud-based data infrastructure for the project. Configure and maintain data storage solutions (data lakes, warehouses) and processing frameworks (e.g., distributed compute on AWS/GCP/Azure) that can scale with growing data needs.
• Automate data processing workflows and ensure their scalability and reliability. Use workflow orchestration tools like Apache Airflow to schedule and monitor data pipelines, enabling continuous and repeatable model training and evaluation cycles.
• Maintain and optimize analytical databases and data access layers for both ad-hoc analysis and model training needs. Work with relational databases (e.g., PostgreSQL) and other storage systems to ensure fast query performance and well-structured data schemas.
• Collaborate with Data Scientists and NLP Engineers to build data features and datasets for machine learning models. Provide data subsets, aggregations, or preprocessing as needed for tasks such as language model training, embedding generation, and evaluation.
• Implement data quality checks, monitoring, and alerting. Develop scripts or use tools to validate data completeness and correctness (e.g., ensuring no critical data gaps or anomalies in the text corpora), and promptly address any pipeline failures or data issues. Implement data version control.
• Manage data security, access, and compliance. Control permissions to datasets and ensure adherence to data privacy policies and security standards, especially when dealing with user data or proprietary text sources.
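The personal-data removal responsibility above can be sketched minimally with a regex-based scrubber for emails and phone-like numbers. The patterns and placeholder tokens are assumptions for illustration; a production pipeline would combine such rules with NER models and locale-aware logic.

```python
import re

# Illustrative patterns only; not exhaustive PII coverage.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{8,}\d")

def scrub_pii(text: str) -> str:
    """Replace emails and phone-like digit runs with placeholder tokens."""
    text = EMAIL.sub("<EMAIL>", text)
    text = PHONE.sub("<PHONE>", text)
    return text

sample = "Contact ivan.petrenko@example.com or +380 44 123 4567 for details."
print(scrub_pii(sample))  # Contact <EMAIL> or <PHONE> for details.
```

Replacing with typed placeholders rather than deleting outright preserves sentence structure, which matters when the scrubbed text later feeds language model training.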

Benefits

• Salary
• PTO (paid time off)
• Insurance
• Remote work options

