
cygnify - Database Engineer - Singapore

Singapore, Hybrid · 3w ago

In Office · Mid · APAC · Cloud Computing · Data Analytics · Data Engineer · Airflow · Docker · Kubernetes · Jenkins · MLOps


Requirements

• Degree in Computer Science, Software Engineering, Data Science, or equivalent experience.
• 2-5+ years of hands-on experience in DevOps, Data Platform Engineering, or Infrastructure Engineering, supporting big data or analytics platforms.
• Experience managing on-premises and cloud infrastructure, including compute, storage, and network configuration.
• Hands-on experience with cloud and/or hybrid environments supporting big data pipelines, with Spark / PySpark and Airflow at an operational level (job execution, basic tuning, failure handling).
• Knowledge of containerization and orchestration, including Docker fundamentals and Kubernetes (deployments, services, ingress, scaling, resource limits).
• Strong understanding of networking fundamentals, including VPC design, routing, DNS, firewalls, and load balancing.
• Knowledge of security concepts such as IAM, access control, and compliance basics.
• Experience with CI/CD pipelines and automation using tools such as GitLab CI/CD or Jenkins.
• Basic understanding of MLOps concepts from a platform and infrastructure support perspective.
• Strong troubleshooting and problem-solving skills in production environments.
• Comfortable working with cross-functional teams (data engineers, ML engineers, security, product).
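To give a feel for the "job execution, basic tuning, failure handling" expectation, here is a minimal, stdlib-only sketch of retry-with-backoff logic of the kind schedulers such as Airflow apply to tasks. The names `run_with_retries`, `max_retries`, and `backoff_s` are illustrative, not part of any specific tool's API.

```python
import time

def run_with_retries(job, max_retries=3, backoff_s=1.0):
    """Run a job callable, retrying failures with exponential backoff.

    Illustrative sketch only: mirrors the retry behaviour a workflow
    scheduler applies to a failing task before surfacing the error.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return job()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries: surface the failure for alerting
            time.sleep(backoff_s * 2 ** (attempt - 1))  # exponential backoff

# Example: a job that fails twice with a transient error, then succeeds.
calls = {"n": 0}

def flaky_job():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "done"

result = run_with_retries(flaky_job, max_retries=3, backoff_s=0.01)
```

In a real deployment the retry count and backoff would be configured on the scheduler (e.g. per-task retry settings in Airflow) rather than hand-rolled like this.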

Responsibilities

1. Manage On-Prem and Cloud Data Platforms
• Maintain and support on-premises clusters, including compute, storage, networking, and system configurations.
• Provision, configure, and manage cloud infrastructure, including AWS S3, EMR, Redshift, and RDS.
• Monitor platform performance, capacity, and availability; implement operational alerts, monitoring, and runbooks.

2. Build, Operate & Support Data Pipelines
• Develop and support ETL/ELT pipelines using PySpark and Airflow, ensuring reliable ingestion, transformation, and data loading.
• Operate pipelines across hybrid environments, handling job execution, retries, basic performance tuning, and failure recovery.
• Validate, clean, and standardize datasets; monitor pipeline health, data freshness SLAs, and failure patterns; perform root cause analysis and remediation.
• Support data storage platforms such as S3, PostgreSQL, Redshift, and MongoDB from an operational and platform perspective.
• Automate DataOps and MLOps workflows using CI/CD pipelines (e.g., GitLab CI/CD, Jenkins).

3. Security, Compliance & Governance
• Implement secure access controls (IAM, VPC, security groups), encryption, backups, and disaster recovery mechanisms.
• Partner with infrastructure and security teams to ensure compliance with PDPA, GDPR, and internal governance policies.
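The "monitor data freshness SLAs" duty above can be sketched as a simple check: compare a dataset's last-update timestamp against its allowed staleness window. This is a minimal, stdlib-only illustration; the function name `freshness_breaches` and the one-hour SLA are assumptions for the example, not a real monitoring API.

```python
from datetime import datetime, timedelta, timezone

def freshness_breaches(last_updated, sla, now=None):
    """Return True if a dataset's last update is older than its SLA window."""
    now = now or datetime.now(timezone.utc)
    return now - last_updated > sla

# Example: a 1-hour freshness SLA checked at a fixed point in time.
now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
fresh = datetime(2024, 1, 1, 11, 30, tzinfo=timezone.utc)  # 30 min old
stale = datetime(2024, 1, 1, 6, 0, tzinfo=timezone.utc)    # 6 h old
sla = timedelta(hours=1)

ok = freshness_breaches(fresh, sla, now=now)      # False: within SLA
breached = freshness_breaches(stale, sla, now=now)  # True: breach, raise an alert
```

In production this kind of check would typically run on a schedule and feed the alerting and runbook tooling mentioned above.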
