cygnify

cygnify - Senior Principal Data Engineering Lead

Singapore, Hybrid · Posted 1 month ago
In Office · Principal · APAC · Cloud Computing · Artificial Intelligence · Senior Data Engineer · Snowflake · Team Leadership · Data Quality · Governance · AWS


Requirements

• Team Leadership: Recruit, mentor, and lead a hybrid team of data engineers and stewards across Singapore, Malaysia, and India, establishing in-house technical leadership and delivery ownership.
• Data Engineering Delivery: Oversee the design, development, and optimization of ELT/ETL pipelines and data models, ensuring scalable, reusable, and cost-efficient workflows.
• Data Quality & Stewardship: Institutionalize stewardship processes by defining ownership models, implementing data quality monitoring, and driving remediation workflows with cross-functional data users.
• Operational Excellence: Manage daily pipeline operations, SLA compliance, and production issue resolution with strong root-cause analysis and continuous improvement.
• Technical Governance: Set engineering standards for observability, RBAC, cost tagging, and CI/CD practices.
• Collaboration & Enablement: Enable self-service analytics by curating trusted datasets and modeled views in partnership with BI and business teams.
• 8–12 years of experience in cloud-native data engineering, with strong architecture and delivery experience on AWS.
• Proven leadership of cross-functional and hybrid engineering teams, including vendor-augmented resources.
• Experience partnering with BI and business teams to design modeled datasets and enable self-service analytics.
• Deep hands-on technical expertise, including:
  • Snowflake: schema design, Streams/Tasks, stored procedures, UDFs, RBAC, performance tuning, Cortex AI, Streamlit, and cost monitoring.
  • Airflow or similar data orchestration tools: orchestration, scheduling, dependency management, and observability.
  • Python and SQL: pipeline scripting, transformation logic, and data validation.
  • ELT/ETL frameworks: Airbyte, Fivetran, and custom connector development.
  • AWS services: S3 (data lake structures and archival), Lambda, KMS, Transfer Family, CloudWatch, and SageMaker.
• Demonstrated success delivering medallion architecture (Bronze/Silver/Gold) and enabling self-service data use cases.
• Experience building data quality frameworks, stewardship policies, and data lineage tracking across enterprise datasets.
• Familiarity with machine learning integration using platforms such as AWS SageMaker.
• Proven ability to troubleshoot complex data issues, lead root-cause analysis, and ensure production stability.
• Track record of transitioning delivery ownership from vendors to internal teams while maintaining quality and velocity.
