Abacus Insights

Abacus Insights - Data Operations Engineer Intern (Internship)

Pune, India (Hybrid) · Equity · posted 3 months ago

Requirements

• Transform How We Deliver: Redesign and standardize delivery methodologies, governance, and controls to drive consistency, predictability, and operational excellence across all projects.
• Lead with Data & Insight: Build a metrics-driven operating model using KPIs such as cycle time, cost-to-deliver, quality, and client outcomes to proactively manage risk and performance.
• Partner at the Executive Level: Work closely with Client Success, Product, Engineering, and Finance to manage scope decisions, align priorities, and resolve escalations with clarity and confidence.
• Scale Global Teams: Lead and mentor high-performing teams across geographies, fostering a culture of accountability, continuous improvement, and client focus.
• Optimize Resources: Oversee delivery capacity planning and resource allocation, balancing client demand with available talent while identifying risks and tradeoffs early.
• Embed Operational Rigor: Ensure every aspect of delivery, from project initiation through completion, is insight-driven, well-governed, and designed to scale.
• Champion Knowledge & Learning: Build and maintain strong knowledge-management practices so teams can easily access playbooks, lessons learned, and client insights.
• A strong foundation in healthcare payer operations, with experience supporting regional health plans and/or Blue plans.
• Minimum of 10 years of experience in delivery, operations, project management, or related roles, including senior leadership responsibility.
• Proven success leading large, distributed, global teams and driving results through influence, clarity, and trust.
• A track record of transforming delivery organizations: improving efficiency, quality, and client outcomes through process redesign and automation.
• Experience leveraging GenAI and modern tooling to enhance operational workflows and delivery execution.
• Exceptional executive presence and communication skills, with the ability to navigate complex stakeholder and client conversations.
• A data-driven mindset with strong analytical capabilities and the ability to translate insights into action.
• Comfort operating in a fast-paced, scaling SaaS environment, balancing strategic thinking with hands-on execution.
• Bachelor’s degree in Business, Operations, or a related field; advanced degrees are welcomed but not required.
• Compensation: Compensation for this role is based on experience, skills, and location, and includes base salary plus eligibility for performance bonuses and equity grants.

Responsibilities

• Monitor production data pipelines and systems, identifying failures, latency issues, schema changes, and data quality anomalies.
• Debug pipeline failures by analyzing logs, metrics, SQL outputs, and upstream/downstream dependencies.
• Assist in root cause analysis (RCA) for data incidents and contribute to implementing corrective and preventive solutions.
• Support the maintenance and optimization of ETL/ELT workflows to improve reliability, scalability, and performance.
• Automate recurring data operations tasks using Python, shell scripting, or similar tools to reduce manual intervention.
• Assist with data mapping, transformation, and normalization efforts, including alignment with Master Data Management (MDM) systems.
• Collaborate on the generation and validation of synthetic test datasets for pipeline testing and data quality validation.
• Shadow senior engineers to deploy, monitor, and troubleshoot data workflows on AWS, Databricks, and Kubernetes-based environments.
• Ensure data integrity and consistency across multiple environments (development, staging, production).
• Clearly document bugs, data issues, and operational incidents in Jira and Confluence, including reproduction steps, impact analysis, and resolution details.
• Communicate effectively with cross-functional, onsite, and offshore teams to escalate issues, provide status updates, and track resolutions.
• Participate in Agile ceremonies and follow structured incident and change management processes.
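To give candidates a feel for the monitoring and automation duties above, here is a minimal Python sketch of two such tasks: scanning pipeline log lines for failures and flagging basic data-quality anomalies. The function names, log format, and field names are illustrative assumptions, not part of Abacus's actual stack; production pipelines would typically use a dedicated framework for this.

```python
# Illustrative only: a toy pipeline health check of the kind an intern
# might automate. Names and log format are hypothetical.

def check_logs(lines):
    """Return log lines that indicate a pipeline failure or timeout."""
    keywords = ("ERROR", "FAILED", "TIMEOUT")
    return [ln for ln in lines if any(k in ln.upper() for k in keywords)]

def validate_rows(rows, required=("id", "amount")):
    """Return indices of rows missing required fields or holding nulls,
    a simple stand-in for a real data-quality check."""
    return [
        i for i, row in enumerate(rows)
        if any(row.get(field) is None for field in required)
    ]

logs = [
    "2024-05-01 job=ingest status=ok",
    "2024-05-01 job=transform ERROR schema mismatch",
]
rows = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": None}]

failures = check_logs(logs)      # the ERROR line
bad_rows = validate_rows(rows)   # row 1 has a null amount
```

In practice these checks would run on a schedule and page an on-call engineer, and the row validation would be replaced by a data-quality tool, but the core loop (scan, flag, escalate) is the same.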

Benefits

What you’ll get in return:
• Unlimited paid time off – recharge when you need it
• Work from anywhere – flexibility to fit your life
• Comprehensive health coverage – multiple plan options to choose from
• Equity for every employee – share in our success
• Growth-focused environment – your development matters here
• Monthly cell phone allowance – stay connected with ease
