
AI Engineering Manager

Abacus Insights · Pune, India - Hybrid · + Equity · 2w ago
Remote · Mid · NA · Health Insurance · Insurance · Product Owner · Data Engineer · Engineering Manager · Agile Coach · Scrum Master · Digital Marketing · SQL · Git · Power BI · Adobe Creative Suite


Requirements

• 8-10 years of experience in data engineering and architecture roles, with 2+ years in an engineering leadership role. • Strong experience leading ML/AI engineering efforts from concept through production in fast‑moving organizations. • Exposure to Databricks, Mosaic AI Gateway, and related Data + AI technologies. • Working knowledge of RAG architectures, SQL Agents, and MCP. • Strong analytical and problem‑solving skills, with the ability to translate complex problems into clear execution plans. • Proven ability to mentor and coach less‑experienced engineers, providing technical guidance, feedback, and growth support across complex AI initiatives. • Acts as a technical mentor across teams, elevating engineering quality by developing junior engineers and modeling best practices in AI system design and delivery. • Compensation: Compensation for this role is based on experience, skills, and location, and includes base salary plus eligibility for performance bonuses and equity grants. • What you’ll get in return: • Unlimited paid time off – recharge when you need it • Work from anywhere – flexibility to fit your life • Comprehensive health coverage – multiple plan options to choose from • Equity for every employee – share in our success • Growth-focused environment – your development matters here • Monthly cell phone allowance – stay connected with ease #LI-SB1 #LI-Remote • Bachelor’s degree in computer science, Information Systems, Data Analytics, or a related field, or equivalent experience. • 2-4 years of hands‑on experience in BI development and dashboard creation. • Proficiency with Power BI Desktop, Power BI Service, and Power BI Fabric. • Ability to leverage GenAI features in Power BI (Copilot, AI visuals, natural language queries). • Solid understanding of Power BI security, including RLS and permission management. • Strong analytical thinking, problem‑solving, and teamwork skills. • Skilled in data modeling and Power Query transformations. • Working knowledge of SQL and relational database concepts. • High attention to detail with strong troubleshooting ability. • Effective communication skills for working with cross‑functional stakeholders. • What we would like to see, but not required • Knowledge of workspace governance, deployment pipelines, and version control (Git). • Exposure to ETL tools. • What you’ll get in return • Bachelor’s Degree in Marketing, Communications, Business, or related field, or relevant equivalent work experience. • Minimum 2 years of professional experience in branding, marketing or digital media creation. • Experience with photography and video creation for branding or marketing. • Proficiency with social media, design tools and digital marketing software (e.g., Adobe Creative Suite, Canva, HubSpot, etc.). • Exceptional communication, storytelling, and presentation skills with the ability to tailor messaging to different audiences. • Self-starter with the ability to work independently and collaborate effectively with cross-functional team members across India, Nepal and US to drive toward action. • Knowledge of SEO and website analytics • Experience/interest in AI process improvement and automation • Bachelor’s or master’s degree in computer science, Information Technology, Data Analytics, or a related field with 3 - 5 years of relevant experience • Strong data analysis, problem‑solving, and logical thinking skills. • Advanced SQL expertise, including complex queries and stored procedures. 
• Ability to gather requirements, clarify needs, and translate them into clear analytical tasks. • Comfort working with multiple data sources and understanding data modelling basics. • Good communication and documentation skills with a collaborative mindset. • Moderate familiarity with Power BI and data integration concepts. • Basic DAX or M Query knowledge • Understanding of ETL tools or cloud platforms (Azure/AWS/GCP) • Bachelor’s or master’s degree in computer science, Information Technology, Data Analytics, or a related field with 3 - 5 years of relevant experience in data analysis/BI. • Strong analytical and problem‑solving skills help turn raw data into clear, actionable insights. • Advanced SQL expertise that ensures the business has fast, accurate, and trustworthy data. • Experience with Power BI data modelling that supports reliable, high‑quality dashboards. • Confidence working with multiple data sources to create a complete, consistent view of the business. • Clear communication and documentation skills that make complex findings easy to understand. • A collaborative approach that helps translate business needs into meaningful analytical solutions. • Exposure to DAX, M Query, or basic ETL tools • Understanding of cloud data ecosystems (Azure, AWS, or GCP) • Bachelor’s degree in Computer Science, Computer Engineering, or a closely related technical field. • 3+ years of hands‑on experience as a Data Engineer working with distributed data processing systems in modern cloud environments. • Working knowledge of U.S. healthcare data domains — including claims, eligibility, and provider datasets — and ability to apply this knowledge to ingestion and transformation workflows. • Strong communication skills to articulate technical concepts across both technical and non‑technical stakeholders. • Proficiency in Python, SQL, and PySpark, including distributed data transformations and performance‑optimized queries. • Demonstrated experience building and operating production ETL/ELT pipelines using Databricks, Airflow, or similar orchestration frameworks. • Familiarity with cloud‑native data platforms and services such as dbt, Kafka, Delta Lake, and event‑driven/streaming architectures. • Experience with structured and semi‑structured data formats (Parquet, JSON, Avro, ORC), including schema evolution techniques. • Working knowledge of AWS data ecosystem components—S3, SQS, Lambda, Glue, IAM—used to support scalable data engineering workloads. • Experience with Terraform and modern CI/CD pipelines (e.g., GitLab) for automating deployment and versioning. • Understanding of SQL optimization strategies including partitioning, pruning, Z‑Ordering, clustering, and caching for large analytical workloads. • Experience using cloud data warehouse technologies such as Snowflake, BigQuery, or Redshift, including performance tuning and data modeling practices. • What we would like to see but not required: • Experience in healthcare or payer data environments at scale. • 3-5 years of experience in product management, including product strategy execution, data analytics, agile delivery, and relevant technologies and tools. • Strong knowledge of US healthcare payer data, payer operations and/or analytics, and experience across Medicare, Medicaid, Marketplace, and Commercial insurance programs. • Hands‑on experience with payment integrity pre‑pay and post‑pay approaches, including relevant datasets such as claims, provider, member, enrollment, contracts/fee schedules, and clinical or adjustment notes. 
• Experience applying advanced analytics and/or AI to operational or analytical payment integrity use cases • Excellent written and verbal communication skills with the ability to clearly explain complex concepts, influence stakeholders, and tell a compelling product story. • Demonstrated ability to lead cross‑functional teams and coordinate efforts across multiple roles to achieve product objectives. • Proven ability to independently drive workstreams, lead through influence, manage risk, and escalate issues appropriately. • Comfort operating in a fast‑paced, evolving startup environment. • Bachelors Degree in a relevant field (e.g., Data Science, Business Administration, Healthcare Management) or equivalent relevant professional experience. • Relevant certifications (e.g., Product Management, Data Analytics, Scrum Product Owner) • Experience with analytics and visualization tools (SQL, Power BI, Tableau) • Familiarity with Agile methodologies and product management tools • Monthly cell phone allowance – stay connected with ease #LI-RF1 #LI-Remote • Proven experience managing engineering teams in data-driven environments. • Strong technical background in distributed systems, data platforms, and cloud infrastructure. • Excellent communication, leadership, and organizational skills. • Familiarity with agile methodologies and cross-functional collaboration. • Experience with US healthcare data analysis. • What we would like to see, but not required: • Hands-on experience with Databricks, Snowflake, SQL, PySpark. • Knowledge of Git, CI/CD pipelines. • Exposure to cloud platforms like Azure or AWS. • Working Conditions: • Standard hours: 8 hours/day, 10 AM - 7 PM • Location: Onsite • At least 7 years in data engineering roles with experience in leading high performing teams. • Deep technical expertise in data engineering, distributed systems, and software architecture. • Proven experience mentoring and developing engineering talent. • Strong track record in defining and enforcing engineering best practices. • Ability to lead cross-functional collaboration and communicate effectively with diverse stakeholders. • Experience managing technical projects from planning through production handover. • Fluency in English (required for Nepal and India). • Commitment to continuous learning and staying current with industry trends. • Experience driving innovation and continuous improvement in engineering processes. • Familiarity with cloud services and modern deployment workflows. • Ability to foster a culture of experimentation and excellence. • Prior experience coordinating resource allocation across multiple engineering streams. • Working Conditions (Nepal Only) • Standard hours: 9 hours/day; 5 days/week • Location: On-Site • 10+ years of experience in software engineering, SRE, sustaining engineering, or production operations • Deep hands-on experience operating production systems in AWS • Strong experience troubleshooting Databricks and large-scale data platforms • Proficiency in Python and experience building production services or tooling • Strong understanding of: • Distributed systems • Incident management and RCA practices • Monitoring, alerting, and observability • CI/CD Pipelines that leverage Infrastructure as Code. 
• Proven ability to own problems end-to-end, from detection to permanent resolution • Excellent communication skills, especially during incidents and customer escalations • Ability to work backward from customer impact to root cause across systems and codebases, delivering fixes in environments with minimal documentation. • Strong instinct for operational risk, with the ability to proactively identify failure modes and harden systems before they impact customers. • Experience in healthcare, health insurance, or regulated data environments • Familiarity with: • Kubernetes (EKS), EMR, Lambda • Spark internals • Snowflake or similar data warehouses • Experience with FHIR, MDM systems, or entity resolution • Prior experience in SWAT, escalation engineering, or tiger-team roles • Experience contributing to or operating within SRE/on-call programs • Monthly cell phone allowance – stay connected with ease #LI-MS1 • Bachelor's degree in Computer Science, Engineering, Business, or related field. • Exceptional Product Management acumen in a related field - a minimum of 8 years of total experience, with 5-6 years of product management experience leading cloud-native B2B infrastructure and interoperability solutions. • Expertise in leading and expanding cloud infrastructure capabilities of Enterprise solutions hosted in AWS and Azure. Experience with GCP is a plus. • Proven experience developing interoperability solutions processing HL7 V2, FHIR, and other healthcare-focused message standards, such as X12. • Industry experience leveraging Snowflake, Airbyte, Databricks, and other leading cloud technologies to develop highly scalable data solutions. • Strong technical background with the ability to understand complex software and cloud infrastructure systems in close collaboration with Engineering, SecOps and CloudOps. • Hands-on experience with evangelizing and applying Agile development methodologies. • Previous experience working with US teams and a proven track record of successful and on-time delivery working in an international organization. • Exceptional communication and leadership skills, with the ability to articulate complex concepts to diverse audiences. • Exceptionally strong client acumen with the ability to effectively communicate the product roadmap while listening empathetically to understand client needs and solve them holistically. • Strong analytical skills, with the ability to leverage data to drive product decisions. • Strong project management skills and experience collaborating with Client Delivery to ensure on-time and on-budget delivery of committed features and solutions. • Demonstrated ability to thrive in a fast-paced, dynamic startup-like environment. • Experience working in the Enterprise Healthcare IT field, specifically US Healthcare Payers. • Demonstrable experience in improving operational efficiencies and leading CI/CD process optimization initiatives. • Operational expertise leveraging GenAI to solve business problems with a detailed understanding of how compound AI systems function (a minimal RAG example appears after this list). • Bachelor’s Degree (or equivalent) and/or Professional Certification(s) in Marketing, Business, and/or relevant creative services • 5+ years of professional experience in product marketing, including developing and executing data-driven, multi-channel marketing initiatives and campaigns. • 3+ years of experience in the US healthcare and/or Health IT industry.
• Experience in Design, Creative Services, and/or Copy Editing with the ability to showcase sample marketing materials and/or creative services content. • Proficiency with design tools and digital marketing software (e.g., Adobe Creative Suite, Canva, HubSpot, etc.). • Self-starter with the ability to work independently and collaborate effectively with cross-functional team members to drive toward action. • Process driven with high attention to detail and strong organization; consistent on deadlines, documentation, and tracker hygiene • Experience marketing healthcare technology/SaaS products to U.S. healthcare payers • Familiarity with U.S. healthcare payer program areas - Payment Integrity, Total Cost of Care, Interoperability, Risk adjustment, or Utilization management • Technical and/or B2B product marketing experience. • Proficiency with data analysis tools for campaign performance tracking. • Experience/interest in AI process improvement/workflow optimization enablement for product marketing • A deep understanding of data software engineering or a minimum of 3 years of software development or data engineering experience. • Domain expertise in shipping Software as a Service (SaaS) products. • Deep domain expertise in cloud technologies, including AWS, Azure and/or GCP. • Skilled in database concepts and tools (preferably on Databricks and Snowflake). • Deep understanding of the data platform R&D processes and agile methodologies. • Experience with big data, ETL and data pipelines, security and privacy, DevOps, or automation. • BS or MS in Computer Science, or related fields, or equivalent experience in a technical product owner role or an adjacent position. • Proven ability to influence and coordinate cross-functional teams to execute plans in highly technical environments. • Exceptional communication skills critical for translating business requirements into technical specifications, and communicating with technical and non-technical audiences. • Well-developed strategic-thinking skills, with the ability to inspire and lead others. • A highly proactive get it done attitude and the skills to back it up. • Demonstrated ability to thrive in a fast-paced, dynamic startup environment. • US Healthcare Payor or Provider industry experience. • Experience with healthcare interoperability standards such as HL7 V2, FHIR and X12. • Experience with integrating or implementing healthcare-related EMPI/MDM workflows. • Coach and mentor software and AI developers, develop staff skills, provide continuous feedback • Has relentlessly high standards (is never satisfied with the status quo) • Responsible for protecting, securing, and proper handling of all confidential data held by Abacus to ensure against unauthorized access, improper transmission, and/or unapproved disclosure of information that could result in harm to Abacus or our clients. • Minimum of 3 years of ML/AI engineering experience in high-velocity, high-growth companies. Alternatively, a strong background in relevant ML/AI research in academia will be considered as contributing qualification • Minimum of 3 years of experience as a software developer at some point in career • Minimum of 2 years of experience with Databricks, Mosaic AI Gateway and associated technologies in Databricks stack related to Data and AI engineering • Strong track record of working with language modeling technologies and GenAI. 
This could include the following: Developing generative and embedding techniques, modern model architectures, fine tuning / pre-training datasets, evaluation benchmarks, Agents and Agentic workflows (e.g. orchestration, workflow management, observability, debugging), RAG, SQL Agents, MCP, etc. • Proficiency in Python, TensorFlow/PyTorch, and scalable ML/AI architecture • Ability to drive end-to-end model and system development, from research and prototyping to deployment and monitoring • Strong analytical and problem-solving skills, with a passion for improving AI-driven user experiences • Strong coding and software engineering skills, and familiarity with software engineering principles around testing, code reviews and deployment • Experience and good understanding of designing scalable, distributed systems for running small to medium scale data processing applications and services (10s – 100s of TBs of data) • Possesses a level of breadth and depth of software and AI development experience that allows for influence and competence in technical discussions with internal and external stakeholders • Solid understanding of roles adjacent to the software development (product management, project management, client delivery, operations etc.). Ability to adapt to these roles as defined at Abacus and work with others in these roles • Proven record of success of driving and/or leading engineering teams/projects towards developing production grade systems supporting customers • Nice to have experience with, but not required: Snowflake, FHIR, Healthcare Data exposure • Bachelor’s degree in Computer Science, or related field, or equivalent work experience. • Strong ability to enhance relationships with clients and internal teams. • Demonstrated multitasking abilities and responsiveness to priority changes. • Superior communication skills for engaging various stakeholders. • Extensive experience in gathering requirements and delivering effective solutions. • Leadership skills in process improvement and project execution. • Proven leadership, exceptional organizational skills, self-motivation, and a track record of successfully managing complex projects and client relationships while delivering innovative, market-leading solutions. • Working Conditions • Standard hours: 8 hours/day, 5 days/week • Location: On-site. • Work time:1 PM - 10 PM • 5+ years of hands‑on experience as a Data Engineer working with large‑scale, distributed data processing systems in modern cloud environments. • Working knowledge of U.S. healthcare data domains—including claims, eligibility, and provider datasets—and experience applying this knowledge to complex ingestion and transformation workflows. • Strong ability to communicate complex technical concepts clearly across both technical and non‑technical stakeholders. • Expert‑level proficiency in Python, SQL, and PySpark, including developing distributed data transformations and performance‑optimized queries. • Demonstrated experience designing, building, and operating production‑grade ETL/ELT pipelines using Databricks, Airflow, or similar orchestration and workflow automation tools. • Proven experience architecting or operating large‑scale data platforms using dbt, Kafka, Delta Lake, and event‑driven/streaming architectures, within a cloud‑native data services or platform engineering environment—requiring specialized knowledge of distributed systems, scalable data pipelines, and cloud‑scale data processing. 
• Experience working with structured and semi‑structured data formats such as Parquet, ORC, JSON, and Avro, including schema evolution and optimization techniques. • Strong working knowledge of AWS data ecosystem components—including S3, SQS, Lambda, Glue, IAM—or equivalent cloud technologies supporting high‑volume data engineering workloads. • Proficiency with Terraform, infrastructure‑as‑code methodologies, and modern CI/CD pipelines (e.g., GitLab) supporting automated deployment and versioning of data systems. • Deep expertise in SQL and compute optimization strategies, including Z‑Ordering, clustering, partitioning, pruning, and caching for large‑scale analytical and operational workloads. • Hands‑on experience with major cloud data warehouse platforms such as Snowflake (preferred), BigQuery, or Redshift, including performance tuning and data modeling for analytical environments. • Experience in large-scale healthcare or payer data environments. • 8+ years of experience in data modeling, enterprise data architecture, and cloud-based data engineering. • 4+ years leading data modeling teams in enterprise environments. • Expertise in enterprise data modeling tools (Erwin, ER/Studio, DBSchema, or similar). • Strong hands-on experience with cloud data platforms (Databricks, Snowflake) and data lakehouse architectures. • Deep understanding of healthcare data models across Core Payer (Claims, Membership, Enrollment, Provider, Billing, Premium) and Clinical Interoperability (FHIR, HL7, ADT, CCDA, etc.) domains. • Experience with schema evolution, lossless data modeling, and data versioning strategies. • Strong background in SQL, performance tuning, and query optimization for large-scale datasets. • Hands-on experience with data engineering workflows, including data pipeline integration, ELT/ETL optimizations, and data transformation frameworks. • Ability to mentor junior modelers, enforce modeling best practices, and drive adoption of enterprise data standards. • Excellent collaboration skills to work with data architects, engineers, and client implementation teams. • Experience with OBT, data Lakehouse, and metadata management frameworks. • Familiarity with business intelligence (BI) and reporting solutions on modern cloud data warehouses. • Knowledge of APIs, SDKs, and data distribution frameworks for enabling real-time and batch data access. • Experience with data governance, security, and regulatory compliance in healthcare data management. • 5+ years in Quality Assurance for software platforms, preferably cloud systems. • 2+ years in software development. • 2+ years working with FHIR databases and standards, including FHIR API testing. • Strong understanding of Software Lifecycle and QA processes for manual and automated testing. • Expertise in Azure and AWS platforms. • Knowledge of healthcare standards. • Data QA validation and strong analytical skills • Knowledge of Medallion architecture and hands-on experience with Databricks • Excellent analytical, problem-solving, and troubleshooting skills. • Creating and maintaining frameworks using pytest, JUnit, or similar. • Database systems (MongoDB, SQL, MySQL) and data validation. • Microservices architecture, container management, API gateways, and tools like Kubernetes. • Coach and mentor software developers, develop staff skills, provide continuous feedback (i.e. one on ones) and are responsible for annual reviews. 
• Innovation and outside the box thinking • Has passion and conviction and the innate ability to inspire passion in others • Minimum of 5 years of software development experience using cloud services in production environment (AWS, Azure, GCP, etc) • Minimum of 2 years of experience with Data Lakehouse, ideally Databricks. SQL and Python query experience • Experience building production solutions on top of data systems at scale • Experience working with FHIR standards and FHIR databases preferred • Ability to drive end-to-end development, from research and prototyping to deployment and monitoring • Strong analytical and problem-solving skills, with a passion for improving UI/UX-driven user experiences • Strong coding and software engineering skills, and familiarity with software engineering principles around testing, debugging, code reviews and deployment • Possesses a level of breadth and depth of software and UI/UX development experience that allows for influence and competence in technical discussions with internal and external stakeholders • Proven record of success of driving projects towards delivering production grade systems supporting customers • Bachelor’s degree in computer science, Engineering, Information Technology, or related field, with 2+ years of hands‑on experience in Data QA or Data Engineering QA • Strong expertise in SQL, including complex queries, joins, aggregations, performance optimization, and data validation • Working knowledge of Python for data analysis, synthetic data creation, and automation of QA processes • Proven ability to analyze large datasets, identify inconsistencies, and validate business logic • Solid understanding of QA methodologies with experience designing test cases for data‑centric systems • Experience with PySpark, Databricks, and AWS services such as Redshift, S3, and Athena; familiarity with analytics and reporting tools is a plus • Experience with BigQuery, Snowflake, and ETL test automation • Healthcare domain knowledge, including HEDIS quality measures • Understanding and passion for healthcare data and systems – including payer, provider, and integration partners. Strong working knowledge of US healthcare data, including but not limited to Medical Claims, Pharmacy Claims, Membership / Enrollment, Provider and Clinical. • Domain expertise in CMS Interoperability Mandate (9115 / 0057) - including the technology used to serve the mandated APIs as well as the data structures (FHIR) required to be compliant. • Ability to define the solution architecture vision in architecture design documents • Broad and general technical knowledge as well as competencies in infrastructure, data models, databases, service orientation, programming, frameworks, standards, and technical modeling. Strong experience with one or more cloud vendors (AWS / Azure / GCP) and data platforms (Databricks / Snowflake) desired. • Strong understanding of healthcare industry trends, regulations, and customer needs with preference for previous payer / Interoperability vendor experience • Experience translating business needs into technology solutions that enable delivery of the Abacus platform. • Strong communicator who can leverage their domain and technical expertise effectively with the client and build working relationships with internal and external team members. Ability to communicate succinctly and clearly, both verbally and written, across technical and non-technical audiences at multiple levels, including business leaders. 
• Experience working in the payer / healthcare data space and understanding of the historical data architecture needs of our clients. • Understanding of client services or experience working for a cloud-based data management provider to large clients. • Working knowledge of Provider (EHR) Data formats and topics – including FHIR, X-12, HL-7 V2.3, and CCDA/CDA - as well as the general system architecture required to consume and make use of this data. • Working knowledge of Utilization Management / Care Management / Disease Management. • Monthly cell phone allowance – stay connected with ease • 7+ years of hands‑on experience as a Data Engineer working with large‑scale, distributed data processing systems in modern cloud environments. • Proven experience hiring top-tier talent in competitive markets (we’d love to see ~4+ years, but impact matters more than years). • Experience running full-cycle recruiting, including sourcing strategy, stakeholder management, and closing. • Strong ability to manage high-complexity searches with many moving parts — you stay structured, adaptable, and focused. • Skilled at translating technical and business needs into clear hiring strategies and compelling candidate narratives. • Excellent written and verbal communication — you build trust with both deeply technical candidates and senior stakeholders. • Comfortable operating in fast-paced, remote-first environments and collaborating across global time zones. • Experience recruiting globally across multiple regions. • Background in high-growth startups or similarly fast-moving environments. • A builder’s mindset: you experiment, learn quickly, spot patterns, and continuously refine recruiting playbooks and processes.
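To ground the "RAG architectures, SQL Agents, and MCP" requirement above, here is a minimal illustrative sketch of a retrieval-augmented generation loop. The embed() and generate() helpers are hypothetical stand-ins for whatever embedding model and LLM endpoint a team actually runs (for example, behind Mosaic AI Gateway); this is an assumption-laden sketch, not a prescribed implementation.

```python
# Illustrative only: a bare-bones retrieval-augmented generation (RAG) loop.
# embed() and generate() are hypothetical stand-ins for a real embedding model
# and LLM completion endpoint; a production system would precompute and index
# document embeddings rather than embedding them per query.
from typing import Callable, List, Tuple
import math


def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0


def rag_answer(
    question: str,
    documents: List[str],
    embed: Callable[[str], List[float]],   # hypothetical embedding call
    generate: Callable[[str], str],        # hypothetical LLM completion call
    top_k: int = 3,
) -> str:
    """Retrieve the most similar documents and ground the prompt in them."""
    q_vec = embed(question)
    scored: List[Tuple[float, str]] = [(cosine(q_vec, embed(d)), d) for d in documents]
    context = "\n\n".join(doc for _, doc in sorted(scored, reverse=True)[:top_k])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)
```

A SQL Agent or MCP-based setup follows the same pattern: retrieve or fetch grounding context (schema, documents, tool results), build a constrained prompt, and route the call through a governed gateway.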

Responsibilities

• Lead architecture, design, and implementation of AI/ML systems from concept to production • Define technical standards, best practices, and AI engineering patterns across teams • Translate complex business problems into scalable AI solutions • Guide model development, evaluation, deployment, and lifecycle management • Partner with Product and Engineering leaders on AI strategy and roadmap • Mentor senior and junior engineers; raise overall technical maturity • Ensure responsible AI practices, security, privacy, and compliance • Evaluate emerging technologies and drive innovation initiatives • Build and maintain interactive dashboards, reusable datasets, clean data models, and well‑structured DAX logic while managing updates and deployments in Power BI Service and Fabric. • Use GenAI tools like Copilot to speed up report creation, model refinement, narrative generation, and insight discovery, helping teams interpret data more easily. • Connect to and transform data from various sources—including SQL databases, APIs, SharePoint, Excel/CSV, and cloud storage—using Power Query and solid modelling principles. • Investigate issues, validate data, troubleshoot performance concerns, and use analytical thinking to identify trends and ensure accuracy. • Write SQL queries for data extraction and validation while maintaining a good understanding of relational database structures. • Collaborate with cross‑functional stakeholders to gather requirements, explain insights, and document models, transformations, security logic, and dashboards clearly. • Translate company culture, values, and work environment into compelling visual and digital content • Enhance and maintain a consistent company brand identity across digital platforms • Create visually appealing assets for social media, presentations, recruitment campaigns, and internal communication • Capture, edit & publish photos and videos for office events, employee stories, leadership messages, and branding campaigns • Maintain quality checks on branding, including tone, voice, and formatting. • Assist with event related branding support, including internal events, hiring events, and employer branding initiatives. • Analyze large and complex data sets to spot trends, validate data quality, and generate meaningful insights that support business decisions. • Gather requirements through discussions and workshops, translate business needs into clear analytical tasks, and document data flows, definitions, and outputs. • Work with data from multiple sources—databases, files, APIs, cloud platforms—and ensure consistency, accuracy, and completeness across systems. • Write advanced SQL queries, build stored procedures and optimized scripts, and collaborate with data engineering or database teams to strengthen data pipelines. • Prepare structured datasets and foundational models for reporting tools, supporting BI dashboard development and improving existing reports for better usability. • Present insights clearly to both technical and non‑technical audiences and work closely with product, business, engineering, and BI teams. • Maintain thorough documentation across requirements, analysis, data logic, and process workflows. • Analyze and interpret complex datasets, check data quality across systems, and turn findings into clear insights that support business decisions. • Write and optimize SQL queries, stored procedures, and scripts to extract, validate, and maintain accurate, reliable data. 
• Build and maintain data models for Power BI dashboards while preparing clean, structured datasets from various sources like databases, APIs, files, and cloud platforms. • Support Power BI dashboard development by refining visuals, improving performance, and troubleshooting or resolving reported issues. • Investigate data gaps or inconsistencies through root‑cause analysis and help improve the accuracy of reporting outputs. • Document data flows, definitions, processes, and analytical findings in a clear, structured way. • Work closely with stakeholders to understand their needs, explain insights, and translate requirements into practical analytical tasks and deliverables. • Product Strategy and Vision • Develop and execute the product strategy for assigned pre‑pay and post‑pay payment integrity data products, aligned with company objectives and payer business needs. Define and prioritize product roadmaps and features that deliver measurable business outcomes. • Domain and Subject Matter Expertise • Cross‑Functional Leadership • Partner closely with Subject Matter Experts, Data Modeling, Data Quality, Engineering (Data & AI), Sales, and Customer Success teams to deliver high‑quality data products on time and within scope. Foster collaboration and ensure clear communication across functions. • Stakeholder Management • Engage internal and external stakeholders to gather requirements, understand payer workflows, and align product capabilities to unmet buyer needs. Clearly communicate product vision, progress, and performance metrics. • Performance Monitoring • Define and track KPIs to measure the success of pre‑pay and post‑pay payment integrity solutions. Use data‑driven insights to inform prioritization, optimize performance, and drive continuous improvement. • Lead & Develop Teams: Mentor engineers, provide feedback, and support career growth. • Deliver Impactful Projects: Oversee planning, execution, and delivery of high-quality data solutions. • Drive Technical Excellence: Guide architecture decisions, champion best practices, and encourage modern tools like Databricks, Snowflake, and PySpark. • Collaborate Across Functions: Partner with Product, QA, and business stakeholders to align engineering work with company goals. • Optimize Processes: Implement agile practices, improve pipeline performance, and ensure compliance with data governance and security standards. • Provide hands-on technical mentorship to engineers, supporting growth in data modeling, distributed systems, and analytics. • Define, refine, and uphold engineering standards for data quality, reliability, performance, maintainability, and security. • Champion modern engineering practices, including CI/CD, observability, automation, and code quality. • Lead the adoption of new technologies and methodologies to enhance system efficiency and productivity. • Collaborate with business analysts, QA engineers, product teams, and stakeholders to translate requirements into scalable technical solutions. • Manage end-to-end technical execution, including project planning, backlog prioritization, and roadmap delivery. • Guide the team in Agile methodologies, facilitating sprint planning, reviews, and retrospectives. 
• Production Operations & Incident Response • Act as a senior technical escalation point during production incidents • Lead real-time incident triage, mitigation, and recovery efforts • Drive root cause analysis (RCA) with a focus on systemic, long-term fixes • Identify recurring failure patterns and push for architectural or operational improvements • Partner with Customer Success and Engineering to manage customer impact during incidents • Sustaining Engineering & Post‑Launch Ownership • Own post-launch reliability, stability, and operational quality of core systems • Investigate and resolve complex field issues and production defects • Ensure fixes developed during incidents or customer escalations are upstreamed into the core product • Improve operational readiness of services through better runbooks, monitoring, and alerting • Reduce operational toil by converting repeated manual work into automation • Forward Deployed / Customer‑Facing Engineering • Engage directly with strategic customers to solve real-world, production-grade technical challenges • Support complex deployments, integrations, and escalations in customer environments • Act as a trusted technical partner to customers during high-impact issues • Translate customer learnings into concrete product, platform, and operational improvements • Contribute to reusable tools, playbooks, and best practices that accelerate future deployments • AWS & Databricks Technical Expertise • Serve as a subject matter expert for AWS-hosted production systems • Troubleshoot and resolve issues across: • AWS compute, storage, networking, IAM, and security • Databricks jobs, clusters, and Spark-based data pipelines • Debug performance degradation, scalability issues, job failures, and data correctness problems • Partner with platform and data teams to harden systems for reliability, scale, and operability • Software Development & Automation • Write production-quality code to: • Automate operational workflows • Improve reliability and observability • Eliminate manual intervention and reduce incident frequency • Contribute primarily in Python, with exposure to JVM-based systems as needed • Review code with a strong emphasis on operability, resiliency, and maintainability • Advocate for “build it so it can be operated” engineering standards • Technical Leadership & Collaboration • Provide technical leadership without formal authority, influencing design and operational decisions • Mentor engineers through pairing, reviews, and incident leadership • Collaborate closely with Product, Engineering, Data, and Customer teams • Operate effectively in high-pressure, ambiguous environments, especially during customer-impacting incidents • Own and execute the Abacus Platform Cloud strategy and roadmap, ensuring continuous alignment with the Company’s strategic goals, revenue targets and customer needs. • Own and execute Abacus Platform Cloud connectivity capabilities focused on healthcare interoperability, such as HL7 V2 and FHIR. • Enable and drive growth of Abacus Platform capabilities across AWS, Azure and GCP. • Collaborate with engineering, marketing, delivery and sales teams to define and lead scalable, secure, and compliant cloud-native platform features that solve the biggest customer pain points and needs, while delivering high return on investment (ROI) for Abacus. • Stay on top of the latest developments and competitive landscape related to healthcare-centric cloud technologies to effectively inform Abacus’ cloud strategy and roadmap.
• Be the voice of the customer by relentlessly engaging clients and stakeholders to ensure that our cloud and connectivity capabilities meet industry standards and customer expectations. • Lead the entire product lifecycle from concept through launch, in close collaboration with the Director of Product, CTO, Engineering and Client Delivery leads, including fulfillment of pre-launch activities and post-launch analysis and iteration. • Write exceptional Epics with defined user personas and user journeys that articulate clear requirements and success criteria to Product Engineering and stakeholders. • Work closely with Engineering and Product Management teams to maintain and improve the Agile development process, ensuring timely and efficient delivery against the roadmap. • Continuously seek ways to improve the efficiency of our development and cloud operations teams to reduce manual effort and human error. • Plan Quarterly Product Increments and collaborate with Product Owners and Product Engineering to ensure their successful execution while keeping stakeholders and leadership continuously informed of their on-track status. • Drive product adoption by collaborating with Marketing to develop go-to-market strategies, product messaging and positioning, and help author blogs and other public collateral. • Provide mentorship and guidance to junior product team members, fostering a culture of learning and growth within the team. • Market Research and Competitive Intelligence • Conduct market research to identify trends, customer needs, and the competitive landscape • Collect and analyze customer feedback to inform product and marketing strategies • Translate the strategic direction set by the Product Manager into specific product features and oversee development of these features efficiently within the agile framework. • Support engineers comprehensively throughout all aspects of the Software Development Life Cycle (SDLC), providing guidance, insights, and assistance as necessary. • Assume end-to-end responsibility for the lifecycle of product features you own • Act as the primary translator between business needs and technical requirements, effectively bridging the gap to ensure alignment between stakeholders and engineering teams. • Work with cross-functional teams to analyze current product performance and identify areas for improvement. • Communicate comprehensive updates, progress reports, and insights related to the product's development to company leadership and other audiences. • Collaborate with stakeholders, customers, and the development team to gather comprehensive requirements and document customer scenarios, pain points, and success criteria for product feature-sets and features. • Estimate the anticipated impact of each product feature and craft effective value statements to be used in roadmap prioritization. • Create, prioritize, and manage the product backlog to ensure that the team is working on the most valuable items first. • Ensure that the development team understands the scope and complexity of the tasks and can accurately estimate the level of development effort. • Participate in sprint planning, reviews, and retrospectives, ensuring that the development team is building the product as envisioned and meeting the acceptance criteria. • Act as the primary liaison between stakeholders and the development team, conveying updates, managing expectations, and gathering feedback. • Ensure visibility, transparency, and clarity of the product backlog for all stakeholders involved.
• Manage product acceptance testing phases to ensure features not only add value but also align with customer needs. • Coordinate release activities to ensure seamless deployment. • Prepare relevant documentation and conduct internal user training for new features. • Collaborate with client management to ensure they have what they need to support our clients in optimizing the impact of the product. • Lead consulting engagements aimed at optimizing software platform performance for healthcare. • Utilize expertise in healthcare claims, provider management, benefits, eligibility, or contract management to develop solutions. • Conduct client calls to gather and analyze requirements, designing solutions for use cases. • Participate in planning and requirements gathering sessions with clients. • Develop new components or modify existing ones to integrate with the current platform. • Conduct UAT, prepare and validate comprehensive test cases. • Evaluate emerging technologies and propose strategic enhancements. • Conduct detailed process analysis and enact improvements. • Communicate complex concepts to varied audiences, including technical and non-technical. • Provide top-tier QA and application support as needed. • Undertake additional roles as agreed with management. • Architect, design, and implement high-volume batch and real-time data pipelines using PySpark, SparkSQL, Databricks Workflows, and distributed processing frameworks. • Build end‑to‑end ingestion frameworks integrating with Databricks, Snowflake, AWS services (S3, SQS, Lambda), and vendor data APIs, ensuring data quality, lineage, and schema evolution. • Develop data modeling frameworks, including star/snowflake schemas and optimization techniques for analytical workloads on cloud data warehouses. • Translate complex business requirements into detailed technical specifications, engineering artifacts, and reusable components. • Establish and enforce data engineering best practices, such as CI/CD for data pipelines, code versioning, automated testing, orchestration, logging, and observability patterns. • Conduct performance profiling and optimize compute costs, cluster configurations, partitions, indexing, and caching strategies across Databricks and Snowflake environments. • Produce high-quality technical documentation including runbooks, architecture diagrams, and operational standards. • Participate in planning, design discussions, technical reviews, and continuous improvement activities as part of an iterative development lifecycle. • Proactively monitor, troubleshoot, and support production data pipelines to ensure reliability, performance, and timely data availability. • Identify root causes of data pipeline issues and implement corrective and preventive actions to improve operational stability and data quality. • Mentor junior engineers through technical reviews, coaching, and training sessions for both internal teams and clients. • Data Modeling Leadership & Best Practices • Lead a global team of data modelers to build scalable, high-performance physical data models aligned with enterprise architecture and industry standards. • Define and enforce data modeling best practices, including schema evolution, lossless data modeling, metadata management, and data versioning. • Convert conceptual and logical data models into optimized physical ERDs using enterprise data modeling tools (Erwin, ER/Studio, DBT, or similar).
• Ensure models support analytical (data science, actuarial, reporting) and operational (Claims, Care Management) use cases. • Maintain metadata, business glossary, and data dictionaries to support data lineage and governance. • Implement data quality rules and validation frameworks to meet industry SLAs and compliance requirements (HIPAA, HITRUST, CMS regulations). • Technical Data Modeling Engineering • Own data modeling workflows, including version control, schema deployment, and upgrade automation. • Work with engineering teams to implement data models in Databricks, Snowflake, and modern cloud architectures. • Optimize data models for query performance, indexing, partitioning, and storage efficiency. • Ensure data enrichment (standardization, transformations, algorithms, and grouping) is seamlessly integrated into the data models. • Design and document data delivery patterns using complex events or business rules to enable real-time and batch processing. • Implement data federation strategies, ensuring interoperability between enterprise data lakes, cloud data warehouses, and transactional systems. • Collaboration & Client Engagement • Collaborate with Director of Data Modeling and client implementation teams to understand client needs, assess data model impact, and enhance models for efficient implementations. • Partner with engineering and product teams to define and prioritize data model features in the product roadmap. • Work with client management teams to provide support and technical expertise during client engagements. • Define test strategies for Abacus products, covering design, implementation, and maintenance of test cases for functional, end-to-end, and performance testing. • Develop test suites aligned with best practices and standards. • Ensure software quality through continuous and iterative testing. • Participate in internal reviews of software components and systems. • Create and maintain automation test cases and CI/CD pipelines using Python. • Work with complex data applications developed on Azure and AWS platforms. • Help shape the direction of our data solution features in our products. Drive the deployment of state-of-the-art data products and user experiences that directly impact on the capabilities and performance of Abacus’s products and services • Work closely with cross-functional teams, including AI engineers, data engineers, and product teams, to deliver impactful data solutions that enhance user experience with our data products • Skillful at driving critical projects (e.g. 
Data products engineering and Innovation, Development Productivity and Benchmarking) • Adept at clear, confident communication with executive staff • Meaningful experience in the world of Data and AI • Capable of credible customer interactions • Mentoring development team members to ensure delivered solutions adhere to the software architecture strategy, coding standards, and established organizational policies and procedures • Participating in software architectural discussions, influencing decisions, and collaborating with peers to maintain consistency across the organization • Identifying people and process improvement strategies for the Agile/Scrum team(s) • Communicate organizational updates to ensure teams adhere to the established policies and procedures • Help shape the roadmap and execute its delivery • Ensure projects are completed on time and according to our quality standards • Facilitate communication upward around architecture, design, and implementation objectives • Proven experience in leading software development teams or projects • Excellent knowledge of software development, data products, QA and test automation, and experience with agile development methodologies • Demonstrated knowledge of Cloud Architecture, AI Architecture, Massively Parallel Processing (MPP) compute frameworks for Data+AI platforms, and Security. • Good understanding of Incident Management, Configuration Management, Operational efficiency and Customer Escalation Management preferred. Can manage the balance of delivering on our roadmap commitments while dealing with interruptions and client escalations • Review mapping specifications to identify business requirement gaps and estimate QA efforts. • Design ETL test scenarios and test cases; validate input file layouts and parsers. • Write source-to-target SQL queries for data validation and user acceptance criteria • Automate data validation using PySpark and Databricks (see the sketch after this list) • Execute testing using synthetic data resembling real‑time production scenarios • Support UAT, work in Agile methodology, track QA activities in Jira/test management tools, and coordinate onsite–offshore teams • Author the project architecture design documents, including data flows, sequence diagrams and overall system architecture, that will drive the technical implementation of the solutions. This role will primarily focus on defining solutions associated with CMS Interoperability, but may also work on other data solutions. • Work across business and IT teams to understand the current business and system capabilities and determine the desired state to define the overall technology strategy roadmap • Work closely with cross-functional teams, including business experts, technical business analysts, project managers, sales / account executives and data engineers to successfully implement client projects which reduce technology costs and solution redundancy. • Collaborate with Product Management to identify client-specific requirements that may translate into an existing standard product feature or a future feature on the Abacus product roadmap. • Define the solution architecture visions for designated components that describe the long-term goals as well as near-term needs of the client. • Provide mentorship to internal teams on business needs and use cases for seamless client implementation. Provide ongoing support to address questions and issues raised by the implementation team.
• Lead technical solution design for health plan clients, creating highly available, fault-tolerant architectures across multi-account AWS environments. • Implement security automation, including RBAC, encryption at rest/in transit, PHI handling, tokenization, auditing, and compliance with HIPAA and SOC 2 frameworks. • Owning full-cycle recruiting for roles across product engineering, ML/AI, infrastructure, G&A, and beyond. • Partnering closely with hiring managers to deeply understand problems, calibrate on signals, and continuously raise the hiring bar. • Designing and executing thoughtful search strategies — from market mapping and outreach to interview design and closing. • Managing multiple complex searches simultaneously without sacrificing candidate experience or quality. • Acting as a trusted advisor to candidates, guiding them through a rigorous process with clarity, empathy, and conviction. • Serving as an ambassador for Abacus Insights — every interaction reflects our values and sets the tone for the candidate experience. • Continuously improving how we hire by iterating on processes, tooling, and data-driven insights.
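Responsibilities such as "Write source-to-target SQL queries for data validation" and "Automate data validation using PySpark and Databricks" typically reduce to reconciliation checks like the minimal sketch below. The table and column names (claims_raw, claims_curated, claim_id, paid_amount) are hypothetical stand-ins, assumed only for illustration.

```python
# Illustrative only: a minimal source-to-target reconciliation check in PySpark,
# assuming hypothetical tables claims_raw (landing) and claims_curated (curated).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-reconciliation").getOrCreate()

source = spark.table("claims_raw")        # hypothetical landing table
target = spark.table("claims_curated")    # hypothetical curated table

# 1. Row counts should match after transformation.
src_count, tgt_count = source.count(), target.count()

# 2. No claim should be dropped or invented: anti-join both directions on the key.
missing_in_target = source.join(target, on="claim_id", how="left_anti").count()
unexpected_in_target = target.join(source, on="claim_id", how="left_anti").count()

# 3. Spot-check a business-critical aggregate (e.g., total paid amount).
src_paid = source.agg(F.sum("paid_amount")).first()[0]
tgt_paid = target.agg(F.sum("paid_amount")).first()[0]

checks = {
    "row_count_match": src_count == tgt_count,
    "no_missing_claims": missing_in_target == 0,
    "no_unexpected_claims": unexpected_in_target == 0,
    "paid_amount_match": src_paid == tgt_paid,
}
failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise AssertionError(f"Data validation failed: {failed}")
```

In practice such checks are scheduled as a Databricks job or Airflow task alongside the pipeline they validate, so failures surface before downstream reports are refreshed.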

Benefits

• Purpose & Growth • Save for your retirement with confidence, leveraging our company-sponsored plan. • Vacation • Recharge with unlimited paid time off. • Ownership • Become not only an employee but an owner of a company on a mission to improve healthcare. • Health Insurance • From Day 1, maintain peace of mind with our healthcare insurance coverage for you and your family. • Bond with your fellow teammates during company-sponsored social events.
