Mercury - Senior Software Engineer - AI Enablement
Requirements
• Has 5+ years of backend development experience in complex, production systems—you've built things that other engineers depended on.
• Is fluent across programming languages and can navigate platform engineering, infrastructure, and developer tooling without needing a map.
• Has hands-on experience building LLM-powered systems—RAG pipelines, agents, eval frameworks—and has shipped at least one of these to production.
• Understands the real tradeoffs in AI deployments: cost modeling, observability, latency, and safety—not just the exciting parts.
• Is high-agency and self-directed. You can operate effectively without tightly defined scope, find the highest-leverage work, and get it done.
• Communicates clearly across technical and non-technical audiences—you can explain what you built and why it matters.
Compensation
• The total rewards package at Mercury includes base salary, equity, and benefits. Our salary and equity ranges are highly competitive within the SaaS and fintech industry and are updated regularly using the most reliable compensation survey data for our industry. New hire offers are made based on a candidate's experience, expertise, geographic location, and internal pay equity relative to peers.
• Our target new hire base salary ranges for this role are the following:
• US employees (any location): $166,600 - $218,700
• Canadian employees (any location): CAD 157,400 - 206,650
Responsibilities
You'll join a team that has already started building Mercury's internal AI platform and enablement layer. Your work will be to extend, harden, and scale what's in motion, and to help partner teams adopt it.
Extend the AI platform foundation
• Build and evolve MCP servers that connect internal systems and data sources into a coherent interface for agents and engineers.
• Expand and operate our LLM gateway infrastructure: routing, rate limiting, cost attribution, and observability across teams.
• Turn early patterns into durable defaults: shared prompt libraries, guardrails, and policy-as-code so teams can move fast safely.
Strengthen the shared company knowledge layer
• Shape and maintain structured context artifacts—clean, reliable, agent-consumable—so LLMs working in Mercury's systems can reason accurately about our domain.
• Improve internal knowledge discoverability and retrieval so both humans and agents can quickly find accurate answers.
• Partner with domain teams to standardize key sources of truth, and keep them fresh.
Enable faster prototyping and iteration across the company
• Build and refine sandbox environments and tooling that let engineers experiment with AI safely and at speed.
• Create self-service scaffolding so non-engineers—PMs, ops, finance—can prototype and deploy AI-powered workflows with minimal hand-holding.
• Build playgrounds and evaluation harnesses so internal AI agents can be tested and iterated in controlled environments before hitting production.
This list is illustrative. Priorities will shift as we learn; the right person will help choose the next highest-leverage work.