Pareto.AI - Technical Data Delivery Lead
Requirements
• Proficiency in Python and SQL for data manipulation, pipeline monitoring, and quality analysis — you should be comfortable writing light scripts to parse formats, run statistical checks, and build lightweight tooling (a minimal sketch of such a check appears after this section)
• Working knowledge of LLM internals: RLHF/SFT training loops, how prompt structure affects output distribution, and the qualities of RL environment setup (tool use) needed for agentic data collection and evaluation projects
• Hands-on experience with at least one agentic or LLM workflow framework (LangChain, DSPy, AutoGen, direct tool use via API, or equivalent)
• Demonstrated ownership of a data or ML pipeline from scoping through delivery — including quality design, not just throughput tracking
• Strong written communication: you'll write technical guidelines and rubrics that distributed expert workers follow accurately, and you'll brief senior researchers on pipeline performance
• Comfort operating with ambiguity in a fast-moving environment where model requirements shift and client priorities evolve
• Direct experience with RL environment data pipelines, evaluation framework design, and red-teaming workflows
• Background in data engineering, ML research support, or equivalent
• Experience designing or operating agentic systems in a production or near-production context
• Familiarity with inter-rater reliability methods, calibration set design, and annotation quality frameworks
• Prior client-facing or technical program management experience in an AI/ML-adjacent context
• Prior experience scoping or driving projects with fuzzy upfront specs or evolving requirements. This is a high-ownership position where you are expected to take charge and lead, not simply follow project guidelines.
What we value in candidates
We care less about credentials than about demonstrated ability to own complex technical work and build toward better systems. A strong background in software engineering, data science, or ML research is the most common path into this role, but we've also seen excellent DDLs come from ML operations, computational linguistics, and applied research support. What matters is that you can read a dataset, reason about what's wrong with it, write code to fix it, and design a workflow that prevents the problem next time.
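For illustration, this is the kind of "light script" the first requirement refers to: a minimal sketch that assumes a hypothetical newline-delimited JSON annotation file with a "label" field. The file name and field names are invented for the example.

```python
import json
from collections import Counter

def load_records(path):
    """Read newline-delimited JSON annotation records (hypothetical format)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def label_distribution(records, field="label"):
    """Share of each label across records, to spot skew or missing values."""
    counts = Counter(r.get(field, "<missing>") for r in records)
    total = sum(counts.values()) or 1  # avoid dividing by zero on an empty file
    return {label: n / total for label, n in counts.items()}

if __name__ == "__main__":
    records = load_records("annotations.jsonl")  # hypothetical file name
    for label, share in sorted(label_distribution(records).items()):
        print(f"{label}: {share:.1%}")
```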
Responsibilities
• Pipeline architecture: Design end-to-end data collection and evaluation pipelines for RLVR, RLHF, SFT, red-teaming, and model evaluation workflows. This includes expert sampling strategy, annotation schema, rubric structure, inter-rater calibration, and QA system design. You'll be expected to prototype novel workflows quickly, identify architectural risks before launch, and make tradeoff decisions with confidence. You'll also need to understand how agents interact with tools to solve expert-driven tasks, and to communicate with engineering so the environment is built to enable those tasks.
• Agentic system deployment: Build, test, and iterate on AI agents that automate pipeline tasks — quality gate review, expert matching, output flagging, throughput anomaly detection. You'll work closely with our engineering team to scope agent capabilities, write the prompts and evaluation logic that make them reliable, and monitor their performance in production. This is a growing part of the role; comfort with agentic tooling (LangChain, DSPy, custom tool-use frameworks, or equivalent) is a meaningful differentiator.
• Quality systems: Define data quality standards across annotation, evaluation, and expert output review. Design and run audits using inter-rater reliability metrics, calibration sets, and statistical sampling (see the agreement-metric sketch after this list). You'll be responsible not just for catching quality issues but for building systems that prevent them — automated checks, structured output validation, and model-assisted review layers where appropriate. Beyond programmatic quality testing, you'll be responsible for spot-checking tasks and understanding what makes a datapoint meaningful and high quality; the ability to do so, and to translate your findings into clear feedback and expert guidelines, is particularly valuable in this role.
• Client interface: Engage directly with AI researchers, TPMs, and PMs at our client organizations. Translate research-driven requirements — evaluation rubrics, domain coverage targets, latency constraints, benchmark specifications — into operational workflows. Communicate pipeline performance clearly, escalate technical risks early, and contribute to project scoping and pricing decisions.
• Research integration: Stay current with developments in LLM post-training, evaluation methodology, and data tooling. Evaluate new approaches — model-assisted annotation, structured output formats, automated calibration methods — and integrate them into active pipelines where they improve quality or efficiency. Understand which method applies to which domain and project, and work toward implementing it accordingly.
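As a concrete illustration of the inter-rater reliability audits mentioned under Quality systems, here is a minimal sketch of Cohen's kappa computed directly from two raters' labels on a shared calibration set. The labels and values are invented, and a production pipeline would more likely use an established library implementation.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same items."""
    assert rater_a and len(rater_a) == len(rater_b), "need paired, non-empty ratings"
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same label.
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    if p_expected == 1.0:
        return 1.0  # degenerate case: both raters always use the same single label
    return (p_observed - p_expected) / (1 - p_expected)

# Two experts scoring the same five calibration items (made-up data).
print(cohens_kappa(["pass", "pass", "fail", "pass", "fail"],
                   ["pass", "fail", "fail", "pass", "fail"]))  # ~0.62
```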
Benefits
• $120K – $160K
• Offers Equity