Sayari - Staff Applied Scientist - AI Evaluation & Trust
Requirements
• 10+ years of Machine Learning experience with a focus on deep neural networks, evaluating model performance and trust
• 1-2+ years' experience focused on post-training activities
• 1+ year of experience creating benchmarks to evaluate LLMs
• Technical Mastery: Deep expertise in LLM-as-judge architectures, multi-turn evaluation, and Reinforcement Learning (RL/RLHF/RLAIF)
• Statistical Rigor: Mastery of statistics and experimental design, including significance testing, distribution analysis, and inter-rater reliability
• Architectural Depth: Experience with Mixture-of-Experts (MoE) systems, routing behavior, and expert specialization
• Builder Mindset: Proven ability to own the path from data collection to production deployment; we are a small team and every role is "hands-on"
• Domain Fluency: Understanding of Graph RAG and the unique challenges of evaluating non-deterministic, agentic workflows

Preferred:
• Judgment Task Models: Experience building, fine-tuning (LoRA, etc.), or pre-training models specifically for judgment, preference modeling, or classification tasks
• Domain Context: Background in cognitive science, intelligence community tradecraft, or research literature on expert judgment under uncertainty
• Infrastructure at Scale: Experience building or managing large-scale annotation infrastructure and quality assurance protocols
• Academic/Research Track Record: A record of published research or recognized work in preference modeling or AI alignment

The target base salary for this position is $195,000-$205,000, plus company bonus and equity.
Final offer amounts are determined by multiple factors including location, local market variances, candidate experience and expertise, internal peer equity, and may vary from the amounts listed above.
Responsibilities
• Lead the development of specialized "judge models," moving from general-purpose frontier models to architectures purpose-built for evaluation and failure mode detection
• Design and execute rigorous scoring pipelines and empirical threshold calibrations for agentic systems, including multi-turn conversation and Graph RAG reasoning
• Establish domain-specific evaluation frameworks that measure whether a system can perform the work of human experts rather than just passing general capability benchmarks
• Own the full lifecycle of evaluation data, from designing annotation infrastructure and protocols to deploying evaluation services into production
• Research and implement advanced techniques in Mixture-of-Experts (MoE) routing, expert specialization evaluation, and ensemble calibration
• Collaborate cross-functionally with Product, Data Engineering, and the SVP of AI to translate complex statistical uncertainty into clear, actionable product signals
• Act as a technical leader and "Scientific Conscience" within the AI pod, ensuring every AI-driven risk signal is backed by an empirical derivation story
Benefits
• 100% fully paid medical, vision, and dental for employees and their dependents
• Generous time off: we observe all US federal holidays and close our office for a winter break (12/24-12/31), in addition to granting 18 PTO days and 10 sick days
• Outstanding compensation package: competitive commissions for revenue roles and bonuses for non-revenue positions
• A strong commitment to diversity, equity, and inclusion
• Eligibility to participate in additional benefits such as a 401k match up to 5%, 100% paid life insurance (up to $100,000 coverage), and parental leave
• A collaborative and positive culture - your team will be as smart and driven as you
• Limitless growth and learning opportunities