
Bumble Inc. - Senior Policy Manager

US TX Austin / US NY New York / UK London · $125k–$155k · 2mo ago
In Office · Senior · NA · Artificial Intelligence · Data Analytics · Senior Community Manager · Senior Advisor · Data Analysis · Governance · Documentation · Risk Assessment · Risk Management


Requirements

• Typically requires 6–8 years of experience, though we welcome candidates with alternative backgrounds that demonstrate equivalent skills.
• Deep experience in Trust & Safety policy, ML governance, AI safety, or related domains within technology platforms.
• Strong understanding of LLM systems, content moderation architectures, labeling frameworks, model evaluation methodologies, and safety intervention mechanisms.
• Experience designing or contributing to Responsible AI governance frameworks, including risk assessment, bias mitigation, and human-in-the-loop systems.
• Demonstrated ability to translate between technical and non-technical stakeholders, bridging policy, engineering, legal, and operational perspectives with clarity and Respect.
• Proven ability to operate autonomously in ambiguous, fast-moving environments, taking ownership of complex initiatives and seeing them through from insight to impact.
• High fluency in data analysis and experimentation, with the ability to interpret enforcement metrics and drive data-informed decisions.
• Excellent written and verbal communication skills, including experience representing policy perspectives in cross-functional or regulator-adjacent discussions.
• Embodies Bumble’s values of Courage and Excellence by balancing innovation with responsible risk management, and approaches evolving AI systems with thoughtful curiosity and principled judgment.

Responsibilities

• Lead the transition of policy governance from legacy operational models toward an LLM-first enforcement architecture, embedding appropriate guardrails, escalation pathways, and human oversight to minimize risk.
• Own one or more complex policy domains — or drive cross-policy alignment across issue areas — auditing, iterating, and strengthening frameworks to ensure robustness and responsiveness to member needs.
• Partner cross-functionally with Product, Engineering, Data Science, Legal, and Operations to ensure effective policy implementation and consistent enforcement across platforms and regions.
• Design and maintain policy lifecycle governance processes, improving transparency, efficiency, and alignment between enforcement systems and written standards.
• Develop and support Responsible AI frameworks, including model governance principles, labeling standards, safety interventions, and review mechanisms that reflect regulatory and ethical best practices.
• Oversee moderation system performance by defining and monitoring key enforcement metrics, identifying gaps in precision, recall, and consistency at scale.
• Build structured feedback loops with internal teams and external partners to surface emerging risks, sociocultural nuances, and operational friction points — demonstrating Curiosity and collaborative ownership.
• Support programs that maintain compliance with global online safety and platform regulations, ensuring documentation, audit readiness, and defensible policy decisioning.
• Use AI-enabled analytics tools responsibly to evaluate enforcement trends, stress-test policy outcomes, and generate insights that translate into measurable member impact.
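For candidates less familiar with enforcement metrics, the precision and recall referenced above can be illustrated with a minimal sketch (hypothetical labels and helper names, not Bumble's actual tooling):

```python
# Minimal sketch: precision/recall for a content-moderation policy,
# computed from hypothetical (flagged, violation) decision pairs.

def enforcement_metrics(decisions):
    """decisions: iterable of (flagged: bool, violation: bool) pairs,
    where `flagged` is the system's call and `violation` the ground truth."""
    tp = sum(1 for flagged, violation in decisions if flagged and violation)
    fp = sum(1 for flagged, violation in decisions if flagged and not violation)
    fn = sum(1 for flagged, violation in decisions if not flagged and violation)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flags, how many were real?
    recall = tp / (tp + fn) if tp + fn else 0.0     # of violations, how many caught?
    return precision, recall

# Example: four moderation decisions scored against human review labels.
sample = [(True, True), (True, False), (False, True), (True, True)]
precision, recall = enforcement_metrics(sample)
print(round(precision, 2), round(recall, 2))  # 0.67 0.67
```

A gap between the two numbers points at different remediations: low precision suggests over-enforcement (appeals load), low recall suggests missed harm.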

