
Coefficient Giving - Grantmakers & Senior Generalists, Global Catastrophic Risks

Remote - Global · $152k – $189k · Posted 2 days ago


Requirements

• Put our mission first, and act with urgency to help us realize our ambitious goals.
• Work to model our operating values https://coefficientgiving.org/our-approach/operating-values/ of ownership, openness, calibration, and inclusiveness.

AI GOVERNANCE & POLICY

The AI Governance & Policy (AIGP) team, led by Luke Muehlhauser https://coefficientgiving.org/team/luke-muehlhauser/, funds work to shape the norms, policies, laws, and institutions that govern how the most capable AI systems are developed and deployed. In 2025, the AIGP team moved over $140 million to impactful organizations and projects, making it one of the largest philanthropic funders of frontier AI policy. Its work includes AI governance research, to improve our collective understanding of how to achieve effective AI governance, and AI governance policy and practice, to improve the likelihood that good ideas are actually implemented by companies, governments, and other actors. Our portfolio spans U.S. and international policy research and advocacy, field-building, fundamental strategic research, technical governance (e.g. evaluations, verification mechanisms), and more.

Compared to the other teams hiring in this round, AIGP grantmaking tends to require a strong understanding of key institutions like frontier AI companies, governments, and international bodies, as well as the political dynamics within them. Familiarity with relevant technical domains like machine learning or hardware can be valuable, as can knowledge of policy specifics.

While we’re open to hiring talented generalists, the AIGP team is particularly interested in people with:
• Prior experience in policy, for example in a government role or at a think tank. Experience in U.S. policy would be particularly valuable.
• A technical background relevant to AI safety (e.g. in information security or frontier AI hardware).
• Experience working on AI governance or policy at a frontier AI company.

U.S. POLICY SPECIALIST GRANTMAKERS

We are also hiring one or more grantmakers focused specifically on U.S. policy. We’re open to people from a wide variety of backgrounds, though we have a strong preference for candidates based in Washington, D.C. Specific profiles we’re interested in include:
• A technical AI governance specialist who can bridge deep knowledge of AI capabilities and risks with the D.C. policymaking world.
• A coalition-building expert with knowledge of the political economy of AI who can map how political coalitions form and shift, and invest in building durable cross-partisan support for AI governance.
• A political and campaign strategist who has experience running or funding issue-area advocacy.

CHINA SPECIALIST

We are also hiring a China specialist to help us find ways to support Chinese contributions to AI safety and useful cooperation between China and the West, including through forums like the International Dialogues on AI Safety https://idais.ai/. Applicants for this role should be fluent in Mandarin and have a strong educational and/or professional background in China studies, particularly Chinese politics, policy, business, economics, and/or recent history.

AI INFORMATION SECURITY SPECIALIST

We are also hiring an AI information security grantmaker. Our work on "AI infosec" includes safeguarding model weights and algorithmic breakthroughs, preventing system poisoning or sabotage, securing training data and compute resources, addressing vulnerabilities across the full machine learning supply chain (from compute resources to MLOps), enabling secure third-party access for audits and evaluations, and ensuring high security standards for other governance-enabling techniques (e.g. international agreement verification mechanisms). This portfolio will likely span technical research, policy development, field-building initiatives, and ecosystem support.
Our previous grants have supported RAND's Meselson Center https://www.rand.org/global-and-emerging-risks/centers/meselson.html (which authored Securing Model Weights https://www.rand.org/pubs/research_reports/RRA2849-1.html), security field-building projects such as Heron https://www.heronsec.ai/, and benchmarks like Cybench https://arxiv.org/abs/2408.08926, CVEbench https://arxiv.org/abs/2503.17332, and BountyBench https://arxiv.org/abs/2505.15216.

CHIEF OF STAFF (AIGP)

We are also hiring a Chief of Staff, who will serve as a close thought partner to Luke Muehlhauser, produce nuanced recommendations for senior leadership, and design organizational infrastructure for the rapidly growing AIGP team. This is a senior generalist role, not a grantmaking role.

Responsibilities

Because these roles share many of the same underlying skills, we encourage candidates to indicate interest in every team that could be a plausible fit. We will evaluate candidates for all of their preferred teams through a single streamlined process, with specific team placement determined in the final stages. Candidates can update their preferences at any point during the round.

Across all our teams, grantmakers might:
• Identify the most important projects and organizations that need to exist, and make them happen. Our grantmaking work is increasingly proactive: scoping out priority projects, finding or developing the right founders, and actively building new initiatives rather than waiting for proposals to come in.
• Investigate grant opportunities. Essentially, a grant investigation is a focused, practical research project aimed at answering the question, "Should this project be funded, and at what level?"
• Own special projects that go beyond grantmaking. Some examples include incubating impactful projects, headhunting founders, and working with for-profit entities to achieve a specific outcome in the world.
• Conduct research to inform program strategy, such as helping Coefficient Giving investigate a new grantmaking area within the global catastrophic risks ecosystem or evaluate the historical cost-effectiveness of a certain kind of grant.
• Build and maintain relationships in the field, ensuring that feedback flows between us, our grantees, and other stakeholders.

We expect to make hires at any of four levels:
• Senior Program Associates investigate funding opportunities and proactively develop new ideas that lead to grants. On occasion, they also carry out research and evaluation work, or other highly impactful non-grantmaking projects.
• Associate Program Officers lead projects with increased autonomy, manage particularly complex projects, own sub-areas of their team’s strategy, and manage Program Associates.
• Program Officers own a significant fraction of their team’s strategy, design and implement new grantmaking initiatives, and/or manage several direct reports.
• Senior Program Officers fully own a program area within one of our larger teams or lead a small team. We’re excited about hiring at this level in exceptional cases.

You might be a good fit for these roles if most of the following applies to you:
• Familiarity with the global catastrophic risks ecosystem. You are interested in, have spent time engaging with, and are motivated to work on catastrophic risks posed by transformative AI and/or biotechnology. (Note: these roles do not require technical expertise; for example, you do not need to be able to replicate a technical paper.)
• Ownership and agency. You are accustomed to taking full responsibility and ownership over poorly scoped projects and/or areas of ongoing work. You will push to make the right thing happen and to move things forward, even if it requires rolling up your sleeves to do something unusual, difficult, and/or time-consuming.
• Ambition and speed. You generate creative, bold ideas to capture more impact, and you’re comfortable moving quickly to execute on ambitious plans that could make a significant difference in the world.
• Social awareness and flexibility. You can effectively communicate and build relationships with people across a wide range of professional contexts. You feel excited by the idea of developing a broad professional network among, and a deep understanding of, key stakeholders relevant to global catastrophic risks, and collaborating with them to deliver on your strategic priorities.
• Critical thinking. You have strong analytical and critical thinking skills, especially the ability to quickly grasp complex issues, find the best arguments for and against a proposition, and skeptically evaluate claims. You should feel comfortable thinking in terms of expected value https://en.wikipedia.org/wiki/Expected_value and reasoning quantitatively and probabilistically about tradeoffs and evidence.
• Good judgment. You can identify and focus on the most important considerations, have good instincts about due diligence and efficiency, and can form reasonable, holistic perspectives on people and organizations.
• Clear communication. You communicate in a clear, information-dense, and calibrated https://coefficientgiving.org/research/efforts-to-improve-the-accuracy-of-our-judgments-and-forecasts/#Calibration_training way, with good reasoning transparency https://coefficientgiving.org/research/reasoning-transparency/, both in writing and in person.

The Chief of Staff’s responsibilities include:
• Acting as a force multiplier for Luke by helping him focus on AIGP’s top priorities, advising him on critical decisions, and owning initial drafts or full segments of his portfolio.
• Project managing large initiatives across AI teams, such as strategy refreshes, public communications, and process updates that require significant stakeholder coordination.
• Working closely with Luke to shape the strategy and structure of the AIGP team, including by identifying new priority areas and deciding who should own them.
• Reviewing internal processes to accelerate the team’s grantmaking and raise the ambition and impact of our grantees.
• Designing and leading hiring rounds end-to-end, from crafting the JD to recommending the final decision.
• Managing grantmakers and other staff at various levels of seniority.

We are open to a wide range of experience levels, from early-career to very senior, and the shape of the role will be tailored to the successful candidate. Strong candidates will combine sharp strategic judgment, a clear track record of ownership and execution, and the ability to effectively support teams as they scale. There is no location requirement for this role.
AIGP is open to remote hires and does not have a strong location preference unless otherwise indicated, though proximity to Washington, D.C. or San Francisco can be helpful given the nature of the policy work.

SHORT TIMELINES SPECIAL PROJECTS

The Short Timelines Special Projects (STSP) team, led by Claire Zabel https://coefficientgiving.org/team/claire-zabel/, focuses on driving forward new projects that seem likely to be especially important if timelines to transformative AI are short.

The STSP team’s remit is extremely broad, and its mission is to ensure that if money can improve outcomes in a world of short AI timelines, it does. Relative to other teams, STSP is less focused on grantmaking in particular (though we view grantmaking as a valuable tool) and is more open to using a zoomed-out position and financial leverage to "do what it takes". In addition to grantmaking, STSP is open to conducting research, incubating impactful projects, supporting coordination efforts, headhunting, working with for-profit entities, and more.

The STSP team is particularly interested in hiring people with the following traits:
• Openness to working on a wide variety of projects rather than specializing deeply in one sub-area.
• A relatively deep and broad understanding of AI strategy and AI futurism.
• Excitement about getting new projects off the ground and working in relatively unscoped areas with few collaborators.

The STSP team prefers candidates to be based in the San Francisco Bay Area, but is open to considering candidates in other locations. We’ll support candidates with the costs of relocation to the Bay.
GCR CAPACITY BUILDING

The Global Catastrophic Risks Capacity Building (GCRCB) team focuses on growing and strengthening the fields of researchers and practitioners working on Navigating Transformative AI https://coefficientgiving.org/funds/navigating-transformative-ai/ as well as the broader ecosystem https://coefficientgiving.org/funds/global-catastrophic-risks-opportunities/ concerned with global catastrophic risks. We believe capacity building is extremely leveraged, and much of the GCR team’s total impact to date is downstream of the GCRCB team’s earlier work.

Relative to other teams, the GCRCB team focuses more on building infrastructure and pipelines to attract new talent and support people and organizations seeking to address risks from transformative AI and biosecurity. Compared to other roles in this round, the GCRCB team’s work is often better suited to a generalist mindset (as opposed to deep, narrow specialization). If you’re excited to think about a wide range of ways to support the broad GCR ecosystem and remain flexible about how you deliver impact, the GCRCB team might be a good fit. This is both because GCRCB’s own portfolio is diverse, and because GCRCB often helps other teams with their grantmaking.

The GCRCB team is particularly interested in hiring people with direct experience in GCR-relevant fields, such as AI safety, biosecurity, or related research and nonprofit communities.

GCRCB prefers candidates to be based in the San Francisco Bay Area, but is open to considering candidates in other locations. We’ll support candidates with the costs of relocation to the Bay.

BIOSECURITY AND PANDEMIC PREPAREDNESS

The Biosecurity and Pandemic Preparedness (BPP) team, led by Andrew Snyder-Beattie https://coefficientgiving.org/team/andrew-snyder-beattie/, funds work to reduce the risk of catastrophic biological events, particularly those arising from the deliberate misuse of biotechnology.
BPP’s work focuses on prevention (including biosecurity policy and governance, and safeguards to mitigate AI-enabled bio risks), response in the event of a catastrophe (especially personal protective equipment, biohardening, and detection), and field-building to support the broader ecosystem working on these problems.

Compared to the other teams hiring in this round, BPP focuses specifically on catastrophic biological threats, while still supporting a wide range of approaches within that space. The team’s work is often entrepreneurial, with a strong emphasis on identifying gaps and helping to launch new organizations or initiatives to address them. While familiarity with biosecurity, public health, or biotechnology can be helpful, it is not required, and many of BPP’s most effective grantmakers come from non-bio backgrounds. That said, for some areas of work, particularly at the intersection of AI and biology, relevant technical or domain expertise can be a significant asset.

For this round, BPP is particularly interested in hiring individuals to work at the intersection of AI and biology, or on biosecurity field-building.

BPP prefers hires to be in-person in Washington, D.C. While significant remote work is possible, we want hires to spend meaningful time in D.C. to connect with the team and our network, and would cover expenses for such trips. Regardless of primary location, the role may occasionally require travel to other locations, both within the U.S. and internationally.

GCR EXECUTIVE TEAM

The GCR executive team, led by Emily Oehlsen https://coefficientgiving.org/team/emily-oehlsen/ and George Rosenfeld https://coefficientgiving.org/team/george-rosenfeld/, leads the Global Catastrophic Risks division. The team sets high-level strategy, manages the program teams, and owns high-priority special projects to ensure the division can run hard at its goals of navigating transformative AI and reducing catastrophic biorisk.
The executive team is responsible for the whole division, and works on projects and decisions that determine which sub-fields end up being prioritized, how well the team can execute, and how hundreds of millions of dollars are spent.

Unlike most other roles in this round, this is a senior generalist role rather than a grantmaking role. It provides direct access to senior leadership and decision-making across the division, has the potential to amplify the impact of the whole GCR team, and offers a platform for leadership and strategy work.

The team is open to candidates at a wide range of experience levels, from early-career to very senior, and the shape of the role will be tailored substantially to the successful candidate. Responsibilities might include:
• Preparing Emily and George for meetings with high-stakes external stakeholders (e.g. major funders, frontier AI companies, senior policymakers).
• Writing strategic memos that shape major decisions for the division, such as identifying the most important neglected priorities and recommending how to address them.
• Reviewing internal processes to accelerate the division's grantmaking and drafting communications to raise the ambitions of grantees.
• Identifying opportunities to use AI to automate significant parts of the team's work, leveraging growing AI capabilities to keep pace with a rapidly changing world.
• Identifying and executing major pivots and special projects the division should undertake in the run-up to transformative AI.
• Helping to set division-wide strategy, including on questions like the division's house view on AI timelines and how the team should respond to major shifts in the AI landscape.
• Leading investigations to identify the division's next area of expansion, and potentially spinning up a new team to run it.
• Managing team leads or other senior staff, and developing outreach strategies to recruit senior talent into the division's highest-priority open positions.
Depending on experience, successful applicants will be hired at the Associate Program Officer, Program Officer, or Senior Program Officer level. The team is open to considering alternate titles based on candidate seniority and preference (e.g. Special Projects Lead, Chief of Staff, Strategy Lead). Strong candidates will combine sharp strategic judgment, a clear track record of ownership and execution, and the ability to push teams to aim higher and move faster.

There is no hard location requirement for this role, though the team prefers candidates based in (or willing to relocate to) Washington, D.C., London, or the San Francisco Bay Area, and/or candidates who are able to travel frequently to those locations to facilitate regular in-person work.

Benefits

Compensation: Baseline starting compensation for each level is as follows. In every case, 15% of total compensation is paid as an unconditional 401k grant, up to a cap of $24,500:
• Senior Program Associate: $151,800 – $189,000, depending on team and location.
• Associate Program Officer: $204,500 – $254,000, depending on team and location.
• Program Officer: $225,000 – $280,000, depending on team and location.
• Senior Program Officer: $257,000 – $319,500, depending on team and location.
• Chief of Staff (AIGP team): $231,000 – $280,000, depending on seniority and location.

All compensation will be distributed in the form of take-home salary for internationally-based hires.

Time zones and location: We offer remote work in many countries, and we are open to hires outside the U.S. We’ll also consider sponsoring U.S. work authorization for international candidates (though we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval).

Benefits: Our benefits package includes:
• Excellent health insurance (we cover 100% of premiums within the U.S. for you and any eligible dependents) and an employer-funded Health Reimbursement Arrangement for certain other personal health expenses.
• Dental, vision, and life insurance for you and your family.
• Four weeks of recommended PTO per year.
• Four months of fully paid family leave.
• A generous and flexible expense policy: we encourage staff to expense the ergonomic equipment, software, and other services they need to stay healthy and productive. This policy also includes a productivity benefit, which provides a set amount for staff to expense items that enhance their productivity.
• A continual learning policy that encourages staff to spend time on professional development, with related expenses covered.
• Support for remote work: we’ll cover a remote workspace outside your home if you need one, or connect you with a Coefficient Giving coworking hub in your city. We currently have offices in San Francisco and Washington, D.C., and multiple staff working from several other cities in the U.S. and elsewhere.
• We can’t always provide every benefit we offer U.S. staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).

Start date: The start date is flexible, and we may be willing to wait an extended period for the best candidate, though we’d prefer successful candidates to start as soon as possible after receiving an offer.
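To make the compensation structure concrete, here is a minimal sketch of how the salary/401k split works, assuming (our interpretation, not confirmed by the posting) that the 15% grant is computed on total starting compensation and capped at $24,500, with the remainder paid as salary. The function name and rounding to whole cents are illustrative, not part of the official policy.

```python
def split_compensation(total: float, grant_rate: float = 0.15, cap: float = 24_500.0):
    """Split total starting compensation into (take-home salary, 401k grant).

    The grant is grant_rate of the total, rounded to cents and capped at `cap`;
    this split is an illustrative assumption, not official policy.
    """
    grant = min(round(total * grant_rate, 2), cap)
    return total - grant, grant

# Senior Program Associate, bottom of the band: 15% of $151,800 is $22,770 (under the cap)
print(split_compensation(151_800))  # → (129030.0, 22770.0)

# Program Officer, top of the band: 15% of $280,000 is $42,000, so the cap binds
print(split_compensation(280_000))  # → (255500.0, 24500.0)
```

Note that for any total compensation above $24,500 / 0.15 ≈ $163,333, the cap binds and the grant is a flat $24,500.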
