Engineered pathogens pose a grave threat to society, plausibly constituting an existential risk[1] (‘x-risk’) to humanity.[2]
Yet remarkably few people are working full-time on this problem. By my count, there are ~160 people on the planet whose full-time job is reducing bio x-risk.[3] This entire group could fit on a single short-haul flight. As one point of comparison, something like 200+ organizations are working on AI safety – a problem area that is neglected in its own right (relative to its scale).[4]
This is bad. We have a short bench, and only limited time to solve this problem: AI is advancing quickly, and in the near term I expect advances to help attackers more than defenders.
We urgently need more talented people who want in on this mission. We’re especially short on people who can truly own crucial projects, approaching them with good judgment, entrepreneurial drive, and enough desperation and impatience to see them through to completion quickly.
But what should these people actually… do?
Below is a list of 10 important projects that I’d be excited to see people work on and own. These projects cover:
- Physical defenses: PPE stockpiles, DIY cleanrooms and respirators, particle monitoring
- Prevention and policy: red-teaming safeguards, AI-bio demos for policymakers, food stockpile expansion, mirror life policy
- Field-building: raising awareness, headhunting project leads
The list draws on ideas my Coefficient Giving team’s lead, Andrew Snyder-Beattie, discussed in a recent podcast and blog post. Many of these wouldn’t appear on traditional lists of biosecurity projects, but they are all pressing gaps that my colleagues and I see at the moment. You can read a bit more about how they fit into the bigger picture in the appendix.
I can’t take credit for these ideas, as the list draws heavily on discussions with my team and others in the field. Special thanks to Aman Patel, Adin Richards, and Damon Binder for their contributions. And kudos to the many grantees, collaborators, and colleagues whose ongoing work means several other projects didn't need to appear here — the list would be longer and more daunting without them.
This list isn't exhaustive, and it isn't ordered by priority — not all of these are equally pressing, but I consider all of them pressing in absolute terms. I see this list as offering strong hypotheses rather than final answers, so end-to-end ownership of any of these projects should include making sure it's actually the right thing to work on, not just taking my word for it.
And while many of these projects will benefit from specific technical expertise and credentials, those are rarely the binding constraint on progress. It’s a common misconception that doing good biosecurity work requires a biology background. That’s really not the case. The main bottlenecks are commitment, sound judgment, and the generalist agency to just get things done. People with those qualities can usually find their way into a problem, working out an approach that fits their background and assembling any technical help they need along the way. My team is eager to back promising people and projects (with funding, introductions, advice, etc.).
If anything here resonates, some immediate concrete things you could do:
- Fill out this form to register your interest in this work and stay posted on opportunities
- Apply to our team’s open request for proposals
- Closes on May 11 @ 11:59pm PT – 500-word application, shouldn’t take >1h
- Request career development and transition funding
- Anything from one-off travel support to many months of full-time living stipend and resources
- Subscribe to the GCBR Org Updates newsletter and apply to opportunities highlighted there
- Apply to work with us!
Projects I’d be excited to see people work on and own
1. Ensure PPE stockpiles and emergency distribution systems are in place in X country
Why it matters
In combination with social distancing, supplying the critical workforce with effective respirators could form the backbone of a minimum viable path to keeping society afloat in a high-lethality human-to-human pandemic. Stockpiling sufficient personal protective equipment (PPE) may even be doable at the scale of private philanthropy. One of our grantees — ProEquip — is already working toward stockpiling and distributing enough PPE for essential workers in the United States. It would be valuable to expand this work to other countries, especially those with the industrial capacity to develop and export medical countermeasures.[5] In some cases, it might be easiest to expand government-funded stockpiles; in others, a philanthropic model might be more tractable.
Tentative victory condition
By end of 2027, at least three countries (combined population ≥500M) maintain effective stockpiles of enough elastomeric half-mask respirators (EHMRs), N95s, and/or powered air-purifying respirators (PAPRs) to cover all critical workers for ≥6 months.
2. Ensure systematic red-teaming of gene synthesis and AIxBio safeguards
Inspired by work from Sentinel Bio, particularly Joshua Monrad
Why it matters
Can gene synthesis providers and frontier AI companies successfully block bad actors from getting access to dangerous materials and information? One-off audits have found concerning holes in recent years, which has influenced both these companies’ systems and policymaking at the highest levels of government. But basically no one is systematically doing adversarial red-teaming on an ongoing, full-time basis. This means policymakers, businesses, and the public are flying blind as to whether the safeguards in place actually work.
Tentative victory condition
By mid-2027, there is a standing, well-resourced red-team unit running regular “in-situ” adversarial testing of gene synthesis providers, frontier AI labs, and other important biotech chokepoints (e.g., pathogen repositories, contract research organizations) across multiple jurisdictions. They coordinate with industry, law enforcement, and other stakeholders, and their findings are responsibly disclosed and reliably routed into provider-level fixes and regulatory action.
3. Create and use demos to communicate AI-bio uplift to policymakers
Why it matters
Most senior policymakers haven't seen what frontier AI models can do for someone trying to design a pathogen. While skimming a whitepaper can be informative, watching a model coach a non-expert through a tricky plasmid design can be much more visceral. Demos have shaped policymaking before, but nobody is doing this on a sustained basis as capabilities evolve. Doing demos well – in a way that is accessible and calibrated, neither overstating nor understating capabilities – will take a lot of effort, skill, and expertise.
Tentative victory condition
By mid-2027, a standing "AI-bio demo team" – with credible biology expertise, well-honed materials, and calibrated practices around information hazards – has delivered briefings to senior staff in all relevant Congressional offices and executive-branch agencies, plus counterparts in at least 5 other major countries/jurisdictions. They are doing this on a continuing basis, updating as model capabilities advance.
9. Ensure strong, sensible mirror life policy is in effect in X country
Why it matters
If ever created, mirror bacteria — built from scratch with mirror-image DNA, amino acids, etc. — would likely evade many of our immune defenses and grow largely unchecked by natural predators. We have reason to be concerned they could persistently saturate outdoor soil, water, and air, going on to kill humans and much other multicellular life. Scientific consensus today is to never build mirror bacteria, but norms get broken (e.g., He Jiankui in 2018), and the precursor technologies are advancing fast. There's a growing international conversation about this risk, spanning United Nations agencies, national governments, and the relevant scientific communities, with the Mirror Biology Dialogues Fund and others helping coordinate. But only about 10 people work on this full-time today, meaning someone serious could plausibly join the effort as a strong voice on mirror life policy in their country within months.[6]
Tentative victory condition
By end of 2028, all jurisdictions with credible technical capacity to develop mirror life have binding restrictions on mirror-cell synthesis and the most dangerous precursor technologies.
5. Develop rigorously tested DIY protocols for converting bedrooms into cleanrooms
Why it matters
In a scenario where lethal pathogens persist in the environment (e.g., mirror bacteria), outdoor air might be heavily contaminated with infectious aerosols. To survive, people would need to reduce the amount of outdoor particles they inhale by orders of magnitude, potentially for months on end. Stockpiling purpose-built cleanroom equipment for the whole population is wildly impractical.[7] Fortunately, DIY solutions that use abundant materials (e.g., fans, filters, tape, blankets, vacuum cleaners) have a good shot at working. Slapdash preliminary tests by colleagues, using just tape and towels, have already achieved a ~30x reduction in the hardest-to-filter particle sizes. But we still don’t have anything close to validated, expert-endorsed protocols that an emergency manager could responsibly push out to untrained households on day one.
Tentative victory condition
By end of 2027, there exists a publicly available, peer-reviewed, data-backed "Nuclear War Survival Skills"-style guide, showing ≥1000x sustained PM10 reduction when followed by untrained households for ≥7 days.
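To get a rough feel for what a “≥1000x reduction” target demands, here is a minimal well-mixed single-room box model. This is a standard simplification, and every parameter value below is an illustrative assumption on my part, not a measurement from the tests mentioned above:

```python
def indoor_outdoor_ratio(leak_m3h: float, filter_m3h: float, filter_eff: float) -> float:
    """Steady-state indoor/outdoor particle concentration ratio.

    Well-mixed single-room model: outdoor air infiltrates unfiltered at
    leak_m3h (m^3/hour), while an indoor unit recirculates filter_m3h of
    room air through media that captures the fraction filter_eff of
    particles per pass. Deposition and other particle sinks are ignored,
    so the estimate errs on the pessimistic side.
    """
    return leak_m3h / (leak_m3h + filter_eff * filter_m3h)


# Illustrative numbers: a taped-up bedroom leaking 2 m^3/h of outdoor air,
# with a box-fan filter moving 400 m^3/h at 60% single-pass efficiency.
reduction = 1 / indoor_outdoor_ratio(2.0, 400.0, 0.6)
print(f"{reduction:.0f}x reduction")  # prints "121x reduction"
```

Under these toy assumptions, hitting 1000x is mostly a sealing problem: at fixed filtration, halving the leak rate roughly doubles the reduction factor, which is part of why the tape-and-towels side of the protocol matters as much as the fans and filters.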
6. Develop rigorously tested DIY protocols for making respirators out of common household materials
Why it matters
Even with good DIY cleanrooms, people in an environment-to-human threat scenario couldn’t stay inside forever. Critical workers would need to leave home to keep water and power running, distribute essential supplies, and manufacture countermeasures. Most other people would need to go out from time to time too, if only to do maintenance work on their homes. Before a crisis hits, we should stockpile at least enough PPE for critical workers (see #1). But if there ends up not being enough PPE to go around, we need rigorous, well-tested protocols that untrained individuals can use to make effective PPE themselves. DIY “mask hacks” from 2020 (like these) were a good start, but we need people to push DIY respiratory protection to its limit.
Tentative victory condition
By end of 2027, there exists at least one validated DIY protocol that, when followed by untrained people using only items present in >80% of US households, reliably delivers ≥1000x PM10 reduction for ≥4 hours of continuous wear.
7. Scalably monitor particle concentrations inside clean spaces and respirators
Why it matters
In a crisis, untrained people would greatly benefit from fast, unambiguous feedback about whether their bedroom-cleanroom or DIY respirator is actually working — otherwise they may think they're protected when they actually aren't, or abandon protections that are working just fine. Existing particle counters typically cost thousands of dollars[8], generally aren't designed for stockpiling or in-respirator wear, and have no manufacturing plan suited to a crisis ramp-up.
Tentative victory condition
By end of 2027, there exists a working sensor design that accurately detects ≥1000x indoor/outdoor PM10 differentials, can be worn continuously inside a respirator, and either costs <$1 per unit to stockpile or has a validated path to surge manufacturing.[9]
8. Boost countries’ food stockpiles by >25%, especially countries with high industrial capacity
Why it matters
A worst-case environmental pathogen (like mirror bacteria) could make outdoor agriculture impossible for a year or more. Extensive research by my colleague Adin Richards has shown that non-agricultural food sources, like methanotrophic bacteria and algae grown in photobioreactors, could be a path forward even if we couldn’t grow plants outside anymore. The catch is that it would take a lot of time to get the necessary infrastructure up and running, straining our food reserves. Strategically enlarging food stockpiles could be a tractable lever for increasing societal resilience to this sort of threat, especially in countries otherwise well positioned to successfully establish non-agricultural food production.
Tentative victory condition
By the end of 2027, at least 3 major industrial economies have increased their effective food stockpiles by at least 25%.
9. Raise awareness of engineered pandemic risks, especially among people who can do something about it
Why it matters
Relative to the scale of the problem, serious public discussion of catastrophic threats from engineered pathogens is virtually non-existent.[10] This holds back both talent recruitment and serious policy action. We need people to bring this issue to life for relevant audiences without sliding into sensationalism or misrepresenting the facts.[11] It should be common knowledge that this is one of the major challenges society faces today.
Tentative victory condition
By the end of 2027, at least 3 journalists have this as their full-time beat, at least 10 Substacks write about the problem regularly, and the average university graduate lists it among the 10 most pressing threats facing society.
10. Headhunt the leads for these and many other projects
Why it matters
Simply put, nothing will happen if nobody does it. Projects need leadership. The people who will go on to lead these projects are out there. We need people to find them and accelerate their paths to doing the most impactful work on this issue. This could look like a lot of different things, including starting a headhunting org, working with an existing fieldbuilding operation to drive this forward (e.g., BlueDot Impact, 80,000 Hours, Coefficient Giving), or even (meta!) persuading someone else to take it on as their top priority.
Tentative victory condition
By the end of 2026, each of the above projects has at least one capable “owner” who sees it as their personal responsibility to make it happen.
***
As I noted up top, this is not an exhaustive list, but it points at many of what I see as the biggest gaps. There is a lot of important work to be done, and not many people doing it. If you want to help change that:
- Fill out this form to register your interest in this work and stay posted on opportunities
- Apply to our team’s open request for proposals
- Request career development and transition funding
- Subscribe to the GCBR Org Updates newsletter and apply to opportunities highlighted there
- Apply to work with us!
(And to learn more about biosecurity, I designed this reading list as a place to start.)
Appendix: How this all fits into the bigger picture
When I take calls with people new to the field, I often give some version of a whirlwind “biorisk 101” spiel, where I quickly try to put things in context, at least as I see it. Here, I’ll share that spiel in sketch form to give a sense of how I see the above projects fitting in.
- The biggest risks come from humans (state or non-state actors) or rogue AIs, or some combination of these
- Russia and North Korea apparently have had active biological weapons programs for decades.[12] (The Soviets did some wacky stuff.)
- Al Qaeda pursued biological weapons. Aum Shinrikyo is a well-documented example of a group that seriously tried to develop and use biological weapons in service of an apocalyptic vision.
- Rogue AIs are of course a more speculative actor, but worth taking seriously in my opinion. Conditioned on a rogue AI existing, it’s plausible that a biological weapon of some kind would be among the cheapest ways for it to extort and disempower humanity, given how few industrial inputs it requires relative to other mass-casualty weapons like nukes.
- Deliberately weaponized pathogens worry me a lot more than lab leaks or naturally emerging pathogens
- With the exception of mirror bacteria, I think it’s unlikely well-intentioned researchers would accidentally stumble into a pathogen design that threatens extinction. Something super scary would probably be pretty weirdly optimized for harm. And Mother Nature just isn’t “the ultimate bioterrorist.”
- Nuance: Lab leaks from biological weapons programs could be scary
- I’m principally concerned with two classes of engineered biological threats:
- Human-to-human transmissible
- This is usually what we talk about when we talk about pandemics. Infected people emit infectious particles; susceptible people breathe them in and get sick.
- Good PPE for essential workers might be sufficient to prevent the worst-case scenarios.
- Environment-to-human transmissible
- This is more exotic. Malaria and other zoonoses can be considered narrow examples of this class, where the pathogen comes ‘from the environment’ rather than from another person directly. More worrying are threats that could persist broadly in the environment as an ambient respiratory threat. Valley fever is one existing example in this category, and mirror bacteria would risk being far worse.
- This class of threats is more concerning than human-to-human as a potential cause of extinction, and requires more effort to get on top of, like the DIY ‘biohardening’ efforts described above.
- One framework for thinking about interventions is Prevention, Detection, and Response[13]
- Prevention
- I expect the most impactful things are (1) preventing anyone from creating mirror bacteria, (2) blocking bad actors’ access to dangerous nucleic acid synthesis products, and (3) reducing the extent to which AI provides uplift to bad actors. It also could be quite important for states to enhance their intelligence agencies’ work on biological threats, so as to boost their chances of detecting and characterizing state and non-state biological weapons activities. Overall, except for (1), I see these interventions as relatively likely to fall short compared to “right of boom”-focused projects like PPE stockpiling. However, conditioned on success, they’d be among the best things that could be done: as they say, an ounce of prevention is worth a pound of cure. So I’m excited to see a lot more work here.
- Detection
- Early warning is the name of the game: how can we minimize the time between when a worst-case pathogen is released and when a full-scale response is triggered? I’m excited about the work SecureBio – a grantee of ours – is doing to build out a pathogen-agnostic detection system, and I think it’s important for governments to invest more in this type of capability and figure out how to move in a swift, calibrated way from initial potential detection to whole-of-society response.
- Response
- The most important thing is to make sure people don’t get exposed to pathogens. For human-infecting pathogens, if they can’t physically touch you, they can’t hurt you, and breathing them in is generally the infection route that’s hardest to defend against. Key interventions here are PPE (that protects both the wearer and those around them) and mechanisms to prevent pathogens from getting indoors or transmitting between people (e.g., behaviour change, filters, positive air pressure, disinfectant vapors, ultraviolet light). PPE is the crucial tool for a threat that transmits human-to-human, whereas the full stack is needed for an environment-to-human pathogen.
- Medical countermeasures are a secondary priority right now mainly because they take so long to develop, test, produce, and distribute. (Also, given how much money and effort gets thrown into pharmaceutical R&D already, they’re likely not as tractable at current margins as non-pharmaceutical interventions.) I’m excited for more work on diagnostics and on non-agricultural food production, but see these as secondary as well.
- There’s also a set of “meta” interventions that support object-level work
- As discussed above, I’m excited to see more people growing and supporting the field of people working on defending against bio x-risks, and also raising awareness of these risks. This could look like running events, starting fellowships, publishing content, creating common resources, and recruiting talented people to work on these issues.
- Taking a step back, why are these risks so neglected anyway?
- I think a lot of it is historically contingent, having to do with how the broader field of biosecurity has developed. The public health field has admirably banged the drum about pandemic risks for decades, but has largely seen engineered threats as out of scope or relatively low priority. The biodefense community, on the other hand, is very concerned about engineered biological weapons (especially those that have been pursued by historical bioweapons programs e.g., multi-drug-resistant tularemia), but many in that community have been reluctant to focus on threats whose effects extend far beyond the military theatre.[14] Anecdotally, I’ve observed that many people in these fields haven’t yet internalized how much more powerful biotech is now compared to the turn of the millennium – especially now with AI as fuel for progress. Meanwhile, I’ve observed that the scientists closest to these advances tend not to think much about the security implications.
- Of course, a lot of this is also just the fact that the risks I talk about here – the sort of exotic stuff that could threaten civilization – have so far been “science fiction.” While infectious diseases have killed horrifying numbers of people, the kind of weapon I worry about has never been built. Mirror bacteria serve as a credible example of a potentially devastating biological innovation, and concerning experimental results with other pathogens have been published. But many people do not extrapolate from this fact pattern to concern about tail-risk scenarios, usually because of different empirical or normative beliefs. While I think reasonable people can differ somewhat in their beliefs about this stuff, I do think it’s quite common for people to err in giving too little attention to catastrophes whose chance of happening is on the order of 0.1-10%, especially those where the largest portion of the risk lies >12 months in the future.
Again, this is all just a sketch of how I think about these things (with a lot of my thinking informed by work from my colleagues). Feedback is welcome!
One definition of existential risk that I like is Michael Aird’s 2020 definition, which he adapts from Toby Ord: “An existential risk is a risk that threatens the destruction of the vast majority of humanity’s long-term potential.” There’s a lot of nuance one could get into about what counts and what doesn’t. Personally, my thinking here isn’t sophisticated, and in practice, I approximate x-risk as “the risk of all humans dying.” ↩︎
To give a sense, Andrew Snyder-Beattie, who leads the team I’m on at Coefficient Giving, has estimated a 1-3% chance of a pandemic that threatens human survival within our lifetimes. ↩︎
In getting this rough count, I excluded people for whom biosecurity is just one component of their job, and people whose work is somewhat helpful for reducing bio x-risk but not directly aimed at solving the problem. The tally does include people who are working at orgs squarely focused on bio x-risk, even those personally more motivated by the non-x-risk impacts of their projects. ↩︎
I asked Claude Opus 4.7 to do this count for me, then cross-checked it with a colleague on Coefficient Giving’s GCR Capacity Building team. This site gives a sense of the general level of activity in AI safety. This analysis provides an additional perspective. ↩︎
ProEquip would be a great team to talk with as a first stop if you're interested in working on this project – could look like either starting a new initiative or joining their team. ↩︎
Note: Broadly speaking, attempts to shape policy have real potential to backfire. I’d strongly encourage anybody interested in this project to think carefully about their approach and reach out to the Mirror Biology Dialogues Fund to get the current lay of the land and discuss how to most productively contribute. ↩︎
NB: I’m also excited about the project of getting air cleaning deployed widely before a pandemic hits. While I think the DIY goal is more neglected and tractable on short timelines, I’d be pumped for more people to take on the “peacetime deployment” mission and approach it with due urgency. (Thank you to Jeff Kaufman for feedback leading to this inclusion.) ↩︎
Possibly the goal should instead be to have two sensors: one for room inside-outside differentials, another for respirator inside-outside differentials. ↩︎
Historically, I’ve urged people to be very cautious about info hazards, especially in light of the unilateralist’s curse. I still think caution is warranted, e.g., I continue to discourage folks from sitting around idly brainstorming the best, most creative ways to kill everyone with pathogens. But I also think I and others have indexed too hard on info hazard concerns in the past, placing too little weight on the costs of caution in the cost-benefit calculation. Also, relative to the past, I think the costs of secrecy now are higher (since we are working against a tight deadline), and the costs of sharing information are lower (since open-source models may end up pointing bad guys to the most concerning information anyway). ↩︎
It’s also important that, when considering which information to put out there, public communications weigh the relative costs and benefits. The vast majority of information is totally fine to share, but some information could inadvertently cause harm. ↩︎
However, a recent US intelligence community report seems to imply that Russia does not have an active biological weapons program. I’m not sure what to make of this. ↩︎
Prevention, Detection, Response is a common framework in global health security. It was also recently modified to Delay, Detect, Defend by Kevin Esvelt, in the context of protecting against engineered pandemics. ↩︎