
Peter Wildeford on Forecasting

Michaël: Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus. We talk about the probability of London getting nuked, Rethink Priorities, and why EA should fund projects that scale.

Michaël: Before we go into details of forecasting, maybe we could start with a definition of forecasting for our listeners that are not very familiar.

Peter: Yeah. I think it’s easy to talk about, just hard to do, but basically you’re just trying to tell the future: decide what events will happen in the future given what you know now. And obviously since no one can perfectly see the future, you want to think probabilistically about what is likely or unlikely to happen. Ideally, you would give a probability assessment as a percentage, like 80% or 30%. And based on that, you can express ideas about the future at different levels of confidence. And from there, there’s a whole system of reasoning and vetting and optimal decision making that can happen based on forecasting. So it’s pretty cool. You can do risk management, all sorts of different things.

Michaël: It’s a great way to make decisions and improve decision making by creating a good map of what could happen. And I think it’s especially important since COVID was kind of unexpected, and that’s when forecasting took off. Right?

Peter: Yeah. I think forecasting definitely started to interest me a lot in the run-up to COVID. I definitely wish that I had been as active in forecasting then as I am now, because I think that was a really great learning opportunity. I think anytime our society’s faced with a very unexpected event, it definitely renews a lot of interest in seeing how we can detect these sorts of events in the future.

Michaël: People were also surprised by Putin getting really angry and starting to launch nuclear threats. And I think at that time people were very surprised and tried to do a bunch of forecasts on where it could go. And he started pushing the limits of what people were expecting. Did you get that impression as well?

Peter: Yeah. There’s definitely a lot of uncertainty and instability with regard to Putin’s actions, but I think a lot of them have been foreseeable. There certainly have been forecasters, including myself, that thought there was a very high risk of invasion from Putin as early as December of last year. I think at that point I had put like a 55% to 60% chance that Putin would launch an invasion, just based on some of the things that Russia was saying, the mass force buildup, and also just the general history of Ukraine and Russia. Since the war unfolded, I was definitely very happily surprised at how well Ukraine defended against Russia. I’m definitely curious and a bit worried about how that will continue to play out: if Russia can’t get some sort of win to show for all their war efforts, they might end up doing increasingly destructive and destabilizing things.

Michaël: Right. They might even use chemical weapons and keep on destroying houses and civilians’ properties. I’ve seen you’ve recently been active on Metaculus, trying to predict the use of chemical weapons. Did you actually get to do a prediction on this?

Peter: I predicted whether Russia would use chemical weapons. I think it’s pretty important to look at that question pretty carefully and understand what is meant by a chemical weapon. When I think of using chemical weapons, I think of what Syria did in terms of battlefield use of, say, chemical gas or other agents on military targets or even civilian targets. But there’s actually a lot of gradations to chemical weapons, and it could include just poisonings, even non-battlefield uses of poison. And non-battlefield use of poison is actually something Russia likes to do a lot. I think Russia’s probably the most prolific poisoner of all the powers in the world today.

Peter: So I thought that there was a 40% chance that this question would resolve correctly, with there being a clear case of two permanent members of the UN Security Council assessing that Russia has used a chemical weapon. But I thought that a vast majority of those scenarios would be non-battlefield uses of poison attributed to Russia, rather than Russia doing a chemical weapon attack on the battlefield with, say, chlorine gas or something similar to what you may have seen from Syria. Which, I mean, resolves the question all the same, but in my opinion is significantly more serious and worrying. So I think the vast majority of my assessment comes from non-battlefield poisonings technically being a chemical weapon.

Michaël: Interesting. You said 40%, right?

Peter: Yes.

Michaël: So, how do you end up with this number? Is it mostly from intuition or do you apply some methodology, using a base rate and then trying to extrapolate?

Peter: Yeah. It definitely is very intuition-driven. I think you can know basic facts about Russia’s use of chemical weapons. So, recently they did an attempted assassination with poison in 2018 in the Skripal case, and then again in 2020 with Navalny, which didn’t succeed, but they used poison nonetheless. And then they’ve been alleged to use or facilitate the use of chemical weapons in Chechnya in 1999 and 2002, and potentially again in assisting Syria during the civil war in 2017. Though I think none of those three cases have been proven to the standards of this question. And you might think that Russia would again be highly motivated to poison enemies, given that they’ve done this before and now face strong incentives to do so again.

Peter: I guess the main thing holding me back from thinking this is likely to happen is just the fact that it requires a pretty high standard of proof. You have to have two permanent members of the UN Security Council make a definitive statement, which may be hard to come by, or have at least six prominent news sources, such as The Economist or The New York Times, make a definitive statement. And I think there are going to be a lot of cases where there’s a poisoning, but it’s hard to explicitly attribute it to Russia, because they’re usually pretty good at covering their tracks to some extent. So, based on that, I came to 40%: more likely not to happen than to happen, but still quite likely. It’s also pretty close to the current community median prediction of 34%.

Michaël: Essentially you’re saying that, taking into account previous use of chemical weapons to assassinate or poison, you could say that it’s higher than 50%, but because the question is quite strict about the two permanent members of the UN Security Council, you tend to put it lower than 50%, because it’s a very hard criterion to meet.

Peter: Yeah, that’s right. And also, I guess if you’re using a strict base rate, two times out of the past five years would also suggest something like a 40% rate. Though that wasn’t the main reason I chose 40%.

Michaël: So maybe we can move from this one to another prediction that is a bit more complex, with different parts that we can decompose. One thing we talked about was the probability of having a nuke in London killing people living there. There was a question on the Effective Altruism Forum about the probability of this happening. I think they have criteria where they multiply by the proportion of people that were killed. So if maybe half of the people in London were killed, then you get half a chance of dying if you’re there. That’s kind of a detail, but how would you approach decomposing a question like this into different parts where you get conditional probabilities?

Peter: Yeah. Definitely the decomposition that you mentioned is the key to answering such a complex question: basically you want to be able to break things down into different parts. So I guess first, in order for there to be a bomb in London, you can also just ask, what’s the chance of a nuclear bomb being detonated anywhere? And then you can say, okay, given that a bomb has been detonated somewhere, what’s the chance of it being in London? So that’s decomposition with a conditional forecast. And then if you’re asking what’s the chance of a nuclear bomb being detonated offensively, you might ask what’s the chance of there being war conditions that might lead to a nuclear exchange, such as a formal war between NATO and Russia.

Peter: And so then you can really get into it. I think there are also two main pathways to nuclear war. There’s certainly intentional nuclear war, which is: NATO and Russia get into some sort of conflict, Russia decides to launch a nuclear weapon as a part of that conflict, and then they decide to launch that weapon at London, in addition to, or instead of, other targets, with the UK obviously being part of NATO. Then there’s also accidental nuclear war, which would be: Russia mistakenly thinks they’re being nuked and they decide to launch all their nukes in retaliation, to try to get a second strike in before they’re no longer able to strike, and one of those nukes hits London. So you might want to go down both branches of that. And then, I guess, you can just start assigning numbers.

Peter: So, maybe there’s like a 5% chance that there would be some conflict between Russia and NATO. The reason that’s so low is that NATO is trying very hard not to be directly involved in the conflict, and Russia really has nothing to gain and everything to lose by getting into a direct conflict with NATO, because Russia is so much weaker than NATO militarily. And then, if there’s a war, what’s the chance nuclear weapons would be used? That’s really hard to say. I would hope that, with mutually assured destruction, there’d be a lot of pressure and incentives on both sides to keep the conflict non-nuclear, because they’d know that a nuclear conflict would inevitably lead to the destruction of both countries. No one really wins in a nuclear war. So you’d probably see lots of attempts to end the war peacefully if possible, or to find some way to keep the conflict conventional.

Peter: For example, Russia is attacking Ukraine right now, but they’re not nuking Ukraine. It’s purely with conventional weaponry. So it’s certainly possible that a war might stay conventional. Just to throw a number out there, say a 40% chance that it would go nuclear. And then once you have a nuclear war, it’s also not a guarantee that they would hit London. They may only want to hit military targets and avoid civilian targets, mainly out of fear of retaliation. That’s called counterforce targeting, where you eliminate military targets, versus countervalue targeting, where you go after huge cities. And so it’s not a guarantee that a nuclear war would lead to hitting major population centers. In fact, I would expect that it would be preferred to be avoided, so maybe 60% or something. And then you can multiply 0.6 times 0.4 times 0.05, and you would get something like a 1% chance of there being a nuclear war that targets London. Though I think actually, on reflection, I would put it at under 1%, probably closer to one-third or one-fifth of a percent.
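To make the arithmetic of that decomposition concrete, here is a minimal sketch in Python using the rough numbers Peter throws out above; the variable names and structure are illustrative, not an actual model from the episode.

```python
# Sketch of the intentional-war pathway, using the rough numbers from the
# conversation (illustrative only, not a vetted model).
p_nato_russia_conflict = 0.05    # chance of a direct NATO-Russia conflict
p_nuclear_given_conflict = 0.40  # chance such a conflict goes nuclear
p_london_given_nuclear = 0.60    # chance London is among the targets

p_london_hit = p_nato_russia_conflict * p_nuclear_given_conflict * p_london_given_nuclear
print(f"{p_london_hit:.1%}")  # 1.2%, which Peter then revises down to roughly 0.2-0.33%
```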

Peter: And then you would also add in the accidental nuclear war side of things too, where I think luckily we haven’t had any major accidents since 1995. But it used to be the case that Russian sensors would detect weather events and mistakenly think they’re nuclear launches, or they’d detect peaceful rocket launches and mistake them for nuclear launches. And the same in the United States. Then you might trigger a nuclear war in response to this false alarm. But luckily, in every case that’s happened, we’ve successfully de-escalated and not launched a nuclear war. And so hopefully these accidents would continue to go down in frequency and also continue to not result in a nuclear exchange. And so you could quantify that: how many accidents have there been, how likely is there to be another accident in the future, and then how likely is that accident to escalate into a nuclear war response?

Michaël: So in terms of accidents, like the detection failures from mechanisms that try to detect nuclear threats, I think there have been like two in the past 50 years. At least that’s what I saw in the data on Metaculus. And so, if we take two in the past 50 years, it gives us a base rate of about 4% per year, but then I don’t know about the probability of it escalating into some kind of full-scale nuclear war.
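A similarly rough sketch of the accidental pathway, assuming the two-accidents-in-fifty-years figure mentioned here; the escalation and targeting numbers below are placeholders for illustration, not estimates from the episode.

```python
# Accidental pathway: annual rate of serious false alarms, times the chance
# one escalates to a launch, times the chance London is hit.
# The last two numbers are placeholder assumptions, not figures from the episode.
false_alarms, years_observed = 2, 50
p_false_alarm_per_year = false_alarms / years_observed  # 0.04, i.e. 4% per year

p_launch_given_alarm = 0.05   # placeholder
p_london_given_launch = 0.60  # placeholder, reusing the targeting figure above

p_london_hit_accidental = p_false_alarm_per_year * p_launch_given_alarm * p_london_given_launch
print(f"{p_london_hit_accidental:.2%}")  # 0.12% per year under these assumptions
```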

Peter: Yeah.

Michaël: I think my main disagreement with your take, that it would be hard for Russia to directly target London and civilians instead of military bases, is that for the past couple of months Putin has been very far from rational, and he has targeted civilians.

Peter: I think the question here, whether Putin is a rational actor, is definitely up for debate. I think there are definitely some ways you could see him as still a rational actor, and also many ways you can see him as not being a rational actor. So you might want to put some probability over those two scenarios, one where he’s rational and one where he’s not, and then assess the scenarios under each circumstance and integrate them together through multiplication.

Peter: And then of course, if he’s irrational, there might be a question of how irrational. Obviously, killing civilians indiscriminately is incredibly terrible, but nuking a major city would be a whole other level of terrible, and also a whole other level of provoking the end of Russia through a nuclear retaliation. I assume that Putin still doesn’t want to end all of Russia. In fact, he seems pretty obsessed with his own survival and the survival of the Russian state. I think the main reason he’d use nuclear weapons is if he thought that the Russian state or his own survival was at huge risk, which I think is why the United States is taking extreme pains to avoid making it look like they are trying to eliminate Russia or go to war with Russia directly.

Peter: Yeah. Like you said, I agree with you that you would have to take Putin’s mental state into account when assessing the risk of him launching nukes. And that you might end up with a higher estimate if you take that into account, especially if you really think that he’s an irrational actor.

Michaël: So the way forecasters would do it is, they have a decision scenario with two or three different cases. One you said is, Putin is willing to kill civilians by nuking an entire city. And then you have this hypothesis: is Putin willing to do such a terrible act, or is Putin below this level? Maybe that’s a better distinction than irrational versus rational, because it’s very hard to say anything about someone irrational.

Peter: Yeah. It definitely is difficult to predict irrational behavior, by definition. And in fact, if you’re a geopolitical actor, even if you are perfectly rational, you may want to pretend to be irrational. That’s the so-called madman theory, which I think even Richard Nixon tried at one point: you really make it look like you’d be ready to nuke anyone at any point, and as a result, people really, really try not to anger you and you get more of what you want. So Putin could be rational and appear irrational, or Putin could just be irrational. And so I guess that gets more into game theory type scenarios, where you try to figure out how you would deal with a less than rational actor.

Michaël: So there’s a possibility of Putin being rational but pretending to be irrational, which is rational in a sense, and a possibility of him being completely irrational.

Peter: Yeah, definitely.

Michaël: I’m curious, when you go on predicting things on Metaculus, do you often decompose into this kind of disjunctive scenario, or do you more often just assume something and then do conditional probabilities and multiply them all together?

Peter: No, we definitely would do the disjunctive scenario analysis I was describing. Where I would assign, here’s what I think the probabilities look like if Putin is a rational actor, and here’s what the probabilities look like if Putin is an irrational actor, and maybe there might be shades of irrationality. And then I would multiply each of those scenarios by the likelihood of Putin being in these various states and combine them. And so then you could get an overall forecast that incorporates your uncertainty around Putin’s mental state.
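A minimal sketch of that scenario-weighted approach, with entirely made-up numbers, just to show the mechanics of weighting each conditional estimate by the probability of its scenario.

```python
# Weight each conditional forecast by the probability of its scenario, then sum.
# All numbers are made up for illustration.
scenarios = {
    "rational":          (0.70, 0.002),  # (P(scenario), P(event | scenario))
    "mildly_irrational": (0.20, 0.010),
    "fully_irrational":  (0.10, 0.050),
}

p_event = sum(p_scenario * p_conditional for p_scenario, p_conditional in scenarios.values())
print(f"{p_event:.2%}")  # 0.84% under these made-up weights
```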

Michaël: I think this resonates with a lot of the work you might have done at Rethink Priorities, the nonprofit you’re a co-CEO of, where you tried to estimate the risk of nuclear war before Ukraine. Do you know what the risk of nuclear war is when we’re not in a cold war or in a war with Ukraine?

Peter: Yeah, definitely. Rethink Priorities is the research organization I run, and we’ve tried to assess nuclear risk. As an organization, we haven’t actually done any recent work since the Russia conflict started, so we haven’t had a chance to update our numbers officially. But I would say, unofficially, the Metaculus assessment looks pretty good and well reasoned to me. I might personally be inclined to think that things aren’t quite as scary as during the cold war, although I guess it depends on when in the cold war. We definitely enjoyed a relative peace between 2000 and 2020 after the fall of the Soviet Union, before the current conflicts started around 2014 and really hit a turning point now. So definitely things are riskier now than they used to be. But I think during the cold war, we had more direct proxy conflict between the United States and Russia than we do now, multiple different conflicts rather than just Ukraine, and higher stakes conflicts. The Cuban Missile Crisis certainly feels much higher stakes than the Ukraine conflict when it comes to nuclear risks.

Peter: And I suppose another thing, just from a forecasting standpoint: if we rewind the clock to the 1970s and 1980s, we just have much less data on nuclear weapons at that point. And I think things would’ve looked subjectively much scarier, because we would’ve survived far fewer years without conflict, and we would’ve had many more nuclear accident scenarios without having the foreknowledge that they would turn out okay. And so where we are now, we’ve had many more years of experience understanding that nuclear weapons have not yet come to destroy the world, and also we have much stronger norms against using nuclear weapons and better technology for nuclear weapon management. I would be more confident now than I would be in, say, the 1980s or especially the 1960s. Though of course, I’m definitely more scared today than I was five years ago.

Michaël: So are you saying that the fact that we have survived all those decades, and that we have had peace since the cold war ended like 30 years ago, is an element that makes you calmer about nuclear risk, because we’re more prepared, there’s more analysis, there are more conventions? Is that essentially what you’re saying?

Peter: Yeah. So I’m saying if I was sitting in this chair in 1980, I guess we wouldn’t have podcasts then, but maybe you and I would be chatting over the phone or via letters or something, and I would’ve seen that we would’ve had seven accidental nuclear incidents in just the past 30 years. I would see the Soviets invading Afghanistan, and I would see Reagan “vowing” to “confront the Soviets everywhere.” And I would see Soviet nuclear stockpiles at an all time high, with total stockpiles of nukes six times larger than they are today.

Peter: I would be a lot more afraid of accidental or intentional nuclear war in the 1980s than I am now, having seen us successfully navigate the ’80s, having seen a ton of nuclear disarmament, having seen stronger norms against nuclear weapons as a geopolitical tool. Of course, that doesn’t mean there’s no risk. I mean, we’ve been trying to assess the risk on this podcast and in other articles. And I definitely miss the relative peace of the 2010s, for example, or at least the early 2010s before Crimea. But I think things still just don’t look nearly as bad as 1980 or 1960.

Michaël: Got you. So, disarmament.

Peter: Yeah. Disarmament seems like a big factor.

Michaël: Stronger norms, and then the base rate when you’re in the 1980s is based on maybe 40 years of data, and now we have like 80 years?

Peter: Yeah. Certainly observing 40 additional years of no nuclear destruction should add a lot more data to any model that would make you feel more confident in avoiding disaster.
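One simple model that captures this intuition is Laplace’s rule of succession, under which each additional catastrophe-free year lowers the estimated annual risk; this is just an illustration, not necessarily the model either speaker has in mind.

```python
# Laplace's rule of succession: estimated annual probability of catastrophe
# after `n` consecutive catastrophe-free years is 1 / (n + 2).
def laplace_annual_risk(catastrophe_free_years: int) -> float:
    return 1 / (catastrophe_free_years + 2)

print(f"{laplace_annual_risk(40):.2%}")  # ~2.4% per year with ~40 years of data (the 1980s)
print(f"{laplace_annual_risk(80):.2%}")  # ~1.2% per year with ~80 years of data (today)
```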

Michaël: One of the things Rethink Priorities, the charity you’re running, is doing is helping create tournaments for assessing nuclear risk. I think you’re helping with this nuclear risk forecasting tournament?

Peter: Yeah. We had the unfortunate foresight to run a Nuclear Risk Tournament prior to the current Ukraine situation.

Michaël: What kind of insights did you get from this tournament?

Peter: I think we’re still assessing some of the data that’s come in, and also many questions are still yet to resolve, so it might be a little too early to say. And then of course, all the assessments need to be updated for our current geopolitical situation, which hopefully forecasters are doing. But I think it was really helpful for us to be able to just break down our analysis into a bunch of forecastable questions and then get public input on a variety of these questions. And then we can piece the questions back together to create more overall assessments.

Michaël: So, if I want to predict things now, are there still questions I can predict and win points on?

Peter: Yeah. There should be. You should be able to go to the Nuclear Risk Tournament on Metaculus, and view the open questions and predict on those questions.

Michaël: Yeah. Awesome. I’ll do that. I’m kind of curious, because you said that you didn’t have a chance to update much of your probabilities since Luisa Rodriguez’s report a couple years back. Is there any team working on research on nuclear risk and calibrating those estimates, and are there any follow-up reports or ongoing research?

Peter: Yeah. So Luisa Rodriguez was a researcher at Rethink Priorities, and she led our nuclear risk research and published several articles to that effect on the Effective Altruism Forum. Since then, we hired Michael Aird as a researcher. He’s now a senior research manager at Rethink Priorities. And he spent another year or so doing more research into nuclear weapons, including building that Metaculus tournament.

Peter: But ultimately, as a strategy decision, we decided to pivot our team away from researching nuclear risk and instead towards researching AI governance and strategy. And so Michael Aird now leads an AI governance and strategy team, and we’re not currently doing any nuclear weapons research. And unfortunately, due to that pivot, we did not finish polishing and publishing a lot of our work. But given that there seems to be a lot of interest and demand for this work, Michael Aird, on his personal time, has been publishing some of his drafts in a more notes-like format that doesn’t quite meet our normal quality standards, but we decided it was better to publish something than not publish anything at all. So you can actually start to see some of Michael’s output on nuclear risk on the EA Forum right now. But now our team is focused on AI governance and strategy.

Michaël: Yeah. That’s a bit unfortunate for nuclear risk, but as this is an AI podcast, we approve of this.

Peter: Yeah. Does help me get on your podcast.

Michaël: Yeah. I’m curious, what kind of work is the longtermism team that you run and manage doing at the moment? You mentioned AI governance, is there any other work that is being done?

Peter: So we have an AI governance team, run by Michael Aird and also Amanda El-Dakhakhni. And then we have a second team in our longtermism department, run by Linch Zhang. And that second team is focused right now on trying to figure out how to make the best use of this very surprisingly massive amount of money available for people interested in longtermism, from like Open Philanthropy, the FTX Future Fund, and other possible funders. So we want to try to research different project ideas that might make use of large amounts of money, and then directly try to get them started based on our research. So we’re taking a research and incubation approach. So those are our two teams.

Michaël: You cannot say a surprisingly large amount of money without saying the actual amount, otherwise you’re just bluffing.

Peter: The FTX Future Fund already announced publicly that they’re planning to spend somewhere between $100 million and $1 billion a year. And then Open Philanthropy, I think, also would have the capacity to donate several hundred million dollars a year. And then there certainly would be other donors too. Like, I personally have no idea what Elon Musk is up to, but he already contributed billions of dollars to be donated in the near future, though I’m not exactly sure what his philanthropic priorities are. And there are definitely some other large players out there as well. So the total amount of money available is certainly a billion a year or greater.

Michaël: If Elon Musk has two billion dollars just to get a new edit button on Twitter, he might have some money for the long-term future.

Peter: I would suspect and hope so. But there’s certainly a lot of money that we don’t necessarily know how to spend right away. So hopefully our team will learn how to spend it, with one team focused on trying to just find highly scalable opportunities, and another team trying to figure out how to spend the money specifically on AI governance and strategy issues to move the needle on this very important priority.

Michaël: Right. I think I misinterpreted the goal of the team as, your org had a massive amount of money and you were trying to spend it correctly, but no, the work is to estimate how Open Phil or FTX should spend their money.

Peter: Yeah, definitely. We’re not lucky enough to have the money sitting in our own organization’s bank account. That certainly would be a very different scenario. But we are lucky enough to have good connections with Open Philanthropy and FTX, where I think they would be willing to listen to our ideas and at least seriously consider them. And so I guess we consider ourselves more philanthropy advisors than actual grantmakers.

Michaël: Have you come across the patient philanthropy research by Philip Trammell? He was on the podcast just a couple of episodes ago.

Peter: Oh, cool. Yeah. I’ve definitely heard of some of his work, though I can’t actually say that I fully understand it.

Michaël: I know he’s done some work on, if you have billions of dollars, whether it’s optimal to spend it within a century. Though, if I remember correctly, you wrote about how you read Holden Karnofsky’s sequence on The Most Important Century, and how all the possible futures for humanity are wild, I think it’s called All Possible Views About Humanity’s Future Are Wild. I’m curious if you updated on that, on the necessity of spending your money in this century and not later?

Peter: Yeah. I did read his sequence on The Most Important Century, and I actually found it very persuasive personally. I guess I don’t exactly know what Phil would say if he and I were talking, and I think there’s a very strong chance I just misunderstand what his views are. But if patient philanthropy involves merely saving the money in a bank account and waiting till later, it doesn’t seem like a good strategy to me, for two reasons. One is that, like Holden said, I think there’s a very hard to dismiss chance that we would see very transformative change in this century, especially from artificial intelligence, but potentially also from nuclear weapons or other risks as well, and that we should take decisive action to try to mitigate those risks now while we still have a chance.

Peter: And then secondly, I would think, even if it turns out that this is not the most important century and money would actually be better spent later, I still think we need to be spending now to build capacity, so that once crunch time does arise, we would have all the researchers already employed and ready to deploy the capital. We would have experience making grants, and have all the infrastructure needed to identify promising opportunities and move large amounts of money to those opportunities. So hopefully we’d still be spending multiple hundreds of millions of dollars a year just preparing for the most important century, whenever that takes place. I think between those two considerations, I’m pretty inclined to try to spend as much money now as we can, subject to some cost-effectiveness bar.

Michaël: I just wanted to start by adding a disclaimer that I haven’t read everything he wrote about patient philanthropy, and I don’t fully know the details of the model, so I’m kind of an amateur here.

Peter: Same.

Michaël: But from what I understood, the distinction is not really between this century and the next one, it’s more like 20 years, 50 years, 70 years, and what the perfect portfolio for your spending is. I think I was the one who pushed you in the direction of this century versus the next one, so I’m the culprit here. And then, I think there’s a distinction between putting your money in your bank account and investing. And I think there’s something about investing in something that gets 8% interest a year, where it could double over the decades, and then you could get a large amount of interest.

Michaël: And also, another comment I’ve seen is that you should treat investing as a very broad category and not just financial investing, because of what you mentioned about creating meta structures. So essentially there’s a difference between investing your money into something and doing direct work. It’s between spending, let’s say, one million on AI strategy research today, versus investing in building better labs or something, and having a better community. And so I think your idea of building meta structures would be considered investing.

Peter: Okay. Yeah. That makes sense. Because I think one really high-return thing might be just trying to grow the community of people interested in these issues, and that probably would exceed standard market returns. And yeah, I wouldn’t be surprised if Phil was listening to this podcast and thinking, you idiot, clearly you have no idea what I’m talking about. I think that’s incredibly likely, because I haven’t really spent that much time thinking about his work. My main goal, though, was just to try to come up with great opportunities to deploy as much money as fast as possible, subject to some cost-effectiveness bar, like maybe being eight times better than GiveDirectly or something.

Michaël: Definitely. And I think if you have medium to short timelines in AI, you could even think that spending money to have more people do PhDs, reaching their peak research productivity maybe five or ten years after the PhD, is maybe a waste of money, and that maybe you could just throw a bunch of money at researchers that are already at their peak and doing direct work now, instead of planning for 15 or 20 years, depending on how short your timelines are. But I think the optimal amount of spending resonates with another one of your posts I’ve seen, which is about the scalability of EA orgs. I think you wrote something on the Effective Altruism Forum on why nonprofits should be scalable. Could you summarize the post for people who haven’t read it?

Peter: Yeah. So I think the basic summary of the idea is that, if there is a less cost-effective opportunity, but it’s more scalable, like it can take on more money, it actually frequently can be better to fund that than to fund a more cost-effective but less scalable opportunity. The reason being that there’s a ton of money available to fund stuff, and there are analysis costs in figuring out what to fund and how to fund it. And if you can spend the same amount of analysis to come up with these really scalable opportunities, assuming they meet some cost-effectiveness bar, it’ll just use up the entire EA portfolio much faster. And this assumes that using up the portfolio faster is preferable to saving it in a bank account or something like that. So that motivates a lot of my current focus on highly scalable projects, as well as growing Rethink Priorities as quickly as is feasible.
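A toy comparison of that idea, with invented figures: if each opportunity costs roughly the same amount of analysis time to vet, a less cost-effective but far more scalable project can still deploy much more capital and total value per unit of vetting, as long as it clears the bar.

```python
# Toy comparison: value deployed per vetted opportunity. All figures are invented.
opportunities = [
    # (name, value created per dollar, dollars it can absorb)
    ("small, very cost-effective project", 10.0, 1_000_000),
    ("scalable project above the bar",      3.0, 50_000_000),
]

for name, value_per_dollar, capacity in opportunities:
    print(f"{name}: absorbs ${capacity:,}, total value {value_per_dollar * capacity:,.0f}")
# The scalable project absorbs 50x the capital and creates ~15x the total value
# for roughly the same analysis cost.
```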

Michaël: How do you compare the different burn rates between funding two or three projects and funding one mega project? Why would funding three different projects burn more money?

Peter: Why would there be more analysis costs?

Michaël: Oh, so the analysis cost is from the granting, right?

Peter: Yeah. It would come from both the EA research time needed to identify the opportunity and vet the opportunity, then it would come from the cost of actually transferring the funds, which I assume are much, much cheaper than the costs of researching and identifying the opportunity.

Michaël: Right. So the main point is that having one project would cost less time to research and evaluate at the beginning?

Peter: Yeah. Then I think another important aspect is things other than money that projects might take up, such as highly talented people who are in short supply. So a highly scalable project might also be something where you can do a lot of good even without a lot of highly talented effective altruists. Using up other scarce resources in addition to capital is just about making the portfolio do more, without having all of our resources just sitting and waiting.

Michaël: Do you think we’re bottlenecked on…

Peter: Yeah. I think we are currently only donating a very small percent of the overall EA portfolio like monetarily. And I think the money in the EA portfolio is actually growing faster than we can currently spend it. So it definitely doesn’t seem like money is the core bottleneck at least for now. What instead would be a bottleneck would be like available researchers with sufficient talent and sufficient mission alignment to identify great opportunities to fund, and also sufficiently talented people to run those great opportunities. So I think like personnel is definitely scarce here.

Michaël: What about the cost of having one big organization where everything is kind of slow? Between having one organization with, I don’t know, 100 people, and five startups that are like 20 people each?

Peter: Yeah. I mean, I’m certainly not advocating having only one organization. You probably would want a mix of both, maybe some scalable efforts on AI combined with some small AI research outfits. And it’s certainly possible that a less scalable opportunity might still be overall more cost-effective just because it’s so much better per dollar. But I think there are important returns from scalable organizations too, such as: they use up more capital, so you’re putting more capital to work instead of saving it. And then there can also frequently be economies of scale. Large organizations just have the resources to do more things: you can collaborate with more people, do bigger things, run things more efficiently through consolidation. So there are certainly returns to scale as well. Though I do worry about bureaucracy and other ways that big organizations might be slow and not work as well. So I think you need to intentionally design with that in mind, as well as have some more nimble opportunities too.

Michaël: Is that something you try to apply at Rethink Priorities?

Peter: Yeah. I’m definitely trying to intentionally grow Rethink Priorities to be quite large, because I think there are a lot of important research questions to answer, and a lot of people who could be good researchers if given the proper training and opportunities. And so I’d like to grow Rethink Priorities to take on more early-career researchers, mentor them, and scale them up to work on important questions. I’m definitely really excited to grow Rethink Priorities in that way. But I do want to be really mindful of ways in which our culture might break down or things might get a lot worse by being larger. And I think I’ve seen other organizations fail in that way. So that’s something we’re definitely being very mindful of.

Michaël: Do you want to talk a little bit about the different roles that you’re hiring for now, and what kind of people you’re expecting to apply for those roles?

Peter: Yeah. So if you’re listening to this podcast before April 17th, 2022, we should have plenty of roles open, and I’m going to talk about those. It’s also possible, if you’re listening to this much later, that we might still be hiring. So definitely feel free to check out our website at rethinkpriorities.org, hover over About Us, and click on Career Opportunities. You can learn about all our openings, or follow us on social media. As for the current openings as of the recording of this podcast, I’m definitely very excited to offer opportunities on a lot of our teams. So on our AI governance and strategy team that I was talking about, we’re looking for research fellows, which is a position where you try out being a researcher for three to five months, and you decide if it’s something that you like, and we decide if it’s something we think you’re really good at.

Peter: And I think in a lot of cases we really would try to get every fellow a permanent research position at Rethink Priorities or at another organization, assuming that’s something they want. So I’m excited for the fellowship to just open up doors to a lot of people who want to try research. And then we’re also hiring for research assistants. These are permanent positions, but they’re more in a role where you’re assisting another researcher, as opposed to doing a lot of research yourself. And this could be another great way to learn how to do research and eventually become a researcher yourself, if that’s something you want, where you get an opportunity to be in the environment and see a lot of the work.

Peter: And then in addition to assistants and fellows on the AI governance and strategy team, we’re doing the same for the team that’s focused on building large projects. So you could help our team decide what opportunities to try to go after and how to approach them. It’s our general longtermism team, though I think it really needs a better name. And so we’re hiring in that area too. And then, to facilitate a lot of this, we’re hiring on the operations side as well. So if there are any people listening to your podcast who don’t want to do research but like building organizations, they could apply to our operations roles, including our new special projects department, which is going to be doing the actual nitty-gritty work of building some of these opportunities from the ground up. So I think that could be a really exciting opportunity as well.

Michaël: What kind of opportunities do you plan to build from the ground up?

Peter: Yeah. So, we’re still at the idea stage right now, but one thing we wrote about on the EA Forum, that we’d be interested in more feedback on, would be a much bigger forecasting center to provide more early warning capability. It’s kind of expanding some of the forecasting I was talking about earlier: really trying to figure out what the current risks are and how likely they are, and also being able to send early alerts when they become more likely, so that we could take quicker action. I think this might be especially useful for pandemics or other things that might escalate rapidly.

Peter: And so that’s something we’ve written about. And then we have some ideas that we’re in the process of writing up to get more feedback, and then ultimately we’ll pick, say, two or three ideas and launch them and try them out, just seeing if we can build some scalable opportunities to use longtermism money. I was also going to plug one last role, which is: if you like me personally and you want to help me out, I’m looking to get a research assistant for myself, to help me with various tasks and projects. And that could be a great opportunity to see Rethink Priorities from a high level, and get personal mentorship and support from me. I think that could also be a great stepping-stone role for someone looking to get involved in research.

Michaël: So, if you like Peter from this podcast, just apply to be his personal research assistant.

Peter: Yeah. I’m excited.

Michaël: I’m kind of curious about the AI governance research role, because I don’t think you’ve mentioned what someone in this role researches on a day-to-day basis. Would it be writing a report on what optimal AI governance would look like in the next six months or a year? What outputs are you expecting from this? What kind of analysis are you trying to get?

Peter: Yeah. I mean, we work for stakeholders that are looking to us for specific research projects to inform their grantmaking. Some of them are high-level projects, such as figuring out different intermediate goals that we may have, and trying to map the landscape of possible interventions that could be funded. So for example, an intermediate goal might be to improve skilled immigration to the United States, with the hope that that helps bolster AI talent in the United States. But we’re also interested in more specific projects as well, such as looking at how a whistleblowing system might work for AI, or looking at ways AI might deceive people when it’s deployed and how we might mitigate that, or other research topics like those. So there’s definitely a mix of high-level and highly concrete topics.

Michaël: I don’t fully get the whistleblowing part. Do we need an Edward Snowden for AI safety? What’s going on there?

Peter: So it would be kind of like an Edward Snowden for AI risk, but one where he gets rewarded and the US government would be excited about it, as opposed to one where he has to flee the country.

Michaël: Is it like artificial general intelligence, or more like misuse of AI in the near-term sense, where one could create deepfakes or launch military drones?

Peter: Yeah. All the research that we do is focused on the long-term impacts of more transformative AI systems, including artificial general intelligence. In our research, we’re not concerned with deepfakes or other near-term outcomes, except insofar as they help us understand and mitigate more existential-level or transformative outcomes.

Michaël: That’s awesome. I think that’s a great sentence to end the podcast on. So you’ve heard Peter Wildeford on the podcast. If you want to be his personal assistant and learn a lot, you can apply for that or different positions on AI governance.

Peter: Yeah. I’m definitely really excited.

Michaël: Thanks Peter for being here. And I hope I talk to you again soon.

Peter: Yeah, me too. Thanks for having me on the show.