CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 071: How to use your career to have a large impact (with Ben Todd)

September 16, 2021

What is 80,000 Hours, and why is it so important? Does doing the most good in the world require being completely selfless and altruistic? What are the career factors that contribute to impactfulness? How should people choose among the various problem areas on which they could work? What sorts of long-term AI outcomes are possible (besides merely apocalyptic scenarios), and why is it so important to get AI right? How much should we value future generations? How much should we be worried about catastrophic and/or existential risks? Has the 80,000 Hours organization shifted its emphasis over time to longer-term causes? How many resources should we devote to meta-research into discovering and rating the relative importance of various problems? How important is personal fit in considering a career?

Ben Todd is the CEO and cofounder of 80,000 Hours, a non-profit that has reached millions of people and helped 1000+ people find careers tackling the world's most pressing problems. He helped to start the effective altruism movement in Oxford in 2011. He's the author of the 80,000 Hours Career Guide and Key Ideas series. Find out more about Ben at benjamintodd.org.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast. And I'm so glad you've joined us today. In this episode, Spencer speaks with Ben Todd about choosing a career path for societal impact, choosing solutions to global issues, and risk-taking in charity.

SPENCER: Ben, welcome. Thanks for coming on.

BEN: Hey, Spencer, thanks for having me.

SPENCER: So what is 80,000 hours? And why do you think it's so important?

BEN: So 80,000 Hours helps people find careers addressing the world's most pressing problems. And yeah, we provide free research and support to help people switch into higher impact careers.

SPENCER: So if someone has a lot of talent, a lot of skill, and they want to figure out well, how do they actually help the world with their career, you're the group that they should come to help them figure that out. Is that right?

BEN: Yeah, that's the aim. There are a lot of people who care about doing something meaningful with their careers, doing something that contributes to the world. But it's often quite hard to get advice that actually helps you compare different paths you might take in terms of how much impact you could have.

SPENCER: So why careers? Why, you know, why did you decide to focus on that?

BEN: Yeah, so the secret is in the name. So 80,000 hours is roughly how many hours you have in your career. So that's about 40 hours a week, 45 weeks a year, over roughly 45 years. And the idea there is, if you're interested in how can I have a good life and an ethical life, how can I do something important, your career is actually the biggest resource you have to contribute to the world in general.
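The arithmetic behind the name works out like this (a quick sketch; the figures Ben cites are round approximations):

```python
# Rough arithmetic behind the name "80,000 Hours".
# All figures are the round approximations mentioned in the episode.
hours_per_week = 40
weeks_per_year = 45
years_in_career = 45

career_hours = hours_per_week * weeks_per_year * years_in_career
print(career_hours)  # 81000, i.e. roughly 80,000 hours
```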

SPENCER: So I think a lot of people, when they think about doing good, I think of it as something they kind of do on the side, right? Like maybe they'll give some donations, maybe they'll volunteer, or do kind of a side project. And I think what you're suggesting is that if you think of your career as sort of a mechanism for doing good, you may actually have a much higher impact than if you view doing good as sort of this thing on top, or is that the thing? Is that right?

BEN: Yeah, exactly. And I think when people think about doing good or having an ethical life, we often think about things like recycling, or fair trade, or you know, maybe think about, like being really honest, and kind of being like a virtuous person. But I actually think the most important decision that should be on our minds is like, what are you actually going to do with your career because that's the thing that will determine your social and ethical impact on the world to the biggest extent possible.

SPENCER: Right. Let's see. It seems like a lot of times, when people are thinking about their careers, they're really thinking about: what am I passionate about? What lifestyle do I want to lead? How do you think about this idea of impact in conjunction with these other ways people think about choosing their career?

BEN: Yeah, so it's definitely true that your career needs to serve several different goals, just because of how much time we're going to be spending on it, so it's important to find something personally satisfying that will also pay the bills. And that was definitely one of our aims when we started 80,000 Hours. But many people also want to make a contribution with their career and life in general. And our big point is that exactly how you spend those working hours, for many people, especially if you're fortunate enough to have options about which paths you might go down, is likely to be your biggest lever for effecting change. So it's worth really taking it seriously on that dimension, as well as the other ones that matter.

SPENCER: So if you encounter someone who's, let's say, really talented, but they just don't care that much about having a large impact, do you just kind of view them as not the audience that you're addressing? And really, sort of the starting assumption is, you want to have an impact in your life, you care about that. Okay. Now, what do you do?

BEN: Yeah, exactly. We think there is a significant number of people out there who do already want to have an impact, and we can do a huge amount of good by helping give them better information about which paths are effective and supporting them in doing that. And we think we could get many thousands more people working on addressing pressing social problems by doing that. I do think, in the longer term, we would also just like to get more people focused on impact in general. And we have found that our advice does often get people more concerned with social impact in the first place. One big reason is just that often people aren't really sure how they could actually have an impact. And if they see that there are routes open to them that they weren't aware of, or maybe they come to believe they can actually have more impact than they first thought, then they often end up prioritizing it more than they did before. But just trying to persuade people to be more focused on impact in general is not our current focus.

SPENCER: Is it a common experience you have that people sort of have self-doubt around their ability to impact the world, and that's actually part of what you have to address? Sort of either imposter syndrome or just a sense of, well, who am I to actually change anything about the world?

BEN: Yeah, and I mean, imposter syndrome is very, very common in general, among college students and among many successful people, so that definitely comes up. I would say maybe the even more common thing is just a sense of everything being very, very uncertain. Like, you just have all these options, and people are wrestling and grinding away, and they just feel really unsure about which one is best and what they'll be like. And it's that real feeling of uncertainty that I think is one of the biggest things that people get challenged with.

SPENCER: So what do you think that uncertainty is born out of just a huge range of possible options or feeling like they haven't explored enough to the world yet? Or something else?

BEN: Yeah, no, I think both of those things. With the variation in things that can happen in different paths, there's often just a pretty big spread of outcomes, and you have to make these decisions without having that much information. And in many cases, you've just come out of college; you haven't worked in many careers, so it's quite hard to know what they're going to be like ahead of time.

SPENCER: Got it. And so, do you want to walk us through the framework that you've developed, 80,000 hours?

BEN: Yeah, so one of our biggest ideas is: you really have a lot of time, so it's really worth thinking carefully about how best to use it. But also, we think that there's more variation between different options than people realize. Some of the options open to you probably have a lot more impact than others, and people haven't already factored that into their decisions. And so our kind of promise is that if people could find these higher-impact options, they might be able to both do a lot more for the world while also finding something that's more satisfying than what they were going to do before. And that's a big part of what motivates us.

SPENCER: When you say more satisfying, do you mean because it will be more imbued with meaning than what they would have done otherwise?

BEN: So it could be through several routes. Like, if you just imagine that you could find a career path that's 100 times, or even just ten times, higher impact than what you knew about before, then that means you could, say, spend ten years doing the impactful path and achieve as much impact as would have taken people similar to you 100 or 1,000 years beforehand. And then you could just spend the next 30 years of your career doing whatever you like; you could go and spend your time on a beach or do whatever is most personally satisfying. And so it seems possible that, over the 40-year period, you could optimize for both goals better than you were able to before.
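The two-stage strategy Ben describes can be sketched numerically (a toy illustration; the multiplier and year counts are hypothetical, not real estimates):

```python
# Toy sketch of the "two-stage" career strategy described above.
# Assumes a hypothetical path with 100x the impact of a baseline path.
baseline_impact_per_year = 1
multiplier = 100  # hypothetical: high-impact path vs. the paths you knew before

years_on_impactful_path = 10
impact = years_on_impactful_path * multiplier * baseline_impact_per_year
print(impact)  # 1000 -- as much as 1,000 years on the baseline path

# The remaining ~30 years of a roughly 40-year career are then free
# to optimize for personal satisfaction instead.
```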

SPENCER: You're saying that if you care about both doing good and optimizing for other things, like personal satisfaction, or hedonism, or whatever it is you're optimizing for, you can kind of get a better joint optimization by thinking this way.

BEN: Exactly. Like if you can just find an option that suddenly achieves one of your goals way more effectively than before, then you can get to a better optimum between the two. And then, more practically, that could be because you get to do this two-stage strategy that I was just mentioning. There are other routes to it too, such as: having more impact, people often find that intrinsically rewarding in itself. So that's another way in which it can just boost your personal satisfaction as well.

SPENCER: Sometimes people talk about the happiness of helping others. And that being one of the reasons to do good. How do you feel about that as a motivator?

BEN: Yeah, I feel good about it. Yeah, there's definitely this idea out there that we almost kind of see altruism as inherently about self-sacrifice, that it's not actually altruistic unless it costs you personally. But my attitude is more like, if we can find things that make our readers happier and have more impact, then that's just even better.

SPENCER: I guess there's a danger there, which is, if you're trying to optimize for the kind of doing good that makes you happiest, it's probably not the most impactful kind of doing good. Would you say that's true?

BEN: Yeah. So I think there's less trade-off between personal satisfaction and impact than people first think in these kinds of general discussions. But you do have trade-offs at the end of the day. So I'm not saying that the thing that's best from a kind of narrow personal happiness point of view is also going to be the thing that's best for your impartial impact on the world. At some point, you'll have to wrestle with a difficult moral decision about which one you want to focus on the most.

SPENCER: It is really nice, and I don't know if coincidence is the right word, or just a truth about the world, that doing good does feel good. Like, you could imagine a world where that's not true. Most humans, not all humans, but most humans have a psychology where, if you know you've had a positive impact, that feels really good. And that's a pretty awesome thing about humans; we'd be in a pretty desolate place if that weren't the case.

BEN: Yeah, I guess that's part of our wiring to help us cooperate. But yeah, I think it's not only that effect that leads to this convergence. There are other big factors, like we're also very motivated and satisfied by a kind of sense of craftsmanship, or a sense of being competent at an important skill. And if you want to have a big impact, it's also useful to have valuable skills. And so, by focusing on getting good at something, you can often find something that's both inherently satisfying and good for your impact, and also kind of good for your credentials, what we would call your career capital.

SPENCER: Got it. So in practice, doing good tends to combine things like honing your skills at something that's useful or valuable, building career capital or getting credentials, and also simultaneously doing something that feels meaningful. So there are all these kind of nice things that come along with it. It's not just like, oh, I'm just sacrificing myself and becoming socially ostracized or whatever.

BEN: Yeah, I mean, even just on the level of: if you don't, at least to some degree, enjoy your work, then it's harder to stick with it for a long time. And if you don't stick with it, then you'll also have less impact, especially because most people only hit their peak productivity between, kind of, ages 40 and 60. So a big idea is that it's really important to play the long game with your career. And I think, when you get into that mindset, when you're thinking, "I want to find things that I can stick with for potentially decades," then how rewarding you find it personally becomes an important factor in itself, just for your impact.

SPENCER: That's a really interesting point. And is there a reason that a lot of the impact comes from, let's say, ages 40 to 60? Is it just because people tend to be at the peak of their career at that point, where they have sort of the most clout, the most influence, that kind of thing?

BEN: Yeah, it depends a bit on the career. So, in general, the model is, unfortunately (we're well into this process now), our fluid intelligence probably starts declining in our early 20s. But then, on the other hand, you're learning skills, so there's that whole thing of, you need to do a lot of deliberate practice to become good at skills, so that's generally building up. You're getting more knowledge of the world, and you're building connections; I think in a lot of careers, like politics, it's very important to have a lot of connections. So you've got some factors that are declining, but there are other ones that are increasing over time. And for many careers, the sweet spot is often around age 45; I think that's when many people's incomes peak in the US. But it varies a lot by career. So politicians, their peak is often in their 60s, or, you know, we've seen even recently, it's quite hard to become president unless you're in your 70s. And that's maybe just because those connections need that long to build up, whereas mathematicians and lyric poets often peak in their 20s, and that's because really abstract, really quick reasoning, fluid intelligence, seems more important in those careers.

SPENCER: Hmm, that's very interesting. So do you want to walk us through the framework that you all developed?

BEN: Okay, so why do careers make such an impact? We often focus on four key factors that drive your impact. The first one is how pressing the problem is that you focus on. The second is how effective the solution is to the problem you're working on. The third is how much leverage you have in that career path. The fourth is your personal fit with that career path. We think there's variation in each of the four factors, so even if you can improve on only one of them by finding a better option, you can increase your impact a lot.

SPENCER: Okay, so let's go into each of these one by one and kind of explore them. So the first is the problem. Do you wanna walk us through that?

BEN: Yeah. So we actually think this one is probably the most important factor. And it's also the one that's like least discussed in normal social impact careers advice. And so that's like, what issue or problem or area you address in the first place. So that could be something like education, or health, or climate change, or homelessness, or pandemics, which kind of broad areas you're trying to address in your career.

SPENCER: So how would you think about which problem to work on? I mean, obviously, it's a really big and meaty question. But I'm just curious if you can give some pointers there.

BEN: Yeah, totally. And it's definitely difficult to compare them. And I would say the conventional advice on this is just: don't really try; just choose one that you're passionate about, because then you'll be motivated, and that's the best we can do. But I think if you actually start trying to compare just a little bit, it turns out there are actually some pretty big differences between different options. And I'm guessing many of your listeners are already familiar with the framework that gets used in effective altruism, which is how important the issue is, how neglected it is, and how tractable it is. That's one way you can start comparing different problems, and I can show you some data that there are pretty big differences.

SPENCER: Right. And so I want to go back to this kind of typical advice that people give, which is like, just pick something you're passionate about changing in the world, or a problem that's personally meaningful to you. Why do you think it is that people are so often giving that advice, rather than, hey, try to think about which problem actually should be worked on, or where you could have the highest leverage?

BEN: Yeah, so there's a pretty good idea behind that advice, which kind of goes back to what we were saying earlier: in order to have a big impact, you probably want to find something that you're good at and that you're going to stick with for a long time. So basically, finding the thing that really, really motivates you, or that you're really, really interested in, is probably a key thing to focus on. And in our framework, that would come under personal fit, which is the fourth factor I mentioned. That's one of the things that drives your personal fit.

SPENCER: So others might be giving it the first slot, where you're giving it maybe the fourth slot, or something like that.

BEN: Yeah, I mean, I wouldn't necessarily say they're in order. Well, I mean, I'd say the problem selection one is the biggest. So yeah, actually, I would say it often is bigger than personal fit. So generally, what happens with our advice is people narrow down to some kind of plausibly high-impact paths, and then they choose between those mostly based on personal fit. So that's kind of how it often shakes out in practice.

SPENCER: I see. So you start by pre-selecting potentially high-impact areas, and then ask which of these you're most excited about. Now we're already in the realm of things that are worth working on, hopefully.

BEN: Yes, though I do think there can be some situations where, if someone was just going to be really amazing at something, even if it wasn't clear how it would have much impact, it might well still be worth pursuing, partly just because you'll get so much career capital from that path. And, you know, just generally, being good at things gives you lots of options. So an example I came across recently is Isodope, which was set up by someone who was a fashion model. Then she got really concerned about climate change, and she realized that lots of scientists she spoke to said, actually, nuclear energy seems like an important part of the solution, but no one is talking about it, or, you know, it's really unpopular. So she decided to totally pivot her social media presence to advocating for nuclear energy and trying to make it popular. And this is an example where you wouldn't think that being a fashion model is a high-impact path, but if you can build a big social media following, you can often use it to promote important things later.

SPENCER: So yeah, I mean, imagine if, you know, Magnus Carlsen wanted career advice; you wouldn't tell him to stop playing chess, right? Like, presumably, it's through chess he's gonna have the biggest impact, if he figures out a way to leverage his clout and fame, that kind of thing.

BEN: Yeah, exactly.

SPENCER: Going back to this thing: why do people give this advice about following your passion, following your interests, even if you want to have an impact, right? So obviously, there's a whole separate side reason to do it, which is you just might enjoy your life more. But even if you want to have an impact, it seems to me like, yes, part of it is that people think you'll just stop doing something that you're not passionate about. But it seems to me that maybe it's also wrapped up in sort of the way people think about doing good, like, something is more good if it's something you really care about, or it really touches your soul, or something like this. What do you think about that?

BEN: Yeah, I think there's something in that. One of our big concepts, in a way, is just trying to get people to actually think about outcomes a bit more. I agree there is a kind of strand of, yeah, it's almost like a type of personal expression or something like that. But I guess maybe one of the biggest reasons, and in a sense the thing that we're betting on, is that a big reason why people don't focus more on the other factors is just that they don't know how to compare based on these factors. And if they were just aware of how much variation there can be, then more people would factor these other things in. It's just that right now, it seems to people that it's not possible to compare problem areas, and so falling back on "okay, at least do something you're motivated by" is actually the best they can do. But our hope is, by getting these frameworks and ideas out there, many more people will start to factor them into their decisions.

SPENCER: Right, maybe if everything is just broadly good in terms of helping the environment or working on poverty, or trying to help animals or whatever, if everything sort of seems about equal, maybe following your passion makes sense. Whereas if you think, "Oh, wait, if I choose the right problem area, maybe that's, I could have 100 times more impact" or something like that, then suddenly, maybe that actually starts to make a lot more sense in terms of your prioritization. But people aren't necessarily thinking that way. They don't necessarily believe that there's this huge difference in impact.

BEN: Yeah, exactly. I mean, you could imagine if we had a kind of idealized economy, where all of the externalities were internalized, then you would just be able to do the thing that is your greatest strength; you could almost just follow the thing that makes you earn the most money, and then that would also optimize for impact. But that's not at all the world we find ourselves in. It seems like actually there are big differences between different paths and ways of doing good. And yeah, like you were saying, I think you had Stefan on your podcast, and he was talking about these surveys we've done of how much people think different charities differ in the cost to save a life. Typically, people think that the best charities in the space are about 50% more cost-effective than the average. But then if you ask a bunch of global health experts how big they think the differences are, they say they think it's actually around 100-fold, and some of the experts in the survey even said 10,000-fold, between the average and the most cost-effective.

SPENCER: That's pretty wild.

BEN: Yeah. And this is just not at all integrated into the common-sense view of doing good. But you know, if you thought there were only 50% differences between options, then yeah, just focusing on the thing where you'll be most productive, most motivated would make total sense.

[promo break]

SPENCER: Okay, so we talked about the problem, which is kind of part one. Let's go into the solution and how you choose which solution to work on.

BEN: Yeah, I mean, I suppose with this choice of problems, we didn't actually really talk about how to compare problems in terms of impact or why we might think there are big differences.

SPENCER: So why do you think that problems vary so much in the impact they can have?

BEN: Yeah, so we mentioned they can vary in how important they are, how neglected they are, and how tractable they are. That's one framework that's used in effective altruism to compare different problems. It's not the only one, but it's a useful starting point for making some comparisons. So then you can ask, "Why think some problems are much more important than others?" And by importance, we mean: if you were to make progress on this problem, how much impact would result, which you can roughly think of as how many people would be benefited, and how much they would be benefited. And in a sense, the common-sense view is actually a bit in two minds about this topic. Because I think if you ask people whether some problems in the world are bigger than others, they would totally agree. And surveys of millennials typically find that people think climate change is the world's biggest problem, and you can make a pretty good case for that. And I think if you look into it more, you can actually start to find potentially really big differences in how important different issues seem. And a big framework I use is the longtermism framework. So I think there could be a lot of future generations, and one of the most important things we could do today is set up the world to enable those future generations to continue and to do well, and put civilization on a good footing. So one way to think about which issues are important is in terms of which issues will put civilization on the best track in the long term. And it seems that with many issues, it's hard to see how they have a lot of leverage on that. Whereas there are other issues, like artificial intelligence, where it seems like there's a chance that some kind of change to the world as big as the Industrial Revolution might actually happen in our lifetimes. That could be machines that are smarter than us. And that could just be one of the most important events in history, that transition. And so making sure that goes well just seems like an insanely important thing to get right from this longtermist perspective.
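The importance/neglectedness/tractability comparison Ben mentions is sometimes sketched as a rough multiplicative score. Here is a toy illustration; the problem names and numbers are made up for the sketch, not 80,000 Hours' actual ratings:

```python
# Toy sketch of the importance/neglectedness/tractability (ITN) framework.
# All scores below are hypothetical illustrations, not real ratings.
problems = {
    # name: (importance, neglectedness, tractability), each on a rough 1-10 scale
    "problem_a": (9, 8, 4),
    "problem_b": (7, 2, 6),
}

def itn_score(importance, neglectedness, tractability):
    # Multiplying the factors captures the idea that a problem scoring
    # very low on any one factor is much less promising overall.
    return importance * neglectedness * tractability

for name, factors in problems.items():
    print(name, itn_score(*factors))
```

Even with crude scores like these, the spread between options can be large, which is the point Ben is making about comparing problems at all rather than not trying.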

SPENCER: Right. Like, suppose you live sort of in the era where the Industrial Revolution was just about to begin. And you had the ability to maybe influence how fast it went, or whether it went in a direction that was tended to be better for humanity, or worse for humanity. Like that seems like a sort of pivotal moment in history. It's not clear to me that that's actually true of the Industrial Revolution, like whether people could have influenced it or made it more positive or negative. I'm not sure. But at least I guess that's the intuition. Is that right?

BEN: Yeah, I agree, with the Industrial Revolution it's not clear. I mean, I think maybe one thing that you could have tried to do is, it seems like basically whichever country developed the Industrial Revolution first dominated the world for a while. And you know, that was basically what was behind the British Empire. So maybe if you could have changed how quickly different countries developed it, that could have changed which empire was dominant, or how dominant the British Empire was. But I think actually artificial intelligence is a stronger case for it, which is that once there are a bunch of powerful machine intelligences, there's no time limit on how long they could last. And so whatever they're optimizing for, they're now potentially optimizing for that in the long term, whereas with humans, we die, and so there's a kind of general turnover of who's calling the shots. But it's not obvious that would apply to a bunch of artificial intelligences in the same way. So there's this idea that there could be a long-term lock-in of what values are being optimized for in the world, or at least that's one path.

SPENCER: I want to understand the framing we're working with here. Are we thinking about a civilization where artificial intelligences are literally in charge? Or where they're being called on to make predictions? Or, yeah, what's the frame you're working under?

BEN: With the kind of value lock-in one, that would be these systems that are optimizing for their own values of some kind. So if they were perfectly controlled by humans, then this wouldn't be an issue in the same way, because it would come back to whatever values the humans have. But the idea is there's a chance that, when systems become more intelligent than we are, it's not obvious that they would still be perfectly under human control and that they wouldn't just be optimizing for some other goals.

SPENCER: Because I've heard a few different scenarios here, which all kind of float around this set of ideas. One of them is the idea of a completely uncontrolled singleton. So you have one artificial intelligence that someday is built that is extremely powerful, and basically, because its goals are not exactly aligned with what humans want, it just starts executing on this set of slightly misaligned goals, and that leads to really cataclysmic outcomes. Right, so that's one scenario: we can imagine the rogue AI going out of control. A second scenario we can imagine is humans build an AI that's extremely good at making predictions, extremely good at influencing the world. They have it under control, but that actually gives whoever creates it an extreme amount of power to sort of control the future. So that would be a different type: in the first, you could have a value lock-in where the AI that's misaligned kind of controls the future of what happens. But in the second type, it also feels like there could be value lock-in of a different type, where now whoever created it, basically their values can be put upon the rest of the world, because they have most of the power.

BEN: That was like the British Empire example but more extreme.

SPENCER: And then there's a third scenario, which is maybe even harder to explain, but it's something like: imagine a future with lots of competing AIs. They're powerful, but none of them are so powerful that they can kind of dominate the influence on the world. Let's say lots of different groups have made lots of different AIs, and they're more powerful than humans in many ways, but they're still in competition with each other. You could imagine a slow value drift where, over time, those AIs that are optimizing more for, let's say, profit, or self-improvement, enhancing themselves, or whatever, tend to out-compete, and you kind of get this evolutionary drift, where you end up with a future that's dictated by the competition between AIs. Now humans are not even in control of the future, but neither is any particular AI. It's this slow competitive drift scenario where the values get locked in that way.

BEN: Yeah, exactly. And, I mean, it seems that might almost be happening already; we're kind of letting algorithms make more and more decisions in society in general, and corporations kind of have their own interests, and they're optimizing for things that aren't aligned with human values, like they cause pollution or they get us addicted to Facebook. And you can kind of imagine that process just continuing to evolve, where corporations get even more powerful, and the algorithms get even better, and even more decision-making gets delegated to them.

SPENCER: Yes, that's a great example, because in many ways a corporation is a superintelligence, or a mini superintelligence. You might think that Apple, as a company, is in some ways kind of smarter than an individual human. Even though it's not a conscious entity, it still has intelligence.

BEN: Yeah, maybe more powerful than various nations. Maybe Facebook is an even more plausible example. Think about the ability to send a push notification to a large fraction of the world's population. It's not something that many people could do.

SPENCER: Yeah, totally. And this maybe illustrates some of these different examples. So in the Facebook one, you can imagine some person or group of people at the helm, influencing people's minds through push notifications, or through an AI they control. But then there's the other side of things: if you think about, let's say, Apple as an intelligence itself, what is it really optimizing for? Is the thing it's optimizing for the same as what humans care about? You can imagine it's really an alien mind optimizing the future of the world. It's like imagining what it would be like if one company controlled the whole world, except that instead of a company, it's an AI.

BEN: Yeah. And so a lot of it comes down to the shareholders. How much influence can they actually have on things? And do the shareholders' values reflect what we want society's values to be? And then, how can they actually make sure the company stays aligned with those values? There are already big principal-agent problems with current companies, where the management often do things the shareholders wouldn't want them to do, but it's hard for the shareholders to perfectly control the management.

SPENCER: Now, I think when talking about AI scenarios, some people have this reaction that's like, "Are you kidding? You're talking about this sci-fi-sounding thing when we have so many actual problems in the world today: poverty, disease, global warming, etc." So I'm just curious, how do you talk about this? How would you respond to that critique?

BEN: Yeah, there are a few different things in that. My broader framework is the thing I was saying about which problems in the world are most pressing from this long-term perspective, and that can sometimes lead you to work on very immediate issues. We'd been trying to encourage people to work on pandemic preparedness since long before COVID, and if we'd had better preparation, it would have reduced a lot of suffering in the here and now, as well as helping to make humanity safer in the long term. I agree that doesn't apply with AI as much. But it shows that some of these existential risks are also very immediate concerns, too.

SPENCER: Maybe some aspects of AI, too. Like, if you helped make AI more explainable, so you could better understand what an AI is doing, that could have some immediate benefits, and then also a longer-term payoff in terms of building safer systems.

BEN: Yeah, a lot of AI safety researchers do try to find things that help both from a short-term and a longer-term perspective. I guess the other aspect of your question is, to some degree, that is just a really difficult thing. A lot of the people we work with wanted to do good because they saw people suffering around them or in the world, and that's a big motivator for them. And then to some degree, you just have to make this individual choice about how abstract you're willing to be versus doing the thing that's really driving you.

SPENCER: One thing that helps is trying to make the abstract concrete, like actually imagining the future people being affected. Part of what makes it so abstract is that it's distant in time, you don't know who's going to be affected, and it's hard to know exactly in which way they'll be affected. But if you actually imagine the negative outcomes you could avert for specific future people, even ones you made up, maybe that brings home that it's still cashing out in the same thing, just cashing out in a more humanized way. It's just that you don't know the face of the child that you're helping, right?

BEN: Yeah, exactly. And you can ask whether people matter less just because they're far away from me in space or far away from me in time. I ultimately think there's not much difference there morally, so I try to line up my motivations with that as well. But it's definitely challenging. I definitely find it very motivating myself to sometimes think about how the world could get blown up at any moment by an accident with our nuclear alert systems. It just seems so crazy to me that we've built these doomsday machines that are pointed at each other, constantly turned on, and ready to go at a moment's notice. It would not only be the worst thing to happen to our generation, but could potentially snuff out maybe the only intelligent life in the observable universe forever. Yeah, I can get into a frame of mind where I feel pretty motivated by that.

SPENCER: Yeah, it's an interesting way to think about it. There are framings on this that can make the connection very visceral. We talked about AI and pandemic preparedness. Maybe this is a good time to mention some of the other top areas that you think have potentially really high impact.

BEN: Yeah, so one bucket is directly reducing existential risks, which we think are really important. And, not to jump ahead to some of the other parts of the framework, but we think they're also very neglected by our current economic and political system, partly because future generations can't vote and don't have economic power. So we think they look good on both of those two key parts of the framework.

SPENCER: So if you're using an efficient-market-hypothesis kind of thinking: what sorts of solutions are going to be undervalued in the market of ideas? The people being affected don't exist yet, so they can't actually work on these problems themselves, which means there are going to be too few people exploring these solutions.

BEN: Exactly. That might be the ultimate market failure that we face: there's this missing market of all future generations, to put it into an economics framework. So I guess, with the other things that 80,000 Hours recommends, one bucket is directly reducing big neglected existential risks. We have AI, pandemic prevention, and, as I briefly touched on, nuclear security. And then we have extreme climate change. Those are the four we focus on the most, roughly in order.

SPENCER: And by extreme climate change, you mean feedback loops, like the ice caps melt, and that causes something else, and then it kind of spirals totally out of control? Not a modest amount of warming over the next 100 years, right?

BEN: Yeah. So I think the scenarios we should be most worried about are these tail scenarios. You look at the climate models, and it comes out that there's maybe a 5% chance that climate sensitivity is actually high enough that, if we double the CO2 concentration, we could be on track to have 10 degrees of warming, rather than the two or three degrees that people normally suppose. And that seems like quite a high chance to me when you just step back. But then, yeah, I actually think the biggest contribution of climate change to these long-term issues is as a risk factor. Even moderate climate change could act as a stressor on the global economy in general and on political relations, and that could make us less able to address other big risks that come up in the meantime. Another way of thinking about it: a lot of young people who want to do good are focused on climate change now. So if climate change were solved, that would free up a lot of people to address these other issues, at a very basic level.

SPENCER: Sure. So one of the things that seems to have happened over time is that the advice 80,000 Hours gives has shifted away from things like poverty and those kinds of cause areas and more toward these longer-term cause areas. Do you want to comment on why that shift has taken place?

BEN: Yeah, to some degree, I think it might be a bit of a misconception that that shift has taken place in our views, because our first ever ranked list of causes was in 2013, and that had global catastrophic risks right at the top of the list already. And we were officially founded in 2012.

SPENCER: Okay, I didn't realize that. But is it not true that there was more emphasis on the shorter-term causes previously? That's my impression.

BEN: Yeah. So one of our thoughts early on was that not enough people would be interested in those issues, and so we thought we could have a big impact by talking about a broader range of things. We could help those people still have a much bigger impact than they were on track to, and maybe some of them would get interested in the more niche, unusual issues as well.

SPENCER: I see. So you're saying that even early on, you thought some of the highest-impact stuff is this kind of stuff about influencing the future or preventing the destruction of humanity. But realistically, only some people are willing to work on that, and you could still greatly improve other people's impact by helping them be more effective on, let's say, global warming, global poverty, or that kind of cause. So you should have advice for them too, because that's still a really good win relative to what they might otherwise be working on. Is that right?

BEN: That's part of it. We do still have some of our podcast episodes about a variety of other issues, including factory farming and global poverty. So yeah, that's part of it. But I think it probably is fair to say we have become a bit more confident in those existential risk issues over time. It was only around 2015 that Bostrom released Superintelligence, and that subjected the idea to a bit more criticism than before, so we could get a sense of how well it was holding up. I think also, AI progress has been faster than we expected, and the sooner we think more powerful systems might arrive, the more pressing the issue seems. So those have probably been positive updates. On the other hand, these issues, I think, are becoming less neglected. AI alignment has really grown a lot as a field since 2015 or so. And talking about catastrophic pandemics used to be not at all a common topic, but that's much more on people's minds these days.

SPENCER: It's pretty wild how different things are in terms of AI than, let's say, even six or seven years ago, when people really thought you were crazy if you talked about future risks from artificial intelligence, at least that was my impression at the time. And now it's just wild how mainstream that is.

BEN: Yeah, I mean, I think that's a really big success of the rationality community and effective altruism community and Nick Bostrom, and people like that.

SPENCER: You've got to give kudos to Eliezer Yudkowsky, who literally identified these cause areas when no one was talking about them. Yeah, it's very interesting how public perception can shift.

BEN: Another big bucket of areas that we work on, you could think of as the meta-issues. Personally, one of the causes that I would most like to see more effort on is what we call global priorities research, which is, in a way, a bit of a cop-out: it's the cause of trying to figure out which causes are most pressing. There are only tens of people really doing research on some of the foundational questions that drive your answers to that question. Our hope is that we could get a whole field of research going in academia that would enable people in the future to have much better answers to the question of which problem is most pressing. And there's now an institute at Oxford called the Global Priorities Institute that's focused on foundational questions in ethics, economics, and other subjects to help people figure out which global priorities are most pressing.

SPENCER: Do you think that, because there's a power-law distribution or something similar in how good different problem areas are to work on, it makes sense to devote a non-negligible amount of resources to trying to figure out whether maybe there's something we missed that's even further out in the tail? Because it could be 10 or 100 times more important than the things we're already working on.

BEN: Yeah, exactly. And it seems like a lot of important ideas have been discovered in the last one or two decades, so we might project out from that that there will be other big shifts in which things seem most pressing in the future, if we put more work into it.

SPENCER: That makes sense. Are there other meta-areas that you're focused on?

BEN: Well, yeah, one is building effective altruism, where the idea is, rather than trying to do good yourself, you could try to build a community of people who are all trying to do good together, and who want to work on whichever issues are going to be most pressing in the future. That's a way of getting a bit of leverage on what you're doing. So we've worked a lot on that as well, and 80,000 Hours itself has been one of the biggest forces that have helped to drive the growth of the effective altruism movement.

SPENCER: I think about the sort of recursive nature of this. It's like, well, you could go try to do a bunch of good, or you could try to figure out how to do more good, or you could go figure out how to get other people interested in doing more good, or you could try to get other people to figure out how to do more good, right?

BEN: And that's kind of the level 80,000 Hours is at, because we're encouraging people to go and do global priorities research to then figure out how to do more good. So we're a few steps up. The recursion does stop at some point, because you just run out of opportunities and the diminishing returns become too big, and then you should switch to doing more object-level things. Across the community, you need a bit of both: you want some meta work to make the rest of the community more effective, but you also need some people actually solving concrete problems, so it's eventually cashing out into some actual impact.

SPENCER: Not everyone can do the meta work, though.

BEN: I mean, it might not be crazy to have everyone, or 90% of people, do meta work for like 10 years, and then switch to the object-level stuff later when things have become a bit more settled. But from a practical movement-building point of view, it would seem a bit suspicious if you had that many people focusing on meta work.

SPENCER: "We're just gonna do meta-stuff until we've figured out the answers, and then actually solve the problem."

BEN: I mean, that's kind of like Philip Trammell's proposal of patient philanthropy: you just save money in a big endowment for hundreds of years, and because the rate of return on investment is higher than the rate of economic growth, you become a more and more influential body over time, and then eventually use that to address whatever is the most pressing issue at that time.

SPENCER: I don't know about you, but my intuition tells me there's like a 99% chance that in 100 or 200 years, the money is used for something other than what you expected, or some group takes it over, or the money's lost, or something like that happens.

BEN: You just have to work out the annual risk of confiscation and make an estimate of that. The issue is whether that risk is enough to overcome the gap between the rate of investment returns and the social discount rate; if yes, then you shouldn't do this. But you could think there's like a half-percent chance of losing it all every year, and then, as long as the investment returns are more than half a percent above the social discount rate, you'll still come out ahead in expectation. Though you are right: almost all of the expected benefit comes from this very small segment of scenarios where you accumulate loads of money and it all goes to plan. The median scenario is that you just lose it all without achieving anything.
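Ben's back-of-the-envelope logic here can be sketched numerically. In this toy model, all of the rates are illustrative assumptions (not figures from the conversation); roughly, waiting pays off in expectation when the investment return beats the discount rate by more than the annual confiscation risk:

```python
# Toy model of patient philanthropy: invest a donation for some years,
# accepting an annual risk of losing everything, then compare the
# expected discounted payout with donating the money immediately.

def expected_value(initial, years, annual_return, confiscation_risk, discount):
    """Expected present value of waiting `years` before spending the fund."""
    survives = (1 - confiscation_risk) ** years       # chance the fund still exists
    grown = initial * (1 + annual_return) ** years    # value of the fund if it does
    return survives * grown / (1 + discount) ** years

# Illustrative rates: 5% real returns, 0.5% annual confiscation risk,
# 2% social discount rate, over a 100-year horizon.
donate_now = 1.0
wait = expected_value(donate_now, 100, 0.05, 0.005, 0.02)
print(round(wait, 1))  # roughly 11x the value of donating immediately
```

The rough break-even condition is annual_return > discount + confiscation_risk; here 5% > 2% + 0.5%, so waiting wins in expectation under these assumed numbers, even though many scenarios end with the fund confiscated.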

SPENCER: It reminds me of these wild situations where someone bought Amazon stock really early in the days of Amazon, put it in their brokerage account, and then just didn't touch it for a really long time. Then they go to log into their account, and they discover their shares have literally been taken from them, because there's some weird rule that says if you don't log into your brokerage account, it's considered forfeited after 10 years or something. These people lost vast amounts of money when they thought they were going to be super-wealthy. So, you know, there are weird logistical things like that.

BEN: Yeah, you'd need to think about how to set it up pretty carefully.

SPENCER: My guess is that there are other areas besides the ones you mentioned. Do you want to talk about that for a second, in terms of important problems to work on?

BEN: Yeah. So it's important to say, we're really unsure about which causes are most pressing. And although we do have our own list that we throw out there for people to consider working on, part of the aim of 80,000 Hours is just to get people to think critically about which problems are most pressing and to develop their own answers. One thing we have to help with that, besides the framework, is a big list on the website of other ideas for problems that we think could be really promising in terms of how important and neglected they are. So if you go to the problem profiles page, it'd be really cool to see people exploring more of those ideas, and also proposing new ones.

SPENCER: Very cool. Yeah, I'm actually a pretty big fan of people trying wild ideas, because that's where good ideas ultimately come from: people exploring things that at first seem a little out there, crazy, and maybe even not that promising to most people, but then they find a way to look at things differently.

BEN: Yeah, especially if you think about this from a community point of view. While you might want a bunch of the community to focus on the top priorities, you probably want another 20% or so just spread out over all kinds of different areas that might be really promising, exploring them.

SPENCER: Absolutely agree. I think these people sometimes struggle, because they'll get pushback when they can't give the strongest argument that their thing is the most important thing in the world. But that's what an idea looks like in its infancy, before things are fleshed out.

BEN: Yeah, exactly.

SPENCER: And chances are, any given one of them is not the most important thing in the world, obviously. But if just one out of 50 of those actually turns out to be incredible, it's worth it, you know?

BEN: Yeah, and you also have diminishing returns in each area. So it could well be worth having a couple of people work on something, even if it's not worth thousands of people going to work on it.

SPENCER: Right, especially if they're particularly well suited to it. Alright, so we've talked about problems. Let's go into the second part of your framework, which is thinking about which solution to work on for your chosen problem. Is that right?

BEN: Yeah, that's a good way of thinking about it. We've actually touched on this a bit already, when we were talking about how much different charities vary in their ability to save a life. That data was actually from within the cause of global health already. So what we were showing there is that even if you've decided, "I want to focus on global health," it turns out that some ways of saving a life, at least experts in the field think, save 100 times more lives per dollar than the average charity or intervention in that area. And that would suggest that by choosing carefully, you could maybe increase your impact a hundredfold, even within the problem you're already focused on.

SPENCER: Do you want to give an example, maybe, that makes this concrete: how there can be different solutions to the same problem, and how they can be so vastly different in impact?

BEN: Staying with global health, there are these big surveys of the cost-effectiveness of different interventions. But maybe even more clearly, GiveWell is a research organization that tries to find really cost-effective ways to help the global poor, and currently they recommend deworming and malaria nets as some of the most cost-effective interventions. They reckon that those are around 10 or 20 times more cost-effective than direct cash transfers. That's just giving people cash, which is quite an attractive intervention in a way, because people know how to make themselves better off. So just giving them cash with as little overhead as possible is a pretty good starting point.

SPENCER: Just to clarify, this is giving cash to people who are extremely poor, and in a way that's very hard to game, where you can actually tell through objective means that they're extremely poor.

BEN: Yeah, exactly. So it's finding the poorest people in the world and giving them cash with as low overhead as possible. You could almost think of that as the index fund of global health and development work.

SPENCER: But ironically, I think GiveWell would say that's probably a lot more effective than your average charity in global health. Is that right?

BEN: Yeah, I would say that's probably still above the median, and that's because most charities don't help the very poorest people, and they also incur a lot of overhead, and it's not obvious that they actually get enough leverage to overcome that overhead. So we might think the median is a bit below cash transfers, and then we have these more leveraged things like malaria nets that are maybe 10 or 20x more cost-effective than cash transfers. So you're getting a spread of maybe, let's call it 30 to 100, between the median and these most effective things that GiveWell has identified.

SPENCER: It's wild to think about numbers that large. Let's say it really is 20 times better to buy malaria nets (I think we should be extremely uncertain about what the numbers actually are), so one solution has 20 times more impact per dollar than another. Then working one year and donating, let's say, 5% of your income to the better one would have the impact of working 20 years and donating 5% of your income to the other. I mean, that discrepancy is wild.

BEN: Yeah, totally. And if everyone were acting on that, it would be like increasing the number of altruistic people in the world twentyfold.

SPENCER: So in terms of thinking about solutions, one axis you can think about is how certain your solution is, right? Are you doing something that you're very confident is causing good? Or are you taking more risk, where there's a chance that it won't do any good at all, but if it does, it's going to be a massive amount of good? Do you want to comment on that?

BEN: Yeah, so it seems like it's often possible to do even more good again through, say, funding research or funding policy change. Even if the chance of success is lower, the rewards seem like they're often high enough that it can be even more effective again. For instance, Open Philanthropy, which is partnered with GiveWell, has funded research into a gene drive that might be able to really crush the population of a certain type of mosquito that carries most malaria. I can't remember the exact figure, but I think they spent under $20 million on that. If that works, it might be able to massively reduce malaria in one go.

SPENCER: Right, so instead of having to get people bed nets to protect them from mosquitoes, in theory you might just wipe out whole mosquito populations.

BEN: There are 20 or 30 different species of mosquito in Africa, so this would just be taking out the one that's causing the most malaria.

SPENCER: Right. Because obviously the concern people immediately have is, well, what eats the mosquitoes, and then what eats the things that eat the mosquitoes, and so on? What are the repercussions? Obviously, that would have to be taken into consideration. But the idea is that maybe there's this approach out there that has a good chance of not going anywhere, but if it did go somewhere, it could remove a significant proportion of all malaria in some areas, rather than people having to sleep under a bed net forever.

BEN: Yeah. And around a million people a year, many of them children, die of malaria, so it would be a pretty big deal. So yeah, we often encourage people in our advice to seek out these higher-risk things as well, and often they seem more promising than the more evidence-backed things. I think that's particularly true in other causes, because global health is the one where we actually have the most data and the most measurements, so it's been a bit unfair that I focused on global health to make this point about solutions. I actually think you get bigger differences within global health than in most other causes, where we don't have this kind of data. There's just one extra point I would make: I think normally you get a bigger impact through choosing your problem in the first place than through choosing the solution. But there are still big differences between solutions, so it's worth thinking about too.

SPENCER: Right, it sort of compounds, right? If you choose a problem that's totally worthless, then obviously, no matter how good your solution is, that's not going to help. On the flip side, if you choose a problem that's really important but you choose a totally useless solution, that's not going to help either. So it seems like there's some kind of multiplicative thing.

BEN: So I think all four of the factors multiply together. That means if you can find something that's twice as good on each, then in total you've found something that is sixteenfold better. But I do also think that, if you start from the very common things that people work on in social impact, then by working on something that might have a big impact on the long-term future and is really neglected, like AI safety, you might well be able to have 100 times more impact. Whereas I think it's often hard to increase your impact a hundredfold just through choosing a solution better, though global health might be one area where you can do that, because the data exists.

SPENCER: And if any of them is zero, you get zero at the end, because they all multiply to zero. But some of them are more uncapped than others.
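The multiplicative point the two of them are making can be spelled out in a couple of lines (the factor names below are illustrative labels for the framework being described, not 80,000 Hours' exact terms):

```python
# If the career-impact factors multiply, a modest 2x edge on each of
# four factors compounds to a 16x difference overall, and any factor
# at zero wipes out the whole product.
from math import prod

factors = {
    "problem importance": 2.0,
    "neglectedness": 2.0,
    "solution effectiveness": 2.0,
    "personal fit": 2.0,
}

total = prod(factors.values())
print(total)  # 2 * 2 * 2 * 2 = 16.0

# A worthless solution (or a worthless problem) zeroes everything out:
factors["solution effectiveness"] = 0.0
zeroed = prod(factors.values())
print(zeroed)  # 0.0
```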

BEN: Yeah. And there's also an issue of how much has already been factored into common sense. For people who are a bit more thoughtful, once they've chosen a cause, they are already thinking about what actually helps solve that issue; that's more factored into people's existing thinking. Whereas most people will never sit you down and say, "What are the biggest problems in the world? Which are most neglected? What can you do about them?"

SPENCER: Going back to the kind of hits-based giving, or taking risks in your charitable efforts: I tend to think that a lot of the best things to do are the sort of thing that will probably fail but has some reasonable chance of succeeding, like maybe a 10% chance of success. I tend to think that if something has a 99% chance of success, it's much more likely that people are already taking that opportunity. And on the other hand, if something seems to have like a one-in-a-million chance of success, then you can't tell the difference between one in a million and one in a billion. We just can't think about numbers that small when it comes to projects, so it's very hard to have any kind of calibrated sense of how likely it is to succeed. I'm just curious to hear your thoughts on that.

BEN: I'm pretty sympathetic to that. I would just say it does seem to depend a bit on the area. In some areas, people seem overconfident, and people sometimes have a bit of a lottery-seeking bias, so things with low probabilities might actually become over-invested in. An example of this might be trying to become a professional athlete as a high school student: people think they have a 10 or 20% chance of success, but they actually only have a 1% chance. So in that domain, people should actually be keener to avoid risk than they are. Whereas it seems like, for instance, in biomedical research, there's some evidence that the main funders in the US, like the NIH, are pretty risk-averse in what they fund, and there are opportunities, therefore, to take a longer time horizon, support up-and-coming scientists who don't yet have a track record, and maybe do better by funding those higher-risk projects.

SPENCER: Yeah, that's a great point. So maybe we need to add something else to the model, something like: how socially normal is it? Everyone is used to hearing about people trying to become a professional athlete, right? So you combine that normalcy with the risk, and people will still do it. Whereas if you're doing some weird project that's not considered normal, not the sort of thing people usually do, and it also has a high risk of failure, I think that's maybe the kind of thing that is way under-explored.

BEN: Yeah. One idea is that if you've just had a bunch of recent success, it can lead to overestimating your future success. If you're a high school athlete, well, you've just been killing it: you're the best at your school, things have been going well, with positive surprises at each step, so maybe you expect more. Whereas if it's something you've never tried before, well, if you look at the overconfidence literature, when it comes to very difficult or complex tasks, people tend to underestimate their abilities relative to others. So that could suggest that, for something like being an AI safety researcher, you might expect people to actually be underconfident in their relative abilities, compared to other domains.

SPENCER: Yeah, we actually ran some studies on this, and we found a kind of cute finding. If you ask people, "Out of 100 people, how many of them are you better than at driving?", they tend to say they're better than most of them. Most people think they're better than most other drivers, right? But if instead of driving you ask about racecar driving, people will actually go the other way and say they're worse than most other people at racecar driving. So it kind of illustrates this point: "A racecar driver? That sounds pretty hard, actually."

BEN: Yeah, that's so cool. So you replicated those results from the overconfidence literature, and they worked out. Yeah, exactly. Cool. Okay.

SPENCER: I would just add a kind of nuance there, that it depends a lot on whether you're having people compare themselves to others. This whole example is about your relative ranking to other people. So going back to this idea of the efficient market hypothesis, but with regard to charity: if you're an investor in the stock market, there's just a natural assumption that if you take on higher risk, you can get a higher return. And the only reason people would be willing to take on higher risk is if they believed there was more return coming with it. And it's not clear to me that that kind of thinking applies in the charitable world, that there's that sort of market efficiency. In fact, the risk is just scary, because people want to feel like they actually had an impact, not just that they have an expected value of an impact. And if you're someone who works to help allocate charitable dollars, you want to be able to go to the people you're allocating them on behalf of and say, "Look at the good we did," not, "Well, it didn't work out, but it could have, and it would have been really high impact if it had." So it seems to me like there are reasons to think that in the charity world, in particular, people might not be taking as much risk as they should be.

BEN: I do think in the world of doing good, seeking a bit more risk is probably a good heuristic in itself, for the kinds of reasons you're saying. I think another argument for this is, if you look back over the history of the charitable, socially-minded projects that have had the biggest impact, a lot of them did involve taking on a bunch of risk. We could think of something like the discovery of the oral contraceptive, where a philanthropist basically just backed a researcher who said, "I think this could work." They had no idea that it would actually work out; ahead of time, presumably, you would have put fairly low odds on it. But it turned out, and that led to this huge change for society.

SPENCER: It seems like in general, a lot of scientific discoveries and medical discoveries are like that, where you really couldn't have been confident at the time that it was going to work.

BEN: Yeah, totally. Well, since we're talking about complications: I think there are a lot of very thorny issues to figure out about how much solutions actually differ in effectiveness. And it's interesting that in a lot of the academic research within global health, when GiveWell looked in more detail at the initial measurements of how cost-effective the very top interventions were, and did a much more thorough investigation, and tried to think through the ways they might be wrong, the estimates generally came down a bit. So that's an example of a general phenomenon called regression to the mean, where if you do a bunch of estimates and you find something that seems unusually good, there's a good chance that you've made more positive error than negative error. Luck has made it look better than it actually is underneath. And when you then look into it in more detail, you often find that the things that seemed unusually good aren't actually quite as good as they first seemed.

SPENCER: Yeah, that's a great point. My favorite way of explaining this is to imagine you had 10 people, and you measured each of their reaction times with a simple, quick test. If you then took the person who performed the best on day one, what would your prediction be for day two? Would you predict their reaction time would improve, stay the same, or get worse? Actually, you should predict it's going to get worse on day two. The reason is that if you think about each person's measured reaction time as being how good their reaction time is fundamentally, plus random noise (did they have stray thoughts during the experiment? Were they tired that day? Etc.), then the person who appears to be the best is going to be the person with skill and luck combined, not just the person with the most skill. So chances are the person who performed best on day one actually had some luck going in their favor, and that helped them perform the best. On day two, on average, they're not going to have that luck, so they're going to perform worse. And the more people involved, the more this applies: if you're choosing the best from 10 people, you'd expect them to do, on average, a little bit worse the next day; if you're choosing from a million people, then the next day you might see a very huge drop-off in performance. So if we think of the whole charitable world as sorting through all these possible charitable ideas and trying to pick the best, you have the same exact phenomenon. There's noise in the evaluation process, and the ones where the noise happened to go in their favor, as opposed to against them, are the ones that are going to look good. You go, "Oh, that looks amazing." And then when you actually look at how well it does, it's going to do less well, on average.
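Spencer's reaction-time thought experiment is easy to check with a quick simulation. This is a minimal sketch (the population size and noise levels are made-up illustrative numbers): each person's observed score is a fixed latent skill plus fresh noise each day, and the day-one winner reliably looks worse on day two.

```python
import random

rng = random.Random(0)
N_PEOPLE, N_TRIALS = 10, 10_000

total_drop = 0.0
for _ in range(N_TRIALS):
    # Latent skill is fixed; the noise term is redrawn each day.
    skill = [rng.gauss(0, 1) for _ in range(N_PEOPLE)]
    day1 = [s + rng.gauss(0, 1) for s in skill]
    best = max(range(N_PEOPLE), key=lambda i: day1[i])
    day2_best = skill[best] + rng.gauss(0, 1)
    total_drop += day1[best] - day2_best

# The day-one winner's score drops on average: regression to the mean.
print(f"average day-1-to-day-2 drop for the winner: {total_drop / N_TRIALS:.2f}")
```

With equal skill and noise variance, roughly half of the winner's apparent advantage is luck, and that half evaporates on the second day.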

BEN: That's a good explanation. And another big factor is that the more noise you have in your evaluations, the bigger this effect is going to be. And in social impact there are huge uncertainty bounds, so this can be a very, very big effect. Though I think one thing you didn't mention that could also be important is that if you pick the person who's best on day one, chances are they're still probably going to be well above average, even if they don't do quite as well the next time. And I think this actually got borne out in the global health case, where malaria nets were one of the top interventions in the original academic studies, and after a while looking into it more, GiveWell still thinks they're one of the top ones. So the process is still helping you find things that are more effective than average; it's just that you shouldn't trust your naive initial estimates of how effective something seems.

SPENCER: Yeah, great point. The amount of regression to the mean depends on a few moderators. One of them is the number of things you're selecting between, like we talked about: if you're selecting between 1,000 charities, you're going to get more regression to the mean than if you're selecting between 10. Another one is the noise level, as you pointed out. Imagine there's zero noise, so your evaluations have no noise in them at all; then you actually don't get any regression to the mean, your evaluations just stay constant. And the third one has to do with whether there's any underlying signal at all. Imagine that rather than evaluating reaction time, you were running a coin-flipping competition: who can get heads the most? Because there's actually no skill at all involved, you're going to get massive regression to the mean, because everyone actually has the same skill level. Whereas, thinking about the total reverse of that, imagine something where there's an extreme level of skill involved; there you can get essentially no regression to the mean, because the best person wins every single time, 100% of the time. They're never going to lose to the second-best person.
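The three moderators Spencer lists (pool size, noise level, and whether there is any real skill signal) can be folded into one toy simulation. The parameters below are arbitrary illustrations, not estimates of anything real:

```python
import random

def winner_drop(n_candidates, noise_sd, skill_sd=1.0, trials=4000, seed=1):
    """Average day-1-minus-day-2 score of the day-1 winner in the toy model."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        skill = [rng.gauss(0, skill_sd) for _ in range(n_candidates)]
        day1 = [s + rng.gauss(0, noise_sd) for s in skill]
        i = max(range(n_candidates), key=lambda j: day1[j])
        total += day1[i] - (skill[i] + rng.gauss(0, noise_sd))
    return total / trials

print(winner_drop(10, 1.0))                 # baseline regression
print(winner_drop(1000, 1.0))               # bigger pool -> bigger drop
print(winner_drop(10, 0.0))                 # zero noise -> no regression at all
print(winner_drop(10, 1.0, skill_sd=0.0))   # pure luck (coin flips) -> the whole edge regresses
```

Selecting from 1,000 candidates produces a larger drop than selecting from 10; with zero noise the drop is exactly zero; and when skill variance is zero, the winner's entire observed advantage is luck and all of it regresses away.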

BEN: That's actually getting us on to our fourth factor, which is personal fit, and how much that differs between career paths. That's a thing I've been thinking about a bit recently. If you look naively at some career paths, like scientists or entrepreneurs or writers, and at ways of quantifying their output, there seem to be huge differences. The top 1% of writers might sell 100 times more books than the median. Or for the most successful entrepreneurs, I think with Y Combinator, something like 80% of the market value of the companies comes from the top 1% or so of the companies. So you might look at that data and think, well, just going from the 70th percentile to the 90th percentile in this path might result in me having multiples more impact, because I'll be in the successful tail. But then there's this interesting question: how much of that is just being driven by luck? You might think that once you get to a certain point in startups, say you've had a Series A round, maybe at that point it is a bit more like a lottery. And it's a winner-takes-all situation in many startups in many areas, so whichever startup happens to slightly win the race will get all of the rewards. And so it can look like there's this huge variation in how productive people are, but actually it's more of a lottery.

SPENCER: Yes. And think about people who make it really big on YouTube, or who become famous pop singers. Was there actually a skill differential driving a lot of that? Maybe it's even subtle skills, not necessarily pure singing ability, but other things: how hard they work, how good they are at networking, or whatever. Or is a lot of it just total noise that was completely unpredictable at the beginning?

BEN: We published some research about this with Max Daniel recently, trying to think it through. And I have a slightly boring answer in the end, which is that I think there probably are large, significant differences in productivity between people, at least in certain types of career paths, that can be somewhat predicted ahead of time. But they're not as big as they would look if you just naively take the numbers at face value. It's a little bit similar to the global health interventions case, where the thing that seemed best based on the initial academic studies turned out to be a bit worse than it first seemed, but there were still big differences in the end.

SPENCER: Right. So people actually do have different skill levels, but there are also selection effects, so you should still downgrade your prediction about how skilled people are somewhat, because of those selection effects.

BEN: Yeah, so there could be regression to the mean again. And then luck could also just be a big factor, where, if you think that luck has a very broad, heavy-tailed distribution, then even if your underlying skill distribution is quite narrow, just by multiplying through by luck, you'll end up with a heavy-tailed distribution at the end.
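Ben's claim here (a narrow skill distribution, multiplied by heavy-tailed luck, produces heavy-tailed outcomes) can be sketched numerically. The distributions and parameters are illustrative assumptions, not estimates of any real career:

```python
import random

rng = random.Random(42)
N = 100_000

# Narrow skill spread: most people within ~20% of each other.
skill = [max(0.0, rng.gauss(1.0, 0.2)) for _ in range(N)]
# Heavy-tailed luck: lognormal, so a few people get enormous multipliers.
outcome = [s * rng.lognormvariate(0.0, 1.5) for s in skill]

def top_1pct_share(xs):
    """Fraction of the total held by the top 1%."""
    xs = sorted(xs, reverse=True)
    return sum(xs[: len(xs) // 100]) / sum(xs)

print(f"top 1% share of skill:    {top_1pct_share(skill):.1%}")
print(f"top 1% share of outcomes: {top_1pct_share(outcome):.1%}")
```

The top 1% hold only a sliver of total skill, but a large chunk of total outcomes, even though nobody's skill differs by more than a modest factor.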

SPENCER: I tend to think that most of the people who do incredibly well in a career actually are really skilled. Again, they're not necessarily skilled in the things that you would think of. But so often, you'll hear about someone who seems to be an overnight success, and people think they're an overnight success, and then you go learn about their history and realize they were doing that thing for 12 years, honing their craft, before they suddenly broke onto the scene. Everyone's like, "Wow, they just made it big with one video," and you're like, "No, you didn't see the 300 videos they made before that didn't break out." So I find this is often true. And yet, at the same time, there are also these huge luck forces. So I guess the way I think about it is that it usually takes some extreme amount of skill to get that big, but most people with that skill level still don't make it that big, because there's so much luck involved. Most of the people with that skill level end up not getting there.

BEN: I'd love there to be more research and thinking about these questions, like: does luck multiply the other predictors of success? Or is it a thing that gets added on as a random extra at the end? But yeah, something like what we were saying makes a lot of sense to me.

SPENCER: So what else should people think about when it comes to personal fit?

BEN: Well, I've just been making the case that I think it is an important factor, because in some fields, especially the types of fields that people who want to have a big impact focus on, like research and policy, it does seem like there are big differences in outcomes. If success is even a little bit predictable, then being able to move from the 80th percentile to the 90th percentile in a career path can lead to you having significantly more impact, either directly through achieving more in that path, or, like we talked about right at the start, by building more career capital through being generally successful. And so I think your personal fit is a big determinant of impact as well. In terms of how to figure out your personal fit, that's a whole other big discussion that we could talk about for a while, so I'm not sure how much to go into now.

SPENCER: Maybe give us one or two pointers people can think about when they're deciding if they're a good fit for something?

BEN: Yeah, I think one big thing I'd say is that it can be quite hard to predict ahead of time. So one big thing we do is encourage people to take a slightly more empirical approach when it comes to personal fit. You can consider the different paths you might go down, and then try to think: what are some ways I could test out this path that are quite cheap and wouldn't take too much time? That could initially just involve talking to someone in the area. But you could also try doing short projects in your spare time, or you could do an internship, or you could even just try it for a year or two and see how it goes. And that question of how to structure a series of tests to figure out the best long-term option is something we often spend quite a lot of time talking about with people in our one-on-one advising.

SPENCER: There's certainly the idea that oftentimes passion follows working on something, whereas people usually think, well, you have the passion first, and then you go pursue it. But maybe if something seems somewhat interesting and you go explore it, you'll actually realize that, oh, it's very interesting to you, and you'll develop a passion for it.

BEN: I mean, I think that is a thing that happens. I would never have said that careers advice was the thing I was passionate about as I was growing up. But I did end up finding it really interesting, because I thought it could be an opportunity to have an impact, and I found good people to work with, and things like that. So I do agree that passion often develops from other factors later. But the other point I was trying to make is that often the best way to figure out whether a career path is for you is to actually try it out in some way, and just see concretely what you achieve, or whether you get into graduate school, or whatever the next steps are.

SPENCER: Alright, I see, just in terms of how good you are at it.

BEN: Yeah. Try to keep an open mind and take an empirical approach, rather than trying to predict from the armchair which path is the best for you right at the start, when you don't really have much information about what different career paths are like.

SPENCER: It's shocking the degree to which people don't really get what they're getting themselves into when they go into different career paths. It feels to me like it should just be much more normal to shadow people in different jobs. Imagine if in high school or college, it was just normal that you would go shadow, like, 10 different people, following them throughout the whole day. I just think people would be way better at understanding what they'd actually be good at if they got to see people in action doing things.

BEN: Yeah, totally. Often, even just going to talk to someone in a bunch of these areas can already be really helpful.

SPENCER: Yeah, I feel like one thing that people don't ask enough is, "Okay, can you break down your day for me? What are you actually doing?" People tend to talk in abstractions about what they're doing, rather than getting concrete.

BEN: I definitely agree with that. If you look at the normal careers tests that you might take at school, they ask you about your interests and preferences, and then they try to match you up with careers that fit those. But I'm not sure how much that is the key thing that drives success in career paths. If I were trying to design one, and I had people for a couple of days, and they were really serious about trying to predict which careers might be best: getting a really good sense of the day in the life of people in those different paths could be really helpful. And then I would also try to think: what's something quite close to the work itself that you could get them to do? Could you almost design tasks they do for a few hours that try to be similar to that skill set, or that part of the job? The other way I would try to predict which career path will go best in the long term is to apply all of the standard forecasting techniques, like from superforecasting. So you could ask lots of people and take an average of all of their answers, or you could try to do base rate forecasting and things like that. This is what the research shows is actually the best way to predict super uncertain things like career success, but I've never had a careers advisor say, "Have you considered doing a base rate forecast?"

SPENCER: So this would be like asking other people, "How well do you think I'd do in this career?"

BEN: That would be the kind of expert judgment aggregation method, where Tetlock's work often finds that simply taking the average of lots of other people's guesses is a pretty good approach in many situations. I wouldn't necessarily put that much weight on random friends and what they think. The idea is to get views from people who've been involved with selecting people, or predicting how different people will do in different career paths, in the past. Someone who's done lots of hiring for a startup is going to be in a much better position to judge whether you're going to do well at that startup than a general friend who doesn't know what the work involves.
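The "average lots of guesses" effect Ben mentions is easy to demonstrate with simulated forecasters (the numbers are purely illustrative, and this assumes the guesses are unbiased and independent, which real crowds often aren't): each individual guess is noisy, but independent errors partly cancel in the mean.

```python
import random
import statistics

rng = random.Random(7)
TRUE_VALUE = 50.0

# 100 forecasters, each unbiased but noisy (sd of 10 around the truth).
guesses = [TRUE_VALUE + rng.gauss(0, 10) for _ in range(100)]

typical_individual_error = statistics.mean(abs(g - TRUE_VALUE) for g in guesses)
crowd_error = abs(statistics.mean(guesses) - TRUE_VALUE)

print(f"typical individual error: {typical_individual_error:.1f}")
print(f"error of the averaged guess: {crowd_error:.1f}")
```

The averaged guess lands far closer to the truth than the typical individual, which is why aggregating many informed judgments tends to beat asking one person.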

SPENCER: That makes sense. Also, if you're going to solicit feedback from people, like "How well do you think I'd do in X career?", you have to also contend with social biases: they're going to want to make you feel good, and not make you feel bad.

BEN: So there are a lot of ways this could go wrong. I would also encourage people to really think about it themselves, to try to form an inside view about what they think drives success in this career and whether they have whatever it takes to be successful. To some degree, you have the most knowledge of your skills and your preferences, so it does make sense to put a lot of weight on your own assessment rather than deferring to others. But I do think it can also be useful to collect a wide range of views and try to take some average of them, and hope some of these biases average out, and, like you're saying, encourage people to be honest. It can be hard to tell someone they don't seem cut out for a path.

SPENCER: One of my favorite tricks to get more honest reporting on things like that is to ask a relative question: "Which of these three career paths do you think I'd be best at? Which do you think I'd be second best at? And why?" Rather than "Would I be good at career X?", where there are going to be a lot of social reasons for people to come back with, "Yeah, I think you'd do well." If it's relative, there's always one that you're best at, and one you're second best at, right?

BEN: Yeah, no, that's a pretty good idea.

[promo break]

SPENCER: Okay, so when you're thinking about having an impactful career, we've talked about which problem to work on, we've talked about which solution to work on, and we've talked about personal fit. Do you want to walk us through the last factor, which is leverage? What is leverage?

BEN: Yeah. One way of thinking about leverage is: how many resources can you bring to bear on these effective solutions in a given path? As an example, one of our readers, [inaudible], wanted to help people have better legal representation, and she was considering becoming a public defender, which is a classic social impact career path if you're a lawyer. But she thought about it some more and realized, well, actually, if I could change policy even just a little bit, that might affect millions of people. And so she went to work on criminal justice reform as a policy area instead. She thought that would be a way of bringing a lot more resources to bear, or making a lot more progress, on that solution. There are different ways of improving criminal justice, and you could try to work on them yourself, or you could try to get the government more focused on those things.

SPENCER: I see. So leverage is: you put in a little bit, and it provides a lot of output, essentially. So if you're thinking about helping people with malaria, one thing you could do is make your own bed nets, then fly somewhere where there's a lot of malaria and hand them out. That's a very low-leverage activity. A second strategy would be: okay, maybe you hire a company that already makes bed nets, and then you distribute those; that's maybe more leverage. And then you can keep going up the chain until eventually, maybe, you're convincing governments to buy the bed nets and hand them out, and now maybe that's even more leverage, and so on. Is that the idea?

BEN: That's a good way of thinking about it. Though the things higher up the chain won't necessarily always be better, because the chance of success might decrease more quickly than the resources increase. But it's at least worth considering a wide range of ways of getting leverage, and trying to think about which one might let you contribute the most resources, given the things you're focused on.

SPENCER: Yeah, that makes sense. So when people are thinking about this idea of leverage in their career, what should be in their mind? Or how might they like to make that evaluation?

BEN: Yeah, so we've talked about a couple of different paths. One is that you can get leverage by developing really valuable skills and then controlling your own labor, making that labor as valuable as possible. But then there are some other paths we've talked about. One option is community building or advocacy, and this could be quite a nice example of leverage: you could consider working on pandemic prevention yourself. But if you can, instead, find just one or two other people and get them convinced of it, or help them switch, and those people are roughly as skilled as you, then you would have two or three times the impact. So by focusing on community building or spreading important ideas, you've gained more leverage than you would have had if you'd just worked on that path yourself.

SPENCER: I imagine there's also a kind of scale that leverages things, where you're trying to influence the world through, I don't know, reducing addiction, something like this, right? If you go work at a company that already has millions of people they're affecting, and you're able to affect the product in a way that helps people with addiction, that's going to have a lot of leverage behind it. Whereas, let's say you're working at a small local addiction clinic, where the most people you could influence is the number of people that clinic sees in a year, which is a relatively small number; that's going to have low leverage.

BEN: Yeah, that's a good example. And I think building organizations can be one route to leverage. There's also this issue of whether there are low marginal costs. We also encourage people to, say, work on research, and that's because if you discover an idea, it can be spread throughout the whole world at almost no cost. So discovering new ideas can be a route to a lot of leverage, because of how they can spread; it's an externality. Another route that we've talked about a lot in the past is earning to give: you can earn money, and then you can direct it to whatever bottlenecks are most pressing in a problem. I partly just bring this up because you had Sam Bankman-Fried on the podcast, who has now made maybe around $10 billion through setting up FTX, which is a big cryptocurrency exchange. We first met him in 2012, when he came to a talk by 80,000 Hours at MIT. And he eventually decided, maybe through quant trading or entrepreneurship, he could have a big impact through earning money. And now he's in a position to fund literally thousands of people doing high-impact projects. So that was another, indirect route to being able to tackle big problems.

SPENCER: Right. So earning to give is this general idea that, rather than trying to have a direct impact, you could make money in your career and then give away some percentage of it, and have a high impact on the world through that giving. And my understanding is that, early on in the days of 80,000 Hours, there was more emphasis on that, and it's shifted over time. Is that right?

BEN: Yeah, I mean, to some degree, we've always just seen it as one option among a bunch of options; maybe for 10% of people it should be something they consider seriously. But what happened is that, because it was such a grabbing idea in the media, we got a lot of media coverage specifically about it. So people tended to associate it as being our main thing.

SPENCER: So it's: oh, if you're someone who wants to have a high impact, and you happen to have a skill set and a setup in your life that makes it particularly convenient to go make lots of money (maybe you're really interested in quant trading or something like this), then maybe that would be one of the highest impact things you could do.

BEN: Yeah, though I would say, for people who are interested in effective altruism, it does seem like it's become less promising over time. There have just been a lot of people who are already very wealthy getting interested in effective altruism, and it seems like the amount of money available is growing faster than the number of people available. So it's creating this overhang, where what we need is people with really good ideas for how a lot of money can be spent, management skills to build large organizations quickly, and grantmakers to figure out how this money can be spent. Those skill sets seem to be the biggest bottlenecks right now, rather than just getting even more money. At the very least, GiveWell still has a funding gap, so you can still save lots of lives with additional money, and I think there are probably a bunch of higher-leverage things than that, too. But if you're just thinking about the most pressing things right now, I would think earning to give has become less attractive over time.

SPENCER: I guess one kind of response to that is, it feels to me like, yes, there's a lot of money in things like the Open Philanthropy Project. But on the other hand, Open Philanthropy being as big as they are, it doesn't even make sense for them to consider smaller funding opportunities. The big topic areas that are really well investigated might have huge amounts of funding behind them, but there aren't necessarily a lot of players looking at more creative funding ideas in smaller or more niche areas. What do you think about that?

BEN: I mean, I agree that Open Philanthropy has a lot of gaps in what they can cover, but there are people trying to fill those as well. Some funds make a lot of small grants to individuals, and they've made that their niche. And there are some other, angel-style donors in the community who are trying to focus there.

SPENCER: People playing those roles, yeah. I guess my sense is still that it's way easier to find interesting opportunities if you're giving away a few million dollars. There might be really high impact things you can do with a few million dollars that wouldn't get to the level where Open Philanthropy would even consider them; it just doesn't make sense for them to. I feel like the higher you go up the food chain into really large opportunities, the more efficient it is, the less your money gets you. But if you're giving away smaller amounts, I do think there are still a ton of really interesting funding opportunities.

SPENCER: Maybe what I'm getting at is that if someone just makes a bunch of money, and then gives to exactly the same causes that everyone else is giving to, maybe that's not that valuable right now, on the margin.

BEN: It still is valuable. At the very least, you're saving the Open Phil grantmakers money, which they can then spend on other stuff.

SPENCER: Right. There's an interesting kind of exchangeability there: if Open Phil could fund the GiveWell charities but chooses not to, then you're contributing to that pool of money that's going to the GiveWell charities, and that frees up Open Philanthropy to have more money. And that's probably a good thing, if you believe Open Philanthropy is going to have a high impact over time, although maybe it's complicated to think philosophically about what you're really accomplishing there. But I guess what I'm saying is, if you make a bunch of money, and then you're willing to consider creating opportunities for high impact that are outside of the things lots of other people are already doing, that's where I see a lot of the best use cases of earning to give.

BEN: Yeah, totally, you'd want to think about what your edge could be. So I think it could make sense to save up the money for a while, and then when you have time to donate, you can put more time into really researching it. You could think: maybe I could get an edge through my connections; I might know people that the large donors don't know, and then I could back them. Or another thing is, you could try to find a cause area that Open Philanthropy isn't focusing on currently, and there are a lot of those; many of the ones on our list of other interesting areas that I mentioned earlier would qualify. You could try to specialize in it and get to know all the key people within it, and then grants would get made that wouldn't be made otherwise. That's the thing I would encourage anyone who's earning to give, or who has some spare money to donate, to consider.

SPENCER: Yeah, that's a strategy that I think is interesting for people who are giving away modest amounts of money: is there someone who's really high impact who could use some more support that would make them even higher impact? Someone who, for example, could just benefit from having more time to research rather than having to do a day job. Right now, let's say they're forced to focus on a day job just to support themselves, and maybe you could help bootstrap them into focusing on doing good full time. That's not very scalable, because it's really helping at the level of one individual, but maybe it's a really good use of money.

BEN: Yeah, and it's a good example of an opportunity you find through your networks, one that other donors might find quite hard to evaluate, because they might not know whether they can trust this person, or how focused on impact they actually are. But if you know them well already, then you can do that. One other thing to consider, which is probably what I would do if I had another $10,000, is to put it into a donor lottery, which is typically run in one or two rounds a year. Instead of donating $10,000, it basically gives you a 10% chance of allocating $100,000. So 90% of the time, I wouldn't have to do any research, which would be super easy. But in the 10% of scenarios where I would have $100,000 to think about, I might be able to spend a couple of days really trying to find an interesting opportunity. And I think that can often be a better path than just having $10,000 and only spending a few hours thinking about it.

SPENCER: It's really cool, because if you're only giving away $10,000, the motivation to spend a huge amount of time investigating your options is going to be lower, whereas whoever wins the lottery is going to have a much better incentive to really focus on using that money well.

BEN: Yeah, well, the hope is there could be some increasing returns to research. One way of saying that is: if I just have $10,000, maybe I'll just give it to GiveWell, because that's the only thing I can figure out in a few hours. But if I had a few days, then maybe I could actually find an opportunity that hasn't been identified already, and so it could be more than 10 times more effective.

SPENCER: Okay, so we've finished talking about these four different factors: which problem you focus on in your career, what solution you use to try to solve that problem, the amount of leverage you have in that solution, and then your personal fit. And we've also talked about the multiplicative effects of these, right? Each of them influences the others, and if you can get them all in place, they multiply together, and that magnifies your impact. Anything you want to add to that?

BEN: No, I think that's a really good summary. And yeah, it means that by making slightly better decisions about each of the four factors, the total difference compounds, and that holds out the possibility of finding paths that are much higher impact than what you started with, and therefore finding something that's potentially both more satisfying and does a lot more for the world.

SPENCER: Yeah, I'll just point out something about when you have a bunch of factors multiplying together. There's an interesting way to think about it, which is that if any one of them is zero, it zeros out the whole equation. So the first thing to think about is making sure none of them is too low, right? Making sure you're not working on a worthless problem or a terrible solution, or with zero leverage, or where you're a really bad personal fit. That's the first thing: you can't have any zeros in the equation, so anything that's too low, you want to push up. And the second thing you can think about is that the way you make the product really big is by making sure at least one of these factors is really large, right? At least one of them has a huge influence. So you can ask yourself: is there an opportunity to make one of these especially big, so that the product ends up being really big? So this has been super interesting, Ben. I want to make sure people have action steps to take. Let's say they want to learn more about these ideas, or they want to think about how they can improve their career. What is the next action they can take right now?

BEN: Yeah, so we've covered a lot of the big concepts, but how do you then apply those to the very concrete career decisions you have to make, like which job do I apply for, and who should I speak to? These are very practical questions, and there are a lot of extra steps involved in getting there. So we created this career planning course, which has eight sections, and you can do one per week. It leads you through all these concepts, but tries to make them much more practical, as a series of questions you can answer to clarify your own career: starting with your long-term aims, then working towards the very next jobs you could take, and then the next steps you're going to take to investigate or put those plans into action. A lot of the stuff we've been talking about today is very high-level, big-picture thinking. But in a lot of great career strategies, you also want to be looking at the options that are right in front of you: what's exciting to you right now, what's working well, what seems interesting, where you're learning a lot. In a really good career strategy, you want a mixture of the more bottom-up approach, which we call working forwards, and working backwards from these really big-picture considerations. The career planning course tries to balance those two. You can find it at 80000hours.org/career-planning/process, and you can get it as an email course, or you can just read it all on our website.

SPENCER: Great. If people want to learn more about you or your work, where can they do that?

BEN: Yes. So the best thing is to either follow me on Twitter @ben_j_todd or check out my personal website, benjamintodd.org. Yeah, thanks for having me.

SPENCER: Awesome, Ben, thanks so much for coming on. This was great.

[outro]





Credits

Host / Director
Spencer Greenberg

Producer
Josh Castle

Audio Engineer
Ryan Kessler

Factotum
Uri Bram

Transcriptionist
Janaisa Baril

Music
Lee Rosevere
Josh Woodward
Broke for Free
zapsplat.com
wowamusic
Quiet Music for Tiny Robots

Affiliates
Please note that Clearer Thinking, Mind Ease, and UpLift are all affiliated with this podcast.