CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 202: Should we widen our moral circles to include animals, insects, and AIs? (with Jeff Sebo)


March 21, 2024

How did we end up with factory farming? How many animals do we kill every year in factory farms? When we consider the rights of non-human living things, we tend to focus mainly on the animal kingdom, and in particular on relatively larger, more complex animals; but to what extent should insects, plants, fungi, and even single-celled organisms deserve our moral consideration? Do we know anything about what it's like (or not) to be an AI? To what extent is the perception of time linked to the speed at which one's brain processes information? What's the difference between consciousness and sentience? Should an organism be required to have consciousness and/or sentience before we'll give it our moral consideration? What evidence do we have that various organisms and/or AIs are conscious? What do we know about the evolutionary function of consciousness? What's the "rebugnant conclusion"? What might it mean to "harm" an AI? What can be done by the average person to move the needle on these issues? What should we say to people who think all of this is ridiculous? What is Humean constructivism? What do all of the above considerations imply about abortion? Do we (or any organisms or AIs) have free will? How likely is it that panpsychism is true?

Jeff Sebo is Associate Professor of Environmental Studies; Affiliated Professor of Bioethics, Medical Ethics, Philosophy, and Law; Director of the Animal Studies M.A. Program; Director of the Mind, Ethics, and Policy Program; and Co-Director of the Wild Animal Welfare Program at New York University. He is the author of Saving Animals, Saving Ourselves (2022) and co-author of Chimpanzee Rights (2018) and Food, Animals, and the Environment (2018). He is also an executive committee member at the NYU Center for Environmental and Animal Protection, a board member at Minding Animals International, an advisory board member at the Insect Welfare Research Society, a senior research fellow at the Legal Priorities Project, and a mentor at Sentient Media.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Jeff Sebo about the moral status of insects and AI systems, preventing moral catastrophe, and the repugnant conclusion. This episode was recorded live on January 30, 2024, and was put on by the NYU Mind, Ethics, and Policy Program in collaboration with EA New York City.

SPENCER: Jeff, welcome.

JEFF: Yeah, thank you, Spencer. And thank you everybody for being here. This is a fantastic turnout. We really appreciate it.

SPENCER: Now, I just want to do a quick poll in the audience because today we're talking about these thorny questions in ethics around AI and insects. Do we have any AI or insects in the audience today? Okay, we got a couple in the back.

JEFF: A few. [laughter in the audience]

SPENCER: Jeff, I think of your work as being about: how do we prevent moral catastrophes? How do we make sure that humanity doesn't accidentally cause some massive amount of harm? Do you think that's a fair characterization?

JEFF: Yeah, that is definitely one of the main themes of my work, and it connects to some of the topics that we might discuss tonight, because we do have a track record of intentionally causing massive amounts of harm to neglected nonhuman populations. But we also have a track record of unintentionally, but perhaps foreseeably, causing massive amounts of harm to neglected nonhuman populations. Sometimes we know animals are conscious, sentient, and agential, and we exploit and exterminate them anyway. Other times, we might not know, or might conveniently overlook the fact that they might have those characteristics, and so we, again, exploit or exterminate them. And so I think there are lots of opportunities here for us to continue that tendency and accidentally cause massive global problems.

SPENCER: Most of the people listening and most of the people in the audience will probably have at least considered ethics of factory farming. You push the boundaries a lot further than that, and get us to consider situations that are not on almost anyone's radar. But let's start just briefly with factory farming as an example. Do you want to just tell us about what you think happened regarding the ethics of factory farming? How did this come about?

JEFF: Factory farming came about over roughly the past century. Factory farming is, roughly, industrial animal agriculture. When the world started to industrialize, when assembly lines came online, we realized we could use them to increase efficiency and scale in a lot of industries, including the food industry. And so over the course of the 20th century, we used these technologies and methods to develop industrial ways of farming animals at scale. By the end of the 20th century, and now at the beginning of the 21st, we have reached a point where we kill more than 100 billion captive land animals per year, many more captive aquatic animals than that, at least one trillion captive invertebrates per year, and one to three trillion wild aquatic animals per year through industrial fishing. This is a massive scale for a food system that is largely unnecessary, to the extent that we can produce food with plants instead, which we already do. And now humanity is contemplating extending this further to invertebrates like insects, as well as, of course, coming up with new ways to bring online and then use new kinds of nonhumans.

SPENCER: And people might say, well, humans have always eaten animals for food. But some argue that factory farm conditions are actually much worse for animals than historical practices of eating animals. Do you agree with that?

JEFF: I do agree with that. We now optimize our animal agriculture, at least 90% of animal agriculture globally, for output. And that means we breed animals to grow as big as possible, as fast as possible, and to produce as much milk or as many eggs as possible, as fast as possible. So a lot of suffering is bred into them, no matter what their conditions of captivity are. But then, in at least 90% of cases, we keep them captive in these large, toxic, cramped environments where they have no ability to express their natural instincts. They live in their own waste, crowded by other animals, pumped full of antibiotics and antimicrobials to stimulate growth and suppress the spread of disease. And then, while still juveniles in most cases, they are transported across great distances in hot trucks and killed on rapid disassembly lines that prioritize speed and efficiency over their health and welfare. And so that is bad for the animals, bad for the human workers, bad for local public health and the environment, and bad for global health and the environment. Yet we do it anyway, because it allows us to produce meat efficiently.

SPENCER: One thing that I find very interesting about the topic is that even if you assign a very small value to the lives or suffering of animals, let's say a hundredth or a thousandth as much as you care about the suffering of humans, the industry is still so big that a massive amount of harm ends up being done. So you don't have to think animals are very important to still think it's a pretty big deal.

JEFF: Absolutely. You could really focus on only humans and think this is a really big deal because, again, factory farming/industrial animal agriculture is a leading contributor to disease outbreaks, epidemics, pandemics, and environmental issues like waste and pollution and greenhouse gas emissions. Even focusing on those impacts alone, factory farming is one of the most harmful, devastating things humanity has ever done. And then if you bring in the more than 100 billion animals who are killed every year in captive settings, to say nothing of the invertebrates, to say nothing of the wild animals, then it becomes almost overdetermined that this is one of the worst things that humanity has ever done.

SPENCER: But now you're pushing for new sort of ethical frontiers. Let's just briefly touch on each of them, and we'll go back into them in more depth. But why could insects be a big deal on the ethical front?

JEFF: Insects could be a big deal because, first of all, they are surprisingly sophisticated given their small size. And second of all, there are very, very many of them. There are about 8 billion humans alive at any given time at this point in our history. There is an unknown number of insects alive at any given time, but estimates range from one to 10 quintillion insects alive at any given time. A quintillion is a one followed by 18 zeros; that's a lot of individual animals. And yes, we know very little about what, if anything, it might be like to be a given insect. But if there is a reasonable chance that there is something it is like to be an insect, and if they have even a small amount of happiness or suffering or other forms of welfare per individual, then the fact that there are one to 10 quintillion of them alive at any given time means that there is potentially a lot of happiness and suffering and other welfare at stake in our interactions with these individuals.
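To make the structure of that argument explicit, here is a minimal expected-value sketch, using placeholder numbers that are purely illustrative and not figures Sebo endorses: let $p$ be the probability that insects are sentient, $w$ the average welfare at stake per insect (in human-equivalent units), and $N$ the insect population. The expected welfare at stake is

$$E = p \cdot w \cdot N.$$

Even with deliberately tiny placeholders such as $p = 0.01$ and $w = 10^{-6}$, taking $N = 10^{18}$ gives $E = 0.01 \times 10^{-6} \times 10^{18} = 10^{10}$ human-equivalent units, larger than the entire human population.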

SPENCER: Now, some people might say, "Well, you know, if you're gonna push it that far, why stop at insects? Why not go to bacteria?"

JEFF: Yes, that is a type of question that animal advocates have been facing for a long time. If you want to advocate for cows and pigs and chickens, people say, "Well, what about the fishes?" And if you want to advocate for the fishes, people say, "Well, what about the insects?" And then if you want to advocate for the insects, people say, "Well, what about the plants? What about the bacteria? What about viruses?"

SPENCER: So usually as a reductio ad absurdum, right?

JEFF: Usually as a reductio, usually as a way to show that this form of reasoning leads to overwhelming and disturbing places: moral inclusion for this vast number and wide range of beings. And so, I am not sure exactly where this process should end. But I will say that we do have a tendency to shut ourselves off to these possibilities because we feel overwhelmed, because it feels implausible, because we sense our own self-importance being challenged. And so, as a first step in those conversations, I want to resist my own impulse to close the door behind whichever individuals I happen to be advocating for right now. So if people say, "What about the microbes? What about the plants? What about the AI systems?" then as a first step, I want to pause and say, "Yeah, what about them? Is it possible that in 10, or 20, or 30, or 40, or 50 years, it will seem a little bit less bizarre to us that they might merit at least some moral consideration on the margins?"

SPENCER: I guess that's something that is part of why you're a philosopher. You say, "Well, what if they are?" Right?

JEFF: Yeah, right.

SPENCER: It seems to me that there's this thing that happens as you get to sort of, let's say, less complex creatures, where the probability of them mattering might go down, but the number of them goes up.

JEFF: Exactly.

SPENCER: And so there's kind of two things fighting against each other. So is that how you think about where we would stop in terms of analyzing which creatures matter?

JEFF: Yeah, that is a major trade-off. We live in a world, and will in the future live in a world with some of the new beings that humanity is creating, where you have a trade-off between, on one hand, small populations of large beings (like humans, African elephants, and so on), and, on the other hand, large populations of small beings (including insects, but then even smaller beings like microbes, and so on and so forth). And so you have to ask, in a complex multispecies society like this, how do you make trade-offs between the needs of the small populations of large beings and these large populations of small beings? And again, I am not sure how to answer that question. But I want to start from the place of not simply presuming that the human population happens to be the one that always possesses the most value and takes priority.

SPENCER: Now let's bring AIs into this question. Where do you see them fitting in when it comes to ethics?

JEFF: I think of AIs right now as similar in some ways to insects and these other nonhumans who are very distinct from humans: they are very complex, very sophisticated, increasingly complex and sophisticated, but with many relevant similarities and many relevant differences. And the reality is that we are creating them, eventually in really large numbers, and then using them for our own purposes. And we will almost necessarily have a limited understanding of what, if anything, it might be like to be them. We might never know for sure whether there is anything at all it is like to be an AI system, or what it might be like, or whether particular types of experiences are good or bad for them. And so we are creating this huge population and interacting with them in ways that benefit us, and we are kind of in the dark about what the implications might be for them. So I do see that as relevantly similar to our predicament with respect to these very different types of nonhumans like insects.

SPENCER: As with insects, you can imagine there being extremely large numbers of these. If you take a company like OpenAI that's running AIs all the time, I don't know how many API requests they're handling, but it's probably a really large volume. But it also seems like AIs have another thing going on, which is that, if they have an experience at all, or eventually have an experience at all, they may experience a much longer amount of experiential time in the same amount of clock time. So you can imagine an AI that operates a million times faster than the human brain and, supposing there actually were something it's like to be it, could experience a million years in one year.

JEFF: Yeah, this is a question that actually comes up for other animals too, because with both AI systems and nonhuman animals, different individuals have different cognitive clock speeds. And that basically means that they can perform more or fewer operations in a given unit of time. And so you might think that how much welfare we can experience over the course of our lives depends on our lifespan, how many years we can live. But once you take into account the fact that some of us can perform more cognitive operations in a given unit of time, it raises a question, which is: do those individuals experience the passage of time differently? Does time seem to move slower for you, if you can do more things in a given second or minute or hour or day? And all I will say right now is we do not know. We are not sure. But the mere conceptual possibility of that shows that you should not take anything for granted in terms of how to measure or compare welfare across such different types of beings.
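On the hypothesis that subjective time scales with cognitive clock speed, a hypothesis Sebo explicitly flags as unsettled, the comparison Spencer raises can be written as

$$T_{\text{subjective}} = r \cdot T_{\text{clock}},$$

where $r$ is the ratio of a being's clock speed to ours. With $r = 10^6$, one year of clock time would correspond to $10^6$ subjective years, which is the scenario described above.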

SPENCER: This is something I thought about watching videos of cats interacting with snakes, because it's just absolutely incredible the extent to which the cat seems unconcerned about being like two feet from a deadly snake. And then the deadly snake tries to bite it, and somehow the cat doesn't get bitten magically. And you start to wonder, could it be that they're operating at just a faster clock speed than humans?

JEFF: That's right.

SPENCER: So before we dive into these topics in more detail, let's take a step back and talk about terminology. Because I feel like there's so much confusion around terminology. When we're talking about consciousness, how do you define it, and is that the crux of this conversation? Is it the kind of key thing we need to keep in mind? Or is there something else we have to keep in mind?

JEFF: There are many ways to define consciousness. And there are perhaps different forms of consciousness. For present purposes, I define consciousness — and many others define consciousness — as roughly the capacity for subjective awareness. Or, in other words, the capacity to have feelings, to have experiences or motivations that feel like something from a first person perspective. So if you can feel pleasure or pain — pleasure feels good, pain feels bad — if you can hear music and it feels like something to listen to a song, you can taste chocolate and it feels like something to have that sensory experience, that is what I mean by consciousness. So, is there a kind of subjective, first personal qualitative element to your cognition?

SPENCER: So you can imagine a sensor that can detect whether something smells of raspberries, maybe by detecting some chemicals, but that doesn't mean it's conscious. But if you actually experience the smell of raspberries, then you're conscious. Is that fair?

JEFF: Yes, that is fair. And that right there, that distinction that you made shows what makes this so hard to study, because you can always tell when a being can sense data in the environment. What you cannot tell from the outside, from the third person or second person perspective, is whether that sensation corresponds to any kind of subjective, qualitative, first personal experience that feels like something. And so, we have to do a lot more inferential and speculative work in order to fill that gap.

SPENCER: Is this definition of consciousness standard in philosophy, or is the term used in other ways? Because at least on YouTube podcasts, it seems like there are a lot of different terms.

JEFF: Yeah, it is standard in philosophy. There is a specific name for it, phenomenal consciousness, and that is the type of consciousness that feels like something. This is distinguished from access consciousness, which might not have that element.

SPENCER: And so what is access consciousness?

JEFF: Basically, a more functional, behavioral kind of consciousness. So if you have consciousness, and it feels like something to be you, then you have phenomenal consciousness. And if you have consciousness that can perform certain types of cognitive functions, then you have access consciousness. And the two obviously go hand in hand in us and many other animals. And then the question is, to what extent do they go hand in hand in other types of beings too?

SPENCER: So do you view phenomenal consciousness as being critical to whether ethics applies to a particular being or agent?

JEFF: This is a matter of live debate in philosophy, and I think in the world, too. So there are a lot of people who think consciousness is required. There are a lot of people who think consciousness is sufficient for moral standing and moral considerability. But there are a lot of people who disagree with both of those claims. So some people think more is required than consciousness, perhaps you need sentience or agency. And we can say what that means. And there are other people who think that less is required, that maybe all you need to merit moral consideration is being alive in some minimal sense, being capable of some form of survival and reproduction or self replication. So there are a lot of views about what it takes to be morally considerable. And when we are deciding how to treat these large populations (these very different types of beings), we have to deal not only with uncertainty about, for example, whether they possess consciousness, but also uncertainty or disagreement, at least, about whether possessing consciousness is what it takes to matter.

SPENCER: So what is sentience? How does it differ from phenomenal consciousness?

JEFF: This is, again, a term that people define differently. For present purposes, I and (again) many others define sentience as the capacity to consciously experience positive and negative states like, but not limited to, pleasure and pain. So you are conscious when it feels like something to be you. And then you are sentient, when additionally, you can have feelings that are good or bad, that feel good or bad to you, that you like or you dislike.

SPENCER: So there could be a conscious creature that is not sentient. It has experiences, but it doesn't differentiate between them; they're all equally valenced.

JEFF: At least conceptually. Whether such a creature does or could exist in the actual world is a separate question. Maybe being conscious is enough to guarantee at least some minimal form of positive or negative experience. But conceptually, yes, consciousness is possible with no valence, no positive or negative at all. And then sentience requires that positive or negative valence.

SPENCER: So if insects turn out to not be conscious, would you say that we're kind of off the hook morally, in your view?

JEFF: My view is that, given persistent disagreement and uncertainty about the values and the facts, we need to be at least a little bit cautious and humble when we make value judgments and factual judgments about these matters. So, for example, I might personally feel confident that consciousness, sentience, and agency are all required for moral significance. But then I need to reflect on how many smart people disagree with me about that. And I also need to reflect on how many smart people have been confidently wrong about these issues in the past, on those forms of hubris and arrogance, on all of the sources of bias and ignorance we have, on all the motivated reasoning we have, on how convenient it would be for us if these beings did not matter. When I reflect on all of that, it makes me want to allow for at least a realistic chance, a reasonable chance, that less is required than I currently think.

SPENCER: It sounds like your most likely scenario is that consciousness matters. But you have some uncertainty because other smart people disagree with you and so on. Is that right?

JEFF: Yeah, I want to allow for at least a 1% chance, a one in 100 chance, that I might be wrong, and that, like many other humans, I might be setting the bar too high. And in my view, if there is really a 1% chance that I might be setting the bar too high, I ought to take that into account somehow, because that would basically mean a 1% chance that a much wider range of beings matter than I think matter. And I think, in the spirit of caution and humility, I should maybe give at least some minimal kind of consideration to these beings who, for all I can currently tell, at least could matter.
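As a minimal sketch of that hedging logic, with the 1% figure treated as illustrative: if a being matters with moral weight $w$ under a more permissive view, and you assign that view credence $p$, then the being's expected moral weight is at least

$$E[w] = p \cdot w,$$

so even $p = 0.01$ yields a small but nonzero expected weight, which is why Sebo says such beings should receive some minimal consideration rather than none.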

SPENCER: So if you're really confident in a philosophical view — you've thought about the arguments, you're confident — but then you find out, or you already know, that lots of smart philosophers think you're wrong, how do you treat that? Do you just say, "Widen your uncertainty a little bit"? Or do you just add a little hedge of, "Well, maybe I'm wrong"?

JEFF: Yeah, this is an ongoing debate in epistemology, the branch of philosophy that studies knowledge and when you can know things and what it is to know things. And not surprisingly, people disagree about how to navigate peer disagreement. And views might range from splitting the difference all the way to digging in your heels and insisting that your view is right. I am not sure exactly where I fall there, but I do know that I do not accept the 'dig in your heels' and 'insist that your view is right' approach. I think that there should be, again, some consideration of the possibility that you are wrong and some hedging that you do when it comes to making decisions, in case you are wrong.

SPENCER: It seems like our response ideally has to fall between the two extremes. On the one extreme, you're like, "Well, I'm just one person out of n people. And so I'm gonna believe my beliefs like one over n." And you treat everyone else equally. On the other extreme, you just ignore all the smart people who just tell you you're wrong. It seems like both of those are non-functional positions. We have to be somewhere in between.

JEFF: Yeah. I am open, at least in theory, to the possibility that I really should give my own view no more weight than any other view. What is so special about my view? The fact that I happen to be the one experiencing and possessing it? Why should that matter from any kind of objective or impartial perspective? So I am open to the idea that, in theory, my view should count as one, and no more than one, and should receive equal weight to everybody else's. Now, maybe you can then find reasons to regard yourself as a little bit more of an expert about some topics than other people. But of course, that form of reasoning is seductive and can be used to rationalize a lot of hubris too. So all of that is a convoluted way of saying, "I'm not sure where I fall, but I do not want to be closed to the possibility that I should significantly discount my own views when making decisions about how to treat vulnerable populations."

SPENCER: Seems like a good motivator to get really good at thinking, because then you can justify believing your own beliefs instead of other people's.

JEFF: Yeah, that's true. Yeah, a little bit of rationality goes a long way. As Kant reminded us, it can be used for good and for rationalizing evil.

SPENCER: So let's go back to the bugs question. What probability do you currently assign to bugs being conscious? Maybe it's gonna depend on the type of bug, but can you unpack that a little bit?

JEFF: It certainly does depend on the type of bug. Broadly speaking, just to zoom out a little bit: we have the vertebrates, animals with a spinal column, and then we have the invertebrates, animals without a spinal column. Invertebrates are much larger in number and wider in range than vertebrates. I think we are now at the point where we can say all vertebrates — and that includes mammals, birds, reptiles, amphibians, and fishes — are at the very least more likely than not to be minimally conscious, sentient, and agential. The real action is with the invertebrates. And here I would say this much as a first approximation, and then we could talk a little bit more if you want. There is at least a reasonable, realistic, non-trivial, non-negligible chance that at least members of the following categories of invertebrates have these qualities: the cephalopod molluscs (like octopuses and cuttlefishes), the decapod crustaceans (like lobsters and crabs), and the insects. So I do think now, even though we still know very little about very many different species of insect, we have enough behavioral and anatomical evidence, and enough progress in the philosophy and science of consciousness, to attribute at least a non-trivial chance, given the evidence available, to the idea that it feels like something to be a bee or an ant or other insects.

SPENCER: That might surprise some people, because they might think, "Well, hasn't there been almost no progress in the science and understanding of consciousness?" So what are some of the lines of evidence that you're drawing on that are helping you decide what is and isn't conscious?

JEFF: This is very much in development. But the general idea is that you would take what Jonathan Birch and other philosophers call a theory-light approach. Because in the same way that there are different theories of moral standing — consciousness versus sentience versus agency — there are also different theories of what it takes to be conscious, what it takes to be sentient, and what it takes to be agential. And those range from very demanding and restrictive theories to very undemanding and permissive theories. So some people, for example, think that in order to be conscious, in order for it to feel like something to be you, you need a very high degree of cognitive complexity, a very high degree of cognitive centralization, and a carbon-based substrate; they think you need all of that. And then on the other end of the spectrum, you have people who think consciousness is a fundamental property of all matter, or that consciousness requires only very limited information processing of a kind that is widespread in nature. So first of all, what you need to do is acknowledge disagreement and uncertainty about that. And then what you need to do is find an approach that can be reasonably ecumenical, reasonably pluralistic. That means looking at plausible, fairly widely accepted theories, seeing where they overlap, finding indicators of consciousness, behavioral or anatomical or evolutionary evidence that a fairly wide range of theories would agree on, and then looking for those indicators in various populations.

SPENCER: So what are some of the strongest indicators that we can actually empirically validate?

JEFF: You can look for certain brain structures, of course. And then when it comes to behaviors, you can ask questions like: Do they make trade-offs between experiences and other priorities that they might have? Do they engage in certain types of grooming or self-protective behavior? Do they nurse their wounds? Do they nurse each other's wounds? Again, none of this is dispositive. None of it is a guarantee of anything...

SPENCER: Because you could have an automaton that just cleaned its wounds or something.

JEFF: Exactly, right. And we know that even about ourselves. We all have the capacity for nociception, and nociception is the ability to unconsciously detect and respond to harmful, aversive stimuli. So if I touch a hot stove, what I feel, first-personally, is pain, and I rip my hand away to protect myself. But what actually happens is the signal gets sent to my spinal column, which automatically sends a signal to rip my hand away in a totally unconscious way, and meanwhile it keeps traveling up to my brain, and then it causes the pain. So I experience the pain as coming first and myself as choosing to rip my hand away. But what actually happens is that I am unconsciously already ripping my hand away while the pain experience is forming. And the fact that even we can have that and do that raises the question: how much of the behavior that we observe in other animals is explained purely in that way, without any corresponding pain experience? So there is always uncertainty. But what we are looking for is not certainty, but opportunities to increase or decrease our levels of confidence that particular individuals are conscious or sentient or agential. And this does give us that.

SPENCER: What about research where they give animals painkillers, and they show that if they have wounds that the painkillers change their behavior?

JEFF: Those are other indicators, absolutely. And we do find, even in some insect populations, that they respond to painkillers, antidepressants, and analgesics in some of the same ways that humans do. And again, this is not proof of anything. But it might be grounds for at least attributing what I said before was a realistic, reasonable, non-trivial, non-negligible chance. And that could even be 5% or 1%. A 5% or 1% chance of catastrophic harm, at the very least, merits a tiny amount of consideration. And that is all we're talking about right now.

SPENCER: Okay, so we've got structures in the brain, we've got signs of certain behaviors, we've got things like giving them drugs and seeing how they respond. Any other lines of evidence that you're looking at when you're thinking about whether something is conscious?

JEFF: You can also look at evolutionary evidence with certain types of beings. So, for example, you can ask yourself: at what point in human evolution did consciousness and sentience likely emerge? And then you can look at comparative data, similarities and dissimilarities with other beings, to figure out at what point in history they branched off from us on the evolutionary tree. And then you might be able to make an inference, based on that, about whether consciousness or sentience started to emerge before or after that split. Now, if we think that consciousness emerged after the split, that is not proof that they lack consciousness, because of course it could have evolved independently in their lineage. But if consciousness might have emerged before the split, then that is evidence that they have consciousness.

SPENCER: It seems like the tricky thing here is that we don't understand, and there is no accepted view on, what the evolutionary purpose of consciousness is. Do you agree with that, that it seems like a mystery?

JEFF: I think that there are theories, but I personally do experience it as a mystery. Again, you can focus on the functional behavioral aspects of consciousness, the way it allows you to unify and coordinate activity in your different brain regions and connect your perceptions to your beliefs and desires and intentions and actions. But all of that can be spelled out in purely functional behavioral terms without this corresponding felt experience. And so, the evolutionary benefit of that corresponding felt experience is what, at least in my mind, remains a mystery.

SPENCER: So if it turns out bugs are conscious, should we think that the world is way worse than we thought it was before?

JEFF: Well, the world is certainly way more ethically complicated than it was before. It has, again, a much vaster number and wider range of stakeholders, and probably lots of intractable conflicts and dilemmas. So the world is much more complicated than it was, if insects matter, or if they even might matter given the evidence available to us. Is it worse? I think that depends on further questions like: How much happiness do they experience? How much suffering do they experience? Do they experience more happiness than suffering? How can we tell? And are the impacts of human activity net positive or net negative for them? There are obviously pesticides and all kinds of direct interactions, but then climate change and all kinds of indirect interactions. You would have to wrap your head around all of that in order to form a sense of whether their potential significance increases or decreases the value of the world in expectation.

SPENCER: What do you do when you have a bug in your room?

JEFF: I make every effort to liberate them from that space instead of killing them. I do that partly because I genuinely think they might matter. And again, I think if someone might matter, if there is a non-trivial chance that they matter, that warrants at least a little bit of consideration, enough for me to bother to go get a cup, try to get them in the cup, and then get them out on the lawn. But I also do it for these kinds of indirect reasons, because I think that those sorts of behaviors, those sorts of interactions with particular individuals, do shape our perceptions, our beliefs, our values, our habits, our behaviors more generally. So if I make a habit of describing an individual insect as "it" and squashing them by default, simply because I can, or simply because that is convenient, I think that shapes my attitudes more generally and makes me dismissive of them more generally, even in higher-stakes, higher-scale situations. So I do make every reasonable effort — reasonable effort — to rescue them, partly for their sake, but then partly because of the way that conditions me, and anyone who might be observing me, to maybe have a tiny bit more respect and compassion for insects in a broader sense.

SPENCER: Some listeners will have heard of the rebugnant conclusion. What's the rebugnant conclusion?

JEFF: The repugnant conclusion, roughly speaking, is the observation that even if a single very happy life might, in some sense, be better than a single barely happy life, a sufficiently large number of barely happy lives (people who just sit in a room and eat potatoes and listen to music all day, barely happy, their lives barely worth living) might collectively, in the aggregate, be better than that single very happy life. And if you take that reasoning to its logical conclusion and imagine a future that involves a huge number, trillions or quadrillions or quintillions, of barely happy individuals sitting in rooms and eating potatoes, that could actually be a better future than, say, a billion people falling in love and experiencing great art. The philosopher Derek Parfit called that repugnant: the idea that a large world of barely happy individuals could be better, he called that repugnant. And the rebugnant conclusion is the idea that, hey, you might believe that a single African elephant is capable of more happiness, a richer life, richer forms of flourishing than a single ant. You might think that, you also might not, but suppose you do. Still, there might be some number of ants who, despite having on average only a barely happy life in comparison with the African elephant, collectively, in the aggregate, experience more happiness, more flourishing. And so that would be the rebugnant conclusion: that a future containing trillions or quadrillions or quintillions of happy ants could in some way be better than a future containing a billion happy African elephants.

SPENCER: So let's switch to talking about AI. AI seems unique in that we're building more and more complex AIs every year. As far as we can tell, if insects are evolving, it's very slow. So we might end up with very different AIs in 10 or 20 years than we have now. What do you see as some of the unique ethical challenges there?

JEFF: So with AI systems, we are developing them at a rapid pace. Think about how much has changed just in the past six months, just in the past year. People now know what chatbots and large language models are. A lot of people attribute consciousness and sentience and agency to chatbots and large language models. And it might be that in a year, two years, five years, these beings are even more sophisticated and even more mysterious to us. So what is happening right now is that we are, in real time, creating an entirely new kind of being with minds that are similar to ours in some ways (structurally and functionally and behaviorally) and different from ours in other ways (structurally, functionally, and behaviorally). And we are following the playbook that we have used with nonhuman animals so far. We are developing them, creating them so that they can serve our own purposes. Then we are scaling them up industrially. And then we are deploying and destroying them, again, for our own purposes. So if we follow the same playbook with AI systems that we followed for nonhuman animals, we might see the same issues emerging. With nonhuman animals, what has happened? We exploit and exterminate them by the hundreds of billions per year. That not only harms and kills them unnecessarily, but it contributes to these global health and environmental issues, like pandemics and climate change, which then in turn harm human and nonhuman animals all over again, including by increasing human violence and neglect against nonhuman animals. It is a horrible, vicious cycle. And that might happen, if we are not careful, all over again with AI systems. We create them to fit our purposes. We industrially scale up our use of them, in ways that, if they are conscious, do not consider their welfare and might be hurting them. And that could then contribute to AI ethics and safety and alignment issues: an increased risk of disinformation, economic destabilization, political destabilization, nuclear war, all kinds of issues that could emerge and could then come back and harm human and nonhuman animals alike. So I think there are a lot of parallels and lessons we can learn from our past and present awful treatment of nonhuman animals. And hopefully, we can treat that as a cautionary tale and learn our lesson before we make this happen this time around, instead of after.

SPENCER: With animals and insects, we can at least look at things like evolutionary history, behavioral experiments, et cetera. It seems like that's hard to do with AIs. For example, think about behavioral experiments: we've programmed them or trained them to behave as though they're agents, right? So their acting agent-like seems to be more about their training than about what's really going on underneath. You can't give them painkillers and see what happens. They have no evolutionary history. On the back end, they just look like a bunch of linear algebra. So how could we get a sense of whether they could possibly be conscious?

JEFF: Yeah, the farther and farther they get away from us — behaviorally, anatomically, evolutionarily — the less we have to latch on to in terms of finding similarities or continuities that could constitute indicators or evidence of consciousness, sentience, and agency. This is already a problem for insects and other invertebrates, because they branched off from us a lot longer ago, and so we have fewer continuities to latch onto for them than we have for vertebrates. That is unfair to them, because we then regard that as license to exploit them more than we exploit vertebrates. And with AI systems, that might happen even more. And the bad news is, as you say, we might not be able to use verbal self-reports as strong evidence because, again, the verbal self-reports come from our having programmed them to predict the next word in the sentence rather than to reveal their true inner nature. So we might have to significantly discount their verbal self-reports, to say nothing of the fact that many of them lack the ability to offer verbal self-reports in the first place, like image generators or video generators. What kind of self-reports are they offering? Fortunately, you can look underneath the verbal self-reports for more basic behavioral or architectural features that could be, in some broad sense, relevantly similar to the structures of our brains, and could still constitute evidence of some form of consciousness or sentience or agency.

SPENCER: It seems like, if we're not careful, we can end up accidentally crossing a line, where we go from unconscious AIs to conscious AIs without realizing it. And then suddenly, we're accidentally torturing millions or billions or trillions of minds. How worried are you about that?

JEFF: I'm extremely worried about that. We, again, already did that with animals. And they were conscious all along the way. When we started industrially harming and killing nonhuman animals, they were conscious at that point, and they are conscious at this point. The only difference here is the AI systems might go from being non-conscious to at some point being conscious. But if our track record shows that we are capable of scaling up exploitation and extermination of beings who are conscious all along the way, and a lot more continuous with us, then of course, we have the capacity to do that with these other more different, currently perhaps nonconscious kinds of beings too. I think we should look at that track record and be concerned about our own disposition to behave in that kind of way.

SPENCER: It seems like an additional challenge with AI is, whereas with an insect, you can kind of reasonably say, "Well, okay, if I cut off its legs, it's harming it." What would it even mean to harm an AI?

JEFF: That is the million dollar question. There is so much that we do not know right now, which on one hand is really scary, because this is moving so fast, and we lack the basic knowledge, the basic capacity, and the basic political will that we need to address it. The perhaps exciting news for people who are starting out in their careers is that this means we have a lot of opportunity to conduct and support that research, to engage in that capacity building, and to engage in that sort of advocacy and lobbying and political will building. And I do think that there is some tractability here. I think that there are research programs that can teach us about these issues. We are just racing against the clock.

SPENCER: Some people might say about this conversation: this is all very interesting, but we can't do anything about these topics. What can we possibly do about insects or about AI sentience? Do you have any concrete ideas of where we can go from here? If people take these things seriously, what can actually be done?

JEFF: I think that is a really, really difficult question. And again, this is true not only for insects and AI systems, but for a lot of animals. I know some people in the room here do a lot of work on wild animal welfare. Wild animals, of course, include vertebrates and invertebrates, all kinds of animals. But they exist in really complicated populations and species and ecosystems, and if you benefit one, that ends up harming others, and so on and so forth. And so you can recognize that wild animal welfare is really important, but also recognize that wild animal welfare is really difficult. And the same is true of insect welfare and of potential AI welfare. So I think the first step is to recognize that these issues are important and difficult at the same time: not to latch on to the importance and use that as a reason to dismiss or downplay the difficulty, but also not to latch on to the difficulty and use that as a reason to dismiss or downplay the importance. Recognize that they are important and difficult, and start from there. And in my view, what that means is to go slowly and gradually and take a kind of two-pronged approach, where the first prong is to search for low-hanging fruit, seemingly co-beneficial policies that you can try out. What are some things that we could do in the short term that at least have the potential to be good for humans, and these nonhumans, and global health and the environment at the same time? And if we can do that and then leverage it to gradually increase our knowledge, our capacity, our infrastructure, our institutions, and our representation for these nonhumans, then we might find that we are gradually able to do more over time than we can right now. So we work within our limitations, trying some things that can help some nonhumans a little bit, and that we can then use to learn that information, to build that infrastructure, to build those institutions. And then we push our limits and do more over time.

SPENCER: Final question about this before we go to our last topic. Some people listening to this might just think that this is absolutely ridiculous. You can imagine the likes of TikTok that's like, "No, you know, liberals are saying we should care about insects more than our families." So I'm just wondering, what would you say to someone who's just like, "This is ridiculous. This is pushed beyond the level of outlandishness I'm willing to consider."

JEFF: I would answer that in all kinds of different ways. I might say one or two things, and then we could talk about it more or not. One answer is, you can recognize that nonhumans, that invertebrates, that AI systems merit consideration while still prioritizing yourself and your family, your nation and your generation, and so on, for all kinds of reasons. Like, I can recognize that your family matters, and I can still prioritize feeding my family, right? Because they are my family, and I love them, and I care for them, and we have this history and these bonds, and they live right here in my house with me, and I know how to take care of them. So there are all kinds of reasons why I can recognize that your family matters, but still prioritize my family for these practical and relational reasons. But still, the fact that your family matters, that matters. I should not kill your family unnecessarily to feed their flesh to my family when we have rice and beans instead, right? And if my family has an itch, and your family is suffering migraine headaches, and I can only help one, maybe at that point I should prioritize helping your family. So there are all kinds of reasons why extending consideration to insects does not automatically mean that, all things considered, we should prioritize them. But it does mean we should consider them and put some limits on our behavior. And then maybe in some circumstances, we should prioritize them a little bit. And then one final thing, which is that, again, we should not fully trust our intuitions here. Our intuitions are subject to all kinds of bias, all kinds of ignorance, all kinds of self-interest, all kinds of motivated reasoning. We, of course, like the status quo from which we benefit. We, of course, are used to it, familiar with it. We, of course, are more biased towards individuals who look like us and against individuals who look really different. So we like large bodies, symmetrical features, large eyes, four limbs, and furry skin. We like all of that, and we dislike alternatives to that. We also have a bias against small individuals. And we have scope insensitivity, the inability to wrap our minds around large numbers. All of that conspires to lead us to discount what might be a very significant amount of welfare and moral significance.

SPENCER: The final topic I want to talk to you about is your own ethical views. And I know that you take disagreements seriously, but I want to hear what you think. And I believe that you have an ethical system that's different than anyone I know, actually. So I'd love to hear just a little bit about how you decide what you think is good. As I understand it, you consider yourself a Humean constructivist?

JEFF: Yeah. That is a metaethical theory, and then there are normative ethical theories. So in philosophy, metaethics is the study of what the hell is going on with ethics. If we disagree about whether you should eat meat or not, and then we ask what the status of that disagreement is, are we actually disagreeing about something where one of us is right and the other is wrong? Or are we just shouting opinions at each other? That is a metaethical question. And so Humean constructivism is a particular metaethical theory, a particular theory about what is going on when we engage in moral discourse and practice, when we morally disagree with each other. And broadly speaking — and I will try to avoid spending like 20 minutes doing the whole thing here — you can divide metaethical views into a realist camp and an anti-realist camp. The realist camp says that there are true, objective, mind-independent moral facts in the world. When I say eating meat is wrong, that is true or false independently of what any of us happens to think and feel, in the same way that E = mc² is true or false independently of how any of us thinks or feels. And then anti-realism says, no, no, no, there are no moral facts existing out there in the cosmos that we try to learn about, in the same way that there are scientific facts. Morality is an expression of our most deeply held beliefs and values. And moral philosophy is a matter of unpacking and articulating what it is that we believe and value, the values that we bring into existence through the act of valuing something. So the first part of the answer to your question is just that I am on the anti-realist side of that divide. I do not think that there exist moral facts out there in the world. Even if they did exist, I have no idea how we could know about them. And even if we could know about them, I have no idea why we would care about them. For me, morality really is about using evidence and reason to understand what it is that I most deeply believe and value, so that I can make my beliefs and values coherent and informed, and consistent with my practices in the world.

SPENCER: That was well said. So from your point of view, where does ethics come from, and why be ethical?

JEFF: As an anti-realist, and again a Humean constructivist — this is a particular kind of anti-realism, and we can talk about it or not — I think my moral responsibilities come from my own beliefs and values. I was thrown into the world as a creature who happens to aspire to act for reasons. And what it is to act for reasons is to act according to considerations that I would take to apply to anyone in my situation. So I am trying to go about my life acting according to reasons, considerations that would apply to anyone in my situation. And that naturally guides me towards a certain kind of search for information, a certain kind of search for coherence, a certain kind of search for objectivity and impartiality. And from there, I get my own felt commitment to something like morality, to something like respect and compassion for others who are different from me.

SPENCER: How would you navigate a situation where there's different values at stake, where you've got, let's say, you could be honest, but it would hurt someone's feelings, or you could lie, and it would make the person feel better? As a Humean constructivist, or just as you personally, how do you think about that moral choice?

JEFF: There are a lot of things that feel intrinsically valuable. When you think about knowledge, when you think about family, when you think about love, when you think about beauty, these all seem to have a kind of intrinsic value. And so we naturally experience the world as containing all kinds of different things that have their own intrinsic value, values that may even seem incomparable, incommensurable, in some kind of way. But we also experience the world as containing humans and cats and dogs and cars and trucks, and so on and so forth. And what we've learned through science is that there is some sense in which all of these really different kinds of things are made out of the same kind of stuff, reduce to the same kind of stuff. We are made out of organs, which are made out of cells, which are made out of molecules, which are, in some sense, made out of particles or waves. And so the more we look in science, the more we find that what seems like a plurality of totally different kinds of things ends up being different expressions of the same fundamental things and the same fundamental laws. And my view is that even if our values, even our moral values, are things that we create instead of discover in the world, the same kind of process is and can be and should be underway, where we confront what seem like all of these different values, but then, through moral inquiry, through moral investigation, we realize that the reason I value knowledge, the reason I value family, the reason I value love, the reason I value beauty, all reduces to some more fundamental, basic, shared set of properties. They are all conducive to giving us positive experiences that feel good and that we like, as an example. And so for me, I am holding out hope, or an expectation, that ethics will, in the fullness of time, reveal itself to be more unified at its foundation, in the same way that we are discovering the sciences to be.

SPENCER: I wanted to get your take on my personal life philosophy. I sent you an article about it before. My philosophy, called Valueism, says very simply: try to figure out your intrinsic values, that is, the things you value for their own sake, not as a means to other ends. And then, once you figure out your intrinsic values, try to effectively increase them in the world, so take effective actions to create more of them. It sounds like — and I view this as a life philosophy, not as a moral theory — it has overlap with what you think, but also, I think, some differences. So I just wanted to get your quick take. To what extent do you disagree with it? And what's your critique of it?

JEFF: I totally agree with it, especially as a life philosophy. I might express some cautious skepticism about it, if it was also being defended as a kind of foundational theory of morality. But as a life philosophy, I think that more or less is a really great aspiration that captures what I take to be significant about my lived experience, and how I should be thinking and making decisions. And again, we can think about science as an example. In some theoretical abstract sense, I understand that cars and trucks and tables and chairs and cats and dogs are actually all complicated collections of particles or waves. But when I am walking across the street, walking into the office, am I thinking about how to avoid complex collections of particles and waves? Or am I thinking about how to avoid like cars and trucks, and how to say hi to cats and dogs, right? The fact that I have theoretical knowledge that in some deep sense, this all reduces to this more fundamental basic set of things, does not really infiltrate my ordinary experience of the world, or my ordinary ways of making predictions or decisions. And I think the same can be true of ethics. We can realize in the seminar room, through deep moral investigation, that the reason all these things matter is that they are conducive to pleasure and happiness in some broad sense. But that does not mean that in everyday life, I should just see everyone as an instrument for pleasure and happiness, or everything as an instrument for pleasure and happiness. I should still value them for their own sake, see them, experience them as having intrinsic value, and maybe even, again, incomparable value. So I think, in practice, something like what you describe as your life philosophy is really good as a life philosophy. It would just be a mistake to confuse that for the sort of moral equivalent of particle physics.

SPENCER: So final question for you. It sounds like, if we disagree, we might disagree on this expectation of an underlying unification: that if we think hard enough, we'll get to some deeper foundation for ethics, the equivalent of deep underlying physics, whereas maybe I don't expect that.

JEFF: That's right, yeah.

SPENCER: Awesome. Jeff, thanks so much for coming on.

JEFF: Hey, thank you so much.

[Audience Q&A; Jeff restates each question.]

JEFF: So the first question was: Well, I am a reluctant vegetarian, and I am persuaded that I should give some consideration to maybe even insects and AI systems. But what are the implications for human embryos or fetuses? And then what are the implications for the ethics of abortion? I feel a little bit uncomfortable if this form of reasoning takes me down a path where now I might have to be anti-abortion, which is not how I thought I would be when I woke up today. Okay.

I guess I will say a few things about that. One is that I do think, for better or for worse, that this reasoning does imply, rightly, that we should extend at least some minimal consideration not only to insects and AI systems, but also to sufficiently developed human fetuses. So I do not think that we should regard them purely as objects not meriting any consideration at all. However, as I noted with respect to insects and AI systems and nonhuman animals of various other kinds, the mere fact that a particular being merits at least minimal consideration by itself tells you very little, if anything at all, about how they should be treated and how dilemmas should be resolved. For example, one of the most famous applied ethics papers of the 20th century is Judith Jarvis Thomson's "A Defense of Abortion". And the argument in that paper is: everyone thinks the ethics of abortion depends on whether or not you think a fetus has a right to life. I, Judith Jarvis Thomson, am going to grant for the sake of argument that a fetus does have a right to life, and argue that abortion is morally permissible anyway. And then she proceeds to develop all kinds of thought experiments showing that even if someone has a right to life, that does not necessarily mean I have to make all kinds of sacrifices to keep them alive. And that is just an illustration of how morality is really complicated. You can have a right to life, but I can still have a right to kill you in self-defense, in other-defense, as an unavoidable side effect of a sufficiently important activity, et cetera. Now, does that mean it will conveniently work out that there is no moral problem here after all? Maybe, maybe not. But for me, I always search for complicated middle grounds, even when debates are really polarized and simplified. And what I think about the abortion debate, again for better or worse, is that there is moral considerability here, but this is also a complicated situation involving real trade-offs. And there are good arguments for the permissibility of abortion, in some cases, in spite of the considerability of the fetus.

SPENCER: I'll just add, and not that we should take people's opinions as a source of moral truth, but if you look at polls of Americans, it's pretty fascinating how often they take a middle ground on abortion, where they want to prohibit abortions that are really, really late but are pretty okay with abortions that are really early. So a lot of people actually take a kind of nuanced middle ground here.

JEFF: Well, the question was: suppose I got to be dictator for a day, what kinds of global policies would I implement?

Well, at a meta level, first of all, I really do want to emphasize that the core message I have tried to deliver in this conversation is the importance of caution and humility and pluralism. So instead of going with whatever feels right and true to me, I really do want to search for those things that can be endorsed from a wide range of reasonable perspectives, even if I happen to disagree with many of them, and use principles of risk and uncertainty and cultivate virtues of caution and humility. So at a meta level, I would try to operate within that mindset and set up structures and institutions that really support that kind of decision-making. Of course, democracy, capitalism in various ways, these ways of aggregating decisions and interests, can partially play that role. Now, since you asked what concrete policies I would implement: well, slow down AI research, factory farming, deforestation, and the wildlife trade, as a start.

SPENCER: In case you haven't noticed, we have a philosopher in the audience.

JEFF: It's great. So Spencer and I spent most of our time talking about a particular topic within moral philosophy, which is the topic of moral standing: What does it take to morally matter, and who has what it takes? Which beings morally matter? But then, as you note, there is the other part of the conversation: What do we owe to beings who matter? What are our duties and responsibilities to these beings? Should we understand that in terms of welfare, promoting their welfare by increasing their happiness and reducing their suffering? Should we understand it in terms of rights, like respecting their agency and autonomy and not interfering in their own choices in life? Should we understand it in terms of virtues, like cultivating a virtuous, caring, respectful, compassionate set of attitudes and dispositions towards them? Or should we understand it in terms of relationships, like cultivating caring rather than oppressive or exploitative relationships with them? This is the classic debate in moral theory. Those are four options, but there are more. Is it welfare? Is it rights? Is it virtues and character traits? Is it something more relational and contextual? My view, in brief, is that the answer is kind of all of the above. The reason I think that is, first of all, that there are a lot of smart people who think each of those is what matters and what we should be focusing on, and I want to allow for the possibility that they are right. But even if I am right, and morality is ultimately about doing the most good possible, maximizing happiness and minimizing suffering in the world, then, as consequentialists and utilitarians have reminded us for the past 100 to 200 years, if I want to maximize happiness and minimize suffering in the world, I should not go around thinking in terms of maximizing happiness and minimizing suffering. I should think about it in a more complicated and pluralistic way. Again, this is like what we were saying before. The mere fact that it all reduces to particles or waves does not mean that I should go around thinking of the world in terms of particles or waves. And similarly, the mere fact that morality reduces to happiness and suffering, if it does, does not mean I should go about interacting with others strictly in terms of happiness and suffering. I think if we want to do the most good possible, what we ought to do is create and maintain robust systems of rights that can serve as checks against bias (self-serving applications of harm-benefit analysis), cultivate virtuous character traits (respect and compassion for vulnerable others that naturally guide us to treat others well, even when we might not be reasoning about what to do), and build caring relationships and just structures and institutions that serve as external guides, pulling respectful and compassionate behaviors out of us even when we might not be thinking rationally. So in theory, yeah, I think happiness and suffering. In practice, that plus rights, plus character, plus relationality, plus these other structural, systemic, institutional things. I think it all matters.

So the question is: before, when we were talking about moral standing, we tended, or I tended, to focus on three potential bases of moral significance, those being consciousness, sentience, and agency. We then unpacked consciousness and sentience, but not agency. And so the question is: Why agency? What do we mean when we say agency, and why might that be a basis for moral significance? I will start by being honest that I have a strong intuition that agency, by itself, is not sufficient for moral considerability. But again, there are a lot of smart people who think that it is, and I want to allow for some possibility that they may be right. Now, generally, what people mean by agency in this context is the basic ability to set and pursue goals in a self-directed manner. So it does not necessarily mean that you are rationally reflecting about, you know, 40-year life plans or anything like that. It means that you have the basic ability to set and pursue goals in a self-directed manner. This is a form of decision-making that not only humans, but other primates and mammals and vertebrates and invertebrates, are capable of. Now, why might people think that agency, defined in that way, is sufficient for moral considerability, in other words, that all agents merit moral consideration? Well, some people think that welfare and moral significance are less a matter of felt experiences, like pleasures and pains, and more a matter of the ability to set and pursue and achieve your goals and satisfy your desires and preferences. And those all come along with agency. So whether you have or lack the ability to feel things like pleasure and pain, you might still be able to aspire to things. And then the idea would be that your life is better for you when you can achieve your goals and reach those aspirations, and your life is worse for you when you fall short of your goals and aspirations. Desire-satisfaction versus desire-frustration. And if you think that that could matter, even in the absence of it feeling like anything, then that might be why you take agency to matter in and of itself.

There are two parts to this question. The first was a throwaway, but I want to address it anyway: What is free will, and do we have it? The second was: there are these theories of consciousness that I briefly alluded to that are quite permissive, like the theory that consciousness is a fundamental property of all matter. What do I think about that?

Okay, so I'll briefly address both, even though each could easily take up an entire hour and a half. So, what is free will, and do we have it? That depends, of course, on how you define it. Briefly, if you define free will as the ability to voluntarily act in a manner that is contrary to the causal forces that might be determining your behavior, then no, we do not have free will. We are products of the causal order. All of our actions are, in various ways, predetermined by forces outside of us. But if you instead define free will as the ability to voluntarily act as a result of your beliefs and desires and intentions and these other internal states, as opposed to having your body whipped around by the wind or by mind control technology, then yes, we do have free will in that sense. And I think that kind of free will matters and merits a lot of the types of moral conversations we have about responsibility and so on. We could say more about that later, if you want. So I think we do have free will in a weak sense that is morally significant. We do not have it in a strong sense. But on reflection, that strong sense, acting contrary to the laws of nature or past causal forces, is kind of incoherent anyway. Okay. Now, what do I make of panpsychism, the view that consciousness is a fundamental property of all matter? Briefly, first of all, sociologically, if you go look at surveys of philosophers and other experts about the mind and consciousness and so on, you see a surprising amount of credence given to this kind of permissive theory of consciousness. About 10% of philosophers either accept or are open to these types of theories. That right there is a shortcut to at least minimal moral consideration for a vast number and wide range of beings. I mean, forget about whether they might qualify according to more demanding theories of consciousness; if you take there to be a one or five or ten percent chance that panpsychism is true and consciousness is a fundamental property of all matter, well, that right there by itself might give you reason to at least be a little kind to a blade of grass, if that is costless to you. Now, what do I think about it? I think that consciousness is weird and mysterious enough that it would be hubris at this stage to totally rule out the possibility that such a view is correct. So do I think that view is likely correct? No. But do I think it has a non-trivial chance of being correct given the information available to me? Yeah.

So the question is a great one, and maybe a good one to end on, especially given the time: A lot of this discussion is premised on the idea that doing good is good, that the aspiration to do good is good. But of course, there are all kinds of examples where the aspiration to do good has caused great harm, right? Not only things like the invention of plastic, but also colonialism, imperialism, all of these features of our histories and of our present that might result from good intentions but also from bias and ignorance and an inability to explain and predict and control complex, interrelated phenomena. So we set out with good intentions, but then we make a mess of things and end up doing more harm than good. And so how can we tell if what we are currently trying to do is going to break that trend or be another instance of that trend?

And I completely agree that this is a real concern, and I'll say a couple of things in response. Thing one is: it is really difficult to tell, introspectively, whether your efforts to do good are going to do more good than harm or more harm than good. And our track record gives us at least some pause. Thing two is: this is why, before, when Spencer asked what my theory of change is and how I recommend we start addressing these issues, I said the first step is to recognize their importance and their difficulty at the same time. Because I think it can be really tempting, when you see an issue as important, when you see a lot of suffering and a lot of death, to think the problem is easier to solve than it is, because you want to be able to solve it. But you have to really wrestle with the fact that we are limited in our knowledge and our capacity and our motivation, and we do have this track record of doing more harm than good. So accept the importance and the difficulty. But then the third and final thing that I will say is: the fact that we can often do more harm than good is not a reason not to try to do better. Because we can do harm no matter what. The status quo is full of harm. If we try to do good, we might risk causing new harms. But if we do not try to do good, then that just guarantees that our ongoing harms will continue and will even get amplified. So there is no risk-free or harm-free course of action available to us, and no way to step outside of our own perspectives when assessing our courses of action. All we can do is acknowledge that everything involves risk and harm, that our perspectives are limited but are all that we have, that these issues are important and difficult, and then just do the best we can, with that kind of sense of urgency, but also with that kind of sense of patience and humility.

SPENCER: Thank you all for coming. I'm so glad you could make it. And Jeff, thanks so much. You were wonderful.

JEFF: Thank you so much, Spencer.
