CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 175: Is evolutionary psychology just a bunch of "just so" stories? (with Geoffrey Miller)


September 14, 2023

Why do even people who accept evolutionary explanations for most biological phenomena often push back against evolutionary explanations for human psychology? To what extent should humans adjust their behavior in light of evopsych findings? How do evopsych researchers avoid formulating "just so" stories to explain specific behaviors? What can we infer about human behavior from the behaviors of chimps, bonobos, gorillas, or orangutans? What is the evopsych view of incest (which most people seem to find disgusting but which is also one of the most popular porn categories)? Are emotions primarily shaped by evolution or by culture? How can evopsych findings be applied to everyday things like dating? A safely-aligned AI system should presumably support the majority of human values; so how should AI alignment researchers think about religious values, which are generally held by the majority of humans but which differ radically in their specifics from group to group? What are some other rarely-considered AI alignment blind spots?

Geoffrey Miller is an evolutionary psychologist best known for his books The Mating Mind (2001), Mating Intelligence (2008), Spent (2009), and Mate (2015). He also has over 110 academic publications addressing sexual selection, mate choice, signaling theory, fitness indicators, consumer behavior, marketing, intelligence, creativity, language, art, music, humor, emotions, personality, psychopathology, and behavior genetics. He holds a B.A. in biology and psychology from Columbia University and a Ph.D. in cognitive psychology from Stanford University, and he is a tenured associate professor at University of New Mexico. Follow him on Twitter at @primalpoly, or find out more about him on his website, primalpoly.com.

SPENCER: Geoffrey, welcome.

GEOFFREY: It's great to be here, Spencer.

SPENCER: It's long been debated how much of our behavior is determined by things like culture and learning, and how much of it is determined by evolution, our genetic programming that has developed over hundreds of thousands or even millions of years. And I think you take a really interesting perspective, which is that we underestimate how much of our behavior — even our modern behavior today — comes from evolution. So let's start there. Why don't you tell us a little bit about what evolutionary psychology is, and why do you think it's an important perspective?

GEOFFREY: Evolutionary psychology is this horrible pseudoscience. Just kidding. [chuckles] This wonderful science I've been involved with for, my god, 35 years or so. The basic notion is we want to understand human nature. How can we do that? Well, you try to understand the challenges that our ancestors faced in terms of surviving and reproducing in prehistory. So we try to leverage a lot of insights from studies of other primates, anthropological studies of hunter-gatherers, behavior genetic studies on modern humans, and many, many other sciences. We try to weave them together to understand the origins and functions of human motivations and emotions and beliefs and desires and all that good stuff. So it's basically psychology, plus Darwinian insights. It's psychology within the context of evolutionary biology.

SPENCER: Now, no one would deny that the reason that we have arms, for example, is due to evolution (at least nobody who believes in evolution would deny that, right?), or that evolution is the reason we have the capacity to do certain things, like have emotions. But people tend to push back a lot when we start using evolutionary explanations for specific things like, for example, modern dating preferences, or why people like certain things and not other things (their interests and so on). So I'm curious to hear why you think people push back so hard on those ideas.

GEOFFREY: It's politics. Long story short, it's politics. I think people often wrongly believe that there's a bunch of moral and political stakes to evolutionary psychology research that there actually aren't. And they wrongly assume that evolutionary psychologists are a bunch of reactionary conservatives, which we aren't. We actually have data on this — surveys of political orientations of evolutionary psychologists and anthropologists and so forth. But I think people are worried about all the usual bugbears: genetic determinism, predictability, and a lot of things being considered natural or innate, that people are worried are kind of evil. Like, if you say, "Look, our capacities for aggressive interactions, or homicide, or rape, do have an evolutionary origin," people get very skittish because they think, "I would never fall into the naturalistic fallacy. I personally would never believe that just because something evolved, it's therefore natural and good. But other people out there might believe that." They might think, "Oh, if evolutionary psychologists say that tribal warfare or homicide or sexual coercion are evolved adaptations that get triggered under certain circumstances, it means we somehow are justifying those things." And that's false. And we've argued against that for decades. But there's a lot of folks who think that those kinds of political and moral implications are just unacceptable.

SPENCER: So there's the naturalistic fallacy, where people sometimes jump to the conclusion, "This thing's natural, it's good," which clearly just doesn't make any sense, if we're talking about evolution, because why would evolution optimize for what's morally good? It's just not what evolution does. But there's another thing here too, which is that I think sometimes people assume if something is evolutionary based, it can't be changed. Did you want to comment on that?

GEOFFREY: Yeah, most of my books actually are on how to change one's behavior to be healthier, happier, wiser and live a better life. My book "Spent" in 2008 was a kind of Darwinian critique of runaway consumerism, and capitalism, and marketing and advertising, and so forth. And my whole point there was: the better you understand evolutionary psychology and our kind of instincts for status seeking and virtue signaling, the better you understand all that, actually the more freedom you have, as a consumer or worker, not to get caught up in these kinds of status games. Likewise with my book "Mate" in 2015, we were giving dating advice to young, single (mostly) straight men, about how to kind of improve their game and level up and become more attractive to potential girlfriends. And that was a message of hope and change and improvement, not a message of pessimism and despair and fatalism. So almost everybody I know who works in evolutionary psychology is doing it because we think a better understanding of human nature will lead to a better capacity to improve ourselves and our society.

SPENCER: Another kind of pushback to evolutionary psychology is that it's full of just-so stories: that you can take almost any modern behavior — let's say people's preference for eating junk food — and you can sort of backfill an explanation, "Well, this is because we evolved to want to seek out fat and sugar." But then the question is: how do we know that that's true? How do we know we're not just kind of making up a story after the fact? And we could do that to explain anything we want.

GEOFFREY: This is a legit worry, but I actually think it's really overblown. Because, look, I've supervised about a dozen PhD students. And we do collect data; we don't just sit around in armchairs speculating. They will run studies. They will have hypotheses. Often the data will come back, and it won't support the hypothesis. And, "Oh, no," we're not disproving evolution; we're disproving a particular application of evolutionary thinking to understand a particular part of human nature. But often when you do that, the data is surprising. And you realize, "Oh, no, our hypothesis was wrong." Likewise, when I review papers for evolutionary psychology journals, I'll often say, "This is bad, the data are bad, they don't support the conclusions." This is a mainstream empirical science, where we don't just sit around making up just-so stories. We have hypotheses about what might be the design features, the specific details, of some adaptation we want to study. And then you list them, and you use them as hypotheses. And then you gather data, and you see, "Does the data fit those hypotheses?" If they don't, your little mini theory is wrong. And this happens all the time. Most evolutionary hypotheses that have been advanced and tested have actually been shut down and don't stand up. And we keep iterating, improving, trying again, just like in any other science.

SPENCER: So could you give an example or two, just to illustrate the kind of evidence that would be used to refute an evolutionary explanation?

GEOFFREY: Yeah, so for example, we just had a very interesting symposium at the most recent big evolutionary psychology conference, the Human Behavior and Evolution Society (HBES) meeting. The symposium was on ethical non-monogamy. And a bunch of us were talking about polyamory and open relationships, and the way that, in our view, some strands of evolutionary psychology had become a little bit overly focused on kind of monogamist interpretations of human mating. And we were kind of pushing back against the monogamous assumption that all mating is either forming long-term pair bonds to raise kids together, or it's short-term casual, meaningless sex. There was no middle ground, no open relationships in the middle. And we pushed back on that with evidence from contemporary societies and studies of hunter-gatherers and studies of other social primates, like chimpanzees and bonobos. And we made the argument that evopsych (evolutionary psychology) has gone a little bit astray, in terms of being overly monogamous and adopting mainstream Western civilization's concepts of marriage, and applying those concepts a little bit uncritically to our understanding of human mating. So we were trying to update evopsych in this direction, saying, "We've made some conceptual errors, actually, for 30 years in our understanding of human mating. And we can do a little better, because we have additional insights from various Western cultures and other cultures." I don't know if that's a very good example. I could probably come up with another one if you want. But it's kind of salient to me because this symposium just happened a few weeks ago, and it provoked a lot of interest, I think, at HBES.

SPENCER: I think it's a good example. But maybe you could give one more where you feel like there was something that was believed to be evolutionary psych, and it was knocked down, where now nobody believes it, because the data came in and we now can reject it.

GEOFFREY: Yeah, I think, ovulatory cycle effects. There was a very strong argument that was made about 20 years ago by my friends and colleagues at the University of New Mexico, Steve Gangestad and Randy Thornhill. They argued: women should adaptively shift their preferences for what they find attractive in males (male potential mates) across the ovulatory cycle. Specifically, when women are at peak fertility mid-cycle, around days 10 to 15, right around the time they could actually get pregnant if they had sex, they should shift more towards indicators of good genes. They should care more about genetic quality, and a little bit less about "is this guy going to stick around and invest in me and be a good long-term partner?" The logic of that was very strong. They integrated a lot of evidence across thousands of species, from insects to mammals. They had a fairly ironclad — it looked like — logic to this. But then it was tested empirically, and these ovulatory cycle shifts in preferences don't actually seem to be as strong as we expected. The theory was strong, the evidence came back over 20 years, and now — like when I teach evolutionary psychology now, versus 10 or 15 years ago — I don't make the case that there are big dramatic shifts in preferences across the cycle. So that, I think, is a good example of where strong theory, evidence comes in, kind of knocks the theory down a little bit, and now everyone's kind of running around trying to rethink what is going on. We don't quite know yet.

SPENCER: That's a nice example. So let's talk about the types of evidence for a moment. So one type of evidence is non-human primates. You can look at chimps and so on and say, "Well, how do they behave?" And I wonder about that. Because how much can we generalize from that, when we're actually trying to make inferences about humans?

GEOFFREY: Yeah, you have to be really careful because, look, the other great ape species, right, there's four of them: gorillas, orangutans, chimps, and bonobos. We're most closely related to chimps and bonobos. We're equally related to both of them because they actually split from each other only about a million years ago, after we split from them. So some people will hold up chimps as, "This is the model for human evolution. This is a highly tribal, highly aggressive, promiscuous mating system that involves a lot of hunting and a lot of conflict." Other people will pick bonobos and say, "These are the kinds of lovey-dovey apes that resolve conflicts using sex, and they're all kind of bisexual, and they're romantic and peaceful," and they hold those up as a kind of leftist utopian ape. It's really important not to take either of those models too seriously. And also to realize we're the last bipedal hominid standing. There have been about a dozen other human-like apes walking around with pretty big brains on two legs, and we outcompeted them all, and they're all dead and extinct. Unfortunately, we don't have the most informative species still around that would really help us understand a lot of aspects of human nature. We don't have living Neanderthals. We don't have living Homo habilis. We don't have living Paranthropus boisei, etc. So, unfortunately, our success as humans in outcompeting the other bipedal hominids means that some of the best sources of data that we could have had simply aren't around.

SPENCER: So how do you see this kind of evidence being used validly? What is the right way to use evidence about chimps and bonobos to make inferences?

GEOFFREY: I think a really good way is to think about trade-offs that they face, to try to identify the parameters and variables that they face in terms of things like the different ways their environments change over time and space, and how that affects their behavior. Or, how does the sex ratio in a particular group influence their behavior? How do they deal with trade-offs between the mating effort of attracting new mates versus the parenting effort of raising existing offspring? All those fundamental evolutionary trade-offs and variables might apply quite well, even to modern humans. And they might involve similar kinds of psychological mechanisms. So that's one thing. It's sort of like, how do chimps and bonobos and humans behave if there's a sudden windfall of resources, and they find themselves in a wonderful environment with plenty of food and water and space, versus a very challenging environment? You see quite similar shifts in terms of how behavior changes. So I think that can be super informative. Another way you can use evidence from other apes is to look at their individual differences traits. We now have pretty good ways to behaviorally measure things like general intelligence across species, and to identify variation in personality traits: some chimps are more neurotic and worried and anxious, and others are more emotionally stable, just like humans. Some chimps are more extroverted; some are more introverted and shy and antisocial. So the fact that a lot of those individual differences traits seem to be quite similar between chimps, bonobos, and humans, I think, gives us a little more confidence that those individual differences traits really mean something in humans.

SPENCER: That's an interesting example. How does one measure intelligence across species in a way that's not sort of unfair to some species because they just have different aptitudes?

GEOFFREY: Yeah, let me be clear. We're not really talking about putting all species on a unitary IQ scale where you're ranking, "Oh, if humans are at IQ 100, then is chimp IQ more like eight or is it more like 30?" No. Rather, what we're trying to do is understand the structure of individual differences in cognitive abilities within each species. So what you can do, for example, is take a whole bunch of chimps or border collie dogs or any other species, and come up with a bunch of cognitive and behavioral tasks, little puzzles that are kind of hard to solve, and where there's variation in how well a particular chimp or dog solves them. And then you throw a whole bunch of these tasks at a whole bunch of animals, ideally 10 to 20 different tasks. And then you do the same thing that you do with human IQ scores. You look for the correlations among these different abilities. Are they all positive correlations, as you typically find with humans? And lo and behold, yes, generally, chimps that are better at solving one kind of problem tend to be better at solving other kinds of problems too. Same with dogs. Same with gibbons. As far as I know, every other species that has been looked at shows a kind of general factor of intelligence that you can pick up across many different tasks. That's amazing. That's surprising. You might think, "Oh, there should be trade-offs." You might think, "Oh, the chimp that's better at spatial cognition will be worse at social cognition," because you can't be better at everything. Some cars are fast, and some cars can haul a lot of luggage. And there aren't many cars that can do both really, really well. But when it comes to cognitive abilities, it looks like these all-positive correlations are quite common across species.
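[Editor's note: for readers curious what that kind of analysis looks like in practice, here is a minimal illustrative sketch in Python, not taken from the episode. It uses simulated data, and the sample size, task count, and variable names are all hypothetical. The idea is to correlate scores across tasks, check whether the correlations are all positive (the "positive manifold"), and see how much variance a single general factor accounts for.]

    import numpy as np

    rng = np.random.default_rng(0)
    n_animals, n_tasks = 60, 12  # hypothetical sample: 60 chimps, 12 puzzle tasks

    # Simulate scores driven partly by a shared ability ("g") plus task-specific noise.
    g = rng.normal(size=(n_animals, 1))                  # latent general ability per animal
    loadings = rng.uniform(0.4, 0.8, size=(1, n_tasks))  # how strongly each task taps g
    scores = g @ loadings + rng.normal(scale=0.5, size=(n_animals, n_tasks))

    # Correlate every task with every other task (columns = tasks).
    corr = np.corrcoef(scores, rowvar=False)
    off_diag = corr[~np.eye(n_tasks, dtype=bool)]
    print("All pairwise correlations positive?", bool((off_diag > 0).all()))

    # The largest eigenvalue of the correlation matrix approximates the share of
    # variance captured by a single general factor.
    eigenvalues = np.linalg.eigvalsh(corr)  # ascending order
    print("Share of variance explained by the first factor:", round(eigenvalues[-1] / n_tasks, 2))

[With real animals the scores would come from behavioral tasks rather than simulation, but the correlational logic is the same.]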

SPENCER: It's really interesting. So let's go to another type of evidence that's used in evolutionary psychology, which is looking at cultures around the world. You may not want to over-index on just, let's say, Western cultures, as often happens, but instead look at a wide variety of human cultures, including maybe tribal cultures that haven't interacted much with other cultures.

GEOFFREY: Yeah, it's really important to try to be global in your perspective. Evolutionary psychology historically has been very much a kind of Anglo-American affair. It's centered pretty heavily in England and America. We try to be very inclusive in terms of the scientists who work in our field. And we've been fairly successful in recruiting a lot of psych people in continental Europe and some in East Asia. But it's not as global as we would like. Another way, apart from recruiting culturally diverse researchers, is taking the anthropology seriously. It's pretty standard for evolutionary psychology people to read as much anthropology as they read psychology, and often maybe even more, to get a sense of what hunter-gatherer societies are like, what pastoralist societies are like, what Chinese or Indian societies are like, and to try to be very humble that the particular details of American consumerist, capitalist, monogamous culture do not necessarily apply equally to other cultures. And we take that pretty seriously. We're not perfect at it. But my impression is, if you go to a typical evolutionary psychology conference, versus a typical social psychology conference or neuroscience conference, the social and neuro will tend to be much more ethnocentric, much, much more America-centered and kind of blind to the cultural variation that you see outside the West.

SPENCER: So if we find that there's a pattern of behavior that seems to be universal across different cultures, whether Western cultures or Eastern cultures or tribal cultures and so on, do you think that's strong evidence that it's programmed into us by evolution?

GEOFFREY: It's suggestive evidence, but it's not super strong. It's like one piece of evidence among about eight or 10 kinds of evidence that you'd really want to make a case that there's a particular adaptation. So Dan Dennett, the philosopher, once made the point that, "Oh, you look around cultures all around the world. If people make spears, they tend to throw the spear's pointy tip first, rather than blunt end first. Does that mean we have an innate tendency to do a pointy-tip-first spear throw? Does that mean there's a spear-throwing instinct?" Not necessarily, right? There might be a set of tool-making and tool-using instincts, plus some sort of practical reasoning skills and learning skills that say, "Oh, if you throw the pointy tip of the spear at the gazelle, it tends to fall down and die and then you can eat it. If you throw the blunt end at the gazelle, it gets frightened and runs away and you can't eat it." So you want to be careful to describe the cultural universals at the right level of description, a level of description that's actually informative about the underlying psychology. And you want to be really, really careful not to impose your own culture's norms and terms and moral judgments inappropriately. And you do see this happening in the domain of mating, where, for example, promiscuous sex tends to get stigmatized in many cultures, and long-term monogamous pair bonds tend to get validated and supported and so forth. And that can lead scientists to make kind of misleading judgments about what's really happening in a particular mating system. They might make, for example, a simplistic characterization that says, "Oh, this society is so-called polygynous, which means one man can mate with multiple women or marry multiple women." And then you dig a little deeper into the anthropology, and you realize, "Oh, maybe 5% of the men in that society are married to multiple women, but the vast majority of men are actually mated monogamously." In some sense, it's kind of a polygynous culture compared to ours, where we don't legally validate polygynous unions. But if you apply these Western terms inappropriately to other cultures, it can be very misleading.

[promo]

SPENCER: What about on the flip side, where there's an aspect that varies across cultures? So you see some cultures do it one way, some cultures do it another way. Is that strong evidence that something is not evolutionarily programmed? Or actually, could there be some reasons why you'd see that even if it were programmed into us?

GEOFFREY: Yeah, that's a really important issue. Just because you see cultural variation does not mean evolution has nothing to do with that trait or that behavior. We know from the field of behavioral ecology, which studies animal behavior, that you can take a set of animals and put them in different environments, and their behavior will change in ways that make absolute sense if you understand foraging theory and how they find food, or mating dynamics and how sex ratios influence mating systems. So evolutionary biology is not just a theory of constants; it's also a theory of variables and trade-offs and change and responses to environmental structures. So variation across cultures can often be more informative about the underlying psychological constants than a lack of variation. Because once you see variation across cultures, you can characterize it. You can say, what is really shifting (let's say) the degree of risky status seeking among young males in this culture? Well, you can study what percent of young males are actually succeeding in having kids. If that percentage is very low, then there will be very strong incentives for young males to compete very hard and aggressively in a risk-seeking way to achieve the status and the attractiveness that they need. And then you can take those insights from across cultures and apply them within a culture. And then you can do research, like, within a particular culture, are there local differences in (let's say) the percentage of men who succeed in attracting mates? And does that correlate with risky status seeking within the culture? Or do individual differences in perception of the local mating market even influence individual behavior? So it's sort of a matter of trying to weave together analysis at the individual level, the sub-cultural level, and the cross-cultural level.

SPENCER: Yeah, it seems like the takeaway is that: behavior is almost always a complex interaction between genetics and environment. And you kind of have to very carefully separate the two, which probably involves multiple lines of evidence all kind of pointing in the same direction. Do you agree with that?

GEOFFREY: Yeah. And another way to think about this is: we've got these massive brains, 80 billion neurons and three pounds of computation that is incredibly sensitive to all kinds of contexts. It's what we think about all day. If behavior was just a set of very simple, reactive instincts, we wouldn't need this brain at all. We're trying to understand the complexity of behavior by giving full respect to the fact that we've got a central nervous system that has a lot of neurons and a lot of computation. And we're not just a set of little tiny ganglia like insects.

SPENCER: Right. So that powerful brain that we have, not only does it learn at the individual level so that we change over our lives, but it also reacts to incentives and things like that, which the environment puts on us.

GEOFFREY: Yeah. I should also say, a lot of evolutionary psychology is very, very focused on what you might call "hot cognition" (motivation and emotion and values and preferences), more than it's focused on what your typical cognitive scientist would tend to be interested in, like memory and categorization and information processing. So we are very focused on what motivates behavior, what people are trying to achieve, what people get passionate about, versus just a generic information-processing approach to understanding the brain. Which is ironic, because I actually went to Stanford to study cognitive psychology in the 80s. That was my goal as a grad student, and then I kind of got seduced and distracted by the Darwinian stuff.

SPENCER: The other day, I was in a conversation with a friend, where we were talking about the idea of humans finding incest to be really disgusting. And it seems really obvious to me that this was programmed into us by evolution. And I was surprised to find that my friend didn't share that view. They thought that it was a cultural thing. I thought that was really interesting. And they pointed out, as evidence for their own view, that apparently incest-related porn is incredibly popular; one of the most popular categories of porn is sibling porn or step-sibling porn or whatever. And so, I was curious, just on that random topic: what is the evolutionary psych view on something like incest? Because you might think that if there's anything we should be genetically selected against, it would be producing offspring that can't survive. And we know that incest leads to less viable offspring.

GEOFFREY: I think there's a pretty strong case that aversion to close genetic inbreeding (aversion to incest) is very strong and evolved. You see it in thousands of species. Darwin even studied anti-incest adaptations in plants. He studied how flowering plants try to avoid self-fertilization by having pistils and stamens in different locations and not mixing male and female gametes from the same plant. So, if even plants have incest aversion and thousands of other species do, then humans are likely to as well. On the other hand, cultures do differ a little bit in how they define incest at the margins, like, "Is first-cousin mating really incest?" Some cultures, like most of the West for the last few hundred years, have said first-cousin marriage is incest; it is prohibited and discouraged. But other cultures, like in the Middle East and a lot of Africa, say first-cousin mating is not incest and it's okay. Generally speaking, though, very few cultures actively encourage sibling mating, or first-degree relative mating where you share 50% of your genes. On the case of the incest porn, I have sex researcher friends who study porn quite intensively, and they would say there's about 10 or 100 times as much step-sibling porn as sibling porn. It's almost always framed as, "You're not actually genetically related to the stepbrother/stepsister that is depicted in the porn. Instead, you're genetically distinct from them. You just happen to be sharing the same house; how convenient. Yay, that's sexy," whatever. You do get some porn categories that are deliberately flouting our evolved instincts and kind of playing around with them and messing with them and violating them. And that's a whole rabbit hole that we could go down. But my impression is that a lot of what people call incest porn is actually step-sibling porn. It's not really genetic incest. It's just like, "Ooh, ha, that's taboo. That's a member of your family. How wrong is that? And yet, it's hot somehow."

SPENCER: It seems like a lot of sexual interest is around taboos. There are many ways to have a taboo. One of them would be violating something like the incest taboo; but making it step-siblings is like trying to work around the evolved impulse not to feel sexual attraction toward people you're related to, while still keeping the taboo aspect of it. So maybe it's hitting some kind of sweet spot there.

GEOFFREY: Yeah. I think there are legit questions about the psychology of some of this like: are young men watching step-sibling porn because a lot of them actually grew up in mixed households where they did have stepsibs that they weren't genetically related to, but may have been attracted to? Is that what's driving it? Or, is Pornhub and the porn industry just pretending that this is stepsibs, but actually, people have some latent, weird, anti-Darwinian, legit attraction to close genetic relatives? I don't know. I suspect it's more of the former than the latter, but it's hard to say. And of course, research on a lot of these topics is so stigmatized, and it's so hard to get federal research support to study it, that we don't know.

SPENCER: Another topic that is a direct application of evolutionary psychology is: why do we have emotions, and what is the nature of emotions? When I think about an emotion like disgust, I think of this as evolutionary programming to help us avoid things that cause disease or illness. So for example, avoiding food that's gone bad, avoiding dead bodies that could get us sick. When I think of an emotion like anger, I think of this as programming to help us in situations where we're threatened or something we value is challenged by another person or another agent, and anger is a way of signaling, "Hey, you can't mess with me. I'm going to defend myself. You better stop." Now, you also have people like Lisa Feldman Barrett — I think one of the most famous examples — who believes (as I understand her view; hopefully I'll summarize it accurately) that these are not evolutionarily programmed as discrete units of emotion. Instead, there's a really strong cultural component. And anger is only anger if it has the right context around it. And maybe a different culture would interpret it differently. I'm curious to hear your thoughts on a debate like that.

GEOFFREY: I think emotions are really fascinating, and they're very much at the heart of some of the most socially important and politically important aspects of human nature. I actually teach a course on human emotions that I've taught for about 15 years. Regularly, we review a lot of evolutionary psychology work analyzing emotions — like disgust and anger — in terms of their functions and their origins and similarities to other apparently emotional states in other species, like other social primates. I think, heuristically, you can get a lot of mileage out of looking at human emotions and asking, "What's going on here? What triggers this? What are some adaptive functions this might be solving? Why does it pop out when it pops out? What behaviors does it motivate? How does it shape our decision-making? How does it shape our memory?" I think we've made a lot of progress analyzing a wide array of emotions using that toolkit (Darwinian analysis). I don't see the people like Lisa Feldman Barrett, or the other kind of social constructivists, making a lot of systematic progress in understanding specific emotions. Post hoc, you can always look at an emotion and say, "Well, it varies a little bit in terms of its cross-cultural manifestations. And it varies a little bit in terms of its facial expressions. And there's different display rules about how you show it off in different cultures." That's all true. But in terms of understanding, "Wow, there's different forms of disgust." Disgust is a great topic — partly because my wife, Diana Fleischman, did her PhD studying disgust, partly because one of my own PhD students, Josh Tybur, who's now at the Free University of Amsterdam, studies disgust. If you understand that a large part of disgust is about avoiding pathogens, avoiding infectious disease, that gets you a lot of mileage. That really helps you understand a lot about disgust. And then you add to that and go, "Oh, there's this other thing called moral disgust," which isn't really about pathogens per se, but it's really about limiting the spread of bad behaviors, values, and norms that might hurt your tribe. That seems to be a major function of moral disgust, where you go, "That person is bad. Nobody imitate them. Let's all punish them collectively." That's a very different emotion than anti-pathogen disgust, but I just find it really valuable to analyze emotions in terms of functions.

SPENCER: Let's talk about specific modern applications of evolutionary psych that might actually be useful to people. You wrote a book about modern dating. So can you tell us some of the ideas from that book that you think are actually useful to people today?

GEOFFREY: I think when it comes to modern dating, Tucker Max and I did this book, "Mate," about eight years ago. We did this whole podcast series called "The Mating Grounds" where young single straight men called in with their questions, and we did hundreds of episodes of answering young men's questions. I think a key insight from that, which comes from evolutionary psychology, is: young people should make some effort to understand which specific traits the other sex finds attractive, and then try to figure out how to level up and improve those traits within yourself that you can actually change — that are actionable, where you've got some leverage about how to make them better through cultivating good habits — and focus on those things. This is in contrast to the kind of pickup artist, red pill, manosphere subculture, which often is very distressing and pessimistic and fatalistic. A typical red pill approach to dating says, "Hey, little dudes, if you're not tall and handsome and in great shape and make a lot of money, everything is hopeless and no woman wants to date you." This is sad because men can't typically improve their height very much. They can't typically improve their facial attractiveness very much. It's really, really hard to make a lot more money than one is currently making. This is why the economy is competitive and careers are hard. Our approach was more: there's all this low-hanging fruit, traits and skills that are actually very attractive to the other sex that most people don't bother to cultivate very much, like dressing better, developing a better fashion sense, getting in shape to the extent that you can without trying to become Arnold Schwarzenegger, cultivating your sense of humor, cultivating specific skills like learning how to sing and how to draw portraits of women or men, getting better at sex, getting better at conversation, improving your vocal timbre so you don't have annoying vocal fry, like the Kardashians. There are so many actionable things where we have evolutionary psychology research on what is attractive and why it's attractive. But we also know a lot of these attractive traits are not entirely determined by your genes. You can improve them with effort if you do it in a targeted way. And long story short, we just made this plea to young men: "Hey, why don't you spend at least 10% as much effort improving your mating-relevant traits as you invest in your education and your career?" And I think if they did that, they'd actually often improve a lot and be a lot more attractive.

SPENCER: One of the challenges I see people face in mating, particularly, is that they don't seem to get good feedback. Maybe they will go on online dating websites, and nobody responds to their messages. Or they go on dates, and the people aren't interested in seeing them again. But nobody says, "Hey, let me break it down for you. You just don't have a good sense of humor." They don't actually get that. And so I'm wondering, if someone wants to be more successful in the dating market, how do they figure out what their sort of low hanging fruit is?

GEOFFREY: I think you want to cultivate some friends — ideally opposite-sex friends if you're straight — who are capable of radical honesty sometimes, when you ask them for it. This is very, very difficult because we have enormously strong social norms against giving honest feedback about people's flaws, especially those that are romantically really important. So, I might have male friends where I can sort of imagine the top eight things they do wrong on first dates. But I'm not going to tell them about it honestly; that could be a very quick way to lose all your friends, if you give unsolicited feedback. The people who know any given person best, in terms of giving feedback, are their ex-lovers. But it's very, very difficult to reach out, whether by text or email, to some ex-lover and say, "I want to avoid the same mistakes that turned you off, that were annoying, that were irritating. Please give me feedback about what I can do better." Sometimes that might be worth doing. Sometimes somebody might actually be quite relieved to unburden themselves of what you did that was really annoying, but that you can also fix. I almost wish that there was a market for a service where you could hire somebody to go on dates with you, and then they would give you radically honest feedback about what you wore, how you talked, what kind of eye contact you had, where you took the person (was it a good choice?). I suspect if there was a social norm that it's okay to pay for that kind of service, and a lot of people used it, they could benefit from it a lot.

SPENCER: A cool business idea. I don't know if people would buy it; they might be too scared. Actually, many years ago, I went to this speed dating event, where the way it worked is, after each conversation, people would jot down notes about you — about what they liked and didn't like — and at the end, you got all the notes. I thought it was fascinating. But I have told people about this, and they were like, "Oh, God, I wouldn't want that experience." So I think it just takes a certain attitude towards feedback. One trick I have found really useful in my own life for getting feedback from people is to frame it as you're doing me a favor by telling me how I can improve. So you could say, "I really want to improve at this. What do you think I should focus on? Should I focus on this or focus on that? Or do you have other ideas of what I should focus on? I really want to get better." And if you convince your friend that you really want to get better, then it flips it from "they're being obnoxious by critiquing you" to "they're helping you by critiquing you." I think that can be a positive development.

GEOFFREY: Yeah, and I think to be patient with this, you can ask a friend, "Give me the top two things you think are probably the most annoying when you've seen me interacting with women." Keep it limited to two. And then they'll be very reluctant. They'll give you one real thing and then one fake thing that's not really very important. But then, if over the subsequent weeks, you demonstrate you actually valued that feedback, you're implementing some new habits, you're trying to fix it, the next time you ask for additional feedback, they might give you a lot more honest and actionable stuff. So, I think it's a matter of building up trust with one's same sex or opposite sex friends, that if they give you some honest feedback, you actually value it, use it, and maintain the friendship with them.

[promo]

SPENCER: All right. Before we finish the conversation, I want to switch topics totally to something I know has been really on your mind lately. It seems like you've pivoted your research to really thinking about artificial intelligence. Do you want to start by telling us why you did that?

GEOFFREY: I've gotten really worried about extinction risks from advanced artificial intelligence lately. And it seems like that might be a complete change of career for somebody my age, 58, who's a psych professor. But it's actually not that big a shift. Back when I was in grad school at Stanford, I actually spent most of my time doing neural network research and machine learning research. My first 20 papers were mostly neural networks, genetic algorithms, evolutionary robotics, and other machine learning papers. And I've been interested in AI for 35 years, following the progress a little bit from a distance. Then I got interested in the effective altruism community (which you know and love) about seven or eight years ago. And a lot of the effective altruists are very worried about AI risk and maintaining humanity in the face of existential threats. And then over the last year, with the rise of ChatGPT, and the success of OpenAI and DeepMind, and so forth, it looks like progress in AI has been a little faster than we might have expected. And this has moved up the timelines of how worried we should be and when we should really start to worry. So that's my backstory: I have always been interested in AI, published a bunch on machine learning, followed it from a distance, but the stakes have just gotten very high recently, and the pace of progress seems to be accelerating quite dramatically. So I'm worried.

SPENCER: What's your approach to trying to work on this problem?

GEOFFREY: My approach is basically to try to push a lot more behavioral sciences insights into the AI safety discussion. We have this concept of AI alignment, which is: how do you create advanced AI systems that are "aligned" with, or in accordance with, human values and preferences and goals. What typically happens is: when people in AI safety talk about alignment, they put a lot of effort into the issue of how you get the machine aligned with something — like, how do you get the machine to behave? — but they don't actually spend that much effort characterizing the human values and preferences that we would want the AI to become aligned with. So in a series of essays that I did, over the course of the last year or so, for the Effective Altruism Forum, I tried to dive a little deeper into: what human values are we really talking about aligning with? Because on the one hand, we have bad actors. There's a lot of psychopaths out there who have very bad intent. Do you want AI systems to be aligned with psychopaths or nihilists or terrorists, or religious or political extremists who only want their own group to flourish at the expense of other groups, or who don't even like humanity and would prefer for humanity to go extinct? That's the kind of edge case where you don't actually want AI alignment with antinatalists or misanthropes or terrorists. On the other hand, there's a lot of values that are very, very important to people, like religious values. 80% of humans around the world are still involved in organized religion somehow, but something like 80 or 90% of AI researchers are atheists, who often have quite a bit of contempt for religious values. So if you're talking about AI alignment with human values, and you're ignoring religious values, to me, that's bad science, bad research, and bad ethics. But my impression is, the typical Bay Area machine learning theorist, who kind of wishes that religion would just evaporate from human life and who has quite a bit of contempt for religion, would think, "Oh, AI alignment with religious values, that's not something I want to think about." Well, they have to think about it. Because if you piss off the 80% of humanity that is religious by trying to push atheist values into your AI system, people won't put up with it. People will rebel. The 1.5 billion Muslims will rebel, the 2.2 billion Christians will rebel. The AI industry needs to take the full diversity of human values seriously.

SPENCER: So what does it look like to build in religious values without building in a specific religious belief? Because I assume, if we're building a really intelligent AI, we're not talking about making a Muslim AI or a Christian or Jewish AI, right?

GEOFFREY: Right. Well, the subtext to a lot of my writing on this is my increasing concern that the whole notion of AI alignment is fundamentally misconceived and basically impossible. Because I agree with you, I don't think it's possible to put some kind of generic religious values or generic spirituality into an AI system in a way that fully respects the richness and detail and doctrine of specific existing organized religions. The Muslims of the world are going to want their AI systems to be fully aligned with Islam. The evangelical Christians of the world — however many of them there are, 500 or 600 million — are gonna want their AI systems to be fully aligned with the doctrine of Jesus Christ, our Lord and Savior, period. They are not going to be happy with some kind of wishy-washy, superficially spiritual AI system. They are going to want the AI systems that they use and love and pay for and are vulnerable to, to reflect their religious values. Same thing with political values. Reactionary conservatives are not going to tolerate AI systems that promote some woke, liberal agenda, and vice versa. So to me, with the diversity of human values, including the value conflicts that we see every day on Twitter and the value conflicts that have led to political and religious wars, you can't paper over those. You can't just pretend those don't exist. You need to confront the fact that AI alignment may not be able to align simultaneously with all these different values. And there might be fundamental conflicts that cannot be reconciled in the AI, if they can't be reconciled on Twitter.

SPENCER: So where does that leave an AI researcher who wants to create AI that is benevolent and aligned with human values?

GEOFFREY: They should stop. They should not do that, at least for the moment, until we have a much better handle on AI safety and alignment, and on whether alignment is even possible. So I'm involved in a group that is calling for pausing advanced AI research and having at least a temporary moratorium on research towards AGI, or artificial general intelligence. I think, for example, it's nice to see OpenAI devoting a significant share of its resources to alignment. They made an announcement within the last 24 hours about devoting a lot more money and compute and resources to alignment. But if they keep pushing on AI capabilities, if their explicit goal is to develop AGI as fast as possible — which, as far as I know, is still their goal — that's reckless and dangerous and evil and stupid. And I don't think they should be doing it. So I'm patient. I would be happy for AI capabilities development to be paused for a few years or decades or even centuries, until we have a much, much better idea how to reduce extinction risks from AI. And there's no hurry, right? It took us 2 million years to evolve a three-pound brain from a one-pound brain; it took us 10,000 years to develop civilization. There's no rush with AI. We have enough prosperity and happiness and peace, generally, in the world at the moment, that if it takes five or 10 generations to get AI safety right, let's just do it. Let's be cautious. Let's be prudent. Let's think about our great-great-grandkids and what would be best for them rather than just chasing short-term profit.

SPENCER: There are two different issues here that I think maybe we're conflating a little bit. One is aligning a really smart AI with human values, so that it reflects human values in its behavior and actions. The second is preventing it from ending civilization, either making humans go extinct or putting us into some kind of totalitarian dictatorship where AI controls everything. So can you talk about how these relate to each other, or how these are two different questions?

GEOFFREY: At an abstract level, it seems like they're very different questions. We have this sort of meme of, "Alignment really means just don't kill everyone." That's the sort of basic level of alignment that you might think everybody would want. At the other extreme, you might take the rhetoric of alignment researchers seriously. When they say alignment with human values, they really mean alignment with human values in their full richness and diversity. But in the middle, right in between the 'not-kill-everyone-ism' versus the 'Christians should have Christian AI,' there's a lot of very tricky gray area. There's a significant minority of people who sort of put the Earth first, who think that the biosphere is more important than humanity, who think humanity is doing a very bad job of stewardship in terms of protecting other animal and plant species, and who think the Earth would be better off without humans. That's a heartfelt value among millions and millions of eco-activists. Now, if you put their eco-activist values into an AI, that AI would be an extinction risk to humanity. It would think, "Well, humans are the problem; get rid of humans, and the rest of the Earth is fine." And the AI will self-terminate after it gets rid of humans. And then the biosphere continues merrily on its way. There's a lot of humans who'd be happy with that outcome. And I think we have to take seriously the fact that those people exist. On the other hand, there's a lot of religious extremists who think, "Look, if the afterlife is infinitely long, then the only thing worth doing is maximizing how many souls you save." That's the utilitarian calculus: convert people to your religion, so they can go to heaven rather than hell. If those people are taken seriously, and those values are incorporated into AI, then you could easily get AI-led religious wars that could be very bloody, and where people are like, "Look, what happens in this transient human life is irrelevant compared to the eternal afterlife. The only thing that matters is saving souls." So I worry that once you get into the domain of really heartfelt core values, where there's enormous diversity across humans in what we actually want, it's very, very tricky to get AI safely aligned with any of that. And I think that the AI safety community is just ducking a lot of these issues because, frankly, they live in a kind of comfortable Western bubble, where they don't get exposed to the true variety of religious and political values that are actually out there.

SPENCER: I wonder if some listeners right now might be thinking, "Well, it seems like you're treating these AIs as gods or something. If there's one AI that is built by people that think that the biosphere is better off without humans, then suddenly humans cease to exist." So do you want to give an intuition pump for why you feel even one of these AIs having misaligned values could be enough to end civilization?

GEOFFREY: The strongest intuition pump or heuristic that I can think of is: imagine there's an artificial general intelligence, which means, by definition, it's about as capable as a pretty smart human across most human domains. That's what AGI means, by definition. But also imagine it's a lot faster than humans; it can perceive and act and behave in the world, maybe, orders of magnitude faster. What are the implications of that? If it has goals that are not aligned with human goals, at a very basic level, in terms of manipulating the economy and the financial sector, if you have an AI that's as good a trader as any hedge fund trader, but it can integrate information orders of magnitude faster, then it's basically like one of these speedster superheroes — like Quicksilver or the Flash — having a fight against a traditional mixed martial artist in an octagon. The speedster is always going to win. If you can move 1,000 times faster than your opponent, and you're closely matched in strength and other abilities and skills, you will always win. So an AGI that's faster than humans will win at whatever it's trying to do. If it wants to crash the economy, if it wants to crash the stock market, if it wants to make a bunch of money with longs and shorts and financial derivatives, it will be able to do that far, far faster than any human trader can. And it will quickly become extremely economically powerful. And if it wants to use that power to create economic disruption and social chaos, it could easily do that, even if it doesn't have any robot bodies or any terminators or any of that stuff.

SPENCER: That intuition of it running like a human but faster, I think, is very useful. Another thing you could imagine is it being like a human, but having many copies of itself. Imagine there was a sociopathic human that wanted to end life on Earth, and that human could make copies of themselves. So it'd be like there's one today, there are two tomorrow, in a year there are 4 billion of them, whatever. So there's just more and more and more, and then suddenly maybe there are more of them than there are humans on Earth, and now there are a thousand times more than there are humans on Earth, and so on. So I think maybe that's another intuition pump: if something is digital, it may have really fundamental advantages over something biological, in that you can make duplicates, and the duplicates are perfectly aligned with the original copies.

GEOFFREY: Yeah, that's also a very powerful and horrifying intuition, if you remember the capacity of Agent Smith in The Matrix movies to copy himself, and once you realize, "Oh, man, Neo cannot fight a thousand Agent Smiths very effectively." And this also relates back to biological evolution. Humans are relatively large and long-lived, and we are constantly fighting these wars against small, fast-evolving pathogens, like viruses and bacteria. And an awful lot of our physiology and our psychology is about dealing with the threats of these very, very fast self-replicating entities. In this case, it's not just the viruses that can replicate very, very fast; it's the whole AGI (as you point out) that can replicate very, very fast. So you have to imagine a situation where, let's say, instead of crashing the economy and the stock market, the AGI wants to manipulate human political psychology and wants to do propaganda. Well, Elon Musk understands this risk very, very well, and this is why he's pushing blue checkmark validations on Twitter. He wants Twitter to be safe from the proliferation of self-replicating AI bots that could do political or religious propaganda. If there's no barrier to entry for tweeting, if it costs nothing, then AIs can absolutely swamp any social media platform once they get good enough at propaganda. Whereas if you do have a barrier to entry, even just a few dollars a month, it's a hell of a lot harder for self-replicating AIs to swamp the public discourse with propaganda.

SPENCER: So what kind of research do you see as essential in order to make progress in making safe AI?

GEOFFREY: I think, at this point, the most important research is just reminding machine learning researchers about the full diversity and complexity and richness of human values, and the fact that alignment is probably going to be much, much, much harder than they think. They already know it's technically hard. It's hard to get an AI system to behave the way any single individual wants. But I think they really need a reality check about just how difficult it will be to capture the kind of consensual goals of humanity as it actually exists today. I think that's the top priority. And I think the main deliverable from that kind of research would be, hopefully, a feeling of, "Oh, shit, this is really hard. AI safety is a much harder problem than we realized. And we better slow way down in AI capabilities development, because this project of trying to make AI safe is going to take decades, if not longer. It's not something that's going to be solvable in the four-year time span in which OpenAI thinks alignment is solvable."

SPENCER: A suspicion I have about people who are trying to build AIs that are probably smarter than humans is that they kind of think to themselves, "Well, as long as I'm in control of the AI, it's okay, because I'm not gonna use it to do bad things. And so the key aspect is that I can control it; it does what I want." I'm curious to hear your reaction to that kind of thinking.

GEOFFREY: I think there's a lot of naivety about that at a couple of different levels. One is: everybody thinks they're the good guy. Everybody thinks they're the good guy. Everybody thinks whatever values they have at the moment are the right and proper values that the rest of humanity should emulate. And we saw this all the time with the GPT systems incorporating kind of lefty, sort of woke values in terms of what information is censored and what kinds of questions are allowed to be asked of GPT. And that's understandable. You get a bunch of lefty, kind of Bay Area AI researchers, who all share vaguely lefty Bay Area values, and they think those are the natural and true values everybody should have, and then incorporate them into the AI system. And it immediately pisses off everybody who doesn't share those values. So I think it's important to have some epistemic and ethical humility, what Will MacAskill calls moral uncertainty, where you realize, "Oh, my values might be different from other people's values. My values might change in 10 years after I maybe get married and have kids and move away from San Francisco or wherever." The second problem is: there are researchers who might think, "Well, as long as I am in control, I trust myself, I'm safe." Well, guess what? You're subject to leverage. Everybody's subject to leverage, incentives, and blackmail. And anybody can be turned and compromised. So whoever is in control of AIs, you have to ask yourself, "Are there any incentives that other individuals or groups might have to take control of the people who are in control of AIs?" If those incentives are sufficiently large, then those groups will take control of the people who control the AIs. They will use whatever leverage they can, whether they are governments or terrorist organizations or religious extremists, or whatever. And I think most AI researchers are not very well trained in counterterrorism, or resisting blackmail attempts, or resisting social pressure. That is not their skill set. So I think they're being very naive if they think, "Oh, well, we, here in OpenAI, are well-intentioned, and we are good people, and therefore nothing could go wrong." Well, no. If your systems are powerful enough, billions or trillions of dollars are at stake, and you can potentially influence the views of millions or billions of people, that's a very sweet prize to pursue, and people will pursue it, and people will subvert you and manipulate you and take you over. Microsoft has even done this already, to some degree, with OpenAI. Investment by Microsoft converted OpenAI from a kind of wholly benevolent, hopeful, transparent startup nonprofit into basically an arm of Microsoft. So they're already compromised, I think.

SPENCER: You mentioned two concerns there. A third concern that I see among a lot of technical AI researchers is that controlling these systems may just be way harder than people think. So people might think, "Well, as long as I'm in control, it's going to be okay." But it's one thing to get it to produce reasonable outputs most of the time when you're just having a generic text conversation. It's another thing when you have a system that's so powerful that, if you give it a command like 'make money,' it could actually maybe go and make billions of dollars. But those kinds of systems, paradoxically, might actually be harder to control than systems that have much more limited scope.

GEOFFREY: Yeah, I think that's right. My other pet theory is that AI researchers need to become parents, and they need to have more experience with kids, because I think if they did, they would realize it's actually very, very hard to control other intelligent, sentient beings that have their own agenda. Whether that's a toddler or an AGI, it's a very similar control problem. Now, we've evolved for millions of years as parents to have some ways to control little human creatures. Currently, we're just bigger and stronger than them, and we can lift them up and get them out of danger. Certainly, we have training methods for reinforcing behavior in ways that are effective because we understand their beliefs and desires and what they want and what they don't want. But when it comes to an AGI that's potentially much, much faster than us, that can self-replicate (as you say), and that is potentially more physically or economically powerful than us, I think it's really important to have this humility that all experienced parents get: you cannot micromanage everything your kid does, you cannot control every trait they develop, you cannot control how they turn out, and there will be disasters, there will be dangers, there will be risks that you are not in control of. The main difference here is: as human parents trying to control our kids, we have these built-in advantages over them; we are bigger and smarter and more experienced. But when it comes to AI, all of those advantages parents have over kids might evaporate. In fact, we might have no advantages over AI.

SPENCER: It seems like, at the very least, the AIs are likely to have much of human knowledge already in their minds, because we already have AIs like ChatGPT, which have essentially processed most of the internet and huge numbers of books and so on. So imagine your toddler were already 100,000 times more knowledgeable than you.

GEOFFREY: You do occasionally get situations where parents of average intelligence have a kid who's a true genius: extremely smart, reading very advanced books by age 10 or 12. And that kid is asking the parents, "Why is the sky blue? Why is the Fed lowering interest rates? Explain quantum gravity to me." And the parents are like, "Dude, we have no idea. That's beyond us." We're going to be in much the same situation relative to an AGI that has kind of a panoramic understanding of human knowledge.

SPENCER: Often people make this distinction between narrow AI, which is AI designed to solve particular problems — like predicting what movies you want to watch, or even something like ChatGPT, which you could say is not completely narrow, since it does a lot of things, but is definitely not a fully general AI — and then this idea of AGI (artificial general intelligence) that could do anything that a human could do. What's your view on the development of narrow AIs?

GEOFFREY: I'm actually quite pro narrow AI. I think narrow AI has already done amazing things in life. Virtually every app on my phone that I love to use, that's super useful, is a narrow AI by 1970s standards. Google Maps is an incredible spatial navigation narrow AI that would absolutely blow the mind of any computer scientist from the 1960s and 70s. There are a lot of other apps that we use every day on our smartphones that are narrow AI by most traditional standards of AI, and that are great. And I hope that we develop lots of other narrow AIs. For example, it would be amazing to have biomedically-oriented narrow AI that really helps us with longevity research, and developing regenerative medicine, and helping to guide the integration of scientific literatures about how the body works and how aging works, and that could help suggest, "Here are some new studies to run. Here are some new drugs that might be helpful." That would be wonderful, and I think that would get us one of the key potential benefits of AGI that AI researchers are hoping for, which is human longevity. I don't think you need AGI to get longevity. I think you can do narrow AI applications in biomedical research that give you like 80% of the benefits that AGI could deliver with maybe less than 1% of the risks. And to me, that's a no-brainer. I think that's the path forward: being very careful to develop the kinds of narrow AI that could really help people, while being very wary about going in the direction of artificial general intelligence or artificial superintelligence.

SPENCER: Geoffrey, thanks so much for coming on.

GEOFFREY: My pleasure, Spencer. It's great to be here.

[outro]

JOSH: A listener asks, "Could AI approaches to cognition replace psychology or neuroscience?"

SPENCER: Well, you know, it's interesting, because one day you could imagine that AI could replace a lot of things. Depending on how fast it develops, it might one day even replace everything that humans do. So there's sort of that way of looking at the problem. On the more immediate front, one thing we have been exploring ourselves is looking at AI tools to help better understand psychology. And I do think there are some really interesting applications there, some of which we are pursuing. The power of these models to find patterns is potentially really applicable to thinking about patterns in human behavior.

JOSH: So one of the things that could be meant by this question, in addition to how AI can be applied to psychology to improve how psychologists do their jobs and that kind of thing, is: how does our understanding of, let's say, neural networks augment or replace or improve our understanding of human neurology?

SPENCER: I think it's an open question to what extent our brains work in a way analogous to neural nets. Obviously, it's been proven that neural nets can do a whole bunch of things that, prior to neural nets, we could never get a software system to do. So there's no question that they're very powerful for creating intelligent behavior. But that doesn't necessarily mean that they work in a way similar to how our brains achieve those same goals. From my point of view (I'm not a neurologist; it's not my specialty), I suspect that certain parts of the brain are much more similar to neural nets than others. For example, I wouldn't be surprised if the image processing systems in our brains have quite a lot of analogs to neural nets, but perhaps other parts of our brain don't. And so I think there may certainly be some insights we could glean by studying neural nets about the way minds work in general, but there might be other things that are hard to generalize about just because the architectures are too different.
