CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 181: Do technological innovations yield net gains in the long run? (with Justin Smith-Ruiu)


October 26, 2023

What are the limits of tech solutionism? Do technological innovations create as many problems as they solve? Or, in other words, do technological innovations improve the world on average over time? Are humans living in the 21st century actually worse off than those that lived in the 11th century? What's the difference between "art" and "content"? If image-generating AIs just produce images that are stylistic averages across all of their training data, then is it even theoretically possible for such models to create art that's edgy, avant-garde, or off the beaten path? Is cinema dead? Is literature dead? Are the humanities dying, especially in the US? And might that be a significant contributing factor to the withering of democracy in the US?

Justin Smith-Ruiu, formerly known as Justin E.H. Smith, is a writer based in Paris. He writes speculative fiction, documentary metafiction, criticism, literary non-fiction, and poetry, and also translates poetry. In 2019-2020, he was the John and Constance Birkelund Fellow at the Cullman Center for Scholars and Writers of the New York Public Library. He is also a professor of philosophy in the department of history and philosophy of science at the Université Paris Cité. Learn more about him and read his writings at www.the-hinternet.com

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Justin Smith-Ruiu about tech solutionism, algorithmic content and the decline of the academic humanities.

SPENCER: Justin, welcome.

JUSTIN: Thank you very much, Spencer.

SPENCER: Today, we're gonna have a wide-ranging discussion touching on a bunch of interesting topics like, what's the limit to solving problems with tech? What's the deal with algorithmic art? Is there a decline in the academic humanities? And if so, what's it all about? Let's start with the first one, limits of tech solutionism. Do you want to give us a quick introduction to the topic?

JUSTIN: Sure. I published a book last year called "The Internet Is Not What You Think It Is," and at the time, I think a book published in 2022 that talks about, broadly speaking, the philosophy of technology, is already prehistoric. [laughs] You have to come out with a new book every year in order to keep up. Since it's been published, we have seen significant new shockwaves moving through debates around technology, in particular having to do with artificial intelligence. I read just yesterday, the guru behind the new AI revolution, Altman, is now talking about scanning eight billion irises — I don't know if you saw this — in order to give us a unique ID through our eyeballs, "Minority Report" style. And this is going to be the basis (so they're saying) of a new economy where we will get micropayments, once we're properly ID'd through our irises, for our activities online, or something like that.

SPENCER: This is Sam Altman of Worldcoin that you're talking about?

JUSTIN: Yeah, Sam Altman, yeah. So these are things that were not on the horizon, on the agenda two years ago when I finished my book. And so I think one thing this fast pace of developments makes us quickly realize is, we're just going to keep experiencing new shock waves year after year, and month after month. And all of these shock waves are going to have a particular contour to them. In particular, we're going to have people like Altman talking about these new developments as if they were solving pre-existing problems; whereas in fact, if you look at this from a zoomed out history of science and history of technology perspective — and that is my starting point as a scholar — we are almost obviously constantly generating new problems that would require solutions. So the tech world then is both generating and solving its own problems in a way that could make you easily think, if we were not so intent on innovation, we might be able to inhabit a world that didn't have so many problems that need to be solved. And it's naïve in this regard, to think that we're ever going to arrive at a point where the solutions have been definitively or relatively permanently laid out, and we can just chill for a while. [laughs] That's just obviously not going to happen. Maybe I'm just a late learner, or it takes things longer to get through my thick skull, but this is honestly something that I had really not appreciated to such a degree until, indeed, after I finished this book. I feel like the history of technology now is plainer to me than it was even for many years writing about it, both in the 21st century context, but also more deeply in the context of the centuries-long development of the basic kind of apparatuses that shaped the modern world, the development that really got going in the 17th century. So that's what I'm thinking about with this problem. 
I'm not a radical, I'm not an anarcho-primitivist, or anything like that but, broadly speaking, I personally share the view of someone like Wendell Berry, that whatever technological revolution we manage to bring about — say in energy, for example — we're going to create for ourselves a new mess out of it. As Wendell Berry likes to say, if we did manage to break our addiction to fossil fuels by using (say) solar panels or wind or something like that, we would inevitably very soon find ourselves in a world that is dangerously overrun with solar panels and windmills, or the like. There's no definitive end solution to this process.

SPENCER: When it comes to technology creating new problems, I think that people classically will think about environmental problems coming from technology or, more recently, they'll think about things like social media addiction. But the way you describe this, it sounds like you think this is a general principle. It's not just a couple of exceptions, but it's technology constantly creating problems. So maybe you could give a couple of more recent examples of the way technology has created specific problems that didn't exist before.

JUSTIN: I suppose my favorite example is the automobile, the innovation that made it possible for people to live further from their workplace than they previously did. And we know that Henry Ford was really keen on ensuring that Ford employees all had their own Model Ts, or whatever the model was, so that they could commute. And this fundamentally changed the layout of our cities, and led us all — or many of us, not me, and probably not you in Manhattan — led many of us to spend a good part of our lives in traffic jams, at risk of fatal collisions, polluting the atmosphere, for something that initially looked like an improvement, looked like something that would make our lives better, because we can get from point A to point B faster. It's true, you can get from point A to point B faster, but the pressures that require you to get from point A to point B might not actually be improvements. So technological innovation creates new pressures, which then just creates a new shape of life that we might not, with sufficient hindsight, see as an improvement over what we had before. And I think what is crystal clear in the case of the automobile is, in fact, to a greater or lesser extent, the general shape of technological innovation, and that characterizes everything we've accomplished so far, perhaps since the discovery of fire. [laughs] People often hear me talking like this and, very quickly, they'll resort to that pretty crude label of 'Luddite,' and this is obviously not simply Luddism. That creates a false dichotomy: either you love it or you hate it, and anyone who has a critical gaze or takes a long-term approach must ipso facto hate it and want to smash up all the machines. No, that's not the conclusion we ought to draw. Again, this is not an anarcho-primitivist critique or anything like that.
But it is an attempt to take honest stock of what technological innovation actually does in human history, in order to better anticipate the limits of the solutionist mindset.

SPENCER: It seems to me that there are two very different perspectives you could have on this. One is that, when new technology comes out, it creates problems that are about as big as the problems it solves, and so it doesn't get you anywhere net. You're advancing one way but you're making things worse in another. Another view is that it tends to create problems, but those problems are, most of the time, less bad than the things being fixed, which is more compatible with a view that, over time, technology has actually massively improved people's lives, even though it has created problems. And I'm wondering where you fall between those two?

JUSTIN: Here's an interesting thought experiment. Behind the veil of ignorance, in a kind of John Rawls sense, imagine you're in baby heaven, and you're gonna get plunged into the world. You can pick your time period; that's all you know, all you get to pick. So you can arrive in (say) 1000 AD when almost everybody's a peasant, and the life expectancy is 30 years, and the feudal lord has the droit de seigneur over your wife, and so on. [laughs] Or you can be born in the 21st century, and your life expectancy is 77, and you have at least a nominal liberty to go about your own life in the way you see fit, even if it's often hard to get the money to do that, and so on. But if you choose the 21st century, then you've got nuclear weapons that could destroy the planet hundreds of times over hanging over your head every second of your life, which is basically like leading your whole life with someone pointing a gun at you, if you think about it. And similarly, I could rehash all the other plausible apocalyptic scenarios, but we're familiar with all of these. If I were in that original position, I think I would take the 1000 AD life. So whether the current situation of human life on Earth is better now than it was 1000 years ago, I think I'm not going to say simply yes or no. But I'm going to say that it will always depend on which elements of life you take into consideration. And there are some pretty compelling considerations that would make you think that life in the 21st century is much worse, and that it's much worse because of our technological innovations. And these would include not just nuclear weapons, but also plastic and synthetic fertilizers and a number of other things that have made our position on this planet extremely precarious.
So in this respect, then yes, we do solve problems with technology sometimes, but on balance, the defense is going to have a pretty tough time against the prosecution if our modern technological lay of the land is put on trial.

SPENCER: I think a lot of listeners will be really surprised at that. I think the vast, vast majority of people I know would much rather be born today than be born in 1000 AD. Maybe you could unpack the case against modernity more, and see if you can persuade us that we really should rather be born in 1000 AD.

JUSTIN: I suppose in 1000 AD, the stakes were lower. You could find your village invaded by marauding hordes, but that would be the end of your village, and not the end of your hemisphere.

SPENCER: I'm a little confused about that because, absolutely, people care about the whole world. But if you're living in 1000 AD and the marauders come in and kill your family and murder you, to me, it's not clear that someone should be less scared of that than they should be of (let's say) global warming or nuclear war.

JUSTIN: Sure. Yeah, your village is what you know; it's your whole life. And it would be definitively upsetting if it were to be burned to the ground. [laughs] Of course, of course. And maybe this thought experiment isn't all that productive. But what I'm trying to get at is something that will help us understand my general law, or the intuitive conviction that I have that human life has overall gotten neither better nor worse, and that this is a law-like equilibrium. You can say life expectancy is longer, but where's the argument that that makes life better? You also need an argument for that because 70 is still basically equal to 35 when compared to eternity. [laughs] And so simply getting better measures on some of the indices that (say) economists like to consider when they're looking at quality of life, I don't think that is entirely compelling. I think there are other more broadly or more profoundly existential considerations that cannot even be touched upon by the kind of information economists study. Let's maybe step away from that and look at some narrower issues like, for example, pleasure, enjoyment, entertainment. And now, as someone who has been around long enough to have actually lived through technological revolutions, and to remember the prerevolutionary state of things, it's pretty clear to me that (say) virtual reality is no better than cinema for triggering our imaginations and for inciting us to do what we do with our minds. And given that I've experienced that, and I know that firsthand, it's pretty easy for me to extrapolate from that and say, virtual reality goggles are also no better than (say) listening to an elder tell a story by the campfire when that's all you had. Or playing with little sculpted twigs is just as good as the most sophisticated AI-driven electronic toys we have today. It's clear enough to me that our enjoyment doesn't actually get better because we have the same brains as our Paleolithic brothers and sisters.
And the brains are triggered to do their thing by different external objects, but that doesn't really change the nature of their thing. So maybe if you're not convinced by my claim of historical equilibrium in general — when I'm talking about the general kind of indifference as between life in 1000 AD versus life in the 21st century — if you're not convinced by that, then you might be convinced by a consideration of something narrower, like what I've just alluded to: the history of (let's say) external prostheses for the incitement of our imaginative faculty. I would contend that cave paintings do just as well as movies, do just as well as VR goggles.

SPENCER: I wonder here or (I should say) worry here that you may be taking your own preferences and assuming that other people have similar preferences. Because I think the vast majority of people would actually vastly prefer to live to 70 than live to 35. And it doesn't matter that, compared to eternity, those both round to zero. I think that the vast majority of people would enjoy movies more than cave paintings, and that people who only have access to cave paintings would also enjoy movies more than cave paintings, if they got to experience both and compare them.

JUSTIN: We do have almost laboratory-like settings for observing what happens when (let's say) broadly speaking, a civilizational shift occurs, where you move not from cave paintings to movies but (say) where you move from bows and arrows to guns. Or from reliance only upon your own two feet to a society that's built around human-equestrian symbiosis. I'm talking in particular about Native American history before and after contact with Europeans. This is a very vivid example. We know obviously that Native Americans wanted to get their hands on guns and alcohol and horses and all these things pretty fast. Did it make life better? Well, over several centuries, at least for those who survive, it gets somewhat difficult to say. But if you go back to the original context of the encounter and of the introduction of new technologies and substances and animals, what you find is significant trauma and significant disadvantage that accompanies the introduction of new technologies. And in general now when we have debate around (say) the few lingering so-called uncontacted groups in the world — in Amazonia, in the Andaman Islands — though I think uncontacted is a bit of a misnomer, whether we should let these people go on with their low life expectancies and their proneness to disease and perhaps with some pathologies that we've rejected in the modern world, like (say) child abuse, and what we would see as child abuse, and so on, the question is, "Okay, but what would their incorporation into modernity look like?" And the fact is that they're not going to get to join us at the higher levels, at the higher floors of modernity. The only way into modernity for an uncontacted group is through proletarianization, or through a move from the natural landscape or the traditional habitat that they've been in for generations into urban slums or on to reserves. And that is definitely not a good deal for them. 
So if there were a way for the introduction of new technologies to mean a seamless transition into the benefits of modernity, then perhaps the calculus would change. But we know from observing again and again, countless historical cases, that it just doesn't work like that. [laughs] It never works like that.

SPENCER: It seems to me there's one question: how can a group that hasn't been part of modern development get pulled into modern development in a way that's healthy for that group and good for that group? And I absolutely agree with you. There's lots of examples where, when the cutting edge technology meets a group that doesn't have it yet, it doesn't go well for that new group, often because they get exploited and they can't defend themselves against the new technology and so forth. But that is also, I would say, separate from the question of, okay, but what about the people who are riding near the edge of the technology? They've grown up with it, and they're continuing to grow with it. What kind of impacts does it have? I think virtually everyone would agree that technology has some harms, and also that it creates some benefits. It seems to me that you see it as, there's some kind of balancing act between the two, that there's some kind of equilibrium that the harms and benefits occur together, or in such a way that they balance each other. And I'm confused about why that would be. I guess I think of it as more like we're drawing from a distribution of technologies. Some of them are neutral; they don't make things better or worse for human values. Some of them make things better for human values; some of them make things worse for human values. And I tend to think that the mean of that distribution is somewhat positive. In other words, that on average, we're drawing more things and making things a little bit better than making things a little bit worse. And so, on net, we've made things better. The main exception I see to this is technologies that I think put the whole world at risk, whether it's technology that could help promote bioterrorism being more effective or climate change or dangerous AI, things like that.

JUSTIN: Yeah, I suppose what AI seems to be showing right now is that it might be the surprise that we had in the early 20th century. The letter Einstein wrote to FDR — I think it was in 1939, if I'm not mistaken — like, "Surprisingly, our research into the fundamental constituents of physical reality has caused us to stumble upon powers so great that they could destroy the world." That was not a one-time thing. That was not just characteristic of early 20th century physics, but it could in fact, be the general direction that all probing into the nature of reality, and that all efforts to learn how to more effectively manipulate reality, lead to. And so what I mean is that we're seeing perhaps — saying perhaps now, I'm not saying this definitively — we're seeing right now perhaps a similar development in information science that we saw almost a century ago in physics; that is to say, you probe too deeply, you get too much control over the object of your study, and sooner or later, you end up again with potentially world-destroying technology. And arguably, we've also done that over the past few centuries in our probing into the living world, like you've just evoked bioterrorism. So I'm not so sure that we can neatly divide off the potentially world-destroying paths of technological research from the neutral ones. [laughs] I'm not at all so sure. That said, one of the examples I like to discuss of win-something-lose-something equilibrium is indeed in the earlier history of information technology, with the rise of the printing press and the sudden massive circulation of printed books, which is arguably, at least distally, causal for countless people getting burned at the stake, and for the wars of religion, and all sorts of other nasty, violent stuff. 
You can see some kind of link between (let's say) the new ability to easily publish Bibles in the vernacular — for example, English — and the suppression of efforts to smuggle English Bibles across the channel, and burning all sorts of people at the stake for trying and so on. This is just an example. So was the printing revolution good? Well, it caused some turbulence for a while. But then overall, we all learned to read, and it made us smarter. Well, did it? What we learned from Renaissance historians, from people who've worked on the history of the book and its place in culture is that, in fact, what happened was that we just traded in one technology for another. And the older technology had the advantage of being more closely connected to techne in the sense of skill. So I'm thinking, for example, of Frances Yates's wonderful book, "The Art of Memory," which she published in the 1960s, on the incredible mnemonic structures that erudite medieval scholars built up in their minds in order to learn vast bodies of knowledge by heart, which they stopped doing at the moment they were able to store those vast bodies of knowledge on their shelves, on printed pages. And arguably, what we're experiencing in the early 21st century is something like a repeat of that kind of revolution, where now we don't even store the body of knowledge on our shelves, or bother to read the contents of what's on those shelves. We just have it at our fingertips on Wikipedia, prosthetically present at any moment we might choose to inform ourselves about it. And that's a profound transformation. Again, it's a transformation I myself have lived through. Whether it's overall good or not, I think, is a question that can't even in principle be answered. But from my point of view, it's not obviously progress either. It's more something like trading one techne for another.
And personally — and maybe this is again as you've already suspected, just my own sensibility, just my own idiosyncrasy, and also perhaps having something to do with getting older — personally, I see tremendous value in those technes where we would translate the Greek term 'techne' not as gadget but as skill; that is to say, those technologies, like the medieval art of memory, that are not externalized into an outer object, an apparatus of some sort, but that we carry around inside of us as learned skills. I think that's intrinsic to what it is to be a human being. And I think that each time we lose such a skill, we should think of it somewhat in analogy to the way we think of (say) species extinction or language loss. These are things that are intrinsically valuable.

[promo]

SPENCER: I find the printing press example to be strange because we went from a very rarefied class of a very small number of people that could read to allowing most people to read and, to me, that seems like a massive benefit. And I don't even see it as comparable, given how few people could read before the printing press.

JUSTIN: What's so good about reading, though? Why is reading so good? Reading is almost certainly going to turn out to have been a tiny blip in human history, something people did for a few thousand years before moving on to something else. And there is a form of knowledge that precedes the history of literacy, that has been around vastly longer than literacy in our sense, and that consists in reading the edges of leaves, reading the quality of the soil, reading animal tracks, and so on, which is, again, an activation of the human mind, the exact same human mind as the one that we use when we read. So indeed, if you look at the past 500 years, from some kind of mid-range view, or maybe even a zoomed-in view, it's certainly fair to say that the literacy rate is going up. But what is literacy? It's a blip. It's a minor point about a short period of human history. It's not something human beings qua human beings ought to be doing.

SPENCER: Well, I don't care about reading for its own sake. What matters to me is that people get the things that they fundamentally care about, that they are able to achieve their intrinsic values. And to me, the literacy rate going up and the widespread availability of books is good, because it helps people get the things that they want, the things that they intrinsically value.

JUSTIN: Yeah. Okay, so this is getting us to the heart of the matter because what we intrinsically value changes, depending on what the available technologies are. A Paleolithic person who knew how to read the information in the edges of serrated leaves, for example, or in the quality of the soil or whatever, was not lacking anything he or she intrinsically wanted, any more than you and I are lacking something when we acknowledge that we are unable to teletransport. And so this really gets to the heart of the matter about the history of technology, that new technologies don't so much give us something we were lacking, as add something more to the list of things we need in order not to be lacking.

SPENCER: I think you make a good point that sometimes it creates new intrinsic values that couldn't have existed before. But also, if you're a Paleolithic person, you and your family might have been being bitten by flies all the time, and technology does alleviate something that does relate to your intrinsic values, which is not constantly being in suffering. Or technology makes it possible to save your daughter from a disease that you don't want your daughter to die of, because you intrinsically value your daughter and so on. So I think it's both.

JUSTIN: Yeah, it's curious. I don't mean to be pushing too hard a line and I realize I am kind of stating my wariness boldly, in a way that might make it sound like I'm dogmatically committed to it, and I'm not. I acknowledge that we are, in certain respects, a very special species because it's intrinsic to humans, more or less from the beginning of what's sometimes called the human revolution. I'm thinking of paleoanthropologists like Colin Renfrew here. It's pretty clear that being an anatomically modern human, over the past 150 or 250 thousand years — the timeline is always being revised — means, by definition, relying on technology, relying on external tools to make our lives (I don't want to say better but to make our lives) distinctly human, and that was true ab initio. It's not as if we've fallen from an earlier state of perfect harmony with nature or anything like that. Daniel Dennett has some wonderful reflections on the process whereby innovations become necessities, and this actually happens in the course of evolution. Even before you can speak of anything like hominid technological innovations, you've got, for example, the loss in our primate lineage of the ability to synthesize vitamin C. Other primates can do that; we can't. So if we go out to sea — that is, out to the ocean — and we have no citrus, we will die of scurvy. So we lost that ability because we started eating fruit, and eating fruit became, among other things, a cultural glue, going out and collecting fruit together. Similarly, and more obviously technologically, with the innovation of cooking, which serves to partially digest our food when it's still outside of us, and to do so in a collective social way, that's the upside. The downside is that we're a lot less able than other animal species to just go around and eat stuff we find in our environment, raw. So these are trade-offs. 
And it's so deep, this problem of the trade-off that, as the case of vitamin C shows, it even precedes our beginnings as homo faber, as a species that makes things even before that gets going. So it's a real problem, and it's intrinsic just to the nature of the default setting of our existence. And I don't have any answers. And again, that's why I'm not an anarcho-primitivist; I don't think there's any original harmonic state to go back to.

SPENCER: Let's jump to the next topic, which is the idea of algorithmic content and our being replaced by it. How would you set this up for us?

JUSTIN: Well, this is something I did manage to cover. I was already thinking about this in my 2022 book. I recognize that a lot of this is just putting my own preferences and idiosyncrasies on display, and also putting my age on display, as someone whose early aesthetic and intellectual sensibilities were shaped by what I would consider non-algorithmic processes, that is to say, mostly random events. For example, you go to the used record store and you look in the bargain bin, and you flip through the records that are in the bargain bin, and you find a truly heterogeneous collection of artists and styles. It's not algorithmic. I can't find that on Spotify or iTunes today because they are deploying criteria of similarity — beats per minute or whatever — that are criteria that artificial intelligence is capable of recognizing and doing something with. So I'm being served up music, one song after another on Spotify, that AI predicts will be to my liking because it's supposed to have some similarity to what preceded it; whereas, in fact, what would be to my liking is to be able to go back and have the sort of experience that I had with the bargain bin at the used record store, which is to say, to experience a truly heteroclitic barrage of different styles and artists. That's getting harder and harder to do. The portals for what I consider to be true aesthetic awakening are narrowing — people are going to disagree with me and they're going to say I just sound like a crabby old person, fine — but things really have changed. [laughs] And we need to take stock of the historical significance of that change and what it represents for our future as aesthetes, as creatures that I think fundamentally require encounters with art in order to thrive. So that's one side of it, and I suppose that's the negative side or the side that I'm most pessimistic about.
But there's also the rise of AI art and, ultimately, the emerging situation where, as art consumers, we're not going to need artists, because we can just have AI generate the works that we desire directly, without human involvement. The good side of that is perhaps a move back to what I understand of the primary lineages of aesthetics in the early 19th century, particularly with people like Friedrich Schiller, for whom the key element of the experience of art is not consumption but creation. And once we are in a situation where we don't have a certain class of people — the consumers — relying on another class of people — the artists — to provide us with the art that we want to consume, we might be in a situation where each of us can finally say, "I'm going to be an artist, and I'm going to create the kind of art that I want." Now, am I going to use AI tools to do this, or am I going to go more low-tech DIY and get out the construction paper? [laughs] That's the great thing about being an artist; it's up to you, you can decide. But the prospects for art consumers, pure consumers, in the 21st century are bleak indeed.

SPENCER: It's very interesting because now that you have tools like Midjourney that can create really beautiful looking images — and here, by beautiful, I just mean that the average person would rate them as beautiful, regardless of what you, in particular, think — to some people, this is a horrifying thing because it's basically replacing genuine human creativity and passion with something that's like some kind of weird weighted average of all the images ever created on the internet. To other people, this is a wonderful thing because it now means that you can create beautiful things yourself that used to require an incredible amount of skill, and now you can make them in a few minutes. So I'm wondering if you also see a positive here or you think it's really just a negative.

JUSTIN: No, no, no, that's precisely the positive element that I wanted to emphasize. I think this is a golden age for the universalization or the democratization of artistic creativity, which is really the more important side of the pairing between creation and consumption. But at the same time, I also think that this is an era of tremendous philistinism where art consumers are increasingly ill-equipped to search out for themselves the kind of art that will really build them up, and to understand the useful, meaningful criteria for how to search it out. That's a real crisis. On the other hand, all along, there have been more important things to do with art than to consume it, [laughs] in particular, to create it. And so indeed, the positive side is the democratization of artistic creation.

SPENCER: So what is the difference between art that builds us up and art that is irrelevant or has no positive benefits?

JUSTIN: Well, I suppose the difference is precisely between art and what is being called content. I think of content as fundamentally a different species, a different category of cultural artifact than a work of art, because it is being churned out according to algorithmic criteria. And of course, there are hybrids; I don't want to open a can of worms here, and I haven't seen the movie, but from what I've read, "Barbie" is the result of the collision between some kind of creative imperative that Greta Gerwig herself feels inside of her as a creative, the collision between that and the armies of suits with their audience surveys and their eyeball tracking, saying, "No, no, we've got to put in more of this or that." Now, with the recent Hollywood strike, we're becoming aware of this plausible future scenario, near-future scenario, where it's only the eyeball tracking equipment that determines the entire content from start to finish of a given entertainment, like we need an explosion 36 seconds in and not 38 seconds in or whatever, because that will maximize audience captivation. And when you listen to other Philistines like Jeff Bezos talk about Amazon's entry into content production, he said something really naïve and almost sweet, like, "I think I understand a good story and I've got the equipment to churn good stories out." What he means is algorithmically-maximized audience captivation, and, yeah, you're gonna succeed, you're going to captivate your audiences, but we're going to lose from the landscape anything that looks even remotely related to what, in the 20th century, could still respectably be called the avant garde. That is to say, difficult things to watch, like Andy Warhol's single-shot film of the Empire State Building that lasts eight hours. What algorithm is going to tell you to go and sit through that, or tell you that it would be advisable to release such a thing into the world? No algorithm could conceivably lead us in that direction.
So it's the death of the avant garde. And indeed, we see this with what little we have of an intellectual life in 21st century United States, mostly people from the publishing and academic worlds bickering with one another about mainstream entertainments like "Barbie" or "Oppenheimer." That's a dismal state of affairs. In the previous century, intellectuals did not care about mainstream entertainments like that because they were trying to push themselves to explore the most difficult and least inviting creations of the human imagination you could come up with. That is the artistic impulse. That is the true artistic impulse. And it's fundamentally at odds with everything the concept of 'content' suggests. I get a bit heated about this and again, I'm stating things in the boldest way possible, even if I understand that it is perhaps overbold.

SPENCER: No, I get that. And I could see a couple of different things coming out of this regarding, okay, why can't algorithms recommend things that are deeper or more avant garde? One path there I see is that the things that we can measure may just not be the right things. If you could really measure what is deep and profoundly affecting to people, then maybe you could algorithmically recommend that stuff. But maybe that stuff is just really hard to measure, in some cases impossible to measure. On the other side, maybe the issue is that new things can't really be recommended: if you have no data on them, if they're really so novel, then how are you going to recommend them? You don't even know what to compare them to. Would you say those are both aspects of what you're talking about?

JUSTIN: Well, yeah, an algorithm knows you just sat through Andy Warhol's "Empire" and stared at the single shot of the Empire State Building for eight hours. So what is it going to tell you next? You should stare at this 12-hour shot of the Chrysler Building or something like that. And that would be utterly wrong. It's not that you want to just see long, dull movies about skyscrapers in general. It's that this one intervention was doing something in particular that I would say, it's going to be pretty hard for an algorithm to learn what that is because algorithms do not have a share in this very difficult task of free play of the imagination. Almost by definition, if you look at (say) classical aesthetic theory — again, to go back to Schiller or to Kant — what is artistic genius? It is precisely the move made by an artist that can't be reduced to a rule, that seems to be governed by no rule. So you can write a handbook of introduction to oil painting for dummies, or whatever, and you can give the broad outlines of what you're supposed to do when you're mixing your pigments and so on. But you cannot include a concluding chapter of your painting for dummies book on how to be a genius painter. [laughs] You just can't do that, and that's the difference between competent entertainment and high art. And we can debate the historical circumstances when this ideal of high art emerged and whether it's really something worth preserving. I happen to think it is, but in any case, whether it's worth preserving or not, we need to take stock of the significance of its loss.

SPENCER: I want to point out a couple of different types of algorithms that are used for recommending because I think it's relevant here. One type of algorithm looks at the attributes of the item being recommended. So Pandora, for example, with music, my understanding is that they went and they said, "Okay, what are the different attributes of songs?" You've got rhythm and you've got genre, and so on, and they added all these attributes to different songs, and then they use that to recommend. But there's a very, very different type of recommendation algorithm, sometimes called collaborative filtering. And the idea of it is, it looks for people that like things like you, and then it recommends other things that they like. And so it seems to me, the former is gonna be unlikely to get at what you're talking about. But maybe the latter approach to collaborative filtering could recommend things that (say) would be really appealing to someone who watched an Andy Warhol film, eight hours of the Empire State Building, because maybe there's other things that that person would like that you also would like.
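The two families of recommender Spencer contrasts can be sketched in a few lines of Python. This is a minimal illustration of the second family, user-based collaborative filtering; the viewers, films, and ratings here are all invented for the example, and real systems (Pandora, Spotify) are far more elaborate.

```python
# A toy user-based collaborative filter: score the films a user hasn't
# seen by the ratings of other users, weighted by how similar those
# users' tastes are to hers. All names and ratings are invented.
from math import sqrt

ratings = {
    "alice": {"Empire": 5, "Chelsea Girls": 4, "Barbie": 1},
    "bob":   {"Empire": 5, "Wavelength": 5, "Barbie": 1},
    "carol": {"Barbie": 5, "Oppenheimer": 4, "Empire": 1},
}

def similarity(a, b):
    """Cosine similarity over the films both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[f] * b[f] for f in shared)
    norm = sqrt(sum(a[f] ** 2 for f in shared)) * sqrt(sum(b[f] ** 2 for f in shared))
    return dot / norm

def recommend(user):
    """Rank unseen films by similarity-weighted ratings from other users."""
    scores = {}
    for other, their_ratings in ratings.items():
        if other == user:
            continue
        w = similarity(ratings[user], their_ratings)
        for film, r in their_ratings.items():
            if film not in ratings[user]:
                scores[film] = scores.get(film, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("alice"))  # → ['Wavelength', 'Oppenheimer']
```

Because alice's ratings track bob's (both loved "Empire" and disliked "Barbie"), bob's other favorite ranks first, without the algorithm knowing anything about the films' attributes. That is exactly why this family could, in principle, surface an eight-hour Warhol film to the right viewer, where an attribute-based system would not.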

JUSTIN: Yeah, well, certainly, that looks like just using technology to facilitate human interaction and communication and that is indeed very different. In general, though, there's a danger that we won't have anyone left in this world to share our tastes and sensibilities if this world goes full-on Philistine, which is what we can suspect is happening. For example, it drives me crazy. The only time I watch movies these days, pretty much, is when I'm on long-haul flights. It's the only thing I can do. And if you read the little description of the movie in question, it typically attempts to summarize the plot. Now, from my point of view as (what I think is) some kind of non-Philistine cinephile, who gives a shit about the plot? That's not what counts. What counts is the year it was made, the director whose vision it was, and maybe some qualities of the cinematography, what kind of film it was shot on, and so on. And if we're living in a world where the human priorities that shaped the algorithms are so off target with respect to what really matters about the work in question, then I'm pessimistic about even the algorithms that bring us together in the way you described, that there will be anything for the two human beings brought together in this way to meaningfully share. I'm sorry, I know I'm being very, very opinionated and blunt today. [laughs] I don't know what's gotten into me.

SPENCER: No, it's totally fine. You clearly are very passionate about your view. But I can't help but feel that your view has underneath it, this distaste for the way the modern world is. You have such a strong distaste for it that it makes you want to attack the modern world. Is that unfair?

JUSTIN: Well, that's pretty fair. I think most of my (in scare quotes) "content consumption" is reading 19th century literature. And I feel like Henry James was at the top of an art form that penetrated into the depths of the human soul, and that this peaked sometime around 1880. [laughs] And I look for traces of that in the contemporary world and it's hard to find. So I admit, I confess that I'm backward-looking and I'm somewhat lost in the contemporary world. But I also think that people like me who are lost in this world, and who are trying to hold on to, and bring forth, and remind other people of things that have been lost, I hope that we have something to provide to our era, to our contemporaries. Otherwise, we're just in a constant stream of the eternal present and constant forgetfulness.

SPENCER: To a listener who might say, "Well, aren't there a lot of amazing pieces of art and music and film being created today? We have more stuff being created today than ever before, of all different genres and varieties." What would you say to that? Would you say that they've lost something essential, and in some fundamental way, they're lower quality?

JUSTIN: Yeah, definitely, I think if you look, for example, at the turn to mostly CGI-dependent superhero movies in Hollywood over the past generation, I think this is just clearly Hollywood in its death throes, that this will not go on. They've just defaulted to an approach that is the safe route — the one that works financially for now — and abandoned any hope of being central to the culture in the long run. Meanwhile, I think — and I say this rather categorically — that the Hollywood output has simply collapsed over the past generation. There's virtually nothing of any value coming out in terms of mainstream cinematic entertainment. And I see this as happening for obvious, large-scale economic and material reasons that have mostly to do with the fact that Hollywood has been rendered otiose by new technologies, so creativity is surging up organically in other places. And when I'm pessimistic, and when I'm looking for the avant garde, when I'm looking for genuine creative impulse on display, I can find it still, even now, in the subcultural memetic productions of mostly anonymous teenagers on the internet. I think that's where the culture is at right now. And I would take the memetic exuberance of young people over a multimillion-dollar Hollywood blockbuster any day. It's a lot cheaper, for one thing. Cinema's dead, that's certain. Literature seems basically to be dead, again, in consequence of the same economic and material transformations that have turned Hollywood into a content mill. These days, in big trade publishing, they are doing nothing but the equivalent of eyeball tracking, trying to predict what phrasings will be the most profitable, down to the sentence or the word level. I know this firsthand; I've worked with agents trying to refine my own book proposals into the kind of proposals that publishers will respond to.
And agents have a very, very clear idea of how much money is being lost each time you deploy a multisyllabic word or a word in French or German. [laughs] They're literally docking it from your advance contract if you dare to throw in a foreign term, for example. So books are dead; movies are dead. But of course human beings are going to keep on being their creative selves, just like they've been since the Paleolithic because you can't stop them, right? The problem now is just knowing where to look for it.

[promo]

SPENCER: I certainly can see why you would have a problem with Hollywood trying to turn movies into economic machines, and why you might have a problem with the way some publishers operate. But we also live in an era with an enormous number of self-published works, in books and also in film; so many indie films are produced every year. Do you also view those as valueless? If so, what's wrong with them? These are not things that are done by Hollywood or forced on people by their publishers or agents.

JUSTIN: Yeah, certainly, every sign of the vitality and survival of the DIY impulse gets me right in the feels, as they say. I am really happy to see it. And I admit I'm not terribly attentive to all the underground bubblings that are going on. I can say that the somewhat mainstream indie film circuit strikes me as pretty lifeless, or as imitating an idea of what art should be that's become sclerotic and, so to speak, pantomimed. Again, as I said, I brought up memes and memetic subcultures. But on the other hand, with writing, with music, I think this is indeed a promising era for DIY creativity, which is (I think) the only hope for the survival of art in a world that is so rigorously algorithmic and financialized.

SPENCER: Just to really clarify, besides saying that you don't enjoy it, what you actually see as wrong with the self-published works that are happening now is that you view them as not novel, not innovative?

JUSTIN: No, no, no. Again, I'm on the side of DIY. I'm a Substacker myself.

SPENCER: But you're still saying that the self-published stuff is somehow less vibrant than historical work, no?

JUSTIN: No, definitely not, no. I guess I did say, with respect to indie films, my (again) not very deep take here — just because I've been out of it for years, I'm less into it than I used to be — my not-very-well-informed take is that indie films are a faint echo of some dated idea of what independent art should look like. There's a film critic I really like to read, Nick Pinkerton, who's also on Substack, and he wrote some brilliant pieces about live streaming. These cretinous young men who go out and do stupid stunts and stream it on the internet, Nick Pinkerton was arguing that that is where it's at right now; that is real DIY cinema in a lineage that goes back to (say) Dziga Vertov in the silent film era. I found that very compelling. What I don't find compelling is (say) Christopher Nolan, or maybe that's not a very good example. Wes Anderson, the indie-branded Hollywood content, I don't find it compelling at all. In general, though, independently-produced DIY output is great. It's where it's at. I produce it myself on my Substack. And I think that's the most valuable lifeline for myself as a thinker and a writer. And I want to encourage more people to do it and also, at the same time, to be aware of the dangers of the omnipresent forces of financialization and algorithmization. Substack is a curious thing because it's, to some extent, prone to the same forces as social media where, on Twitter, people end up all saying the same ridiculous things as one another because they see it incentivized in the form of likes or faves or whatever. On Substack, we are somewhat protected from that because it's just ourselves alone, writing. But nonetheless, even there, we have some idea of which things to say to get the most engagement and ultimately, to get the highest returns on our efforts. So there's no easy solution and even DIY stuff is not completely safe.

SPENCER: Do you see those two — one, algorithms and, two, financialization — as the main culprits to art declining, in your view?

JUSTIN: I would put financialization first. That's the evil number one. Because algorithms in themselves are neither good nor bad; it's how they're used. And the problem is simply that in the current political economic order, they can only be used for maximizing profit. And that's how we end up with "Ant and Bee" or whatever movies like that are. Was that the name of a movie recently? Some superhero movie where someone turns into an ant and the other one turns into a bee, and it's all CGI? I don't know. I just see ads for this stuff on billboards or on the sides of buses when I'm walking around Paris. I don't know what they are. I just know it's terrible.

SPENCER: Just to point out the glimmer of light from your perspective, you mentioned some interesting stuff happening, from your point of view, from anonymous young people on the internet. What is that stuff that you're seeing that's getting you excited?

JUSTIN: I just think the kids on Twitter are funny. Sorry, I meant "Ant-Man and the Wasp," that's what I was referring to. [laughs] I just think there's a kind of vitality, an ebullience in this quick-witted takesmanship that you see on social media that really makes me think, "This is the vanguard of our culture." And this is often accompanied by rather ingenious innovations in the presentation of information in a way that combines text and image, i.e., the classic meme. I think there's incredible stuff happening there. Unfortunately, I find a lot of it politically noxious even when I recognize that it's the aesthetic vanguard. I think, for better or worse, right now the people who are taking up that vanguard and running with it are people who fall, broadly speaking, into what I see as the reactionary Right, and thus, in a sense, we're in a kind of 1920s moment where you've got futurism being pursued both by fascists like Marinetti and by communists like Mayakovsky. But then eventually, the communists veer off into some dull, literalist socialist realism, and the only people still doing the exciting boundary-pushing stuff are the reactionaries. I feel like we're in a situation like that, and this gets us to another topic, which is maybe the decline of the humanities. I feel like, in the academic humanities, where political progressivism prevails, you find almost zero imagination at work, as a corollary to that point. So it's a peculiar moment right now. In my childhood and adolescence and young adulthood, I just took it for granted that it was the Left that was always going to be at the forefront of artistic creative innovation, because my earliest reference points were like Abbie Hoffman pretending to levitate the Pentagon, and the Right-wingers were a bunch of stodgy, boring dads in suits. So I just took it for granted that that's how things were, and over the past decade, the valence has completely shifted in a way that I find kind of delirious.
But indeed, I think the kids on social media are infinitely creative. And if they would just stop flirting with such noxious ideas so often, I would say they're our hope for the future. [laughs]

SPENCER: Before we wrap up, two topics that we had planned to perhaps talk about are the decline of academic humanities and also of liberal democracy. Did you want to just give us a few comments on that before we finish?

JUSTIN: Well, I would say decline of the academic humanities; I wouldn't say the decline of liberal democracy. But certainly there are significant threats to liberal democracy that place it in a more precarious spot than I would have thought possible earlier in my life, prior to the 2010s, let's say. Maybe I'll just really quickly take on both of these at the same time because the two are related. When you look, for example, at the number of undergraduates who say things to the effect that our First Amendment liberal rights should be abandoned in cases where it involves insult to marginalized groups or to persecuted minorities, there are obviously discussions to be had there about the boundaries between free speech and hate speech. I'm aware of all of this. At the same time, the insouciance with which the younger generation today speaks of moving beyond sacrosanct liberal rights, I think, is rather disconcerting, and doesn't bode well for liberal freedoms in the coming years. And this is connected, I think, to the decline of the humanities, a field I know well. I've been in the academic humanities for a few decades now. And I think I got into it because I saw it as an area that shared at least certain basic features with the arts. That is to say that you get interested in these things — in (say) 17th century metaphysics or the history of the life sciences in early modern Europe or whatever — you get interested in these things because it's pleasing to your imagination. It's an incitement to look at the world and to behold it in wonder. And since I began my career, that understanding of what you're doing in the humanities has just vanished. It has just gone away and has been replaced by a new understanding. Again, like the transformation of Hollywood, like the transformation of publishing, this new understanding is being driven by material and economic forces that are too large for anyone to really do anything about.
But the result in the academic humanities is that one must now continually demonstrate and reaffirm the relevance of one's domain of interest to solving contemporary problems as they happen to be understood in the contemporary moment, rather than looking for strange and forgotten life worlds that might add value to our contemporary life in ways that we cannot foresee, that we cannot dictate in advance. That has just entirely disappeared and, indeed, at this point, I no longer see the academic humanities as really doing their duty of preserving or, indeed, cultivating anything of real value. And so what's the alternative if you continue to value those things? Well, again, just like with movies, just like with writing, it's DIY, you just do it yourself. [laughs] Because Hollywood's not going to save us. Farrar, Straus & Giroux or Simon and Schuster aren't going to save us. And the universities definitely aren't going to save us.

SPENCER: I think some people might wonder, why would having to justify what you're doing in terms of value in modern society, make it not have value? Because I think that's essentially what you're arguing, that that destroys the value of it.

JUSTIN: I've seen, for example, a sudden transformation of the study of the history of philosophy in an attempt to make it fit with our contemporary, early 21st century American understanding of the value of diversity. And I often have trouble articulating this because I end up sounding like I'm some kind of reactionary myself, and I really don't think I am. But I can assure you that people in classical India or in early modern Europe did not carve up social reality in the way that we do today under the banner of diversity. And it fundamentally distorts the world as they understood it to talk about (say) Indian philosophers from the fourth century BCE as philosophers of a marginalized ethnicity, for example. They weren't marginalized. They were Brahmins. They were at the very, very top of an elite and extremely hierarchically structured society. I teach Indian philosophy, and I have a lot of trouble teaching Indian philosophy as a way for the administration to cover a diversity requirement, if that makes any sense. By retrieving and exposing the world in the way that an ancient Indian philosopher or an early modern European philosopher saw the world, we're exposing ourselves to other possibilities, to other ways of understanding and processing social reality; and to make it relevant to our contemporary social reality in some kind of a priori, dictated way is just fundamentally anti-intellectual, fundamentally unfaithful to the project of humanistic inquiry as it had shaped up since roughly the 18th century. And I have seen it collapse, literally, since I began my career two decades ago. So again, that's why I think there's still hope. It's not like we're forgetting about classical Indian philosophy or anything like that. It's just that our institutions are distorting it because of short-term and ultimately small-minded imperatives.

SPENCER: Would you say that, because of a desire to put these kinds of works through a modern lens of how to do good, that's the source of the distortion?

JUSTIN: That's a big part of it. That's one way of seeing it. I would put it more extremely. What we're seeing is a bunch of non-intellectual administrators — that is to say, people who have never seen it as part of their lot in life to be intellectuals — we're seeing non-intellectual administrators dictating ever-changing priorities that have nothing to do with what humanistic inquiry actually is, in the way we have built it up over the centuries. And these priorities will change. For the past few years, it's been about DEI as the central value in the mission of the university; that's diversity, equality, inclusion.

SPENCER: Equality or equity?

JUSTIN: Equity, that's right, and they make distinctions on that point. The recent Supreme Court decision is going to complicate that. But for all we know, there could be a Right-wing regime in power in the next few years that's going to tell us that our real mission is to draw out and celebrate our national greatness or something like that. And then we're going to have a bunch of idiot non-intellectual administrators in our universities making that the central mission of the university, at which point, us real humanists will also be saying, "No, no, that's not what we're doing." So it's this subjection to administrative vicissitudes, these short-term changes in the way the wind is blowing, that I'm objecting to, not diversity as such, obviously not. [laughs] And I feel throughout the aughts and the 2010s, I was, I think, one of the louder people saying we need to teach more non-European philosophy, we need to find new ways of drawing out marginalized voices and all of that, and I absolutely believe this. But I don't believe it because some DEI functionary is telling me that this is now the mission of our universities. I believe it for sound intellectual reasons.

SPENCER: It seems like an overarching theme in this conversation is short-term narrow-minded incentives as opposed to people pursuing things based on their own motivations and their own creative enterprise. Would you agree that that has been a thread through many of these critiques?

JUSTIN: Yeah, yeah, definitely. And, again, maybe what this really amounts to is just taking my pulse, [laughs] checking in and seeing where I'm at, circa 2023. And indeed, I am at a point where I only want to pay attention to things for my own reasons. And these are reasons that have grown up within me organically over the years, and I don't want those threatened by outside forces, whether they are algorithms or human resources bureaucrats, or any of the other forces that are (as I see it) aggressively trying to shape the way we think.

SPENCER: Justin, thanks for coming on.

JUSTIN: Yeah, thanks for listening to my philippics.

[outro]

JOSH: A listener asks: If you were going to learn to speak a foreign language, how would you do it?

SPENCER: Yeah, so I don't speak any foreign languages. I do know a bunch of programming languages but I don't think that counts to most people. I always felt like I was bad at foreign languages in school, and I think part of it was just that it didn't come naturally to me, and part of it, I didn't have a lot of interest in it in high school. I did study some languages, like I studied French, but I just didn't make much progress and wasn't that interested. If I was gonna approach it today, I think I would do a few things. One, I think I would use spaced repetition, basically learning a concept or idea or sound or word, and then making sure to review it shortly after to make sure I still remember it, and then if I get it right, make a longer delay, and then review it again. And with each review, making sure the review is a quiz, not just passively rereading it, but making sure it's a quiz each time. So we built a tool, Thought Saver, that can help with this, that can do automated spaced repetition for you. So I would definitely use that. In addition to that, I think immersion... People I speak to who speak foreign languages just say immersion works really, really well. So in addition to doing flashcard type quizzing, really putting yourself in that place where you can speak that language all day long, and you're forced to do it, I think the two of those together probably would accelerate learning a tremendous amount.
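The review schedule Spencer describes — quiz yourself, then lengthen the delay after each success and reset it after a miss — can be sketched in a few lines. The doubling rule below is an arbitrary simplification for illustration, not the actual algorithm Thought Saver or any particular flashcard app uses.

```python
# A toy spaced-repetition scheduler: each correct quiz doubles the
# delay before the next review; a miss resets it to one day. The
# doubling rule is a deliberate simplification for illustration.
def next_interval(current_days: int, answered_correctly: bool) -> int:
    """Return the number of days to wait before quizzing this card again."""
    if not answered_correctly:
        return 1  # forgotten: review again tomorrow
    return max(1, current_days) * 2  # remembered: wait twice as long

# A card answered correctly four reviews in a row drifts out to
# 2, 4, 8, then 16 days between quizzes:
interval = 1
for _ in range(4):
    interval = next_interval(interval, True)
print(interval)  # → 16
```

Real schedulers (SuperMemo's SM-2 family, for instance) also adjust a per-card "easiness" factor based on how hard each recall felt, but the core idea is this expanding-interval loop.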
