CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 178: Journalism in the age of AI (with Dylan Matthews)


October 5, 2023

Will large language models (LLMs) replace journalists any time soon? On what types of writing tasks do LLMs outperform humans? Have the US news media become less truth-seeking in recent decades? Or is truth-seeking behavior merely an aberration from a norm of propagandizing? How should we redistribute economic surplus from AI? Have any AI companies committed to a Windfall Clause? Instead of bothering to negotiate with us, wouldn't a superintelligent AI be able to get much more done by first wiping us all out? What are some subtler or less-well-known ways subscription models reshape incentive structures for journalists? Why is collective action so hard?

Dylan Matthews is a senior correspondent at Vox, where he cofounded Future Perfect, a section devoted to exploring ways to do good. He writes frequently about economics, philanthropy, global health, and more. You can email him at dylan@vox.com.

NOTE: This episode was recorded live at EAGxNYC!

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. This episode was recorded live at the EAGxNYC Conference. In it, Spencer speaks with Dylan Matthews about the impact of AI on journalism, the print and media industries, and the future of labor under AI.

SPENCER: Thank you all for coming. This is our first ever live podcast, so it's very exciting. And I'm really excited to have Dylan Matthews here.

DYLAN: I'm excited to be here. Thanks for having me.

SPENCER: So Dylan has really been at the forefront of journalism related to many different effective altruism cause areas. Today, we're gonna talk about a bunch of interesting subjects. But let's start with large language models, because they're on people's minds these days with ChatGPT. Are LLMs coming for your job, Dylan?

DYLAN: I think I've had some specific worry about this because of the kind of journalism that I do. There are certain kinds of journalism where it's hard to imagine LLMs replicating them. If you're a war correspondent in Ukraine and your job is talking to soldiers, you could imagine a sort of bot that WhatsApps soldiers and tries to interview them, but we're pretty far from that. Similarly, very deep investigative stuff requires building trust with sources, a lot of finesse and persuasion — I know some cybersecurity people are really worried about LLMs getting advanced abilities in that area — so maybe if we have models that are capable of doing really complex social engineering, then investigative journalism won't be far behind. But a lot of journalism is explanatory or analytical, and that's, in particular, the kind of journalism that I tend to do. I think there are still some distinctive aspects of that that are harder to automate. For some opinion things, what you care about is the credibility of the person offering the opinion and their expertise and knowledge, and a byline from ChatGPT is not the same as a byline from a former cabinet official or someone with a PhD in a relevant topic. But one of our ideas in starting Vox was that there are a lot of topics in the news where mainstream coverage of day-to-day updates assumes a lot of background knowledge that a lot of people don't have. If you read a story about a front in the Ukraine war where there has recently been movement, you might not know why that front is important, what has been happening there for a while, the context for how that happened. We were trying to fill a niche there. A lot of it is doing interviews, but a lot of it is also synthesizing information that's out there online, that's been published in government reports and things. And it does seem like we're making some substantial progress there in terms of LLMs' abilities. More generally, I'm trying to think not just in terms of what they can do now; my understanding from reading some of the scaling laws literature is that these things are improving quite rapidly. And there are a lot of jobs like that. Journalism is one of them, but there's also industry analysis for banks, other jobs where people write a lot, and a lot of jobs within companies that involve a certain minimum of writing and analysis. And I think it's an interesting question for me and other people in journalism to think about: what are the skills we should be investing in now, relative to where things are going? Skills that have historically been very valuable may be differentially affected by shifts in these technologies.

SPENCER: I was talking to an entrepreneur. His company has a bunch of marketing copy that he had written. They do a bunch of web pages about the topic of the company. He told me that he fired the three writers they had on staff, and now they just have one person basically plugged into a ChatGPT equivalent. And he says the quality is equally good. I don't know if he's right about that, but assuming he is right, it suggests that there's at least a certain type of writing where LLMs might be almost at parity with humans. And I'm wondering, having played around with ChatGPT, where do you see it failing to do what you do well, and where do you see it doing almost as well as you?

DYLAN: I think it's a very good synthesizer of information. I would distinguish here between Bing and ChatGPT as an interface, because part of what Bing is trying to do is bring in actual citations to news articles and things. And there's a degree to which I worry about a breakdown there: are these articles eventually going to be written by LLMs, and then your LLMs are citing them, and you have a recursive cycle of garbage? But for now, that seems to be an effective way of avoiding certain kinds of hallucinations. Sometimes I will ask about certain papers and ChatGPT will make up citations. Sometimes I'll be like, "Well, that does seem like a philosophy paper that Elizabeth Harman would have written." But it just turns out that she didn't. But with Bing and things that are actually trying to source things online, they get around that to an impressive degree. I'm thinking of it through industries. There are thresholds for how good things have to be, and whether 90% of the way there is good enough seems to differ a lot between industries. In ad copywriting, 90% seems good enough: they have ways of A/B testing, of running experiments between markets, and if you have a basically competent written ad campaign of the kind that can be produced by these systems, that gets you most of the way there. Law, I would suspect, would be near the last, just because a huge amount of what people are paying for with law firms is just sort of peace of mind that they're not running afoul of something. And so, 80% of the way there is almost worthless.

SPENCER: And seeming good is very different from being good. Because that little clause that turns out to screw you over when you get sued, that's what really matters.

DYLAN: Right. So, I'm trying to think it through. And it's a tricky thing, because you can try to distinguish between job categories. This is one way that a lot of research has tried to think about labor-market impacts: you go through descriptions of jobs, and then you subjectively code how automatable each seems, which is what I'm doing in this conversation in an ad hoc way. I don't know how good the track record of that has been historically. There was a famous and very widely cited 2013 article from some Oxford researchers. Its claim was that some huge percentage of jobs would be automatable in the next 10 years. A trivial percentage of them have been automated in the ensuing 10 years. And so, there's part of me that's skeptical that this is an epistemically valid way to do this projection. But I think it matters because, like a lot of journalists, I've spent a lot of my life trying to learn how to explain decently technical topics to lay audiences, and if that is not actually a worthwhile investment as a skill to develop, given where things are going, that seems important to know. I don't know what your sense of this is. You do a lot of public-facing communication, and I imagine you have some thoughts about this.

SPENCER: Well, I think for me, when I try to use ChatGPT or similar technologies as a writing tool, the prose it writes is fine, it's just not what I would say. So, it ends up being of limited utility because I'm like, "Okay, it could generate an essay about that topic. I just disagree with it." Or it doesn't capture the nuance of what I'm trying to explain. But I'm wondering, if you try to simulate yourself with one of these tools, and then you look at the writing, how does it differ from what you would have written?

DYLAN: One thing I've learned is that it's possible to use these and tweak them stylistically in interesting ways. So one thing I will sometimes do is ask it to take something I've already written, and then I'll say, "Can you rewrite this in the style of Patrick Radden Keefe (who's a New Yorker writer I like a lot)?" And then it comes out, and it's usually better, because he's a really good writer. You can also give it a certain job — I think this is something that Ethan Mollick, the Penn professor and sort of LLM super-user, has suggested — tell it what role it's supposed to play. So tell it, "Pretend you're a really good journalist. How would you write this if you were a really good journalist?" I found that to be a very useful thing. But I don't turn that in to my bosses, for obvious reasons. I think they have an understanding that nothing I turn in to them has been written by an LLM, that it's all by my own hand. Beyond that, in my experiments with it, I have not gotten into a groove where I can productively use it. I think Mollick has said that he will often use it as a first draft, and then he'll edit. And I haven't gotten into a groove that saves any time relative to just trying to bang something out. The edits I have to make are sufficient that it's easier to just write it from scratch. But again, this is where we're at now. And I'm curious, as these things scale, as hallucinations go away, as their ability to capture the subtleties of language improves, and as their context windows expand so that I could give them the corpus of everything I've ever written and they have a much more detailed sense of who I am and the kinds of things that I would write, maybe that evolves and changes.
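
For readers who want to try the role-prompting trick Dylan describes, here is a minimal sketch using the OpenAI Python client. The model name, filename, and prompt text are made-up placeholders for illustration, not anything from the conversation.

```python
# A minimal sketch of role prompting: assign a role in the system message,
# then hand over the restyling task. Placeholders throughout.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("draft.txt") as f:  # hypothetical draft to restyle
    draft = f.read()

response = client.chat.completions.create(
    model="gpt-4",  # assumed model choice
    messages=[
        # Assign the role first, per Mollick's suggestion...
        {"role": "system", "content": "Pretend you're a really good journalist."},
        # ...then give it the actual rewriting task.
        {"role": "user", "content": "Rewrite this in a clearer, livelier style:\n\n" + draft},
    ],
)
print(response.choices[0].message.content)
```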

SPENCER: I was at a party the other night, and I was chatting with someone. They're like, "Do you want me to train an LLM on everything you've ever written?" And I was like, "Yeah." So apparently, they're gonna do that. So I'll be really curious if that makes it much closer to simulating things I actually would say or generating ideas where I'm like, "Yeah, I like that idea. That's a good idea."

DYLAN: Yeah. That's an interesting thought, using it for idea generation. I've been thinking of it mostly instrumentally, like, "I have an idea. And I have more ideas for things that I would like to write and research than I have time and energy to write and research them." And so, yeah, there's a degree to which you could use it to break through writer's block and be like, "I'm Spencer. I'm interested in thinking rationally and cognitive biases and how to reduce them. What are some things I haven't thought about?" And then it will conjure up a new type of cognitive bias that you've never thought of before and can send you off on an interesting pathway. I think another question here is just about the scarcity of the outputs. And I think this matters for how economically valuable it is. So I'm a member of the Writers Guild of America, East. My comrades who are covered by the Minimum Basic Agreement, who write TV shows and movies and things, have been on strike for months at this point. The actors are on strike with them. AI is a huge part of that dispute. And I think one reason I'm sort of skeptical of the most boosterish stories I've heard about what AI can do in Hollywood is that, at no point has the binding constraint on studios been that there are not enough people who want to write scripts. It's a running joke that everyone in the LA area who is your waiter or your valet or whatever has a spec script that they want you to read. What's scarce are scripts that align with what studios are excited to do and what they think are going to be profitable as fully produced movies. And it's not super clear to me that LLMs are near a point where they can do more than emulate a replacement-level screenwriter, let alone the best screenwriters that studios get into bidding wars over. There's also a difference between what people would rationally do based on the capabilities of these things and what they actually do. And Hollywood is an interesting case where you don't have a lot of data. They only release so many movies each year. And they release a very small number of movies that they put any real marketing budget behind. So they just have a pretty crappy data set on what is successful as a movie. And what makes a successful movie changes very frequently, and tastes change very frequently. And so, they're making decisions — they have large analytics departments — but they're also just making a lot of decisions based on gut. And I think a number of industries are like that, either because they haven't had a Moneyball moment where people start to get serious about analytics, or just because they're in an area where data is scarce. And so you're having to make all kinds of forecasts on the basis of very little information, operating on a kind of gut instinct. And my sense is that the people whose guts are being consulted there are not people who are super excited about these kinds of things, and they tend to be conservative about embracing new technologies. And I suspect that will be a kind of constraining factor in the near term.

SPENCER: You could imagine a very Moneyball-oriented movie studio that's just like, "We're gonna have LLMs generate thousands of different scripts. We're gonna have people read them. We're gonna hook the people up to facial monitoring that's tracking their emotions as they're reading the scripts." And you're getting training data like, "Okay. We need a sad moment now. We're gonna have our LLMs generate 50 different sad moments, find the actual saddest one, and then move on to the next moment." Does that actually seem like a plausible scenario?

DYLAN: We've already seen certain studios that try to pick out niches like that. There's a studio that listeners who are horror fans will be very familiar with called Blumhouse (founded by Jason Blum). They almost exclusively do horror movies. Most of their movies have budgets of a few million dollars. I don't think they've ever done a movie over $100 million, even though the major studios do that all the time. And I think Jason Blum looked at the market and saw that horror films are kind of seen as less serious. People aren't super excited to make them. But you can film them on really low budgets, and they have a fan base that's very devoted. And so you can pretty reliably get, in percentage terms, pretty huge margins on these things. I think he did the Insidious films. You make those for a few million dollars, and then you make $40 million, and you've turned something like a 14,000% profit, or something like that. And I think he's evolved and changed as he's become a bigger figure. And there are non-pecuniary motives that drive people as well. I think he's produced a number of Jordan Peele movies, which are both commercially successful and also have real artistic merit and respectability. And there's the part of you as a producer that wants to make a lot of money, and there's the part of you as a producer that wants a bunch of Oscars. But I see him as kind of a Moneyball-ish figure who found an area where the perceived defect (that it's horror movies, and people don't take those seriously) meant that if he poured a lot of money into it, he could rack up pretty insane profit margins. And I think the fact that it took until the 2010s for someone to do that is kind of telling, that there were some $100 bills lying on the ground that people weren't picking up.

SPENCER: It seems like a lot of fields have that, where for a surprisingly long time, people aren't really bringing an analytic approach to it. And then someone comes in and just makes a bunch of returns really quickly by just putting that pure analytical maximization hat on.

DYLAN: Yeah. And I think contexts for that will sprout up and are sometimes quite short-lived. When I was coming of age in journalism, Facebook was turning from this thing that college students and recent graduates in the United States used into a global behemoth. Some people (I think BuzzFeed most famously, but a number of places) figured out that this was a huge opportunity and that the algorithms Facebook was using were gameable: if you structured headlines in a certain way, and you used certain keywords, and you thought about how a post would look when someone shared it and what would entice someone to click through, you could go pretty far. I think what's been interesting is the sort of 'you live by the sword, you die by the sword' of it. There was a site called Upworthy (that I think still exists in some form) that was wildly successful in 2013, because they were among the smartest practitioners of this. And they had a very ruthless optimization approach. Internally, they had a practice where every person writing a post had to come up with 20 different headlines for it. And then they would discuss among the staff, get down to a number they could A/B test, and then ruthlessly A/B test each of those. I was working at a newspaper at that point. We'd never done anything remotely like that. We'd never A/B tested a headline. And so they gradually took over Facebook. But Facebook is made up of people, and they noticed this. And I think they didn't appreciate being gamed, and they cut them off. And there are incredible charts you can see of Upworthy's traffic before and after Facebook changed the algorithm, and it just fell off a cliff. And so, I think they were an interesting example where they found an inefficiency in the market that was not being exploited. But it was an inefficiency that was very recent and could also change very rapidly. And what's been interesting is that the places that have survived in digital media have learned how to jump between those opportunities without ever overinvesting in one of them. But it's very tricky. There's nothing that has worked consistently throughout the entire time that I've been in this industry. And I think that's unmooring and scary to a lot of folks, especially folks who came from a print newspaper world where, for years, there was an approach which was: have some people in a room with phones and count on people who need to hire someone or sell a piece of furniture to call and pay for a classified ad. And then you can reliably have 20 to 30 percent margins every year, because you have a basic monopoly over these kinds of Craigslist-like services. And that worked really well for a really long time. And then Craig Newmark destroyed it in one go by giving people a free and much more useful service. And since then, we've been jumping between models in a way that I think is kind of disconcerting for folks who are used to the stability of one model working consistently.
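
The headline A/B test Dylan describes can be as simple as a two-proportion z-test over click-through counts. A minimal sketch, with made-up numbers (this is one standard way to run such a test, not a description of Upworthy's actual tooling):

```python
# Compare click-through rates of two headlines with a two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def ab_test(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)   # pooled CTR under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided
    return p_a, p_b, z, p_value

# Hypothetical counts: headline B looks better; is the difference significant?
print(ab_test(clicks_a=120, views_a=10_000, clicks_b=168, views_b=10_000))
# -> CTRs of 1.2% vs 1.68%, z ~ 2.85, p ~ 0.004
```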

SPENCER: Oh, I never connected Craigslist and the decline of the media. It's interesting.

DYLAN: Yeah. And Craig Newmark has simultaneously said that he doesn't think this is true (I disagree with him on this), but he's also given hundreds of millions of dollars to journalism schools. He's denied that this is from guilt. I think it's probably from guilt. [chuckles] But yeah, there's an amazing article that I like to send people, from Nieman Reports in 2000 — Nieman Reports being a Harvard-based publication on the trade of journalism — and the premise of the article was that newspapers were in for another decade of being the best business in the world. It was saying that the average margins were 20 to 30 percent. There were some newspapers that had 60% margins. Warren Buffett was giving interviews and saying, "I bought up all these newspapers. I'd buy all the newspapers in America if I could. It's such a good business. Everyone needs classified ads. Everyone needs this Schelling point that everyone goes to when they need to do small-scale transactions within a metro area." And it changed pretty quickly. We can talk about Craig Newmark, but the idea of doing that online for free rather than charging people: someone was going to do that. It just happened to be him. And I think there's been a lot of discussion of whether social media is damaging journalism, and it's definitely changing journalism, in some ways that are damaging and in some ways that are not. But I think you'd be hard pressed, especially within newspapers, to find someone who thought that was a bigger change than Craigslist. I think Craigslist was the most epochal change for newspapers, certainly in my lifetime.

SPENCER: Some people have the sense that the news media used to be more truth-seeking, that people were just trying to figure out what's actually true in the world, and that this has eroded. I'm wondering, if you look historically, what's your view on how that's evolved over time?

DYLAN: So most of my context here is the United States, so it's probably a little parochial. And I set that up as a caveat to anyone who's listening to this and thinks, "That's not how The Times of London ever worked." But historically, in the US, newspapers have been extremely closely tied to political parties. You will often see this in small cities; there's a newspaper near where I grew up in New Hampshire called Foster's Daily Democrat. There are a lot of Democrat gazettes. There are some Republican gazettes. If you read biographies of Abraham Lincoln, a lot of his work was trying to defend people at abolitionist newspapers, whose editors kept getting murdered or having their presses set on fire, and things like this. And so it was largely a way of disseminating arguments and thoughts to like-minded people politically. And within metro areas, it was a way of binding people together under a party banner. That really changed in the early 20th century, when you started to see a move, in fits and starts, toward more nonpartisan types of media. It's ironic that the Pulitzer Prizes are seen as the most prestigious award in our field, because Joseph Pulitzer very much did not believe in this ideal, was a fierce partisan on things like the Spanish-American War, and very much tried to use his network of newspapers as a persuasion tool, much more than a rigorous truth-seeking effort. But over the 20th century, I think the classified model and some other forces conspired to make the idea of an objective newspaper, with some kind of separation between its editorial section and its news section, the norm. So it's traditional now in most newspapers that the editorial board that issues editorials on behalf of the newspaper has a strict wall between it and the people who are writing news articles. That is a pretty recent invention. That's like a 40s or 50s thing. And even then, you didn't have some of the norms we have now about maintaining independence from certain politicians. One of my favorite stories about this is that, in the 70s, after Bob Woodward became the most famous journalist in America by taking down Richard Nixon, one of his best friends in the world, Senator Gary Hart, would sleep on Bob Woodward's couch in DC when he was in town for the Senate to be in session. And now, if you were a Washington Post news reporter and you were providing free lodging to a US senator, you would be in so much crazy trouble for violating basic objectivity norms. But those norms evolved pretty late in the game. I think one thing I see happening now is something of a return to those earlier norms, where you were very linked to political parties. There's less of a monetary incentive to write a broad-based newspaper that can appeal to everyone in a metro area, rather than dividing them, so that you have the largest market for your classified section; most of the rewards now go to developing a very active and enthused audience of like-minded people. And because there's no real need for them all to be in the same metro area, because more and more of these outlets are fully national, I think the need to maintain a broad tent is much less. None of this is normative. You can hear all this and still conclude that it's a beautiful dream to have a fully objective truth-seeking media that is "of no party or clique" (to quote from The Atlantic's mission statement). That can still be a great goal.
I think it's just a goal whose application has been fully dependent on the vagaries of the market for journalism and news at a given moment. I think those have shifted a lot in ways that are familiar to journalists, but somewhat obscure to people who are just trying to learn stuff from the news.

SPENCER: So would you say that the really strong truth-seeking norms were actually an aberration at an interim period between these hyper-partisan eras?

DYLAN: Yeah, I think it was a long aberration that lasted for several decades. I don't want to minimize it too much, but yeah, my takeaway is that there is no guarantee that that will always be a profitable way to run a news organization. That was pretty contingent. And I don't see a lot of business models in the near term that seem aligned with it. So if you look at the things that are working now, there's a small handful of places that can work on a subscription model because they have very low costs. One model is the Substack model, where you have one or two people, and you pay them something like $10 a month. That's a very favorable ratio, and if you have enough readers that way, you can keep going. And then on the other extreme, there's something like The New York Times, which is not that much more expensive than that for an institution that has thousands of employees, which is kind of wild. But if you're extremely popular, and you're the one newspaper that everyone thinks of when they think of a newspaper, you can make a subscription model like that work. For most places, that doesn't really work. But it can work if it's supplemented with a wealthy benefactor: either a foundation, which funds something like the Future Perfect section of Vox or ProPublica (there are several other nonprofits, like The Intercept and Honolulu Civil Beat), or literally a wealthy person, like The Atlantic, which is owned by Laurene Powell Jobs, or The Washington Post, which is owned by Jeff Bezos. That seems to work because you can bring in enough that your losses are not outrageous, but you have a backstop.

SPENCER: You just say, "Don't lose too much money." [laughs]

DYLAN: [laughs] I think the difficulty of this is that it only works if you're really wealthy. The history of The New Republic, specifically, is a history of millionaires buying it, losing some money for a while, and then saying to themselves, "I don't really want to keep losing this much money." It works for The Washington Post because it is small enough, and Jeff Bezos is wealthy enough, that its losses are still an incredibly small share of his net worth. And even so, he apparently has been quite aggressive and has fought hard in union negotiations because he wants to stop their losses. And he very much does not want his whole net worth going into this. So, that's the second model I've seen a lot. I think the real question is whether anyone can make an ad model work again. The ad model worked well in classifieds. It worked well in glossy magazines over the 20th century. And we're very much trying to make it work at Vox. And I think a lot of digital news sites that don't have a paywall have been attempting to make some kind of ad model work. But it's the one that seems most precarious. And it's not obvious to me that we're all gonna survive. And I think part of that is that there are a few dominant actors in digital ads; Facebook and Google control much more of the marketplace than any news organization. But also, I don't have a lot of foresight into what the digital ad market is going to look like. I didn't foresee that there would be a huge downturn this year, for instance, which prompted a lot of the massive layoffs in the tech industry that made a lot of news. And what were big waves at Meta and Alphabet rippled out and also hit organizations like us, BuzzFeed, and Vice.

SPENCER: So let's jump into our next topic, which is about how we distribute economic surplus from AI. If AI keeps going at the rate it's been going, there might be a lot of economic surplus to go around. And we've seen some interesting models for this. We've got OpenAI which, as I understand it, started as a nonprofit, and then they're like, "We can't get enough money as a nonprofit." So they became a for-profit that has a nonprofit.

DYLAN: My understanding is that it's a nonprofit that owns a for-profit, and that almost everyone who we think of as working at OpenAI works for the for-profit that's owned by the nonprofit.

SPENCER: But they also have investors.

DYLAN: In the for-profit that's owned by the nonprofit.

SPENCER: Right. And then it's kind of a capped system.

DYLAN: Yeah, they have a capped-profit system. I think it's 100x: investors can get a 100x return, and anything above that is returned to the nonprofit. And the last time I looked into this, they had not committed to much in terms of where the money would go once it's in the nonprofit. I think the paper that set the baseline for thinking about this is a paper called "The Windfall Clause," for which I think Cullen O'Keefe at OpenAI is the lead author, with Jade Leung, Allan Dafoe, Ben Garfinkel, and some other authors that I'm probably forgetting (and to whom I apologize). But the basic idea there was: imagine an AI company with a threshold of revenue or profits that is expressed in very high absolute terms, say 0.1% or above of world GDP. I think they note that when they wrote this, the only company with profits of that scale was Saudi Aramco, the Saudi state oil company. The last time I looked into this, even Saudi Aramco is not up there, because global GDP has grown significantly since. But if you have profits in excess of that, then you should have a self-imposed tax bracket system, where profits in excess of this get donated to a given charitable destination: the first bracket at 10%, and then, say, 20% above that. It's a really good starting paper because it's full of TBDs. What charitable causes are you donating to? TBD. What does the exact tax bracket look like? TBD. There's a lot of admirable intellectual humility in it combined with maddening vagueness. But I think it's a useful starting point, because we're in an unusual space. When Facebook and Google and the second wave of tech companies were starting, the guys starting them did not have a sense that they were doing this for a broader social purpose. Effective altruism did not exist. Mark Zuckerberg, Sergey Brin, and Larry Page were not in the Ethical Culture Society or some other group that would have made them organize things in such a way as to donate a share of profits. OpenAI and Anthropic, in particular, were founded, in many cases, by EAs or people close to the EA world. Anthropic is a public benefit corporation. OpenAI started as a nonprofit, and it's still, in some sense, a nonprofit. They seem unusually open to experimentation with novel governance structures that could radically change the way their profits are distributed. And that strikes me as a really interesting and intriguing opportunity, because if, when Walmart was coming up, you went up to them and were like, "If you get big enough, what if you just donate all the money to people around the world?" they'd be like, "Why don't you leave my apartment?" [laughs] But I think Dario Amodei and Sam Altman might be open to it. And I think the worst thing that happens is you pitch this, no one adopts it, and everyone laughs at you. I'm fine with people laughing at me. But I want to think about what the best thing to pitch is and whether there's a good model here. Do you think this is a special case? Do you have any thoughts on the best way to distribute the surplus that aren't just the ways we distribute the surplus from any company?
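
As a rough illustration of the bracket arithmetic Dylan describes, here is a minimal sketch. The thresholds, rates, and gross-world-product figure are made-up placeholders, since the paper deliberately leaves the actual schedule TBD.

```python
# A minimal sketch of the windfall clause's bracket idea, with made-up numbers.
GWP = 100e12  # rough world GDP in dollars, an assumption for illustration

# (threshold as a fraction of GWP, marginal donation rate above it)
BRACKETS = [(0.001, 0.10), (0.01, 0.20)]  # hypothetical schedule

def windfall_donation(profits):
    """Donation owed, applying each rate only to its own bracket."""
    owed = 0.0
    for i, (frac, rate) in enumerate(BRACKETS):
        lower = frac * GWP
        upper = BRACKETS[i + 1][0] * GWP if i + 1 < len(BRACKETS) else float("inf")
        if profits > lower:
            owed += rate * (min(profits, upper) - lower)
    return owed

# A firm earning 0.5% of GWP ($500B) donates 10% of the slice above 0.1% of GWP:
print(windfall_donation(0.005 * GWP))  # 0.10 * (500e9 - 100e9) = $40B
```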

SPENCER: Well, it's an interesting question, because isn't that what the government is trying to do, to a large extent anyway? It's like, "Yeah. You tax, and you tax, and you take that money, and you provide social services. And in that sense, some of it goes back to society. Is the usual tax code not up to it? Is there something special about AI that makes it different?" A couple of things that you might point to: one is that you could imagine it just becoming way more concentrated more quickly than we're used to experiencing. One way I can imagine this happening is: suppose there's a leading-edge AI company that's able to start replacing human workers at a very rapid rate, so they end up (say) doing 1% of all labor in society or something like that. You could start imagining them making an amount of money that we just haven't encountered before. And then you might start to think, "Ah. Yeah, maybe the amount they're giving back to society needs to be a lot more to help all the people that have been displaced."

DYLAN: And maybe they have a rational interest in cutting a bargain like that, lest they create sufficiently large societal discontent that much worse things happen to them. That's a common thing in periods of extreme inequality. I think this is why you saw Andrew Carnegie writing The Gospel of Wealth, setting up his foundations, and trying to model philanthropy for other robber barons. That was also a period where anarchism and socialism and radical movements to redistribute income in the US became very, very, very popular. And Carnegie was incredibly scared of that and wanted to model a certain behavior that could hold it at bay. So, to a first approximation, my instinct is: this is the kind of thing that should be done by the government. If this is just donated to a charity, that cuts out a lot of actors who should be consulted in terms of where the surplus goes. What's more, taxing companies and capital income is something that thousands of really smart people have written about and have very sophisticated ideas about. One of my side interests, beyond more EA-relevant things, is that I'm really into tax law, because I'm a fascinating person. So I spent much of 2021 following efforts to get increased capital taxation into one of Biden's budget plans. And the one thing we didn't lack for was ideas on how to do that. There are lots of ways to tax the capital income that created these companies more aggressively. There are a couple of reasons why I'm interested in more charitable or private-sector models than just increasing taxes. One is that, in the US, which is where most of these companies are based, capital taxation is very light. It turned out that even during a historically unusual period of Democratic domination of national politics, increasing it significantly was really, really hard, and there were a lot of vested interests fighting against it. But I think the bigger thing is that AI is a global phenomenon; it's not just a thing in the US, it's not just a thing in the UK, and its impacts are not going to obey national boundaries. I can imagine charitable outlets (suppose, for the sake of argument, that you adopt a windfall clause and the surplus goes to GiveDirectly to provide basic income or lump-sum transfers in the poorest countries in Sub-Saharan Africa) that strike me as better than 99% of uses for government revenue in most rich countries. A vanishingly small share of the US federal budget goes to foreign aid. A larger but still small share goes to scientific investments that seem massively welfare-enhancing for most of humankind. It seems plausible that the specific people at these companies might be able to identify more cosmopolitan or less US-centric uses for money than a government actor. All that being said, maybe they don't. In general, the strategy of trusting titans of industry to determine where their fortunes are allocated has led to some real, pretty bad misses. And it's far from obvious to me that this is an exception. But I think a world where the money goes to a GiveDirectly type of program is better enough than a world where it goes just to reduce the US budget deficit to make me think it's an area worth exploring.

SPENCER: When people think about risks from advancing AI, they're worried about quite a few very different things. One thing is the possibility of a superintelligence that we can't control. Another thing that seems quite different from that is a controlled superintelligence that, let's say, one or two groups have access to, which gives them unprecedented power. A third thing is a world where you might have, let's say, a whole bunch of really advanced AIs, and they do all the stuff, and humans are like, "What do we do now?" In that case, it's not so much a concentration of power; it's more that all the power is now controlled by the AIs, even if it's not in the hands of a few. When we think about windfall clauses, how do you see them fitting into those three kinds of dangerous scenarios?

DYLAN: I think the third scenario you're imagining, which is a kind of multipolar superintelligence where humans are permanently disempowered, seems like the least tractable scenario for a windfall-clause-type approach, because there are reduced odds that any one of those companies reaches the scale. I'm imagining that, at most, one or two companies would adopt a clause like this. Most of them are going to be completely uninterested, for totally understandable selfish reasons. And so, compared to a world where the one company that adopts this becomes the global hegemon, you're just gonna have less money pumping through it, and so it's gonna capture a smaller share. We're getting here into scenarios where I don't have much faith in my ability to model things, and my basic approach is usually to Google whether Ajeya Cotra has written something about it [laughs], read her views, and defer to her. I think the question there is: Will it be more useful to AI agents in charge of various firms in a multipolar scenario to appease a human population by agreeing to something like a UBI? Or will it be easier to just wipe us out? And the tiny Eliezer on my shoulder says, "It's just gonna be way easier to wipe us out. Why would you bother keeping us alive?" But I don't know. There's a book that I would be curious to get more AI people's response to, called 'Why Not Kill Them All?' It is a very provocative sociology book on genocide. And the big organizing question is: Why is genocide not more common? Which is an incredibly dark question. But the authors' rationale is that there are a lot of scenarios where it seems like it would solve problems for certain state actors to just do killing on a massive scale. If you have a region that keeps rising up in rebellion against your rule, why not kill them all? It's thinking in a completely amoral way: if you were a ruthless dictator, why is this not a thing you would do? And it's, in certain ways, a very helpful book to me, because it concludes that there are deep and powerful reasons why most actors do not do this. It's incredibly costly to kill large numbers of people. It's hard to do without incurring massive costs to yourself. It's often way easier to strike bargains. It's often irrational for the same reasons that war is often irrational: it involves mutual sacrifices that would be strictly worse than coming to a negotiated settlement. It's been a while since I looked at it, so I'm going to stop trying to summarize it, lest I get something wrong. But I think it's an interesting perspective that is also perhaps useful in thinking about the motives of a superintelligent system. And yeah, as the Eliezer on my shoulder would tell me again, "Why are you trying to reason about superintelligent systems when you are not superintelligent?" And it's true, and the authors of this book are not superintelligent either. But there's a reason why horrific atrocities are exceptions that we learn about in school rather than the normal state of being. Maybe it's a very low standard for optimism, but I think that gives me some optimism about some of these doomsday scenarios.

SPENCER: My suspicion is that it depends a lot on the power differential between the groups. I am certainly not an expert in historical cases. But it seems when a much more technologically powerful civilization meets one that's less technologically advanced, it generally goes really badly for the one that's less advanced. And to me, that suggests that, the points that you're pointing out, which are game theoretic reasons not to wipe out a group, become much less relevant if you can just stomp on them like they're a bunch of ants. Right?

DYLAN: Right. And I think there's a degree to which we're talking about an out-of-sample situation. People talk about things like the Columbian Encounter, when Europeans encountered North Americans for the first time. But the Europeans were not dramatically smarter than the North Americans at that point in time. They had certain technologies, but it's also true that North and Mesoamericans had certain technologies that had not been developed in Europe yet. So it's a much closer race than the superintelligence-versus-normal-intelligence scenario we're envisioning. It's not really a divergence of intelligence at all. But one thing I'm struggling with in AI discussions is that, generally, it's hard to be a good empiricist about it. My inclination, when I read these theoretical papers about instrumental goals and power-seeking and the default outcomes of certain scenarios, is: "This is all well and good. You have thought about this really hard and modeled this very precisely. But this is a model that we cannot validate in any way, because we don't have a lot of access to ground truth." And I think a maybe productive path is to try to find imperfect analogues in history, ones that might be different in important ways, including that there's no superintelligence, but that might provide us with firmer ground to grasp than a purely theoretical research agenda.

SPENCER: It seems like, because it's so hard to find close analogues (we're talking about something that's never happened), some people just retreat into, "Okay, let's just go all-in on theory and try to use logic to think about what may happen." Other people are like, "Oh, no, no, no, let's try to find the closest historical analogues, even if they're not that close." And both of those seem like very flawed methods.

DYLAN: Oh, yeah. There's no awesome method here. But some of the more encouraging stuff I've seen in AI policy lately has been trying to analyze existing standards regimes and see how they developed. I know Holden Karnofsky put out a call recently for research into specific standards regimes for certain technologies or fields. How did financial self-supervision come about? How did the nuclear regulatory regime come about? How did the FDA come about? Engaging with the literature on that (the FDA, I know, has a very large literature, and I think it depends on the kind of regulation you're talking about) seems potentially very productive, because this is not the first time we've encountered a new technology that changed society in certain disturbing ways and that we've had to improvise and iterate on a new regulatory agenda for. This time could be different and much more dramatic in ways that make all those analogues totally useless. But it might not be. It could be that they're still useful, and they still tell us important things. And it seems better to have them than not.

SPENCER: Going back to the windfall clause for a moment. At first, I was thinking, if we get a scenario where we have superintelligence that's uncontrolled, what's the windfall clause gonna do? It doesn't matter.

DYLAN: Right, no way of getting around it.

SPENCER: Humans aren't in control, so it doesn't make a difference. But then I was thinking, "Well, maybe it could make a difference to the incentives of the group building the AI." Because imagine you're building superintelligent AI: before you get to that point, you build something that's not quite at the superintelligent level. You get close to hitting your windfall clause. And now you're thinking, "Well, now the investors don't control us anymore. Now, we know that whatever we're building from now on, it's just gonna go back to society." So, maybe there's just a cycle. Maybe that actually helps, psychologically, incentive-wise.

DYLAN: Right, it gives you a reason to keep it at a reasonable scale. Yeah, it's interesting. I recently wrote a piece on Anthropic, and I think their big recent governance change is that they announced that they're trying to migrate control over a majority of their board to this long-term trust, which will be run not by people with big holdings of Anthropic shares but by a group of people who seem responsible and reasonable and can maybe pull back the throttle if things are getting out of hand. My takeaway from that is not that Dario, Daniela, Jared, and all the other people at Anthropic sat down and decided that this is the absolute best way to do things. My takeaway was that they are groping toward and experimenting with ways to align their incentives better with humanity's long-term interests than a traditional corporate model does, which is almost comically not designed to align with that. And I have some doubts about that specific model. If you look through the list of initial trustees, they're all people I respect a lot, but they're all EAs, and there's a part of my brain that says, "There are some EAs that are really smart. I think we're a smart group. There's a reason I like to come to conferences like this. But we are sometimes wrong. There are a lot of smart people who aren't EAs who have come to problems with a very different set of premises, and it might be useful to have a more diverse intellectual culture controlling something like this." But my biggest takeaway was that I'm glad they're experimenting with this, that they recognize that there is a misalignment between the incentives of a traditional corporation and the goal of avoiding extinction, and that a responsible AI company should be doing a lot of experimentation to try to align those incentives better. We've been talking for an hour and 11 minutes. Yeah, I don't know, is that a podcast? [laughs]

SPENCER: Who thinks it's a podcast? [laughs] Let's do some audience questions before we wrap up. Anyone have any questions?

DYLAN: So the question is: How does the subscription model alter your audience and what incentives does that create in terms of polarization?

I think this is a real phenomenon. Sometimes it's called audience capture. Let's say you're an individual Substacker or an individual podcaster or YouTuber, and you subsist on Patreon or Substack or other regular supporter revenue. That makes the character of your audience very important to you and makes offending them somewhat dangerous to your bottom line. It depends on your model: my friend Matt Yglesias runs a Substack, and I think he sometimes annoys his audience, and that's part of the appeal. That's who Matt is as a character. So you can get around it if you have a reputation as a contrarian of some kind. But I do think it's a real phenomenon. And I think one of the things that is most valuable about journalism is your ability to cultivate an audience of a certain type and then write things that actually change their minds. The marginal impact of what I'm doing, if I'm just reinforcing things that my readers already believe, is basically zero. The impact, if I'm introducing them to arguments that they might not have been introduced to otherwise, is potentially significant. And yeah, I think this is one of my biggest fears about a subscription model. But again, if it's large enough, you can be protected from that to some degree. The Wall Street Journal and The New York Times have millions of digital subscribers. It's possible that they would lose 2 or 3% over one thing, but it's a lot less likely than if you're a Substacker with 6,000 patrons; the raw number of people matters. And so you're much more susceptible. And I think you'd see more volatility in your income if you're relying on more money from a small group of people than on less money from a larger group of people.
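
To put a rough number on that last point: if each subscriber independently churns with the same probability, the relative swing in revenue shrinks as the base grows. A minimal sketch with made-up churn and subscriber numbers (it ignores the correlated mass cancellations Dylan alludes to, which make small bases even riskier):

```python
# Relative month-to-month revenue swing under independent churn.
# Retained subscribers ~ Binomial(n, p) with p = 1 - churn, so the relative
# standard deviation of revenue is sqrt(p * (1 - p) / n) / p,
# independent of the per-subscriber price.
from math import sqrt

def relative_revenue_sd(n_subs, churn=0.05):
    p = 1 - churn
    return sqrt(p * (1 - p) / n_subs) / p

print(relative_revenue_sd(6_000))      # small Substack: ~0.3% swing
print(relative_revenue_sd(9_000_000))  # NYT-scale base: ~0.008% swing
```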

So the question is (and correct me if I'm summarizing this wrong): What potential is there for collective action against the automation of jobs by AI? What does the writers' strike tell us about this? And does polarization limit the ability to get a coalition of people together against it?

This is an interesting question. I'm in the middle of a book right now that's heavily about a mid-20th-century labor leader, and the history of labor is really interesting in this regard. The pattern of job-based collective action in the US is that there were huge surges in organizing during World War I, when labor was scarce and workers had a lot more power. Then there was a recession after the war that made people feel vulnerable in ways that made them want to band together and organize. The Great Depression was the biggest, because people felt incredibly vulnerable. And then they elected FDR, and one of the first things he did was pass the National Industrial Recovery Act, which gave everyone a right to join a labor union for the first time and led to just a massive surge in organizing through the 30s and then through World War II. And after that surge, it's just been a gradual decline. It's hard to draw a lot of large lessons from this. One lesson I take is that things have to be really, really, really bad to get huge upward shifts in organization. We have not had any economic calamity since then near as bad as the Great Depression, nothing as severe and as long-lasting. And I think that's part of why you haven't seen people band together. Even in response to specific automation worries, you haven't seen those kinds of coalitions come together. One fact that I try to keep in mind when thinking about job automation is that, in 1900, about a third of Americans who worked were working on farms. Today, it's about 1.5 to 2%. And there was a lot of labor action in the middle of this. I learned about Cesar Chavez in school; I imagine a lot of Americans learned about Cesar Chavez. Many of the farmworkers got organized, and they wanted better conditions, but none of them were able to reduce investment in the labor-saving kinds of capital there. I don't think that was even a primary motive for most of them. I think it was more about conditions, which might have been enabled by the threat of your job being automated, but they weren't Luddites. They weren't throwing their shoes into tractors and combine harvesters to make sure they were still relevant. So all of that makes me somewhat skeptical. Collective action is just really, really hard. I think the Scott Alexander post on Moloch is maybe the most evocative illustration of this. It's hard to get a group of people together to acknowledge a shared interest and have them take costly action that is more than just passively posting something on behalf of a cause. So I'm somewhat pessimistic. But also, I don't know, if Holden Karnofsky is right and all of our plausible futures are wild, one wild future is one where kinds of labor organizing that we thought were dead rise again. And so I'm hesitant to rule anything out.

SPENCER: They could use AI to monitor all the protesters who are trying to get their jobs back.

DYLAN: Yeah. There are certainly some companies, in one country in particular, that are investing a lot in AI that can do that. And so, depending on how advanced you think the Baidu and Tencent teams are relative to some other labs, there are definitely some techno-dystopian possibilities that seem very alarming for labor organizers.

So the question is about government funding of journalism, and whether that creates perverse incentives or not.

There are definitely existence proofs that reasonably non-captured, reasonably independent journalism can exist when it's funded by the government. The BBC is the canonical example; also CBC in Canada, ABC in Australia, and PBS and NPR in the United States (PBS is not the news force that it once was). Though I would note that a small and shrinking minority of NPR and PBS funding comes from the government. Most of it, as you may have heard, comes from listeners like you. But it's certainly possible. My sense is that the question, "Is it possible to have a government-funded, reliable, non-hackish, non-propagandistic news agency?" is closely tied to questions like, "Why are certain governments more and less corrupt than others? Why do certain people have more and less trust in government capabilities?" To a first approximation, my faith in Sweden to set up a system like that is much greater than my faith in the United States, just because I think Sweden has less industry capture of the government. There's less of a threat of institutional creep, where elected officials take over state institutions designed to be separate from partisan control. One model that I've heard some people float is the idea of a tax credit or voucher system, where you let individual people take vouchers to donate to news organizations of their choice. And I think that would definitely create a larger and more thriving media landscape. I also think it could have some really bad unintended consequences. There's been some research on similar programs for campaign finance that exist in New York City and, I think, a few other places, with donation matching or tax credits for people who donate to campaigns, and it has found that those increase polarization. The people who are politically active and aware enough to know about that and try to exploit it tend to be on the ideological extremes. The mass of voters who don't care very strongly one way or another tend to be less engaged. And so, you see relatively extreme movements on one side or another. Now, if you are someone on one side or another, that might be very exciting to you. I know some democratic socialists who are very excited about this as a way to get very lefty candidates through democratic elections. But when applied to a context of media, I would suspect it would lead to a rise of sharply polarized media outlets funded by very engaged fans who know how to use the system to funnel money toward things congenial to them. And based on the experience of tax-exempt organizations in the US, and what happens when there's some effort to police who gets a tax exemption in ways that implicate political differences, I suspect it would be hard to deny the funding to sharply partisan groups, which is where the people in a voucher system would likely want to distribute it. (In 2013, there was a huge scandal over accusations that the Obama IRS was targeting right-wing organizations over their tax exemptions. My sense is that those accusations were probably overblown, but it's also a sign that trying to crack down at all is very risky if you're a government. I suspect similar dynamics would apply here.) This is an area where I'm excited for new ideas, but few of the ideas I've heard to date on government funding strike me as clearly positive, and I would be open to more innovation and models there.

SPENCER: Before we go into the second half, I'm just curious: how many of you have listened to a Clearer Thinking podcast episode before? Raise your hand if you've listened before. So I just want to apologize for speaking so slowly. I'm sure you normally hear me at 2x speed. [laughs]

DYLAN: What do you listen at? What's your x for when you listen to podcasts?

SPENCER: It depends on the podcast. Sam Harris, 3x [laughs]. If I'm listening just for pleasure, two and a half [laughs].

DYLAN: I'm at 1.25. I'm a tiny baby. I can't live like you [laughs].

SPENCER: Well, I've just purposely kept upping it. If it's not a little bit uncomfortable from my point of view, it's not fast enough [laughs].

DYLAN: A friend of mine once said that my error when I tried to make podcasts was that I tried to make them really information dense. My attitude was, "I don't want to waste people's time. I'm not going to do one of these four-hour podcasts" (I don't mean you; I mean the other four-hour podcasts). "I want to do a concise 20 to 30 minutes that's just what you need to know." And my friend said, "You're totally wrong. The reason people listen to podcasts is not information density. No one thinks podcasts are the most information-dense way to learn about something. They listen because they have a commute, or they have a kid taking a nap and need to get stuff done, and they want something on in the background so they can go do the laundry." The point is that you can dip in and out and be doing other things and still pick things up. I have yet to find podcasts filling that role in my life, but it does seem to explain a lot about the success of things like Sam Harris and Joe Rogan, which are very, very, very long podcasts.

SPENCER: Well, yeah. I think people also just like the idea of hanging out. You hang out with your podcast host, and it can be like eavesdropping on a conversation.

DYLAN: I guess the one podcast I listen to that's like that is The Always Sunny Podcast. And I like it because it feels sort of egalitarian: they are millionaires and I am not, but we both have to do Allbirds ads.

So the question is: If journalism completely collapses and there are no business models left, does it become sort of an artisan trade, like painting or sculpture, or does it become like farming, where people just go off and do other things?

It's interesting because, for some people, gardening and farming is itself a casual pursuit: a lot of people garden, and it seems to bring them a lot of joy and meaning in their lives. And I think there are a lot of people who write for free. One of the many labor conflicts within journalism in recent years is that it used to be that unpaid internships were incredibly common. I did a couple early in my time as a journalist. Now they've been almost entirely wiped out, due to social pressure and a sense that it's inappropriate to have people work for free. But I think the reason they existed at all is that there were people willing to do them: people from wealthy families, or people willing to take on loans, because they wanted to write that badly. And so, I think there will always be people doing that. And there are people who do it on the side of other jobs, too. Many of them are academics, who have a regular job that's a little similar. One of my favorite writers is this guy George Scialabba, whose whole career was as a building manager for one of the lecture halls at Harvard. He was not a professor. He was not a researcher anywhere. He managed the CGIS building, and on the side he was a brilliant literary critic and wrote several books. So I think that will continue to be a very common model in writing and journalism. I think another thing is that what we think of as journalism will just change and evolve. I was telling Spencer before we started recording that I think of people like him with Clearer Thinking, or Dwarkesh Patel with the Lunar Society podcast, as doing journalism. You're asking questions of interesting and sometimes (not now) important people, where their answers are relevant and of public use. And Spencer seemed taken aback and said, "I don't think of myself as a journalist at all." But to some degree, it doesn't matter what you call yourself, as long as you're asking questions, collating information, and providing some of the services that journalism has traditionally provided. And so I think the ways people do that will continue to evolve and shift in interesting directions that might not be ones we recognize as journalism in the traditional sense.

So the question is: Is hobbyist journalism closer to or further from the ideal of journalism than the professional model is?

I think the argument that hobbyism would be closer to some ideal is that you don't have a subscriber base you have to appease, and you don't have a donor you have to appease. This is one explanation some political scientists give for why billionaire candidates do well: they can say, "I already have my money. I'm not beholden to anybody." So I think there's a little to that. My worry is that one of the kinds of journalism that's been really hollowed out is long-term, careful, investigative reporting. Think of John Carreyrou getting a tip from a disgruntled former neighbor of Elizabeth Holmes that her startup was up to some hinky stuff. It took him months to figure out whether this was just a guy with a grudge, or whether there was really something deeply bad going on at Theranos. Or think of the team at the New York Times that got people on the record about Harvey Weinstein, which I think is probably the most important act of journalism of the last decade. People had been trying to take down Harvey Weinstein for decades, literally decades. After that came out, I talked to peers who'd been in the industry for a long time, and they said, "Yeah, I remember in the 90s, after Pulp Fiction came out, he became a really famous Hollywood producer. Everyone knew he was doing this; it was just that no one could get it on the record. No one had the time and the resources to nail it down and get people on the record saying what he was doing." And that's just not something you can do as a hobbyist.

SPENCER: There are a lot of risks too.

DYLAN: There's a tremendous amount of risk. And there's a degree to which, if you're a hobbyist with an independent source of income, maybe they can't ruin your life. Maybe you have some security that way. But Megan Twohey and Jodi Kantor needed a lot of time and a lot of resources and some very, very, very good lawyers to be able to get through that and get that out to the public. And I think that's just an incredibly difficult thing to replicate at a smaller scale. And it's one of the reasons I hope institutions like that stick around and continue funding that kind of thing, because it's hard to see how it happens any other way.

SPENCER: Dylan, thank you so much. Really appreciate it. And thank you all for coming.
