transcript
Record Labels Sue A.I. Music Generators, Inside the Pentagon’s Tech Upgrade and HatGPT
A little something for everyone: lawsuits, fighter jets and Casey in a bucket hat.
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
- kevin roose
Casey, today I learned something new. I’m in New York. I’m visiting some friends and going to some weddings. And I’m at “The New York Times” building, and I learned just today that there’s an entire podcast studio at “The Times” building that I’ve never seen.
- casey newton
That’s how big “The New York Times” is. It’s just full of nooks and crannies that very few people have ever seen with their own eyes.
- kevin roose
Yeah. So up on the 28th floor, apparently there’s a gleaming new audio temple. I hear it’s very fancy, but I’ve never been. So right after we tape today, I’m going to go up there and I’m going to see the promised land.
- casey newton
You know what I would do if I got to see the studio, Kevin, and I were in New York?
- kevin roose
What’s that?
- casey newton
I would sneak in, and I’d get a little pocket knife, and I’d just carve “Kevin + Casey forever”—
- kevin roose
[LAUGHS]:
- casey newton
— into one of the brand new desks. And I would dare them to say anything to me about it.
- kevin roose
Yeah, let’s not let you up there.
- casey newton
[LAUGHS]:
- kevin roose
I’m going to actually ask security to specifically —
- casey newton
Can you imagine —
- kevin roose
— not let you in there.
- casey newton
— Ezra Klein sits down to interview the Secretary General of the United Nations and he just sees carved into the desk, “Casey + Kevin forever?”
- kevin roose
Casey was here.
- casey newton
Suck it, Klein!
[MUSIC PLAYING]
- kevin roose
I’m Kevin Roose, a tech columnist from “The New York Times.”
- casey newton
I’m Casey Newton from “Platformer.” And this is “Hard Fork.” This week, the record labels sued two leading AI music apps, accusing them of copyright infringement. RIAA CEO Mitch Glazer joins us to make the case. Then we go inside the Pentagon’s tech turmoil with Chris Kirchhoff, author of the new book “Unit X.” And finally, a round of HatGPT.
[MUSIC PLAYING]
Now, Kevin, not a lot of people know this, but we have something interesting in common.
- kevin roose
What’s that?
- casey newton
Well, we were a couple of the few teenagers who managed to survive the Napster era without getting sued by the Recording Industry Association of America.
- kevin roose
[LAUGHS]: Yes, although one of my friends actually did get sued by the recording industry and had to pay thousands of dollars.
- casey newton
And is he still in prison or did he get out?
- kevin roose
No, he got out. He’s fine.
- casey newton
Oh, thank god. Thank god. Well, look, Kevin. It’s always a strange day when you find yourself siding with the RIAA. And yet, when I heard this week’s news, I thought, well, I want to hear what they have to say.
- kevin roose
Yeah, let’s talk about it.
- casey newton
So these are, I think, the biggest lawsuits to come out against AI companies since your newspaper, “The New York Times,” sued OpenAI. This week, the RIAA announced that major record labels are suing two of the leading AI music companies, alleging massive copyright infringement, and are maybe trying to shut them down.
- kevin roose
Yeah. So the companies that the music labels sued are Udio and Suno. We’ve talked about them a little bit on this show before. Basically, these are tools that sort of work like ChatGPT. You can type in a prompt. You can say, make me a Country Western song about a bear fighting a dolphin, and it’ll do that.
But basically, these companies have come under a lot of criticism for allowing people to create songs without compensating the original artists. Like other AI companies, these companies do not say where they’re getting their data. Suno is releasing statements using words like “transformative” and “completely new outputs,” basically arguing that this is all fair use and that they don’t owe anything to the holders of the copyrighted songs that they were presumably using to train their models. But we’ll see how the courts see that.
- casey newton
Well, and if you’ve never heard one of these, Kevin, I think we — and I know you have — we should play a clip, I think, just so people get a sense of just how closely these services can mimic artists you might be familiar with. So, Kevin, we’re about to hear a song called “Prancing Queen,” and this was made with Suno.
- ["prancing queen" playing]
- archived recording
(SINGING) You can dance
You can jive
Having the time of your life
Ooh, see that girl
Watch that scene
Take in the dancing queen
Friday night and the lights are low
Looking out for a place to go.
- casey newton
Can you believe what they’re doing to ABBA, Kevin?
- kevin roose
[LAUGHS]: You know, I actually saw an ABBA cover band once, many years ago. And that was better than the ABBA cover band.
- casey newton
You know what I liked about that clip is it reminded me — if I had had, like, six beers and someone shoved me onto a karaoke stage and said, sing “Dancing Queen” from memory, that’s exactly what it would have sounded like.
- kevin roose
[LAUGHS]:
- casey newton
So we wanted to get to the bottom of this, so we reached out to the RIAA. And they offered up Chairman and CEO Mitch Glazer, so we’re going to bring him on and ask him what this lawsuit is all about.
- kevin roose
Let’s do it.
[MUSIC PLAYING]
- casey newton
Mitch Glazer. Welcome to “Hard Fork.”
- mitch glazer
Thanks. Thanks for having me.
- casey newton
So make your case that these two AI music companies violated copyright law.
- mitch glazer
Pretty easy case to make. They copied basically the entire history of recorded music. They stored it. Then they used it by matching it to prompts so that they rejiggered the ones and zeros. And, basically, they took chicken and made chicken salad and then said they don’t have to pay for the chickens.
- casey newton
Right.
[LAUGHS]:
Well, some people out there say that this is a transformative use, that no matter what you put into a Udio or a Suno, you’re not going to get back the original track. You’re going to get something that has been transformed. What do you make of that case?
- mitch glazer
Well, there is such a thing as transformative use. It’s actually a pretty important doctrine. It’s supposed to help encourage human creativity, not substitute for it. There was a really important Supreme Court case on this issue, thank god, that just happened last year, where they kind of dispelled this notion that any time you take something and splash a little bit of color on it, it’s transformative. That’s not what that means. And this is very similar.
- kevin roose
Mitch, you said that these companies have scraped the entire sort of history of recorded music and used them to train their models. But I read through the complaint that came out, and there isn’t direct evidence. There’s no smoking gun. They haven’t said outright, yes, we did train on all this copyrighted music.
Presumably, that is something you hope will come out in the course of this case. But do you actually need to be able to prove that they did use copyrighted music in order to win this case? Can the lawsuit succeed without that?
- mitch glazer
I think, ultimately, we do have to show that they copied the music, but they can’t hide their inputs and then say, sorry, we’re not going to tell you what we copied. So you’re not allowed to sue us for what we copied. That, they can’t do. So what we were able to do was show in the complaint that there’s no way they could have come out with this output without copying all of this on the input side. It’s sort of this equitable doctrine in fancy legal terms that says, you’re not allowed to hide the evidence and then say you can’t sue me.
- casey newton
Right. Well, on that point, one of my favorite parts of the Suno lawsuit is where it discusses Suno reproducing what are called producer tags, which is when a producer says their name at the start or end of a song. What does it mean that Suno can nail a perfect Jason Derulo?
- mitch glazer
[LAUGHS]: Well, thank god Jason Derulo likes to say his name in the beginning of his songs. Right? And in the blender, that piece wasn’t ripped apart enough. And so that was sort of one of those smoking guns where we’re able to show, if you look at the output, right, and Jason Derulo’s tag is in the output, I think they copied the Jason Derulo song on the input.
- kevin roose
Yeah. So one of the arguments we’ve heard from AI companies — not just AI music companies, but also companies that train language models — is that these machines, these models, they’re basically learning the way that humans learn. They’re not just regurgitating copyrighted materials. They are learning to generate wholly new works.
And I want to just read you Suno’s response that they gave to “The Verge” and have you share your thoughts on it. Suno said, quote, “We would have been happy to explain this to the corporate record labels that filed this lawsuit and, in fact, we tried to do so. But instead of entertaining a good faith discussion, they reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.” What do you make of that?
- mitch glazer
Yeah, I love this argument. I love that machines are original and machines and humans are the same. If you just use human words around machines, like learning, well, then there’s no difference between us. If you read a book, it’s the same as copying it on the Xerox machine, and then mixing all the words around, and then coming out with something new. It has nothing to do with the fact that they actually happened to take all of these human-created works.
Machines don’t learn. Right? Machines copy, and then they basically match a user’s prompt with an analysis of patterns in what they’ve copied. And then they finish the pattern based on predictive algorithms or models. Right? That’s not what humans do. Humans have lived experiences. They have souls. They have genius.
They actually listen, get inspired, and then they come out with something different, something new. They don’t blend around patterns based on machine-based algorithms. So nice try, but I don’t think that argument is very convincing. And I also love that they say that the creators and their partners are the ones that have resorted to the old legal playbook. They’re not resorting to, oh, we can do this. It’s based on fair use. It’s transformative. We’re going to seek forgiveness instead of permission.
- casey newton
Well, I mean, you also have the investor in the company who you quote in the lawsuit saying — because he said this to a news outlet — I don’t know if I would have invested in this company if they had had a deal with the record labels. Because then they probably wouldn’t have needed to do what they needed to do, which I assume he sort of meant Hoover up all this music without paying for it.
- mitch glazer
Yeah. That’s in the legal world, what we call a bad fact.
- archived recording
[LAUGHS]:
- mitch glazer
That is a bad fact for the other side. You don’t want your investor saying, gee, if they had really done this the legal way, I don’t think I would have invested because it’s just too hard. It’s just too hard to do it the legal way.
- kevin roose
Mitch, we’ve seen other lawsuits come out in the past year from media companies, including “The New York Times,” which sued OpenAI and Microsoft last year, alleging similar types of copyright violations. How similar or different from the sort of text-based copyright arguments is the argument that you are making against these AI music generation companies?
- mitch glazer
I think the arguments are the same, that you have to get permission before you copy it, just basic copyright law. The businesses are very different. And I think looking at the public reports on the licensing negotiations going on between the news media and companies like OpenAI, news is dynamic. It has to change every single day. And so there needs to be a feed every single day for the input to actually be useful for the output.
Music is catalog. Right? You copy the song once. It’s there forever. You don’t have to change it. You don’t have to feed the beast every single day. So I think the business models are quite different, but I think that the legal basis is very similar.
- casey newton
Well, and does that suggest that, for you all, it’s actually essential that you are able to capture the value of the back catalogs for training, whereas for these media outlets they might have a better chance of securing ongoing revenue?
- mitch glazer
I think that’s right. I also think that we have an artistic intent element that’s very, very different. It’s one thing for somebody to say, you can copy this into your input. It’s another to say that you can then change it so that the output uses the work of the artist, but it doesn’t match their artistic intent.
To say that these — sort of what Kevin was saying earlier. They’re saying, look, we’re just — we had discussions. What’s your problem? Well, the problem is we work with human artists who care about the output. And so they need to have a role and a place in deciding how their art’s being used.
- kevin roose
Yeah.
- casey newton
My understanding is that it’s actually gotten much more difficult and expensive to sample lately than it used to be, in ways that I don’t really like. I’d probably like to see more sampling than we do. But it seems like something changed around the time that the song “Blurred Lines” came out, and now all of a sudden everybody has to clear even just a whisper of familiarity. Is there anything in whatever led to that situation that you expect you’ll bring to this lawsuit?
- mitch glazer
I think sampling is actually a pretty good example because samples are licensed today. And there’s plenty of sampling going on. Now, does it mean that anybody can sample anything they want without permission? No. Do we have to have clearance departments that go out, whether you’re talking about a video, or a movie, or another song, and get those rights especially from publishers and prior artists? Yes, you do.
That’s called ownership. And you actually get to control your own art and what you do, and it’s not a simple process all the time. It takes work. I’m sure that our companies get frustrated trying to do clearances, but it’s what you’ve got to do.
- kevin roose
Yeah, there have been some companies that have faced copyright challenges in AI generative products that have responded by basically limiting the products, by saying you can’t refer to a living artist in a prompt. It won’t give you a response, basically to try to quell some of these concerns. Would that satisfy your concerns, or are you trying to shut these things down altogether?
- mitch glazer
They’re trying to confuse the issue. They’re pretending that this is about the output. The lawsuit is about the input. Right? So actually, by saying you can’t type Jason Derulo’s name, you can’t type Adele’s name, what they’re basically doing there is further hiding the input. They’re making it so that you can’t see what they copied. And they’re pretending that this is all about the output in order to say, look, we’re putting guardrails on this thing.
That’s not what this lawsuit’s about. This lawsuit is about them training their model on all of these sound recordings, not on limiting prompts on the output to further hide the input. But it’s clever. It’s clever.
- kevin roose
OK. So you want to shut this down.
- mitch glazer
Well, I don’t think that — we want to — we call it an injunction, Kevin. We would like to shut down their business as it’s operating now, which is something illegally trained on our sound recordings with output that doesn’t reflect the artists’ integrity. Yes.
Does that mean that we want to shut down AI generators or AI companies? No. There’s 50 companies that are already licensed by the music industry. And I think it’s important — and this differs a lot from, I think, the old days — but nobody’s scared of this technology as in they want to shut down the technology. Everybody wants to use the technology.
But they definitely see good AI versus bad AI. Good AI complements artists, helps them stretch music, helps assist them in the creation of music. Bad AI takes from them, gives no attribution, no compensation, asks no permission, and then generates something that’s a bunch of garbage.
- kevin roose
Yeah. I know of some artists who would say they want to shut down this stuff entirely, that they don’t think there’s any good form of it. But you mentioned the old days. And so I want to ask you about this. I think a lot of my fellow millennials think of the RIAA as the group that went around suing teenagers for pirating music during the Napster era.
The RIAA has also sued a bunch of other file sharing and music sharing platforms, and actually fought the initial wave of streaming music services like Spotify because there was this fear that these all-you-can-eat streaming services would eat into CD sales. Now, of course, we know that streaming wasn’t the death of music or music labels, that actually it ended up being — sort of saving the music industry.
Do you think there’s a danger here, that actually these AI music generation programs could ultimately be great for music labels just like Spotify was, and that you might be trying to cut off something productive before it’s actually had the chance to mature?
- mitch glazer
I don’t think it’s really the same at all. I think that there’s an embrace of AI, and there was well before these generators came out or well before OpenAI, especially within the tech content partnerships that have existed, and have grown, and matured, and gotten sophisticated through the streaming age.
So even though the RIAA’s job is to be the boogeyman and to go out there and enforce rights, which we do with zeal and hopefully a smile doing our job — here, I think that really what we’re trying to do is create a marketplace like streaming, where there are partnerships and both sides can grow and evolve together. Because the truth is, you don’t have one without the other.
Record companies don’t control their prices. They don’t control their distribution. They’re now gateways, not gatekeepers. The democratization of the music industry has changed everything. And I think they’re seeking the same kind of relationships with AI companies that they have with streaming companies today.
- kevin roose
What would a good model look like? There are reports this week that YouTube is in talks with record labels about paying them a lot of money to license songs for their AI music generation software. Do you think that’s the solution here, that there will be sort of these platforms that pay record labels and then they get to use those labels’ songs in training their models? Do you think it’s fine to use AI to generate music as long as the labels get paid? Or is there sort of a larger objection to the way that these models work at all?
- mitch glazer
I think it works as long as it’s done in partnership with the artists and, at the end of the day, it moves the ball forward for the label and the artist. The YouTube example is interesting, because that’s really geared towards YouTube Shorts. Right? It’s geared towards fans being able to use generated music to put with their own videos for 15 or 30 seconds. That’s an interesting business model.
BandLab is a tool for artists, Splice, Beatport, Focusrite, Output, Waves, Eventide — every digital audio workstation that’s now using AI — Native Instruments, Oberheim. I mean, there are so many AI companies that have these bespoke agreements and different types of tools that are meant to be done with the artistic community, that I think the outliers are the Sunos and the Udios, who frankly are not very creative in trying to help with human ingenuity. Instead, they’re just substitutional to make money for investors by taking everybody else’s stuff.
- casey newton
We’ve seen some pretty different reactions to the rise of AI among artists. Some people clearly seem to want no part of it. On the other hand, we’ve seen musicians like Grimes saying, here, take my voice. Make whatever you want. We’ll figure out a way to share the royalties if any of your songs becomes a hit. I’m curious, if you’re able to get the deals that you want, do you expect any controversy within the artist community and artists saying, hey, why did you sell my back catalog to this blender? I don’t want to be part of that.
- mitch glazer
Yeah. I think, look, artists are entitled to be different. And there are going to be artists — I think, Kevin, you said earlier, you know artists who are so scared of this they just — they do want to shut the whole thing down. They just don’t want their music and their art touched. Right?
I know directors of movies who can’t stand that the formatting is different for an airplane. That’s their baby and they just don’t want it. Then there are artists like Grimes who are like, I’m fine being experimental. I’m fine having fans take it, and change it, and do something with it.
All of that is good. They’re the artist, right? I mean, it’s their art. Our job is to invest in them, partner with them, help find a market for them. But at the end of the day, if you’re trying to find a market for an artist’s work that they don’t — and they don’t want that work in the market, it’s not going to work.
- kevin roose
Yeah. Have you listened to much AI generated music? Are there any songs you’ve heard that you thought, that’s actually kind of good?
- mitch glazer
Yeah. I think in the sort of overdubbing voice and likeness thing, that it’s a little bit better than some of the simple prompts on these AI generators like Udio and Suno. But I heard a — I heard Billie Eilish’s voice on a Revivalists song, and I was like, wow, she should cover this song. It was really great. Right? It just kind of seemed like a perfect fit, and it’s fun to play with those things.
But again, like in that case, I think Billie Eilish gets to decide if her voice is used on something. I think she gets to decide if she wants to do a cover. I don’t think that it’s up to Overdub to be able to do that. I did do a bunch of prompts, as you can imagine, on some of these services, trying to see what happens if you just put in a few words, like a simple country song. And then what happens if you put in 20 different descriptors?
And what’s amazing is you can — every 10 seconds you get a new song. So if you don’t like it, just put in a few more words and it rejiggers the patterns. And you can start getting to a point where you’re like, OK, it’s not human and the lyrics kind of suck. But it’s not terrible.
We are only six months into the huge progression of this technology. And if you had listened to a prompt where you were allowed to put in Jason Derulo or Mariah Carey six months ago versus now, you would notice a marked improvement. And that’s one of the reasons why we needed to get out there now. We needed to bring this suit. We need the courts to settle this issue so that we can move forward on a thriving marketplace before the technology gets so good that it is a seismic threat to the industry.
- casey newton
I’ve seen a lot of support for this lawsuit among people I follow who are more inclined to side with artists and musicians. But there have also been some tech industry folks who think this is all kind of — it sounds like the RIAA is just sort of anti-progress, anti-technology. I even saw one tech person call you the ultimate decels, which is like — in Silicon Valley, that’s sort of the biggest insult. Decels are people who want to basically stop technological progress, basically Luddites. What do you make of that line of argument from the Valley?
- mitch glazer
This has been the same argument that the Valley’s had since 1998. To me, that’s a 30-year-old argument. If you look at the marketplace today, where Silicon Valley thrives is when rights are in place and they form partnerships. And then they grow into sophisticated global leaders where they can tweak every couple of years their deals, and come up with new products that allow them to feed these devices that are nothing without the content on them.
There’s always sort of this David versus Goliath thing, no matter what side you’re on. But if you think about it, music, which is a $17 billion industry in the United States — I think one tech company’s cash on hand is five times that, not to mention their $289 billion market caps. Right? But they are completely dependent on the music that these geniuses create in order to thrive. And to say that these creators are stopping their progress, I think, is sort of laughable.
I think what’s much more threatening is if you move fast and break things without partnerships, what are you threatening on the tech side with a no holds barred, culture destroying, machine-led world? It sounds pretty gross to me.
- casey newton
So what happens next? The lawsuits have been filed. This stuff tends to take a long time. But what can we look forward to? Will there be sort of scandalous emails unearthed in discovery that you’ll post to your website? Or what can we look forward to here?
- mitch glazer
Well, moving forward in discovery, I think we’ll be prohibited from posting anything to our —
- casey newton
Aw, man.
- mitch glazer
I know. You think you’re disappointed.
- kevin roose
If you want to just send them to HardFork@NYTimes.com, that’s fine.
- mitch glazer
I live for that stuff. But we will, of course, follow the rules. But, you know, we have filed in the districts where these companies reside. And so I hope that within a year or so we will actually get to the meat of this. Because if you think about it, the judge has to decide when they raise fair use as a defense. Is this fair use or not? Right?
And that is something that has to be part of the beginning, part of the lawsuit. So we’re hopeful that — when I say a short time, in legal terms, that means a year or two. But we’re hoping that in a short time we will actually get a decision, and that it sends the right message to investors and to new companies, like there’s a right way and a wrong way to do this. Doors are open for the right way.
- kevin roose
Yeah. I think there’s a story here about startups that are sort of moving fast, breaking things, asking for forgiveness, not permission. But I also think there’s a story here that maybe we haven’t talked about, about restraint. Because I know that a lot of the big AI companies had tools years ago that could generate music, but they did not release them.
I remember hearing a demo from someone who worked at one of the big AI companies, maybe two years ago, of one of these kinds of tools. But I think they understood. They were scared because they knew that the record industry is very organized. It has this kind of history of litigation.
And they sort of understood that they were likely to face lawsuits if they let this out into the public. So have you had discussions with the bigger AI companies, the more established ones that are working on this stuff? Or are they just sort of intuiting correctly that they would have a lot of legal problems on their hands if they let this stuff out into the general public?
- mitch glazer
You know, you’re raising a point that I don’t think is discussed often enough, which is that there are companies out there that deserve credit for restraint. And part of it is that they know that we would bring a lawsuit. And in the past, we haven’t been shy, and that’s useful.
But part of it is also because these are their partners now. There are real business relationships here and human relationships here between these companies. And so their natural — I think they’re moving towards a world where their natural instinct is to approach their partners and see if they can work with them.
I know that YouTube did its Dream Track experiment, approached artists, approached record companies. That was sort of the precursor or the beta to whatever they might be discussing now for what’s going to go on Shorts that we talked about earlier. And I’m sure that there are many others. But you’re right. Yes, there are going to be companies like Suno and Udio that just seek investment, want to make profit, and steal stuff. But there is restraint and constructive action by a lot of companies out there who do view the creators as their partners.
- kevin roose
Well, it’s a really interesting development and I look forward to following it as it progresses.
- casey newton
Thanks, Mitch.
- kevin roose
Thanks so much, Mitch. Thanks for coming by.
- mitch glazer
Thanks, guys. Bye. [MUSIC PLAYING]
- casey newton
When we come back, we’re going inside the Pentagon with Chris Kirchhoff, the author of “Unit X.” Are we allowed inside the Pentagon?
[MUSIC PLAYING]
- kevin roose
Well, Casey, let’s talk about war.
- casey newton
Let’s talk about war. And what is it good for?
- kevin roose
[LAUGHS]:
- casey newton
Some say absolutely nothing. Others write books arguing the opposite.
- kevin roose
Yeah. So I’ve been wanting to talk about AI and technology and the military for a while on the show now. Because I think what’s really flying under the radar of the mainstream tech press these days is that there’s just been a huge shift in Silicon Valley toward making things for the military, and the US military in particular.
Years ago, it was the case that most of the big tech companies, they were sort of very reluctant to work with the military, to sell things to the Department of Defense, to make products that could be used in war. They had a lot of ethical and moral quandaries about that, and their employees did, too. But we’ve really seen a shift over the past few years.
There are now a bunch of startups working in defense tech, making things that are designed to be sold to the military and to national security forces. And we’ve also just seen a big effort at the Pentagon to modernize their infrastructure, to update their technology, to not get beat by other nations when it comes to having the latest and greatest weapons.
- casey newton
Yeah. And also, Kevin, just the rise of AI in general, I think, has a lot of people curious about what the military thinks of what is going on out here, and is it eventually going to have to adopt a much more aggressive AI strategy than the one it has today.
- kevin roose
Yeah. So a few weeks ago I met a guy named Chris Kirchhoff. He’s one of the authors, along with Raj Shah, of a book called “Unit X.” Chris is sort of a longtime defense tech guy. He was involved in a number of tech projects for the military. He worked at the National Security Council during the Obama administration.
Fun fact — he was the highest ranking openly gay advisor in the Department of Defense for years. And, most importantly, he was a founding partner of something called the Defense Innovation Unit, or DIU. It also goes by the name Unit X, which is basically this little experimental division that was set up about a decade ago by the Department of Defense to try to basically bring the Pentagon’s technology up to date.
And he and Raj Shah, who was another founding partner of the DIU, just wrote a book called “Unit X,” that basically tells the story of how the Pentagon sort of realized that it had a problem with technology and set out to fix it. So I just thought we should bring in Chris to talk about some of the changes that he has seen in the military when it comes to technology and in Silicon Valley when it comes to the military.
- casey newton
Let’s do it.
[MUSIC PLAYING]
- kevin roose
Chris Kirchhoff, welcome to “Hard Fork.”
- chris kirchhoff
Glad to be here.
- kevin roose
So I think people hear a lot about the military and technology, and they kind of assume that there are very futuristic things happening inside the Pentagon that we’ll hear about at some point in the future. But a lot of what’s in your book is actually about old technology and how underwhelming some of the military’s technological prowess is.
Your book opens with an anecdote about your co-author actually using a Compaq personal digital assistant because it was better, it had better navigation tools than the navigation system on his $30 million jet. That was how you introduced the fact that the military is not quite as technologically sophisticated as many people might think. So I’m curious. When you first started your work with the military, what was the state of the technology?
- chris kirchhoff
Well, it’s really interesting. You go to the movies — and we’ve all seen “Mission Impossible” and “James Bond.” And wouldn’t it be wonderful if that actually were the reality behind the curtain? But when you open up the curtain, you realize that actually, in this country, there are two entirely different systems of technological production. There’s one for the military and then there’s one for everything else.
And to dramatize this on the image of our book, “Unit X,” we have an iPhone. And on top of the iPhone is sitting an F-35, the world’s most advanced fighter jet, a fifth generation stealth fighter known as a flying computer for its incredible sensor fusion and weapons suites. But the thing about the F-35 is that its design was actually finalized in 2001, and it did not enter operations until 2016. And a lot happened between 2001 and 2016, including the invention of the iPhone, which, by the way, has a faster processor in it than the F-35.
And if you think about the F-35 over the subsequent years, there’s been three technological upgrades to it. And we’re now — what we’re almost in iPhone 16 season. And once you understand that, you understand why it was really important that the Pentagon thought about establishing a Silicon Valley office to start accessing this whole other technology ecosystem that is faster and generally a lot less expensive than the firms that produce technology for the military.
- kevin roose
Yeah. I remember, years ago, I interviewed your former boss, Ash Carter, the former Secretary of Defense who died in 2022. And I sort of expected that he’d want to talk about all the newfangled stuff that the Pentagon was making — autonomous drones, stealth bombers.
But instead, we ended up talking about procurement, which is basically how the government buys stuff, whether it’s a fighter jet or an iPhone. And I remember him telling me that procurement was just unbelievably complicated, and it was a huge part of what made government and the military in particular so inefficient and kind of backwards technologically. Describe how the military procures things, and then what you discovered about how to maybe short circuit that process or make it more efficient.
- chris kirchhoff
If you’re looking to buy a nuclear aircraft carrier or a nuclear submarine, you can’t really go on Amazon and price shop for that.
- casey newton
I learned that the hard way, by the way.
- chris kirchhoff
Should have upped your credit limit, Casey.
- casey newton
Yeah.
- chris kirchhoff
And so, in those circumstances, when the government is representing the taxpayer and buying one large military system, a multibillion dollar system from one vendor, it’s really important that the taxpayer not be overcharged. And so the Pentagon has developed a really elaborate system of procurement to ensure that it can control how production happens, the cost of individual items.
And that works OK if you’re in a situation where you have the government and one firm that makes one thing. It doesn’t make any sense, though, if you’re buying goods that multiple firms make or that are just available on the consumer market. And so one of the challenges we had out here in Silicon Valley, when we first set up Defense Innovation Unit, was trying to figure out how to work with startups and tech companies who, it turns out, weren’t interested in working with the government.
And the reason why is that the government typically buys defense technology through something called the Federal Acquisition Rules, which is a little bit like the Old Testament. It’s this dictionary-size book of regulations. Letting a contract takes 18 to 24 months. If you’re a startup, your investors tell you not to go down that path for a couple reasons.
One, you’re not going to make enough money before your next valuation. You’re going to have to wait too long. You’re going to go out of business before the government actually closes the sale. And two, even if you get that first contract, it’s totally possible another firm with better lobbyists is going to take it right back away from you. So at Defense Innovation Unit, we had to figure out how to solve that paradox.
- kevin roose
Part of what I found interesting about your book was just the sort of accounts that you gave of these sort of clever loopholes that you and your team found around some of the bureaucratic slowness at the Pentagon, and in particular this loophole that allowed you to purchase technology much, much more quickly that one of your staffers found. Tell that story, and maybe that’ll help people understand the systems that you were up against.
- chris kirchhoff
It’s an amazing story. We knew when we arrived in Silicon Valley that we would fail unless we figured out a different way to contract with firms. And our first week in the office, this 29-year-old staff member named Lauren Dailey, the daughter actually of a tank commander whose way of serving was to become a civilian in the Pentagon and work on acquisition, happened to be up — because she’s a total acquisition nerd — late at night reading the just-released National Defense Authorization Act, which is another dictionary-sized compendium of law that comes out every year.
And she was flipping through it, trying to find new provisions in law that might change how acquisition worked. And sure enough, in Section 815 of the law, she found a single sentence that she realized somebody had placed there that changed everything. And that single sentence would allow us to use completely different contracting mechanisms called “other transaction authorities” that were actually first invented during the space race to allow NASA, during the Apollo era, to contract with mom-and-pop suppliers.
And so she realized that this provision would allow us not only to use OTAs to buy technology, but the really important part is that if it worked, it was successful in the pilot, we could immediately go to buy it at scale, to buy it in production. We didn’t have to recompete it. There would be no pause, no 18-month pause between demonstrating your technology and having the Department buy it.
And when Lauren brought this to our attention, we thought oh, boy, this really is a game changer. So we flew Lauren to Washington. We had her meet with the head of acquisition policy at the Department of Defense. And in literally three weeks, we changed 60 years of Pentagon policy to create a whole new way to buy technology that, to this day, has been used to purchase $70 billion of technology for the Department of Defense.
- kevin roose
You just said that the reason that Silicon Valley tech companies, some of them didn’t want to work with the military, is because of this sort of arcane and complicated procurement process. But there are also real moral objections among a lot of tech companies and tech workers.
In 2018, Google employees famously objected to something called Project Maven, which was a project the company had planned with the Pentagon that would have used their AI image recognition software to improve weapons and things like that. And there have been just a lot of objections over the years from Silicon Valley to working with the military, to being defense contractors. Why do you think that was? And do you think that’s changed at all?
- chris kirchhoff
To me, it’s completely understandable. So few Americans serve in uniform. Most of us don’t actually know somebody who’s in the military. And it’s really easy here in Silicon Valley, where the weather’s great — sure, you read headlines in the news. But the military is not something that you encounter in your daily life.
And you join a tech company to make the world better, to develop products that are going to help people. You don’t join a tech company assuming that you’re going to be making the world a more lethal place. But at the same time, Project Maven was actually something that I got a chance to work on, and Defense Innovation Unit and a whole group of people led.
- casey newton
Remind us what Project Maven was.
- chris kirchhoff
So Project Maven was an attempt to use artificial intelligence and machine learning to take a whole bunch of footage, surveillance footage that was being captured in places like Iraq, and Afghanistan, and other military missions, and to use machine learning to label what was found in this footage. So it was a tool to essentially automate work that otherwise would have taken human analysts hundreds of hours to do. And it was used primarily for intelligence, and reconnaissance, and force protection.
So Project Maven — this is another misconception. When you talk about military systems, there’s really a lot of unpacking you have to do. The headline that got Project Maven in trouble said, Google working on secret drone project. And it made it look as if Google was partnering with Defense Innovation Unit and the Department of Defense to build offensive weapons to support the US drone campaign. And that’s not at all what was happening. What was happening is Google was building tools that would help our analysts process the incredible amount of data flowing off many different observation platforms in the military.
- kevin roose
Right. But Google employees objected to this. They made a big case that Google should not participate in Project Maven, and eventually the company pulled out of the project. But speaking of Project Maven, I was curious because there was some reporting from Bloomberg this year that showed that the military has actually used Project Maven’s technology as recently as February to identify targets for airstrikes in the Middle East. So isn’t that exactly what the Google employees who were protesting Project Maven back when you were working on it at the Defense Department — isn’t that exactly what they were scared would happen?
- chris kirchhoff
Well, Project Maven, when Google was involved, was very much a pilot R&D project. And it has since transitioned actually into much more of an operational phase. And it’s being used in a number of places. In fact, it’s actually being used in Ukraine, as well, to help the US identify military targets in Ukraine. And so this, again, speaks to, I think, a sea change in Silicon Valley since that original protest of 3,000 Google employees over Project Maven, where the world has changed a lot and not for the better.
We have a land war going on in Europe, on the border of NATO. And, in fact, that war — the Ukraine conflict — has mobilized a lot of people in Silicon Valley to want to try and help support Ukraine’s quest to defend its territory. And so I think we’re in a very different time and moment right now, as people watching the news realize that our security is actually quite a bit more fragile than we might have first imagined.
- kevin roose
I think one reaction that our listeners may have to this is they are very concerned about the use of AI and other technologies by the military. And I also hear from a lot of people at the tech companies who are really concerned about some of these contracts. I remember, during the Project Maven controversy, talking with people at Google who were part of the protest movement. And some things that they would say to me are like, well, if I wanted to work for a defense contractor, I would have gone to go work for Lockheed Martin or Raytheon.
I’m curious. What moral argument would you make to someone who maybe says, look, I did not sign up to make weapons of war, I am an AI engineer, I work on large language models, or I work on image recognition stuff? What do you tell that person if you’re working at the DIU, trying to persuade them that it’s OK to sell or license that technology to the Pentagon?
- chris kirchhoff
I think you tell them that we’re at an extraordinary moment in the history of war where everything is changing. And I’ll just give you a couple data points. A few weeks ago, the United States asked the Ukrainian military to pull back from the front lines all 31 of the M1A1 Abrams tanks that we had deployed to Ukraine to allow their military to better repel a Russian invasion. These are the most advanced tanks, not only in our inventory, but in the inventory of any one of our allies. And they were getting whacked by $2,000 Russian kamikaze drones — $2,000 drones killing tanks.
What does that tell me? That tells me that a century of mechanized warfare that began in the first World War is over. And if you’re building an army that’s full of tanks, you now are the emperor with fewer clothes anyway. And I’ll give you one other — a couple other data points.
Hamas has kicked off the largest ground war in the Middle East — because of its attack in Israel on the 7th of October — since the 1973 Arab-Israeli war, threatening to destabilize the Middle East into a wider war. How did they do it? They did it by taking quadcopters and using them to drop grenades on the generators powering the Israeli border towers. That’s what allowed the fighters to pour over the border.
Another data point — Houthi rebels in Yemen right now are holding hostage 12 percent of global shipping in the Red Sea because they’re using autonomous sea drones, missiles, and loitering munitions to harass shipping. And so we’re at this moment where the arsenal of democracy that we have, this incredibly forceful military that’s full of things like aircraft carriers and tanks, is wielding weapons that are no longer as effective as they were 10 years ago. And if our military doesn’t catch up to our adversaries quickly, we may be in a situation where we don’t have the advantage we once did. And we have to think very differently about our security if that’s the case.
- kevin roose
I mean, it sounds like you’re kind of saying that the way to stop a bad guy with an AI drone is a good guy with an AI drone. Am I hearing you right, that you’re saying that we just — we have to have such overwhelmingly powerful lethal technology in our military that other countries won’t mess with us?
- chris kirchhoff
I totally hear you, and frankly, hear all the people that years ago were affiliated with the Stop Killer Robots movement. I mean, these weapons — they’re awful things. They do awful things to human beings. But, at the same time, there’s a deep literature on something called strategic stability that comes out of the Cold War. And part of that literature focuses on the proliferation of nuclear weapons and the fact that, actually, the proliferation of nuclear weapons has actually reduced great power conflict in the world. Because nobody actually wants to get in a nuclear exchange. Now, would it be a good idea for everybody in the world to have their own nuclear weapon? Probably not. So all these things have limits. But that’s an illustration of how strategic stability — in other words, a balance of power — can actually reduce the chance of conflict in the first place.
- kevin roose
I’m curious what you make of the Stop Killer Robots movement. There was a petition or an open letter that went around years ago that was signed by a bunch of leaders in AI, including Elon Musk, and Demis Hassabis of Google DeepMind. They all pledged not to develop autonomous weapons. Do you think that was a good pledge or do you support autonomous weapons?
- chris kirchhoff
I think autonomous weapons are now kind of a reality in the world. We’re seeing this on the front lines of Ukraine. And if you’re not willing to fight with autonomous weapons, then you’re going to lose.
- casey newton
So there’s this former OpenAI employee, Leopold Aschenbrenner, who recently released a long manifesto called “Situational Awareness.” And one of the predictions that he makes is that by about 2027, the US government would recognize that superintelligent AI was such a threat to the world order that AGI, a sort of artificial general intelligence, would become functionally a project of the national security state, something like an AGI Manhattan Project.
There’s other speculation out there that maybe at some point the government would have to nationalize an OpenAI or an Anthropic. Are you hearing any of these whispers yet? Are people starting to game this out at all?
- chris kirchhoff
I confess, I haven’t made it all the way through the 155 pages of that long manifesto.
- casey newton
Yeah. It was very long. You could summarize it with ChatGPT, though.
- chris kirchhoff
Fantastic. But these are important things to think about. Because it could be that in certain kinds of conflicts, whoever has the best AI wins. And if that’s the case, and if AI is getting exponentially more powerful, then — to take things back to the iPhone and the F-35 — it’s going to be really important that you have the kind of AI of the iPhone variety.
You have the AI that’s new every year. You don’t have the F-35 with the processor that was baked in in 2001, and you’re only taking off on a runway in 2016. So I do think it’s very important for folks to be focused on AI. Where this all goes, though, is a lot of speculation.
- casey newton
If you had to bet in 10 years, do you think that the AI companies will still be private? Or do you think the government will have stepped in and gotten way more interested and maybe taken one of them over?
- chris kirchhoff
Well, I’d make the observation that — we all watched “Oppenheimer,” especially employees at AI firms. They seemed to love that film. And nuclear technology, it’s what national security strategists would call a point technology. It’s sort of zero to one. Either you have it or you don’t.
And AI is not going to end up being a point technology. It’s a very broadly diffuse technology that’s going to be applied not only in weapons systems but in institutions. It’s going to be broadly diffused around the economy. And for that reason, I don’t think — or it’s less likely, anyway, that we’re going to end up in a situation where somebody has the bomb and somebody doesn’t. I think the gradations are going to be smoother and not quite as sharp.
- kevin roose
Part of what we’ve seen in other industries, as technology sort of moves in and modernizes things, is that often things become cheaper. It’s cheaper to do things using the latest technology than it is to do using outdated technology. Do you think some of the work that you’ve done at DIU, trying to modernize how the Pentagon works, is going to result in smaller defense budgets being necessary going forward? Is the $2 trillion or so that the DOD has budgeted for this year, could that be $1 trillion or half a trillion in the coming years because of some of these modernizations?
- chris kirchhoff
You’re giving us a raise, Kevin. I think it’s more like $800 billion.
- kevin roose
Well, I’m sorry. I got that answer from Google’s AI overview, which —
- chris kirchhoff
There you go.
- kevin roose
— also told me to eat rocks and put glue on my pizza.
- chris kirchhoff
We should get the Secretary of Defense to try that. He’d like that answer if he had that large of a budget. You know, it’s certainly true that, for a lot less money now, you can have a really destructive effect on the world, as drone pilots in Ukraine and elsewhere in the world are showing. I think it’s also true that the US military has a whole bunch of legacy weapons systems that unfortunately are kind of like museum relics. Right?
If our most advanced tank can be destroyed by a drone, it might be time to retire our tank fleet. If our aircraft carriers cannot be defended against a hypersonic missile attack, it’s probably not a good idea to sail one of our aircraft carriers anywhere near an advanced adversary. So I think it is an opportune moment to really look at what we are spending our money on at the Defense Department and remember the goal of our nation’s founders, which is to spend what we need to on defense and not a penny more.
- casey newton
So I hear you saying that it’s very important for the military to be prepared technologically for the world we’re in. And that means working with Silicon Valley. But is there anything more specific that you want to share that you think that either side needs to be doing here, or something specific that you want to see out of that collaboration?
- chris kirchhoff
One of the main goals of Defense Innovation Unit was literally to get the two groups talking. Before Defense Innovation Unit was founded, a Secretary of Defense hadn’t been to Silicon Valley in 20 years. That’s almost a generation. So Silicon Valley invents the mobile phone. It invents cloud computing. It invents AI. And nobody from the Defense Department bothers to even come and visit. And that’s a problem. And so just bringing the two sides into conversation is itself, I think, a great achievement.
- kevin roose
Well, Chris, thanks so much for coming on. Really appreciate the conversation. And the book, which comes out on July 9, is called “Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War.”
- chris kirchhoff
Thank you.
- casey newton
Thank you, Chris.
When we come back, we’ll play another round of HatGPT.
[MUSIC PLAYING]
All right, Kevin. Well, it’s time once again for HatGPT.
[MUSIC PLAYING]
- kevin roose
This, of course, is our favorite game. It’s where we draw news stories from the week out of a hat, and we talk about them until one of us gets sick of hearing the other one talk and says, stop generating.
- casey newton
That’s right. Now, normally we pull slips of paper out of a hat. But due to our remote setup today, I will instead be pulling virtual slips of paper out of a laptop. But for those following along on YouTube, you will still see that I do have one of the HatGPT hats here, and I will be using it for comic effect throughout the segment.
- kevin roose
Will you put it on, actually?
- casey newton
Sure.
- kevin roose
If we don’t need it to draw slips out of, you might as well be wearing it.
- casey newton
I might as well be wearing it.
- kevin roose
Yeah. It’ll look so good.
- casey newton
Thank you so much. And thank you once again to the listener who made this for us.
- kevin roose
[LAUGHS]:
- casey newton
You’re a true fan.
- kevin roose
It’s so good.
- casey newton
Perfect. All right, Kevin, let me draw the first slip out of the laptop.
- kevin roose
[LAUGHS]:
- casey newton
Ilya Sutskever has a new plan for safe superintelligence. Ilya Sutskever is, of course, the OpenAI co-founder who was part of the coup against Sam Altman last year. And Bloomberg reports that he is now introducing his next project, a venture called Safe Superintelligence, which aims to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services. Kevin, what do you make of this?
- kevin roose
Well, it’s very interesting on a number of levels, right? In some sense, this is kind of a mirror image of what happened several years ago, when a bunch of safety-minded people left OpenAI after disagreeing with Sam Altman and started an AI safety-focused research company. That, of course, was Anthropic.
And so the newest twist in this whole saga is that Ilya Sutskever, who was very concerned about safety and how to make superintelligence that was smarter than humans, but also not evil, and not going to destroy us, has done something very similar. But I have to say, I don’t quite get it. He’s not saying much about the project. But part of the reason that these companies sell these AI products and services is to get the money to buy all the expensive equipment that you need to train these giant models.
- casey newton
Right.
- kevin roose
And so I just don’t know. If you don’t have any intention of selling this stuff before it becomes AGI, how are you paying for the AGI? Do you have a sense of that?
- casey newton
No, I don’t. I mean, Daniel Gross, who is one of Ilya’s co-founders here, has basically said, don’t worry about fundraising. We are going to be able to fundraise as much as we need for this. So I guess we will see. But, yeah, it does feel a bit strange to have someone like Ilya saying he’s going to build this totally without a commercial motive, in part because he said it before. Right?
This is what is so funny about this: it truly is just a case where the circle of life keeps repeating, where a small band of people get together and they say, we want to build a very powerful AI system and we’re going to do it very safely. And then, bit by bit, they realize, well, actually, we don’t think that it’s being built safely. We’re going to form a breakaway faction. So if you’re playing along at home, I believe this is the second breakaway faction to break away from OpenAI, after Anthropic. And I look forward to Ilya quitting this company eventually to start a newer, even more safe company somewhere else.
- kevin roose
The really, really safe superintelligence company.
- casey newton
Yeah. His next company, you’ve never seen safety like this. They wear helmets everywhere, in the office, and they just have keyboards.
- kevin roose
All right, stop generating.
- casey newton
All right, pick one out of the hat, Kevin.
- kevin roose
All right. Five men convicted of operating Jetflicks, one of the largest illegal streaming sites in the US — this is from “Variety.” Jetflicks was a sort of pirated streaming service that charged $9.99 a month while claiming to host more than 183,000 TV episodes, which is more than the catalogs of Netflix, Hulu, Vudu, and Amazon Prime Video combined.
- casey newton
Ooh, that sounds great. I’m going to open an account.
- kevin roose
[LAUGHS]:
- casey newton
What a deal.
- kevin roose
So the Justice Department says this was all illegal. And the five men who were charged with operating it were convicted by a federal jury in Las Vegas. According to the court documents and the evidence that was presented at the trial, this group of five men was basically scraping piracy services for illegal episodes of TV and then hosting them on their own thing. It does not appear to have been a particularly sophisticated scam. It’s just, what if we did this for a while, charged people money, and then got caught?
- casey newton
Well, I think this is very sad. Because here, finally, you have some people who are willing to stand up and fight inflation. And what does the government do? They come in and they say, knock it off. I will say, though, Kevin, I think these — I can actually point to the mistake that these guys made.
- kevin roose
What’s that?
- casey newton
So instead of scraping these 183,000 TV episodes and selling them for $9.99 a month, what they should have done was feed them all into a large language model. And then you can sell them to people for $20 a month.
- kevin roose
[LAUGHS]:
- casey newton
When these guys get out of prison, I hope they get in touch with me. Because I have a new business idea for them.
- kevin roose
[LAUGHS]: All right. Stop generating.
- casey newton
All right. Here’s a story called “260 McNuggets? McDonald’s Ends Drive-Through Tests Amid Errors.” This is from “The New York Times.” After a number of embarrassing videos showing customers fighting with its AI-powered drive-through technology, McDonald’s announced it was ending its three-year partnership with IBM.
In one TikTok video, friends repeatedly tell the AI assistant to stop as it adds hundreds of Chicken McNuggets to their order. Other videos show the drive-through technology adding nine iced teas to an order, refusing to add a Mountain Dew, and adding unrequested bacon to ice cream. Kevin, what the heck is going on at McDonald’s?
- kevin roose
Well, as a fan of bacon ice cream, I should say, I want to get to one of these McDonald’s before they take this thing down.
- casey newton
Ooh, me too.
- kevin roose
Did you see any of these videos or any of these —
- casey newton
I haven’t. Did you?
- kevin roose
No, but we should watch one of them together.
- casey newton
Yeah.
- kevin roose
Let’s watch one of them.
- archived recording 1
[LAUGHS]: No.
- archived recording 2
Stop!
- kevin roose
The caption is, “The McDonald’s robot is wild.” And it shows their screen at the thing where it has — it is, like, just tallying up McNuggets and starts charging them more than $200.
- casey newton
Here’s my question. Why is everyone just rushing to assume that the AI is wrong here? Maybe the AI knows what these gals need. Because, Kevin, here’s the thing. When superintelligence arrives, we’re going to think that we’re smarter than it. But it’s going to be smart. So there’s going to be a period of adjustment as we sort of get used to having our new AI master.
- kevin roose
Have you been to a drive-through that used AI to take your order yet?
- casey newton
No. I mean, I don’t even really understand — what was the AI here? Was this like, an Alexa thing where I said, McDonald’s, add 10 McNuggets? Or what was actually happening?
- kevin roose
No. So this was a partnership that McDonald’s struck with IBM. And basically, this was technology that went inside the little menu things that have the microphone and the speaker in them. And so instead of having a human say, what would you like, it would just say, what would you like. And then you would say it, and it would recognize it and put it into the system. So you could sort of eliminate that part of the labor of the drive-through.
- casey newton
Got it. Well, look. I, for one, am very glad this happened because for so long now I’ve wondered, what does IBM do? And I have no idea. And now, if it ever comes up again, I’ll say, oh, that’s the company that made the McDonald’s stop working.
- kevin roose
[LAUGHS]: We should say it’s not just McDonald’s. A bunch of other companies are starting to use this technology. I actually think this is probably inevitable, that this technology will get better. They will iron out some of the kinks. But I think there will probably still need to be a human in the loop on this one.
- casey newton
All right. Stop generating.
- kevin roose
OK.
- casey newton
Kevin, let’s talk about what happened when 20 comedians got AI to write their routines. This is in the “MIT Technology Review.” Google DeepMind researchers found that although popular AI models from OpenAI and Google were effective at simple tasks, like structuring a monologue or producing a rough first draft, they struggled to produce material that was original, stimulating, or, crucially, funny. And I’d like to read you an example LLM joke, Kevin.
- kevin roose
Please.
- casey newton
I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation.
- kevin roose
[LAUGHS]: Waka, waka, waka.
- casey newton
Hey, I got a laugh out of you.
- kevin roose
[LAUGHS]:
- casey newton
Kevin, what do you make of this? Are you surprised that AI isn’t funnier?
- kevin roose
No, but this is interesting. It’s like, this has been something that critics of large language models have been saying for years. It’s like, well, it can’t tell a joke. And, you know, I should say, I’ve had funny experiences with large language models, but never after asking them to tell me a joke.
- casey newton
Yeah. Remember when you said to Sydney, take my wife, please?
- kevin roose
[LAUGHS]:
I get no respect, I tell ya. No, but this is an interesting one, because this was a study that was actually done by researchers at Google DeepMind. And basically, it appears that they had a group of comedians try writing some jokes with their language models.
And in the abstract, it says that most of the participants in this study felt that the large language models did not succeed as a creativity support tool, producing bland and biased comedy tropes, which they describe in this paper as being akin to cruise ship comedy material from the 1950s, but a bit less racist. So they were not impressed, these comedians, by these language models’ ability to tell jokes. You’re an amateur comedian. Have you ever used AI to come up with jokes?
- casey newton
No, I haven’t. And I have to say, I think I understand the technological reason why these things aren’t funny, Kevin, which is that comedy is very up to the minute. Right? For something to be funny, it’s typically something that is on the edge of what is currently thought to be socially acceptable. And what is socially acceptable or what is surprising within a social context, that just changes all the time.
And these models, they are trained on decades, and decades, and decades of text. And they just don’t have any way of figuring out, well, what would be a really fresh thing to say. So maybe they’ll get there eventually, but as they’re built right now, I’m truly not surprised that they’re not funny.
- kevin roose
All right, stop generating. Next one. Waymo ditches the waitlist and opens up its robotaxis to everyone in San Francisco. This is from “The Verge.” Since 2022, Waymo has made rides in its robotaxi service available only to people who were approved off a waitlist. But, as of this week, they are opening it up to anyone who wants to ride in San Francisco. Casey, what do you make of this?
- casey newton
Well, I am excited that more people are going to get to try this. This has, as you’ve noted, Kevin, become kind of the newest tourist attraction in San Francisco: when you come here, you see if you can find somebody to give you a ride in one of these self-driving cars. And now everyone is just going to be able to come here and download the app and use it immediately.
I have to say, I am scared about what this is going to mean for the wait times on Waymo. I’ve been taking Waymo more lately, and it often will take 12 or 15 or 20 minutes to get a car. And now that everyone can download the app, I’m not expecting those wait times to go down.
- kevin roose
Yeah. I hope they are also simultaneously adding more cars to the Waymo network because this is going to be very popular. I’m a little —
- casey newton
You’re saying they need “way mo” cars.
- kevin roose
They do. I’m worried about the wait times, but I’m also worried about the condition of these cars. Because I’ve noticed, in my last few rides, they’re a little dirtier.
- casey newton
Oh, wait. Really?
- kevin roose
Yeah. I mean, they’re still pretty clean, but I did see a takeout container in one the other day.
- casey newton
Really? Oh, my god.
- kevin roose
So I just — I want to know how they plan to keep these things from becoming filled with people’s crap.
- casey newton
All right, stop generating.
- kevin roose
All right, last one. This one comes from “The Verge.” TikTok’s AI tool accidentally let you put Hitler’s words in a paid actor’s mouth. TikTok mistakenly posted a link to an internal version of an AI digital avatar tool that apparently had zero guardrails. This was a tool that was supposed to let businesses generate ads with paid actors, using an AI voice-dubbing feature that would make the actors repeat whatever you wanted them to say, endorse your product or whatever. But very quickly, people found out that you could use this tool to have the avatars repeat excerpts of “Mein Kampf” or Bin Laden’s letter to America, tell people to drink bleach, and tell them to vote on the wrong day. [LAUGHS]
- casey newton
And that was its recipe for a happy Pride celebration.
- kevin roose
[LAUGHS]:
- casey newton
Listen. Obviously, this is a very sort of silly story. It sounds like everything involved here was a mistake. And I think if you’re making some sort of digital AI tool that is meant to generate ads, you do want to put safeguards around that. Because, otherwise, people will exploit it. That said, Kevin, I do think people need to start getting comfortable with the fact that people are just going to be using these AI creation tools to do a bunch of kooky and crazy stuff.
- kevin roose
Like what?
- casey newton
Like, in the same way that people use Photoshop to make nude or offensive images — and we don’t storm the gates of Adobe saying, shut down Photoshop — the same thing is going to happen with these digital AI tools. And while I do think that there are some notable differences — it varies on a case-by-case basis, and if you’re making a tool for creating ads, it feels different — there are just going to be a lot of digital tools like this that use AI to make stuff. And other people are going to use them to make offensive stuff. And when they do, we should hold the people accountable, perhaps, more than we hold the tool accountable.
- kevin roose
Yeah, I agree with that. And I also think this sort of product is not super worrisome to me. I mean, obviously it should not be reading excerpts from “Mein Kampf.” Obviously, they did not mean to release this. I assume that when they do fix it, it will be much better. But this is not a thing that is creating deepfakes of people without their consent. This is a thing where if you have a brand, you can choose from a variety of stock avatars that are created from people who actually get paid to have their likenesses used commercially.
The specific details of this one don’t bother me that much, but it does open up some new licensing opportunities for us. We could have an AI set of avatars that could be out there advertising crypto tokens or whatever. And I, for one, am excited to see how people use that.
- casey newton
Oh, man. Well, and if TikTok weren’t banned, we could probably make a lot of money that way. But instead, we’re out of luck.
- kevin roose
Yeah. Get it while it’s good. All right.
- casey newton
Close up the hat!
[MUSIC PLAYING, APPLAUSE]
- kevin roose
“Hard Fork” is produced by Rachel Cohn and Whitney Jones. We’re edited this week by Larissa Anderson. We’re fact-checked by Caitlin Love. Today’s show was engineered by Corey Schreppel. Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell.
Our audience editor is Nell Gallogly. Video production by Ryan Manning, Sawyer Roque, and Dylan Bergersen. You can watch this full episode on YouTube, at youtube.com/hardfork. You can see Casey’s cool hat. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.
[MUSIC PLAYING]
But basically, these companies have come under a lot of criticism for allowing people to create songs without compensating the original artists. Like other AI companies, these companies do not say where they’re getting their data. Suno is releasing statements using words like “transformative” and “completely new outputs,” basically arguing that this is all fair use and that they don’t owe anything to the holders of the copyrighted songs that they were presumably using to train their models. But we’ll see how the courts see that.
- casey newton
Well, and if you’ve never heard one of these, Kevin, I think we — and I know you have — we should play a clip, I think, just so people get a sense of just how closely these services can mimic artists you might be familiar with. So, Kevin, we’re about to hear a song called “Prancing Queen,” and this was made with Suno.
- ["prancing queen" playing]
- archived recording
(SINGING) You can dance
You can jive
Having the time of your life
Ooh, see that girl
Watch that scene
Take in the dancing queen
Friday night and the lights are low
Looking out for a place to go.
- casey newton
Can you believe what they’re doing to ABBA, Kevin?
- kevin roose
[LAUGHS]: You know, I actually saw an ABBA cover band once, many years ago. And that was better than the ABBA cover band.
- casey newton
You know what I liked about that clip is it reminded me — if I had had, like, six beers and someone shoved me onto a karaoke stage and said, sing “Dancing Queen” from memory, that’s exactly what it would have sounded like.
- kevin roose
[LAUGHS]:
- casey newton
So we wanted to get to the bottom of this, so we reached out to the RIAA. And they offered up Chairman and CEO Mitch Glazer, so we’re going to bring him on and ask him what this lawsuit is all about.
- kevin roose
Let’s do it.
[MUSIC PLAYING]
- casey newton
Mitch Glazer. Welcome to “Hard Fork.”
- mitch glazer
Thanks. Thanks for having me.
- casey newton
So make your case that these two AI music companies violated copyright law.
- mitch glazer
Pretty easy case to make. They copied basically the entire history of recorded music. They stored it. Then they used it by matching it to prompts so that they rejiggered the ones and zeros. And, basically, they took chicken and made chicken salad and then said they don’t have to pay for the chickens.
- casey newton
Right.
- [laughs]
Well, some people out there say that this is a transformative use, that no matter what you put into a Udio or a Suno, you’re not going to get back the original track. You’re going to get something that has been transformed. What do you make of that case?
- mitch glazer
Well, there is such a thing as transformative use. It’s actually a pretty important doctrine. It’s supposed to help encourage human creativity, not substitute for it. There was a really important Supreme Court case on this issue, thank god, that just happened last year, where they kind of dispelled this notion that any time you take something and splash a little bit of color on it, it’s transformative. That’s not what that means. And this is very similar.
- kevin roose
Mitch, you said that these companies have scraped the entire sort of history of recorded music and used them to train their models. But I read through the complaint that came out, and there isn’t direct evidence. There’s no smoking gun. They haven’t said outright, yes, we did train on all this copyrighted music.
Presumably, that is something you hope will come out in the course of this case. But do you actually need to be able to prove that they did use copyrighted music in order to win this case? Can the lawsuit succeed without that?
- mitch glazer
I think, ultimately, we do have to show that they copied the music, but they can’t hide their inputs and then say, sorry, we’re not going to tell you what we copied. So you’re not allowed to sue us for what we copied. That, they can’t do. So what we were able to do was show in the complaint that there’s no way they could have come out with this output without copying all of this on the input side. It’s sort of this equitable doctrine in fancy legal terms that says, you’re not allowed to hide the evidence and then say you can’t sue me.
- casey newton
Right. Well, on that point, one of my favorite parts of the Suno lawsuit is where it discusses Suno reproducing what are called producer tags, which is when a producer says their name at the start or end of a song. What does it mean that Suno can nail a perfect Jason Derulo?
- mitch glazer
[LAUGHS]: Well, thank god Jason derulo likes to say his name in the beginning of his songs. Right? And in “The Blender,” that piece wasn’t ripped apart enough. And so that was sort of one of those smoking guns where we’re able to show if you look at the output, right, and Jason Derulo’s tag is in the output, I think they copied the Jason Derulo song on the input.
- kevin roose
Yeah. So one of the arguments we’ve heard from AI companies — not just AI music companies, but also companies that train language models — is that these machines, these models, they’re basically learning the way that humans learn. They’re not just regurgitating copyrighted materials. They are learning to generate wholly new works.
And I want to just read you Suno’s response that they gave to “The Verge” and have you share your thoughts on it. Suno said, quote, “We would have been happy to explain this to the corporate record labels that filed this lawsuit and, in fact, we tried to do so. But instead of entertaining a good faith discussion, they reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.” What do you make of that?
- mitch glazer
Yeah, I love this argument. I love that machines are original and machines and humans are the same. If you just use human words around machines, like learning, well, then there’s no difference between us. If you read a book, it’s the same as copying it on the xerox machine, and then mixing all the words around, and then coming out with something new. Has nothing to do with the fact that they actually happened to take all of these human created works.
Machines don’t learn. Right? Machines copy, and then they basically match a user’s prompt with an analysis of patterns in what they’ve copied. And then they finish the pattern based on predictive algorithms or models. Right? That’s not what humans do. Humans have lived experiences. They have souls. They have genius.
They actually listen, get inspired, and then they come out with something different, something new. They don’t blend around patterns based on machine-based algorithms. So nice try, but I don’t think that argument is very convincing. And I also love that they say that the creators and their partners are the ones that have resorted to the old legal playbook. They’re not resorting to, oh, we can do this. It’s based on fair use. It’s transformative. We’re going to seek forgiveness instead of permission.
- casey newton
Well, I mean, you also have the investor in the company who you quote in the lawsuit saying — because he said this to a news outlet — I don’t know if I would have invested in this company if he had a deal with the record labels. Because then they probably wouldn’t have needed to do what they needed to do, which I assume he sort of meant Hoover up all this music without paying for it.
- mitch glazer
Yeah. That’s in the legal world, what we call a bad fact.
- archived recording
[LAUGHS]:
- mitch glazer
That is a bad fact for the other side. You don’t want your investor saying, gee, if they had really done this the legal way, I don’t think I would have invested because it’s just too hard. It’s just too hard to do it the legal way.
- kevin roose
Mitch, we’ve seen other lawsuits come out in the past year from media companies, including “The New York Times,” which sued OpenAI and Microsoft last year, alleging similar types of copyright violations. How similar or different from the sort of text-based copyright arguments is the argument that you are making against these AI music generation companies?
- mitch glazer
I think the arguments are the same, that you have to get permission before you copy it, just basic copyright law. The businesses are very different. And I think looking at the public reports on the licensing negotiations going on between the news media and companies like OpenAI, news is dynamic. It has to change every single day. And so there needs to be a feed every single day for the input to actually be useful for the output.
Music is catalog. Right? You copy the song once. It’s there forever. You don’t have to change it. You don’t have to feed the beast every single day. So I think the business models are quite different, but I think that the legal basis is very similar.
- casey newton
Well, and does that suggest that, for you all, it’s actually essential that you are able to capture the value of the back catalogs for training, whereas for these media outlets they might have a better chance of securing ongoing revenue?
- mitch glazer
I think that’s right. I also think that we have an artistic intent element that’s very, very different. It’s one thing for somebody to say, you can copy this into your input. It’s another to say that you can then change it so that the output uses the work of the artist, but it doesn’t match their artistic intent.
To say that these — sort of what Kevin was saying earlier. They’re saying, look, we’re just — we had discussions. What’s your problem? Well, the problem is we work with human artists who care about the output. And so they need to have a role and a place in deciding how their art’s being used.
- kevin roose
Yeah.
- casey newton
My understanding is that it’s actually gotten much more difficult and expensive to sample lately than it used to be in ways that don’t really like. I’d probably like to see more sampling than we do. But it seems like something changed around the time that the song “Blurred Lines” came out, and now all of a sudden everybody has to like — even just a whisper of familiarity. Is there anything sort of in whatever led to that situation that you expect you’ll bring to this lawsuit?
- mitch glazer
I think sampling is actually a pretty good example because samples are licensed today. And there’s plenty of sampling going on. Now, does it mean that anybody can sample anything they want without permission? No. Do we have to have clearance departments that go out, whether you’re talking about a video, or a movie, or another song, and get those rights especially from publishers and prior artists? Yes, you do.
That’s called ownership. And you actually get to control your own art and what you do, and it’s not a simple process all the time. It takes work. We I’m sure that our companies get frustrated and trying to do clearances, but it’s what you got to do.
- kevin roose
Yeah there have been some companies that have faced copyright challenges in AI generative products that have responded by basically limiting the products, by saying you can’t refer to a living artist in a prompt. It won’t give you a response, basically to try to quell some of these concerns. Would that satisfy your concerns or are you trying to shut these things down altogether?
- mitch glazer
They’re trying to confuse the issue. They’re pretending that this is about the output. The lawsuit is about the input. Right? So actually, by saying you can’t type Jason Derulo’s name, you can’t type Adele’s name, what they’re basically doing there is further hiding the input. They’re making it so that you can’t see what they copied. And they’re pretending that this is all about the output in order to say, look, we’re putting guardrails on this thing.
That’s not what this lawsuit’s about. This lawsuit is about them training their model on all of these sound recordings, not on limiting prompts on the output to further hide the input. But it’s clever. It’s clever.
- kevin roose
OK. So you want to shut this down.
- mitch glazer
Well, I don’t think that — we want to — we call it an injunction, Kevin. We would like to shut down their business as it’s operating now, which is something illegally trained on our sound recordings with output that doesn’t reflect the artists integrity. Yes.
Does that mean that we want to shut down AI generators or AI companies? No. There’s 50 companies that are already licensed by the music industry. And I think it’s important — and this differs a lot from, I think, the old days — but nobody’s scared of this technology as in they want to shut down the technology. Everybody wants to use the technology.
But they definitely see good AI versus bad AI. Good AI complements artists, helps them stretch music, helps assists them in the creation of music. Bad AI takes from them, gives no attribution, no compensation, asks no permission, and then generates something that’s a bunch of garbage.
- kevin roose
Yeah. I know of some artists who would say they want to shut down this stuff entirely, that they don’t think there’s any good form of it. But you mentioned the old days. And so I want to ask you about this. I think a lot of my fellow millennials think of the RIAA as the group that went around suing teenagers for pirating music during the Napster era.
The RIAA has also sued a bunch of other file sharing and music sharing platforms, and actually fought the initial wave of streaming music services like Spotify because there was this fear that these all-you-can-eat streaming services would eat into CD sales. Now, of course, we know that streaming wasn’t the death of music or music labels, that actually it ended up being — sort of saving the music industry.
Do you think there’s a danger here, that actually these AI music generation programs could ultimately be great for music labels just like Spotify was, and that you might be trying to cut off something productive before it’s actually had the chance to mature?
- mitch glazer
I don’t think it’s really the same at all. I think that there’s an embrace of AI, and there was well before these generators came out or well before OpenAI, especially within the tech content partnerships that have existed, and have grown, and matured, and gotten sophisticated through the streaming age.
So even though the RIAA’s job is to be the boogeyman and to go out there and enforce rights, which we do with zeal and hopefully a smile doing our job — here, I think that really what we’re trying to do is create a marketplace like streaming, where there are partnerships and both sides can grow and evolve together. Because the truth is, you don’t have one without the other.
Record companies don’t control their prices. They don’t control their distribution. They’re now gateways, not gatekeepers. The democratization of the music industry has changed everything. And I think they’re seeking the same kind of relationships with AI companies that they have with streaming companies today.
- kevin roose
What would a good model look like? There are reports this week that YouTube is in talks with record labels about paying them a lot of money to license songs for their AI music generation software. Do you think that’s the solution here, that there will be sort of these platforms that pay record labels and then they get to use those labels’ songs in training their models? Do you think it’s fine to use AI to generate music as long as the labels get paid? Or is there sort of a larger objection to the way that these models work at all?
- mitch glazer
I think it works as long as it’s done in partnership with the artists and, at the end of the day, it moves the ball forward for the label and the artist. The YouTube example is interesting, because that’s really geared towards YouTube Shorts. Right? It’s geared towards fans being able to use generated music to put with their own videos for 15 or 30 seconds. That’s an interesting business model.
BandLab is a tool for artists, Splice, Beatport, Focusrite, Output, Waves, Eventide — every digital audio workstation that’s now using AI — Native Instruments, Oberheim. I mean, there are so many AI companies that have these bespoke agreements and different types of tools that are meant to be done with the artistic community, that I think the outliers are the Sunos and the Udios, who frankly are not very creative in trying to help with human ingenuity. Instead, they’re just substitutional to make money for investors by taking everybody else’s stuff.
- casey newton
We’ve seen some pretty different reactions to the rise of AI among artists. Some people clearly seem to want no part of it. On the other hand, we’ve seen musicians like Grimes saying, here, take my voice. Make whatever you want. We’ll figure out a way to share the royalties if any of your songs becomes a hit. I’m curious, if you’re able to get the deals that you want, do you expect any controversy within the artist community and artists saying, hey, why you sell my back catalog to this blender? I don’t to be part of that.
- mitch glazer
Yeah. I think, look, artists are entitled to be different. And there are going to be artists — I think. Kevin, you said earlier, you know artists who are so scared of this they just — they do want to shut the whole thing down. They just don’t want their music and their art touched. Right?
I know directors of movies who can’t stand that the formatting is different for an airplane. That’s their baby and they just don’t want it. Then there are artists like Grimes who are like, I’m finding experimental. I’m fine having fans take it, and change it, and do something with it.
All of that is good. They’re the artist, right? I mean, it’s their art. Our job is to invest in them, partner with them, help find a market for them. But at the end of the day, if you’re trying to find a market for an artist’s work that they don’t — and they don’t want that work in the market, it’s not going to work.
- kevin roose
Yeah. Have you listened to much AI generated music? Are there any songs you’ve heard that you thought, that’s actually kind of good?
- mitch glazer
Yeah. I think in the sort of overdubbing voice and likeness thing, that it’s a little bit better than some of the simple prompts on these AI generators like Udio and Suno. But I heard a — I Billie Eilish’s voice on a Revivalist song, and I was like, wow, she should cover this song. It was really great. Right? It just kind of seemed like a perfect fit, and it’s fun to play with those things.
But again, like in that case, I think Billie Eilish gets to decide if her voice is used on something. I think she gets to decide if she wants to do a cover. I don’t think that it’s up to Overdub to be able to do that. I did do a bunch of prompts, as you can imagine, on some of these services, trying to see what happens if you just put in a few words, like a simple country song. And then what happens if you put in 20 different descriptors?
And what’s amazing is you can — every 10 seconds you get a new song. So if you don’t like it, just put in a few more words and it rejiggers the patterns. And you can start getting to a point where you’re like, OK, it’s not human and the lyrics kind of suck. But it’s not terrible.
We are only six months into the huge progression of this technology. And if you had listened to a prompt where you were allowed to put in Jason Derulo or Mariah Carey six months ago versus now, you would notice a marked improvement. And that’s one of the reasons why we needed to get out there now. We needed to bring this suit. We need the courts to settle this issue so that we can move forward on a thriving marketplace before the technology gets so good that it is a seismic threat to the industry.
- casey newton
I’ve seen a lot of support for this lawsuit among people I follow who are more inclined to side with artists and musicians. But there have also been some tech industry folks who think this is all kind of — it sounds like the RIAA is just sort of anti-progress, anti-technology. I even saw one tech person call you the ultimate decels, which is like — in Silicon Valley, that’s sort of the biggest insult. Decels are people who want to basically stop technological progress, basically Luddites. What do you make of that line of argument from the Valley?
- mitch glazer
This has been the same argument that the Valley’s had since 1998. To me, that’s a 30-year-old argument. If you look at the marketplace today, where Silicon Valley thrives is when rights are in place and they form partnerships. And then they grow into sophisticated global leaders where they can tweak every couple of years their deals, and come up with new products that allow them to feed these devices that are nothing without the content on them.
There’s always sort of this David versus Goliath thing, no matter what side you’re on. But if you think about it, music, which is a $17 billion industry in the United States — I think one tech company’s cash on hand is five times that, not to mention they’re $289 billion market caps. Right? But they are completely dependent on the music that these geniuses create in order to thrive. And to say that these creators are stopping their progress, I think is sort of laughable.
I think what’s much more threatening is if you move fast and break things without partnerships, what are you threatening on the tech side with a no holds barred, culture destroying, machine-led world? It sounds pretty gross to me.
- casey newton
So what happens next? The lawsuits have been filed. This stuff tends to take a long time. But what can we look forward to? Will there be sort of scandalous emails unearthed in discovery that you’ll post to your website? Or what can we look forward to here?
- mitch glazer
Well, moving forward in discovery, I think we’ll be prohibited from posting anything to our —
- casey newton
Aw, man.
- mitch glazer
I know. You think you’re disappointed.
- kevin roose
If you want to just send them to HardFork@NYTimes.com, that’s fine.
- mitch glazer
I live for that stuff. But we will, of course, follow the rules. But, you know, we have filed in the districts where these companies reside. And so I hope that within a year or so we will actually get to the meat of this. Because if you think about it, the judge has to decide when they raise fair use as a defense. Is this fair use or not? Right?
And that is something that has to be part of the beginning, part of the lawsuit. So we’re hopeful that — when I say a short time, in legal terms, that means a year or two. But we’re hoping that in a short time we will actually get a decision, and that it sends the right message to investors and to new companies, like there’s a right way and a wrong way to do this. Doors are open for the right way.
- kevin roose
Yeah. I think there’s a story here about startups that are sort of moving fast, breaking things, asking for forgiveness, not permission. But I also think there’s a story here that maybe we haven’t talked about, about restraint. Because I know that a lot of the big AI companies had tools years ago that could generate music, but they did not release them.
I remember hearing a demo from someone who worked at the big AI companies — one of the big AI companies maybe two years ago of one of these kinds of tools. But I think they understood. They were scared because they knew that the record industry is very organized. It has this kind of history of litigation.
And they sort of understood that they were likely to face lawsuits if they let this out into the public. So have you had discussions with the bigger AI companies, the more established ones that are working on this stuff? Or are they just sort of intuiting correctly that they would have a lot of legal problems on their hands if they let this stuff out into the general public?
- mitch glazer
You know, you’re raising a point that I don’t think is discussed often enough, which is that there are companies out there that deserve credit for restraint. And part of it is that they know that we would bring a lawsuit. And in the past, we haven’t been shy, and that’s useful.
But part of it is also because these are their partners now. There are real business relationships here and human relationships here between these companies. And so their natural — I think they’re moving towards a world where their natural instinct is to approach their partners and see if they can work with them.
I know that YouTube did its Dreamcast experiment, approached artists, approached record companies. That was sort of the precursor or the beta to whatever they might be discussing now for what’s going to go on Shorts that we talked about earlier. And I’m sure that there are many others. But you’re right. Yes, there are going to be companies like Suno and Udio that just seek investment, want to make profit, and steal stuff. But there is restraint and constructive action by a lot of companies out there who do view the creators as their partners.
- kevin roose
Well, it’s a really interesting development and I look forward to following it as it progresses.
- casey newton
Thanks, Mitch.
- kevin roose
Thanks so much, Mitch. Thanks for coming by.
- mitch glazer
Thanks, guys. Bye. [MUSIC PLAYING]
- casey newton
When we come back, we’re going inside the Pentagon with Chris Kirchhoff, the author of “Unit X.” Are we allowed inside the pentagon?
[MUSIC PLAYING]
- kevin roose
Well, Casey, let’s talk about war.
- casey newton
Let’s talk about war. And what is it good for?
- kevin roose
[LAUGHS]:
- casey newton
Some say absolutely nothing. Others write books arguing the opposite.
- kevin roose
Yeah. So I’ve been wanting to talk about AI and technology and the military for a while on the show now. Because I think what’s really flying under the radar of the mainstream tech press these days is that there’s just been a huge shift in Silicon Valley toward making things for the military, and the US military in particular.
Years ago, it was the case that most of the big tech companies, they were sort of very reluctant to work with the military, to sell things to the Department of Defense, to make products that could be used in war. They had a lot of ethical and moral quandaries about that, and their employees did, too. But we’ve really seen a shift over the past few years.
There are now a bunch of startups working in defense tech, making things that are designed to be sold to the military and to national security forces. And we’ve also just seen a big effort at the Pentagon to modernize their infrastructure, to update their technology, to not get beat by other nations when it comes to having the latest and greatest weapons.
- casey newton
Yeah. And also, Kevin, just the rise of AI in general, I think, has a lot of people curious about what the military thinks of what is going on out here, and is it eventually going to have to adopt a much more aggressive AI strategy than the one it has today.
- kevin roose
Yeah. So a few weeks ago I met a guy named Chris Kirchhoff. He’s one of the authors, along with Raj Shah, of a book called “Unit X.” Chris is sort of a longtime defense tech guy. He was involved in a number of tech projects for the military. He worked at the National Security Council during the Obama administration.
Fun fact — he was the highest ranking openly gay advisor in the Department of Defense for years. And, most importantly, he was a founding partner of something called the Defense Innovation Unit, or DIU. It also goes by the name Unit X, which is basically this little experimental division that was set up about a decade ago by the Department of Defense to try to basically bring the Pentagon’s technology up to date.
And he and Raj Shah, who was another founding partner of the DIU, just wrote a book called “Unit X,” that basically tells the story of how the Pentagon sort of realized that it had a problem with technology and set out to fix it. So I just thought we should bring in Chris to talk about some of the changes that he has seen in the military when it comes to technology and in Silicon Valley when it comes to the military.
- casey newton
Let’s do it.
[MUSIC PLAYING]
- kevin roose
Chris Kirchhoff, welcome to “Hard Fork.”
- chris kirchhoff
Glad to be here.
- kevin roose
So I think people hear a lot about the military and technology, and they kind of assume that there are very futuristic things happening inside the Pentagon that we’ll hear about at some point in the future. But a lot of what’s in your book is actually about old technology and how underwhelming some of the military’s technological prowess is.
Your book opens with an anecdote about your co-author actually using a compact digital assistant because it was better, it had better navigation tools than the navigation system on his $30 million jet. That was how you introduced the fact that the military is not quite as technologically sophisticated as many people might think. So I’m curious. When you first started your work with the military, what was the state of the technology?
- chris kirchhoff
Well, it’s really interesting. You go to the movies — and we’ve all seen “Mission Impossible” and “James Bond.” And wouldn’t it be wonderful if that actually were the reality behind the curtain? But when you open up the curtain, you realize that actually, in this country, there are two entirely different systems of technological production. There’s one for the military and then there’s one for everything else.
And to dramatize this on the image of our book, “Unit X,” we have an iPhone. And on top of the iPhone is sitting an F-35, the world’s most advanced fighter jet, a fifth generation stealth fighter known as a flying computer for its incredible sensor fusion and weapons suites. But the thing about the F-35 is that its design was actually finalized in 2001, and it did not enter operations until 2016. And a lot happened between 2001 and 2016, including the invention of the iPhone, which, by the way, has a faster processor in it than the F-35.
And if you think about the F-35 over the subsequent years, there’s been three technological upgrades to it. And we’re now — what we’re almost in iPhone 16 season. And once you understand that, you understand why it was really important that the Pentagon thought about establishing a Silicon Valley office to start accessing this whole other technology ecosystem that is faster and generally a lot less expensive than the firms that produce technology for the military.
- kevin roose
Yeah. I remember, years ago, I interviewed your former boss, Ash Carter, the former Secretary of Defense who died in 2022. And I sort of expected that he’d want to talk about all the newfangled stuff that the Pentagon was making — autonomous drones, stealth bombers.
But instead, we ended up talking about procurement, which is basically how the government buys stuff, whether it’s a fighter jet or an iPhone. And I remember him telling me that procurement was just unbelievably complicated, and it was a huge part of what made government and the military in particular so inefficient and kind of backwards technologically. Describe how the military procures things, and then what you discovered about how to maybe short circuit that process or make it more efficient.
- chris kirchhoff
If you’re looking to buy a nuclear aircraft carrier or a nuclear submarine, you can’t really go on Amazon and price shop for that.
- casey newton
I learned that the hard way, by the way.
- chris kirchhoff
Should have upped your credit limit, Casey.
- casey newton
Yeah.
- chris kirchhoff
And so, in those circumstances, when the government is representing the taxpayer and buying one large military system, a multibillion dollar system from one vendor, it’s really important that the taxpayer not be overcharged. And so the Pentagon has developed a really elaborate system of procurement to ensure that it can control how production happens, the cost of individual items.
And that works OK it you’re in a situation where you have the government and one firm that makes one thing. It doesn’t make any sense, though, if you’re buying goods that multiple firms make or that are just available on the consumer market. And so one of the challenges we had out here in Silicon Valley, when we first did a defense innovation unit, was trying to figure out how to work with startups and tech companies who, it turns out, weren’t interested in working with the government.
And the reason why is that the government typically buys defense technology through something called the Federal Acquisition Rules, which is a little bit like the Old Testament. It’s this dictionary-size book of regulations. Letting a contract takes 18 to 24 months. If you’re a startup, your investors tell you not to go down that path for a couple reasons.
One, you’re not going to make enough money before your next valuation. You’re going to have to wait too long. You’re going to go out of business before the government actually closes the sale. And two, even if you get that first contract, it’s totally possible another firm with better lobbyists is going to take it right back away from you. So at Defense Innovation Unit, we had to figure out how to solve that paradox.
- kevin roose
Part of what I found interesting about your book was just the sort of accounts that you gave of these sort of clever loopholes that you and your team found around some of the bureaucratic slowness at the Pentagon, and in particular this loophole that allowed you to purchase technology much, much more quickly that one of your staffers found. Tell that story, and maybe that’ll help people understand the systems that you were up against.
- chris kirchhoff
It’s an amazing story. We knew when we arrived in Silicon Valley that we would fail unless we figured out a different way to contract with firms. And our first week in the office, this 29-year-old staff member named Lauren Dailey, the daughter actually of a tank commander whose way of serving was to become a civilian in the Pentagon and work on acquisition, happened to be up — because she’s a total acquisition nerd — late at night reading the just-released National Defense Authorization Act, which is another dictionary-sized compendium of law that comes out every year.
And she was flipping through it, trying to find new provisions in law that might change how acquisition worked. And sure enough, in section 815 of the law, she found a single sentence that she realized somebody had placed there that changed everything. And that single sentence would allow us to use a completely different kind of contracting mechanisms called “other transaction authorities” that were actually first invented during the space race to allow NASA, during the Apollo era, to contract with mom and pop suppliers.
And so she realized that this provision would allow us not only to use OTAs to buy technology, but the really important part is that if it worked, it was successful in the pilot, we could immediately go to buy it at scale, to buy it in production. We didn’t have to recompete it. There would be no pause, no 18-month pause between demonstrating your technology and having the Department buy it.
And when Lauren brought this to our attention, we thought oh, boy, this really is a game changer. So we flew Lauren to Washington. We had her meet with the head of acquisition policy at the Department of Defense. And in literally three weeks, we changed 60 years of Pentagon policy to create a whole new way to buy technology that, to this day, has been used to purchase $70 billion of technology for the Department of Defense.
- kevin roose
You just said that the reason that Silicon Valley tech companies, some of them didn’t want to work with the military, is because of this sort of arcane and complicated procurement process. But there are also real moral objections among a lot of tech companies and tech workers.
In 2018, Google employees famously objected to something called Project Maven, which was a project the company had planned with the Pentagon that would have used their AI image recognition software to improve weapons and things like that. And there have been just a lot of objections over the years from Silicon Valley to working with the military, to being defense contractors. Why do you think that was? And do you think that’s changed at all?
- chris kirchhoff
To me, it’s completely understandable. So few Americans serve in uniform. Most of us don’t actually know somebody who’s in the military. And it’s really easy here in Silicon Valley, where the weather’s great — sure, you read headlines in the news. But the military is not something that you encounter in your daily life.
And you join a tech company to make the world better, to develop products that are going to help people. You don’t join a tech company assuming that you’re going to be making the world a more lethal place. But at the same time, Project Maven was actually something that I got a chance to work on, and Defense Innovation Unit and a whole group of people led.
- casey newton
Remind us what Project Maven was.
- chris kirchhoff
So Project Maven was an attempt to use artificial intelligence and machine learning to take a whole bunch of footage, surveillance footage that was being captured in places like Iraq, and Afghanistan, and other military missions, and to use machine learning to label what was found in this footage. So it was a tool to essentially automate work that otherwise would have taken human analysts hundreds of hours to do. And it was used primarily for intelligence, and reconnaissance, and force protection.
So Project Maven — this is another misconception. When you talk about military systems, there’s really a lot of unpacking you have to do. The headline that got project maven in trouble said, Google working on secret drone project. And it made it look as if Google was partnering with Defense Innovation Unit and the Department of Defense to build offensive weapons to support the US drone campaign. And that’s not all what was happening. What was happening is Google was building tools that would help our analysts process the incredible amount of data flowing off many different observation platforms in the military.
- kevin roose
Right. But Google employees objected to this. They made a big case that Google should not participate in Project Maven, and eventually the company pulled out of the project. But speaking of Project Maven, I was curious because there was some reporting from Bloomberg this year that showed that the military has actually used Project Maven’s technology as recently as February to identify targets for airstrikes in the Middle East. So isn’t that exactly what the Google employees who were protesting Project Maven back when you were working on it at the Defense Department — isn’t that exactly what they were scared would happen?
- chris kirchhoff
Well, Project Maven, when Google was involved, was very much a pilot R&D project. And it has since actually transitioned into much more of an operational phase. And it’s being used in a number of places. In fact, it’s actually being used in Ukraine, as well, to help the US identify military targets in Ukraine. And so this, again, speaks to, I think, a sea change in Silicon Valley since that original protest of 3,000 Google employees over Project Maven. The world has changed a lot, and not for the better.
We have a land war going on in Europe, on the border of NATO. And, in fact, that war — the Ukraine conflict — has mobilized a lot of people in Silicon Valley to want to try and help support Ukraine’s quest to defend its territory. And so I think we’re in a very different time and moment right now, as people watching the news realize that our security is actually quite a bit more fragile than we might have first imagined.
- kevin roose
I think one reaction that our listeners may have to this is they are very concerned about the use of AI and other technologies by the military. And I also hear from a lot of people at the tech companies who are really concerned about some of these contracts. I remember, during the Project Maven controversy, talking with people at Google who were part of the protest movement. And some things that they would say to me are like, well, if I wanted to work for a defense contractor, I would have gone to work for Lockheed Martin or Raytheon.
I’m curious. What moral argument would you make to someone who maybe says, look, I did not sign up to make weapons of war, I am an AI engineer, I work on large language models, or I work on image recognition stuff? What do you tell that person if you’re working at the DIU, trying to persuade them that it’s OK to sell or license that technology to the Pentagon?
- chris kirchhoff
I think you tell them that we’re at an extraordinary moment in the history of war where everything is changing. And I’ll just give you a couple data points. A few weeks ago, the United States asked the Ukrainian military to pull back from the front lines all 31 of the M1A1 Abrams tanks that we had deployed to Ukraine to allow their military to better repel a Russian invasion. These are the most advanced tanks, not only in our inventory, but in the inventory of any one of our allies. And they were getting whacked by $2,000 Russian kamikaze drones — $2,000 drones killing tanks.
What does that tell me? That tells me that a century of mechanized warfare that began in the First World War is over. And if you’re building an army that’s full of tanks, you are now the emperor with fewer clothes, anyway. And I’ll give you one other — a couple other data points.
Hamas has kicked off the largest ground war in the Middle East since the 1973 Arab-Israeli war — because of its attack on Israel on the 7th of October — threatening to destabilize the Middle East into a wider war. How did they do it? They did it by taking quadcopters and using them to drop grenades on the generators powering the Israeli border towers. That’s what allowed the fighters to pour over the border.
Another data point — Houthi rebels in Yemen right now are holding hostage 12 percent of global shipping in the Red Sea because they’re using autonomous sea drones, missiles, and loitering munitions to harass shipping. And so we’re at this moment where the arsenal of democracy that we have, this incredibly forceful military that’s full of things like aircraft carriers and tanks, is wielding weapons that are no longer as effective as they were 10 years ago. And if our military doesn’t catch up to our adversaries quick, we may be in a situation where we don’t have the advantage we once did. And we have to think very differently about our security if that’s the case.
- kevin roose
I mean, it sounds like you’re kind of saying that the way to stop a bad guy with an AI drone is a good guy with an AI drone. Am I hearing you right, that you’re saying that we just — we have to have such overwhelmingly powerful lethal technology in our military that other countries won’t mess with us?
- chris kirchhoff
I totally hear you, and frankly, hear all the people that years ago were affiliated with the Stop Killer Robots movement. I mean, these weapons, they’re awful things. They do awful things to human beings. But, at the same time, there’s a deep literature on something called strategic stability that comes out of the Cold War. And part of that literature focuses on the proliferation of nuclear weapons and the fact that the proliferation of nuclear weapons has actually reduced great power conflict in the world. Because nobody actually wants to get in a nuclear exchange. Now, would it be a good idea for everybody in the world to have their own nuclear weapon? Probably not. So all these things have limits. But that’s an illustration of how strategic stability — in other words, a balance of power — can actually reduce the chance of conflict in the first place.
- kevin roose
I’m curious what you make of the Stop Killer Robots movement. There was a petition or an open letter that went around years ago that was signed by a bunch of leaders in AI, including Elon Musk and Demis Hassabis of Google DeepMind. They all pledged not to develop autonomous weapons. Do you think that was a good pledge or do you support autonomous weapons?
- chris kirchhoff
I think autonomous weapons are now kind of a reality in the world. We’re seeing this on the front lines of Ukraine. And if you’re not willing to fight with autonomous weapons, then you’re going to lose.
- casey newton
So there’s this former OpenAI employee, Leopold Aschenbrenner, who recently released a long manifesto called “Situational Awareness.” And one of the predictions that he makes is that by about 2027, the US government would recognize that superintelligent AI was such a threat to the world order that AGI, a sort of artificial general intelligence, would become functionally a project of the national security state, something like an AGI Manhattan Project.
There’s other speculation out there that maybe at some point the government would have to nationalize an OpenAI or an Anthropic. Are you hearing any of these whispers yet? Are people starting to game this out at all?
- chris kirchhoff
I confess, I haven’t made it all the way through all 155 pages of that long manifesto.
- casey newton
Yeah. It was very long. You could summarize it with ChatGPT, though.
- chris kirchhoff
Fantastic. But these are important things to think about. Because it could be that in certain kinds of conflicts, whoever has the best AI wins. And if that’s the case, and if AI is getting exponentially more powerful, then — to take things back to the iPhone and the F-35 — it’s going to be really important that you have the kind of AI of the iPhone variety.
You have the AI that’s new every year. You don’t have the F-35 with the processor that was baked in in 2001, and you’re only taking off on a runway in 2016. So I do think it’s very important for folks to be focused on AI. Where this all goes, though, is a lot of speculation.
- casey newton
If you had to bet in 10 years, do you think that the AI companies will still be private? Or do you think the government will have stepped in and gotten way more interested and maybe taken one of them over?
- chris kirchhoff
Well, I’d make the observation that — we all watched “Oppenheimer,” especially employees at AI firms. They seemed to love that film. And nuclear technology, it’s what national security strategists would call a point technology. It’s sort of zero to one. Either you have it or you don’t.
And AI is not going to end up being a point technology. It’s a very broadly diffuse technology that’s going to be applied not only in weapons systems but in institutions. It’s going to be broadly diffused around the economy. And for that reason, I don’t think — or it’s less likely, anyway, that we’re going to end up in a situation where somebody has the bomb and somebody doesn’t. I think the gradations are going to be smoother and not quite as sharp.
- kevin roose
Part of what we’ve seen in other industries, as technology sort of moves in and modernizes things, is that often things become cheaper. It’s cheaper to do things using the latest technology than it is to do them using outdated technology. Do you think some of the work that you’ve done at DIU, trying to modernize how the Pentagon works, is going to result in smaller defense budgets being necessary going forward? Is the $2 trillion or so that the DOD has budgeted for this year, could that be $1 trillion or half a trillion in the coming years because of some of these modernizations?
- chris kirchhoff
You’re giving us a raise, Kevin. I think it’s more like $800 billion.
- kevin roose
Well, I’m sorry. I got that answer from Google’s AI overview, which —
- chris kirchhoff
There you go.
- kevin roose
— also told me to eat rocks and put glue on my pizza.
- chris kirchhoff
We should get the Secretary of Defense to try that. He’d like that answer if he had that large of a budget. You know, it’s certainly true that, for a lot less money now, you can have a really destructive effect on the world, as drone pilots in Ukraine and elsewhere in the world are showing. I think it’s also true that the US military has a whole bunch of legacy weapons systems that unfortunately are kind of like museum relics. Right?
If our most advanced tank can be destroyed by a drone, it might be time to retire our tank fleet. If our aircraft carriers cannot be defended against a hypersonic missile attack, it’s probably not a good idea to sail one of our aircraft carriers anywhere near an advanced adversary. So I think it is an opportune moment to really look at what we are spending our money on at the Defense Department and remember the goal of our nation’s founders, which is to spend what we need to on defense and not a penny more.
- casey newton
So I hear you saying that it’s very important for the military to be prepared technologically for the world we’re in. And that means working with Silicon Valley. But is there anything more specific that you want to share that you think that either side needs to be doing here, or something specific that you want to see out of that collaboration?
- chris kirchhoff
One of the main goals of Defense Innovation Unit was literally to get the two groups talking. Before Defense Innovation Unit was founded, a Secretary of Defense hadn’t been to Silicon Valley in 20 years. That’s almost a generation. So Silicon Valley invents the mobile phone. It invents cloud computing. It invents AI. And nobody from the Defense Department bothers to even come and visit. And that’s a problem. And so just bringing the two sides into conversation is itself, I think, a great achievement.
- kevin roose
Well, Chris, thanks so much for coming on. Really appreciate the conversation. And the book, which comes out on July 9, is called “Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War.”
- chris kirchhoff
Thank you.
- casey newton
Thank you, Chris.
When we come back, we’ll play another round of Hat GPT.
[MUSIC PLAYING]
All right, Kevin. Well, it’s time once again for Hat GPT.
[MUSIC PLAYING]
- kevin roose
This, of course, is our favorite game. It’s where we draw news stories from the week out of a hat, and we talk about them until one of us gets sick of hearing the other one talk and says, stop generating.
- casey newton
That’s right. Now, normally we pull slips of paper out of a hat. But due to our remote setup today, I will instead be pulling virtual slips of paper out of a laptop. But for those following along on YouTube, you will still see that I do have one of the Hat GPT hats here, and I will be using it for comic effect throughout the segment.
- kevin roose
Will you put it on, actually?
- casey newton
Sure.
- kevin roose
If we don’t need it to draw slips out of, you might as well be wearing it.
- casey newton
I might as well be wearing it.
- kevin roose
Yeah. It’ll look so good.
- casey newton
Thank you so much. And thank you once again to the listener who made this for us.
- kevin roose
[LAUGHS]:
- casey newton
You’re a true fan.
- kevin roose
It’s so good.
- casey newton
Perfect. All right, Kevin, let me draw the first slip out of the laptop.
- kevin roose
[LAUGHS]:
- casey newton
Ilya Sutskever has a new plan for safe superintelligence. Ilya Sutskever is, of course, the OpenAI co-founder who was part of the coup against Sam Altman last year. And Bloomberg reports that he is now introducing his next project, a venture called Safe Superintelligence, which aims to create a safe, powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services. Kevin, what do you make of this?
- kevin roose
Well, it’s very interesting on a number of levels, right? In some sense, this is kind of a mirror image of what happened several years ago, when a bunch of safety-minded people left OpenAI after disagreeing with Sam Altman and started an AI safety-focused research company. That, of course, was Anthropic.
And so the newest twist in this whole saga is that Ilya Sutskever, who was very concerned about safety and how to make superintelligence that was smarter than humans, but also not evil, and not going to destroy us, has done something very similar. But I have to say, I don’t quite get it. He’s not saying much about the project. But part of the reason that these companies sell these AI products and services is to get the money to buy all the expensive equipment that you need to train these giant models.
- casey newton
Right.
- kevin roose
And so I just don’t know. If you don’t have any intention of selling this stuff before it becomes AGI, how are you paying for the AGI? Do you have a sense of that?
- casey newton
No, I don’t. I mean, Daniel Gross, who is one of Ilya’s co-founders here, has basically said, don’t worry about fundraising. We are going to be able to fundraise as much as we need for this. So I guess we will see. But, yeah, it does feel a bit strange to have someone like Ilya saying he’s going to build this totally without a commercial motive, in part because he’s said it before. Right?
This is what is so funny about this: it truly just is a case where the circle of life keeps repeating, where a small band of people get together and they say, we want to build a very powerful AI system and we’re going to do it very safely. And then, bit by bit, they realize, well, actually, we don’t think that it’s being built out safely. We’re going to form a breakaway faction. So if you’re playing along at home, I believe this is the second breakaway faction to break away from OpenAI after Anthropic. And I look forward to Ilya quitting this company eventually to start a newer, even more safe company somewhere else.
- kevin roose
The really, really safe superintelligence company.
- casey newton
Yeah. His next company, you’ve never seen safety like this. They wear helmets everywhere, in the office, and they just have keyboards.
- kevin roose
All right, stop generating.
- casey newton
All right, pick one out of the hat, Kevin.
- kevin roose
All right. Five men convicted of operating Jetflicks, one of the largest illegal streaming sites in the US — this is from “Variety.” Jetflicks was a sort of pirated streaming service that charged $9.99 a month, while claiming to host more than 183,000 TV episodes, which is more than the combined catalogs of Netflix, Hulu, Vudu, and Amazon Prime Video.
- casey newton
Ooh, that sounds great. I’m going to open an account.
- kevin roose
[LAUGHS]:
- casey newton
What a deal.
- kevin roose
So the Justice Department says this was all illegal. And the five men who were charged with operating it were convicted by a federal jury in Las Vegas. According to the court documents and the evidence that was presented at the trial, this group of five men were basically scraping piracy services for illegal episodes of TV and then hosting them on their own thing. It does not appear to have been a particularly sophisticated scam. It’s just, what if we did this for a while and charged people money and then got caught?
- casey newton
Well, I think this is very sad. Because here, finally, you have some people who are willing to stand up and fight inflation. And what does the government do? They come in and they say, knock it off. I will say, though, Kevin, I think these — I can actually point to the mistake that these guys made.
- kevin roose
What’s that?
- casey newton
So instead of scraping these 183,000 TV episodes and selling them for $9.99 a month, what they should have done was feed them all into a large language model. And then you can sell them to people for $20 a month.
- kevin roose
[LAUGHS]:
- casey newton
When these guys get out of prison, I hope they get in touch with me. Because I have a new business idea for them.
- kevin roose
[LAUGHS]: All right. Stop generating.
- casey newton
All right. Here’s a story called “260 McNuggets? McDonald’s Ends Drive-Through Tests Amid Errors.” This is from “The New York Times.” After a number of embarrassing videos showing customers fighting with its AI-powered drive-through technology, McDonald’s announced it was ending its three-year partnership with IBM.
In one TikTok video, friends repeatedly tell the AI assistant to stop, as it added hundreds of Chicken McNuggets to their order. Other videos show the drive-through technology adding nine iced teas to an order, refusing to add a Mountain Dew, and adding unrequested bacon to ice cream. Kevin, what the heck is going on at McDonald’s?
- kevin roose
Well, as a fan of bacon ice cream, I should say, I want to get to one of these McDonald’s before they take this thing down.
- casey newton
Ooh, me too.
- kevin roose
Did you see any of these videos or any of these —
- casey newton
I haven’t. Did you?
- kevin roose
No, but we should watch one of them together.
- casey newton
Yeah.
- kevin roose
Let’s watch one of them.
- archived recording 1
[LAUGHS]: No.
- archived recording 2
Stop!
- kevin roose
The caption is, “The McDonald’s robot is wild.” And it shows their screen at the thing where it has — it is, like, just tallying up McNuggets and starts charging them more than $200.
- casey newton
Here’s my question. Why is everyone just rushing to assume that the AI is wrong here? Maybe the AI knows what these gals need. Because, Kevin, here’s the thing. When superintelligence arrives, we’re going to think that we’re smarter than it. But it’s going to be smarter. So there’s going to be a period of adjustment as we sort of get used to having our new AI master.
- kevin roose
Have you been to a drive-through that used AI to take your order yet?
- casey newton
No. I mean, I don’t even really understand — what was the AI here? Was this like, an Alexa thing where I said, McDonald’s, add 10 McNuggets? Or what was actually happening?
- kevin roose
No. So this was a partnership that McDonald’s struck with IBM. And basically, this was technology that went inside the little menu things that have the microphone and the speaker in them. And so instead of having a human say, what would you like, it would just say, what would you like. And then you would say it, and it would recognize it and put it into the system. So you could sort of eliminate that part of the labor of the drive-through.
- casey newton
Got it. Well, look. I, for one, am very glad this happened, because for so long now I’ve wondered, what does IBM do? And I have no idea. And now, if it ever comes up again, I’ll say, oh, that’s the company that made the McDonald’s stop working.
- kevin roose
[LAUGHS]: We should say it’s not just McDonald’s. A bunch of other companies are starting to use this technology. I actually think this is probably inevitable. This technology will get better. They will iron out some of the kinks. But I think there will probably still need to be a human in the loop on this one.
- casey newton
All right. Stop generating.
- kevin roose
OK.
- casey newton
Kevin, let’s talk about what happened when 20 comedians got AI to write their routines. This is in the “MIT Technology Review.” Google DeepMind researchers found that although popular AI models from OpenAI and Google were effective at simple tasks, like structuring a monologue or producing a rough first draft, they struggled to produce material that was original, stimulating, or, crucially, funny. And I’d like to read you an example LLM joke, Kevin.
- kevin roose
Please.
- casey newton
I decided to switch careers and become a pickpocket after watching a magic show. Little did I know, the only thing disappearing would be my reputation.
- kevin roose
[LAUGHS]: Waka, waka, waka.
- casey newton
Hey, I got a laugh out of you.
- kevin roose
[LAUGHS]:
- casey newton
Kevin, what do you make of this? Are you surprised that AI isn’t funnier?
- kevin roose
No, but this is interesting. It’s like, this has been something that critics of large language models have been saying for years. It’s like, well, it can’t tell a joke. And, you know, I should say, I’ve had funny experiences with large language models, but never after asking them to tell me a joke.
- casey newton
Yeah. Remember when you said to Sydney, take my wife, please?
- kevin roose
[LAUGHS]:
I get no respect, I tell ya. No, but this is interesting. Because this was a study that was actually done by researchers at Google DeepMind. And basically, it appears that they had a group of comedians try writing some jokes with their language models.
And in the abstract, it says that most of the participants in this study felt that the large language models did not succeed as a creativity support tool by producing bland and biased comedy tropes, which they describe in this paper as being akin to cruise ship comedy material from the 1950s, but a bit less racist. So they were not impressed, these comedians, by these language models’ ability to tell jokes. You’re an amateur comedian. Have you ever used AI to come up with jokes?
- casey newton
No, I haven’t. And I have to say, I think I understand the technological reason why these things aren’t funny, Kevin, which is that comedy is very up to the minute. Right? For something to be funny, it’s typically something that is on the edge of what is currently thought to be socially acceptable. And what is socially acceptable or what is surprising within a social context, that just changes all the time.
And these models, they are trained on decades, and decades, and decades of text. And they just don’t have any way of figuring out, well, what would be a really fresh thing to say. So maybe they’ll get there eventually, but as they’re built right now, I’m truly not surprised that they’re not funny.
- kevin roose
All right, stop generating. Next one. Waymo ditches the waitlist and opens up its robotaxis to everyone in San Francisco. This is from “The Verge.” Since 2022, Waymo has made rides in its robotaxi service available only to people who were approved off a waitlist. But, as of this week, they are opening it up to anyone who wants to ride in San Francisco. Casey, what do you make of this?
- casey newton
Well, I am excited that more people are going to get to try this. This has, as you’ve noted, Kevin, become kind of the newest tourist attraction in San Francisco: when you come here, you see if you can find somebody to give you a ride in one of these self-driving cars. And now everyone is just going to be able to come here and download the app and use it immediately.
I have to say, I am scared about what this is going to mean for the wait times on Waymo. I’ve been taking Waymo more lately, and it often will take 12 or 15 or 20 minutes to get a car. And now that everyone can download the app, I’m not expecting those wait times to go down.
- kevin roose
Yeah. I hope they are also simultaneously adding more cars to the Waymo network because this is going to be very popular. I’m a little —
- casey newton
You’re saying they need “way mo” cars.
- kevin roose
They do. I’m worried about the wait times, but I’m also worried about the condition of these cars. Because I’ve noticed, in my last few rides, they’re a little dirtier.
- casey newton
Oh, wait. Really?
- kevin roose
Yeah. I mean, they’re still pretty clean, but I did see a takeout container in one the other day.
- casey newton
Really? Oh, my god.
- kevin roose
So I just — I want to know how they plan to keep these things from becoming filled with people’s crap.
- casey newton
All right, stop generating.
- kevin roose
All right, last one. This one comes from “The Verge.” TikTok’s AI tool accidentally let you put Hitler’s words in a paid actor’s mouth. TikTok mistakenly posted a link to an internal version of an AI digital avatar tool that apparently had zero guardrails. This was a tool that was supposed to let businesses generate ads using AI with paid actors, using this AI voice dubbing thing that would make the actors repeat whatever you wanted to have them say, endorse your product or whatever. But very quickly, people found out that you could use this tool to have the avatars repeat excerpts of “Mein Kampf” and Bin Laden’s letter to America. It told people to drink bleach and vote on the wrong day. [LAUGHS]
- casey newton
And that was its recipe for a happy Pride celebration.
- kevin roose
[LAUGHS]:
- casey newton
Listen. Obviously, this is a very sort of silly story. It sounds like everything involved here was a mistake. And I think if you’re making some sort of digital AI tool that is meant to generate ads, you do want to put safeguards around that. Because, otherwise, people will exploit it. That said, Kevin, I do think people need to start getting comfortable with the fact that people are just going to be using these AI creation tools to do a bunch of kooky and crazy stuff.
- kevin roose
Like what?
- casey newton
Like, in the same way that people use Photoshop to make nudity or offensive images — and we don’t storm the gates of Adobe saying, shut down Photoshop — the same thing is going to happen with these digital AI tools. And while I do think that there are some notable differences, and it varies on a case-by-case basis, and if you’re making a tool for creating ads, it feels different, there are just going to be a lot of digital tools like this that use AI to make stuff. And other people are going to use them to make offensive stuff. And when they do, we should hold the people accountable, perhaps, more than we hold the tool accountable.
- kevin roose
Yeah, I agree with that. And I also think this sort of product is not super worrisome to me. I mean, obviously it should not be reading excerpts from “Mein Kampf.” Obviously, they did not mean to release this. I assume that when they do fix it, it will be much better. But this is not a thing that is creating deepfakes of people without their consent. This is a thing where if you have a brand, you can choose from a variety of stock avatars that are created from people who actually get paid to have their likenesses used commercially.
The specific details of this one don’t bother me that much, but it does open up some new licensing opportunities for us. We could have an AI set of avatars that could be out there advertising crypto tokens or whatever. And I, for one, am excited to see how people use that.
- casey newton
Oh, man. Well, and if TikTok weren’t banned, we could probably make a lot of money that way. But instead, we’re out of luck.
- kevin roose
Yeah. Get it while it’s good. All right.
- casey newton
Close up the hat!
[MUSIC PLAYING, APPLAUSE]
- kevin roose
“Hard Fork” is produced by Rachel Cohn and Whitney Jones. We’re edited this week by Larissa Anderson. We’re fact-checked by Caitlin Love. Today’s show was engineered by Corey Schreppel. Original music by Elisheba Ittoop, Rowan Niemisto, and Dan Powell.
Our audience editor is Nell Gallogly. Video production by Ryan Manning, Sawyer Roque, and Dylan Bergersen. You can watch this full episode on YouTube, at youtube.com/hardfork. You can see Casey’s cool hat. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.
[MUSIC PLAYING]
Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.
Guests:
Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War
Credits
“Hard Fork” is hosted by Kevin Roose and Casey Newton and produced by Rachel Cohn and Whitney Jones. This episode was edited by Larissa Anderson. Engineering by Corey Schreppel and original music by Dan Powell, Elisheba Ittoop and Rowan Niemisto. Fact-checking by Caitlin Love.
Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti and Jeffrey Miranda.
Kevin Roose is a Times technology columnist and a host of the podcast "Hard Fork."