We Really Recommend This Podcast Episode

This week we talk about the recommendation engines on platforms like YouTube and Spotify, which are coming under renewed scrutiny from a pair of US Supreme Court cases.
Three-dimensional labyrinth of violet spheres
Photograph: IMAGINESTOCK/Getty Images

The modern internet is powered by recommendation algorithms. They're everywhere from Facebook to YouTube, from search engines to shopping websites. These systems track your online consumption and use that data to suggest the next piece of content for you to absorb. Their goal is to keep users on a platform by presenting them with things they'll spend more time engaging with. Trouble is, those link chains can lead to some weird places, occasionally taking users down dark internet rabbit holes or showing harmful content. Lawmakers and researchers have criticized recommendation systems before, but these methods are under renewed scrutiny now that Google and Twitter are going before the US Supreme Court to defend their algorithmic practices.

This week on Gadget Lab, we talk with Jonathan Stray, a senior scientist at the Berkeley Center for Human-Compatible AI who studies recommendation systems online. We discuss how recommendation algorithms work, how they’re studied, and how they can be both abused and restrained.

Show Notes

Read all about Section 230. Read Jonathan Stray and Gillian Hadfield’s story on WIRED about their engagement research. Read more about the two cases before the US Supreme Court.

Recommendations

Jonathan recommends the book The Way Out, by Peter Coleman. Mike recommends the novel Denial, by Jon Raymond. Lauren recommends Matt Reynolds’ WIRED story on how you’ve been thinking about food all wrong, and also getting a bag to make nut milk.

Jonathan Stray can be found on Twitter @jonathanstray. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.

If you have feedback about the show, take our brief listener survey. Doing so will earn you a chance to win a $1,000 prize.

How to Listen

You can always listen to this week's podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here's how:

If you're on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts, and search for Gadget Lab. If you use Android, you can find us in the Google Podcasts app just by tapping here. We’re on Spotify too. And in case you really need it, here's the RSS feed.

Transcript

Lauren Goode: Mike.

Michael Calore: Lauren.

Lauren Goode: When was the last time you watched something or listened to something or bought something on the internet because it popped up as recommended in your feed?

Michael Calore: This morning.

Lauren Goode: Really?

Michael Calore: Yeah, I opened Spotify and it alerted me that a new single from an artist that it knows I listen to a lot had been released. So I tapped on it, listened to it.

Lauren Goode: Was it Lola Kirke?

Michael Calore: It was not. It was Adi Oasis.

Lauren Goode: Oh, OK. Because I know she's your girlfriend.

Michael Calore: What about you?

Lauren Goode: No, not me. I'm all about the human curation. If a friend tells me to listen to a podcast, I will listen to it. I don't often take app recommendations for podcasts. But maybe that's not totally true, because I also listen to playlists recommended by Spotify.

Michael Calore: Yeah. Does it enrich your life?

Lauren Goode: No, it makes me lazier about music, which is probably something that's offensive to you.

Michael Calore: No. I mean, I'm kind of over it, but if you want me to be offended, I can get irrationally offended.

Lauren Goode: OK. Well, that's our show, folks. I'm now seeking recommendations for a new host, so let's get to that.

Michael Calore: All right, great.

[Gadget Lab intro theme music plays]

Lauren Goode: Hi, everyone. Welcome to Gadget Lab. I'm Lauren Goode. I'm a senior writer at WIRED.

Michael Calore: And I'm Michael Calore. I'm a senior editor at WIRED.

Lauren Goode: And we're also joined this week by Jonathan Stray, who is a senior scientist at the Berkeley Center for Human-Compatible AI. Jonathan, thank you so much for joining us.

Jonathan Stray: Hey, thanks. Good to be here.

Lauren Goode: So you research recommendation systems; those are the different algorithms that nudge us toward certain media, news, or clothes. Basically everything. We have all experienced recommendations, whether it's from the earliest days of Google's page-ranking algorithm or Amazon's shopping recommendations or being told what we should watch next on Netflix. Recently though, the power and influence of recommendations have come into sharper focus as Section 230 is officially being challenged in the Supreme Court. If you're not familiar with Section 230, we'll give a brief overview. It's part of a law passed back in the 1990s that has prevented the giant online platforms from being liable for the behavior of their users. So it's the reason why Facebook and Twitter and every other social media site doesn't get sued into oblivion every time one of their users posts something defamatory or harmful. This statute has long been controversial, and now arguments are being heard on it. And recommendations are part of that. Now, Jonathan, you actually wrote the amicus brief for this SCOTUS case, arguing that Section 230 should protect the use of recommender systems. Let's back up a tiny bit first. Tell us why recommendation systems were even a part of this case.

Jonathan Stray: Right. OK. Gosh, this is one of the most in-the-weeds SCOTUS cases ever, but we're going to try to sort it out for you.

Lauren Goode: OK.

Jonathan Stray: OK. So the facts of the case are actually rather sad. The case was brought by the family of a woman who was killed in the ISIS attacks in Paris in 2015, and the plaintiff argued that YouTube supported the terrorists because it allowed ISIS to recruit new members. Now, as this went up through the courts, they threw out most of the direct claims of support because of Section 230. Google basically argued, “Well, you're talking about something that someone else posted, so we're not responsible for that.” The only part that survived to reach the Supreme Court is the claim that YouTube made “targeted recommendations.” The idea being that the YouTube algorithm found those people who were most likely to want to watch ISIS videos and showed those videos to them. So the idea is that this is something separate from merely hosting content that someone else has said. This is some sort of affirmative act by the platform to find people who would be most responsive to terrorist propaganda and show things to them.

Michael Calore: So if I'm understanding the arguments correctly that the plaintiff is making, it seems they're saying that when YouTube does that, it takes that proactive step of putting something in front of a user, it's actually acting like a publisher and it's making a decision about what the person should be seeing. So therefore it's the same as a publisher publishing something. Is that right?

Jonathan Stray: So the actual language of the particular piece of this law—I wish I had it in front of me so I could get it exactly right—but it basically says that internet service providers, which is this fancy piece of language that more or less translates to websites, aren't liable if they are being treated as the publisher or speaker of content provided by someone else. So a lot of the legal wrangling is about whether an algorithm to try to match people with things they'll want to watch is something that publishers do or not. I mean, clearly, a newspaper doesn't try to figure out that particular people will like particular articles, but they do try to figure out what their audience in general will want, and they make choices about prioritization and categorization. So some of the back and forth on this case at various levels of the court system has been around, “Well, is this a thing that publishers do, deciding that certain people are going to be more interested in certain things?” And so one of the sort of parsing wrangles and hairballs of arguments that has been happening on this case is, “Well, is this a thing that traditional publishers did or not?”

Lauren Goode: Right. Because I mean, for those listening, this is a little bit inside baseball, but that's why there's a difference between what we do here at WIRED … And by the way, this podcast that we're taping right now then gets published to places like Apple Podcasts and Spotify, which are big platforms, but why we are different from those tech platforms is because we produce and publish news stories or podcasts, and they host and distribute them. Would you say that's an accurate summation of the differences?

Jonathan Stray: Yeah, that's the difference. The legal distinction is whether the site or app that is hosting the content made that content itself or someone else made it. And you mentioned content published by their users. It actually doesn't have to be their users. So under Section 230, Spotify isn't liable for the content of a podcast that they host unless, for example, they paid for it. They had some part in the creation of that content. And one of the arguments that went before the court is, “Well, when a platform uses an algorithm to choose what individual people see, they are creating the content in some way.” And here's the thing, I'm sympathetic to this argument, actually. I think there's clearly some line beyond which sites and apps and platforms have to be liable. So if you made a site that was all about, “Let's help terrorist organizations recruit. Let's find the best-performing recruiting videos and find the audiences that are most likely to be subject to being persuaded by them and match them together,” I mean, I think that should be illegal. Now that's not what happened here. So the problem is sort of one of drawing lines, and the argument we made in the brief that I joined with the Center for Democracy and Technology and a bunch of other notable technologists is basically the way you're thinking about drawing this line isn't going to work. The plaintiffs asked for the line to be drawn around the phrase “targeted recommendations,” but they don't define what a targeted recommendation is, and nobody really knows. If the court ruled that a targeted recommendation incurred liability, it's not really clear that you could operate a website that had any sort of personalization at all. So if Wikipedia selects an article for you based on the language you speak, is that targeted? Nobody really knows the answers to these questions. And that's why I joined with the side saying, “Don't interpret the law this way.”

Lauren Goode: Right. You argued that it's not that the big tech platforms shouldn't ever be liable, but you don't see a functional difference between recommending content and displaying content.

Jonathan Stray: Right. And this sort of goes back to the offline analogy as well. So there was another brief which said, “OK. So if I'm a bookstore and I put a Stephen King book next to, I don't know, an Agatha Christie novel, what I'm saying is that the people who like Stephen King books might also like this older type of mystery book.” Is that a targeted recommendation? I mean, I'm using some information about a user or a customer to decide what I'm going to show them next. So there's all these sort of really weird lines, and we tried to argue that the court should apply a different standard. There's a previous case which has this language around material contribution, which is, did the platform do something specifically to help the terrorists in this case? Now, we haven't really gotten to this point in the courts, but if it got to that point, I think we would find that the answer was no. In fact, YouTube was trying to remove terrorist content at the time, which leads us to the case which was heard the next day, which was called Taamneh v. Twitter. And that one was, if a platform knows that terrorists are using their site, are they liable for helping the terrorists even if they're trying to remove terrorist content?

Michael Calore: So the arguments in both of these cases have already been made. The court has heard them. They're not going to release the decisions for months, many months. We do know how the arguments were made by the lawyers, and we know what questions the justices asked. So is there any way to foreshadow or predict whether the rulings will be drastic? No big deal? Somewhere in between?

Jonathan Stray: So from the questions that the justices were asking on the first case, the Gonzalez v. Google case on Section 230 specifically, I think they're going to shy away from making a broad ruling. I think it was Kagan who had this line, “We're not the nine greatest experts on the internet,” which got a big laugh by the way. And what she means by that is, it was part of a discussion where she was asking, “Well, shouldn't Congress sort this out?” I think that that's honestly the answer here. In fact, there are a bunch of proposed laws in Congress right now which would modify Section 230 in various ways, and we can talk about which of those I think make sense and which ones don't. But I think the court would like to punt it to Congress, and so is going to try to figure out a way to dodge the question entirely, which they might do: if you answer no on the second case, the Taamneh case, and say, “Well, even if they're not immune under Section 230, they are not liable if they were trying to remove terrorist content and didn't get it all,” then that would allow them to just not rule on the first case. I think that's a reasonably likely outcome. I think they would like to find some way to do that, but who knows.

Lauren Goode: All right, Jonathan, this has been super helpful background. We're going to take a quick break and then come back with more about recommendation systems.

[Break]

Lauren Goode: So, Jonathan, you've been researching recommendation systems for years, and obviously this is a space that evolves a lot. It's a relatively new area of tech. We've maybe only been experiencing these for 20 years or so, and a lot of research has been done, but recently a new paper was published that said that some of the previous work around extreme content on platforms like YouTube and TikTok might have been “junk”—that the methodology in this research was problematic. Can you explain this? And also, does this mean that our worries about extreme content are all over and we can just go back to the internet being a happy place?

Jonathan Stray: Right.

Lauren Goode: That was a hyperbolic question. Yeah.

Jonathan Stray: Right. OK. Well, I may have been a little hyperbolic in “junk,” but OK. So I'm an academic, which means I have the luxury of not needing to root for a particular side in this debate, and I can take weirdly nuanced positions around this stuff. Basically the problem is this: There's all kinds of things that could be the bad effects of social media. It's been linked to depression, eating disorders, polarization, radicalization, all of this stuff. The problem is, it's pretty hard to get solid evidence for what the actual effects of these systems are. And one of the types of evidence that people have been relying on is a type of study which basically goes like this: You program a bot to watch … Let's say if you're doing YouTube. You can do this on TikTok or whatever. You program a bot to watch one video on YouTube, and then you're going to get a bunch of recommendations on the side, up next, and then randomly click one of those, and then watch the next video and randomly click one of the recommendations after that. So you get what they call a “random walk” through the space of recommendations. What these kinds of studies showed is that a fair number of these bots, when you do this, are going to end up at material that's extreme in some way. So extreme right, extreme left, more terrorist material. Although the really intense terrorist material is mostly not on platforms, because it's been removed. OK. So this has been cited as evidence over the years that these systems push people to extreme views. What this paper which came out last week showed—and this is a paper called “The Amplification Paradox in Recommender Systems,” by Ribeiro, Veselovsky, and West—was that when you do a random walk like this, you overestimate the amount of extreme content that is consumed, basically because most users don't like extreme content. They don't click randomly, they click on the more extreme stuff less than randomly. So as an academic and a methodologist, this is very dear to my heart, and I'm like, “This way of looking at the effects doesn't work.” Now, I don't think that means there isn't a problem. I think there's other kinds of evidence that suggests that we do have an issue. In particular, there's a whole bunch of work showing that more extreme content or more outrageous or more moralizing content or content that speaks negatively of the outgroup, whatever that may mean for you, is more likely to be clicked on and shared and so forth. And recommender algorithms look at these signals, which we normally call “engagement,” to decide what to show people. I think that's a problem, and I think there's other kinds of evidence that this is incentivizing media producers to be more extreme. So it's not that everything is fine now, it's that the ways we've been using to assess the effects of these systems aren't really going to tell us what we want to know.
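For readers who want to see the shape of the method Stray is describing, here is a minimal sketch of a random-walk audit in Python. The recommendation graph, video IDs, and hit-rate tally are all invented for illustration; this is not the paper's actual code or any platform's API. The uniform random click at each step is exactly the assumption the paper critiques, since real users click on extreme items less than randomly.

```python
import random

# Toy recommendation graph standing in for a platform's "up next" sidebar.
# In a real audit these lists would be collected from the platform itself;
# the video IDs here are made up for illustration.
TOY_RECS = {
    "seed": ["a", "b", "c"],
    "a": ["b", "d"],
    "b": ["c", "d"],
    "c": ["d"],
    "d": ["a", "extreme_1"],
    "extreme_1": ["extreme_2"],
    "extreme_2": [],
}

def random_walk(seed_video, steps=10, rng=None):
    """Click uniformly at random through recommendations, starting from a seed.

    This mirrors the bot methodology described above: watch one video, pick
    one of its recommendations at random, and repeat. Real users are not
    uniform clickers, which is the core critique of this design.
    """
    rng = rng or random.Random(0)
    path = [seed_video]
    current = seed_video
    for _ in range(steps):
        recs = TOY_RECS.get(current, [])
        if not recs:
            break
        current = rng.choice(recs)  # the uniform click real users don't make
        path.append(current)
    return path

if __name__ == "__main__":
    walks = [random_walk("seed", rng=random.Random(i)) for i in range(100)]
    hit_rate = sum("extreme_1" in w for w in walks) / len(walks)
    print(f"Share of random walks reaching 'extreme' content: {hit_rate:.0%}")
```

Replacing the uniform choice with click probabilities estimated from real user behavior is, roughly, the kind of correction the paper argues such audits need.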

Michael Calore: So there's a lot we don't know about how these recommendation systems work. And I think what's mostly to blame for this lack of knowledge is that researchers can't gain access to the algorithms' inner workings. I mean, there are obvious reasons, right? There are intellectual property reasons why companies wouldn't want to allow anybody to see how they're making recommendations. So how can you study something that you can only really approach in the abstract?

Lauren Goode: This is a good question, because we know these algorithms are black boxes in many cases.

Jonathan Stray: Yeah. So the solution to this problem ultimately is to allow researchers access to these systems. And in the end, we don't really know what the effects of these systems are, because there's never been, let's say, a proper controlled experiment comparing two different recommendation systems. So you could imagine one which weighs whether the user will re-share something much more highly than the other one. And then you look at whether, as a result, people are looking at more extreme material. You can design this experiment. I and other scholars have designed such experiments, but you can't run them, because you need the cooperation of the platforms. Right now there is no way to compel the platforms to do so, and they're understandably hesitant to allow outsiders in. Having said that, I am actually working on a collaborative experiment with Meta, and there's an article coming out in WIRED, I think tomorrow, on this experiment. So I guess we can link to that in the show notes.

Michael Calore: Perfect.

Lauren Goode: By the time the show publishes, that article will be out.

Jonathan Stray: There you go. Perfect. But broadly speaking, the experiment that many researchers would like to do isn't possible yet. The platforms have sometimes studied this stuff, like Facebook did a big study on social comparison on Instagram—whether it's making teens unhappy by seeing a bunch of really skinny girls on Instagram, that sort of thing. And we only know about it because of the Facebook Papers. So even when they are asking the right questions, we don't get to see the answers. And that's a problem. Unfortunately, there's no simple way to do this. One ex-platform person I know says, “Well, they should just be legally required to publish the results of every experiment.” It sounds really appealing, but if we did that, I think there would be an incentive not to ask the hard questions. So this is one of those policy areas where it's filled with good intentions and perverse incentives.

Lauren Goode: And subjective reporting would likely be problematic too. Right? Going around and asking users to monitor their own YouTube usage for a month or so and then report back on whether or not they were influenced. We all believe we're not influenced, and we are, in subtle ways we don't even fully understand.

Jonathan Stray: I mean, so people do that stuff, and it's interesting and it's suggestive. For example, that's how we know there's a correlation between social media use and depression and anxiety type stuff. The problem is that when you do that, you don't know if that's because unhappy people spend more time on social media or because spending more time on social media makes people unhappy or both. So the only way you can really disentangle that is by doing something like a randomized experiment.
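To make the experiment Stray describes concrete, here is a toy sketch, in Python, of the kind of randomized comparison researchers have designed but can't run without platform cooperation: users are randomly split into two arms, and the only difference between the arms is how heavily a predicted re-share counts in the ranking score. Every name, weight, and predicted-engagement number here is invented for illustration; the real outcome of interest would be how much extreme material each arm ends up consuming over time.

```python
import hashlib

def assign_arm(user_id: str) -> str:
    """Deterministically randomize users into control (A) or treatment (B)."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

def score(preds: dict, reshare_weight: float) -> float:
    """Combine predicted engagement signals into one ranking score.

    `preds` holds hypothetical model outputs such as
    {"click": 0.3, "reshare": 0.05, "dwell": 0.5}.
    The two arms differ only in how much a predicted re-share counts.
    """
    return preds["click"] + reshare_weight * preds["reshare"] + preds["dwell"]

def rank_feed(user_id: str, candidates: list) -> list:
    """Rank candidate items for a user under their assigned experiment arm."""
    weight = 1.0 if assign_arm(user_id) == "A" else 5.0  # arm B upweights re-shares
    return sorted(candidates, key=lambda c: score(c["preds"], weight), reverse=True)

# Example: the same candidates can rank differently depending on the arm.
candidates = [
    {"id": "calm_explainer", "preds": {"click": 0.30, "reshare": 0.02, "dwell": 0.55}},
    {"id": "outrage_clip", "preds": {"click": 0.35, "reshare": 0.20, "dwell": 0.30}},
]
for user in ["user_123", "user_456"]:
    order = [c["id"] for c in rank_feed(user, candidates)]
    print(user, assign_arm(user), order)
```

In a real study, the measurement would be downstream: how much content labeled extreme each arm consumes over weeks, which is the causal comparison that survey data alone can't give you and that currently requires a platform partner, like the Meta collaboration he mentions.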

Lauren Goode: Jonathan, one of my own personal fascinations related to this topic is digital memories. I've written about this for WIRED. I published a feature a couple of years ago that was about my own personal experience with the constant resurfacing of digital memories, because our photo apps are now algorithmic, and because targeted advertising has now put me in a certain kind of bucket—basically I was about to get married, I was planning a wedding, I was with someone for a long time, and broke off the engagement. You can read all about it on WIRED.com when you're done reading Jonathan's latest article in WIRED. It's a real shot-chaser kind of thing. So for years, I mean still to this day: just this week, Google Photos surfaced a memory for me of my ex with our cat. And for a while I was put in the wedding bucket. So I kept getting ads related to wedding content and wedding vendors. And then I think because of my age and because of that, I was then put in the maternity bucket for advertisers, because now I get a lot of that. So I feel that in some way these surfaced memories are starting to affect our human experiences or our growth processes or our grief processes. When you think about digital memories, do you see this as another form of “recommendation”? How would you categorize that? What do you think could ever be done about that?

Jonathan Stray: Yeah. Well, I mean, here's the thing about recommender systems. It's very much a can't-live-with-them, can't-live-without-them kind of problem. Because the problem is, before the internet—and I'm old enough to have grown up in this time period, so, get off my lawn—there was a fairly small number of media channels. Many people said, “Well, wouldn't it be great if we could access all of the world's knowledge over a modem from our desktop computer?” OK, great. So now we can mostly do that. Most things that are published I can get from my laptop, but that doesn't help me with the problem of filtering them. So we started with search engines—great, essential technology. Search engines are not free from these problems. They also have to decide what they should show most prominently in response to any query. And also they're personalized, because if you type in “locksmith,” you don't want every locksmith in the world, you want a locksmith in your city. And if I type in “Python,” I probably want the programming language, but other people want the snake. So you have all the same kind of personalization problems. But search engines don't solve the problem that we don't know what we don't know. The classic example of that is a news recommender. If I had to type in keywords for the top stories of the day, a news recommender would be useless. So we really do want these systems to help us deal with information overload. It gets tricky when you have things like “show me my own personal history,” and I've had that experience too. Maybe I didn't want to see that photo of my ex. Right? I guess I'm more hesitant to try to show people memories from their own life than maybe other stuff that is happening in the world. In theory, maybe if the system knew how you felt about your breakup, it might be able to make a better choice. But it's really touchy stuff. I don't know, maybe as we get more fluent AIs, just like your friends may know not to remind you of that, maybe the machines will as well.

Lauren Goode: I feel like Google should have an “I'm Over It” button in the way that it has an “I'm Feeling Lucky” button—just, I'm over it, click. It's been a long time. No more, but keep sending the cat.

Jonathan Stray: Right. I mean, I think that one of the major ways of mitigating some of these problems is to have more user control. So there's a lot of researchers, including myself, who are trying to figure out how to build better controls for these systems. Unfortunately, like privacy settings, most people don't use controls. So the defaults still have to be right.

Michael Calore: I have to admit that when recommendation systems first started showing up in things like music listening or in YouTube, I was adamantly against clicking on them. I have the attitude of, “I am very, very knowledgeable about these things. I can curate my own experience, thank you very much. Keep that stuff out.” But over time, they've gotten so good at showing me things, and the discovery options have gotten so good—particularly with music on Spotify, if I'm going to pick one that I've gotten to trust, that I know will show me things that are actually interesting to me. So I think our trust has to evolve in some of these cases, the same way it has with, for example, discovery recommendations.

Jonathan Stray: Yeah, it's funny you mentioned the Spotify recommender. I quite like it as well, because it helps with music discovery. Although, I got to say, Shazam helps with music discovery at least as much when I walk into a café and I'm like, “Oh, what is that?” But that's why they build them, right? Think of YouTube, right? Now, there are subscriptions on YouTube, but most people don't use it that way. And ordering every video on YouTube chronologically doesn't make any sense. So there's a certain line of argument that is like, “Well, we shouldn't have recommender algorithms.” And I think when people think of that, they're thinking of Twitter or Facebook, where following people is a fundamental way you use the platform and a chronological feed kind of makes sense. But a chronological feed of every news article doesn't make sense. You want something else.

Lauren Goode: Jonathan, I feel like we could talk about this stuff forever, but we are going to have to wrap this segment and take a quick break, and when we come back, we'll do our own human-curated recommendations.

[Break]

Lauren Goode: All right. As our guest of honor this week, you get to go first. What is your recommendation?

Jonathan Stray: OK. So my side hustle is political conflict, which also ties into recommenders.

Lauren Goode: Wow, I thought you were going to say pickleball or something.

Jonathan Stray: No. I mean, I'm also an amateur aerial performer. I do aerial ropes, so that's my—

Lauren Goode: Cool.

Jonathan Stray: … side hustle. I don't know. But I run a thing called the Better Conflict Bulletin, and that's all about how to have a better culture war: betterconflictbulletin.org. And so I read every polarization book, and my recommendation, if you want a book about polarization, read Peter Coleman's The Way Out. He runs something called the Difficult Conversations Lab at Columbia University, where he puts people in a room and asks them to talk about whatever is controversial. He also is deeply connected with the international peace-building community. And so I think this is the best single book about polarization.

Lauren Goode: And does he have a side hustle as a marriage counselor?

Jonathan Stray: It's funny you say that, because some of the people who work on polarization started as marriage therapists. That's Braver Angels, which is an organization in this space. Their first group dialog session was designed by a marriage counselor.

Lauren Goode: That's fascinating. Have you participated in these experiments with him? Have you been in the room?

Jonathan Stray: I've seen the lab. I've never been in the lab. I probably would be a bad subject, because I know too much about the science that is being done.

Michael Calore: Right.

Lauren Goode: That's a fantastic recommendation. When did the book come out?

Jonathan Stray: Just this year or late last year, I suppose.

Lauren Goode: Great. Sounds like something we could all use. Mike, what's your recommendation?

Michael Calore: I want to recommend a book that came out towards the end of last year as well. It's by Jon Raymond, the novelist, and it's a novel called Denial. It's a science fiction story. It takes place about 30 years in the future, and it deals with a journalist. So it's a book about journalism. A journalist who is trying to track down climate deniers. There was a reckoning in this alternative future where we put on trial all of the executives from energy companies and made them … We punished them for the destruction that they did to the planet. So this journalist goes to try and find some of the energy executives who took off and refused to stand trial and basically became refugees. So it's an interesting book, because it's a detective story and it's a story about a journalist trying to out these refugees, but also it's a story about the future and what the near future looks like. My favorite thing about the book is that it's a science fiction story about the future that doesn't dwell on the technological stuff. Phones are still phones. Books are still books. Cars are still cars. And there are some fun differences, but it doesn't really go deep on that. It really goes deep into our relationship with the climate and how people 30 years from now experience climate change. But it doesn't do it in a scientific way. It does it in a very sort of down-to-earth practical way.

Lauren Goode: Oh, interesting. We're not all living through VR headsets, that sort of thing.

Michael Calore: Correct. There are VR headsets in the book, but it's not like a huge deal. It's just part of daily life. Anyway, I really love it. Denial, by Jon Raymond.

Lauren Goode: We should also mention that Jonathan here used to be a journalist.

Michael Calore: Oh, yeah?

Lauren Goode: Speaking of journalists.

Jonathan Stray: Yeah. Well, so I study AI and media and conflict, and I had a career in computer science and then a career in journalism. I was an editor at the Associated Press. I worked for ProPublica for a while. That's why I combine these interests.

Lauren Goode: Were you ever assigned a story where you had to hunt down, I don't know, the chief executive of Shell?

Jonathan Stray: No. I mean, this is definitely a book of fiction. The idea that energy executives would be prosecuted for climate change. I don't see that happening in reality.

Michael Calore: In the book, it is part of a period of social strife that they refer to as “the upheaval,” where basically society did in fact have enough and demanded change at such a high level that this did happen. So I don't know. It's kind of optimistic, I guess.

Jonathan Stray: I guess. I mean, maybe that's happening now. It definitely feels like we're in a period of rapid cultural change. And one of the big questions I have is, how do we manage that without coming out of it hating each other when people disagree?

Michael Calore: Yeah. We all need to read that book about conflict.

Jonathan Stray: There you go.

Michael Calore: Lauren, what is your recommendation?

Lauren Goode: My recommendation has a couple of layers to it. Now, I'm going to start out with two words for you, Mike: “nut bag.” OK, let me back up a little bit. So the first part is that you should read Matt Reynolds' story in WIRED about a new way to think about food. Now, this isn't diet content. I'm not weighing in on the Ozempic arguments here, if that's how you pronounce it. The story is about how some scientists are looking to reclassify how healthy our diets are based on how much processed food we eat instead of looking at things like fat and sugar and salt. The article is great. It's been one of our most-read articles on WIRED.com in the past couple of weeks. Everyone is reading it. You should go read it. But as a result of reading this article over the past week, I've been reading labels like I never have before. I'm looking for sneaky signs of processed food, and I drink a lot of almond milk. I take it in my coffee every day, multiple times a day. And I happened to be looking at the back of my almond milk carton, and I'm not going to say which brand. And in addition to water and almonds, it has calcium carbonate, sunflower lecithin.

Jonathan Stray: Lecithin.

Lauren Goode: Thank you. Lecithin, sea salt, natural flavor, locust bean gum. Gellan or gellan gum? Potassium citrate, et cetera, et cetera. So at the recommendation of Mike, I'm going to buy a nut bag this weekend, the eco nut bag you passed along to me. It's like nine bucks.

Michael Calore: And this is a bag. It's a sort of a muslin-type cotton bag.

Lauren Goode: Unbleached.

Michael Calore: That you use to make nut milk.

Lauren Goode: Right. I'm going to soak some almonds, and I'm going to try making my almond milk, and we'll see how this goes.

Michael Calore: I think you'll like it.

Lauren Goode: I think I probably will too, if I remember to soak the almonds in advance.

Michael Calore: Yes.

Lauren Goode: There's going to be a whole downstream conversation at some point about how much water almonds take, because nothing we do these days has zero environmental impact, but we're going to worry about that another day. Come back next week to hear how the almond milk experiment went. That's my recommendation.

Jonathan Stray: That's a seriously healthy recommendation. That's impressive.

Lauren Goode: Jonathan, I feel like you'll understand us because you're dialing in from Berkeley. I'm becoming so San Francisco. Mike has me riding my bike into the office a little bit more now. I'm making my own nut milk. I mean, it's happening. It's all happening.

Michael Calore: I'm delighted that I'm having this positive influence on your life, Lauren.

Jonathan Stray: Do you have your own sourdough starter now?

Lauren Goode: I did during the pandemic, but that didn't last very long. Its name was Henry. It was pretty great. Do you have yours? Do you still have yours?

Jonathan Stray: No. I have a housemate who has a sourdough starter, but I let other people deal with the microorganisms.

Lauren Goode: And then do you eat the sourdough that your housemate makes?

Jonathan Stray: Well, naturally.

Lauren Goode: Of course. You have to be a tester. You're like, “I must research this.” Jonathan, thank you so much for joining us on this week's episode of Gadget Lab. This has been incredibly illuminating and really fun.

Jonathan Stray: You're not tired of Section 230 by now?

Lauren Goode: No. We could probably go for another hour. We'll let our listeners go for now, because they probably are tired of it, but let's just keep going.

Jonathan Stray: Yeah. OK, great.

Lauren Goode: And thanks to all of you for listening. If you have feedback, you can find all of us on Twitter. Just check the show notes. Jonathan, tell us your handle.

Jonathan Stray: Jonathan Stray on Twitter.

Lauren Goode: Great. And our producer is the excellent Boone Ashworth. Goodbye for now. We'll be back next week.

[Gadget Lab outro theme music plays]