The unchecked growth of technology is like climate change, says tech ethics advocate and Time Well Spent co-creator Tristan Harris.
Problems such as “tech addiction, polarization, outrage-ification of culture, the rise in vanities [and] micro-celebrity culture” are all, in fact, symptoms of a larger disease, he said: “The race to capture human attention by tech giants.” And as those giants are making technology smarter, they are indirectly making all of us dumber, meaner, and alienated from one another.
“It’s sort of a civilizational moment when an intelligent species, us, we produce a technology where that technology can simulate the weaknesses of the creator,” Harris said on the latest episode of Recode Decode with Kara Swisher. “It’s almost like the puppet that we’ve created can actually simulate a version of its creator and know exactly what puppet strings to pull on the creator, so we’re all outraged.
“When technology exploits our weaknesses, it gains control,” he said.
And fully understanding the scope of this problem is perhaps harder than understanding the effects of global warming. Harris, the co-founder of the Center for Humane Technology, compared Facebook and YouTube to energy companies like Shell and Exxon, except “worse” because “they also own the satellites that can detect how much pollution’s being created.”
But there is some good news, he said: If tech giants and policymakers can be convinced that this is an existential problem, then poisonous digital products can be reworked to emphasize less outrage and more healthy conversations. And the responsibility for solving the problem should fall on all of them, regardless of their past culpability, which means that a company like Apple, despite being generally uninterested in the attention economy, “can play such a huge role because they can incentivize that race.
“Human downgrading is the climate change of culture,” Harris said. “Like climate change, it can be catastrophic. Unlike climate change, only about 1,000 people, among like five companies, need to change what they’re doing. Now, in saying that, I’m not trying to disempower the thousands and millions of us outside of these systems that are like, ‘Well then, I guess I’m not included.’ That’s not it at all. This is going to take everyone.”
Below, we’ve shared a lightly edited full transcript of Kara’s conversation with Tristan.
Kara Swisher: Hi, I’m Kara Swisher, editor-at-large of Recode. You may know me as someone who can quit using my phone any time I want — hang on, I have to get this — but in my spare time I talk tech and you’re listening to Recode Decode from the Vox Media Podcast Network.
Today in the red chair is Tristan Harris, the co-founder of the Center for Humane Technology. He was formerly a design ethicist at Google and started a movement combating tech addiction called Time Well Spent. Now he’s back with a new problem he wants to solve, which he calls “human downgrading.”
Tristan, welcome back to Recode Decode.
Tristan Harris: It’s so good to be back here with you.
So we were in this same tiny little room talking about this and it was way before people were thinking about it, this idea of Time Well Spent. I was just riveted by the idea of someone within one of these companies starting to discuss these issues that were important. Now it seems to have intersected not just with all the other problems, but it’s an integral part of the social unrest problems and everything else: depression, addiction, the way people behave online, “fake news.” It’s all a part of the same thing.
So give people just a very quick what happened, how you started it, and what we were previously talking about and where we’re going now.
Great, yeah. Many people I think who are aware of the work know vaguely of the story that back in 2013 I was a product manager at Google and I made this presentation about how we have a moral responsibility to get the attention economy right. That we, in Google’s hands, were holding 2 billion people’s attention, and we were shaping the flows that guided what 2 billion people were thinking about.
And because thoughts precede action, that means that you’re controlling and shaping culture, relationships, politics, elections. Somehow, people latched onto the addiction thing. That’s how the general public heard it, but it was always about what happens when you shape 2 billion people’s attention.
Yeah, the moral, what’s your moral responsibility?
And what’s your moral responsibility?
Which is something I talk about a lot, this idea of what are you doing? What are you … It’s actually basically a simple question. Why are you behaving this way and do you understand your power?
Yeah, absolutely. I think that the tech industry, it’s very hard to understand your power because there you are. You’re a Gmail engineer or you’re a Facebook News Feed engineer and you’re just writing some lines of code every day. You don’t think that’s gonna influence a genocide happening in Myanmar or influence the politics of France or populism around the world.
The whole point of this big, new agenda that we’re launching, the humane agenda, is to say that we need to move from this disconnected set of grievances and scandals, that these problems are seemingly separate: tech addiction, polarization, outrage-ification of culture, the rise in vanities, micro-celebrity culture, everyone has to be famous. These are not separate problems. They’re actually all coming from one thing, which is the race to capture human attention by tech giants.
With increasingly powerful AIs pointed at your brain to reverse engineer, what can I throw in front of your nervous system to crawl down your brain stem and get something out of you? Whether that’s an ad click or an addiction or a political convergence or whatever. This is all part of one connected system. We call it “human downgrading,” which is the social climate change of culture.
It’s important to have that because otherwise it’s almost like before there was climate change there was just one group of people working on coral reefs, another group of people working on hurricanes and …
Right, now these polar ice caps, you know.
And then these ice caps and it’s like, “Oh, these are disconnected.” It’s like, “No, they’re not disconnected. These are all connected issues.” So we have to have a unified agenda, and we do, because that’s the only way it’s going to change; it’s an urgent problem. That’s why we hosted this big event in San Francisco last week.
Right. I’m going to back up just slightly to Time Well Spent because it got co-opted, too.
Mark Zuckerberg co-opted it a little bit. Why do people pick on addiction first? You were talking about, I remember you were talking about the slot machine of attention, the idea that people are using all these tricks and tools. Tell me how you think that went, that part of it. I agree with you, the interconnectedness is critical to understand.
Yeah, well, so going back in time … I mean, first of all, going back to two years ago when we were in this room. This is two months after the election. We were trying to say, “Hey, social media and technology has a huge invisible impact on the way the world is playing out.” That was a bold claim back then. Not that many people were talking about that even though researchers, to give them credit, had been studying this for a long time, but that was not the popular media understanding.
No, and Mark was saying, “It’s crazy to think …”
“No, nothing. There’s nothing going on there.” In terms of what happened with Time Well Spent and the important thing to say about that, which relates to the work we’re doing now too, is that I saw how powerful, simple language of a common understanding could create change.
It’s sort of hard to remember this, but before two years ago, people weren’t using phrases like, “Technology has hijacked our minds.” “It’s this race to the bottom of the brain stem for attention,” and “We need time well spent.” Those three phrases … We came up with some ways to talk about something that a lot of people were feeling but they didn’t have a simple description for. I’m not trying to say this with any ego. I just mean that until there’s good language to describe our problem, it’s just in the invisible felt sense layer and no one knows what to do about it.
As soon as this language was out there, it spread. And this is behind our theory of change: when you have common understanding, it gets repeated. What happens when three times in one day someone recommends a book to you?
Mm-hmm, you think about the book.
Or, you think about it but you have to buy the book or you have to address that problem. So part of this is how do we create a lever that’s big enough now to address the problem of human downgrading, which is that while we’ve been upgrading the machines we’ve been downgrading humans: downgrading our attention spans, downgrading our civility and our decency, downgrading democracy, downgrading mental health. And not intentionally, but that the race to upgrade the machines sometimes …
Well, sometimes intentionally. You’re being lovely. You’re being nice. I do want to get to … What do you think the …
With Time Well Spent?
With Time Well Spent. All of a sudden, Apple did its Screen Time. Google talked about it. Everyone talked about it. How do you assess what’s happened? Some people think it doesn’t go nearly far enough, but they’ve begun to discuss the idea of it, grayscale, everything else.
Right, well, totally. So again, going back to culture change, you could say … Let me take you back to February 2013. I sat in Google for two years trying to see, could we change things from the inside? Could we change how Android worked to reduce the addiction problem? Could we change how some of these products worked? I didn’t get very far.
Right, because it’s, “Why do we want to do that?”
Why do we want to do that? The culture hadn’t caught up. It wasn’t like, “Hey, we’re motivated by profit and you’re going to take away our money.”
That’s what people miss. It’s not that.
It’s not that, exactly. People think, “Oh, it’s just these greedy corporations.” It wasn’t that, I can tell you. I was in the room with Android product managers. It was just that it wasn’t a priority. What I saw, which was very powerful to see, is what happens when you create shared language, you go out there, you create public awareness, a lot of people speak out, Roger McNamee, Jaron Lanier, Guillaume Chaslot, Renée DiResta. People start speaking in this shared language and it starts to create change.
So now what happens? So as you said, Apple and Google both at the same time, May of last year, launched these digital well-being features. It’s the reason why everyone has these charts and graphs of where their time is going on the phones. This is definitely a direct consequence of this awareness raising.
One hundred percent. They never would have done it otherwise.
Right. So that’s a really powerful lesson for me because it shows you that if a lot of people say, “there’s a problem here,” and they understand that the responsibility really is on the side of technology, then that can cause change to happen. The most important thing about it is that all these companies did it at the same time. Apple and Google both launched these things in May of 2018. Now, it’s like we flipped from a race to the bottom to now it’s a race to the top for who cares more about people’s well-being?
Now, to your question of, “Is that enough?” No, no, no, absolutely not. It’s like .01 …
How do you assess what they’ve done so far? Let’s just … where we are right now.
Well, I mean if you just take the … First of all …
You want to say “Thank you” but at the same time it’s like … I don’t know. It reminds me of when someone does something that they should have done anyway. Like, “Thank you for not hitting me.”
Right, right. Or it’s like when someone apologizes a little bit but not enough. Also, different companies bear different levels of responsibility. By the way, Apple’s in one of the best positions to do so much more on this topic because their business model isn’t about attention at all. They’re kind of like the Federal Reserve or the Central Bank of attention. People forget that. People think, “How are we going to regulate the attention economy?” They go to governments. But let’s go to Apple. Apple can change the incentives really quickly. In a year from now, we could have iOS 13 or 14 suddenly flipping the incentives around.
In terms of what you’re saying, it’s really important to celebrate when things do move in the right direction, but like you said, we want to celebrate it just a tiny little bit, like a little golf clap, because it’s like …
A golf clap.
When you really examine the full surface area of harms — polarization, radicalization, outrage-ification of culture, call-out culture, groups being marginalized, people feeling threatened and trolled — these are all direct consequences of a race to get attention. Because the stuff that’s best at getting attention, it turns out … Let me give you an example with outrage. For each word of moral outrage that you add to a tweet, it increases your retweet rate by 17 percent. So if you say, “It’s abominable!” “It’s a disgrace!” “Oh these assholes that …” You just get attention.
Yes, I’ve noticed that.
You’ve noticed. The thing is, because attention is finite and Twitter’s surfacing whatever gets the most attention, that stuff starts filling up a greater and greater percentage of the channel. Then it makes culture seem like everyone is outraged all the time.
I almost … I studied a little bit of magic and hypnosis over the last few years. I almost want to sort of snap my fingers and tell everybody, “You can wake up now.” There’s a bit of artificial polarization that we’ve all gone through. Society’s been thrown into the outrage washing machine and we’ve been spinning around in there for a couple of years. Yes, there are deep grievances and deep problems in society, but they’ve been so amplified by social media.
If we can … There was some criticism of this in our presentation last week about saying, “Take a breath.” It’s like, it’s more just to notice, first of all, that this isn’t all real. The grievances are real but the amplification and polarization is artificial.
Well, it was interesting, there was this story in the New York Times about how there are Twitter Democrats and real Democrats, and the real Democrats are not quite as mad. They’re not quite as angry. They’re not quite as incensed. Because it does create this outrage culture, although it does filter down. It filters everywhere, all over the place.
Well, that’s the thing people miss, actually, because people say, “Oh, well, I don’t use these products so I’m fine and I’m not outraged,” but you live in a society that’s affected by these dynamics, meaning the outrage spills out of the screen into the social fabric.
So for example, maybe you say, “I don’t use YouTube and I don’t believe conspiracies.” Well, guess what? You live in a culture where you maybe send your kids to a school where other people do, and they stop vaccinating their kids because they’ve been surrounded by social media saying, “Don’t vaccinate your kids.” The WHO says that anti-vaccination sentiment, or vaccine hesitancy, is now a top 10 global health threat. I know you interviewed Renée DiResta, who is one of my heroes, about how this has spread on social media.
The point is that even if you don’t use these products, this stuff is actually holding the pen of history. It’s shaping the outcomes of every major election around the world. It’s shaping the politics of outrage. It’s shaping populism. In Brazil, I don’t think you would have gotten a Bolsonaro without WhatsApp and Facebook, where you have videos of all of his supporters shouting, “Facebook! Facebook! WhatsApp! WhatsApp!” Proudly saying, this is what won the election.
Yeah, Trump. I think that this stuff is really shaping everything. I was just with Frank Luntz down in the Milken Conference and even he’s saying, “This is not good. We have to have a reconciliation. We have to realize that we can’t keep going in this direction. We have to reverse the polarization.”
So Apple’s one of them. What about Google? Then I want to get to what’s your solution.
Yeah. Google also launched these digital well-being features, again without us writing a single line of code, which is to say that if you change culture, billions of dollars of where companies are directing their resources and their design can change. That’s really important because with this podcast, with other people who are speaking, if we have a surround sound where everyone’s naming the same problem of human downgrading, then they can start to reverse it.
Apple and Google both, collectively, that’s gonna reach more than a billion phones this year. So people’s phones go grayscale at the end of the night. That helps reduce the effects of blue light and some of the addictive qualities, but again, these are baby steps.
Facebook, last year, embraced Time Well Spent. In Zuckerberg’s January 2018 letter he said, “Our new goal for the company is to make sure the time people spend is time well spent.” However, as you may know, he followed that up in the shareholder call with, “That means people spending more time watching videos together online.” It’s like, well …
The answer to Facebook is Facebook, right?
Facebook is Facebook.
That’s my favorite.
Yeah, so that’s not enough. We have to see, what is the price tag of human downgrading? Is it just that we have shorter attention spans, or are addicted, or we’re more polarized? It’s like, if you add up this balance sheet, free is the most expensive business model we’ve ever created, because if people can’t agree on shared facts and truth, that’s it. That’s it.
When truth becomes political.
When truth becomes political. We have such enormous problems with inequality and climate change especially that are increasing, and the quality of our sense-making is going down while our problems are going up. We can’t even construct a shared agenda of what we want to do about it. It’s harder than ever.
If we don’t have technology that … I’m not saying that technology solves the problem, but obviously it’s pointing us the wrong way right now. If we reverse it … Imagine if we have superhuman powers to find common ground. I’m not trying to speak like a techno-utopian. I’m not saying tech is the answer. I’m saying we have to go from technology amplifying the polarization to at least, in its …
Well, you know that was the idea from the beginning, right? That was what attracted me to it 25 years ago, was the idea that this was commonality, that we had commonality and that these tools could bring commonality.
Right, they could create a global commonality, in a way.
Yeah, we used to have things like the Fairness Doctrine in television, where we’d have equal sides. In Europe, I think, during the soccer breaks they have like 10 minutes of a common news segment during the period when everyone’s paying attention to soccer games, football games. We don’t have that with social media. It has completely fragmented our truth. And more importantly, people tend to underestimate the scale of the disinformation and misinformation.
I don’t think you’ve had Guillaume Chaslot on the program?
No, I haven’t had him on.
He’s the ex-YouTube recommendations engineer. He works with us. His research showed that Alex Jones was recommended 15 billion times.
Right. Well, you know how I feel about the recommendation systems on YouTube.
Yeah, yeah. As we said in our presentation last week, the whole thing is … Imagine a spectrum. You’ve got the calm Walter Cronkite, David Attenborough side of YouTube, and then on the other side of the spectrum you have Crazytown: UFOs, Bigfoot, etc. No matter where you start, whether in Calmtown or in Crazytown, if I’m YouTube, which direction on those two poles am I going to send you? I’m always going to tilt the playing field towards Crazytown, because that’s where I get more time spent.
YouTube’s recommending — a teen girl watches a dieting video, and this was a year ago, it recommends anorexia videos. You start with the NASA moon landing, you get Flat Earth. Once people go to these more extreme beliefs, or these more conspiracy beliefs, there’s a study showing the best predictor of believing in conspiracy theories is whether you already believe one. If you believe one, it kind of ratchets you up into this different …
To the next one.
To the next one.
“The moon landing, I think, yes, definitely, vaccines” or whatever.
It changes … Speaking as a magician and hypnotist person: when you flip people’s minds into that kind of questioning, paranoid mindset, everything shifts. I had a birthday present about two years ago where a friend had me walk down to these docks in Brooklyn and said, “Sit there.” This was an assignment, kind of. I was just looking out at the ocean. Someone came by taking Polaroid photos, and I thought they were just taking Polaroids like a tourist.
Then suddenly she handed the Polaroid to me and said, “Go to this address on this dock” and she wrote a little letter on the Polaroid and handed it to me. The point of saying this, is it flipped my whole perception around because suddenly I was wondering, “Who’s in on this thing?”
Right, right, sure.
And like everyone around me could have been in on it.
I think that was a Michael Douglas movie.
Exactly, The Game. Suddenly, conspiracy theories magnified by these platforms times billions of people, by the way, it flips everyone’s mind into this kind of questioning mindset that questions institutions and trust and … People don’t believe the media. They don’t believe in government anymore. If you look at the disinformation actors, what Russia’s trying to do as well, the point is to get people to distrust their institutions because …
Right, that’s all you have to do.
That’s all you have to do.
All they have to do. So that’s why you renamed it, going from Time Well Spent, which was talking about the addiction issues, which are actually physical issues. It actually is addictive.
It absolutely is, yeah.
To the idea of downgrading. What shifted you? You got a ton of attention for Time Well Spent, and again, addiction was the biggest part of it. What shifted you to it? Obviously you spent a lot of time with Roger, who is talking about this on Facebook. What made you go that way?
As you mentioned, Roger, who’s fantastic, and I ended up pairing up as a team and going into Congress. We were at these conferences talking to all the developing countries’ groups, where you’d see genocides being fueled by ethnic tensions, with social media amplifying the fake news about minority groups, whether it’s the Rohingya in Myanmar or it’s …
There’s just so many examples. Cameroon, Nigeria. When you get a deeper and fuller accounting of the harms … It wasn’t that I was unaware of those things before, by the way, it’s just part of this is …
Right, no we talked about it.
Part of this is just getting, yeah, it’s just getting … If you think about how fast the harms are accruing and what it’s going to take to fix these problems, we need language where, when you say the phrase “human downgrading,” it speaks to the full, climate change-style set of harms. Otherwise, if you pull on the lever and on the other side of that lever it’s only fixing the addiction problem, or you’re only trying to fix the polarization problem, we’re not going to get change fast enough.
We actually need to move market forces in the direction of reversing human downgrading. I think of this almost like moving from an era of an extractive attention economy that’s the race to the bottom to strip mine, frack human attention out of human beings, to see human beings …
Frack is a very good word.
… to see human beings as resources, to a regenerative attention economy that says, almost like the birth of solar panels and wind or whatever: we still need attention from people, but what if we were living in a world where we’re competing to minimize the attentional footprint and maximize the leverage for people to get what they want out of their lives?
With the useful part of it, where you get utility.
Yeah, right. And let’s also speak to this because we often get criticized as the sort of anti-tech group or something like that.
Yes, you do. Yeah.
It’s really …
My section of town, too.
It’s funny how these things …
We love tech!
Yeah, it’s funny how these things happen. Let me give you a few examples. YouTube is fantastic. It has never been easier to learn a musical instrument. It has never been easier to do-it-yourself fix any item in your home.
One hundred percent.
It has never been easier to get health advice. Our co-founder Aza Raskin had this leg injury, and he looked up on YouTube how to massage your leg to heal faster, and it helped him more than any doctor he’d seen.
Dr. Google. There are certain benefits that are amazing. It’s never been easier to laugh with friends on YouTube. That’s a perfect example. The problem is that that’s not what these products are about. These are great examples, but if you look at the surface area of what’s actually happening, what are most people experiencing, most of the time? Like climate change, all I have to do is tilt the playing field just two degrees on the polarization or on the outrage or on the conspiracy.
Use plastic bags, use plastic bottles.
Yeah, and suddenly the whole thing can tilt and go crazy so fast. If you look at the solutions that the companies are offering, like “let’s hire 10,000 content moderators.”
It’s impossible. They can’t … and I’m not saying this to complain. I just mean this is a very delicate, dangerous, complex situation. When you hire 10,000 content moderators, I would just ask, “How many Facebook or YouTube engineers speak the 22 languages of India, where there’s an election coming up next month?” There were four fact-checkers in Nigeria, for a country of 24 million people.
They can’t do it. They can’t. I know that. I always say that. It’s ridiculous. What you’re doing is ridiculous, because they don’t know what to do.
We’re here with Tristan Harris. He’s the co-founder of the Center for Humane Technology. Explain what that is now. What are you doing? Where is it at, what’s the … and then we’ll get into these solutions and this new focus on downgrading humanity.
Yeah, great. We’re a nonprofit based in San Francisco. We have seven people and need to grow to about 20. We have about every major world government knocking on our door, and we’re trying to help the tech companies navigate these issues.
A lot of us are former technology insiders who built some of these things. Aza Raskin, one of my best friends, is a co-founder of the Center for Humane Technology. His father, Jef, who started the Macintosh project at Apple, actually wrote a book called The Humane Interface. That’s where the word “humane” came from: Jef said that the purpose of being humane is to be considerate of human needs and responsive to human frailties. You start by understanding the frailties of human nature and you design to protect those things. That’s what it is to be humane in terms of designing technology.
And what we think is that technology right now isn’t fitting society like a glove; it’s fitting our reptilian brain like a glove.
We need to fit our social fabric like a glove. We have to ask: what are we fitting it to? That’s why we often invoke this metaphor of ergonomics. Just like an ergonomic chair: if you don’t know anything about the geometry of your back, then everyone’s sitting in chairs that are misaligned, because no one’s ever looked at the geometry of backs. It’s almost like society has a geometry of what makes for civility, decency, trust, open-mindedness, etc. Broadcasting to 50,000 people and having call-out culture at scale is not a very good ergonomic fit for that.
This is a group that you’re dedicated to doing what?
We are dedicated to reversing human downgrading.
By advocacy and research. We’re trying to educate designers to change how they’re designing these products, because the point is, per your point, AI is not going to solve the problem, content moderators aren’t going to solve the problem; it’s actually a sophistication about human nature and social dynamics that’s going to fix the problem. What are the kinds of spaces where people are open-minded and civil? Let’s just take this common ground example. For example, take something like your feeling about whether you can participate in a group. Have you ever been at a dinner with 20 people? Do you feel like it’s easy to participate?
I do, but go ahead.
Yeah, I bet you do.
But, for most people, 20 feels too big, so that’s un-ergonomic for participation. Let’s take that value of participation and ask, “Well, what tends to work ergonomically? What fits that value?” Oh, a group of about six people. That’s one example. Living Room Conversations is a nonprofit project that facilitates six-person conversations, with equal political sides, about a shared topic of interest with real passion, with good facilitation, and it generates common ground.
They get real people, and people feel like they’ve had a meaningful conversation because they can actually talk to each other and respond to the last point that was made, as opposed to, “Oh, keep going around the table,” where with 20 people you can’t actually respond. That’s an example of how we might apply a lesson like that to how Twitter is designed, or how Facebook is designed, or how Reddit is designed. Another example, with Reddit: there’s a project called Change My View, a whole channel dedicated to people saying, “I invite you to change my mind about X.”
For example, “I think climate change isn’t real because of X, Y, and Z. Please, change my mind.” That’s an invitation that says, “Let’s have a dialogue about that.” They successfully created this whole community where people gain what are called delta points: the more you change people’s minds, the more points you get. It creates this community of trust and thoughtfulness, rewarding expertise as opposed to rewarding outrage and winning based on who’s better at punning or shaming the other side.
Those are the lessons that we need: not better AI, not more data, not just the blockchain, not just more Jeremy Bentham or Kantian ethics. People often have this idea, here at Stanford, that we have to train the engineers in ethics. Believe me, as a former design ethicist, I embrace that, I think that’s great, but that’s actually still not enough.
What we really need is the subtle sophistication about how do you design social systems to bring out the best in human nature? What we try to do at the Center for Humane Technology is provide frameworks and help educate those design teams to do that.
And also point out when people are not doing that.
Correct, and do that peacefully or with pointed critiques that are not directed at the individuals and saying, “Look, this is a system that produces these harms, and we have to have an honest balance sheet of those harms.”
Do you feel like you’ve gotten through to these leaders? These are still the founders, still the original people, which I hate to use a religious term, but I find them religious. They are religious about what they’ve done.
Say more about that.
I think someone who’s running a TV network now is very different from the founders of the TV network. The founders pushed forward in ways that the people running it now don’t. Now they just run it like a business, essentially.
The founders believe in their mission in a way that’s very hard to get them off of it.
For them to develop in any way.
I think a good example of this is Susan Wojcicki who’s running YouTube now, who says, “My job is not to fix these things. My job is to run the business.”
I have to say, I’m sorry for calling her out, but this is a platform that is steering 2 billion people’s thoughts in a known, documented way toward radicalization. We know it’s causing people to believe conspiracy theories and increasing hatred, and 50 percent of white nationalists in this Bellingcat study said YouTube is what “red-pilled” them into white nationalism.
We know it’s doing this, and they’re not solving the problem.
Right. I think she is concerned. I was just with her, and she looked beside herself. I think …
It’s great that … there was an interview in Bloomberg where she said, “It’s my job to run the business.” I know that quotes get taken out of context …
I can see her saying that, because she’s a business person. But we were talking about an issue that was really interesting: Someone who was anti-gay-and-lesbian made a video that was just a bunch of biblical quotes, essentially. It wasn’t a YouTube channel, it was an ad; they bought an ad of the biblical quotes. Then those ads got put onto gay and lesbian creators’ videos, and all of these gay and lesbian creators were like, “What are you doing putting an anti-gay ad into our thing?” and then YouTube pulled it.
It’s a digital Franken — they can’t control it.
They pulled it and then the guy who made it was like, “I’m not violating anything, this is from the bible.” She was like, “I don’t even know what to do.” I had to say, “I don’t know what you should do. I think you should shut the whole thing down. Shutting down is my only …”
I tend to agree. The problem isn’t just, let’s just shut down technology.
No, I wrote that in the Times about Sri Lanka. I was sort of like, “Good, shut it down. Just for now, just to calm everything down.”
It’s just like what … Roger and I had this saying about a year-and-a-half ago, this Tylenol example. When it was found that there was poison in Tylenol, Johnson & Johnson took it off the shelf until it was safe. Their stock price tanked, but then it went up even higher because people trusted them.
The problem is that the harms now are not as simple as whether or not we’re all just getting poisoned by Tylenol. It’s this diffuse, climate-change-like harm. Maybe it doesn’t affect you, but it’s causing genocides around the world, or it’s causing millions of people to believe conspiracy theories and debasing our social fabric. Because that doesn’t affect you, people don’t have that same level of urgency of “we have to shut it down.” But they really need to see, it’s not like the world was broken before, 10 years ago, when you could watch a video and nothing auto-played.
The important thing to say here, in terms of who’s the responsible party: 70 percent of YouTube’s traffic (this is actually a year ago, so it’s even higher now, I’m sure) is driven by what the algorithm is recommending. People think, “That’s not true. I’m the one choosing my way through YouTube.” Let me debunk that. That is not true.
Even my 13-year-old knows that.
Right. The simplest example is, you’re sitting there, you’re about to hit play on a YouTube video, and you’re like, “I know there’s other times where I get sucked in, but this time is going to be different,” and then of course you hit play and two hours later you wake up from a trance and you’re like, “What the hell just happened to me?”
What happened in that moment, which people don’t see, is that somewhere in a Google server, it wakes up this avatar model, a voodoo doll version of you. Based on all of your clicks and likes, which are like your hair clippings and nail clippings, it makes this model behave more and more like you. Then they test the avatar model against 100 million videos and ask, “If I were to show this video, or this video, or this video, which one would keep you there the longest?” It’s like playing chess against your mind.
Yeah, and it’s going to win. If you think about it, why does Garry Kasparov beat you and me at chess?
I can’t play chess, but …
He sees more moves ahead on the chessboard. When you’re playing it out, you’re playing out simulated versions of, “Well, if I did this, he would do this,” but he’s just playing out more simulations, so he wins. When Garry lost against the best supercomputer, Deep Blue, in 1997, he didn’t just lose chess in that match. He was the best human chess player we had, so from that moment onward, all of humanity has been worse at chess than computers. They’ve overtaken humans at chess.
Right, so a better Garry Kasparov.
Now here we are, 2 billion human social animals, with the best hardware we’ve got, bringing a knife to a space laser fight. We’re bringing our tiny little prefrontal cortex, which is amazing, but also very limited. We have paleolithic emotions, and we are bringing those to bear when we are about to hit play on a YouTube video. YouTube has now overrun … and Facebook, by the way; anyone with a supercomputer (Google, YouTube, Facebook) can simulate the perfect things to show us.
This is actually a deep point that people really underestimate, because it’s sort of a civilizational moment when an intelligent species, us, we produce a technology where that technology can simulate the weaknesses of the creator.
It’s almost like the puppet that we’ve created can actually simulate a version of its creator and know exactly what puppet strings to pull on the creator, so we’re all outraged. Take the kids example. You have kids who are now addicted to what they look like on social media because Snapchat promotes this beautification filter, basically rewarding you whenever you look different than the way you actually do. It’s never been easier to see that people only like you if you look different than you actually look.
Fifty-five percent of plastic surgeons reported seeing someone who wants to get plastic surgery to look like their Snapchat beautification filter. If you don’t know this: for teen girls, the beautification filters in Snapchat plump your eyes, your cheeks, your lips, so we’re distorting people’s identity. When you realize that this is having control over our social fabric, over children’s mental health, over our politics, over our elections, you realize that technology is holding the pen of history right now. We’re not in control.
When you think about all of the different things, one thing you do get is that they are all working together, but not thinking of it together at all. Each individual doesn’t think that they’re creating a problem. It’s like everyone getting a plastic bottle, everybody buying …
It’s like climate change. Facebook and YouTube are kind of Shell and Exxon, but it’s worse though, because they also own the satellites that can detect how much pollution’s being created. This is a really important point: How much human downgrading and polarization or anger is happening in each of these countries from Facebook? We don’t know, because guess who has the data? You and I don’t.
They do. We had this line, “It’s a living, breathing crime scene in each of these elections.” They are the only ones who have the data, so it’s like Shell and Exxon where you create the pollution but you privately profit, the harm shows up on the balance sheet of the society, and the only way that we’re going to know what those harms are accurately is if we have access to the data. It’s as if Shell and Exxon own the observatory satellites.
Clearly, from a regulatory perspective, this has to change. The easiest thing to change, the thing that fundamentally has to change, is that we’re moving from an extractive attention economy that treats human beings as resources …
As fuel, right. For our data, for our attention, to a regenerative attention economy where we just don’t drill. Why in the world would we say, “Let’s profit off of the self-esteem of children”?
This is so wrong. And we used to protect this.
In the next section, we’re going to get to how we’re going to do that. But one of the things that is fascinating to me, one is, these companies now do feel victimized. You’ve picked that up, haven’t you, from them?
Yes, very much so, because they’re trying hard, they’re doing lots of good, and they’re going to be victimized from their past behavior, and I totally get it. I totally get why they are feeling that way.
Right. I had someone saying, “We’re still paying for 2016,” and I’m like, “Yeah, you are. You haven’t paid it off yet. Sorry, you’re going to have to keep paying. In fact, again, you may have to shut it down.” You may not have to do that.
One of the things that’s really interesting, when I talk to a lot of these people, they’re like, “Well, this is the way it works,” and I’m like, “Maybe you don’t get to grow. Maybe you don’t get to do this.” That’s what you were just talking about: Why would you want to benefit off of the self-esteem of children? Maybe you don’t get to do that.
We’re going to talk about that in the next section, but one of the things, I’ve used this example many times, I use it again and again because I want to repeat it over so people get it, there’s two things I say all the time, or three things. One is, everything that can be digitized will be digitized, so be mindful of what that means. The second one is that Russians didn’t hack Facebook, they used it as a customer, which I think I like to say over and over again.
The third thing, which I think is most accurate because everyone is trying to look for metaphors of what is happening here: I like to think of these platforms as cities that own everything and you pay the rent for being there. Even if it’s free, it’s not free in any way whatsoever, and they decide not to provide police and garbage and signs, anything. And they still get the rent.
That’s the society you get, and they don’t like to see themselves that way.
Right. The urban planning metaphor is the best one. I totally agree. We’ve been saying similar things for years. Marc Andreessen obviously has this quote, “Software is eating the world.” If you think about what that means in the context of who is running the software, these billion dollar corporations, what that really means is that private incentives are eating the world. Private companies are eating the world.
Also, we don’t regulate those private companies, so it means that deregulation is eating the world. Take an example; Saturday morning was a protected area of the attention economy, for children. Say, we have these rules that govern what you can and can’t do.
What you can advertise, etc. Then YouTube for Kids gobbles up that area of the attention economy. Now you have a private company governing a public part of life.
Of children’s mental health in Saturday morning. Whatever protections we had there, guess what, they’re gone.
They’re not there. They never were there.
Right. From a regulatory standpoint, just a framework to use, a very simple way of thinking about this is, what were protections on these different areas that we had that we simply lost because we let private incentives eat it up? And let’s ask, what were the principles behind those protections?
Initially, and let’s bring those protections back, the way that they make sense. That’s just an easy one.
Another example is elections. We used to regulate that if Hillary Clinton and Donald Trump want to put an ad at 7:00 pm on a Tuesday, it should be an equal price, and we can see who did it, and there are certain rules about an equal price, etc. Then you let Facebook gobble up election advertising, that area of the attention economy …
Without any regulation.
Without any regulation, and suddenly it’s an auction. Suddenly it’s like, “Wait, who decided that? Why are they the ones governing how 2 billion people’s elections are run?” Moreover, even if they have good intentions or they hire ethicists, it’d be like Coca-Cola governing the public square but hiring some ethicist to be better. No, you don’t want Coca-Cola governing the public square.
I think that’s the interesting part, is that they become the de facto public squares without being public.
They’re private. They’re the private public squares, and then they get to hide behind First Amendment stuff. “It’s First Amendment!” I’m like, “You’re not public. You’re not a government.”
This is where the metaphor of urban planning is so important, because it shows they are environments that we inhabit. They’re environments, they’re not products we use. This is the thing we were trying to say years ago. People say, “Oh, it’s just a tool, it’s just this neutral thing, I can use it to post stuff to my friends.” It’s like, no, when you’re creating the habitat that 2 billion people live by — we spend about a fourth of our lives now in artificial social systems, meaning in these digital environments. That’s just on-screen. When you’re off-screen, you’re still thinking thoughts that came from that artificial social environment.
We have to govern them as public spaces. They’re …
They have to be governed.
They have to be governed, exactly. This is a challenge, and just to be clear here, it’s not like there’s these easy answers of like, “Well, it’s easy, you just have to do X, Y, and Z.” There’s different governments, there’s authoritarian governments in developing countries, and different languages, and we never had a problem like this before. That’s the conversation we need to have is, how do we govern …
Let’s talk about what we need to do. Like you talked about, everything has been so disparate, whether it’s technology addiction or AI or political unrest or hate speech, they don’t get linked together.
Linking them together is critical. I just wrote an essay talking about how they are all linked together. It’s not one problem.
They all inform each other.
Right, and they’re all part of self-reinforcing feedback loops, much like climate change, where carbon has a self-reinforcing feedback loop with methane and oceans, etc. Similarly, in the attention economy: Is it easier to say short, brief, soundbite-y things, or long, complex, nuanced things? Short, soundbite-y things. And when you say short things, what tends to work better is outrage. Outrage gets 17 percent higher retweet rates. If outrage wins more often, then we polarize people more often; more polarization means we’re more isolated, living in our own echo chambers; when we’re more isolated, we’re more vulnerable to conspiracy-theory thinking, and so these things self-reinforce.
We need a name for this connected system. We just used the phrase “human downgrading” because it gets at the heart, which is that while our data and our attention are used to upgrade the machines, to build better and better avatar models of us, it’s downgrading humans, it’s downgrading our mental health, our children’s attention spans…
What needs to be done in the immediate term first? Obviously the realization is happening. I think there’s … The techlash is here and going strong. What has to happen?
On the techlash thing, first, I would like to see us get calmer and more nuanced about, let’s just solve the problem. There’s clearly, like you said, a lot of rage and frustration about the past, I agree with a lot of that. These issues were not hidden, it was not impossible to foresee some of these consequences, but now we are where we are and we have to talk about how we can fix it. The thing that we’re most trying to say is that we have to have a common language and framework for addressing these problems.
Human downgrading comes from three things. One is unergonomic or artificial social environments. That’s what I mean by we’re contorting ourselves to fit into this …
Into the product.
Fit to the product, as opposed to the product wanting to fit around our friendships, fit around our society, our public square, our civility. It should be asking, “How do I fit and strengthen those things?” not, “How do I debase them and replace it with my synthetic version?” That’s the first diagnosis.
The second one is from these overpowering AIs where they have a voodoo doll avatar version of each of us that’s more powerful and they can use that to manipulate us.
By the way, that’s a Will Smith movie about to come out, Gemini Man, but go ahead.
They build a clone of him, and the clone goes back to kill him … they’re both assassins, and he can’t really kill himself because the clone knows all his moves … anyway.
Right. Interesting. I’m very excited to see this. I mean, this gets really dangerous when you realize that, again, everyone is looking out … this is a big point that we made at the beginning of our new agenda last week, in this big presentation. We think of it as like An Inconvenient Truth for tech. The point we made was, up until now, the biggest milestone people talk about in the future of tech is: When does it get better than human intelligence? When does it get better than human strengths?
We’re like dolphins, right?
Right. Exactly. When are they taking our jobs because they can do everything that we can do better than us? But there’s a much earlier point. Imagine that timeline, the graph of technological progress going up, up, up. There’s an earlier point, before it crosses human strengths, where all it has to do is cross human weaknesses. It’s like a magician. If you’re a PhD, as a magician, I don’t care. I don’t need to know your strengths; I just need to know your weaknesses.
When technology exploits our weaknesses, it gains control. The point of this second diagnosis, going from these overpowering AIs, avatar voodoo dolls of each of us, those have to be a fiduciary to our interests, because that’s a dangerous kind of asymmetric power. We have a model for this in every other form of asymmetric power. Lawyers, who you hand all of your personal information over to so they can best represent you.
Doctors, you want to hand them as much information about you so they have more to use to diagnose you. So it’s fine to have asymmetric power insofar as it is in our interest. It represents our interest. The joke about this is that Facebook is like a priest in a confession booth whose business model is to sell access to the confession booth to anybody who wants to pay them, but it’s worse than that because they listen to 2 billion people’s confessions.
That’s a very good metaphor.
And they have a supercomputer next to them that’s calculating 2 billion people’s confessions so that, as you’re walking in, it can predict the confessions you’re going to make before you make them. Then, it actually sells access to the prediction about you to an advertiser. So that is a dangerous kind of power, where it’s like we could have priests in confession booths, although the joke is maybe we shouldn’t have those. But you don’t want …
Yeah. Let’s not go into that part.
Let’s not go into that. But you don’t want priests whose business model is to sell access to someone else. So the fiduciary thing is critical because it immediately says you cannot have a business model based on exploiting the psychology of the person that you’re seeing when you have asymmetric knowledge about them.
So what do we do?
Well, that’s a whole area for legal scholars and policymakers to think about, and we want to assist that conversation. Again, we’re …
What are some of the good ideas, what are some of the bad ideas in this?
Good ideas about what?
What to do about this bad priest.
Yeah. So Jack Balkin, I think at Yale Law School, has written this paper about information fiduciaries, but that was written several years ago and it’s somewhat outdated. I think we need a newer model that represents the power of AI and prediction.
As a computer scientist, I know where this is going. Take Cambridge Analytica. They needed to get access to your data to predict your big five personality traits. Once they know your big five personality traits, they can tune political messages to you. Okay, but they had to get those 150 Facebook likes from you, right? And so that was that whole scandal.
There’s a paper out by Gloria Mark at UC Irvine showing that, based on your click patterns and how you click around on a screen, I can get the same big five personality traits with 80 percent accuracy.
So, you don’t even need to do that.
I don’t need your data. I can predict everything about you. And guess what? The more I downgrade you to acting into dopamine and fear, the more predictable you become. Because there’s two ways to make you predictable: One is, I build a bigger supercomputer and I can predict a fuller and fuller space of things that you might do next.
The second way to make you more predictable is to simplify you, to make you outraged, because when you’re outraged, how does that feel? You act in a more predictable, reactive way. Technology is doing both of those right now. That’s why we say human downgrading is an exponential threat: It’s downgrading our choice-making capacity, our capacity to not fall into …
And not even know that it’s happening.
And not even know that it’s happening. We have to fix this.
I want to go to a solution. What is that? Who fixes it? Is it regulators that say, “You cannot do this anymore?” Because it’s so complex, what they’re doing.
So who is responsible? Besides, what they seem to have done is, “Hey, you can turn it off.” You can’t turn it off.
No. It’s like saying you can turn off the environment you live in. You can’t turn off your public square, your electricity, or your water. You need it. We live in these environments now. So we have to make them habitable to us, and they have to be the governor of the public interest.
So who does that? The government.
The government has a role.
Which has done it before with chemicals, banks, cars.
And Roger makes this metaphor all the time. We used to have a chemical industry that could just do whatever it wanted. Once we realized there were some bad externalities, we had to regulate it. Cars, same thing: seat belts, etc. Airplanes: the FAA.
At the very least, consider that we’re not at the beginning of an accelerating trend; we’re at the tip of the exponential curve as it’s starting to go up. These issues will only get crazier. Technology is going to go faster. So at the very least, we used to have something called the Office of Technology Assessment, which was basically a non-partisan group in government that did analysis, generated quick policies, and consulted with the experts.
Right now, we’re a nonprofit civil society group. A lot of this work is being done by people like Renée or Guillaume, people who stay up until 3:00 in the morning, independently scraping data sets because they get Mozilla fellowships and can barely scrape by, and they’re the ones providing the accountability structure right now. This is not an effective system. We need to have well-funded observations of all these harms and then generating policy proposals much faster.
Just the way the government does weather.
Yeah. E.O. Wilson, the father of sociobiology, has this quote: “The fundamental problem of humanity is that we have ancient paleolithic instincts, medieval institutions, and godlike technology.” Paleolithic instincts, medieval institutions, and godlike technology. The point is, you can’t have chimps with nukes and regulate it with medieval, 17th-century, 18th-century institutions.
At the very least, we need to bootstrap the institutions to have faster OODA loops.
To keep up with it.
To keep up with it.
Which is hard.
Which is hard, and I’m not saying that’s easy. I’m just saying, we’ve got to do that.
Because these companies are nation-states. They just don’t have anyone …
They don’t have any … Exactly.
Anybody being able to vote them out.
Right. But they could also, I mean, for all the harms, they could be the thing that gives us exponential common ground, exponential ability to solve climate change.
So don’t they?
They could be the things that help.
Why don’t they?
Their business models and the fact that they’re competing against one another and the fact that they don’t even see the issues as we describe them. I don’t think that even the framing that we’ve laid out today is the common understanding.
Just three of them. Let’s be honest. Maybe four.
There’s not that many. So this is the …
Amazon, Facebook, Google, and maybe Alibaba and WeChat, right?
This is the thing. The negative side of it.
Is that all of them? That’s the …
Yeah, I mean, there’s only like five.
Apple is not.
It’s not in there quite the same.
But Apple has a role to play.
Role to play.
But they’re not creating the problem. They have a role to play in the solution.
But this is the good news. This is what we said in our presentation. Human downgrading is the climate change of culture. Like climate change, it can be catastrophic. Unlike climate change, only about 1,000 people, among like five companies, need to change what they’re doing.
Now, in saying that, I’m not trying to disempower the thousands and millions of us outside of these systems that are like, “Well then, I guess I’m not included.” That’s not it at all. This is going to take everyone.
The policymakers, the shareholder activists to put board resolutions on these companies’ board meetings. The media, guiding the conversation. Policymakers. The government’s job is to protect citizens from all these things. Everyone has a role. We are trying to simply facilitate and accelerate that work by providing that common language and understanding.
We asked about policy. One simple thing: The best ethics is the ethics of symmetry. Do unto others as you would do unto yourself. For the kid stuff, imagine a world where you design products in such a way that you would happily endorse them and have your own children use those products for hours a day. That neutralizes about half the harms immediately, because notice that none of the Silicon Valley executives have their own children use these products. The CEO of Lunchables …
That’s not true, I’ve seen, they use them. I’ve been around a lot of these children.
Well, when I say that, I don’t mean the Google search box or YouTube at all. I mean more like social media. A lot of them do not use social media, at all. It’s just such a simple shift to make. And the CEO of Lunchables, the food product, did not let his own children eat Lunchables. You know you have a problem when you are not eating your own dog food.
There needs to be skin in the game. Another principle is that the people closest to the pain should be closest to the power. There are groups that are trying to bring these ethnic minorities in these developing countries most affected by these things, with no public representation.
Here we are in the free world where Renée and Guillaume and others do this hard-to-do research and they publish it in the Washington Post and New York Times. Then, in Nigeria and Cameroon and Sri Lanka, they don’t have that same level of accountability. We need those groups to have a seat at the table. They should be included. There needs to be much more diversity, obviously, in these conversations, but especially where we know, sorted by the harms, by the tensions that are being produced.
But there doesn’t seem to be any movement that way. They’re hoping it goes away.
They’re hoping it goes away because they created a Frankenstein. It’s very hard.
I mean, what it looks like to me from yesterday’s F8 is that Mark’s now trying to create the greatest encrypted privacy organization on the planet. He’s just trying to encrypt it and hide it. I mean, am I missing something? He’s like, “Oh no, the jig’s up over here. I’m going over …”
Right. A lot of that, I’m assuming … I always want to be as charitable as possible and give the benefit of the doubt. I’m sure there are some good reasons for doing that based on, again, they’re the only ones who have access to who knows. Whatever decision making they’re doing, they’re the only ones deciding. That’s a huge problem.
Let’s assume there’s some good reasons for doing that. Beside that fact, there’s still also the fact that this is the best way in the world to escape liability, because one of the things that happened with the Russia investigation is, they don’t want to look. Same with children’s mental health: As soon as they look, they’re responsible. When it’s all private and in these decentralized channels, suddenly it’s all happening in the dark. There are many of us who are concerned about what that means for disinformation when there’s no way to track what’s happening.
These are thorny problems. There are no easy solutions. We need complexity and nuance more than ever. We need thoughtfulness, not just naive techlash. But I do believe in the power of shared understanding, through seeing what’s happened with Time Well Spent and the race to hijack our minds. If people can see the problem the same way, that the race to capture attention combines addiction, teen isolation, mental health, polarization, the outrage-ification and the vanity-ification of our culture, then they can see that these are connected issues, and we call that human downgrading.
The question is, how do we harness all the market forces, all the policymaking forces, to as rapidly as possible reverse human downgrading?
So, who are the key players? The companies?
The companies and, by the way, again mentioning Apple. Apple can play such a huge role because they can incentivize that race. They’re not incentivized to maximize attention on the device, and as people wake up to these issues, as they started to with the addiction stuff, Apple is rewarded by consumers asking, “Who’s going to better protect my kids? Should I buy an Android phone or should I buy an iPhone?”
We’ve just got to elevate the race to the top from the first one, which is, “Who can show me a better chart and graph where I’m spending my time?” to a higher bar, which is, “Who can reverse human downgrading?”
So, Apple, yep. That’s design changes. That’s App Store economic changes. There’s some deeper conversations to have there. Policymakers. There’s a whole slew of policy. We have a new person working with us.
I think they should have an API for mental health at Apple. You know what I mean?
They need to enable access for these researchers because that’s the other thing I said in my presentation. We don’t have time for a decade of research on this. It’s very clear when you understand the mechanics of a developing mind.
I was just with Jonathan Haidt, who wrote The Righteous Mind. He did this huge literature review. You can look it up online. It’s like 50 pages. It is so clear that for teen girls between 10 and 14 years old, social media is toxic. Self-harm, depression, and suicide have all shot up in basically the last five years.
Oh, you don’t even have to do research.
Yeah, you don’t have to do it. First of all, it’s common sense. Second of all, the research does confirm it. We don’t have time to wait. The thing is, just like climate change, you can have people throwing doubt and dismissing all this stuff and saying, “Well, it’ll take a long time. It’s really complicated. Who’s to say if it’s really polarizing people? Is it issue polarization or relationship polarization or affect polarization?”
Then they’ll use academic status games to exclude you from the conversation. It’s like, no, we know it’s causing polarization. Obviously, there’s lots of polarization that already existed, and the birth of cable news and Fox News and these kinds of things is magnifying it, but it’s clear that technology has amplified it. At the very least, how can they all be in a race to create what Jack, what Twitter calls healthy conversations: civility, open-mindedness, dialogue?
How do you assess that with him? I’m sorry. He doesn’t care. I’m sorry.
You don’t think he cares?
No, I don’t. I do not. I do not. I don’t think he thinks it’s a problem. I think he thinks it’s an irritant sometimes, but I don’t think he thinks it’s a problem.
Well, I don’t know. I think …
I like him personally, but honestly, I’ve got to say, this is too long, too long now.
I think that for all these companies …
And they’re doing really well.
Facebook. I know they’re doing really well. That’s the thing. It’s like, they could hire so many more people, anthropologists, social scientists. A lot of people who are doing the hard work in the research now, by the way, who have thought about this, who are not inside the companies. They could do so much more and this obviously eats into their bottom line, but let’s take the example of Twitter. Back when you and I spoke in 2017.
Like January, February 2017.
Mm-hmm. Right after the election.
There was a study out then estimating that 17 percent of Twitter’s users were bots. Seventeen percent. What happens when you’ve been telling Wall Street, “I’ve got 200X million users,” and they’re anchoring your stock price on that number, “This is how many users we have,” because that’s what makes you that valuable? You can’t just shut down 17 percent of your accounts.
And also, just to make sure I’m speaking to all the audiences, some of those bots are fine. They’re just telling you the weather. It’s not that they’re all Russian or whatever. But still, they’re a problem and they’re not going to go shut them down.
Now, what happened about half a year ago, in August 2018? Twitter finally shut down 70 million accounts, and that’s a great move.
Exactly, finally. It took a long time, but when they did, Wall Street punished them as opposed to rewarding them for actually doing the long-term regenerative …
Which would make it a better …
Which would make it a better system. I agree with you. In each of the companies’ cases, they’ve acted too little, too late. Zuckerberg saying, “It’s a crazy idea that fake news impacted the election.” It’s ridiculous.
According to …
Or, “We’re going to stop children’s comments.” I’m like, “Did pedophilia just occur to you? That …”
That just happened like two to three months ago that the advertisers … By the way, YouTube only tends to respond when, not just public pressure and media, but when their advertisers say, “We’re going to pull the money out.” That’s when they really respond.
These are not effective accountability systems. We need democratic accountability systems. We can’t just wait for whether we wake the advertisers up.
So what would be a shock to their system? The removal of the immunity clause from the Communications Decency Act?
I think CDA 230 is critical.
To remove it, saying, “Good luck with the lawyers.”
Well, we need …
Because I know, when I say it to them, they’re like, “We’ll be finished.” I’m like, “I’m good with that.” I’m teasing them, but I’m like …
I think this is where the debate has to center. I think there needs to be a massive review of CDA 230. For those who don’t know, the Communications Decency Act, Section 230, is basically what birthed the internet because it says the platforms aren’t responsible for the content that appears on them.
But, as Renee and I both have said, freedom of speech is not the same thing as freedom of reach.
These platforms, when they’re recommending Alex Jones 15 billion times — it’s not that people typed in “Alex Jones” 15 billion times with their hands. The system recommends that. If it’s recommending them, think about how big the New York Times and the Guardian and all these outlets are combined. It’s nowhere close to 15 billion.
They’re governed by these laws. So, they have to be responsible for recommendations, especially of what we know to be hate speech or incitement to violence, these things that are causing people to take up arms around the world. We need a much deeper review of Section 230, and that’s a bigger conversation, but we need to engage with policymakers on it.
All right. So, Tristan, where do we go from here? What do you need? I’m with you. I’m in the Tristan Army. Someone asked me if I thought I was on your side or not. I’m like, “There’s not a side.”
There’s no side here. I mean, it’s like … This is also not like our side.
Please. Are my children addicted?
This country ripped apart at the seams. It’s my goal.
Exactly. This is just Team Humanity and we’re not at the center of it. We’re simply trying to articulate a shared frame. We want to help all the actors in the space, because if you look at how big this is, every country, every election, hundreds of languages, hundreds of countries, for issues ranging from mental health to polarization. We have to solve it on such a huge scale.
Every government is involved. Every shareholder activist is involved. We’re trying to help the researchers accrue their research and show it to policymakers. We need policymakers engaged. We have this new head of mobilization, David Jay, who’s working on coordinating working groups on these topics. Some of the work is in public and in public events. We’re going to do a conference next year, but a lot of the work’s behind the scenes.
We’re launching a podcast called Your Undivided Attention, where we’re, ironically, we’re not … I mean, the whole point is …
How about I Leave You to Your Own Devices?
Yeah, exactly. That’s another good pun, but we’re interviewing magicians…
Magicians are a good idea.
Well, it’s people who understand there’s a subtlety to the human nervous system that tech designers are not the most in touch with. They’re just writing the code. They’re not thinking about their own nervous system.
Especially the social nervous system — how do these things connect together? That’s the expertise we need to rapidly accelerate and hand to the companies to be able to … We need to help them, as much as they’re also the problem.
I don’t want to help Mark Zuckerberg.
I hear you on that.
Is he listening? Do you talk to him a lot?
You know, we ran into each other at the Macron Summit last year, but we haven’t really sat down and chatted. I would love to chat.
Yeah, he doesn’t like to see me. I was over there and I brought some people over there and they let me in.
Yeah, yeah. It’s hard. I can’t say how many times I’ve gone back to the Upton Sinclair line, which is, “You can’t get someone to question something that their salary depends on them not seeing.”
Or, another way of saying it is, “You can only have the ethics that you can afford to have.” Right now, the price for many of these companies is too high. That’s why we need policy to make it more affordable.
Right. But you know the other thing? They’re all so poor all they have is money.
Anyway, Tristan, your work is amazing. I think it’s great. I think there should be more research out and policymakers …
We need everyone’s help. We need everyone’s help, too.
Policymakers really need to step in here, really smart ones, across the country and across the world.
I think it’s really important. I think it’s come to that, I guess, that’s how it …
Well, I will say, there’s been more interest than ever from world governments who are affected by these issues, radicalization. I mean, this has to get stopped, right now.
Well, good. In two years, I hope we will have better answers.
Exactly. Let’s do this again.
All right. Absolutely. One hundred percent again. Again, this is Tristan Harris. He’s the co-founder of the Center for Humane Technology. Thank you for coming on the show.
Thank you so much for having me.
Recode and Vox have joined forces to uncover and explain how our digital world is changing — and changing us. Subscribe to Recode podcasts to hear Kara Swisher and Peter Kafka lead the tough conversations the technology industry needs today.