The Race for Attention on YouTube

This article is part of the On Tech newsletter. You can sign up here to receive it on weekdays.

When we get into heated arguments with our neighbors on Facebook or watch politically charged YouTube videos, why do we do it? That is the question my colleague Cade Metz wants us, and the companies behind our favorite apps, to ask.

Cade’s most recent article is about Caolan Robertson, a filmmaker who for more than two years helped create videos featuring right-wing YouTube personalities, videos that he says were deliberately provocative and confrontational, and often deceptively edited.

Cade’s reporting is an opportunity to ask ourselves tough questions: Do the rewards of internet attention encourage people to post the most dangerous material? How much should we trust what we see online? And do we tend to seek out ideas that fuel our anger?

Shira: How much blame does YouTube deserve for people like Robertson, who make videos that emphasize conflict and social division, and that in some cases have been manipulated?

Cade: It’s tough. In many cases, these videos became popular because they confirmed some people’s prejudices against immigrants or Muslims.

But Caolan and the YouTube personalities he worked with also learned how to play up or manufacture conflict. They could see that these kinds of videos attracted attention on YouTube and other websites. And YouTube’s automated recommendations sent lots of people to those videos too, which encouraged Caolan to make more of them.

A Facebook executive recently wrote that his company is mostly not to blame for pushing people toward provocative and polarizing material, and that this is simply what people want. What do you think?

There are all sorts of things that feed our appetite for sensational or outrageous material, including talk radio, cable television and social media. But it’s irresponsible to say that’s just the way people are. We all have a role to play in not stoking the worst of human nature, and that includes the companies behind the apps and websites where we spend our time.

I’ve thought about this a lot in my coverage of artificial intelligence technologies. People try to distinguish between what humans do and what computers do as if they were completely separate. They’re not. People decide what computers do, and people use computers in ways that change how they work. That’s one reason I wanted to write about Caolan. He takes us behind the curtain to see the forces, both human nature and engineering design, that affect what we do and how we think.

What should we do about it?

I think the most important thing is to be aware of what we are really seeing and doing online. I do get scared when I think about emerging technologies, including deepfakes, which can produce fake, misleading or outrageous material on a much larger scale than people like Caolan ever could. It will become even harder to know what is real and what is not.

Isn’t it also dangerous if we learn to distrust what we see?

Yes. Some people in technology believe that the real risk with deepfakes is that people learn not to believe anything – including what is real.

How does Robertson feel about having made YouTube videos that he believes polarized and misled people?

In a way, he regrets what he did, or at least wants to distance himself from it. But he is now essentially using the same tactics he once used to make right-wing extremist videos in order to make left-wing videos. He is doing on one political side what he did on the other.
