Approximately one-quarter of Americans get their news from YouTube. With billions of users and countless hours of content, YouTube is one of the largest online media platforms in the world.
In recent years, there has been a popular belief in the media that highly partisan, conspiracy-driven YouTube channels are making young Americans more extreme, and that YouTube's recommendation algorithm is leading users towards increasingly extreme content.
However, a new study from the Computational Social Science Lab (CSSLab) at the University of Pennsylvania has found that users' own political interests and preferences play a major role in the content they choose to watch. In fact, to the extent that the recommendation algorithm influences users' media consumption at all, that influence is mild.
"On average, relying solely on the recommender leads to consuming less partisan content," said Homa Hosseinmardi, the lead author of the study and a research associate at CSSLab.
To determine the true impact of YouTube's recommendation algorithm on what users watch, the researchers built bots that followed the recommendations and others that ignored them entirely. They created a set of 87,988 bots, each seeded with the YouTube viewing history of a real user, collected between October 2021 and December 2022.
Each bot was assigned its own personalized YouTube account so the researchers could track its viewing history and estimate the partisanship of the content it watched using metadata associated with each video.
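The article does not spell out how those partisan scores are computed, but as a rough illustration, aggregating per-video scores into an estimate of a bot's viewing diet might look like the sketch below. The partisan_score field and its -1-to-+1 range are assumptions for illustration, not the study's actual metric.

```python
from statistics import mean

# Hypothetical metadata: each watched video carries an estimated partisanship
# score, assumed here to run from -1 (strongly left) to +1 (strongly right).
watch_history = [
    {"video_id": "a1", "partisan_score": 0.42},
    {"video_id": "b2", "partisan_score": -0.10},
    {"video_id": "c3", "partisan_score": 0.75},
]

def estimate_partisanship(history):
    """Summarize a bot's viewing diet as the mean per-video partisan score."""
    return mean(video["partisan_score"] for video in history)

print(round(estimate_partisanship(watch_history), 2))  # 0.36
```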
In two experiments, each bot went through a "learning phase," watching the same sequence of videos so that all of the bots presented the same viewing preferences to YouTube's algorithm.
Next, the bots were divided into several groups. Some continued to follow the videos from the real users' histories they had been trained on, while others were designated as experimental "counterfactual bots": bots that follow specific rules designed to separate user behavior from algorithmic influence.
In the first experiment, after the learning phase, control bots continued to watch the videos from the users' histories, while counterfactual bots deviated from the users' actual behavior and only selected videos from the recommended list, disregarding user preferences.
Some counterfactual bots always chose the first ("next") video in the sidebar recommendations; others randomly selected one of the first 30 videos listed in the sidebar recommendations; and some randomly selected one of the first 15 videos listed in the homepage recommendations.
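In code, those three rules correspond to simple selection policies. The sketch below is only illustrative: the helper functions get_sidebar_recommendations and get_homepage_recommendations are hypothetical stand-ins for however the bots actually read YouTube's recommendations, not part of the study's published tooling.

```python
import random

def get_sidebar_recommendations(current_video):
    # Hypothetical stand-in: returns the ranked list of videos shown in the
    # sidebar ("up next") alongside the video the bot just watched.
    return [f"sidebar_{current_video}_{i}" for i in range(30)]

def get_homepage_recommendations(account):
    # Hypothetical stand-in: returns the ranked list of videos shown on the
    # bot's personalized homepage.
    return [f"homepage_{account}_{i}" for i in range(15)]

def next_video(policy, current_video=None, account=None):
    """Choose the next video under one of the three counterfactual policies."""
    if policy == "sidebar_first":
        # Always take the top ("next") sidebar recommendation.
        return get_sidebar_recommendations(current_video)[0]
    if policy == "sidebar_random_30":
        # Pick uniformly at random among the first 30 sidebar recommendations.
        return random.choice(get_sidebar_recommendations(current_video)[:30])
    if policy == "homepage_random_15":
        # Pick uniformly at random among the first 15 homepage recommendations.
        return random.choice(get_homepage_recommendations(account)[:15])
    raise ValueError(f"unknown policy: {policy}")

# Example: a counterfactual bot following the sidebar-only rule.
print(next_video("sidebar_first", current_video="v123"))
```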
The researchers found that, on average, the counterfactual bots consumed less partisan content than the corresponding real users, a gap that was more pronounced for individuals who consumed heavily partisan content.
"This gap corresponds to users' inherent preferences for this type of content relative to the algorithm's suggestions," said Hosseinmardi. "This study shows similar moderation effects for bots consuming extremely left-leaning content or when bots subscribe to channels on the extreme end of the political spectrum."