Ethics of YouTube’s Recommendation Algorithm

Jack Goldberg
14 min read · May 16, 2021


1 — Introduction

1.1 Background

The Capitol insurrection of January 6, 2021 may have initially looked like a sudden eruption of right-wing anger at the result of the 2020 presidential election. Upon closer inspection, however, it becomes clear that the violence was the culmination of a drawn-out process of misinformation. The rioters were motivated by election conspiracies spread through the internet. The biggest bullhorn for those conspiracies was the incumbent president, Donald Trump. His tweets both spread misinformation to a new and rather large audience and emboldened the already radicalized. Even though the allegations of election fraud were not based in reality, these conspiracy theories carried a physical cost (five deaths) as well as a civic one.

Of course, online radicalization did not begin on the night Joe Biden was elected President of the United States. The QAnon movement, for example, began in the early days of the Trump administration. While the movement has parroted a number of other conspiracy theories, its central ethos remains the same: it casts Donald Trump as a messianic figure who will root out the “Deep State” and usher in a new golden age for the United States of America. Many of the movement’s views percolated into mainstream conservative thought. The manufactured battle between the righteous Trump and evil Deep State agents inspired both QAnon followers and right-wing extremists to engage in the type of violence that was witnessed on January 6th.

The question remains how so many people fell victim to such a conspiracy theory. The internet provides a low-friction medium for spreading ideas. In a positive sense, the information revolution has democratized knowledge; curious minds may seek out anything of interest. There are also consequences: social media platforms have been the main vectors for conspiracy dissemination. Conspiracy theories are not strictly a right-wing phenomenon; the internet hosts theories of all kinds, from flat-earth claims to 9/11 conspiracies. The structure and economic model of the major social media platforms make them veritable breeding grounds for conspiracy movements. Social media companies are thus placed in a moral bind when weighing the risks of online radicalization against their profit motives.

1.2 Literature Review

Corporations are often faced with the decision to choose either financial benefit or public good; the field of business ethics deals with exactly this question. One example of a corporation choosing profits over people is the Ford Pinto case. In the early 1970s, Ford produced a car that was supposed to compete with the rising Asian automobile market. During testing, Ford discovered that the gas tank was vulnerable to leakage in rear-end collisions at around 20 miles per hour (Matteson & Metevier, 2021). However, the company ran a cost-benefit analysis and determined that the cost of fixing the gas tanks was not justified. Instead, it allowed customers to die in avoidable accidents. Business ethicist Richard T. De George not only argues that Ford acted unethically in ignoring the safety concerns (a claim most people would make in hindsight) but also contends that Ford engineers were morally obligated to blow the whistle in that situation (De George, 1997).

YouTube’s dilemma is also steeped in the ethical debate on hate speech. Slagle provides a comprehensive review of ethical arguments relating to hate speech (Slagle, 2009). John Milton made the earliest libertarian argument, holding that speech is a battlefield where pious truth will prevail over sinful falsehood (Milton, 1965). In a more secular approach, John Stuart Mill argues that even hateful speech is necessary to provoke dialogue within a community (Mill, 1966). Conversely, critical race theory rejects the libertarian view (Slagle, 2009). This framework recognizes inequalities within society and interprets hate speech through those structural differences. For example, Mari Matsuda argues that truth does not inherently drive out falsehood and on that basis seeks to ethically justify restricting hate speech (Matsuda, 2018).

1.3 Social Media

In a world where artificial intelligence is thrown at every problem, it is no surprise that social media companies are using it to increase their bottom line. Safiya Umoja Noble, in her book Algorithms of Oppression: How Search Engines Reinforce Racism, argues that the private interests of social media companies shape the way content is presented on their websites (Noble, 2018). Companies such as YouTube earn revenue through time spent on their sites: more eyeballs for longer periods of time result in more advertising revenue. So YouTube will generally do anything it can to lengthen user sessions. At first instinct, artificial intelligence appears to be a novel solution; neural networks can learn the preferences of users and feed them similar, enticing content. Companies like YouTube and Facebook have been very successful at this goal. The issue remains whether the technology is ethically justified.
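
To make the incentive concrete, here is a minimal, purely illustrative sketch of an engagement-driven recommender. It is not YouTube’s actual system; every name in it (Video, predict_watch_minutes, recommend) is hypothetical, and the “model” is a trivial stand-in for a learned neural network. The point is only that the objective being optimized is predicted watch time, not the quality or truthfulness of the content.

```python
# Toy engagement-based recommender; an illustration, not YouTube's real system.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    topic: str

def predict_watch_minutes(user_history: list[str], video: Video) -> float:
    """Toy stand-in for a learned model: score a video by how much its topic
    overlaps with topics the user has already watched."""
    return sum(1.0 for topic in user_history if topic == video.topic)

def recommend(user_history: list[str], candidates: list[Video], k: int = 3) -> list[Video]:
    # Rank purely by predicted engagement; nothing in this objective asks
    # whether the content is true, harmful, or extreme.
    ranked = sorted(candidates, key=lambda v: predict_watch_minutes(user_history, v), reverse=True)
    return ranked[:k]

history = ["politics", "politics", "gaming"]
candidates = [Video("calm explainer", "politics"), Video("cooking demo", "food")]
print([v.title for v in recommend(history, candidates)])
```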

Issues with deep learning have been well documented. One vein of criticism concerns algorithmic bias: non-representative training datasets have caused bias against minorities in computer vision models (Buolamwini & Gebru, 2018). Another is the environmental cost of language models: Timnit Gebru helped document the pollution caused by large language models and was subsequently forced out of her role at Google (Hao, 2020). The problems with artificial intelligence in content delivery, however, do not appear to be a bug. Rather, radicalization is a feature of the model. Radicalized users spend more time on the platform, going deeper down the “rabbit hole” of their given conspiracy. Without a human in the loop, algorithms are incentivized to feed users ever more extreme content. Zeynep Tufekci recounted her experience on YouTube, beginning from innocuous content and gradually being fed extremist content, both far-right and far-left (Tufekci, 2018). Further, Ribeiro et al. find support for the existence of a radicalization pathway toward extreme content (Ribeiro, Ottoni, West, Almeida, & Meira Jr, 2020), and their work is corroborated by other researchers in the field. So YouTube users may be led toward extremism even if they did not initially desire such content.
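
The rabbit-hole dynamic can be illustrated with a toy simulation. The sketch below is an assumption-laden caricature, not a model of YouTube’s real recommender: it assumes users engage most with content slightly more extreme than what they already watch, and it shows how a greedy, engagement-maximizing loop then ratchets a user’s tastes toward the extreme end of the catalogue.

```python
# Toy simulation of the "rabbit hole" dynamic described above; all numbers
# and behavioral assumptions are illustrative only.

# Hypothetical catalogue: each video is summarized by an "extremeness" score in [0, 1].
catalogue = [i / 100 for i in range(101)]

def engagement(user_level: float, video_level: float) -> float:
    # Assumption: a user engages most with content slightly more extreme
    # than what they currently watch.
    return 1.0 - abs(video_level - (user_level + 0.05))

user_level = 0.10  # the user starts with fairly moderate tastes
for step in range(20):
    # A greedy recommender picks whatever maximizes predicted engagement...
    pick = max(catalogue, key=lambda v: engagement(user_level, v))
    # ...and watching it nudges the user's tastes toward the recommendation.
    user_level = pick

print(f"After 20 recommendations the user's content sits at extremeness {user_level:.2f}")
```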

2 — Ethics and Algorithms

2.1 Moral Actors and Dilemma

The issue of algorithmic content delivery lies within the realm of ethics. Because the recommendation algorithm is such an important part of the platform, the policy decision must be made by a major player in YouTube’s hierarchy. When a senior executive declares the company’s new direction, it is much more difficult to undermine that decision, and once the ethical decision is made it is important that the policy is actually carried out. Therefore, the decision on algorithmic content delivery must be made by Alphabet CEO Sundar Pichai.

The moral issue at hand is radicalization through algorithmic recommendations. YouTube’s recommendation system leads individuals toward extremist ideologies, and those ideologies have grave consequences. As mentioned, extremism results in physical violence; conspiracy movements like QAnon have significant body counts. Further, there are social consequences to radicalization: victims of radicalization are often ostracized from their families, emotionally harming both the extremist and their former loved ones. The previously mentioned rabbit hole describes the process of users going deeper and deeper into extremist content, and it harms both those who fall into it and those around them. The existence of this rabbit hole compels Pichai to make an ethical decision.

No decision is made in a vacuum, so Pichai’s choice regarding the recommendation algorithm will have ramifications elsewhere. The most apparent is the economic importance of the recommendation system. YouTube’s revenue model is built around advertising, and as discussed previously, video recommendations increase advertising revenue through prolonged user sessions. Increased revenue strengthens the fundamentals of the company and secures a longer lifespan for YouTube. That longer lifespan benefits the 10,000 employees who work at YouTube; a strong YouTube improves the livelihood of all those who depend on it.

Evidently, there is a moral dilemma in this situation. Pichai must either place some limit on algorithmic recommendations and harm the economics of the company, or allow the status quo to continue and further facilitate extremist violence. There is no perfect choice here; either decision will violate an established moral principle. Therefore, ethical frameworks must be used to determine the better outcome.

2.2 Affected Parties

Given the wide penetration of YouTube into the public’s internet usage, any decision made in this ethical dilemma will affect YouTube users, their social sphere of influence, and the public at large. According to Pew Research, 74% of American adults use YouTube (Shearer & Mitchell, 2021), so any change in the way content is recommended will affect that huge user base. Despite the clear issues with YouTube’s recommendation system, it is effective at its goal: it enhances the user experience through relevant video recommendations. Eliminating the recommendation algorithm would therefore harm YouTube users by degrading their experience on the site. Even though Pichai’s decision will affect everyone generally, different outcomes will have particular implications for certain groups of people.

Those who are the subjects of conspiracy theories will benefit from reduced algorithmic content delivery. The victims of conspiracy theories are put in danger by extremist movements; by being vilified in this way, they become potential targets of violence. For example, the Tree of Life synagogue shooting in 2018 was motivated by antisemitic conspiracy theories that can be found on YouTube; the shooter killed eleven people and injured six. Conspiracy theorists often vilify minority groups such as Jews, Muslims, and immigrants. These groups will benefit from the reduced propagation of extremist content on YouTube.

3 — Possible Decisions

3.1 Status Quo

Of course, one option is to continue the status quo. YouTube has been hugely successful with its business model, and changing course because of externalities like extremism would be antithetical to its traditional business practices. Economist Milton Friedman states, “There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it … engages in open and free competition, without deception or fraud.” (Friedman, 1962). YouTube’s recommendation algorithm is undeniably useful in increasing the profits of the business.

Should Pichai choose to keep the recommendation algorithm in its current state, society can expect continued extremist-related violence. Without roadblocks in place, YouTube users will continue to be led toward extremist content. The low friction of YouTube’s user experience makes it an ideal gateway platform for potential extremists, and radicalization on YouTube leads users to sites with even less content regulation. Users who first learn conspiracy theories on YouTube are then led toward sites such as 8chan, which actively encourage extremism and violence. Violent events like the Capitol insurrection will continue to occur in some form. This course of action will, of course, harm the victims of the violence. Widespread extremist violence will also have a terrorizing effect on the public’s mental state: potential victims will live with a constant fear of being attacked.

Keeping the recommendation algorithm will also maximize user experience on the platform. With such a robust system, YouTube users will get as much enjoyment out of the platform as possible; users, extremist and not, will continue to be fed content that suits their desires. This benefit will reach a large number of people, a group far larger than the victims of extremist violence.

3.2 Algorithm Elimination

Half-measures are not suitable in this dilemma. New conspiracy theories are constantly developing, and it is a fool’s errand to censor the content as it develops; as soon as one movement is tamped down, another emerges. For example, the Boogaloo Bois movement provides a case study in how extremist groups can deftly evade social media bans simply by changing their display names (Kriner & Lewis, n.d.). The algorithm has no idea that it is promoting new extremist content, and the cycle begins again. Therefore, another option Sundar Pichai has is to eliminate algorithmic recommendations entirely. This does not preclude YouTube from using recommendations; rather, videos could be curated by YouTube employees and recommended to users based on their demonstrated preferences, as sketched below. Regardless of how Pichai decides to pivot after elimination, the most important aspect of this decision is that a black-box neural network is no longer recommending potentially extremist content to users.
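
As a rough illustration of what human curation could look like in place of a black-box ranker, consider the following sketch. It is only one hypothetical design; the playlist names, video IDs, and functions are invented for this example and do not describe any actual YouTube feature.

```python
# Hypothetical human-curation approach: recommendations are drawn only from
# playlists that an editorial team has reviewed, rather than from a learned ranker.
CURATED_PLAYLISTS = {
    "cooking": ["knife-skills-basics", "weeknight-pasta"],
    "history": ["fall-of-rome-overview", "printing-press-documentary"],
}

def curated_recommendations(user_topics: list[str], k: int = 5) -> list[str]:
    """Return up to k videos drawn only from human-reviewed playlists that
    match the user's demonstrated interests."""
    picks: list[str] = []
    for topic in user_topics:
        picks.extend(CURATED_PLAYLISTS.get(topic, []))
    return picks[:k]

print(curated_recommendations(["cooking", "history"]))
```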

Eliminating the current recommendation system will result in fewer radicalized YouTube users. It must be noted that extremism will not end with the end of YouTube’s recommendation algorithm; there will always be other outlets to feed a desire for extreme content. For example, websites like Gab and Parler were created in response to Facebook’s crackdown on extremist content. The goal here is not to end extremism as a whole. Rather, eliminating YouTube’s recommendation algorithm will cut off the pipeline into extremism: users would no longer unwittingly delve into the rabbit hole, and extremist movements would most likely no longer be able to capture the attention of large swaths of the public, as QAnon did.

A consequence of eliminating recommendations would be a worse user experience on YouTube and a corresponding injury to the company’s business health. Ending the content rabbit hole, extremist or not, will reduce advertising revenue on the site. While it is impossible to forecast YouTube’s economic future, it is very reasonable to expect that a loss in advertising revenue will hurt the company. Pichai could seek revenue elsewhere, for example by requiring a paid subscription for the site. Alternatively, he could reduce YouTube’s workforce in an effort to cut costs and strengthen the company’s margins; those laid-off employees would lose their livelihoods and suffer as a result.

4 — Ethical Frameworks

4.1 Kantianism

Kantian ethics provide a reasonable framework for analyzing Pichai’s dilemma. The core law in Kantian ethics is the categorical imperative. The first formulation of the categorical imperative requires that an action be universalizable: all people must be able to take part in that behavior without a logical contradiction occurring. In the case of YouTube’s recommendation algorithm, we are concerned with whether it is logically coherent to spread misinformation through the platform. When misinformation is universalized, there is no way to discern real information from fake, and the inability to distinguish reality from fantasy devalues the entire concept of information. Thus, there is a logical contradiction in the universalization of misinformation. Even though YouTube does not create the content, the corporation does have a role in facilitating the spread of misinformation through its recommendation algorithm.

The second main formulation of the categorical imperative asserts that people should not be used merely as a means to an end; humans must be treated as autonomous, rational beings. It is clear that YouTube’s recommendation policy is motivated mainly by the revenue potential of a user rather than the humanity of the user. Concern for the employees of YouTube does not override the fact that YouTube users are being manipulated into using the platform. Thus, Pichai is treating YouTube users, especially those being radicalized on the platform, as a means to further enrich himself and advance the interests of the corporation.

Under a Kantian framework, Pichai would be ethically compelled to cease YouTube’s use of algorithmic content delivery. The scourge of misinformation is already endemic in society, and ethical frameworks are not concerned with the market capitalization of a company; all persons are treated as equals under Kantian ethics. Pichai’s use of algorithmic recommendations results in the unethical spread of misinformation as well as the objectification of YouTube users.

4.2 Social Contract Theory

The ethical framework of social contract theory is especially pertinent to the YouTube algorithm dilemma because there are wide societal implications at play. Social contract theory generally states that people can live ethically by agreeing to a set of acceptable behaviors. Limiting certain actions restricts freedom, but social harmony is maximized through universal adoption of the contract. Contracts can be either explicit or implicit: explicit contracts are codified documents that are made public, while implicit contracts are not directly stated but are understood through repeated social practice. Assuming the validity of social contract theory, we can analyze how YouTube’s algorithm fits within existing social contracts. If YouTube’s recommendation algorithm does not contradict an accepted social behavior, then it is morally acceptable within society. Conversely, if the algorithm facilitates the violation of an accepted behavior, or prevents the enforcement of the contract, then the technology is not morally acceptable.

In most societies there are explicit and implicit agreements not to spread lies, because the truth is integral to the harmonious functioning of society. The United States legal system, for example, has rules regarding libel and defamation; people can be held legally accountable for spreading damaging lies. Beyond the legal system, there are implicit agreements not to lie. It is generally understood that lies lead to confusion within society, and because lies are so damaging, liars are often ostracized from their social group. The YouTube recommendation algorithm, however, does not allow for the enforcement of this social contract. Because the algorithm is a neural network, it makes no distinction between lie and truth; videos are recommended based solely on the calculation that the user will watch them. The recommendation system thus subverts enforcement of the social contract. Under this ethical framework, algorithmic content delivery is not morally justified.

5 — Conclusion

Deontological ethical reasoning is particularly appropriate in this dilemma. Rule-based morality is useful when decisions have society-wide implications, because it provides a roadmap to universal adoption and enforcement of ethical norms. It has been demonstrated that Pichai’s decision will have far-reaching effects; YouTube is a huge part of society, and extremism is damaging to social harmony. Considering that both Kantian ethics and social contract theory lead to the conclusion that algorithmic recommendations are unethical, Sundar Pichai is morally compelled to end YouTube’s algorithmic recommendation policy.

It must be noted that ending the practice is not a restriction on free speech. Content creators are still allowed to make whatever content they would like (that ethical dilemma is not addressed here), and users may watch what they please (again, an entirely different discussion). Instead, YouTube will simply no longer be actively promoting extremist content. There is a big difference between restricting speech and choosing not to actively propagate it throughout society.

Through the elimination of algorithmic content delivery, we can expect a precipitous decrease in extremist content on YouTube. Without the rabbit hole, extremist content creators will have their supply of misled YouTube users cut off. Without a large audience to preach their lies to, creators will no longer have an economic or ideological incentive to stay on the platform; they can be expected to leave YouTube for smaller platforms, reducing the potential for extremist ideologies to spread through society. While it is not a panacea, reducing algorithmic recommendations will be a positive development. Perhaps it will signal an end to extremist attacks on the scale of what occurred at the United States Capitol on January 6th, 2021.

References

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Paper presented at the Conference on Fairness, Accountability and Transparency, pp. 77–91.

De George, R. T. (1997). Ethics and automobile technology: The Pinto case. Technology and Values, 279–293.

Hao, K. (2020). We read the paper that forced Timnit Gebru out of Google. Here’s what it says. MIT Technology Review. Retrieved January 21, 2021.

Kriner, M., & Lewis, J. (n.d.). The evolution of the Boogaloo movement.

Matsuda, M. J. (2018). Words that wound: Critical race theory, assaultive speech, and the First Amendment. Routledge.

Matteson, M., & Metevier, C. (2021). Case: The Ford Pinto. University of North Carolina Greensboro.

Mill, J. S. (1966). On liberty. A selection of his works (pp. 1–147). Springer.

Friedman, M. (1962). Capitalism and freedom. University of Chicago Press.

Milton, J. (1965). Areopagitica. Рипол Классик.

Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.

Ribeiro, M. H., Ottoni, R., West, R., Almeida, V. A., & Meira Jr, W. (2020). Auditing radicalization pathways on YouTube. Paper presented at the Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 131–141.

Shearer, E., & Mitchell, A. (2021). News use across social media platforms in 2020. Pew Research Center.

Slagle, M. (2009). An ethical exploration of free expression and the problem of hate speech. Journal of Mass Media Ethics, 24(4), 238–250.

Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times.
