Preventing Aggressive Ignorance – The A.I. We Don’t Think About

When UNC professor Zeynep Tufekci began watching Donald Trump rallies on YouTube, she noticed something odd. [1] YouTube began recommending and auto-playing extremist content: white supremacist videos, Holocaust denial, and the like. In 2015, Twitter began sorting its timeline by algorithmically determined “relevance” rather than chronological order, angering many users who felt it was undemocratic for certain content to be amplified. [2] And after a conspiracy-theory-promoting “March For Our Lives 2018 Official” group was discovered, Facebook drew criticism for how Groups radicalize individuals and allow extremism to flourish. [3] Despite receiving so much of our information from online sources, we like to think that our beliefs and knowledge are developed by us. But our perspectives are increasingly shaped by the Internet’s recommender systems, which surface content based not on quality or discourse but on monetary incentives: clicks, likes, and watch time.

The Jamaican social and political philosopher Charles Mills is one of the most influential theorists of how social surroundings and individuals’ social positions affect our knowledge (a philosophical field known as standpoint epistemology). Mills’ theories, though focused on race, shed light on why exactly recommender systems can be dangerous, and on what we can do about them.

In his landmark essay “White Ignorance,” Mills provides a case study of how individuals’ social standpoints affect their acquisition, learning, and justification of beliefs. He argues that this social component of knowledge leads privileged social groups (such as white individuals) to lack knowledge, or hold false beliefs, about their own privileged positions and about minorities’ needs. In fact, because advantaged groups are shielded from and not held accountable for the negative aspects of their social system, they are incentivized to remain ignorant and to reject opposing views, just as white people historically dismissed inequality (exhibiting white ignorance) in order to keep benefiting from the exploitation of minorities. Other examples of motivated ignorance are ubiquitous: consider how many non-minority individuals weren’t aware that nonviolent drug possession laws were designed to incriminate Black Americans, how some men believe there is no workplace glass ceiling and that deserving women are promoted just as frequently, or how big businesses deny climate change and their own pollution. Mills argues that white ignorance (and motivated ignorance of any kind) is unjust: it creates a world in which the group in power doesn’t need to care about the truth, while subordinated groups must understand the social practices, narratives, and mindsets of the group in power simply to survive.

Although ignorance can spread through family or friends, it is also sustained by widespread narratives propagated through media and society, narratives that recommender systems often perpetuate. For example, many middle-class individuals and immigrants believe the poor are simply lazy and don’t work hard enough; likewise, the American education system teaches that Columbus and Ponce de Leon “discovered” a new land that was already inhabited by Native Americans. Recommender systems that optimize for user attention at all costs risk perpetuating these narratives, both causing and reinforcing users’ motivated ignorance. The kicker is that individuals are often unaware of their own biases and ignorance, just as we aren’t always sure why recommender systems promote certain content for us. We want to avoid cases where recommender systems perpetuate our biases and ignorance without our knowledge or consent. So how do recommender systems decide what to promote in the first place? And how can users become aware of the risks of relying on recommended content to form their beliefs?

Recommender systems fall into three main categories: collaborative filtering methods, content-based systems, and hybrid systems that combine the two. Collaborative filtering estimates a user’s preferences from patterns of behavior (past searches, ratings, comments, likes) across that user and other, similar users, while content-based systems recommend items whose attributes resemble content the user has liked in the past. Hybrid systems combine both approaches, for example by using deep learning to predict user ratings from item attributes together with past user behavior.
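As a concrete (and deliberately toy) illustration of the first two categories, here is a minimal Python sketch using made-up ratings and item attributes; the data, function names, and similarity choices are illustrative assumptions, not any platform’s actual implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors (0 if either is all zeros)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

# Toy interaction matrix: rows = users, columns = videos, values = rating/like signal.
ratings = np.array([
    [5, 4, 0, 0],   # user 0
    [4, 5, 1, 0],   # user 1
    [0, 0, 5, 4],   # user 2
])

def collab_scores(user_idx, ratings):
    """Collaborative filtering (user-based): score items for one user by
    averaging the ratings of users whose behavior looks similar."""
    sims = np.array([cosine(ratings[user_idx], ratings[u]) for u in range(len(ratings))])
    sims[user_idx] = 0.0                  # exclude the user themselves
    weighted = sims @ ratings             # similarity-weighted sum of other users' ratings
    return weighted / (sims.sum() + 1e-9)

# Toy item attributes (one-hot topics: politics, fitness, cooking).
item_features = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])

def content_scores(user_idx, ratings, item_features):
    """Content-based: score items by similarity of their attributes to a
    profile built from what the user already liked."""
    profile = ratings[user_idx] @ item_features
    return np.array([cosine(profile, f) for f in item_features])

print(collab_scores(0, ratings))                  # leans toward what similar users liked
print(content_scores(0, ratings, item_features))  # leans toward attributes user 0 already likes
```

A hybrid system would then blend the two score vectors, or train a single model to predict engagement from both kinds of signal at once.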

When any kind of recommender system nudges us to consume content, it pushes content that is likely to entice us, whether that is determined from our past behavior, similarity to previously liked content, or some other signal. Mills argues that because individuals’ beliefs reflect their own social perspective, people are most receptive to information that is emotionally easier for them to accept. For example, it’s easy for rich individuals to ignore systemic biases and believe anyone can become wealthy if they work hard enough. So platforms will not promote content that is harder for us to accept (i.e., anything that might oppose our viewpoint); rather, they will show content that reinforces the beliefs they think we hold, or that they predict will attract us. That is why YouTube shows alt-right propaganda after Trump rallies, and videos about veganism after we research plant-based diets.

Recommender systems may profile a user’s social position or purported viewpoints using demographics or past behavior, and then recommend videos or Facebook groups on that basis. Empirically, recommending radical viewpoints has been effective at increasing user engagement. But this has dangerous outcomes. First, it prevents us from hearing opposing beliefs, or alternative narratives, about the issues at stake. Second, the recommended content or groups often reinforce our beliefs, because they are full of, or represent the views of, individuals who share those beliefs. This constant confirmation bias strengthens our motivated ignorance to the point where we may not trust contrary viewpoints despite evidence supporting them. [4]

This “profiling” of users, which both hybrid and collaborative filtering algorithms perform to determine recommendations, also raises privacy concerns. Even if users consent to their data being collected, they may not know exactly how that data is used to inform their recommendations. While recommendations can be helpful for filtering out irrelevant information, they can also encroach on users’ autonomy, and privacy policies need to be clearer about how data will be used so users can consent appropriately. The impact of profiling is exacerbated by the fact that collaborative filtering not only uses a user’s own background to provide recommendations; it also lets “similar” users influence each other’s preferences. In fact, the name “collaborative filtering” refers to how similar users filter each other’s recommendations by virtue of their own behavior. This presents another privacy issue: a recommender system can make inferences about the behavior of users who have not consented to their data being used, simply because those users resemble other users on the platform in some other way.
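To see how such inference can work even without a user’s own behavioral data, consider this deliberately simplified, hypothetical sketch (the attributes, numbers, and variable names are invented for illustration):

```python
import numpy as np

# A new user has opted out of behavioral tracking, but the platform still
# knows a few coarse attributes, e.g. [age 18-24, region A, mobile device].
opted_out_user = np.array([1, 0, 1])

# Coarse attributes and average topic watch shares of consenting users.
profiles = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 0, 1],
])
avg_watch = np.array([
    [0.9, 0.1, 0.0],   # topic shares: [politics, fitness, cooking]
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
])

# Weight consenting users by attribute overlap with the opted-out user,
# then impute that user's likely preferences from these "lookalikes".
sims = profiles @ opted_out_user
imputed = (sims / sims.sum()) @ avg_watch
print(imputed)   # a preference estimate for someone who never shared behavioral data
```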

The problem of how to combat the “nudges” of recommender systems, and the echo chambers that may form as a result, is thorny. First, it raises free speech concerns (are there certain types of content we need to be more careful about recommending?) and issues of privacy (the online travel agency Orbitz showed Mac users higher-priced hotels than PC users; should that be allowed?), and any solution would face the challenge of regulating user-generated content on technology platforms. Second, recommender systems are not entirely bad. They can be helpful; in fact, one might think radicalization isn’t always harmful (if I’m exploring a healthy habit, the jump from watching 5K videos to 50-mile ultramarathons could be motivating!). Third, many solutions emphasize actively incorporating diversity of viewpoints (especially for content recommendations on a site like YouTube), but can’t too much viewpoint diversity be unnecessary or irrelevant? On what issues should we mandate diversity of viewpoints, and what does that say about our own values? Lastly, do private tech companies have a right to generate revenue for their shareholders? If so, it would be difficult to balance that right with the moral responsibility they owe their users. Ultimately, regardless of the philosophical conclusion we come to, all parties will still try to game the system to their advantage.

Regarding potential solutions, further research could explore how other algorithmic objectives (i.e., besides watch time or clicks) affect user enjoyment and revenue. Companies emphasize short-term, “addictive” happiness metrics (YouTube, for example, targets maximizing expected watch time and minimizing the time elapsed since a user last watched a certain type of video), but they might not consider retention metrics like visits per week, or trends in visits over time. Although the viability of this alternative is unclear, if optimizing for a healthier kind of user engagement led to revenue-maximizing long-term retention, it would align companies’ incentives with their moral obligations.
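As a hedged sketch of what such an alternative objective might look like, the toy ranking function below blends a predicted watch-time term with a hypothetical predicted probability that the user returns next week; the candidate videos, predictions, and weighting are invented placeholders, not any company’s actual objective.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    video_id: str
    pred_watch_minutes: float      # short-term engagement estimate
    pred_return_next_week: float   # hypothetical long-term retention estimate, in [0, 1]

def rank(candidates, retention_weight=0.7):
    """Rank by a blended objective instead of watch time alone.

    retention_weight = 0 reproduces pure watch-time ranking; larger values
    trade immediate engagement for predicted long-term retention.
    """
    def score(c):
        watch_term = min(c.pred_watch_minutes / 60.0, 1.0)   # normalize minutes to [0, 1]
        return (1 - retention_weight) * watch_term + retention_weight * c.pred_return_next_week
    return sorted(candidates, key=score, reverse=True)

candidates = [
    Candidate("conspiracy_deep_dive", pred_watch_minutes=38.0, pred_return_next_week=0.35),
    Candidate("local_news_explainer", pred_watch_minutes=12.0, pred_return_next_week=0.80),
]
print([c.video_id for c in rank(candidates)])                      # retention-weighted ranking
print([c.video_id for c in rank(candidates, retention_weight=0)])  # pure watch-time ranking
```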

Recommender systems should also prioritize diversity and quality of information. Recommendations leaning one way (e.g., videos of Democratic candidates) can be minimally balanced with the other viewpoint (e.g., a Republican campaign ad). This may make recommendations less relevant, but informational diversity outweighs the inconvenience. Furthermore, recommender systems should diversify the quality of their sources. Content from legitimate news or academic outlets could be promoted when a user’s recommended content currently comes from illegitimate sources. This is different from simply fact-checking sources, which may be too invasive of users’ privacy and content preferences; we would simply be recommending content from Fox News and the New York Times alongside Breitbart, rather than evaluating the accuracy of what the user consumes. We may be stepping into the free speech debate (would these policies give recommender systems too much agency to censor content and promote their own agenda?), but these systems should balance all kinds of information without some underlying preference. Similar ideas have taken root in the machine learning literature, where there is a growing body of research on increasing the diversity of recommended content. One interesting proposal is an “information neutral recommender system,” which uses a neutrality function to quantify how neutral content is from the current user’s perspective, and then maximizes that neutrality function while minimizing its traditional loss function. [5] The researchers found that their system did not sacrifice relevance, though it may not scale.
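In rough terms, and as a simplified reading of the general idea rather than the paper’s exact formulation, such a system trades its usual prediction loss off against a neutrality term:

```python
def information_neutral_objective(pred_loss, neutrality, eta=0.1):
    """Combined objective following the general shape described in [5],
    not the authors' exact formulation.

    pred_loss:  the recommender's usual prediction/ranking loss (lower is better).
    neutrality: a score measuring how independent recommendations are from a
                designated "viewpoint" feature (higher is more neutral).
    eta:        trade-off parameter; larger values enforce more neutrality
                at some cost in raw accuracy.
    """
    return pred_loss - eta * neutrality

# During training, the model would minimize this combined objective
# instead of pred_loss alone.
```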

Lastly, motivated ignorance reinforces, and provides new justifications for, the importance of diversity in the technology industry. Diversity is generally seen as an inherent good. But diversity is also valuable because individuals in non-privileged positions (like the minorities in “White Ignorance”) can see problems with the system, and with the privileged group, that individuals with privileged standpoints cannot easily access. A lack of diversity is part of the reason recommender systems have been problematically radicalizing individuals: the engineers developing and implementing these systems didn’t recognize the biases inherent in their algorithms and recommendations. Teams of engineers who each have access to different information from different social standpoints, representing diverse genders, races, geographies, classes, and so on, will be able to anticipate and detect more problems.

Recommender systems are currently most prominent in e-commerce and social media, and they are on track to be used in even higher-stakes scenarios. Algorithmic systems already inform sentencing and bail decisions in criminal courts and determine which applicants receive job interviews. Before we deploy recommender systems in more crucial arenas, we must better understand and refine the mechanisms they are founded on.
