Motus

How Does Misinformation Work?
A deep dive into the two ways that misinformation thrives.

By Alexander P | 9/20/20
               
        Misinformation is everywhere in today’s world. It takes many forms and appears in just about everything found online and offline. Topics ranging from Trump’s presidency to the COVID-19 pandemic are surrounded by misinformation, which can have extremely damaging results. With low media literacy rates among US adults and growing public distrust of scientific consensus, avenues for misinformation persist at both the individual and societal levels. Two factors, selective exposure and online media bots, provide these avenues. Selective exposure preserves misinformation at the individual level by reinforcing false ideologies and creating closed-minded individuals, while the immense quantity and adaptability of online bots extend misinformation’s reach at the societal level.


        Selective exposure is a major factor in how misinformation persists at the individual level because it blocks out alternative, correct information and pushes people further into their false beliefs when they encounter differing viewpoints. Selective exposure is defined as “the act of choosing to read or view belief-consistent information over belief-inconsistent information (when given the choice)” (Scheufele and Krause 2019). If information does not align with an individual’s worldview, they are more likely to deem the source non-credible, regardless of the truth backing it. The individual may also become more deeply invested in their flawed values and, in the future, less receptive to alternative viewpoints that oppose their own. This process is known as the “backfire effect”: “attempts to correct misinformation about both political and scientific topics among individuals with the most strongly held beliefs can instead backfire, entrenching them further in their false views” (Nyhan and Reifler 2010). For example, in one experiment, people with differing political views were presented with facts about climate change and its effects on health. On average, Democrats became more supportive of climate change legislation than they had been before hearing this information, while Republicans became less supportive than they initially were (Hart and Nisbet 2011). Selective exposure fuels the backfire effect because individuals reinforce their viewpoints with evidence, whether true or not, making them more defensive and dismissive of differing ideas. This defensiveness, a product of selective exposure and the resulting backfire effect, has had detrimental consequences in America, particularly in politics. Growing partisan polarization has been partly caused and worsened by selective exposure, as “strong partisans who are especially knowledgeable about politics... gravitate more toward news sources that mirror their preexisting views” (Iyengar and Hahn 2009).
 


        At the societal level, the sheer number of social media bots worsens the proliferation of misinformation, and countermeasures taken by media platforms have proven ineffective, allowing the spread to continue. Social media bots are “automated accounts impersonating humans” that can increase “the spread of fake news by orders of magnitude” (Lazer et al. 2018). They are widespread across most media platforms: a recent study estimates that “between 9 and 15% of active Twitter accounts are bots [and] Facebook… [may have] as many as 60 million bots” (Lazer et al. 2018). Their prevalence, combined with the speed and accessibility of information on social media, makes bots extremely influential and dangerous, as seen in the 2016 election. Before the election, Russian users deployed bots on platforms such as Facebook and Twitter that constantly spread pro-Trump sentiment. One study estimates that these bots were responsible for an increase of “3.23 percentage points of the actual vote for Trump” (Smialek 2018), showing how effective bots can be at spreading information. Subtracting that boost would have widened Hillary Clinton’s popular vote margin from its actual +2.1% to +5.33%, in which case she would very likely have won the Electoral College. Curbing bots poses a tremendous challenge for social media platforms: bots constantly evolve, as measures taken by media sites to eliminate them are eventually circumvented by the people who program them (Lazer et al. 2018). It is therefore extremely unlikely that bots and their influence will be eliminated in the near future. Though bots spread inaccurate and accurate information equally (Vosoughi et al. 2018), their adaptability and widespread nature make them a perfect reservoir for fueling the spread of misinformation globally.
 


        The perpetuation of misinformation worsens when selective exposure and online bots interconnect and fuel each other. Bots can supply information that aligns with an individual’s preconceived, incorrect view of a topic, providing the material on which selective exposure is built. In turn, selective exposure can dictate what information bot programmers choose to spread across the internet, shaping the views of those who encounter the bots. While selective exposure and online bots worsen the spread of misinformation, there are preventive measures the country as a whole can take to slow it. One such measure is better education to improve overall information literacy. For example, part of the reason pseudoscience (a form of misinformation) persists is that the general public poorly understands how the scientific method works. If the public better understood how proper science is done, pseudoscience would not have as large a foothold in our society as it currently does (Sagan 15). This idea is not limited to stopping pseudoscience: improving education on how information is gathered and presented strengthens literacy on any topic. Overall, though the persistence of misinformation, driven by factors such as selective exposure and online bots, is extremely harmful to our society, it can be combated by increasing education to create more informed, responsible, media-literate citizens.


Works Cited
Hart, P. S., and E. C. Nisbet. “Boomerang Effects in Science Communication: How Motivated Reasoning and Identity Cues Amplify Opinion Polarization About Climate Mitigation Policies.” Communication Research, 2011, doi:10.1177/0093650211416646.
Iyengar, Shanto, and Kyu S. Hahn. “Red Media, Blue Media: Evidence of Ideological Selectivity in Media Use.” Journal of Communication, vol. 59, 2009, pp. 19–39.
Lazer, David M. J., et al. “The Science of Fake News: Addressing Fake News Requires a Multidisciplinary Effort.” Science, American Association for the Advancement of Science, 9 Mar. 2018, experts.syr.edu/en/publications/the-science-of-fake-news-addressing-fake-news-requires-a-multidis.
Nyhan, Brendan, and Jason Reifler. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior, vol. 32, 2010, pp. 303–330.
Sagan, Carl. The Demon-Haunted World. Ballantine Books, 1997.
Scheufele, Dietram A., and Nicole M. Krause. “Science Audiences, Misinformation, and Fake News.” PNAS, National Academy of Sciences, 16 Apr. 2019, www.pnas.org/content/116/16/7662.
Smialek, Jeanna. “Twitter Bots Boosted Donald Trump’s Votes by 3.23%: Study.” Time, 21 May 2018, time.com/5286013/twitter-bots-donald-trump-votes/.
Vosoughi, Soroush, et al. “The Spread of True and False News Online.” Science, American Association for the Advancement of Science, 9 Mar. 2018, science.sciencemag.org/content/359/6380/1146.
