Blog Posts

Facebook & YouTube – Misinformation

META / FACEBOOK 

1. Summary of Efforts:

To give Facebook the benefit of the doubt, we can first look at how Facebook evaluates itself on the issue of misinformation. On Meta’s website, they say they want to provide accurate information to their audience by:

“Working to fight the spread of false news in three key areas: disrupting economic incentives because most false news is financially motivated; building new products to curb the spread of false news; and helping people make more informed decisions when they encounter false news.”


Adam Mosseri, VP – Meta

According to Guy Rosen, Meta’s VP of Integrity, they are trying to comprehensively combat misinformation by addressing “several [misinformation] challenges, including fake accounts, deceptive behavior and misleading and harmful content.”

  1. Fake accounts

Facebook blocks millions of fake accounts each day, in addition to investigating and taking “down covert foreign and domestic influence operations that rely on fake accounts.” They have removed more than “100 networks of coordinated inauthentic behavior (CIB).” Facebook defines CIB as, “When we find domestic, non-government campaigns that include groups of accounts and pages seeking to mislead people about who they are and what they are doing while relying on fake accounts, we remove both inauthentic and authentic accounts, pages and groups directly involved in this activity.”

  2. Deceptive behavior

They have built teams to detect and disrupt the economic incentives behind misinformation because, according to their research, financial profit is one of the major drivers of its spread. They have also started using artificial intelligence alongside these teams to “detect fraud and enforce our policies against inauthentic spam accounts.”

  3. Misleading and harmful content

Even people acting in good faith can still post misinformation. To catch it, Facebook has built a network of more than “80 independent fact-checkers, who review content in more than 60 languages.” Once content is identified and rated false, Facebook reduces its distribution and adds a warning label, so fewer people see it. According to Meta, these warnings reduce the number of people who click through to the flagged content by 95%.
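To make this flag-and-demote step concrete, here is a minimal Python sketch of how a fact-checker’s rating might translate into a warning label and reduced distribution. All names and numbers here are hypothetical illustrations (the demotion factor loosely mirrors the reported 95% drop in click-throughs), not Facebook’s actual system.

    from dataclasses import dataclass, field

    # Hypothetical demotion factor, loosely mirroring the reported ~95%
    # drop in click-throughs on labeled content. Not Facebook's real value.
    DEMOTION_FACTOR = 0.05

    @dataclass
    class Post:
        post_id: str
        distribution_score: float  # used by the feed to rank the post
        labels: list = field(default_factory=list)

    def apply_fact_check(post: Post, rating: str) -> Post:
        """Attach a warning label and cut distribution for content rated false."""
        if rating == "false":
            post.labels.append("False information: checked by independent fact-checkers")
            post.distribution_score *= DEMOTION_FACTOR  # fewer people see it
        return post

    post = apply_fact_check(Post("p1", distribution_score=100.0), rating="false")
    print(post.labels, post.distribution_score)  # labeled, score cut from 100.0 to 5.0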

Additionally, in Facebook’s Transparency Center, they list the specific kinds of misinformation they remove:

  • Physical harm or violence
  • Harmful health misinformation
    • Misinformation about vaccines
    • Misinformation about health during public emergencies
    • Promoting or advocating harmful miracle cures for health issues
  • Voter or census interference
  • Manipulated media

The article by Niam Yaraghi embedded below argues that these companies “should take some responsibility over the content that is published on their platforms” and suggests “a set of strategies to help them with dealing with fake news and hate speech.”

2. Examples:

The best example of Facebook’s current enforcement is its use of warnings on misleading content and its removal of false content. Below is an example of what flagged and banned Facebook content looks like.

Screenshot from Forbes Article

According to a study on the effect of warning labels on the likelihood of sharing false news on Facebook, this tactic makes a difference in stopping the spread of misinformation. Its results show that the “flagging of false news on social media platforms like Facebook may indeed help the current efforts to combat sharing of deceiving information on social media.”

3. Evaluation:

When looking at how Meta, specifically Facebook, combats misinformation, we see some discrepancies between what they say they do and what they actually accomplish. Obviously, Facebook isn’t going to admit that they are doing a bad job at combating misinformation, but others are calling them out. An article by The Conversation states, “Leaked internal documents suggest Facebook – which recently renamed itself Meta – is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform.”

According to Forbes, “Facebook spreads fake news faster than any other social website.” However, that article is now two years old, so Facebook may have improved its detection and enforcement of misinformation since then.

Personally, when using Facebook, I have rarely noticed warnings or flags on posts and sites that could plausibly be misinformation. Even though Facebook claims they flag lots of posts, they may not be flagging as many as they claim.

Do the policies work?

The policy of flagging information does seem to have credible research to back it up. However, an article by the Washington Post suggests, “Fact checks actually work, even on Facebook. But not enough people see them.” This research matches my personal observation while using the Facebook app: I’m sure Facebook and Meta are flagging misinformation, but I rarely see those flags, which is a problem.

4. Suggestions:

As we have learned, misinformation is greatly decreased when people are taught what it is and how to detect it themselves. From my perspective, Facebook needs to work more actively on this kind of prevention and education.

Yes, they have teams built for checking and removing misinformation once it’s posted on their site, but they need to start focusing on the heart of the matter. The goal is to get to the source and teach people how to catch misinformation so that they aren’t adding to the spread.

Facebook should continue their efforts to fact-check, flag, and ban misinformation. In addition, Facebook should educate their users on what misinformation is and how they can spot it. This would bring everyone together to stop the spread of misinformation.


YOUTUBE

1. Summary of Efforts:

According to YouTube’s website, they combat misinformation by increasing the good and decreasing the bad. They address misinformation based on their “4 R’s” principle (a toy sketch of this routing follows the quote below):

  • Remove content that violates our policies
  • Reduce recommendations of borderline content
  • Raise up authoritative sources for news and information
  • Reward trusted creators

As YouTube puts it, “When people now search for news or information, they get results optimized for quality, not for how sensational the content might be.”
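Read as a moderation pipeline, the 4 R’s amount to a routing decision: classify a piece of content, then pick an action. Here is a minimal Python sketch of that routing; the category names and the mapping are my own assumptions based on YouTube’s public description of the principle, not YouTube’s actual code.

    # Toy routing of YouTube's "4 R's" (hypothetical category names;
    # based only on the public description of the principle).
    def route(category: str) -> str:
        """Map a content classification to one of the 4 R's."""
        actions = {
            "violative":     "remove",   # remove content that violates policy
            "borderline":    "reduce",   # reduce recommendations of borderline content
            "authoritative": "raise",    # raise up authoritative sources
            "trusted":       "reward",   # reward trusted creators
        }
        return actions.get(category, "no_action")

    for c in ["violative", "borderline", "authoritative", "trusted", "unknown"]:
        print(c, "->", route(c))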

YouTube states they have policies against misinformation and deceptive content, and that they are working on challenges such as:

  1. Catching new misinformation before it goes viral
  2. The cross-platform problem – addressing shares of misinformation
  3. Ramping up our misinformation efforts around the world

Additionally, YouTube has misinformation policies regarding elections, COVID-19, and vaccines.

Elections – “This includes certain types of misinformation that can cause real-world harm, like certain types of technically manipulated content, and content interfering with democratic processes.”

YouTube: Community Guidelines – Election Misinformation

COVID-19 – “YouTube doesn’t allow content that spreads medical misinformation that contradicts local health authorities’ (LHA) or the World Health Organization’s (WHO) medical information about COVID-19.”

Vaccines – “YouTube doesn’t allow content that poses a serious risk of egregious harm by spreading medical misinformation about currently administered vaccines that are approved and confirmed to be safe and effective by local health authorities and by the World Health Organization (WHO).”

YouTube: Community Guidelines – Vaccine Misinformation

Anything that goes against YouTube’s community guidelines will be taken off the platform. 

2. Examples:

YouTube’s support center gives some examples of misinformation that violates their policies:

  • Suppression of census participation
  • Manipulated content
  • Misattributed content
  • Harmful remedies, cures, and substances
  • Contradicting expert consensus on safe medical practices

3. Evaluation:

From personal experience, YouTube is filled with many different types of misinformation. Much of it is not necessarily harmful, even if you believe it or repost it. Sometimes, however, extremely false claims continue to circulate throughout the platform, for example, the claim that the earth is flat. While outrageous claims like this might seem harmless, “the scholarly literature finds that conspiratorial thinking often colonizes the mind.” What’s interesting is that YouTube has the authority to restrict and ban content that it ultimately deems misinformation.

YouTube curbs the spread of misinformation by changing its algorithm to highlight more credible and truthful content. For example, a YouTube creator named Mark Sargent posted videos claiming the earth is flat, and for a while his content was promoted and suggested to many YouTube users, which gained him many subscribers. YouTube then updated its algorithm to be more restrictive of misinformation and conspiracy theories. After the change, when Sargent posted about the earth being flat, his view counts dwindled. YouTube didn’t delete his content, but it did remove it from the list of suggested videos, which ultimately left the videos rarely viewed.

4. Suggestions:

Overall, like Facebook, YouTube says they are working hard to combat and contain misinformation on their platform by being strict with their community guidelines. However, The Guardian suggests that they aren’t actually doing enough, reporting that “YouTube is a major conduit of online disinformation and misinformation worldwide and is not doing enough to tackle the spread of falsehoods on its platform, according to a global coalition of fact checking organizations.”

YouTube’s policies rarely touch on flagging misinformation; mainly, they focus on either taking down content or having the algorithm hide it from most users. There is credible research suggesting that flagging misinformation helps stop its spread. YouTube needs to add, and scale up, the ability to find and flag misinformation.

Moving the needle from misinformation to information that is true and credibly sourced can be difficult, but I would suggest YouTube use their platform for good. As they said in their commitment statement, they want to “increase the good, and decrease the bad.” YouTube could easily generate more good by educating their audience about misinformation. They could create a YouTube series on their own channel asking different creators to share what misinformation is and how we need to take the responsibility into our own hands instead of leaving it all to the fact-checkers. Let us learn how to spot misinformation and report it.


CONCLUSION

Overall, both Facebook and YouTube still have a long way to go in the fight against misinformation. They are heading in the right direction; however, many more proactive steps need to be taken before misinformation is under control. Honestly, misinformation will probably never be completely under control, but we can get better as a society at learning how to read and catch misinformation before it spreads and goes viral. Platforms like Facebook and YouTube can help in this area by spreading information about how to spot and detect misinformation. Ultimately, stopping the spread creates a better world, one where misinformation doesn’t thrive.