The following essay was published in The Conversation, an online publication.
The 2016 U.S. election exposed the dangers of political misinformation on social media. Since then, social media companies have gained experience identifying and countering misinformation during election cycles, but the nature of the threat misinformation poses to society is shifting. The big lie about the 2020 presidential election has become a major theme, and immigrant communities are increasingly in the crosshairs of disinformation campaigns.
Social media companies have announced plans for dealing with misinformation in the upcoming elections, but they vary in their approaches and effectiveness. We asked experts on social media how prepared the major platforms are to do the job.
Assistant professor of communication, University of Arizona
Social media is an important source of news for most Americans, but it can also be a conduit for spreading misinformation. The plans for dealing with misinformation that the major social media platforms have announced are largely similar to their 2020 plans.
Users are not limited to a single platform, so one company's intervention may backfire and promote the cross-platform spread of misinformation. Social media platforms may need to work together to fight misinformation.
Facebook: C
During the 2016 presidential election campaign, Facebook was blamed for failing to fight misinformation. Engagement with misinformation on Facebook peaked at 160 million per month during that campaign but had fallen to 60 million per month by July of last year.
Recent evidence shows that Facebook still needs to do more to manage accounts that spread misinformation, flag misinformation posts, and reduce the reach of those accounts and posts. Of 59 accounts that fact-checkers flagged for spreading misinformation about COVID-19, 31 were still active at the end of the year. And Chinese state-run Facebook accounts have been spreading misinformation in English about the war in Ukraine.
Twitter: B
It's not clear whether the measures Twitter has put in place to combat misinformation are enough: shares of misinformation on the platform rose from 3 million per month during the 2016 presidential election to 5 million per month.
More than 300,000 tweets included links that fact checks had flagged as false, yet only a minority of them carried warnings or pop-up boxes. That so few displayed these warnings shows that Twitter's process for labeling misinformation is not automatic, uniform or efficient. Twitter has announced that it is redesigning its labels to make it easier to click through for more information.
TikTok: D
As the fastest-growing social media platform, TikTok has two notable characteristics: its predominantly young adult user base regularly consumes news on the platform, and its short videos often come with attention-grabbing images and sounds. These videos are more likely than text-based content to be remembered and to evoke emotion.
TikTok needs to make major improvements. In a September 2022 search of prominent news topics, 20% of the resulting user-generated videos contained misinformation, and videos with misinformation often appeared within the first five results. When neutral phrases were used as search terms, TikTok suggested more charged phrases. Reliable sources are presented alongside accounts that spread misinformation.
YouTube: B-
Of 170 videos that a fact-checking organization flagged as false over the course of a year, half were presented only with "Learn more" information panels and were not flagged as false. YouTube may have added the information panels by automatically detecting certain controversial topics rather than by checking the videos' content for misinformation.
YouTube also faces the challenge of reviewing uploaded videos for misinformation. In one experiment, a user with no viewing history watched a video that was marked as false, and the resulting recommendations were collected: three of the top 10 recommended channels had a mixed or low factual reporting score from Media Bias/Fact Check.
Professor of information systems, Michigan State University
Misinformation researchers are concerned about the prevalence of false narratives about the 2020 election. A team of misinformation experts studied the false claim that there was widespread fraud in the 2020 election, and The Washington Post found that most of the accounts those experts identified as the biggest spreaders of that misinformation were still active on all four social media platforms.
None of the platforms have addressed these issues effectively.
Facebook: B-
Meta exempts politicians from its fact-checking rules and does not ban political ads. Nor has Meta released any policies explaining how it protects against misinformation, leaving observers questioning its readiness to deal with misinformation during the upcoming elections.
A congressional candidate took advantage of microtargeting by running an ad campaign on Facebook that alleged a cover-up of "ballot harvesting" during the 2020 election. And with the company's content moderation resources allocated primarily to English-language media, minority communities are at risk of being targeted by misinformation in other languages.
Twitter: B
Twitter has made the most effort of the major platforms to reduce election-related misinformation. It uses prebunking, warning users about false claims before they encounter them, which is an effective way to reduce the spread of misinformation.
Even so, Twitter has been inconsistent in its enforcement. After an Arizona gubernatorial candidate asked her followers on the platform whether they would be willing to monitor the polls for voter fraud, civil rights advocates warned of potential intimidation at polling stations.
TikTok: D
Because TikTok does not allow political advertising, microtargeted election-related misinformation is not a problem on the platform. But many researchers have highlighted TikTok's lack of transparency, in contrast to platforms that have been more receptive to researchers' efforts. According to TikTok, questionable content will not be amplified through its recommendations.
Video and audio content is difficult to moderate. Once the platform takes down a misleading video, a manipulated and republished version can easily reappear. While Facebook uses artificial intelligence to detect misinformation at scale, TikTok has not said how it will address misinformation related to the election.
TikTok has come in for a lot of criticism for failing to curb election-related misinformation, and TikTok accounts have been used to impersonate prominent political figures.
YouTube: B-
YouTube terminates channels that receive three strikes within 90 days. This may be effective in controlling some misinformation, but content with the potential to be very harmful to the electoral process remains: the film "2000 Mules," which promotes debunked claims of fraud in the 2020 election, is available on the platform.
Observers have said that YouTube did not do enough to address election-related misinformation. In Brazil, Telegram has become a popular channel for spreading election misinformation, which suggests that the U.S., too, may be at risk of organized election-related misinformation.
Scott Shackelford, professor of business law and ethics, Indiana University
Many of the threats to American democracy stem from internal divisions fed by inequality, injustice and racism. From time to time, these fissures have been deliberately widened and deepened by foreign nations seeking to sow trouble in the U.S. The advent of cyberspace has accelerated the spread of such stories across national boundaries and platforms and has produced a proliferation of traditional and social media outlets willing to run with fake stories. At the moment, some social media networks are better equipped to deal with the threat than others.
Facebook: C
Despite moves to limit the spread of Chinese propaganda on Facebook, there seems to be bipartisan consensus that the social network has not learned the lessons of the 2016 election cycle. It still permits political ads, including one from Republican congressional candidate Joe Kent claiming "rampant voter fraud" in the 2020 election.
Though it has taken some steps toward transparency, Facebook has a long way to go to win back consumer confidence and uphold its social responsibility.
Twitter: B+
Twitter banned political ads on its platform before other leading social media firms did. The Indiana University Observatory on Social Media offers Hoaxy, a tool that allows real-time searches for a wide array of misinformation spreading on the platform.
The concern behind this grade lies in whether Twitter's efforts to fight misinformation will continue, given the potential acquisition of the company by Elon Musk.
TikTok
On the surface, the fact that TikTok does not allow political advertising boded well for its ability to root out misinformation, but that ability has proved very limited. Like the other social media networks, TikTok has also seen the problem of deepfakes.
Its efforts to stand up an election center, ban deepfakes and flag misinformation are welcome, but they come too late, with voting already underway in some states.
YouTube: C+
Despite new steps to crack down on it, misinformation continues to flow freely on YouTube, as seen in the "Stop the Steal"-style narrative surrounding the Brazilian election.
This article is republished from The Conversation. The original article is worth a read.