Story by Laura Fields and Chelsea Nguyen Fleige
For eight months now, billions of people have quarantined in their homes and lost their sense of normalcy – their screens are now their lifelines.
According to a July 19, 2020 NPR article, dystopian stories related to the coronavirus pandemic, combined with stay-at-home orders, have fueled the binging of bad news.
A massive uptick in time spent online even spawned a new word for an overload of social media news: doomscrolling.
News consumption increased enormously during the coronavirus pandemic, according to a March 2020 study by Andreu Casero-Ripolles, head of the Department of Communication Sciences at Jaume I University in Castelló, Spain.
The study stated that 32% more adult Americans have regularly accessed news outlets during quarantine compared to before the pandemic.
“The most important discovery is that the COVID-19 has served those citizens furthest away and less interested in the news to reconnect with the information about public affairs,” said Casero-Ripolles in the report.
The report revealed that the largest increase in news consumption, and the most positive assessment of the media's pandemic coverage, occurred among those who weren't regular news consumers, such as young people and those with less education.
This seems like good news — more Americans are looking for ways to stay informed. But as more people spend more time online, scrolling for information, there’s a bigger risk lurking.
In 2016, an expansive Russian campaign influenced the feeds of more than 100 million Americans by flooding them with fake news in an attempt to sway public opinion of the presidential candidates, according to a May 27 NPR article.
Four years later, experts are concerned that social media platforms have still not done enough to safeguard against fake news.
What is fake news?
Definitions for fake news can run the gamut, but most can be split into two categories, "disinformation" and "misinformation," according to San Jose State communication studies professor Carol-Lynn Perez in an Oct. 3, 2019, Spartan Daily article.
According to a Merriam-Webster definition, disinformation is defined as “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.” It defines misinformation as simply “incorrect or misleading information.”
The Internet Research Agency, a Russian company, created thousands of social media accounts to pose as Americans supporting radical political groups.
Examples of fake news heavily circulated in 2016 include the infamous "Pizzagate" story, which claimed the Clintons ran a child-sex-slave ring out of a pizza parlor and whose origins are still unknown, as reported in a Nov. 16, 2017 Rolling Stone article.
The same Rolling Stone article suggested that planting false information on anonymous chat boards leads real people to pick it up and add their "human touch" to the story, making it even more believable.
Wild stories such as this might be fun to share with friends and family online occasionally, but most Americans think that fake news is creating more confusion.
As of March 2019, 67% of U.S. adults surveyed by Statista said that made-up news and information caused a great deal of confusion about the basic facts of current affairs, and a further 24% said it caused some confusion.
Your social media friends and followers can unknowingly filter misinformation into your timelines.
The likelihood that the average person has shared fake news is shockingly high. According to a Dec. 15, 2016, Pew Research Center article, 23% of Americans surveyed said they had shared fake news online, whether they were aware of it or not.
What have social media companies done to change since 2016?
“[Social media companies] are continuing to evolve based on what happened in 2016 when there was a lot of misinformation put on social media platforms by foreign governments,” SJSU media law professor Larry Sokoloff said. “Because they’re private companies, they can do what they want but there has been an attempt to make them more responsible.”
In an attempt to deter the spread of “synthetic or misleading media” on its site, Twitter introduced a new policy of labeling such information accordingly in February. In a Feb. 4 blog post, the company said it would apply a label to a suspicious tweet, show a warning to people before they retweet or like it, reduce the visibility of the tweet and/or prevent it from being recommended and provide additional explanations through a linked page.
As the pandemic progressed, Twitter began labeling tweets containing misleading information about COVID-19 in March, according to a May 11 company blog post.
These actions would turn out to be more than necessary. On May 20, researchers at Carnegie Mellon University announced more than 90 million tweets from January to May were likely sent by accounts that behave more like computerized robots than humans.
Researchers collected more than 200 million tweets between January and May discussing COVID-19. Of the top 50 influential retweeters, 82% were bots; of the top 1,000 retweeters, 62% were bots, the researchers revealed in a university blog post.
Additionally, in May, Twitter kicked its fact-checking up another notch when it labeled a May 26 Twitter thread from President Trump about mail-in ballots as potentially misleading and offered a link with more accurate information.
The president promptly lashed out against the company, tweeting, “Twitter is now interfering in the 2020 Presidential Election,” and “Twitter is completely stifling FREE SPEECH, and I, as President, will not allow it to happen!”
The president isn’t the only one challenging the legitimacy of Twitter’s fact-checking.
Some experts are questioning the accuracy of such labels and whether the labels actually fulfill their purpose of stopping disinformation.
"These kinds of labels have a very limited, marginal impact on influencing the opinion of the people who consume that content," said Dipayan Ghosh, co-director of the Harvard Kennedy School's digital platforms and democracy project, in an Oct. 18 ABC News article.
According to an Oct. 6 BBC article, “Shortly after Twitter put a warning label on his posts for the first time in May, Trump signed an executive order to repeal Section 230.”
Section 230, part of Title 47 of the United States Code, provides protection for private blocking and screening of offensive material under a “Good Samaritan” principle. In other words, companies use their best judgment.
With social media companies’ ethics being in the national spotlight for so long, viewpoints about the ramifications are starting to polarize as well.
In the book, “Anti-Social Media: How Facebook Disconnects Us and Undermines Democracy,” Siva Vaidhyanathan, a University of Virginia professor and author, analyzes the idea that Facebook is a threat to democracy.
Vaidhyanathan said in his book, “One of the keys to the success of ‘fake news’ is that often these pieces were designed expertly to play both to the established habits of rapid sharers on Facebook content and to Facebook’s EdgeRank algorithm.” EdgeRank was an algorithm Facebook used to customize the newsfeed based on posts’ relevance to users and users’ reactions to posts.
Speaking specifically about Facebook’s algorithm, Mark Zuckerberg posted on Jan. 11, 2018 on his Facebook account that the algorithm would show more friends and family posts rather than sponsored ads and news articles of mysterious origin.
In a Sept. 3 Facebook post, Mark Zuckerberg promised to work with state election officials to take down false voting information and restrict messages that resemble spam on the Messenger app.
A June 30 Facebook blog post proclaimed Facebook would prioritize original news reporting saying, “When multiple stories are shared by publishers and are available in a person’s News Feed, we will boost the more original one which will help it get more distribution.”
The blog post went on to say that standards for original reporting are complex, so Facebook is continuing to refine its methods with publishers and academics.
Facebook is still facing serious criticism, even from governmental regulatory bodies.
During a July 2020 congressional hearing, David Cicilline, a Democratic congressman from Rhode Island and U.S. House antitrust chair, berated Zuckerberg for the five hours it took Facebook to remove a Breitbart News video falsely claiming hydroxychloroquine was a cure for COVID-19.
Censorship
In a Senate Commerce meeting on Oct. 28, Sen. Ted Cruz, a Republican from Texas, pointed out Twitter’s censorship of political ideas was “the most egregious” out of Twitter, Facebook and Google.
“So you're testifying to this committee right now saying that Twitter, when it silences people, when it blocks people, when it censors political speech, that has no impact on elections?” Cruz said.
In response to Cruz’s accusations, Twitter CEO Jack Dorsey said, “People have a choice of other communication channels.”
“Not if they don’t hear information,” interjected Cruz.
The hearing addressed accusations that Twitter censored a New York Post article about Democratic candidate Joe Biden’s foreign dealings by blocking the New York Post’s Twitter account from posting the story and preventing users from linking to it from their own accounts.
“The enforcement action, however, of blocking URLs in both tweets and in direct messages, we believe is incorrect. And we changed it,” Dorsey said in response to Cruz’s questioning.
“[It was changed] today. The New York Post is still blocked from tweeting, two weeks later,” Cruz shot back.
Divided in an election year
Despite the controversy over fake news influencing elections, social media content is protected under the First Amendment as free speech. The exception is speech that is “inciteful”; Merriam-Webster defines the word as “to call to action, to move or stir up.”
Despite this protection, right-wing Americans claim that social media platforms have a history of banning conservative viewpoints and continue to do so.
According to an Aug. 19 Pew Research Center article, about 90% of Republicans polled said they think social media sites censor political viewpoints and are “accusing tech firms of political bias and stifling open discussion.”
The same article states that Democrats are more likely to approve of social media companies labeling posts from elected officials and ordinary users as “misleading” or “inaccurate.” But overall, most Americans, according to the article, are skeptical about the flagged posts.
Building partisan mistrust in social media platforms may be a part of disinformation strategies.
In a 2018 public presentation, Facebook’s internal researchers wrote on a slide, "Our algorithms exploit the human brain's attraction to divisiveness.”
A 2019 Stanford study found that Facebook use correlated with how polarized a person is and how open they are to understanding the views or ideas of the opposing party.
For some people, doomscrolling over these last few months has only confirmed their fears of a repeat of the 2016 debacle and a worsening of the social media zeitgeist.
“I thought that as the misinformation and the propaganda would get worse, I thought our sophistication would sort of grow from it commensurately,” said Matthew Record, SJSU political science professor.
He said the media consumption habits of his cohort are troubling.
“[White nationalist rally in] Charlottesville, those are people my age [on] 8Chan. All the people active on that, those are people my age,” Record said. “Now I no longer have faith in my generation in terms of making systematic changes.”