THURSDAY, OCTOBER 22ND, 2020

Background

“In times of social turmoil, our impulse is often to pull back on free expression. We want the progress that comes from free expression, but not the tension.”

And so, Facebook won’t pull back.

In a lengthy address at Georgetown University last October, Facebook CEO Mark Zuckerberg spoke philosophically about the trade-offs between free speech and censorship, and in lofty language seemed to reject the “impulse” to censor. In hindsight, it seems that Zuckerberg was testing the waters ahead of the company’s even stronger Jan. 7 declaration that it will continue its policy of running political ads without screening for accuracy — a decision that shocked many, especially after rival Twitter’s move last fall to ban all political advertising from its platform.

Despite Facebook’s own Instagram introducing fact checks on photos, and Twitter eliminating political advertising altogether, Zuckerberg, while acknowledging that the issue of false advertising looms over the upcoming presidential election, is standing firm. While he believes letting all political ads run without review is the best way to support America’s democratic process, the deeper question of how to find an appropriate balance between free speech and censorship will continue to underscore every post, comment, advertisement, video and story on social media, and will undoubtedly shape politics in America and beyond for years to come. From Bay Area bloggers to presidential candidates, people around the country — even the world — are struggling with how to weigh the hunger for free expression against the importance of ensuring accuracy.

Facebook’s decision has left many grappling with the fundamental question: Who, if anyone, should regulate and take responsibility for suspect content?

“It’s definitely not the government’s job, and it’s also not the job of the person who posted it,” junior Rohin Ghosh said. “I don’t think it’s (the company’s) job either.”

Expert Perspective

Long before Facebook, and certainly long before Facebook had 2.37 billion global users, the issue of accuracy in media was already contentious. Decades ago, Congress and the courts addressed the importance of accuracy in ways that regulated broadcasters’ fairness and held news outlets liable for false and defamatory content. Today, some believe that same thinking should apply to social media.

Fortune Executive Editor Adam Lashinsky is among them. He believes that social media platforms act as publications and therefore ought to be legally treated as such, assuming responsibility for the content on their forums. Currently, social media companies are treated more like newsstands, which are not held responsible for the information found there.

“To me it’s braindead obvious that they’re publications, and they should be treated as such,” Lashinsky said. “That would put the burden of publishing and (assumption) of both First Amendment rights and responsibilities.”

Lashinsky acknowledged that if Facebook adhered to journalistic standards, it would face huge challenges.

“If Facebook was regulated the way Fortune or CBS was regulated, it would be devastating to Facebook’s business,” Lashinsky said. “But Facebook is devastating to my business. (Facebook) doesn’t want to be censors, but we need a better solution than what they’re doing.”

Whereas Lashinsky believes that in an ideal world all social media companies would be liable for the content they host, Saratoga City Councilman Rishi Kumar has no interest in placing any constraints on social media companies.

“Silicon Valley has done well not because it was tied down by regulations,” Kumar said. “I believe we have to let innovation happen. But social responsibility is something that needs to be part of the innovation culture.”

Even though it has been shown that prior to the 2016 presidential election a Russian agency flooded Facebook with ads, some of them false or misleading, on politically sensitive topics, U.S. officials don’t see a clear path to stopping a possible repeat this year.

According to U.S. Rep. Anna Eshoo, who represents Palo Alto, social media companies are currently not liable for the accuracy of most of the content posted on their platforms, protected by a small section of 1996 legislation that treats them as intermediaries, not publishers.

“As a legal matter, social media companies and other tech platforms are generally not liable for most of the speech of their users because of Section 230 of the Communications Decency Act, which many call the most important law protecting internet speech,” Eshoo said in an interview with The Campanile. “Currently, this immunity extends to political ads, recommendation algorithms and deciding which creators are monetized.”

Because of the protection granted by the Communications Decency Act, platforms are able to run advertisements containing potentially false information, arguing that they are simply hosting the ads.

To combat this, Eshoo said she has considered proposing changes to the Communications Decency Act but fears such changes would be detrimental to smaller companies.

“While I’m open to amending Section 230, I do worry that many existing proposals would be costly to startups and would chill speech more than curbing problematic content,” Eshoo said.

The power granted to companies by Section 230 showcases the inevitable trade-offs between freedom of speech and accuracy.

With freedom of speech comes the danger of misinformation; with a full commitment to accuracy comes the danger of silencing selected voices, even without just cause. The two ideals cannot fully coexist.

Zuckerberg wants no part of deciding which voices should be heard, saying Americans must hear them all. Yet Facebook recognizes the delicacy of the issue and has taken various steps to increase “transparency” of its political ads, including requiring disclosure of the sponsor of ads, which it says should halt foreign interference, and archiving all political ads.

Even still, some Facebook employees don’t think the company has gone far enough in monitoring political ads.

In October 2019, shortly after Zuckerberg’s Georgetown speech, hundreds of Facebook employees signed a letter addressed to Zuckerberg criticizing Facebook for allowing unregulated political advertising on the platform.

The employees told Zuckerberg they were concerned his stance represented a regression in online integrity.

“Misinformation affects us all,” wrote the employees. “Our current policies on fact checking people in political office, or those running for office, are a threat to what FB (Facebook) stands for.”

The employees felt that Zuckerberg’s decision exploits the trust people have put in Facebook.

“We strongly object to this policy as it stands,” they wrote. “It doesn’t protect voices, but instead allows politicians to weaponize our platform by targeting people who believe that content posted by political figures is trustworthy.”

Facebook employee Bruce Arthur, who works as an engineering manager at Facebook-owned WhatsApp and whose views are his own, not the company’s, believes Facebook should check the accuracy of political ads.

“I think it would be great if there was some way to indicate the truth of various ads,” Arthur said.

According to Arthur, Facebook is uniquely positioned to influence people’s minds through false advertising because, unlike broadcasters, it faces no liability for untruths.

“It is odd that Facebook is a news space where it reaches in fact more people than traditional broadcasting but isn’t covered by the same fact-checking customs,” Arthur said.

An employee of a Bay Area tech giant, who agreed to be interviewed only if his name and company were kept anonymous for fear of employer reprisal, agrees that social media is a field that ought to be governed in some way. Yet he, like just about everyone else, has no clear notion of how; the territory is uncharted.

“(Regulation) is something that every society has to decide on,” he said. “When you have these private corporations recreating some sort of public forum on their websites, you’re presented with a situation that we haven’t had in the past in human society.”

He suggests that regulation should correlate to the popularity of the platform.

“When these online forums become universal, it seems like maybe we need to (regulate them) because in order for the government to assert an individual’s right of expression they can’t say, ‘Oh, hey, you can go around and talk to people,’” he said. “It’s, ‘Oh, you need to be on this forum,’ if you can’t do it on this forum, then basically you can’t do it at all, and you’re excluded. It’s the ubiquity of the forum.”

Eshoo agrees that some sort of regulation is necessary when social media platforms blunder.

“The government certainly has a role in preventing these kinds of harms when companies fail to do so,” Eshoo said.

She thinks that in situations where, for instance, content is misleading or triggering for its audience, government intervention is necessary.

“I’m extremely troubled by some of the online content today,” Eshoo said. “The anti-vaxxer movement is worsening a public health crisis; vulnerable individuals are being radicalized by extremists and deep fakes are distorting our sense of what’s true and what the facts are.”

Indeed, most social media companies, Facebook included, now employ thousands of people to screen content for violence, hate, pornography and other offensive content outside of paid advertisements.

Eshoo said regulating social media is also made difficult by the First Amendment.

“Legislating in this area is particularly difficult,” Eshoo said, “because a law forcing companies to remove certain kinds of content could violate the First Amendment.”

Kumar, who hopes to unseat Eshoo and win her congressional spot, shares his opponent’s concern for the First Amendment and its role in letting all viewpoints be expressed.

“I am a firm believer in free speech,” Kumar said. “And I think that the internet ought to be a place where citizens can speak without fear.”

Still, Kumar believes social media companies should take some responsibility for their decisions.

“We need to create accountability in our system,” Kumar said. “Social media is a powerful tool of today and it behooves the providers of this platform to iron out any kinks to ensure the tool is not misused or manipulated.”

With so much uncertainty about what corporations or the government might do to enforce accuracy, much of the responsibility of discerning what’s accurate and what’s not falls to individuals and the choices they make — if they can — in what they view.

Arthur, the engineering manager at WhatsApp, believes some responsibility for falsehoods or censorship lies with the individual, not the company.

Arthur said potentially untrue WhatsApp forwards can be just as harmful as false political ads on Facebook, but he sees an important distinction between the two.

“(WhatsApp forwards) are messages targeted to a group,” Arthur said. “And, in general, people choose to join groups; sometimes they get added by other people and aren’t happy about it. But in general to be in the groups, they are signed up for the information. I mean, ads are targeting people who may or may not be interested in this stuff. It’s a little different.”

Junior and Tech for Good club president Jonny Pei agrees with Arthur on the important distinction between having the ability to choose to view content — or not.

“If you look on a social media like Reddit, you can follow a certain forum to look at the content, and if the content includes some stuff that you do not want to see, you can just click unsubscribe,” Pei said. “But it’s not the same as advertisements, because you can’t opt out of those.”

In part because of the difference between a person choosing to view a post or being forced to, Pei said social media posts should not be censored. From his perspective, only advertisements should be screened.

“Advertisements are seen by people who don’t want to see them,” Pei said. “Whereas content, you have to follow it. In that situation, I think only advertisements should be screened.”

In response to this type of reasoning, Facebook recently announced that it plans to allow users to choose to some extent what types of political advertisements they view. Unlike Google, however, it will not limit microtargeting of political ads, which campaigns use to reach narrow groups of voters.

“We have based ours on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public,” Rob Leathern, director of product management at Facebook, said in a recent corporate post.

Facebook’s evolving policies, like those of all social media companies, are a work in progress that will require flexibility.

“The line between censorship and free speech has to be a conversation,” the anonymous employee said. “Because no matter if we come up with any rule, if we said, ‘Hey, this is the rule for what you’re allowed to say,’ this rule is not going to reflect reality; it’s just a rule. It’s going to become detached from the way that people actually live, and there are cases where that rule is going to be justified, and there’s going to be cases where it’s an injustice.”

While big tech companies and politicians struggle to define the line between censorship and free speech, the ones feeling the fallout of their deliberations are often teenagers.

Student Perspective

Social media influencer Eve Donnelly hangs her head in frustration as she finds out yet another one of her viral Autonomous Sensory Meridian Response (ASMR) Instagram posts has been taken down. Resigned to the fact that restoring the post is unlikely, she prepares for the potential repercussions of a censored post, including lost income or followers.

Although social media censorship, or the lack thereof, has become infamous for influencing national issues such as elections, its impact can also be felt locally by Bay Area teenagers.

Donnelly, a former Paly student with over 300,000 followers on Instagram, is one of these teenagers and has had many of her ASMR videos censored by the platform for reasons she believes are unjust.

In her videos, Donnelly eats inedible household objects such as deodorant in order to produce noises that some find calming.

Donnelly said her posts are usually taken down automatically after some of her followers report her ASMR videos for containing threats of violence or encouraging suicide, content she says she never intended.

“If enough people report it, they’ll just take it down,” Donnelly said of Instagram. “It doesn’t even really matter what it is they don’t really like.”
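The removal mechanism Donnelly describes, in which a post disappears once it accumulates enough reports regardless of what it actually contains, can be illustrated with a short sketch. This is a hypothetical model for illustration only, not Instagram’s actual system; the `REPORT_THRESHOLD` value and all class and field names are invented.

```python
# Hypothetical sketch of report-threshold moderation (not Instagram's real system).
# A post is hidden once its report count crosses a fixed threshold,
# with no review of what the post actually contains.

REPORT_THRESHOLD = 100  # invented value for illustration

class Post:
    def __init__(self, author, content):
        self.author = author
        self.content = content
        self.reports = 0
        self.visible = True

    def report(self):
        """Register one user report; auto-hide once past the threshold."""
        self.reports += 1
        if self.reports >= REPORT_THRESHOLD:
            self.visible = False  # removed without human review

post = Post("asmr_creator", "eating deodorant for sound effects")
for _ in range(REPORT_THRESHOLD):
    post.report()

print(post.visible)  # False: enough reports alone took it down
```

The sketch makes Donnelly’s complaint concrete: because only the report count matters, a coordinated group of users can take down innocuous content, and no human review happens anywhere inside this loop.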

Instagram and Facebook did not respond to The Campanile’s request for comment.

Like Donnelly, the popular Bay Area Instagram meme page SiliconValleyProbs had a post deleted because of reports that it, too, contained violence and suicide.

SiliconValleyProbs’ administrator would only agree to an interview if she were not identified because of her desire to not be associated with her meme page.

“(The post) was believed to be involved with suicide or violence,” she said. “It was a post about the weather changing and dressing for cold weather.”

While the administrator of SiliconValleyProbs said that the meme was not explicitly violent, she acknowledged that the post contained an image of a gun, sparking a brouhaha which culminated in the post’s removal.

Similarly to how Donnelly and other content creators have struggled to get content restored, SiliconValleyProbs’ admin was forced to find another solution.

“We reposted the meme reported for suicide but we changed the template to not include a gun,” she said. “I think it’s fair to want to stay away from violence.”

SiliconValleyProbs’ admin has an ambivalent attitude towards censorship.

“I think censorship is generally not great,” she said. “But when it’s an attempt to make a safer overall app, it’s beneficial.”

Donnelly also conceded that some of her posts may have deserved censorship.

But she said the reports of her posts being too violent eventually led Instagram to delete not only her main account multiple times, but also her personal one, even though it contained only innocuous content.

“I have a personal account where I just post photos of me,” Donnelly said.

Her followers “reported that account so many times that it just got taken down, even though it’s literally just photos of me. There’s nothing, you know (violent or suicide-related), but enough people report(ed) it.”

Although Donnelly expressed frustration over Instagram’s removal of her posts, Pei recognizes that anything posted to a social media platform remains there at the company’s prerogative.

“I think at the end of the day, the company should be monitoring (content on their platforms) because they’re the ones that are setting the regulations for their own services and website,” Pei said. “And they’re the ones that are ultimately allowing this information to be posted online. So naturally, it’s the responsibility of the companies themselves.”

Pei said a certain level of screening of content is necessary, especially because many teenagers frequent social media websites.

“We are still pretty young, and since our brains aren’t fully developed, we can be easily influenced and even manipulated by social media such as fake news and influencers,” Pei said.

Donnelly went through several ordeals trying to get her posts reinstated on Instagram, ultimately without success. Although she was able to restore her accounts after deletion, she said, 90% of her content was gone.

“The only way that I was able to get (my accounts) up is by working with people from Facebook that I have connections to,” she said. “(Someone I know) works at Facebook and she files appeals for me. (Because she works there), they actually looked at them and they actually went through.”

Donnelly said she is especially frustrated with Instagram’s help department.

“You don’t get a say,” Donnelly said. “Reporting stuff to Instagram help is like screaming into a black hole.”

The Future

Given Facebook’s controversial decision to continue allowing potentially false advertisements on its platform, this issue does not appear likely to disappear from public discourse anytime soon.

However, with companies like Twitter making noticeable policy changes regarding content veracity, many hope false advertisements will eventually recede from social media, though they remain wary of the misinformation that persists online.

Although Facebook is maintaining its stance on not fact-checking political ads leading up to the November election, others continue to offer alternatives.

“I would want a process for identifying claims that look false,” Arthur said. “I don’t necessarily want to stop them, but I would want to flag them to say that this isn’t necessarily true or it’s flat-out false. It lets people say what they want but points users to further investigate.”

But Arthur acknowledges fact-checking ads is difficult because many claims are not black and white.

“Some political ads may be obviously false. You can say 2 + 2 is 5; that is a clearly false statement. But most of the ads are grandiose or misleading. ‘A vote for me will help the economy’: that’s not provably true or false,” Arthur said. “You wouldn’t want to stop people from saying that because you want voters to see that and choose for themselves.”
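Arthur’s flag-rather-than-block idea, including his point that most claims are not provably true or false, can be sketched in a few lines. This is a hypothetical illustration of the proposal, not any real Facebook feature; the verdict values and function name are invented.

```python
# Hypothetical sketch of "flag, don't block" ad labeling (not a real Facebook feature).
# Ads are never removed; fact-check verdicts are attached as visible labels instead.

def label_ad(ad_text, verdict):
    """Attach a fact-check label without suppressing the ad.

    verdict: "false", "disputed", or None (unreviewed or unverifiable).
    """
    labels = {
        "false": "Fact check: this claim is false.",
        "disputed": "Fact check: this claim is disputed; see independent sources.",
    }
    return {
        "text": ad_text,               # the ad still runs as written
        "visible": True,               # flagging never hides the ad
        "label": labels.get(verdict),  # None for unverifiable claims
    }

ad = label_ad("2 + 2 = 5", "false")
print(ad["visible"], ad["label"])  # True Fact check: this claim is false.
```

Under this model, a grandiose claim simply receives no label, while the ad itself always stays visible, matching Arthur’s preference for pointing users to investigate further rather than suppressing speech.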

Many people believe the problem doesn’t lie within the companies and neither does the solution. Instead, it is up to each user to distinguish truthful information themselves and not rely on Facebook, becoming in essence their own researchers and arbiters.

In a sense, Facebook agrees. The stance lets the company off the hook; it avoids being attacked for unfair policing by political groups on both the right and the left. The determination of accuracy would rest with the creators of ads and the people who see them.

“We think people should be able to see for themselves what politicians are saying,” Zuckerberg said at Georgetown as he explained his reluctance to screen political ads. “As a principle, in a democracy, I believe people should decide what is credible, not tech companies.”

Even as the debate continues in the United States, Facebook and other social media companies will face even more challenges overseas as they continue to grow exponentially and expand their global reach.

The challenges in less-developed countries, where residents are not exposed to sophisticated debates about accuracy, will be especially large, Arthur believes.

He is concerned that many of these non-U.S. residents may have difficulty understanding some of the content posted on social media because of cultural differences, and he fears that less affluent users may rely on social media as their only source of news, which could leave new users overwhelmed.

“Some (potential new users) are marginal with reading. Some of them are going to be accessing devices and, you know, $5 phones,” Arthur said. “And you know that that’s going to be their version of the Internet.”

Already, it’s clear that different societies view the internet in different ways. Jennifer Pan, an assistant professor of communication at Stanford University who studies censorship in the United States and China, notes that government regulation in China differs drastically from that in the United States.

“In China, (social media companies) receive directives from the government telling them what to censor,” Pan said. “China’s much more oriented around removing content related to discussions of offline, real-world protests.”

While in the United States companies still have most of the power regulating information on their websites, and in China the government has authoritarian control over social media, Japan has had yet another approach.

In Japan, there’s a history of limiting campaign spending on media; political ads paid for by candidates are banned on TV.

If countries around the world begin seeing the expected spike in users predicted by Arthur, then they will be faced with the decision of whether they will approach the issue like the United States, China or Japan — or yet another model of their own making.

No matter what the practices are in a country today, the social media and political landscapes are so fast-moving in the 21st century that nothing is more constant than change. And so no matter how defiantly Zuckerberg defends his company’s embrace of free expression today, tomorrow could be different.

“If Facebook saw that the public is susceptible to fake news, Facebook would definitely have to add some sort of censorship methods,” Pei said. “At this point I do not think Facebook will do much about it.”

From the everyday student to the president of the United States, people have come to rely on social media in their daily lives. However, all the time it saves and the efficiency it brings are mirrored in magnitude by the misinformation misleading people and plaguing democracy.
