RESTORING SPEECH

The First Amendment meets the digital era, and so the battles begin.

PANELISTS: Renée Diresta, Research Manager, Stanford Internet Observatory; Jeff Kosseff, Assistant Professor, U.S. Naval Academy; Yaël Eisenstat, Future of Democracy Fellow, The Berggruen Institute, and Researcher-in-Residence, Betalab; Richard Stengel, former Under Secretary of State for Public Affairs and Diplomacy and former Managing Editor of TIME magazine

The First Amendment is a cornerstone of U.S. democracy, yet Sinan Aral believes there is a need to draw lines between free and harmful speech in the age of social media. At the center of the debate is Section 230 of the Communications Decency Act, which shields website platforms from liability for third-party content. Section 230 has roots well before the advent of the internet, in laws governing companies and individuals that distribute speech others have created, such as bookstores and newsstands. Jeff Kosseff, an assistant professor at the U.S. Naval Academy, has written extensively about the legislation. He notes that when online services such as Prodigy and CompuServe emerged in the early 1990s with new business models, Congress and the courts took notice and overhauled telecommunications laws. In 1996, Congress passed Section 230, which treats interactive computer services differently than publishers and exempts them from liability, or at least that’s how the courts have interpreted it. “Section 230 has created very broad protections,” says Kosseff, protections that generally favor giant platforms.

NO ABSOLUTE RIGHTS

Richard Stengel, former managing editor of TIME, isn’t certain that America needs a hate speech law as much as it needs a hate speech debate. “The First Amendment is not an absolute right to free speech,” said Stengel, who served as an under secretary of state for Public Affairs and Diplomacy.
“There are many examples of speech that is not protected, such as false advertising, violations of copyright, and child pornography.”

At the same time, Stengel says that Section 230 needs to be reformed to make platforms more liable for the content they publish. “Regulation has to incentivize platforms to take responsibility for illegal content just as TIME magazine was,” he said. “I would argue that they actually want to be regulated, because they don’t like being in that gray area of subjective decisions. They want to be able to say, ‘The government made me do this.’”

Yaël Eisenstat, a Future of Democracy Fellow at the Berggruen Institute and researcher-in-residence at Betalab, noted that society must understand the difference between speech and the way a social media company handles that speech. Eisenstat, who also served as a CIA officer and as global head of elections integrity operations at Facebook, believes that while content moderation is important, there are other things to consider. “The bigger issue is really about the tools that the platform companies are using, as well as the intentional business decisions that platforms make on what to enforce, and when to enforce their policies,” she said.

Accountability is also a pressing issue. Recommendation engines often lead users down a political path, yet users are often blamed for their actions without considering that they are vulnerable to online manipulation. “No accountability exists right now for this industry,” Eisenstat noted. “But if they are not acting as legal and good stewards of democracy, there should be mechanisms to hold them accountable.” Eisenstat says that if we want to promote a healthy democracy, we should argue and debate, but we shouldn’t be served totally different versions of paid political speech. “Courts are over-interpreting Section 230 to give Facebook a free pass,” she added.
SPEECH AND REACH

Should free speech be limited in its reach, especially if the content is potentially misleading or harmful? Renée Diresta, research manager at the Stanford Internet Observatory, first wrote about the distinction between “speech and reach” in 2018, after social media had reduced barriers first to the creation of content and then to its dissemination. As content proliferated, recommendation engines and other algorithmic curators began to filter content in ways that incentivized users to engage or remain on the platform. Now, deliberate user engagement and inadvertent algorithmic amplification pose new dilemmas. “There’s always been this division between your right to speak and your right to have a megaphone that reaches hundreds of millions of people,” said Diresta. “There’s value to being able to express your antivaccine views, but the platform doesn’t necessarily need to boost it.”

Diresta suggests designing interventions in the UX, such as graying out the share button until a user views all content or clicks on a URL. She also recommends that fact checkers perform their work earlier, in order to assess misleading or malicious content before it goes viral.

SOLUTIONS

Distinguish between speech and reach: the right to speech and the right to amplification of that speech.

Limit corporate lobbying; enforce stricter campaign finance rules.