The 2023 Social Media Summit@MIT Event Report
REWIRING SOCIAL MEDIA
How to make platforms better for everyone
2023 SOCIAL MEDIA SUMMIT@MIT
Panelists Offer Solutions to Social Media's Ills
To save platforms, we may need to fundamentally blow up the model and start over.
In little more than a decade, social media platforms have transformed our lives in unimagined ways. During that time, social media has become intertwined and synonymous with all media—a dominant force for information flow and a major channel for social interaction, political communication, and commercial marketing. This rise has fundamentally changed the world's information landscape from one with a comparatively small number of producers (such as news networks) to one where everyone is accessible, everyone is a content receiver, and everyone is a producer. It's also an environment where content can be influential, regardless of its source.
But along with the massive volume of information and access comes growing concern about how to tame it. While the "democratization" of content can be celebrated for its reach and global resources, social media has advanced far faster than most people realized or wanted. Misinformation has become so commonplace that gatekeepers and content moderators are required. But now—with the gradual removal of these gatekeepers—the misinformation and malicious actors that were starting to decline may be on the rise again and hard to stop.
Social media's core circuits need rewiring. Global citizens seem to be collectively waking up and asking questions: How did we get here? How do we dial back the speed and strength of this proliferation of misinformation, hate speech, addictive behavior, and privacy invasions? Whom do we trust, if there are no guardrails and gatekeepers? In short, can social media be saved? And if so, what will the new social media environment look like?
In response to these and other questions, the 2023 MIT IDE Social Media Summit (SMS@MIT) was held virtually on May 23, attracting an audience of more than 17,000 live viewers. Session topics included the role of AI, combating misinformation, and achieving the promise of social media.
WHAT'S INSIDE
Platforms at a Crossroads
Social Media Drills Deep
Is the Tide Turning for Misinformation?
Ethical AI: A Work in Progress
Tomorrow's Platforms Today
Platforms at a Crossroads
Social media still rules, but players are shape-shifting daily

Sinan Aral
Director, MIT IDE; David Austin Professor of Management and Professor of Information Technology and Marketing, MIT Sloan School of Management
"From public health to teen entertainment, from finance to education, social media platforms are shaping opinions and our understanding of everything," Sinan Aral, director of the MIT Initiative on the Digital Economy (IDE) and professor at the MIT Sloan School of Management, told attendees of the Social Media Summit. We may have heard that before, but now, Aral noted, the rules, dominant players, and business models are a moving target. The Twitter of a year ago isn't the Twitter of today. Meta, Google, TikTok, and Spotify, among others, are shape-shifting to accommodate more sophisticated algorithms, AI, ChatGPT, and decentralized communities. New ecosystems are forming to counter established networks.
The stakes have never been higher. They include fair elections, free speech, global currencies, and personal privacy. There's new urgency and public awareness, but so far, neither the tech industry nor the U.S. government has taken a strong leadership role, despite recent testimony from OpenAI and a statement from AI experts that something needs to be done.
Aral noted that in many ways, social media is at a crossroads. "Large platforms have gone private, policy changes are underway at Twitter...and new generative AI and deep learning—in development for many decades—burst onto the scene for public use last year," Aral said. "These changes," Aral added, "have an immense impact on text, images, video, and synthetic generation of information that influences how we think about our world."
Aral said that the IDE is conducting scientific research into social media, AI, decentralization, and Web3 to assess political, social, governmental, and business fallout, and how to stay ahead of the curve.
We can't put the genie back in the bottle, as Renée Richardson Gosline of MIT said during her AI ethics panel. But speakers throughout the day pointed to three key remedies to revive the ailing social media environment: responsible AI, interventions against misinformation, and new platform ecosystems.
Panelists disagreed about centralized versus decentralized platform design, about content moderation techniques, and about the role of laws and regulation. They said the race by platforms to implement ChatGPT, prediction algorithms, and advanced AI ahead of competitors could cause further harm. Yet in many ways, the day's speakers—coming from legal, behavioral science, technology, and economic perspectives—were each essentially saying, "We can and must do better." In this report of the day's discussions, we offer concrete action items and recommendations.
Users also have to be rewired to relinquish the comforts and crowds of their social platforms in favor of more discerning, open platforms, such as Mozilla, T2, and Mastodon. And if they do, tomorrow's social media may offer decentralized, shared, and safer spaces for public discourse.

We know what to do. It's time to act.
3 METHODS TO REVIVE SOCIAL MEDIA
RESPONSIBLE AI: Educate programmers and the public about AI bias, regulate platforms, and penalize offenders.
MISINFORMATION INTERVENTIONS: Disincentivize clickbait, keep content monitors on platforms, and add friction or nudges to encourage truth.
NEW PLATFORM ECOSYSTEMS: Revamp the business model to include decentralized, open platforms that protect users.
SOCIAL MEDIA DRILLS DEEP
Ogilvy's cognitive scientist Christopher Graves describes sophisticated new marketing tools to "decode" online behavior

Christopher Graves
President & Founder, Ogilvy Center for Behavioral Science

Christopher Graves, the keynote speaker at the 2023 Social Media Summit@MIT, offered a revealing look under the hood of social media marketing tools. He described the growing danger of what he calls "personality-based marketing," a rising—and somewhat disturbing—approach that matches individual personality traits to social media habits.
How Behavioral Science Can Decode Us
Graves knows more about digital marketing than most. The founder and president of the Ogilvy Center for Behavioral Science, part of Ogilvy Consulting, he pointed out that companies are now correlating users' online behavior—even photographs of their faces—with specific personality characteristics. Organizations then use these sophisticated results to build a data "genome" and deliver content aimed at encouraging specific actions.
People have a series of filters that either sharpen or distort their beliefs, preferences, and decisions, Graves told SMS@MIT attendees. "I can now test individuals at large scale to better 'decode' them," he added. "When you use this for good, it could possibly lead to rapprochement or resonance," Graves said. "But if you use it for ill will, it could be very malicious manipulation."
While marketing has long used personality-driven techniques to find, segment, and sell to customers, today's AI-based approaches are far more sophisticated and accurate. Also, where earlier approaches attempted to change or convert ideas, today's behavioral approaches focus on changing behavior. Graves said that personality inference can be done in six distinct ways, all of which can be accelerated with AI for pattern recognition: eye tracking, text parsing, images (especially photos of faces), music and sound, behavior on a mobile phone, and social engagement.
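To make the "text parsing" channel concrete, here is a minimal, hypothetical sketch of the kind of pattern recognition Graves describes: a toy classifier that maps a user's posts to a personality label. Nothing here comes from the summit; the training data, the labels, and the scikit-learn model choice are all illustrative assumptions, and real systems would use far larger datasets and proprietary models.

# Illustrative sketch only: a toy "text parsing" personality classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: users' posts paired with a personality label
# (e.g., high vs. low extraversion gathered from an opt-in survey).
posts = [
    "Huge party tonight, everyone's invited!",
    "Spent the weekend reading alone, perfect.",
    "Can't wait to meet the whole team in person.",
    "Please email me instead of calling.",
]
labels = ["high_extraversion", "low_extraversion",
          "high_extraversion", "low_extraversion"]

# Bag-of-words features plus a linear classifier stand in for the
# "AI pattern recognition" Graves describes.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# "Decode" a new user from their digital footprint.
print(model.predict(["Let's get everyone together this weekend!"])[0])

Even this toy version shows why the technique cuts both ways: the same pipeline that could match a supportive message to a personality type could just as easily be tuned to target someone's "hot buttons."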
Graves described three lenses that marketers use to gain insights into individuals: personality traits, cognitive style, and worldview/identity. These factors come into play when a social media platform—or a user or advertiser on a platform—wants to persuade others to take a specific action. People with different personalities, cognitive styles, and worldviews will respond better or worse to different kinds of messages.
"Decoding" Personality Traits
In the past, personality profiling was typically done with surveys and tests. But today's social media platforms can remotely "decode" people by observing their preferences and behavior online.
One approach claims to correlate personality types with eye movement; another purports to do the same through your use of language. Yet another technology claims to use photographs of faces to determine personality types or even, according to one researcher, a person's sexual orientation.

It's not clear whether this kind of tracking is being used now by the leading social media platforms, Graves said. But patent applications around personal inference have been filed by companies that include Facebook, Google, Microsoft, and Spotify. These companies, Graves added, "have great interest in looking at your digital footprint and digital breadcrumbs to figure out who you really are—not who you say you are."
Getting to Know You
These analytics could also be used to empower AI chatbots for good purposes. For example, in the case of vaccine resistance, an empathetic approach might hear out the person's reasons for concern and then match a pro-vaccine storyline to that person's personality type.

Another example: People could also opt in to use personality-inferring chatbots to help them quit smoking. "If you know how somebody's wired and respectfully approach them on their terms in ways that make sense to them, you probably have a slightly better chance at helping them," Graves said.
Despite the promise, personality-aware AI could make the spread of misinformation even more pernicious. "Imagine I'm trying to whip you up into anger, into further polarization," Graves said. "I will have better tools and a much more finely nuanced understanding of your hot buttons."
IS THE TIDE TURNING FOR MISINFORMATION?
Truth is making progress on social media. But with platforms regrouping and facing tough times, can the gains continue?

Tom Cunningham
Economist
Kate Klonick
Associate Professor of Law, St. John's University
David Rand (Moderator)
Professor, MIT Sloan, and Research Lead, Misinformation and Fake News, MIT IDE
While the threat of online misinformation remains serious, some experiments with intervention seem to be working and turning the tide. The proliferation of fake news peaked in 2016, and since then, growing concern has kept online misinformation in check. This panel, led by the IDE's David Rand, considered what should—and should not—be done to keep it there.

Greater awareness and vigilance are helping to keep social media truthful. Still, it seems unlikely that misinformation and fake news can be eliminated entirely, especially as platform companies face earnings pressure, panelists said. Debates now center on how social media platforms and others should combat the spread of misinformation and prevent backsliding.
Since the 2016 U.S. presidential election, social media platforms have experimented with interventions to flag or remove false information. This type of content moderation has been effective and represents the best way forward, said Tom Cunningham.

Cunningham, who previously worked as an economist and data scientist for Facebook and Twitter, believes that content moderation can reduce misinformation. But there are tradeoffs, he added, such as inadvertently taking down accurate information and infringing on free speech. Identifying misinformation isn't as straightforward as it might seem.
Rand agreed, noting that when fake news first began to spread on Facebook, he believed it would be easy to tackle. "There are things that are true and things that are false," he recalled. "If the platform wanted to do something about it, it could just get rid of the false stuff." However, Rand said that once he started working in the field, "I changed my mind. This is way more complicated than I'd appreciated."

Rand now sees online information as a "continuum" of truth and falsity, with most content somewhere in the middle. "That doesn't mean we're in a post-truth world," he said. "It's just that accuracy is hard to measure with certainty." Research shows that most social media users don't actually want to share misinformation, Rand explained. Instead, many are either confused or not paying enough attention.
Gold Standard
Professional fact-checking by dedicated employees is "still the gold standard," Rand said. But even fact-checkers don't always agree on what's true or false.

Cunningham suggested that decentralized social media platforms accelerate the spread of misinformation because they're "bad at assessing the quality of content...people tend to click whatever is shiny, whatever looks good, but not what is good." This can amplify sensational and superficial ideas. By contrast, centralized media—newspapers, radio, and television—have traditionally been more cautious, fearing they'll lose readership and ad revenue, or risk litigation. [Read more about decentralization in "Tomorrow's Platforms Today."]
Kate Klonick noted that Facebook and Twitter are mostly centralized, which may be good for creating what she called "one place for intervention." But over the next couple of years, she expects to see more decentralization, where "misinformation is breeding, and it's going to be very difficult to track."

Cunningham noted that while misinformation is still too prevalent, it has declined by a factor of five or 10 since peaking in 2016. The reason? Platforms, he said, have "added friction on politics, on sharing, on viral content, on certain types of reactions, on content from new producers, on content from fringe producers, and on content that looks like clickbait." While this has significantly decreased the rate of misinformation, Cunningham added, "Anything that looks a little bit like misinformation is going to get suppressed as well."
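To illustrate what such friction can look like in practice, here is a minimal, hypothetical sketch of a share flow that pauses to ask about accuracy before amplifying flagged content. It is not any platform's actual code; the function names, flag, and prompt wording are invented for this example.

# Hypothetical sketch of sharing "friction": flagged content triggers an
# extra confirmation step before it is amplified.
def publish(post, user):
    # Stand-in for a platform's real distribution pipeline.
    print(f"{user} shared: {post}")

def share(post, user, flagged_as_misinfo, confirm):
    """Reshare `post` unless the accuracy nudge makes the user reconsider.

    `confirm` is a callback that shows the user a prompt and returns True
    only if they choose to share anyway.
    """
    if flagged_as_misinfo:
        # The nudge: interrupt the one-click reshare with a question.
        if not confirm("This post may contain false information. Share anyway?"):
            return False  # the user reconsidered; nothing is amplified
    publish(post, user)
    return True

# Example: a user declines to share a flagged post.
share("Miracle cure found!", "@alice", flagged_as_misinfo=True,
      confirm=lambda prompt: False)

The design tradeoff Cunningham describes is visible even in this sketch: the extra step suppresses some true content along with the false, because the flag is only a guess about accuracy.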
Platforms Under Pressure
Cunningham also maintained that social media companies should strive to be platforms that distribute high-quality content, not just platforms that maximize retention, likes, and time spent. And yet, in the current cost-cutting environment, many social media companies have been laying off their ethicists and fact-checkers.
Klonick said that while "lots of work has been done to raise awareness about malicious actors and misinformation," the tradeoff may be increased public skepticism about what is or is not fake news.

She doubts that legal interventions will help to clarify online truth. "One of the very unpopular things that I have said over the last five years, but I think is ultimately true, is that in a free society, the law does not necessarily have a role to play in trying to determine truth," Klonick explained. Sometimes, she added, it's best to let the private sector be the gatekeeper. Platforms know how to create friction and slow things down. "Markets," Klonick said, "respond faster than the law to changing norms."
For these reasons and more, Klonick doesn't see legislative remedies coming to the rescue anytime soon. Policy proposals, she said, are "a disincentive, a stick...but they're not that helpful." What's more, she said, substantial bans on certain types of content, based on subjective judgments, are "not going to be in line with First Amendment principles at all."
The Cost of Regulation
In many countries, social media platforms already comply with content regulations, Cunningham noted. Much of that content would be removed anyway, he said: "They're going to take down terrorist stuff, child porn; they're going to take down hate speech."

Before the current layoffs, platforms were aggressively addressing the problem. They were spending 5% of their overall costs on data scientists, content raters, and different structures for moderating content, Cunningham said, even when they were not legally required to do so. The reason wasn't altruism or free speech, but pressure from advertisers.
Cunningham isn't confident that most platform companies "have a super-clear North Star; they're responsive to half a dozen different constituencies." Those include the media, governments, advertisers, users, employees, and investors. "All of those," Cunningham added, "have very strong opinions about what content should be on the platform."

Ideally, social media platforms will discover new ways to keep the good content while eliminating the bad. But, Cunningham warned, "I don't think that we should be crossing our fingers for that business model in the very short term."
ETHICAL AI: A WORK IN PROGRESS
Panelists explored tough challenges raised by new content-generating tools and urged awareness and regulation

Anthony Habayeb
Co-founder and CEO, Monitaur Inc.
Paris Marx
Newsletter and book author, and host of the Tech Won't Save Us podcast
Kalinda Ukanwa
Assistant Professor of Marketing, University of Southern California Marshall School of Business
Renée Richardson Gosline (Moderator)
Senior Lecturer, MIT Sloan School of Management, and Human/AI Interface Research Group Lead, MIT IDE
Like most tools, generative AI can be put to use for many purposes. It can create impressive new text and images from existing content. But it also lacks oversight and suffers from bias and hasty corporate rollouts.

The opportunities as well as the risks of new AI tools were discussed by the Responsible, Human-First AI panel at the Social Media Summit@MIT. The discussion, moderated by MIT's Renée Richardson Gosline, offered perspectives from a business leader charged with implementing ethical AI, an academic who is developing systems and training future programmers, and an outsider critiquing it all.

"Generative AI models have gained widespread adoption," Gosline said, citing reports that ChatGPT gained 100 million users in just two months after its November 2022 launch. "But what does this mean?"
AI in the Spotlight
AI isn't actually new, noted Kalinda Ukanwa. "Whether we're selecting a movie from Netflix or using a GPS app to find our way around a city, we've been interacting with AI all along," she said. "Yet people did not perceive that as AI. It was always this thing that was running in the background." That's changed with the advent of generative AI tools. ChatGPT is now part of the "kitchen-table conversation," said Anthony Habayeb.
Concerns over the potential harms of generative AI have extended to governments. Sam Altman, CEO of OpenAI, the company that created ChatGPT, recently told the U.S. Congress that AI needs more regulation. U.S. President Biden, speaking to recent graduates of the country's Air Force Academy, worried aloud about AI's ability to "overtake human thinking." And the European Union is considering a new legal framework, known as the AI Act, that would regulate AI development and use.
While Ukanwa doesn't expect "to get to a place where [AI] is not going to harm anybody," she asked, "How do we minimize that harm in an intelligent way?" Ukanwa's recent research paper, "Algorithmic Fairness and Service Failures: Why Firms Should Want Algorithmic Accountability," makes the case for third-party monitors of AI development who don't have skin in the game.
Paris Marx sees a darker motive behind both the spread of generative AI tools and the explosion of interest they've generated. "These technologies are inherently political, less in the sense of party politics and more in the term's broader meaning," he said. "When these technologies are developed and deployed, there are certain goals and desires."

Marx views the hype over ChatGPT as "getting the venture capitalists excited again," spurring investment and pushing Microsoft to challenge Google, a race in which responsible AI isn't a priority. And while platforms are publicly saying they're going to implement ChatGPT responsibly, "they're laying off and firing all of the AI ethicists," Marx said.
AI bias is baked in by its human programmers, panelists noted, whether consciously or not. In addition, generative AI tools rely on already existing content, and they have no way of filtering that content for bias or inaccuracies.
Responsible Solutions
Yet Habayeb sees solutions. "AI is built by people and it can be managed," he said. For instance, insurance company data may be biased against a certain cohort, causing higher rates for that group. "There might be other data that we can bring into an AI system to make the pricing outcome more equitable," Habayeb added.
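As a minimal sketch of what checking a pricing model for that kind of bias might look like (all numbers and the review threshold are invented for illustration; this is not Monitaur's product or any insurer's actual process), one could compare average predicted premiums across cohorts:

# Hypothetical sketch of a fairness check: compare average model-predicted
# premiums across cohorts to spot biased outcomes. Numbers are made up.
predicted_premiums = {
    "cohort_a": [1200, 1150, 1300, 1250],  # predicted annual premiums ($)
    "cohort_b": [1650, 1700, 1600, 1720],
}

averages = {c: sum(p) / len(p) for c, p in predicted_premiums.items()}
gap = max(averages.values()) - min(averages.values())
print(averages)

# A large gap flags the model for review, e.g., bringing in the "other
# data" Habayeb mentions to make outcomes more equitable.
if gap > 200:  # threshold is arbitrary for this sketch
    print(f"Premium gap of ${gap:.0f} across cohorts; review model inputs.")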
Gosline said that while we can't put the AI genie back in the bottle, we can use AI more critically and deliberately. Ukanwa sees the need for greater awareness, too. She recently participated in a town hall meeting on AI, sponsored by a Los Angeles radio station, that addressed generative AI and its potential impact on society. "Afterwards," Ukanwa recounted, "people said, 'I didn't know these were the implications of AI.'"
Marx clearly favors government oversight. He's skeptical of corporate motives for self-regulation that put profits over social good. "There are many ways these technologies can be rolled out in a negative way," he said. "Unless there's pressure from the government," consumer rights are not considered.

Could the profit-versus-ethics argument be false? Ukanwa thinks so. "There's this narrative that if I do something ethical, I'm going to lose profits, and I can't have both," she said. "But a lot of research is building a case that you can have both." There are good business reasons to develop fair algorithms, Ukanwa said, adding, "This is what shareholders need to understand." Habayeb agreed: "Proactive governance messaging is good for business."
TOMORROW'S PLATFORMS TODAY
Social media pioneers share their visions for principled, decentralized platforms

ACTION ITEMS FOR ALTERNATIVE PLATFORMS
Decentralize the ecosystem
Stand firm on core principles
Use open protocols
Be prepared to compromise
Mitchell Baker
CEO, Mozilla Corporation, and Chairwoman, Mozilla Foundation
Michael Masnick
Editor, Techdirt, and CEO, Copia Institute
Sarah Oh
Co-founder, T2
Renée DiResta (Moderator)
Technical Research Manager, Stanford Internet Observatory
Some experts believe that if social media is going to survive, it needs a major overhaul. Panelists looking at future models said that by learning from yesterday's mistakes, we can fix today's broken systems. In fact, promising alternatives are already emerging.

Renée DiResta moderated a provocative panel focused on decentralized web networks—also known as Web3—and the rise of social media platform alternatives. These emerging ecosystems include Mastodon, Mozilla, and Bluesky Social, all of which DiResta said are part of a federated universe, or "fediverse." (The SMS@MIT event was held prior to Meta's release of Threads, an app to rival Twitter, which has attracted millions of users.)
Most of the new platforms are decentralized by design. This means that instead of accessing the internet through services mediated by companies like Google, Twitter, and Facebook, users own and govern sections of the internet themselves. The new communities, DiResta said, can teach us about alternatives to the more centralized ecosystems that are now dominant.

For example, Mastodon describes itself as free and open-source software developed by a non-profit organization. It supports microblogging features similar to those of Twitter. Bluesky, a similar platform, is now being beta tested.
But decentralization isn't a prerequisite for alternative platforms. T2, a nascent online community represented on the panel, will be centralized. Still, T2 promises to provide the user privacy, security, and social dynamics that today's platforms lack.

Decentralized or not, these models promise changes for both users and developers disillusioned by social media's misinformation, vitriol, and marketing focus. However, these new online sites will also find that reaching the scale of current global communities will be a long, tough climb.
Mozilla Leads the Way
DiResta noted that Mozilla pioneered many of these concepts years ago. Mitchell Baker, who became Mozilla's CEO in 2020 and has been with the company since its origins, said that "decentralization has been a part of Mozilla's ethos from the very beginning." Founded in 1998, Mozilla was the open-source developer of the Firefox browser, and it has long used protocols that support open, one-to-many social engagement.

Mozilla's new effort, Mozilla.social, will be a social site that is not neutral to all content. "We don't want to be neutral to hate, we don't want to be neutral to racism, we don't want to be neutral to misogyny," Baker said. "Mozilla is about inclusion; it's in our identity and manifesto."

The big challenge for Mozilla and others is how to maintain their ideals while scaling up. "Operating at scale has its own set of problems," Baker said. "We're trying to be very, very intentional about [reaching] scale, while operating under a different set of principles."
In 2019, when conversations around content moderation were heating up, Mike Masnick wrote an essay, "Protocols, Not Platforms: A Technological Approach to Free Speech," for the Knight First Amendment Institute. At the time, people were asking how platform companies could achieve both content moderation and privacy. Concerns were also raised about trust in a market dominated by four large companies. Government regulation was a flashpoint then, as it is now.
Decentralized, Open Protocols
Masnick's premise was that building open protocols for email, news, chats, or searches—as in the early days of the internet—would ensure integrity. "Indeed," he wrote, "some platforms are leveraging existing open protocols but have built up walls around them, locking users in, rather than merely providing an interface." Masnick hoped that decentralization "would push the power and decision making out to the ends of the network, rather than keeping it centralized among a small group of very powerful companies."

Masnick also anticipated that decentralization would lead to "more innovative features, as well as better end-user control over their own data," adding, "it could help usher in a series of new business models that don't focus exclusively on monetizing user data."
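Mastodon, mentioned earlier on the panel, offers a simple way to see this protocols-over-platforms idea in action. As a minimal sketch (mastodon.social is just an example server, and the three-post limit is arbitrary), any client can read a server's public timeline through its documented HTTP API, with no SDK, API key, or gatekeeper's approval:

# Minimal sketch: reading a Mastodon server's public timeline through its
# open HTTP API. Any client can do this; no platform approval is required.
import json
import urllib.request

url = "https://mastodon.social/api/v1/timelines/public?limit=3"
with urllib.request.urlopen(url) as resp:
    for status in json.loads(resp.read()):
        # Each status includes the author's account and a canonical URL.
        print(status["account"]["acct"], "-", status["url"])

Because the interface is open, the power to build new clients, filters, and moderation tools sits at the edges of the network, exactly where Masnick argued it should.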
To many of today's new ventures, Masnick's words seem prescient. The pendulum appears to be swinging back toward decentralized models that better protect both user privacy and free speech.

Sarah Oh told attendees that she believes "the domination period" of the four large platforms "ended sometime last fall." Oh is part of what she called a "small and mighty team" building a Twitter alternative with the familiar look and feel of a centralized platform.
People want "to feel safer, less exhausted, have a higher quality product experience, really great UI, and have something that works fast," Oh said. That's why T2 won't "diverge too much from the experience [users] had the last few years," she explained. T2 is developing a short-text format that not only works well, but also upholds principles and values that mirror how users live and engage with information offline. "To build any kind of user-generated content platform today," Oh said, "trust and safety have to be fundamental pillars."
Ethics vs. Profits?
Moderator DiResta asked the panel about feasibility and tradeoffs: Can a social platform that upholds highly ethical values also be commercially successful?

Oh replied that there's "no playbook" for newcomers. In fact, she said, "there is pressure to build a business in a certain way, reach certain metrics, and raise the funding necessary to continue to grow your platform."
Masnick, while admitting "there is no silver bullet," remains optimistic. Yet he acknowledged the difficulty of building trust and safety online, especially as a platform grows. "You always think there are easy solutions," he said, echoing a comment made in another panel earlier in the day. "Just ban the bad people and help the good people. It turns out that it's not easy...there is no perfect answer, no right way to do this."

Baker knows first-hand how difficult content moderation can be, especially at scale. It involves deciding what your audience wants, while also maintaining firm guidelines. For Mozilla, she said, "inclusion is not negotiable. And hate crimes, misogyny, stalking, and death threats...that is not inclusive."

Building a product that responds to both the market and philosophical implications can be a tough balancing act. As a pioneer, Baker said, "you have to be willing to take the arrows from all sides."
THANK YOU
A special thanks to our 2023 Social Media Summit@MIT channel partners.
CONNECT WITH THE IDE
FIND US ON MEDIUM: Read articles on the impact of digital technology on business, the economy, and society.
BROWSE OUR WORK: Check out cutting-edge books, research briefs, special reports, and more.
LISTEN: Listen to the IDE podcast, The Digital Insider with Sinan Aral.
Content by Paula Klein and Peter Krass