The 2022 Social Media Summit@MIT Event Report

2022 SOCIAL MEDIA SUMMIT @ MIT

WHAT’S NEXT FOR

SOCIAL MEDIA?

5 GLOBAL TRENDS TO WATCH IN VOLATILE TIMES


SOCIAL MEDIA SUMMIT @ MIT 2022

A MINEFIELD OF

ONLINE VOLATILITY

The second annual Social Media Summit@MIT focused on the information

war in Ukraine, fake news, the need for greater algorithmic

transparency, and the importance of ethics in artificial intelligence.

“It’s extremely important to keep the conversation going,

and that’s exactly what we intend to do.”

SINAN ARAL

The Social Media Summit@MIT (SMS),

hosted by MIT’s Initiative on the Digital

Economy (IDE), was launched last year in the

midst of unprecedented upheavals sparked

and organized on social media platforms—

including the January 6, 2021, storming of the

U.S. Capitol. And the turmoil didn’t stop there.

In February 2022, former President Donald Trump

launched his own social media platform,

Truth Social, after he was permanently

banned from Twitter and suspended from

Facebook for two years. In another shakeup,

The Wall Street Journal published “The

Facebook Files,” a damning, multipart

report based on more than 10,000 documents

leaked by a company whistleblower.

We’re less than halfway into 2022, yet it’s already

shaping up to be another pivotal year for

social media, as Russia wages its brutal

invasion of Ukraine with both real bombs

and fake news.

All of these developments resonated during

the 2022 SMS event—in particular, in a

conversation between IDE Director Sinan Aral

and the Facebook whistleblower, Frances

Haugen (see the Fireside Chat section). Calls for more

transparency from platform companies and

algorithm designers were dominant throughout

the day. Since the event, Twitter has become a

takeover target for Elon Musk.

Discussions focused on the pressing concern

of misinformation amplified by social media,

and on how to achieve algorithmic

transparency and ethical AI.

Panels were led by top MIT researchers—

David Rand, Dean Eckles, and Renée

Richardson Gosline—who, according to

Aral, are engaged in “groundbreaking

research that is making meaningful inroads

into solving the social media crisis.”

The moderators were joined by a diverse

group of academics, social media pros, a

state senator, and others, providing a rich

day of contrasting views and opinions. One

obvious trend is that social media’s clout is

growing, and so is its scrutiny. “It’s extremely

important to keep the conversation going,”

Aral told SMS attendees, “and that’s exactly

what we intend to do.”

OVERVIEW

FIRESIDE CHAT WITH FRANCES HAUGEN

MISINFORMATION AND FAKE NEWS

ALGORITHMIC TRANSPARENCY

THE INFORMATION WAR IN UKRAINE

RESPONSIBLE AI

FINAL THOUGHTS & THANKS


On March 31, 2022, MIT’s Initiative on the

Digital Economy (IDE) hosted the second

annual Social Media Summit (SMS@MIT).

The online event, which attracted more

than 12,000 virtual attendees, convened

technology and policy experts to examine

the growing impact of social media on our

democracies, our economies, and our public

health—with a vision to craft meaningful

solutions to the growing social media crisis.

5 GLOBAL TRENDS

1. Social media’s impact on child and adult psychology

2. The threat of online misinformation and the need for systemic solutions

3. The importance of algorithmic transparency—and how to achieve it

4. Expansion of social media’s impact on geopolitics and war

5. The formalization of AI ethics standards and training



FIRESIDE CHAT SOCIAL MEDIA SUMMIT @ MIT 2022

FRANCES HAUGEN

SPEAKS OUT

The Facebook whistleblower says the company must acknowledge its

tremendous impact and become more transparent.

Frances Haugen is a former

Facebook algorithmic product

manager who today is better

known as the company’s chief

whistleblower. She joined Sinan

Aral to discuss Facebook’s

impact on society, how the

company has resisted efforts to

analyze its algorithms, and what

actions it can take in the future.

Haugen, who earned an MBA

from Harvard Business School

and worked as an electrical

engineer and data scientist

before joining Facebook in

2019, said no one intends to

be a whistleblower. “Living with

a secret is really, really hard;

especially when you think that

secret affects people’s health,

their lives, and their well-being,”

she said.

That’s why Haugen said she

left the company in 2021 and

provided more than 10,000

internal documents to The

Wall Street Journal. These

documents became the basis

for the newspaper’s series, “The

Facebook Files.” As the Journal

wrote, “Facebook knows, in acute

detail, that its platform is riddled

Frances

Haugen

provided

more than

10,000

internal

Facebook/

Meta

documents

to the

press.

with flaws that cause harm,

often in ways only the company

fully understands.”

One of Haugen’s biggest

criticisms of Facebook, which

was renamed Meta in 2021,

concerns the way the company

has, in her opinion, conflated

the issues of censorship and

algorithmic reach. Most social

media critics say that algorithms

promote dangerous and extreme

content such as hate speech,

vaccine misinformation, and

poor body image messaging to

young people.

Yet Facebook has been quick

to frame the issue as one of

censorship and free speech—not

its proprietary algorithms, Haugen

said. For example, the remit of the

company’s Oversight Board, of

which Haugen was a member, is

to censor those who don’t comply

with content policies. This charter,

Haugen noted, is deliberately

narrow. “Facebook declined to ever

let us discuss the non-content-based

ways we could be dealing

with safety problems”–such as

building in some “pause” time

before someone can share a link.

“It sounds like a really small thing,

but it’s the difference of 10% or

15% of [shared] misinformation,”

she said. “That little bit of friction,

giving people a chance to breathe

before they share, has the same

impact as the entire third-party

fact-checking system.”

Aral agreed that there is a gap

“between free speech and

algorithmic reach,” and that

fixing one doesn’t infringe on the

other. He pointed to MIT research

showing that when social-media

users pause long enough to think

critically, they’re less likely to

spread fake news. “It’s a cognitive,

technical solution that has

nothing to do with [free] speech,”

Aral said.
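
To make the point about “friction” concrete, here is a minimal sketch of a pause-before-sharing step. It is illustrative only: the function names, the prompt wording, and the five-second delay are assumptions for this example, not any platform’s actual code or API.

    # A hypothetical "pause before sharing" interstitial, sketched in Python.
    # All names and the delay length are invented for illustration; the talk
    # only says that a brief pause before resharing cuts misinformation sharing.
    import time

    PAUSE_SECONDS = 5  # assumed delay; the talk does not specify a duration

    def share_link(url: str, confirm) -> bool:
        """Insert a short pause, then ask the user to confirm the reshare.

        `confirm` is a callback that displays a prompt and returns True or False.
        Returns True only if the user still wants to share after the pause.
        """
        prompt = f"Take a moment: have you read {url}?"
        time.sleep(PAUSE_SECONDS)   # the "chance to breathe" before sharing
        return confirm(prompt)      # the decision is made after the pause

    # Example usage, with a console prompt standing in for platform UI:
    if __name__ == "__main__":
        shared = share_link(
            "https://example.com/article",
            confirm=lambda msg: input(f"{msg} Share anyway? [y/N] ").lower() == "y",
        )
        print("Shared." if shared else "Share cancelled.")

The design point is simply that the confirmation happens after the delay, so the decision is made once the impulse to reshare has cooled; nothing about the content itself is judged or removed.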

Kids and Social Media

Haugen also described the

disturbing ways social media

and targeted advertising affect

teens and children. For example,

Facebook’s surveys, some

involving as many as 100,000

respondents, found that social-media

addiction—euphemistically

known as “problematic use”—is

most common among 14-year-olds.

Yet when The Wall Street

Journal gave Facebook a chance

to respond to these findings, the

company pointed to other surveys

with smaller sample sizes and

different results.

“I can tell you as an algorithm

specialist that these algorithms

concentrate harms...in the form of

vulnerability,” Haugen said. “If you

have a rabbit hole you go down,

they suck you toward that spot.

Algorithms don’t have context.

They don’t know if a topic is good

for you or bad for you. All they

know is that some topics really

draw people in.”

Unfortunately, that often means

more extreme content gets the

most views. “Put in ‘healthy eating’

on Instagram,” she said, “and in the

course of a couple of weeks, you

end up [with content] that glorifies

eating disorders.” (Meta has owned

Instagram since 2012.)

She’d like to see legislation to

keep children under 13 off most

social media platforms. Meta

documents show that 20% of

11-year-olds are on the platform.

She’d also like adults to have

the option to turn off targeted

ads. Similarly, she would like to

see a ban on targeted ads aimed

at children and teens under 16,

such as those for weight-loss

supplements.

Haugen also suggested that

Facebook dedicate more resources

to fighting misinformation, fake

news, and hate speech. “We need

flat ad rates,” she said. “Facebook’s

own research has said over and

over again that the shortest path

to a click is hate, is anger. And so

it ends up that angry, polarizing,

divisive ads are five to 10 times

cheaper than compassionate or

empathetic ads.”
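
A back-of-the-envelope model shows how engagement-priced ad auctions can produce the gap Haugen describes. The sketch below assumes ads are ranked by bid times predicted engagement, so an ad predicted to provoke more clicks needs a smaller bid to win the same impressions. The numbers and the ranking rule are illustrative assumptions, not a description of Facebook’s actual auction.

    # Toy model of engagement-weighted ad ranking; not any platform's real auction.
    def bid_needed(competitor_score: float, predicted_engagement: float) -> float:
        """Minimum bid required to beat a competing ad's rank score,
        assuming rank = bid * predicted_engagement."""
        return competitor_score / predicted_engagement

    competitor_score = 1.0       # rank score of the ad slot we must win
    calm_engagement = 0.01       # assumed engagement rate of an empathetic ad
    divisive_engagement = 0.07   # assumed engagement rate of an angry, divisive ad

    calm_bid = bid_needed(competitor_score, calm_engagement)
    divisive_bid = bid_needed(competitor_score, divisive_engagement)

    print(f"Empathetic ad must bid: {calm_bid:.2f}")
    print(f"Divisive ad must bid:   {divisive_bid:.2f}")
    print(f"Divisive ad is {calm_bid / divisive_bid:.1f}x cheaper per impression")
    # With these assumed rates the divisive ad is about 7x cheaper, inside the
    # "five to 10 times" range cited above; a flat ad rate would equalize the two.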

“If we want to be safe,” Haugen

concluded, we need to have

open conversations about

these practices and “invest

more on transparency.”

3 WAYS TO FIX FACEBOOK

Ban targeted ads to children under 16

Dedicate more resources to fight fake news and hate speech

Keep kids under 13 off the platform

“Living with a

secret is really,

really hard,

especially when

you think that

secret affects

people's health,

their lives and

their well-being.”

FRANCES HAUGEN



PANEL : MISINFORMATION SOCIAL MEDIA SUMMIT @ MIT 2022

FAKE NEWS,

REAL IMPACT

Is social media to blame for producing

and spreading misinformation—or is it

part of a broader problem?

David Rand Professor, MIT Sloan, MIT IDE group leader

Renée DiResta Research Manager, Stanford Internet Observatory

Rebecca Rausch Massachusetts State Senator

Duncan Watts Professor, University of Pennsylvania

Fake news and misinformation

headlines are rampant: The presidential

election was stolen. Vaccinations kill.

Climate change is a hoax. To what

extent is social media responsible?

That was the critical question raised by

expert panelists in the second session

of SMS@MIT 2022.

The dangers are undoubtedly real,

but Duncan Watts, Stevens University

Professor at the Annenberg School

for Communication at the University

of Pennsylvania, observed that

social media is one small cog in a

larger set of mass media gears. “For

the average American, the fraction

of fake news in their media diet is

extremely low,” he argued.

Research shows that most Americans

still get their news primarily from

television. And of the news they do

consume online, fake news represents

only about 1% of the total. “We need

to look much more broadly than

social media,” Watts said. “We need

to look across all types of platforms

and content” to determine the source

of fake news. Today, there are

interconnected ecosystems—from

online influencers to cable networks

and print media—that all contribute to

amplifying misinformation.

Small Groups, Big Impacts

Fellow panelist Renée DiResta,

research manager at Stanford Internet

Observatory, maintained that even small

groups of people spreading fake news

and misinformation can have outsized

reach and engagement online.

“The literal definition of the word

propaganda means to propagate:

the idea that information must be

propagated,” she said. “And social

media is a phenomenal tool for this,

particularly where small numbers of

people can propagate and achieve

very, very significant reach in a way

that they couldn’t in old broadcast

media environments that were much

more top-down.”

Specifically, DiResta explained

how information goes viral and

echo chambers arise. “Influencers

have an amplification network,

the hyper-partisan media outlet

has an amplification network, until

[misinformation] winds up being

discussed on the nightly news,”

she said. “And it’s a phenomenally

distinct form of creating what is

effectively propaganda...it’s just

this fascinating dynamic that we

see happening with increasing

frequency over the last few years.”

Research upholds the idea that

familiarity with a topic “cues”

our brains for accuracy. In other

words, the more often you hear an

assertion—whether true, false or

neutral—the more likely you are to

believe it, according to moderator

David Rand, MIT professor of

management science and brain

and cognitive sciences, and a

group leader at the IDE.

Healthcare’s Unhealthy Messages

This repetition of false news

is “massively problematic” for

the public, said Massachusetts

State Senator Rebecca Rausch.

As an example, she cited reports

stating that 147 of the leading

anti-vaccination feeds, mainly on

Instagram and YouTube, now have

more than 10 million followers, a

25% increase in just the last year.

“A number of anti-vax leaders

seized the COVID-19 pandemic

as a historic opportunity to

popularize anti-vaccine sentiment,”

Rausch said. One result, she said,

is that vaccination hesitancy is

now rising, even for flu and other

routine shots.

Rausch also cited reports that

say 12 anti-vaccination sources

are responsible for 65% of all

anti-vax content online. Some

people also profit from their

misinformation by selling pills and

other supplements they claim

can act as vaccine alternatives.

Watts agreed that “small groups

of people with extreme points of

view and beliefs can indeed inflict

disproportionate harm on society.”

But, rather than saying, “we’re

all swimming in this sea of

misinformation and there’s some

large average effect that is being

applied to society,” we should be

looking at the broader context,

he said.

Algorithmic Influences

Watts said that people often

seek out certain content by

choice. “It’s not necessarily that

the platform is driving people into

a particular extreme position,”

he said. Evidence based on

YouTube, for example, shows

there is a lot of user demand

driving traffic—people search

for specific content, find it, and

share it widely. Unfortunately,

he said, “It’s shocking to be

confronted with that, but it’s not

necessarily a property of the

social media platform.”

Yet DiResta said we can’t

underestimate algorithmic

influence. “There’s an expression:

‘We shape our systems; thereafter,

they shape us,’” DiResta said.

“We’re seeing the extent to which

the network is shaped by the

platform’s incentives.”

Both DiResta and Rausch believe

some of the solutions rest with

legislation. But Rausch asked at

what point laws can supersede

algorithms that promote fringe

content. “What should we be

changing, if anything?” she asked.

“We are very far from knowing

what policies should be proposed

in terms of changing social

media, like platform behavior,

and regulating it,” said Rand.

“But we really need policy around

transparency and making data

available, breaking down the

walled gardens so people from the

outside can learn more about what

is going on.”

For Watts, solutions are complex:

“We can’t go back to a world where

we don’t have the technology to

communicate in this way...and it’s

not at all clear that you can say,

‘Well, you guys can talk to each

other about cargo bikes [online],

but you can’t talk to each other

about vaccines.’ I don’t deny that

it’s a terrible problem, but I feel

very conflicted about how we think

about solutions.”

FACT CHECK

1% of online content is fake news

12 sources are responsible for 65% of anti-vax content

10 million people follow the top anti-vax news feeds



PANEL : TRANSPARENCY

SOCIAL MEDIA SUMMIT @ MIT 2022

SEEING THROUGH

SOCIAL MEDIA

ALGORITHMS

The software that drives social media

is top secret. But given platforms’

huge impact on society, should social

media companies provide greater

algorithmic transparency?

Dean Eckles Associate Professor, MIT Sloan, MIT IDE group leader

Daphne Keller Director, Program on Platform Regulation, Stanford Cyber Policy Center

Kartik Hosanagar Professor, The Wharton School

“Simply gaining

access to

social media

algorithms isn’t

the complete

answer.”

DEAN ECKLES

“Algorithmic transparency”

may not be an everyday phrase,

but what’s behind it is simple

enough: Social media platforms,

including Facebook, Twitter,

and YouTube, are having such

a significant impact on society,

researchers should be allowed

to study the software programs

that drive their recommendation

engines, content rankings, feeds

and other processes.

Transparency was a common

theme throughout the day, but

one session at SMS@MIT 2022,

focused entirely on the topic,

digging deep into how the goal

can be achieved, and what the

tradeoffs may be. Panelists

explained that many social

media companies have treated

their software code as a state

secret to date. Access to

proprietary algorithms is granted

to company insiders only—and

that’s a problem in terms of

verifying and testing the platforms

and their content.

“Platforms have too much

control,” said Kartik Hosanagar,

professor of operations,

information, and decisions at The

Wharton School, referring to the

relationship between algorithmic

transparency and user trust.

“Exposing that [information] to

other researchers,” he added, “is

extremely important.”

Complex Interactions

At the same time, simply gaining

access to social media algorithms

isn’t sufficient, said Dean Eckles,

the panel’s moderator and an

associate professor of marketing

at MIT Sloan School. He said his

own research shows “how hard it

is to quantify some of the impacts

of algorithmic ranking,” such as

bias and harm.

Eckles noted that algorithms and

consumers are in a feedback loop.

There is an interdependence of

sorts “because the algorithms

are responding to user choices,

and then users are making

choices based on the algorithm.”

Hosanagar added, “It’s a very

complex interaction. It isn’t

that one particular choice of

algorithm always increases or

always decreases filter bubbles.

It also depends on how users

respond.” The narrative that

algorithms cause the filter bubble

is too simple, he said, adding, “it’s

far more nuanced than that.”
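
A toy simulation can make that interdependence concrete. In the sketch below, a ranker boosts whichever topics the simulated user clicks, while the user’s clicks depend on what the ranker shows; a modest difference in interest compounds into a heavily skewed feed. The topics, probabilities, and update rule are invented for illustration and stand in for the general feedback loop the panel describes, not for any platform’s actual ranking system.

    # Minimal feedback-loop simulation: the algorithm responds to user choices,
    # and user choices depend on what the algorithm shows. Parameters are invented.
    import random

    random.seed(0)

    topics = ["news", "sports", "health", "fringe"]
    algo_weights = {t: 1.0 for t in topics}    # ranker's per-topic scores
    user_interest = {"news": 0.3, "sports": 0.3, "health": 0.3, "fringe": 0.6}

    def recommend() -> str:
        """Sample a topic in proportion to the ranker's current weights."""
        return random.choices(topics, weights=[algo_weights[t] for t in topics])[0]

    for _ in range(2000):
        shown = recommend()
        clicked = random.random() < user_interest[shown]
        if clicked:
            algo_weights[shown] *= 1.01        # engagement reinforces the topic

    total = sum(algo_weights.values())
    print({t: round(algo_weights[t] / total, 2) for t in topics})
    # The "stickier" topic ends up dominating the feed, even though the ranker
    # never looked at the content itself.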

Daphne Keller, director of the

Program on Platform Regulation

at Stanford University’s Cyber

Policy Center, would like to see

more academic research into

how social media platforms

moderate and amplify content

and what sorts of content

control—such as taking down

offensive or false posts—their

terms of service permit.

Unfortunately, “data scientists

inside the platform have all of

this [information],” Keller said.

“Lots of people outside platforms

have really compelling arguments

about the good they can do in the

world if they had that access. But

we have to navigate those really

significant barriers and competing

values.” Opening APIs to other

data scientists—as they are doing

in the EU—would be a helpful start

to more transparency. According

to panelists, less clear is what

data access consumers would

want or use.

Political Power

The panel also discussed the

difficulty of learning the exact

impact of social media on politics.

As Hosanagar—and Frances

Haugen, formerly of Facebook—

pointed out, the public sees

only reports the social media

companies make public. “We don’t know about the ones that are not approved internally,” he explained.

BREAKING THE CODE

1. Most platform companies continue to treat their algorithms as classified secrets.

2. Greater access to social media algorithms would allow researchers to explore platforms’ impacts and intentions.

3. Social media companies might share their code if offered incentives.

Keller added: “We need to have

researchers and algorithm

experts try to figure out a ranked

list of priorities because we’re not

going to get everything.”

One way forward without stifling

innovation, Eckles suggested,

would be to incentivize social

media firms to share their

internal data with other

researchers. Those incentives

could include public pressure and

the threat of lawsuits. It’s already

happened with Facebook sharing

data with Social Science One,

an independent research

commission formed to study

the effects of social media on

elections—potentially a sign of

good things to come.



PANEL : INFOWARS

SOCIAL MEDIA SUMMIT @ MIT 2022

UKRAINE’S SECOND

BATTLEFIELD:

INFORMATION

While Russia’s military invasion of Ukraine

involves all-too-real soldiers, guns, and

tanks, the two nations are also fighting a

war of information. Their main weapon of

choice? Social media.

Sinan Aral Director, MIT IDE

Clint Watts Distinguished Research Fellow, Foreign Policy Research Institute

Richard Stengel Political Analyst, CNBC

Natalia Levina Professor, NYU Stern School of Business

WAR OF WORDS

In Ukraine, social media

is being enlisted for

grassroots organizing

and assistance.

The most widely used

social media platforms

include Telegram and

TikTok.

Video is the most popular

format for social

media during the war.

There was nothing fake about

Russia’s military assault on

Ukraine on February 24, 2022.

Soldiers attacked the country

with guns, tanks, and bombs,

and the war still rages. But

on a second front—a parallel

information war waged via social

media—the most dangerous

weapons were misinformation

and fake news, said IDE director

Sinan Aral as he introduced his

expert panelists.

Without downplaying the severity

of the violence committed

by Russia’s military against

Ukrainian civilians, panelists

considered the implications of

the shadow information war in

Ukraine, as well as how both

sides are using social media to

rally global support and spread

information and disinformation.

Natalia Levina, professor of

information systems at NYU’s

Stern School of Business, who

grew up in Ukraine’s second-largest

city, Kharkiv, said, “My

day often starts with looking

[online] at what’s going on in

Kharkiv. And every day, I hope the

bombardment of the city is less.”

Russia’s designs on Ukraine

date back to at least 2014, the

year Russia annexed Crimea.

Panelist Richard Stengel, a

CNBC analyst and former U.S.

undersecretary of state, said

that’s also when Russia launched

an early and intense information

war. At the time, Russian

President Vladimir Putin denied

the very existence of the invasion,

even after Russian troops had

crossed the border. Other Russian

propaganda disseminated on

social media and elsewhere

falsely accused Ukrainians of

being antisemitic Nazis.

Eight years later, the situation has

shifted. This time, Stengel said,

Russia’s propaganda on social

media appears “antiquated,”

“clunky,” and “uncreative.” That’s

surprising, he said, since the

Russians follow a strategy stating

that four-fifths of war is not

kinetic but information. “They

have a sophisticated 30,000-foot

view,” Stengel said, but they don’t

seem to be executing it.

Levina was less sanguine,

saying, “it’s unfortunately

premature to say that Ukraine

has won the information war.”

Russia is disseminating its

messages via TikTok and its

own state-run media.

Video, Telegram Rule

TikTok and video have emerged

as top weapons in this new

information war, explained Clint

Watts, a research fellow at the

Foreign Policy Research Institute.

“Video is king now across all

platforms,” he said. “Over the

last decade, video-enabled

social media has become much

more available and ubiquitous…

everybody has a camera.”

Watts added that President

Volodymyr Zelensky, a

former actor, is a good video

communicator. In addition,

Zelensky and his government

have used the power of platforms

to crowdsource a virtual army.

While Ukraine doesn’t have many

physical resources, Watts said, it

has a “worldwide audience that

wants to help,” including people willing

to fight, provide materials,

donate through cryptocurrencies,

and more. “A decade ago, we

were talking about Twitter as

the distribution platform,” Watts

added. “The information battle

today is on Telegram.”

At the same time, false videos

are proliferating in Russia and in

China—the latter, home of TikTok.

As Aral noted, the messaging app

Telegram, which is widely

used in both Ukraine and

Russia, is owned by a Russian

who opposes moderating or

removing disinformation from

Russia or elsewhere.

“There is a ton to learn from

what’s happening right now

that would be instructive for

democracies,” Watts said.

“All our norms around how

state conflicts run will change

because this is the first social

media-powered state conflict

I’ve seen.”

For example, he said, Ukrainian

soldiers are essentially

conducting psychological

operations on their adversary by

directly text messaging soldiers

on the front lines. “This is

remarkable,” Watts added.

Reviving Personal Networks

Social media is also being used

for positive ends. For example,

Levina uses social media to

check in with Ukrainian relatives,

some of whom were too old or

ill to evacuate.

Levina said that social media

is also reviving old-fashioned

personal networks within the

greater Ukrainian community.

“Somebody on Facebook—a

friend of a friend—may say,

‘We know that people in this

hospital...really need catheters,’”

she said. “So everybody in the

network is looking for medical

catheters of a particular size that

would then be shipped.” This kind

of strong, grassroots organizing

has been “amazing,” Levina said.

Still, Levina called for continued

vigilance. “We have to be active

skeptics, not cynics,” she said.

“We really need to keep checking

the information and not be lazy.”

In the new information war, that’s

a powerful command.



PANEL : RESPONSIBLE AI SOCIAL MEDIA SUMMIT @ MIT 2022

WHO’S

RESPONSIBLE

FOR

IRRESPONSIBLE

AI?

Software does whatever it’s programmed to

do. The primary factor behind AI ethics is

the people who design and create it.

Renée Richardson Gosline Professor, MIT Sloan, MIT IDE group leader

Rumman Chowdhury Director of Machine Learning Ethics, Transparency, and Accountability, Twitter

Chris Gilliard Professor, Macomb Community College

Suresh Venkatasubramanian Asst. Director, U.S. Office of Science and Technology Policy

The talk about artificial intelligence

(AI) being ethical and responsible

can be a bit misleading. Software

itself is neither ethical nor

responsible; it does what it’s been

programmed to do. The greater

concern is the people behind

the software. Unfortunately, said

panelists in this SMS@MIT 2022

session, the ethics of many AI

developers and their companies

fall short.

Some irresponsible or biased

practices are due to a kind of

high-tech myopia, said Rumman

Chowdhury, Twitter’s director

of machine learning ethics,

transparency, and accountability. In

Silicon Valley, “people fall into the

trap of solving the problems they

see right in front of their faces,”

she said, and those are often

problems faced by the privileged.

As a result, she added, “we can’t

solve, or even put adequate

resources behind solving larger

issues of imbalanced data sets

or algorithmic ethics.”

“The most fascinating part

of working in responsible AI

and machine learning is that

we’re the ones that get to think

about these systems, truly

as socio-technical systems,”

Chowdhury said.

“There isn’t just one single thing that government or industry or academia

needs to do to address these broader questions. It’s a whole coalition of

efforts that we have to build together.”

SURESH VENKATASUBRAMANIAN

Myopia also can be seen in

business-school students, noted

Renée Richardson Gosline, the

panel’s moderator and a senior

lecturer in management science

at MIT Sloan School and a leader

at the MIT IDE. MBA students

“have all of these wonderful ideas

for companies that they’d like to

launch,” she said. “And the ethics

of the AI conversation oftentimes

lags behind other concerns that

they have.”

‘Massive Harms’

Panelist Chris Gilliard, Professor

of English at Macomb Community

College and an outspoken social

media critic, took a more direct

stance. “We should do more than

just wait for AI developers to

become more ethical,” he insisted.

Instead, Gilliard advocates for

stringent government intervention.

The tradeoff for having sophisticated

technology should not

be surveillance and sacrificing

privacy, in his view: “If we look at

how other industries work…there

are mechanisms so that you are

typically not allowed to just release

something, do massive amounts

of damage, and then perhaps

address those damages later on.”

Gilliard acknowledged that his pro-regulation

stance is opposed in

Silicon Valley, where unfettered

innovation is coveted. “Using

that as an excuse for companies

to perpetuate all manner of

harms has been a disastrous

formulation,” Gilliard said, “not just

for individuals, but for countries

and society and democracy.”

Chowdhury acknowledged the

responsibility corporations bear.

“In industry, doing responsible AI

means that you are ensuring that

what you are building is, at the

very least, not harming people at

scale, and you are doing your best

to help identify and mitigate those

harms,” she said. Beyond that,

she added, “responsible AI is also

about enabling humans to flourish

and thrive.” Chowdhury sees many

startups building on these ideas as

they develop their companies, and

ethical AI may actually “drive the

next wave of unicorns,” she said.

Working Together

Suresh Venkatasubramanian,

assistant director of the U.S. Office

of Science and Technology Policy,

a branch of the White House,

has a pragmatic perspective. He

maintained that “there isn’t a single

thing that government or industry

or academia needs to do to

address these broader questions.

It’s a whole coalition of efforts that

we have to build together.”

Those efforts, he added,

could include “guardrails” and

best practices for software

development, making sure that

new products are tested on

the same populations that will

ultimately use them. More rigorous

testing is also needed to protect

people from what he called

“discriminatory impacts.”

Chowdhury summed it up by

saying that “responsible AI is not

this thing you do on the side after

you build your tech. It is actually

a core part of ensuring your tech

is durable.” She urged companies

to “carve out meaningful room for

responsible AI practices, not as a

feel-good function, but as a core

business value.”

Venkatasubramanian agreed that

articulating ethical values and

rights is important. But once that’s

done, he added, it’s time to “allow

our technologists and our creative

folks to build technologies that can

help us respect those rights.”

3 GROUPS WORKING FOR MORE ETHICAL TECH

Startups & Society Initiative promotes the adoption of more ethical and socially responsible practices in technology firms.

Parity Responsible Innovation Fund invests in innovation that protects privacy and security rights and the ethical use of technology.

National AI Research Resource Task Force is a joint venture of the U.S. National Science Foundation and the Office of Science and Technology Policy.



SOCIAL MEDIA SUMMIT @ MIT 2022

FINAL

THOUGHTS

The rise of social media has created new and

complex challenges, and there’s no silver bullet to

solve them all. That’s why the MIT Initiative on the

Digital Economy will continue to be a nexus for

ongoing social media research. It’s also why we

intend to keep our annual Social Media Summit@MIT event free and open to the public.

The IDE serves as an important hub for

academia, industry, public policy, the economy

and society. Understanding the consequences

of social media, both intended and unintended,

is a mission we take seriously. The sometimes

contradictory goals are to protect privacy

while enabling democracy, and to prevent

abuse while providing a platform where

everyone can share their views.

We hope you’ll help keep the IDE at the

forefront of these issues with your support

and participation. To learn more about the

IDE, and to provide support to our important

research and events, please reach out to

Devin Cook or David Verrill.

“The goal is to protect privacy while enabling democracy.”

SINAN ARAL

MANY

THANKS

Thank you to our SMS@MIT channel partners.

The MIT Social Media Summit was made

possible by a collaboration with the MIT Office

of External Relations.

CONTENT Peter Krass, Paula Klein SESSION ILLUSTRATIONS DPICT

