2024 MIT IDE Annual Conference Event Report
TABLE OF CONTENTS

INTRODUCTION & HIGHLIGHTED RESEARCHERS
David Verrill, IDE Executive Director, and Albert Scerbo, IDE Associate Director

TESTING GENAI IN THE FIELD
Generative AI and Decentralization Research Group

GUARDRAILS FOR AI
Building a Distributed Economy Research Group

THE CASE FOR AI FRICTION
Human-First AI Research Group

WATCHING AI LEARN
AI Marketplaces and Labor Economics Research Group

GETTING GEEKY
Technology-Driven Organizations and Digital Culture Research Group

AI's IMPACT ON JOBS, ACCESS AND INFLUENCE
Artificial Intelligence, Quantum and Beyond Research Group

FIGHTING MISINFORMATION & CONSPIRACY THEORIES
Misinformation and Fake News Research Group

TAKING DATA ANALYTICS TO THE NEXT LEVEL
Data and Analytics Research Group

KEY TAKEAWAYS
8 Big Ideas from the 2024 IDE Annual Conference
INTRODUCTION

“These topics are affecting pretty much everyone every day,” said IDE Executive Director David Verrill in his opening remarks. That’s one truth that can be trusted.
Generative AI is already answering your search queries and your customer calls, generating job posts and resumes, and collaborating with human teams on the job. But can we trust its answers? What regulations should be in place? And is AI development being dominated by commercial interests? These are among the topics being investigated by researchers at the MIT Initiative on the Digital Economy (IDE).
At the IDE’s 2024 Annual Conference, an online members-only event during the week of May 20, speakers described AI projects about trust, policy, productivity and economics that they hope will provide practical guidance to those building and using GenAI. IDE researchers also explained why, in the rush to monetize and implement AI, rigorous research and careful consideration are especially needed.
While GenAI is top of mind, the conference also featured presentations on other topics vital to the digital economy. In all, eight research group leaders, postdocs and doctoral students presented their cutting-edge studies. Their topics included quantum computing, digital culture, countering false conspiracy theories online, and the credibility of social media platforms.
HIGHLIGHTED RESEARCHERS

Nur Ahmed
Michael Caosun
Patrick Connolly
Thomas Costello
Harang Ju
Shayne Longpre
Sahil Loomba
Robert Mahari
Benjamin Manning
Cameron Martel
Alex Moehring
Jonathan Ruane
Ana Trisovic
Emma Wiles
Yunhao Jerry Zhang
Generative AI and Decentralization Research Group

TESTING GENAI IN THE FIELD
Experiments measure productivity, human collaboration and trust.
To see how smart and reliable AI can be, tests are moving from the laboratory to the real world. These “road tests” can also help businesses estimate how much productivity to expect from their AI investments.

Several efforts are underway at the IDE to explore AI productivity, human collaboration and trust. IDE Director Sinan Aral told attendees at the Annual Conference, “We really want to get a handle on applied GenAI and what it means for business.”

“The rubber meets the road” with large-scale experiments, said Aral, who also heads the IDE’s Generative AI and Decentralization Group.
Two GenAI experiments are taking shape in Aral’s research group. One involves an AI platform called MindMeld developed by MIT Sloan postdoc Harang Ju and doctoral student Michael Caosun. This online platform pairs users and large language models (LLMs) on tasks that include writing ad copy. The humans work with either GenAI assistants or other humans to measure collaboration and productivity.
The second experiment examines human trust in generative search. Search results generated by an LLM can include links to other references; sometimes, users provide feedback on whether the results were helpful. In the IDE field experiment, the key questions studied were: Can we trust GenAI? Should we trust it? And when do we trust it?

Nearly 5,000 test users were randomly given either generative search results or conventional search results in response to some 50,000 online queries. They were then asked how much they trusted the results and how willing they were to share that information.

Based on preliminary survey results, Aral reported that people generally distrust generative results compared with traditional search results when told which responses are generated by AI. This holds true even when traditional results and GenAI results are identical but arranged differently.

Adding citations or references isn’t always the solution, either. While people tend to trust AI more when references are included, those additions can be inaccurate.
“The veneer of rigor is more important than rigor itself,” Aral said. “It can give people trust in AI even when it's not warranted.”
Overall, Aral noted that there are clear pros and cons to generative information. On the one hand, GenAI can be adaptable, flexible, specific, responsive and rigorous. But GenAI can also “hallucinate,” making references to research and papers that don’t exist. “It looks authoritative,” Aral said, “but it’s really not.”
Building a Distributed Economy Research Group

GUARDRAILS FOR AI
Researchers describe efforts to audit AI datasets and treat regulatory compliance as a feature, not a bug.
Two researchers working with Alex “Sandy” Pentland, the Faculty Director of MIT Connection Science and Lead for the IDE’s Building a Distributed Economy Group, spoke about AI policy issues at the 2024 Annual Conference.

Shayne Longpre, a doctoral candidate at the MIT Media Lab, discussed a new AI topic known as data provenance. He maintains that the origin and ownership of a dataset are vital to the accuracy of AI training data and important to those building AI models. “This information was not well documented or understood,” Longpre said. As a result, data may be inappropriate for a given application, or it may not represent the right tasks, topics, domains or languages. It may even be used illegally.
That led Longpre and representatives from 10 organizations, including MIT, to co-found the Data Provenance Initiative. It’s a collaborative effort to audit the datasets used to train large language models (LLMs). So far, some 1,800 datasets have been reviewed.

Some of what the group’s audits turned up is disturbing. They found that on HuggingFace, a major platform that hosts datasets, nearly two-thirds of the datasets had incorrect or omitted licenses that state access permissions.

To help, the team developed the Data Provenance Explorer. It lets users select subsets of datasets by language and licensing constraints, apply filters across different criteria, and see information about the underlying data.
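To make the idea concrete, here is a minimal, hypothetical sketch of the kind of license and language filtering the Explorer supports. The record fields, sample datasets and function names are invented for illustration and are not the tool’s actual schema or API.

```python
# Hypothetical sketch of license/language filtering in the spirit of the
# Data Provenance Explorer. Field names and records are invented examples,
# not the tool's actual schema or API.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    name: str
    languages: list[str]      # languages covered by the dataset
    license: str              # license identifier, if documented
    license_verified: bool    # whether the license was confirmed at the source

CATALOG = [
    DatasetRecord("example-dialogue-en", ["en"], "CC-BY-4.0", True),
    DatasetRecord("example-news-multi", ["en", "es", "fr"], "unknown", False),
    DatasetRecord("example-code-docs", ["en"], "Apache-2.0", True),
]

def filter_catalog(records, allowed_licenses, required_language):
    """Keep only datasets with a verified, allowed license in the target language."""
    return [
        r for r in records
        if r.license_verified
        and r.license in allowed_licenses
        and required_language in r.languages
    ]

if __name__ == "__main__":
    usable = filter_catalog(CATALOG, {"CC-BY-4.0", "Apache-2.0"}, "en")
    print([r.name for r in usable])  # datasets that pass the provenance check
```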
Another proactive AI project was described by Robert Mahari, Research Assistant at the MIT Media Lab. Mahari’s approach is called regulation by design, and it involves embedding regulatory objectives directly into a technical design.

Regulation by design would give people the confidence to use AI systems knowing that they do not violate a law or regulation. To foster real-world implementations, Mahari’s group is working with the European Union, World Bank, U.S. Copyright Office and Singaporean Privacy Agency.
“Compliance and regulation by design represent a risk-management paradigm that’s uniquely suited for AI,” Mahari said. “Through intelligent technology design, it can proactively prevent failures and risks.”
Nearly 2/3: Share of datasets on HuggingFace, a major platform that hosts datasets, found to have incorrect or omitted access permissions.
Human-First AI Research Group

THE CASE FOR AI FRICTION
Research finds ‘beneficial friction’ keeps humans in the AI loop, boosting accuracy.
Artificial intelligence was the top agenda item of the Annual Conference session led by Renée Richardson Gosline, head of the IDE’s Human-First AI Group. Gosline discussed the importance of adding “beneficial friction” (essentially digital speed bumps) to AI systems to encourage users to be more deliberative and, when necessary, to change and correct direction.
“Our goal,” Gosline told attendees, “is to amplify the benefits of AI and minimize any potential harm.” Adding questions or alerts, she added, “ensures that humans are in the loop.”

As an example, OpenAI, the creator of the popular ChatGPT system, has added a pop-up message that says, “Check your facts…ChatGPT may give you inaccurate information.”
Another presenter, MIT Sloan postdoc Yunhao “Jerry” Zhang, explained findings from an experiment he and Gosline conducted to examine how humans perceive AI. In their experiment, both humans and AI systems wrote advertising content for marketing campaigns. The content was created using four paradigms: human only, AI only, augmented AI editor (the human writes the first draft, then edits using AI feedback), and augmented human editor (the AI writes the first draft, which is then edited with human feedback).

Some 1,200 participants were randomly assigned to view the content. Some didn’t know the creator; others were partially informed about the AI; and a third group was fully informed about the content’s origin.
The upshot: When people didn’t know how the content had been generated, they generally considered the AI-generated content to be valuable. But when they did know, they favored the content created by humans. “There is evidence of human favoritism, but not AI aversion,” Zhang said, underscoring the value of keeping humans involved.
Similarly, responsible AI was highlighted by Patrick Connolly, Global Responsible AI Lead at IDE partner Accenture Research. Connolly, Gosline and four other researchers have co-written a paper, “Nudge Users to Catch Generative AI Errors,” featured in a recent issue of the MIT Sloan Management Review.
At the conference, Connolly maintained that responsible AI is essential to competitive advantage. Technology alone won’t lead to successful AI. Winning firms, he said, will be those that “build [responsible AI] into their core.”

“Traditional barriers of data, talent, budgets and scaling proof-of-concepts are not holding back companies today,” Connolly said. Barriers now include intellectual property, AI hallucinations and cybersecurity. Responsible AI, he added, will overcome these concerns by being trustworthy, accurate and high-performing.
AI Marketplaces and Labor Economics Research Group

WATCHING AI LEARN
Two experiments move us closer to understanding AI intelligence.
The age of AI working side by side with humans could be here sooner than expected. Experiments at the IDE already find AI acting like humans, and in some cases outperforming them.

AI as a writer and editor was the subject of linked presentations by John Horton, an Associate Professor at MIT Sloan and lead of the IDE’s AI Marketplaces and Labor Economics Group, and Emma Wiles, a doctoral candidate at MIT Sloan.
In their experiment, a GenAI system was prompted to write the first drafts of job descriptions that employers could then post online to attract job candidates. Employers with access to the AI-written drafts were about 20% more likely to post the descriptions than those without the AI. The managers also spent about 40% less time writing or editing job posts than the control group.

However, when it came to actual hires, employers with access to AI-written job descriptions made nearly 20% fewer hires than the others.

The downturn in hiring among the treatment group surprised the researchers; they expected AI would not only improve the descriptions, but also increase the number of hires.
While experimental results don’t always work out as planned, Horton said, “it gives us a road map for how to improve these kinds of features, which we still think have an enormous amount of potential.”
A presentation by MIT Sloan doctoral candidate and IDE affiliate Benjamin Manning explored whether a large language model (LLM) could complete the four high-level tasks of a social scientist: create a hypothesis, run an experiment, analyze the results, and then update the hypothesis.

That may sound like too much for an AI system but, Manning said, “that’s exactly what we did.”
To test the system, researchers simulated bidding at an art auction. The LLM hypothesized that the higher the buyers’ budgets, the higher the price of the final deal. After running the simulation more than 340 times, the LLM’s hypothesis was generally correct. When researchers fed the results of the simulations back into the LLM (the equivalent of a human social scientist analyzing the results of their experiment), the LLM adjusted its predictions just as a human would.

With each iteration, “the model performed much better,” Manning said. “It improved based on experimentation on itself—which is pretty cool.”
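For readers who want a concrete picture of that loop, here is a minimal, hypothetical sketch of the hypothesize, experiment, analyze and update cycle Manning described. The toy auction simulation and the update rule are invented stand-ins; in the study itself, an LLM generated and revised the hypotheses.

```python
# Hypothetical sketch of the hypothesize -> experiment -> analyze -> update loop
# described above. The auction simulation and update rule are invented for
# illustration; the actual study used an LLM to generate and revise hypotheses.
import random

def run_auction(budgets):
    """Simulate one auction: each bidder bids a random share of their budget."""
    bids = [b * random.uniform(0.5, 1.0) for b in budgets]
    return max(bids)  # winning (final) price

def experiment(num_runs=340):
    """Run many simulated auctions with varying budgets and record outcomes."""
    results = []
    for _ in range(num_runs):
        budgets = [random.uniform(50, 150) for _ in range(4)]
        results.append((sum(budgets) / len(budgets), run_auction(budgets)))
    return results

def analyze(results):
    """Estimate how strongly average budget and final price move together."""
    n = len(results)
    mean_b = sum(b for b, _ in results) / n
    mean_p = sum(p for _, p in results) / n
    cov = sum((b - mean_b) * (p - mean_p) for b, p in results) / n
    var_b = sum((b - mean_b) ** 2 for b, _ in results) / n
    return cov / var_b  # slope: predicted price change per unit of budget

# "Hypothesis": price rises with budget at some assumed rate; update it from data.
hypothesized_slope = 0.5
for iteration in range(3):
    estimated_slope = analyze(experiment())
    # Move the hypothesis toward what the experiment showed (a stand-in for the
    # LLM revising its prediction after seeing results).
    hypothesized_slope += 0.5 * (estimated_slope - hypothesized_slope)
    print(f"Iteration {iteration + 1}: hypothesized slope = {hypothesized_slope:.2f}")
```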
Technology-Driven Organizations and Digital Culture Research Group

GETTING GEEKY
Digital culture transforms management; quantum computing shows promise.
“A bunch of geeks” have figured out a better way to run a business, Andrew McAfee told attendees of his Annual Conference presentation. McAfee is Co-Director of the IDE and the author of the bestselling business-management book, The Geek Way.

As McAfee explained, “geeky” companies including Netflix and SpaceX have developed new management techniques that let them overtake longer-standing competitors. For example, it was SpaceX, not NASA or any other national space agency, that in 2022 managed 80% of all satellite launches from Earth, McAfee said. Similarly, Netflix has a market value over $270 billion, more than double that of either Disney or Warner Brothers.
The Geeks adopted four new norms, McAfee said:
1. Ownership
2. Openness
3. Science
4. Speed
Geek companies, McAfee concluded, “move faster, are a lot more egalitarian, give a great deal of autonomy, and try to settle their arguments via evidence. This is a lot better than what we were doing before.”
In a linked presentation, IDE Research Scientist Jonathan Ruane discussed quantum computing’s progress. In contrast to conventional computing’s binary approach, quantum systems use quantum bits (better known as qubits) that can exist in a superposition of states rather than a definite 0 or 1. Work on quantum computing is intense. IBM, for one, expects to have a system with over 4,500 qubits by next year.

While that’s impressive, Ruane doesn’t expect a fully functional quantum computer to become available anytime soon.
$1.60: Amount spent by U.S. companies on digital products and services for every dollar they spend on non-digital products and services. This reverses the ratio from 2007.
“There’s this enormous chasm,” Ruane said, “between where we are today with the error rates and how low we need to be before we can get into practical applications.”
$270+ billion: The market valuation of Netflix, which surpasses the market valuation of either Disney or Warner Brothers, two of Netflix’s more traditional competitors.
Artificial Intelligence, Quantum and Beyond Research Group

AI’s IMPACT ON JOBS, ACCESS AND INFLUENCE
Researchers study the economic and development implications of AI’s rapid pace of advancement. Who will dominate?
With AI transforming business and society, three important questions are often overlooked: Is AI really going to take jobs? Is AI research being led by the right parties? And is AI accessible broadly enough?

Researchers from the AI, Quantum and Beyond research group addressed these and other questions accompanying AI’s rapid advancements.

Group leader Neil Thompson presented the findings of his recent study, which used AI computer vision to measure actual job replacement. The study indicated that some of what we’ve heard about AI and jobs is “a little overblown,” Thompson said.

While Thompson believes excitement around AI is warranted, he expects to see “a much more gradual [adoption] as it takes longer for costs to go down and deployments to scale.” Short-term, he added, businesses will do cost-benefit analyses to determine which tasks make sense to automate with AI.
In a related presentation, IDE postdoc Nur Ahmed explained the results of his paper, co-written with Thompson and a third researcher, and published in Science. The authors assert that AI research is dominated by business interests, which should have policymakers worried.

Their paper also warns that business influences could curtail both research outcomes and future products and applications. In his IDE presentation, Ahmed described the policy issues at stake: commercialization, public interest and a concentration of power. Among the solutions he offered were the establishment of a national research cloud, the use of public datasets, and greater support for academic and international collaboration.
In the session’s final presentation, Ana Trisovic, a research scientist at MIT Future Tech, asked whether AI is accessible and usable by a broad enough range of people and organizations. This question, she noted, has important implications for regulatory policy, research practices and societal equity.

“There is unequal access to computational resources and technologies,” Trisovic said. “This influences who can participate in the AI-driven economy.”

Limited access, she added, can restrict scientific benefits and innovation potential to just a few, well-resourced institutions. If that happens, Trisovic said, expect to see “a significant disparity in research advancement.”
Misinformation and Fake News Research Group

FIGHTING MISINFORMATION & CONSPIRACY THEORIES
Misinformation flourishes online. Could technology help stem the tide?
False conspiracy theories and other forms of misinformation spread easily online. Too easily.

David Rand, an MIT Professor and leader of the IDE’s Misinformation and Fake News Group, explained during his Annual Conference presentation that while the content may differ, most misinformation sharing is driven by three common factors:
1. A lack of attention
2. Message repetition
3. Dissemination by partisan elites and political parties
Given widespread concerns about misinformation online, the hunt is on for effective deterrents. One hope is that the spread can be curtailed with technology itself. That’s the subject of experiments described at the IDE conference by two researchers working with Rand: Thomas Costello, an MIT postdoc, and Cameron Martel, a doctoral candidate at MIT Sloan.

The problem is serious; the researchers cited a recent poll finding that fully half of all Americans believe in at least one conspiracy theory. What’s more, dissuading people from believing these false theories is extremely difficult.
To help, Martel has been researching the efficacy of misinformation warnings. His experiments, involving thousands of participants, have found that warning labels do work. They lower users’ belief in misinformation, even among people who distrust human fact-checkers.
To test whether AI technology could combat the spread of misinformation, Costello and his colleagues first asked human subjects to describe a conspiracy theory they believed to be true. The researchers then prompted ChatGPT to engage in a discussion with these human subjects.

During the chats, the AI system would try to persuade people to change their views; it did this by showing that their conspiracy theories are unsupported by facts. A control group also chatted with the AI model, but on a banal topic.
To measure the results, the researchers asked subjects to rate their belief in their conspiracy theories on a scale of 0 to 100, both before and after the AI chats. Overall, the AI interventions decreased the subjects’ beliefs in false conspiracy theories by about 20%. “Evidence and arguments can change your beliefs about conspiracy theories,” Costello said. “Needs and motives don’t totally blind you once you’re down the rabbit hole.”
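As a rough illustration of how such a before-and-after measurement turns into the reported percentage, here is a small, hypothetical calculation. The ratings below are invented sample numbers, not the study’s data, and the actual analysis was more involved.

```python
# Hypothetical sketch of summarizing pre/post belief ratings (0-100 scale).
# The sample data are invented for illustration only.

def mean(values):
    return sum(values) / len(values)

def relative_change(before, after):
    """Average percent change in belief ratings after the AI chat."""
    return 100 * (mean(after) - mean(before)) / mean(before)

# Invented example: each position is one subject's rating before and after.
treatment_before = [80, 70, 90, 60, 75]
treatment_after  = [60, 55, 75, 50, 62]

control_before = [78, 72, 88, 61, 74]
control_after  = [77, 71, 86, 62, 73]   # banal-topic chats barely move beliefs

print(f"Treatment group change: {relative_change(treatment_before, treatment_after):.1f}%")
print(f"Control group change:   {relative_change(control_before, control_after):.1f}%")
```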
20%: The amount people’s belief in false conspiracy theories dropped after chatting with a generative AI system.
Data and Analytics Research Group

TAKING DATA ANALYTICS TO THE NEXT LEVEL
Researchers explore changes to ‘long ties,’ examine social media tradeoffs.
How do we know if data is accurate and available to large populations of users? This question is being addressed by MIT Associate Professor Dean Eckles and his IDE Data and Analytics group.

Speaking at the Annual Conference, Eckles discussed new methods his group is developing to help organizations make better decisions using large-scale datasets. Eckles also conducts digital experiments that aim to make data more widely available to both researchers and businesses.
Eckles and two co-researchers described three areas of study now underway: geographically aggregated network data; natural experiments in social media; and improving decision-making with interventions.

The geodata research paper, published last year, analyzed data from postal codes in the United States and Mexico to describe and draw conclusions about interrelationships known as “long ties.” These connections, Eckles explained, can be “predictive of economic outcomes across different places.”
Eckles also described how errors in the estimates made from randomized controlled trials (known as A/B tests) can translate into good or bad corporate decisions. A/B tests, widely used to inform decisions, measure the average effect of a new intervention on various results, such as revenue or engagement. However, these estimates can be error-prone. To reduce the errors, Eckles said, researchers can adjust the intervention or customer group, or develop better decision tools.
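As a simple illustration of the estimate Eckles is referring to, here is a hypothetical sketch of a basic A/B-test calculation: the difference in mean outcomes between the two groups and its standard error. The revenue numbers are simulated and the analysis is deliberately bare-bones.

```python
# Hypothetical sketch of a basic A/B-test estimate: the average treatment effect
# (difference in mean outcomes) and its standard error. The revenue figures are
# simulated; real analyses involve many more safeguards.
import math
import random

random.seed(0)
# Simulated per-user revenue under the existing experience (A) and the new one (B).
group_a = [random.gauss(10.0, 4.0) for _ in range(5000)]
group_b = [random.gauss(10.3, 4.0) for _ in range(5000)]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Difference in means is the estimated average effect of the intervention.
effect = mean(group_b) - mean(group_a)
# Its standard error indicates how noisy that estimate is; a decision based on an
# estimate whose error bars straddle zero can easily go the wrong way.
std_err = math.sqrt(variance(group_a) / len(group_a) + variance(group_b) / len(group_b))

print(f"Estimated lift: {effect:.3f} per user (standard error {std_err:.3f})")
```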
A recent experiment using aggregated network data was described by Sahil Loomba, a postdoctoral fellow in the IDE research group. As Eckles noted, aggregated treatment-effect data can reflect estimate errors. As a result, it can be too high or too low. To help correct these errors, Loomba studied several aspects of social networks: controlling the network structure; experimenting with different model-based solutions; and considering social spillover behavior and the role of sparsity.
Another aspect of large platform data was discussed by MIT doctoral candidate Alex Moehring: personalized rankings and user engagement, based on the Reddit news site.

Moehring first assessed Reddit’s ranking and recommendation algorithms. He then considered some of the fundamental tradeoffs that firms make when implementing ranking and recommendation algorithms on social media platforms.
Among other issues, Moehring explored the impact of boosting content ranking and engagement on the credibility of promoted news content. As a result of the ranking, he found, a majority of users actually became more discerning. However, a subset of users instead saw much more low-credibility content and misinformation. It’s up to the platforms, Moehring concluded, to adjust their algorithms in ways that boost credibility.
KEY TAKEAWAYS
8 Big Ideas from the 2024 IDE Annual Conference
1. HUMAN FAVORITISM leads people to prefer content created by humans over that created by AI. People also distrust AI search results. However, when people don’t know how content was created, they find AI-created content to be of high quality.

2. DATA PROVENANCE will gain acceptance. The history of a dataset’s ownership is becoming vital to the accuracy of AI training data and important to those who build AI models.

3. THE DEMOCRATIZATION OF AI HAS A LONG WAY TO GO. Even with OpenAI efforts, large firms dominate over academia in development and access to data.

4. AI JOB DESCRIPTIONS are a mixed bag. Employers given access to AI-written drafts were more likely to post the descriptions, yet the AI group made fewer hires.

5. “GEEK”-MANAGED COMPANIES move faster than traditional organizations. They’re also more egalitarian, offer autonomy, and settle debates with evidence.

6. QUANTUM COMPUTING has the potential to transform how business applications are processed, but not yet. Progress is being slowed by issues around technology, funding and security.

7. AI CAN FIGHT FALSE CONSPIRACY THEORIES ONLINE. Beliefs in false theories dropped by 20% when GenAI prompted users to see how the beliefs were unsupported by facts.

8. JOB LOSS FROM AI may not be as bad as some fear. The number of jobs that are actually cost-effective to automate is much smaller than you might expect. AI systems are expensive!

RESOURCES

The Data Provenance Initiative: A Large Scale Audit of Dataset Licensing and Attribution in AI
Long Ties, Disruptive Life Events, and Economic Prosperity
IDE, Accenture Develop a Business Framework for Quantum Computing
Misinformation Warning Labels are Widely Effective
Which Tasks are Cost-Effective to Automate with Computer Vision?
The Growing Influence of Industry in AI Research
New Research May Calm Some of the AI Job-Loss Clamor—For Now
Sending GenAI Into the Wild
Data Authenticity, Consent and Provenance for AI are All Broken
More, but Worse: The Impact of AI Writing Assistance on the Supply and Quantity of Job Posts
6 New Studies Put AI to the Test
How Do People Regard AI-Generated Content?
Nudge Users to Catch Generative AI Errors
THANK YOU TO OUR LOYAL SUPPORTERS

CORPORATE MEMBERS:

FOUNDATIONS:
Ewing Marion Kauffman Foundation
Google.org
MIT-IBM Watson AI Lab
Nasdaq
New Venture Fund
TDF Foundation

INDIVIDUALS:
Nobuo N. Akiha
Joe Eastin
Michael Even
Wesley Chan
Junichi Hasegawa
Ellen and Bruce Herzfelder
Reid Hoffman
Richard B. Homonoff
Edward S. Hyman, Jr.
Gustavo Pierini
Gustavo Marini
Tom Pappas
Jeff and Liesl Wilke
Other individuals who prefer to remain anonymous

Editorial Content: Paula Klein and Peter Krass
Design: Carrie Reynolds