Report / Oct 19, 2021

Communicating About the Social Implications of AI: A FrameWorks Strategic Brief

By Lindsey Conklin, Emilie L’Hôte, Michelle Smirnova, Patrick O'Shea


Public discussion about AI is shaped, and often derailed, by cultural mindsets that push people either to see AI in a virtuous light, leaving its biases unquestioned, or to assume that technology is competing with people and working its way toward a takeover of humankind.

Our research finds that Americans hold deep assumptions about AI that challenge open and critical conversations about its social implications and obscure our urgent need as a society to manage its impacts. These ways of thinking limit social activists’ ability to show the public how AI is used within existing systems of power and oppression, augmenting discriminatory and racist policies.

In partnership with the John D. and Catherine T. MacArthur Foundation’s Technology in the Public Interest (TPI) program, the FrameWorks Institute endeavored to explore these deeply held public assumptions to understand how thinking—and framing strategies—may need to evolve when it comes to communicating about the social impact of AI.

We interviewed researchers and advocates in the AI field to identify how predictive algorithms impact the domains of policing, child welfare, and health care. Then we conducted and analyzed cognitive interviews with members of the general public to uncover deep, implicit ways of thinking about the social implications of AI within those three domains.

We identified five key obstacles that researchers, activists, and advocates face in efforts to open critical public conversations about AI’s relationship with inequity and advance needed policies:

  1. AI as Innovation. The public just sees the “bright, shiny object,” overgeneralizing AI to mean any impressive, “innovative” technology. This limits their ability to see how deeply AI is already embedded in technologies we have been using for years in the most mundane aspects of our lives.
  2. AI as a Mystery. While people understand that AI uses data to make decisions, they don’t have a firm understanding of what predictive algorithms are or how they work, sometimes misunderstanding them as “fortune-tellers” able to predict the future (see the sketch after this list). Without an understanding of predictive algorithms, it is difficult to open conversations about the ways that AI creates and perpetuates systemic racism and oppression.
  3. AI versus Humans. The public sees AI as standing in opposition to humankind and fixates on “robot takeovers.” The question of AI as a human replacement dominates and crowds out critical thinking about how AI amplifies human biases in problematic ways. Many of the most important social implications of AI can only be seen when we examine the interaction between technology and humans.
  4. AI as Consumer Product. People don’t see the connections between AI and systemic inequities, instead understanding it through a consumerist lens. They view AI simply as a luxury product that some can, and others can’t, afford. This makes whatever inequities people do see around these technologies seem like natural byproducts of our consumer culture.
  5. Bad Actors, Not Rigged Systems. While people seem open to some government intervention, they underestimate the need for regulating systems and industry, focusing on punishing “bad actors” instead.
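
To make the “mystery” obstacle concrete: a predictive algorithm is not a fortune-teller but a statistical summary of past decisions. The minimal Python sketch below is our own illustration, not material from the report; the neighborhoods, toy data, and risk_score function are all hypothetical. It shows how a “risk score” fit to records of past policing activity simply reproduces the patterns, and the biases, in those records.

```python
# Hypothetical sketch: a "predictive algorithm" as a summary of past decisions.
# If the historical records reflect biased practice, the model reproduces
# that bias rather than foretelling anything about the future.
from collections import defaultdict

# Toy historical data: (neighborhood, was_flagged_by_past_practice)
history = [
    ("A", True), ("A", True), ("A", False),   # heavily policed area
    ("B", False), ("B", False), ("B", True),  # lightly policed area
]

# "Training": tally how often each neighborhood was flagged in the past.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [flags, total]
for neighborhood, flagged in history:
    counts[neighborhood][0] += int(flagged)
    counts[neighborhood][1] += 1

def risk_score(neighborhood: str) -> float:
    """The predicted 'risk' is just the historical flag rate, bias included."""
    flags, total = counts[neighborhood]
    return flags / total if total else 0.0

print(round(risk_score("A"), 2))  # 0.67 -- mirrors past over-policing of area A
print(round(risk_score("B"), 2))  # 0.33
```

The point of the sketch is that the model’s “prediction” for a neighborhood is nothing more than the rate at which that neighborhood was flagged before, so any over-policing baked into the historical record flows directly into the score.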

Working from these obstacles, we recommend ways that advocates and activists can build a more critical and productive public conversation around AI and increase support for needed policies. This report is one piece of a larger, ongoing project aimed at finding new ways to shift mindsets about AI and advance our public discussion about what must be done to ensure that this technology does not create or perpetuate systemic inequality.