Bad AI and Beyond

Overview 

This project examines how media representations shape public perceptions of AI and then uses those findings to explore how Good Systems might better represent everyday interactions with AI to the public.

Methods and Findings 

The research team began by developing lists, taxonomies, and case studies of popular representations of AI, along with hypotheses about how the general public and elite groups distinguish between “good” and “bad” AI and how media representations help shape these perceptions. These hypotheses were then tested with members of the public and with groups of experts.

The project culminated in a festival featuring multiple media representations, including games, videos, and written works, all of which drew on the insights gleaned from focus group and survey research.

Research questions included: 

  • What drives popular negativity about AI? Does the public have its reasons that the experts know not of? Or has the public adopted views of AI born of misrepresentations? 

  • Do overtly dystopian representations of AI feed, or perhaps temper, public outrage about insidious issues with AI and machine learning such as the biases of search algorithms? 

  • Can we produce narratives about AI in different media modalities that are more nuanced and complex than the false dichotomy of good and bad? Once we have gained an understanding of how the public has been encountering AI, could a more accurately critical (and yet still exciting) model of AI-themed entertainment be developed? 

Team Members

Suzanne Scott, RTF