Scholarship: “Weizenbaum’s Performance and Theory Modes: Lessons for Critical Engagement with Large Language Model Chatbots”

Misti Yang and I wrote this article for the Association of Internet Researchers’ 2023 conference, and it was submitted a few weeks before her passing. It draws on her excellent work on Joseph Weizenbaum, applied to the context of ChatGPT and other LLMs. It’s an honor to carry on her legacy in thinking about the ethical use (and non-use) of AI in ways that promote human faculties of decision-making.

In 1976, Joseph Weizenbaum argued that, because “[t]he achievements of the artificial intelligentsia [were] mainly triumphs of technique,” AI had not “contributed” to theory or “practical problem solving.” Weizenbaum highlighted the celebration of performance without deeper understanding, and in response, he articulated a theory mode for AI that could cultivate human responsibility and judgment. We suggest that, given public access to Large Language Model (LLM) chatbots, Weizenbaum’s performance and theory modes offer urgently needed vocabulary for public discourse about AI. Working from the perspective of digital rhetoric, we explain Weizenbaum’s theorization of each mode and perform a close textual analysis of two case studies of OpenAI’s ChatGPT shared on Twitter to illustrate the contemporary relevance of his modes. We conclude by forecasting how theory mode may inform public accountability of AI.

You can read the paper in AoIR’s Selected Papers of Internet Research here, where a free PDF is available. I also encourage you to read and cite Misti’s work, which is so relevant today, and even donate to the Misti Yang Impact Award at our alma mater if you are so led.
