Argumentation and AI

Coordinator: Jean Wagemans

The research line Argumentation and AI explores the intersection of argumentation theory and artificial intelligence. Its central aim is to develop AI systems capable of generating, analyzing, and evaluating arguments in ways that are both meaningful and explainable. Building on foundational work from the Philosophy of Argument, most notably the Periodic Table of Arguments, this research seeks to operationalize core insights from argumentation theory and rhetoric by formalizing them and translating them into computational models and procedures.

By bridging human and machine reasoning, the research develops and empirically evaluates AI systems that can represent, classify, and assess arguments in both naturally occurring and artificially generated texts and dialogues. Beyond its contributions to argument-checking, the research line addresses fundamental questions concerning the possibilities and limitations of AI in argumentation. Key questions include: What are the theoretical constraints of current large language models in capturing argumentative meaning? How can AI systems offer transparent and rational explanations for their inferences? And how can large language models be responsibly harnessed to support human critical thinking and decision-making in real-world contexts?

Projects and collaborations

The ArguCheckr project

Can we design a chatbot capable of understanding and evaluating arguments? In this project, John Gatev, Pouneh Kouch, Stefan Mol, and Jean Wagemans collaborate to train the UvA AI Chatbot to emulate expert classifications of arguments. The project employs an experimental paradigm that compares human expert annotation of persuasive discourse with that of AI agents operating at different levels of expertise and trained on diverse materials.
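
How closely an AI agent's annotations track those of a human expert can be quantified with a standard agreement statistic. The following minimal Python sketch is illustrative only: the label set and the choice of metric are assumptions made here, not the project's published setup. It computes Cohen's kappa between hypothetical expert and chatbot argument-type labels:

    from collections import Counter

    def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
        """Chance-corrected agreement between two annotators."""
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        return (observed - expected) / (1 - expected)

    # Hypothetical argument-type annotations for five arguments:
    expert = ["sign", "cause", "authority", "analogy", "sign"]
    chatbot = ["sign", "cause", "sign", "analogy", "sign"]
    print(f"kappa = {cohens_kappa(expert, chatbot):.2f}")  # kappa = 0.71

A kappa of 1 indicates perfect agreement between expert and machine; values near 0 indicate agreement no better than chance.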

Explicating implicit information

Ameer Saadat-Yazdi and Jean Wagemans are developing PTA-based algorithmic procedures for explicating implicit information in argumentative discourse. In addition to the argument lever (“warrant” or “missing premise”) of individual arguments, they study how condensed gamma-form arguments can be interpreted as chains of individual arguments (“serial argumentation”).
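
To give a flavor of what such a procedure involves, the toy sketch below assumes a simple subject-predicate representation of statements; it is not the algorithm published in the Argumentation article listed under Activities. If premise and conclusion share their subject, the implicit lever connects their predicates; if they share their predicate, it connects their subjects:

    from dataclasses import dataclass

    @dataclass
    class Statement:
        subject: str
        predicate: str

    def reconstruct_lever(conclusion: Statement, premise: Statement) -> str:
        """Make the implicit linking premise ('lever') explicit.

        Predicate arguments share their subject, so the lever connects the
        predicates; subject arguments share their predicate, so the lever
        connects the subjects. Surface wording is left deliberately rough.
        """
        if conclusion.subject == premise.subject:
            return f"{premise.predicate} implies {conclusion.predicate}"
        if conclusion.predicate == premise.predicate:
            return f"{conclusion.subject} is comparable to {premise.subject}"
        raise ValueError("statements share neither subject nor predicate")

    # "The suspect is guilty, because the suspect fled the scene."
    print(reconstruct_lever(
        Statement("the suspect", "is guilty"),
        Statement("the suspect", "fled the scene"),
    ))  # -> "fled the scene implies is guilty"

Serial argumentation can then be modeled by applying the same reconstruction step to each link in a chain of statements.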

AI chatbots in the classroom

Stef Spronck, José Plug, and Jean Wagemans are developing methods for integrating generative AI and large language models (LLMs) to support and scale argument-checking in education. Drawing on recent advances in generative AI, students use optimized prompting and formalized procedures such as the Argument Type Identification Procedure (ATIP) to train and evaluate the performance of the UvA AI Chatbot in argumentation analysis. This includes comparing machine outputs with expert human evaluations to better understand the strengths and limitations of current AI systems in capturing argumentative meaning and structure.
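
A minimal sketch of this setup might look as follows. The prompt wording paraphrases the general idea of a stepwise identification procedure and is not the published ATIP; the function names and the chatbot backend are assumptions made for illustration:

    # Hypothetical prompt scaffold for stepwise argument-type identification.
    PROMPT_TEMPLATE = """\
    You are an argumentation analyst. Proceed step by step:
    1. Identify the conclusion and the premise of the argument.
    2. Determine whether they share their subject or their predicate.
    3. Name the argument type, choosing from: {types}.
    Answer with the type only.

    Argument: {argument}
    """

    def build_prompt(argument: str, types: list[str]) -> str:
        return PROMPT_TEMPLATE.format(argument=argument, types=", ".join(types))

    def matches_expert(model_label: str, expert_label: str) -> bool:
        """Score one machine analysis against the expert gold label."""
        return model_label.strip().lower() == expert_label.strip().lower()

    prompt = build_prompt(
        "We should leave now, because the meeting starts in ten minutes.",
        ["sign", "cause", "authority", "analogy", "example"],
    )
    # model_label = chatbot(prompt)  # call whichever LLM backend is in use
    print(matches_expert("cause", "cause"))  # True

Aggregating such comparisons over a corpus of annotated arguments yields the kind of human-machine evaluation described above.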

The project encompasses three complementary strands:

  • Method development: Formalizing argument-checking procedures grounded in the Periodic Table of Arguments and refining them for computational implementation.
  • AI integration: Adapting and training LLMs such as the UvA-developed AI chatbot to perform structured argument analysis and to assist learners in mastering argument-checking techniques.
  • Evaluation and reflection: Systematically comparing human and machine analyses to identify opportunities and challenges in using AI to support critical thinking in the academic context.

Student projects

Svea Bosch examined how trust in ChatGPT versus Google Search affects users’ ability to detect misinformation, finding that generative AI tools pose risks due to hallucinations and an authoritative tone. In a study of 14 participants, a clear divide emerged between those valuing Google’s transparency, who detected more misinformation, and those preferring ChatGPT’s convenience, who detected less despite being aware of its accuracy limitations. Overall, the results suggest a trade-off between convenience and epistemic vigilance, with lower vigilance increasing susceptibility to misinformation, and highlight implications for AI design, education, and policy.

Activities

Journal article – Saadat-Yazdi, A., & Wagemans, J.H.M. (2025). Finding the missing link: An algorithmic approach to reconstructing enthymemes. Argumentation.

Workshop talk – J.H.M. Wagemans (2025). Rhetoric and dialectic as a bonanza for hybrid argumentation. Plenary tutorial at the Lorentz Workshop on Hybrid Argumentation and Responsible AI. Lorentz Center, Leiden University, March 31, 2025.

rMA thesis – Svea Bosch (2025). “Open the Search Bar, HAL”: a Study of AI Trust and Misinformation (rMA Linguistics and Communication 2024-2025).

Journal article – Hinton, M., & Wagemans, J.H.M. (2023). How persuasive is AI-generated argumentation? An analysis of the quality of an argumentative text produced by the GPT-3 AI Text Generator. Argument & Computation, 14(1), 59-74. DOI: https://doi.org/10.3233/AAC-210026.

Seminar talk – E. Seremeta (2023). A rule-based computational model for statement type annotation. LANCAR Seminar, University of Amsterdam, February 3, 2023.

Journal article – Russo, F., Schliesser, E., & Wagemans, J.H.M. (2023). Connecting ethics and epistemology of AI. AI & Society: Knowledge, Culture and Communication. DOI: https://doi.org/10.1007/s00146-022-01617-6.

Chapter – Brave, R., Russo, F., Uzovic, O., & Wagemans, J.H.M. (2022). Can an AI Analyze Arguments? Argument-Checking and the Challenges of Assessing the Quality of Online Information. In C. El Morr (Ed.), AI and Society: Tensions and Opportunities (pp. 267-281). New York: Taylor and Francis imprint Chapman and Hall/CRC. DOI: https://doi.org/10.1201/9781003261247-20.

Workshop talk – F. Russo, E. Schliesser & J.H.M. Wagemans (2022). Connecting the ethics and epistemology of AI. Workshop series Issues in XAI 4: “Explanatory AI: between ethics and epistemology”. ONLINE, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, May 24, 2022.

Media appearance – ISOC NL (2022). Can an AI analyze arguments? ISOC NL & University of Amsterdam collaboration on scientific chapter at Taylor & Francis Group: “Argument-checking and the challenges of assessing the quality of online information”. URL: https://isoc.nl/nieuws/can-an-ai-analyze-arguments-argument-checking-and-the-challenges-of-assessing-the-quality-of-online-information/

Workshop talk – F. Russo & J.H.M. Wagemans (2021). Argumentation and (X)AI. Third Workshop “Towards an epistemological and ethical explainable AI (TEEXAI)”. University of Amsterdam, November 18-19, 2021.

Workshop talk – F. Russo, E. Schliesser & J.H.M. Wagemans (2021). Towards an epistemological and ethical XAI. Workshop Issues in Explainable AI #3: Bias and Discrimination in Algorithmic Decision Making. ONLINE, Leibniz University Hannover, October 8, 2021.

Conference talk – M. Hinton & J.H.M. Wagemans (2021). An Analysis of an Argumentative Text Produced by the GPT-3 AI Text Generator. Seventh International Conference on Philosophy of Language and Linguistics (PhiLang 2021). ONLINE, University of Łódź, May 14, 2021.

Workshop talk – J.H.M. Wagemans (2021). Evaluating subtypes of the argument from authority. Second Workshop “Towards an epistemological and ethical explainable AI (TEEXAI)”. ONLINE, University of Amsterdam, February 3, 2021.

Workshop talk – J.H.M. Wagemans (2020). Argumentative aspects of expert systems. First Workshop “Towards an epistemological and ethical explainable AI (TEEXAI)”. ONLINE, University of Amsterdam, December 2, 2020.