About Me


The challenge of fully comprehending philosophical and other arguments drives my interests. Often, I feel philosophical arguments involve some trickery or magic that I want to unveil. Fortunately, I became acquainted with applied formal logic and argument mapping, which provide precise tools for understanding and evaluating arguments. These tools have become the connecting thread throughout my interests and scientific work.

I am especially intrigued by the following questions:

  1. Norms of Public Debate and Deliberation: How should we (as a society) shape public debate and deliberation in general? Which role should deliberation play in our collective decision-making?
  2. Measuring Public Debate: How can we employ argumentation-theoretic tools to understand better what happens in public debate?
  3. Argumentation and Large Language Models: How can we use Large Language Models (LLMs) to analyse and foster public debate?

Norms of Public Debate and Deliberation

Public debate is the public exchange of divergent views on concrete political issues. It takes place through mass media, social media, democratic institutions, political events, and academic publications, to name a few. The concept of deliberation denotes rational argumentation aimed at collective decision-making. Deliberation is closely connected to the idea of deliberative democracy, in which rational argumentation among citizens plays a central role in democratic self-government. In a deliberative democracy, preference aggregation (in the form of voting) is only one part of collective decision-making. Equally important is a reasonable exchange of arguments and an open-mindedness to revise one's view in the face of new information and sound arguments.

Both concepts, public debate and deliberation, are connected. Public debate often involves exchanging reasons and arguments and indirectly influences collective decision-making in a democratic society. In other words, public debate is often a form of deliberation. Political decisions can be sound or poor, affecting the well-being of many people across countries and generations. Therefore, we should carefully consider how to shape public debate and deliberation to maximise well-being.

Ideally, participants in public debate discuss their views truthfully and based on facts, respect one another, and are willing to revise their views in light of new evidence. Their discussion should be constructive, reciprocal, and rational. Ideally, everyone can participate equally. While these aspirations are laudable, it is unclear what they require in specific contexts and how to resolve conflicts and tradeoffs between them and other constraints.

Let's consider political talk shows as a specific example. Talk shows are part of public debate but also belong to an entertainment industry focused on maximising profit. Do the media have obligations to bring talk shows closer to the deliberative ideal? Should the state regulate the media? Who should be invited to political talk shows? Should the full spectrum of public opinions be represented? Should experts have a special role? How should the moderator guide the discussion?

Measuring Public Debate

Thinking about deliberative norms concerns normative questions—that is, questions about what we ought to do and how to evaluate actions based on moral considerations. Observing and understanding what happens in public debate is a connected but distinct endeavour, as it relates to empirical questions. We could, for instance, analyse the extent to which public debate meets deliberative norms and identify explanations for why our actual practice falls short of ideal deliberation.

Researchers can approach the empirical analysis of public debate from a range of academic disciplines, including sociology, communication studies, psychology, and linguistics. With my background in argumentation theory and argument mapping, I am particularly interested in analysing the argumentative features of public debate. I am, for instance, interested in whether right-wing populism is connected to a specific form of argumentation: Do populists use certain argument types more often than other politicians? Are they prone to fallacies? What kind of argumentative strategies do they employ?

Researchers must use empirical methods to answer such questions. In some sense, they must measure features of public debate. But which measurement instruments are suitable for this purpose? Methods from applied formal and informal logic work well, but were not designed for socio-empirical contexts. As a result, they struggle to meet certain scientific criteria, particularly those related to reproducibility.

One challenge regarding reproducibility stems from the subjective nature of analysing natural-language argumentation. Argument analysis is an interpretational process, so two analysts can reach different conclusions. This hermeneutical underdetermination of argumentation analysis poses a challenge to using these methods in socio-empirical contexts. Here’s why: to “measure” features of public debate, we must ensure that divergent results reflect differences in the measured phenomena. In the face of hermeneutical underdetermination, however, divergent results can stem from two sources: differences in the phenomena or the analysts' interpretational choices. In my PhD thesis, I suggested a solution to this problem by advancing a statistical concept of reproducibility for analysing argumentation structure.
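To make the problem of diverging analyses a bit more concrete: a standard way to quantify how far two analysts agree beyond chance is an inter-annotator agreement statistic such as Cohen's kappa. The following is only a minimal sketch; the example labels ("support"/"attack") and the toy data are hypothetical, and a real study would use a statistic suited to its annotation scheme.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators on the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labelled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two analysts classify the same six argumentative moves.
analyst_1 = ["support", "attack", "support", "support", "attack", "support"]
analyst_2 = ["support", "attack", "attack", "support", "attack", "support"]
print(round(cohens_kappa(analyst_1, analyst_2), 2))  # → 0.67
```

A kappa well below 1 signals that the analysts' interpretational choices, not only the analysed texts, drive part of the observed differences.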

Argumentation and Large Language Models

Since the advent of ChatGPT, many people have become enthusiastic about the capabilities of large language models (LLMs). Undoubtedly, LLMs will revolutionise how we create and analyse language. They have enormous potential for use in the described normative and empirical contexts. So, how can we employ LLMs to improve and analyse public debate?

Improving public debate: Public debate involves text creation. Since LLMs excel at generating text, LLM-based tools could plausibly improve public debate. We could, for instance, think of tools that help citizens better understand others' arguments or formulate their own. In this way, LLMs could help improve the quality of public debate, increase inclusion and equality of participation, or even alleviate the prevailing problems in social media (such as toxic speech and misinformation). The research project KIdeKu aims to contribute to this line of employing LLMs.

Analysing public debate: Analysing public debate is a daunting task. It involves annotating and categorising text, which is often challenging: annotators must be extensively trained, and the analysis takes time and effort. Accordingly, such text analysis does not scale well. If LLMs could perform some of the tasks involved in analysing public debate, the analysis could be applied to much larger text corpora without relying on a swarm of annotators.
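The shape of such an LLM-assisted pipeline can be sketched in a few lines. Everything here is hypothetical: the `classify_argument` function stands in for a prompt to an actual model (here replaced by a trivial keyword rule so the sketch runs on its own), and the label set and corpus are invented for illustration.

```python
def classify_argument(sentence: str) -> str:
    """Stand-in for an LLM call that labels an argumentative move.
    A real implementation would prompt a model with annotation guidelines;
    here, a toy keyword rule keeps the sketch self-contained."""
    return "attack" if "but" in sentence.lower() else "support"

def annotate_corpus(sentences):
    """Label every sentence. With an LLM backend, this loop replaces
    manual annotation and scales to much larger corpora."""
    return [(s, classify_argument(s)) for s in sentences]

corpus = [
    "Deliberation improves collective decisions.",
    "But talk shows mainly entertain.",
]
for sentence, label in annotate_corpus(corpus):
    print(f"{label}: {sentence}")
```

The open question, of course, is whether a model's labels are reliable enough, which loops back to the reproducibility concerns discussed above.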