Below you will find a list of my publications. For work in progress, pre-prints, and pre-registrations of work that is not yet published, please go to “Research”.
Recent research on thick terms like “rude” and “friendly” has revealed a polarity effect, according to which the evaluative content of positive thick terms like “friendly” and “courageous” can be more easily cancelled than the evaluative content of negative terms like “rude” and “selfish”. In this paper, we study the polarity effect in greater detail. We first demonstrate that the polarity effect is insensitive to manipulations of embeddings (Study 1). Second, we show that the effect occurs not only for thick terms but also for thin terms such as “good” or “bad” (Study 2). We conclude that the polarity effect indicates a pervasive asymmetry between positive and negative evaluative terms.
Thick terms and concepts, such as honesty and cruelty, are at the heart of a variety of debates in philosophy of language and metaethics. Central to these debates is the question of how the descriptive and evaluative components of thick concepts are related and whether they can be separated from each other. So far, no empirical data on how thick terms are used in ordinary language has been collected to inform these debates. In this paper, we present the first empirical study, designed to investigate whether the evaluative component of thick concepts is communicated as part of the semantic meaning or by means of conversational implicatures. While neither the semantic nor the pragmatic view can fully account for the use of thick terms in ordinary language, our results do favor the semanticist interpretation: the evaluation of a thick concept is only slightly easier to cancel than semantically entailed content. We further discovered a polarity effect, demonstrating that how easily an evaluation can be cancelled depends on whether the thick term is of positive or negative polarity.
Deceptive implicatures are a subtle communicative device for leading someone into a false belief. However, it is widely accepted that deceiving by means of deceptive implicature does not amount to lying. In this paper, we put this claim to the empirical test and present evidence that the traditional definition of lying might be too narrow to capture the folk concept of lying. Four hundred participants were presented with fourteen vignettes containing utterances that communicate conversational implicatures which the speaker believes to be false. We further collected several potential proxy measures of lying, to get a better understanding of when a deceptive implicature is considered a case of lying. The results indicate that most implicatures (ten out of fourteen) were evaluated as lies and that lie ratings were closely tracked by the degree to which speakers were considered to have committed themselves to the truth of the content conveyed by their deceptive implicatures.
Moral philosophers draw an important distinction between two kinds of moral responsibility. An agent can be directly morally responsible, or they can be derivatively morally responsible. Many scholars in the debate believe that direct moral responsibility for an action presupposes that the agent could have acted other than she actually did. However, in some situations, we hold agents responsible even though they could not have acted differently, such as when they recklessly cause an accident or do not take adequate precautions to avoid harmful consequences. Moral philosophers often argue that what we ascribe in these cases is derivative moral responsibility for the action, which results from direct moral responsibility for some other, earlier action. In this paper, I apply this conceptual distinction to the experimental debate about so-called folk-compatibilism or, more precisely, to the question of whether the folk reject the Principle of Alternative Possibilities. I argue that experimental philosophers have failed to consider this distinction when designing experiments and interpreting their results. With the help of three experiments, I demonstrate that intuitions which seem to conflict with the Principle of Alternative Possibilities are best explained by the attribution of derivative moral responsibility. For this reason, these studies do not speak in favour of compatibilism.
Thick terms and concepts, such as honesty and cruelty, are at the heart of a variety of debates in linguistics, philosophy of language, and metaethics. Central to these debates is the question of how the descriptive and evaluative components of thick concepts are related and whether they can be separated from each other. So far, no empirical data on how thick terms are used in ordinary language has been collected to inform these debates. In this paper, we present the first empirical study, designed to investigate whether the evaluative component of thick concepts can be separated from the descriptive component. Our study might be considered to support the view that separation is not possible. However, our study also reveals an effect of valence, indicating that people reason differently about positive and negative thick terms. While evaluations cannot be cancelled for negative thick terms, they can be for positive ones. Three follow-up studies were conducted to explain this effect. We conclude that the effect of valence is best accounted for by a difference in the social norms guiding evaluative language.
In several recent papers and a monograph, Andreas Stokke argues that questions can be misleading, but that they cannot be lies. The aim of this paper is to show that ordinary speakers disagree. We show that ordinary speakers judge certain kinds of insincere questions to be lies, namely questions carrying a believed-false presupposition the speaker intends to convey. These judgements are robust and remain so when the participants are given the possibility of classifying the utterances as misleading or as deceiving. The judgements contrast with judgements participants give about cases of misleading or deceptive behaviour, and they pattern with judgements participants make about declarative lies. Finally, the possibility of lying with non-declaratives is not confined to questions: ordinary speakers also judge utterances of imperative, exclamative and optative sentences carrying believed-false presuppositions to be lies.
In this paper, I present three original, pre-registered experiments that test the relevance of alternative possibilities for the attribution of moral responsibility. Many philosophers have argued that alternative possibilities are required for an agent’s moral responsibility for the consequences of omitting an action. In contrast, it is argued that alternative possibilities are not required for moral responsibility for the consequences of performing an action. Thus, while an agent can be morally responsible for an action she could not have avoided, an agent is never morally responsible for omitting an action she could not have performed. Call this the Action/Omission Asymmetry Thesis. In this paper, I discuss various strategies to challenge the Action/Omission Asymmetry Thesis. I identify the predictions those strategies make about the conditions under which an agent will be held morally responsible for an unavoidable action or omission. These predictions are subsequently tested in three experiments to evaluate their respective plausibility. I demonstrate that whether there is an Action/Omission Asymmetry strongly depends, first, on the type of moral judgment we consider relevant for the Action/Omission Asymmetry Thesis, and, second, on the scale we use to test the folk’s intuitions.
This book empirically investigates the social practice of ascribing moral responsibility to others for the things they failed to do, and it discusses the philosophical relevance of this practice.
In our everyday life, we often blame others for things they failed to do. For instance, we might blame our neighbour for not watering our plants during our vacation. Interestingly, the attribution of blame is typically accompanied by the attribution of causal responsibility. We not only blame our neighbour for not watering our plants, but we do so because we believe that not watering the plants caused them to dry up and die. In this book, I investigate how we make moral and causal judgments about omissions. I discuss different philosophical perspectives on this matter, and I outline to what extent the actual social practice is in line with philosophical theories.
It has recently been argued that normative considerations play an important role in causal cognition. For instance, when an agent violates a moral rule and thereby produces a negative outcome, she will be judged to be much more of a cause of the outcome, compared to someone who performed the same action but did not violate a norm. While there is a substantial amount of evidence reporting these effects, it is still a matter of debate how this evidence is to be interpreted. In this paper, we engage with the three most influential classes of explanations, namely, (a) the Norm‐Sensitive Cognitive Process View, (b) the Normative Concept View, and (c) the Pragmatics View. We will outline how these theories explain the empirical results and in what ways they differ. We conclude with a reflection on how well these strategies do overall and what questions they still leave unanswered.
Although philosophers have often held that causation is a purely descriptive notion, a growing body of experimental work on ordinary causal attributions using questionnaire methods indicates that it is heavily influenced by normative information. These results have been the subject of sceptical challenges. Additionally, those who find the results compelling have disagreed about how best to explain them. In this chapter, we help resolve these debates by using a new set of tools to investigate ordinary causal attributions—the methods of corpus linguistics. We apply both more qualitative corpus analysis techniques and the more purely quantitative methods of distributional semantics to four target questions: (a) Can corpus analysis provide independent support for the thesis that ordinary causal attributions are sensitive to normative information? (b) Does the evidence coming from corpus analysis support the contention that outcome valence matters for ordinary causal attributions? (c) Are ordinary causal attributions similar to responsibility attributions? (d) Are causal attributions of philosophers different from causal attributions we find in corpora of more ordinary language? We argue that the results of our analyses support a positive answer to each of these questions.
What are the main features that influence our attribution of moral responsibility? It is widely accepted that various factors strongly influence our moral judgments, such as the agent’s intentions, the consequences of the action, the causal involvement of the agent, and the agent’s freedom and ability to do otherwise. In this paper, we argue that this picture is incomplete: social roles are an additional key factor that is radically underestimated in the extant literature. We present an experiment that supports this claim.
Consider the following causal explanation: the ball went through the goal because the defender didn’t block it. Citing omissions in causal explanations raises at least two problems. First, how do we choose the relevant candidate omission (e.g. why the defender and not the goalkeeper)? Second, how do we determine what would have happened in the relevant counterfactual situation (i.e. perhaps the shot would still have gone through the goal even if the defender had tried to block it)? In this paper, we extend the counterfactual simulation model (CSM) of causal judgment (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2014) to handle the second problem. In two experiments, we show how people’s causal model of the situation affects their causal judgments by influencing which counterfactuals they consider. Omissions are considered causes to the extent that the outcome in the relevant counterfactual situation would have differed from what actually happened.
Is it possible to lie despite not saying anything false? While the spontaneous answer seems to be ‘no’, there is some evidence from ordinary language that a lie does not require what is said to be believed false. In this paper, we argue for a pragmatic extension of the standard definition of lying. More specifically, we present three experiments which show that people’s concept of lying is not about what is said, but about what is implied by saying it that way. We test three Gricean conversational maxims. For each of them, we demonstrate that a speaker who implies something misleading, even by saying something semantically true, is still considered to be lying.
Lying is an everyday moral phenomenon about which philosophers have written a lot. Not only has the moral status of lying been intensively discussed, but also what it means to lie in the first place. Perhaps the most important criterion for an adequate definition of lying is that it fits with people’s understanding and use of this concept. In this light, it comes as a surprise that researchers have only recently started to empirically investigate the folk concept of lying. In this paper, we describe three experimental studies which address the following questions: Does a statement need to be objectively false in order to constitute lying? Does lying necessarily include the intention to deceive? Can one lie by omitting relevant facts?
Imagine you and your friend Pierre agreed to meet at a café, but he does not show up. What is the difference between a friend’s not showing up at your meeting and any other person’s not coming? In some sense, all the people who did not come show the same kind of behaviour, but most people would be willing to say that the absence of a friend whom you expected to see is different in kind. In this paper, I spell out this difference by investigating laypeople’s conceptualisation of absences of actions in four experiments. In languages such as German, French, Italian, or Polish, people consider a friend’s not coming an omission. Any other person’s not coming, in contrast, is not considered an omission at all, but a mere nothing. This usage differs from that of English, where ‘omission’ refers to all kinds of absences. Moreover, in English ‘omission’ is not even an everyday term, but one invented by philosophers for the sake of philosophical investigation, whereas in other languages ‘omission’ (and its synonyms) is part of the everyday vocabulary. Finally, I discuss how this folk concept of omission could be made fruitful for philosophical questions.
The omission effect, first described by Spranca and colleagues (Spranca, Minsk, & Baron, 1991), has since been extensively studied and repeatedly confirmed (Cushman, Murray, Gordon-McKeon, Wharton, & Greene, 2012). All else being equal, most people judge it to be morally worse to actively bring about a negative event than to passively allow that event to happen. In this paper, we provide new experimental data that challenge previous studies of the omission effect both methodologically and philosophically. We argue that previous studies have failed to control for the equivalence of the rules that are violated by actions and omissions. Once equivalent norms are introduced, our results show that the omission effect is eliminated, even if the negative outcome of the behavior is foreseen and intended by the agent. We show that the omission effect does not constitute a basic moral disposition but occurs exclusively in complex moral situations. Building on these empirical results, we cast doubt on two influential explanations of the omission effect, the Causal Relevance Hypothesis and the Overgeneralization Hypothesis, and provide a novel explanation of the phenomenon. Furthermore, we discuss various ramifications of the interplay between our understanding of omissions and legal systems.