Is that a human? Categorization (dis)fluency drives evaluations of agents ambiguous on human-likeness
Journal of Experimental Psychology: Human Perception and Performance
A fundamental and seemingly unbridgeable psychological boundary divides humans from nonhumans. Essentialist theories suggest that mixing these categories violates "natural kinds"; perceptual theories propose that such mixing creates incompatible cues. Most theories hold that mixed agents, with both human and nonhuman features, obligatorily elicit discomfort. In contrast, we demonstrate top-down, cognitive control of these effects, such that the discomfort with mixed agents is partially driven by disfluent categorization of ambiguous features that are pertinent to the agent. Three experiments tested this idea. Participants classified 3 different agents (humans, androids, and robots) on either the human-likeness or a control dimension and then evaluated them. Classifying on the human-likeness dimension made the mixed agent (the android) more disfluent and, in turn, more disliked. Disfluency also mediated the negative affective reaction. Critically, devaluation resulted only from disfluency on human-likeness, and not from an equally disfluent color dimension. We argue that negative consequences for evaluations of mixed agents arise from integral disfluency (on features that are relevant to the judgment at hand, like ambiguous human-likeness). In contrast, no negative effects stem from incidental disfluency (on features that do not bear on the current judgment, like ambiguous color backgrounds). Overall, these findings support a top-down account of why, when, and how mixed agents elicit conflict and discomfort.

Public Significance Statement

People have always been fascinated with hybrid, mixed creatures. In antiquity, these were chimeras and griffins; in modern times, they are androids (robots with human-like features). However, such creatures (including androids) are often disliked. Dominant explanations claim that this occurs because they mix seemingly unbridgeable core "essences" or create early perceptual conflicts. In contrast, our experiments show that the dislike of androids can come from the aversive mental effort of categorizing them as human versus nonhuman. Critically, this effort (and the resultant dislike) is not inevitable, but instead depends on the perceiver's flexible categorization mindset. This suggests that higher-order cognitive processes can override seemingly "automatic" reactions to incompatible cues.