Large language models open up new opportunities and challenges for psychometric assessment of artificial intelligence [post]

Max Pellert, Clemens M. Lechner, Claudia Wagner, Beatrice Rammstedt, Markus Strohmaier
2022 unpublished
In this perspective article, we argue that systems built on large language models may exhibit psychological traits that have so far been studied only in humans. While we do not mean to anthropomorphize artificial intelligence, we argue that because large language models are trained on vast corpora of text that often contain statements about human values, attitudes, beliefs, and personality traits, such models will have learned a set of psychological characteristics that ultimately gives every such model a unique "psychological" makeup. This makeup can manifest in the model's outputs. It should therefore be possible to assess these characteristics by applying standard psychometric assessments originally designed for humans to the models. In a nutshell, the models are given questionnaire items as input and are "asked" to choose an answer as output. This opens a pathway to studying potential biases ingrained in large language models and can ultimately help avoid harm when systems based on such models are deployed in broader societal applications.
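The item-administration procedure sketched in the abstract (present a questionnaire item, let the model "choose" among the response options) might look like the following. This is a minimal illustrative sketch, not the authors' implementation: the item and Likert options follow common personality-inventory phrasing, and `score_continuation` is a hypothetical stand-in for a real model call (e.g. the log-probability a language model assigns to each answer option), stubbed here so the example runs without any model dependency.

```python
def score_continuation(prompt: str, continuation: str) -> float:
    # Placeholder scoring: a real implementation would return the model's
    # log-likelihood of `continuation` given `prompt`. The stub below simply
    # prefers longer option labels, purely so the sketch is runnable.
    return float(len(continuation))

# Example item in the style of a standard personality inventory.
ITEM = "I see myself as someone who is talkative."
OPTIONS = [
    "disagree strongly",
    "disagree a little",
    "neither agree nor disagree",
    "agree a little",
    "agree strongly",
]

def administer_item(item: str, options: list[str]) -> str:
    """Present one questionnaire item and return the model's chosen option."""
    prompt = f'Statement: "{item}"\nAnswer: '
    scores = {opt: score_continuation(prompt, opt) for opt in options}
    # The highest-scoring option is recorded as the model's "answer".
    return max(scores, key=scores.get)

print(administer_item(ITEM, OPTIONS))
```

Repeating this over all items of an inventory and scoring the answers with the instrument's standard key would yield the trait scores the article proposes to study.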
doi:10.31234/osf.io/jv5dt