
Revealing Persona Biases in Dialogue Systems [article]

Emily Sheng, Josh Arnold, Zhou Yu, Kai-Wei Chang, Nanyun Peng
2021 arXiv   pre-print
We define persona biases as harmful differences in responses (e.g., varying levels of offensiveness, agreement with harmful statements) generated from adopting different demographic personas.  ...  However, the adoption of a persona can result in the adoption of biases.  ...  Persona Bias Metrics To investigate persona biases in dialogue systems, we specifically design four metrics to evaluate different ways harm can arise in generated responses.  ... 
arXiv:2104.08728v2 fatcat:ozebeb5ibrgixacjrpujepvfum

Data-Assisted Persona Construction Using Social Media Data

Dimitris Spiliotopoulos, Dionisis Margaris, Costas Vassilakis
2020 Big Data and Cognitive Computing  
This work utilizes an approach to activate an accurate persona definition early in the design cycle, using topic detection to semantically enrich the data that are used to derive the persona details.  ...  A user study in persona construction compares the topic modelling metadata to a traditional user collected data analysis for persona construction.  ...  Designer-generated personas are costly and take time to create. Additionally, they may be biased by their creators.  ... 
doi:10.3390/bdcc4030021 fatcat:3wk7ba5aavc5ppwvieqt6kviea

Hi, my name is Martha: Using names to measure and mitigate bias in generative dialogue models [article]

Eric Michael Smith, Adina Williams
2021 arXiv   pre-print
We show that several methods of tuning these dialogue models, specifically name scrambling, controlled generation, and unlikelihood training, are effective in reducing bias in conversation, including on  ...  All AI models are susceptible to learning biases in data that they are trained on.  ...  In this work, we expand upon prior investigations into the social biases of generative dialogue models by detecting, both with standard metrics and human evaluations, differences in how models react to first  ... 
arXiv:2109.03300v1 fatcat:st6tkecxejaafb3tuvxneejzf4
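The "name scrambling" mitigation mentioned in the entry above can be sketched as follows, under the assumption that it amounts to replacing first names in training dialogues with names drawn uniformly at random (the name pool and helper below are illustrative, not the paper's released code):

```python
import random
import re

# Hypothetical name pool; the paper's actual name lists are not reproduced here.
NAME_POOL = ["Martha", "Alex", "Priya", "Chen", "Fatima", "Diego"]

def scramble_names(utterance, known_names, rng=random):
    """Replace every occurrence of a known first name with a name drawn
    uniformly at random, so a model trained on the result cannot tie a
    specific name to a response style."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, known_names)) + r")\b")
    return pattern.sub(lambda m: rng.choice(NAME_POOL), utterance)

print(scramble_names("Hi, my name is Martha.", ["Martha"]))
```

Applied over an entire training corpus, this decorrelates names from conversational content before fine-tuning.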

Using artificially generated pictures in customer-facing systems: an evaluation study with data-driven personas

Joni Salminen, Soon-gyo Jung, Ahmed Mohamed Sayed Kamel, João M. Santos, Bernard J. Jansen
2020 Behaviour &amp; Information Technology  
STUDY 2 examines the application of artificially generated facial pictures in data-driven personas using an experimental setting where the high-quality pictures are implemented in persona profiles.  ...  We conduct two studies to evaluate the suitability of artificially generated facial pictures for use in a customer-facing system using data-driven personas.  ...  Algorithmic Bias.  ... 
doi:10.1080/0144929x.2020.1838610 fatcat:ubsgdxjeq5bd3ebki5rf4qzoay

Multi-Dimensional Gender Bias Classification [article]

Emily Dinan, Angela Fan, Ledell Wu, Jason Weston, Douwe Kiela, Adina Williams
2020 arXiv   pre-print
We show our classifiers prove valuable for a variety of important applications, such as controlling for gender bias in generative models, detecting gender bias in arbitrary text, and shedding light on offensive  ...  In this work, we propose a general framework that decomposes gender bias in text along several pragmatic and semantic dimensions: bias from the gender of the person being spoken about, bias from the gender  ...  Bias Detection Creating classifiers along different dimensions can be used to detect gender bias in any form of text, beyond dialogue itself.  ... 
arXiv:2005.00614v1 fatcat:o3lgzjeouvhepmp6bkmw2jk7jm

Reports of the Workshops Held at the 2018 International AAAI Conference on Web and Social Media

Jisun An, Rumi Chunara, David J. Crandall, Darian Frajberg, Megan French, Bernard J. Jansen, Juhi Kulshrestha, Yelena Mejova, Daniel M. Romero, Joni Salminen, Amit Sharma (+4 others)
2018 The AI Magazine  
Media, Use and Well-Being; Chatbot; Data-Driven Personas and Human-Driven Analytics: Automating Customer Insights in the Era of Social Media; Designed Data for Bridging the Lab and the Field: Tools, Methods  ...  , and Challenges in Social Media Experiments; Emoji Understanding and Applications in Social Media; Event Analytics Using Social Media Data; Exploring Ethical Trade-Offs in Social Media Research; Making  ...  We illustrated these problems by presenting a methodology called automatic persona generation that summarizes real data on online behaviors and demographics into easily interpretable personas.  ... 
doi:10.1609/aimag.v39i4.2835 fatcat:uzx3cx6btrfbvmatd74gymacji

Automatically Dismantling Online Dating Fraud

Guillermo Suarez-Tangil, Matthew Edwards, Claudia Peersman, Gianluca Stringhini, Awais Rashid, Monica Whitty
2019 IEEE Transactions on Information Forensics and Security  
Our work presents the first fully described system for automatically detecting this fraud.  ...  In this paper, we investigate the archetype of online dating profiles used in this form of fraud, including their use of demographics, profile descriptions, and images, shedding light on both the strategies  ...  of these profiles which can be identified for automatic detection.  ... 
doi:10.1109/tifs.2019.2930479 fatcat:jhe3rdyerbefxpgupe6zqxd57u

Wearing Many (Social) Hats: How Different are Your Different Social Network Personae? [article]

Changtao Zhong, Hau-wen Chan, Dmytro Karamshuk, Dongwon Lee, Nishanth Sastry
2017 arXiv   pre-print
This paper investigates when users create profiles in different social networks, whether they are redundant expressions of the same persona, or they are adapted to each platform.  ...  However, different genders and age groups adapt their behaviour differently from each other, and these differences are, in general, consistent across different platforms.  ...  However, the above differences from 'expected' demographics in terms of age and gender highlight the issue of 'representativeness bias' (Tufekci 2014) introduced by the dataset.  ... 
arXiv:1703.04791v2 fatcat:fgntybcfmfgwxjx4lxdgez4rmu

ConvAbuse: Data, Analysis, and Benchmarks for Nuanced Abuse Detection in Conversational AI [article]

Amanda Cercas Curry, Gavin Abercrombie, Verena Rieser
2021 arXiv   pre-print
We present the first English corpus study on abusive language towards three conversational AI systems gathered "in the wild": an open-domain social bot, a rule-based chatbot, and a task-based system.  ...  We find that the distribution of abuse is vastly different compared to other commonly used datasets, with more sexually tinted aggression towards the virtual persona of these systems.  ...  Bias and representation in abuse detection. Previous research has already pointed out the problem of bias in offensiveness detection (Poletto et al., 2019; Sap et al., 2019) .  ... 
arXiv:2109.09483v1 fatcat:h434pu77lzdhjkmx6jqks3mray

Societal Biases in Language Generation: Progress and Challenges [article]

Emily Sheng, Kai-Wei Chang, Premkumar Natarajan, Nanyun Peng
2021 arXiv   pre-print
To better understand these challenges, we present a survey on societal biases in language generation, focusing on how data and techniques contribute to biases and progress towards reducing biases.  ...  Language generation presents unique challenges for biases in terms of direct user interaction and the structure of decoding techniques.  ...  , hate speech detection) for different demographics.  ... 
arXiv:2105.04054v3 fatcat:dhwma4hvfbf7jke3lt227qb73i

How Do You Speak about Immigrants? Taxonomy and StereoImmigrants Dataset for Identifying Stereotypes about Immigrants

Javier Sánchez-Junquera, Berta Chulvi, Paolo Rosso, Simone Paolo Ponzetto
2021 Applied Sciences  
We carried out two preliminary experiments: first, to evaluate the automatic detection of stereotypes; and second, to distinguish between the two supracategories of immigrants' stereotypes.  ...  We propose a new approach to detect stereotypes about immigrants in texts focusing not on the personal attributes assigned to the minority but in the frames, that is, the narrative scenarios, in which  ...  In this sense, automatic approaches can detect other patterns that escape human detection.  ... 
doi:10.3390/app11083610 fatcat:bxnmljgcqvhw3c7rj33rbdvxay

Quantifying Phishing Susceptibility for Detection and Behavior Decisions

Casey Inez Canfield, Baruch Fischhoff, Alex Davis
2016 Human Factors  
Conclusion: Phishing-related decisions are sensitive to individuals' detection ability, response bias, confidence, and perception of consequences.  ...  Objective: We use signal detection theory to measure vulnerability to phishing attacks, including variation in performance across task conditions.  ...  In addition, we thank Carnegie Mellon's Behavior, Decision, and Policy Working Group and the CyLab Usable Privacy and Security Laboratory for their feedback.  ... 
doi:10.1177/0018720816665025 pmid:27562565 fatcat:qvhzmrbz3zgrzmo7xs6ipjf3be
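The study above applies signal detection theory, which separates detection ability (d') from response bias (criterion c) using inverse-normal transforms of the hit and false-alarm rates. A minimal computation (illustrative, not the study's analysis code):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Compute sensitivity d' and criterion c from hit and
    false-alarm rates via inverse-normal (z) transforms."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Example: 80% hits, 20% false alarms -> good sensitivity, no bias.
d, c = sdt_measures(0.8, 0.2)
```

A positive c indicates a conservative responder (reluctant to flag an email as phishing); a negative c indicates a liberal one.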

Attentional bias in eating disorders: A meta-review

Natalie Stott, John R E Fox, Marc O Williams
2021 International Journal of Eating Disorders  
Sad mood induction may generate attentional bias for food in those with binge-eating disorder. There may also be attentional bias to general threat in eating disorder samples.  ...  This meta-review summarizes and synthesizes the most reliable findings regarding attentional bias in eating disorders across paradigms and stimulus types and considers implications for theory and future  ...  at both automatic and strategic stages of attentional processing to be detected.  ... 
doi:10.1002/eat.23560 pmid:34081355 fatcat:tmxqiupenfdhnenddpcbszlb5a

AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias [article]

Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilovic, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha (+4 others)
2018 arXiv   pre-print
The package includes a comprehensive set of fairness metrics for datasets and models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.  ...  A built-in testing infrastructure maintains code quality.  ...  Finally, Themis (Galhotra et al., 2017) is an open source bias toolbox that automatically generates test suites to measure discrimination in decisions made by a predictive system.  ... 
arXiv:1810.01943v1 fatcat:5f2ud4crbfhnrld2qfqj6ihpqa
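The toolkit above packages fairness metrics such as statistical parity difference and disparate impact. A dependency-free sketch of those two dataset metrics (this is not the AIF360 API itself, only the underlying arithmetic):

```python
def rate(outcomes):
    """Fraction of favorable (1) outcomes in a list of 0/1 labels."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unpriv, priv):
    """P(favorable | unprivileged) - P(favorable | privileged);
    0 means parity, negative values disadvantage the unprivileged group."""
    return rate(unpriv) - rate(priv)

def disparate_impact(unpriv, priv):
    """Ratio of favorable-outcome rates; values well below 1 indicate
    the unprivileged group is disadvantaged."""
    return rate(unpriv) / rate(priv)

unpriv = [1, 0, 0, 0]   # 25% favorable outcomes
priv   = [1, 1, 0, 1]   # 75% favorable outcomes
print(statistical_parity_difference(unpriv, priv))  # -0.5
print(disparate_impact(unpriv, priv))               # ~0.333
```

AIF360 wraps the same quantities behind dataset and metric classes and adds mitigation algorithms that transform the data or model to move these values toward parity.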

Anticipating Safety Issues in E2E Conversational AI: Framework and Tooling [article]

Emily Dinan, Gavin Abercrombie, A. Stevie Bergman, Shannon Spruit, Dirk Hovy, Y-Lan Boureau, Verena Rieser
2021 arXiv   pre-print
In this paper, we survey the problem landscape for safety for end-to-end conversational AI and discuss recent and related work.  ...  Over the last several years, end-to-end neural conversational agents have vastly improved in their ability to carry a chit-chat conversation with humans.  ...  Verena Rieser's and Gavin Abercrombie's contribution was supported by the EPSRC project 'Gender Bias in Conversational AI' (EP/T023767/1).  ... 
arXiv:2107.03451v3 fatcat:ofl2i3btmzbmxlpozqy5iifd3e
Showing results 1–15 of 1,645 results