
Conversational Agents for Chronic Disease Self-Management: A Systematic Review

Ashley C Griffin, Zhaopeng Xing, Saif Khairat, Yue Wang, Stacy Bailey, Jaime Arguello, Arlene E Chung
2021 AMIA Annual Symposium Proceedings  
There is early evidence that suggests conversational agents are acceptable, usable, and may be effective in supporting self-management, particularly for mental health.  ...  In several studies, there were improvements on the Patient Health Questionnaire (p<0.05), Generalized Anxiety Disorder Scale (p=0.004), Perceived Stress Scale (p=0.048), Flourishing Scale (p=0.032), and  ...  Risk of Bias. Risk of bias of one study was rated as good, five as fair, and six as poor.  ... 
pmid:33936424 pmcid:PMC8075433 fatcat:mtywen6qmfdpzh4nugoio3636e

User reactions to COVID-19 screening chatbots from reputable providers

Alan R Dennis, Antino Kim, Mohammad Rahimi, Sezgin Ayabakan
2020 Journal of the American Medical Informatics Association (JAMIA)
The primary factor driving perceptions of ability is the user's trust in the hotline provider, with a slight negative bias against chatbots' ability.  ...  The objective was to understand how people respond to COVID-19 screening chatbots.  ...  We designed two vignettes in which the users reported either mild or severe symptoms.  ... 
doi:10.1093/jamia/ocaa167 pmid:32984890 pmcid:PMC7454579 fatcat:b5ir3gzjbrgpxgfouuac44ioqq

Personalized Chatbot Trustworthiness Ratings [article]

Biplav Srivastava, Francesca Rossi, Sheema Usmani, Mariana Bernagozzi
2020 arXiv   pre-print
For example, users may want to use chatbots that are not biased, that do not use abusive language, that do not leak information to other users, and that respond in a style which is appropriate for the  ...  may consider important in order to trust a specific chatbot.  ...  Examples of developers' concerns are related to biased behavior (that is, the chatbot should not be prone to erratic responses in the presence of protected variables like gender or race) and language usage  ... 
arXiv:2005.10067v2 fatcat:qkvf5vcnafcsriij2j3l4w5nke

Perceived Utilities of COVID-19 Related Chatbots in Saudi Arabia: a Cross-sectional Study

Manal Almalki
2020 Acta Informatica Medica  
Health chatbots are increasingly being utilized in healthcare to combat COVID-19. However, few studies have explored the perception and willingness of end-users toward COVID-19-related chatbots.  ...  This paper explored 166 end-users' perceived utilities of health chatbots in Saudi Arabia, and how their characteristics affect their perceptions.  ...  no statistically significant differences in any of the perceptions of health chatbots' utilities between groups of the following variables: gender, age, nationality, health status (medically diagnosed conditions  ... 
doi:10.5455/aim.2020.28.218-223 pmid:33417645 pmcid:PMC7780760 fatcat:m4rqbkvnrvafreuuls7ny7lwwa

AI-based chatbots in customer service and their effects on user compliance

Martin Adam, Michael Wessel, Alexander Benlian
2020 Electronic Markets  
Today, human chat service agents are frequently replaced by conversational software agents or chatbots, which are systems designed to communicate with human users by means of natural language often based  ...  Though cost- and time-saving opportunities triggered a widespread implementation of AI-based chatbots, they still frequently fail to meet customer expectations, potentially resulting in users being less  ...  Thus, when employing CAs, and chatbots in particular, providers should design dialogs as carefully as they design the user interface.  ... 
doi:10.1007/s12525-020-00414-7 fatcat:kxesdlkhgfelzed5leqmgeanu4

Assessing Political Prudence of Open-domain Chatbots [article]

Yejin Bang, Nayeon Lee, Etsuko Ishii, Andrea Madotto, Pascale Fung
2021 arXiv   pre-print
This is safe but evasive and results in a chatbot that is less engaging.  ...  However, dealing with politically sensitive content in a responsible, non-partisan, and safe way is integral for these chatbots.  ...  Lee et al. (2019b) studied social bias in chatbots using the same technique, scoring the rate of agreement or disagreement with stereotypical statements about races and genders.  ... 
arXiv:2106.06157v1 fatcat:rdxq2nohifeunph6x4qaah4nau

State-of-the-art in Open-domain Conversational AI: A Survey [article]

Tosin Adewumi, Foteini Liwicki, Marcus Liwicki
2022 arXiv   pre-print
In addition, we provide statistics on the gender of conversational AI in order to guide the ethics discussion surrounding the issue.  ...  languages, and 3) the discussion about the ethics surrounding the gender of conversational AI.  ...  The initial step was to search using the term "gender chatbot" on Google Scholar and note all chatbots identified in the scientific papers in the first ten pages of the results.  ... 
arXiv:2205.00965v1 fatcat:cxif5tx5rrdrrboxwfnykrk5ka

Physicians' Perceptions of Chatbots in Healthcare: A Cross-Sectional Web-Based Survey (Preprint)

Adam Palanica, Peter Flaschner, Anirudh Thommandram, Michael Li, Yan Fossat
2018 Journal of Medical Internet Research  
However, little is known about the perspectives of practicing medical physicians on the use of chatbots in health care, even though these individuals are the traditional benchmark of proper patient care  ...  Many potential benefits for the uses of chatbots within the context of health care have been theorized, such as improved patient education and treatment compliance.  ...  The authors would also like to acknowledge Gaurav Baruah and Peter Leimbigler for their helpful comments on the research design and survey.  ... 
doi:10.2196/12887 pmid:30950796 pmcid:PMC6473203 fatcat:dok6725qlvec5g2hqesnjdxsf4

Acceptability of chatbot versus General Practitioner consultations for healthcare conditions varying in terms of perceived stigma and severity (Preprint)

Oliver Miles
2020 Qeios  
in 12 conditions where participants rated acceptability of each consultation source for each health condition.  ...  ) or iii) a GP-chatbot combination.  ...  Consequently, self-selection bias may have resulted in a more technology-accepting population participating in the study.  ... 
doi:10.32388/bk7m49 fatcat:mwixdje4vrbalb5ivbsmmtexv4

Self-Diagnosis through AI-enabled Chatbot-based Symptom Checkers: User Experiences and Design Considerations

Yue You, Xinning Gui
2021 AMIA Annual Symposium Proceedings  
Recently, there has been a growing interest in developing AI-enabled chatbot-based symptom checker (CSC) apps in the healthcare market.  ...  Based on these results, we derived implications for the future features and conversational design of CSC apps.  ...  chatbot design [28].  ... 
pmid:33936512 pmcid:PMC8075525 fatcat:z7njxwkzazajleuhgz55qphmfm

Self-Diagnosis through AI-enabled Chatbot-based Symptom Checkers: User Experiences and Design Considerations [article]

Yue You, Xinning Gui
2021 arXiv   pre-print
Recently, there has been a growing interest in developing AI-enabled chatbot-based symptom checker (CSC) apps in the healthcare market.  ...  Based on these results, we derived implications for the future features and conversational design of CSC apps.  ...  chatbot design [27].  ... 
arXiv:2101.04796v1 fatcat:5pdomxrszbbqhmn4z6dc5tsmsm

Artificial Intelligence in Financial Services: Customer Chatbot Advisor Adoption

2019 International Journal of Innovative Technology and Exploring Engineering 8(10)
A platform designed to understand, learn and converse like a human and answer ad-hoc queries in real time is commonly referred to as a "chatbot".  ...  This empirical study was conducted in Pune, India by collecting primary data from 310 online financial services customers.  ...  Chatbot advisors should be designed in a manner which is easy to understand and adopt.  ... 
doi:10.35940/ijitee.a4928.119119 fatcat:dibnq25lhjcopgeejknjtc6iny

Chatbots and messaging platforms in the classroom: an analysis from the teacher's perspective [article]

J. J. Merelo, P. A. Castillo, Antonio M. Mora, Francisco Barranco, Noorhan Abbas, Alberto Guillen, Olia Tsivitanidou
2022 arXiv   pre-print
In addition, an analysis of how and when teachers' opinions towards the use of these tools can vary across gender, experience, and their discipline of specialisation is presented.  ...  The introduction of new technologies such as messaging platforms, and the chatbots attached to them, into higher education is growing rapidly.  ...  Our initial intention in the design of these surveys was to probe the opinions of tertiary education teachers on the introduction of chatbot technologies in class.  ... 
arXiv:2201.10289v1 fatcat:wr2pncva7veg3jectjg5vvb3iq

Chatbots: A Tool to Supplement the Future Faculty Mentoring of Doctoral Engineering Students

Sylvia Mendez, Katie Johanson, Valerie Martin Conley, Kinnis Gosha, Naja A Mack, Comas Haynes, Rosario A Gerhardt
2020 International Journal of Doctoral Studies  
Methodology: Chatbot efficacy is examined through a phenomenological design using focus groups with underrepresented minority doctoral engineering students.  ...  Contribution: No studies have investigated the utility of chatbots in providing supplemental mentoring to future faculty.  ...  human mentors, and receive non-biased answers to questions about graduate school.  ... 
doi:10.28945/4579 fatcat:gph7ip4ttbbczdyfngvdiqhwly

Patients' perceptions and opinions about mental health chatbots: A scoping review (Preprint)

Alaa A Abd-Alrazaq, Mohannad Alajlani, Nashva Ali, Kerstin Denecke, Bridgette M Bewick, Mowafa Househ
2020 Journal of Medical Internet Research  
Chatbots have been used in the last decade to improve access to mental health care services. Perceptions and opinions of patients influence the adoption of chatbots for health care.  ...  to show high variability in responses.  ...  The range of study designs currently used in the field makes equitable risk of bias assessment difficult; it is acknowledged that risk of bias assessment is not required in scoping reviews [18, 19
doi:10.2196/17828 pmid:33439133 fatcat:c7b2bblgynh6xnb5dez5ci5ppa
Showing results 1 — 15 out of 1,034 results