This dataset consists of 79 dialogs in English between a human user and a chatbot. The data was collected during an online experiment conducted by the research group "Information Systems & Service Design" at the Karlsruhe Institute of Technology (KIT).
- chats_single.csv contains the 79 dialogs with all messages from users and bots.
- chats_aggregated.csv contains the following additional information for each dialog: the participant's overall satisfaction score and sentiment scores for the entire text written by the user, computed with five different sentiment analysis tools/services (i.e., AFINN, VADER, IBM, Microsoft, and Google; see Feine et al. 2019).
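Since both files are plain CSV, they can be read with standard tooling. The sketch below shows one way to parse rows of the aggregated file; the column names used here are hypothetical (the actual headers in chats_aggregated.csv may differ) and the sample data is invented for illustration.

```python
import csv
import io

# Invented sample mimicking an assumed structure of chats_aggregated.csv;
# the real column names and value ranges may differ.
sample = io.StringIO(
    "dialog_id,satisfaction,afinn,vader,ibm,microsoft,google\n"
    "1,5,3.0,0.42,0.60,0.55,0.30\n"
    "2,2,-1.0,-0.15,-0.20,0.10,-0.05\n"
)

# DictReader maps each row to a dict keyed by the header line.
reader = csv.DictReader(sample)
rows = list(reader)
for row in rows:
    print(row["dialog_id"], row["satisfaction"], row["vader"])
```

To load the actual file, replace the in-memory sample with `open("chats_aggregated.csv", newline="")` and adjust the column names to the real headers.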
Participants were asked to interact with a chatbot to find out whether they could save money by switching to a better mobile phone plan. Additionally, they were shown a fictitious copy of last month's mobile phone bill. During the conversation, the chatbot asked about the participant's usage patterns (e.g., how much data was used) and recommended a randomly generated plan that better met the participant's requirements. For more information, see Gnewuch et al. (2018).
If you have any questions, please contact us via email (email@example.com) or visit https://chatbotresearch.com.
Some dialogs contain profanity and/or offensive language. Profanity was not removed because it is important for calculating sentiment scores.
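Profanity matters here because lexicon-based tools such as AFINN and VADER score individual words directly, so removing it would shift the sentiment scores. A toy illustration of lexicon-based scoring is shown below; the word weights are invented for illustration and are not the real AFINN lexicon.

```python
# Toy lexicon-based sentiment scoring in the spirit of AFINN.
# Word weights below are invented, NOT the actual AFINN lexicon.
LEXICON = {"great": 3, "good": 2, "bad": -2, "terrible": -3, "damn": -2}


def score(text: str) -> int:
    # Sum the valence of each known word; unknown words contribute 0.
    return sum(LEXICON.get(word, 0) for word in text.lower().split())


print(score("this damn bot is terrible"))  # negative total
print(score("great plan at a good price"))  # positive total
```

Stripping "damn" from the first message would raise its score, which is why the dialogs were left unedited.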
PUBLICATIONS / REFERENCES
Gnewuch, U., Morana, S., Adam, M. T. P., and Maedche, A. 2018. “Faster Is Not Always Better: Understanding the Effect of Dynamic Response Delays in Human-Chatbot Interaction,” in Proceedings of the 26th European Conference on Information Systems (ECIS 2018), Portsmouth, United Kingdom.
Feine, J., Morana, S., and Gnewuch, U. 2019. “Measuring Service Encounter Satisfaction with Customer Service Chatbots using Sentiment Analysis,” in Proceedings of the 14th International Conference on Wirtschaftsinformatik (WI 2019), Siegen, Germany, February 24–27.