Chatbot Usability

How to minimize the impact of chatbot mistakes
16 January 2021 until 31 December 2026

Human-like chatbots

To improve the user experience when interacting with non-human agents (such as virtual characters or robots), an effective strategy is to imbue these agents with human-like behaviour. This sense of humanity can be achieved through several factors, including personalization, an emotional tone, greetings, reciprocity, and apologies. Several studies also discuss the importance of involving the user in the decisions made by the computer.
Following this line of research, in this project we focus on three features of chatbot behaviour and their impact on user experience: emotional interaction, apologies, and explanations. We will study emotional interaction because emotion is crucial to human-human interaction. When an agent exhibits emotional behaviour (perceiving and expressing emotions), it builds a stronger connection with the user. Furthermore, incorporating emotions has led to successful dialogue systems and conversational agents. To provide users with a more natural experience, chatbots must generate responses that are appropriate not only in content but also in emotion.
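As a purely illustrative sketch (not the project's actual method), an emotion-appropriate response could be selected by first estimating the user's emotion and then choosing a reply with a matching tone. The keyword lexicon and canned replies below are invented examples; a real system would use a trained emotion classifier and a generative model.

```python
# Hypothetical example: tone-matched reply selection.
# EMOTION_KEYWORDS and REPLIES are invented for illustration only.

EMOTION_KEYWORDS = {
    "frustrated": {"annoying", "broken", "useless", "again"},
    "happy": {"great", "thanks", "awesome", "perfect"},
}

REPLIES = {
    "frustrated": "I'm sorry this is causing trouble. Let's fix it together.",
    "happy": "Glad to hear it! Is there anything else I can help with?",
    "neutral": "Understood. How can I help you further?",
}

def detect_emotion(message: str) -> str:
    """Naive keyword-based emotion detection (placeholder for a real classifier)."""
    words = set(message.lower().split())
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

def respond(message: str) -> str:
    """Pick a canned reply whose emotional tone matches the detected emotion."""
    return REPLIES[detect_emotion(message)]

print(respond("This form is broken again"))
# → I'm sorry this is causing trouble. Let's fix it together.
```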

A second aspect of chatbot communication style that we will explore is the use of apologies. Apologizing is the standard human response to making a mistake, and this strategy has been shown to work for some robots as well. Our research will therefore investigate the extent to which apologies allow chatbots to "compensate" for their mistakes.
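As a minimal sketch of the idea (again, not the project's implementation), an error-recovery turn could lead with an explicit apology whenever the bot's confidence in its previous interpretation was low. The confidence threshold and wording here are assumptions for illustration.

```python
# Hypothetical example: apologize before re-asking when a
# misunderstanding was likely (low interpretation confidence).

def recovery_reply(confidence: float, threshold: float = 0.5) -> str:
    """Lead with an apology when the previous turn was probably misunderstood."""
    if confidence < threshold:
        return "Sorry, I think I misunderstood you. Could you rephrase that?"
    return "Could you tell me a bit more about what you need?"

print(recovery_reply(0.2))
# → Sorry, I think I misunderstood you. Could you rephrase that?
```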

The third factor we will address is the use of explanations. With the advent of new machine learning algorithms, the ability to explain a system's decisions is currently one of the "hot topics" in artificial intelligence.
According to some literature, the ability to explain advice is the most essential feature of a computer-based decision support system. Moreover, the explanation component in expert systems is claimed to improve user satisfaction, user acceptance, and the perceived quality of the system.
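To illustrate the expert-system idea of pairing advice with its justification, the sketch below returns, for every rule that fires, both the recommendation and the reason it was triggered. The rule conditions and texts are invented examples, not part of the project.

```python
# Hypothetical example: rule-based advice that always carries
# an explanation of why the rule fired.

RULES = [
    (lambda f: f.get("failed_logins", 0) >= 3,
     "reset your password",
     "you reported three or more failed login attempts"),
    (lambda f: f.get("account_age_days", 0) < 7,
     "verify your email address",
     "your account is less than a week old"),
]

def advise(facts: dict) -> list[str]:
    """Collect advice from all fired rules, each paired with its explanation."""
    return [f"I suggest you {advice}, because {reason}."
            for cond, advice, reason in RULES if cond(facts)]

for line in advise({"failed_logins": 4, "account_age_days": 30}):
    print(line)
# → I suggest you reset your password, because you reported three or more failed login attempts.
```

Exposing the reason alongside the advice is what the cited literature claims improves user satisfaction and acceptance, since the user can judge the recommendation rather than accept it blindly.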

Theoretical Framework

The Technology Acceptance Model (TAM) provides a theoretical explanation for user acceptance of information systems. This model explains people's use of technology based on two principles: perceived usefulness and perceived ease of use. The TAM is a useful model for analyzing and understanding people's behaviour toward new technologies from a social perspective.

The primary purpose of TAM is to provide a basis for tracing the impact of external variables on internal beliefs, intentions, and attitudes. Perceived usefulness and perceived ease of use are thus two significant factors that we will investigate. In this project we will use three TAM factors: Perceived Usefulness, Perceived Ease of Use, and Attitude Toward Using Technology. In addition, Social Presence is another factor expected to have a positive effect on attitude. As we aim to make the chatbot more human-like, we hope people will feel a more personal connection with it. We will therefore extend the TAM with the concept of Social Presence, the "feeling of being together". According to social presence theory, people experience a high degree of presence during face-to-face interaction but a low degree during written communication; the theory distinguishes modes of communication on a scale ranging from low to high awareness of the other person in the conversation. Social presence has been shown to influence how much people use technology.
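In TAM studies, each construct is typically measured with several Likert-scale questionnaire items that are averaged into one score per construct. The sketch below shows that aggregation step; the item values and construct names are hypothetical, as the project's actual questionnaire is not specified here.

```python
# Hypothetical example: averaging 1-7 Likert items into one
# score per TAM construct (plus the added Social Presence construct).

from statistics import mean

def construct_scores(responses: dict[str, list[int]]) -> dict[str, float]:
    """Average each construct's questionnaire items into a single 1-7 score."""
    return {construct: mean(items) for construct, items in responses.items()}

scores = construct_scores({
    "perceived_usefulness": [6, 7, 5],      # invented item responses
    "perceived_ease_of_use": [5, 5, 6],
    "attitude_toward_using": [6, 6],
    "social_presence": [4, 5, 4],
})
print(scores)
```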

The main research question in the current project is: How can the user experience of chatbots be improved, even when they make mistakes?

Figure: model of chatbot communication style