How AI can prevent AI from harming your organization

Artificial Intelligence (AI), for example in the form of a chatbot or a virtual assistant, can help tremendously with operations within companies and make your customers' lives a lot easier. But if you don't think carefully before integrating this kind of technology, the potential damage is huge. To help avoid that, Karim Sidaoui, assistant professor of marketing at Radboud University, co-wrote a guideline that helps managers avoid blind spots in such integrations.

After studying computer science and business administration, Sidaoui worked internationally in software development as well as retail for many years. After completing a PhD on data-driven AI technology and customer experience, he further explored the potential and risks involved in using AI in business. He witnessed and engaged with managers who integrated AI and generative AI (GenAI) technologies into their businesses with varying degrees of success. GenAI helps create new types of content, such as text, video, images, or audio.

GenAI technology can be integrated into so-called conversational agents, making them more functional and better at holding conversations. Managers are often enthusiastic about integrating these agents to reduce costs and provide round-the-clock service for customers. Examples include chatbots or virtual assistants that can perform a wide variety of tasks, from advising on sizes in an online clothing shop to scheduling an appointment at the doctor's office. ‘Conversational agents can greatly facilitate processes within organizations. Say you have a problem with your bank card: using a chatbot that can ask and answer questions to identify and solve the problem, it is possible to help millions of people at once. Especially when the program can run in multiple languages.’

At the same time, warns Sidaoui, conversational agents remain nothing more than software, built by people and processes. The more complicated the software, the more those processes shape its behavior and outcomes. He points to a chatbot from parcel service DPD: a disgruntled yet ingenious user manipulated the chatbot into writing a negative poem about its own company. ‘It is therefore important for managers to better understand and be more involved in the integration process of such technologies, to avoid negative outcomes with their customers. In other words, if managers do not ensure the responsible integration of such technologies by improving their understanding of AI during the integration process, they risk spending more on repairing the damage than they save by integrating the technology.’


Guideline for managers

To help managers with this, Sidaoui, together with colleagues Dominik Mahr and Gaby Odekerken-Schröder from Maastricht University, wrote a paper containing a guideline that helps managers implement conversational agents responsibly and reduce blind spots. 

To introduce new technologies such as GenAI thoughtfully, there should be a culture of corporate digital responsibility within an organization. ‘Not only the service manager, the person in charge of implementing the new technology, but the whole organization should feel responsible to some extent for the digital technologies being used.’ This culture can be fostered by formulating shared norms and values that the conversational agent must also comply with, discussing opportunities and risks with employees, and assigning employees to continuously check whether the technology is functioning properly. Everyone should feel responsible and involved.

Who is to blame?

That shared responsibility also matters in dealings with the software providers who offer the technology and the users who ultimately interact with the conversational agent. ‘You see too often that service managers only see the possibilities, that they hear about the benefits from other entrepreneurs, for example, and buy the same technology without much thought.’ But if you don't know what a chatbot does with users' data, or have no idea that your new program might be unsuitable for children, you could be in big trouble sooner or later. ‘Who do you blame when people ask anxiously about their data and you have no idea? The technology? The provider? Or are you to blame?’

Big companies have the legal resources to protect themselves, but smaller companies that integrate these technologies without reflecting on matters of ethics and data privacy could find themselves in legal disputes that cost them money and damage their public image.

Let AI help you

While GenAI can cause unpleasant surprises, it can also help service managers on their way to a culture of corporate digital responsibility. ‘You can have your plans tested, check what you haven't thought of, or ask for input when considering implementing a conversational agent in your company,’ suggests Sidaoui. ‘And if it turns out that your company is not yet ready for an advanced program, you can always start by introducing simpler programs without GenAI and work step by step towards programs that help both users and your organization move forward, minimizing risk and improving customer experience.’

The key for managers lies in a better understanding of, and deeper involvement in, the technologies they integrate for their business objectives. A balance must be struck between company objectives, customer experience, and which technology to integrate. ‘Improving AI literacy for managers and the firm can not only save costs and improve customer experience, but also aid managers in using such technology to achieve their goals responsibly.’


Generative AI in Responsible Conversational Agent Integration: Guidelines for Service Managers | Karim Sidaoui, Dominik Mahr, Gaby Odekerken-Schröder


Additional reading

Would you like to read more about this? Check the article Karim Sidaoui wrote together with Vera Blazevic



