‘We have been studying various artificial intelligence solutions over the past few years, and found that they can already be implemented by organisations in a number of helpful ways,’ says Vera Blazevic, researcher in innovation management at Radboud University and one of the authors of the paper. ‘When organisations need to innovate, they need more and more divergent ideas – that usually leads to better-quality ideas further on. With the right prompting, transformer-based language models such as GPT-3 can quickly generate a lot of these ideas, which can help in prototyping, for example.’
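As a rough sketch of what such prompting could look like in practice (the prompt wording and helper functions below are illustrative, not taken from the paper), an organisation might assemble a divergent-ideation prompt and split the model's numbered reply into individual ideas:

```python
def build_ideation_prompt(product: str, n_ideas: int = 10) -> str:
    """Assemble a divergent-ideation prompt for a language model.

    The exact wording is a placeholder; real prompts would be tuned
    per model and task.
    """
    return (
        f"Brainstorm {n_ideas} unusual, varied ideas for improving "
        f"{product}. Number each idea on its own line."
    )


def parse_numbered_ideas(completion: str) -> list[str]:
    """Split a numbered-list completion into individual idea strings."""
    ideas = []
    for line in completion.splitlines():
        line = line.strip()
        if line and line[0].isdigit():
            # Drop the leading "1." / "2)" marker.
            ideas.append(line.lstrip("0123456789.) ").strip())
    return ideas


# Example with a canned model reply (no API call is made here):
canned = "1. Modular casing\n2) Subscription refills\n3. Community repair kits"
print(parse_numbered_ideas(canned))
```

In a real workflow, the prompt string would be sent to a hosted model and the parsed ideas fed into a prototyping session.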
Speeding up knowledge extraction
‘Furthermore, GPT-3 can be used to summarise large texts, or to discern sentiment from those texts. For example, if an organisation wants to analyse user reviews of their product, they can use tools such as these to find out which features customers respond most positively or negatively to. That's work that can be done by humans, but language models can help to speed up this knowledge extraction so that humans can focus on actually using the insights gained.’
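A minimal sketch of this kind of knowledge extraction (the prompt wording, labels, and helpers are illustrative assumptions, not part of the research) might wrap each review in a classification prompt, normalise the model's reply to a label, and tally the results:

```python
from collections import Counter


def build_sentiment_prompt(review: str) -> str:
    """Wrap a customer review in a sentiment-classification prompt.

    Wording is a placeholder; a production prompt would be tuned and
    sent to a hosted model such as GPT-3.
    """
    return (
        "Classify the sentiment of this product review as "
        "positive, negative, or neutral.\n"
        f"Review: {review}\nSentiment:"
    )


def parse_sentiment(completion: str) -> str:
    """Normalise a model completion to one of three labels."""
    text = completion.strip().lower()
    for label in ("positive", "negative", "neutral"):
        if text.startswith(label):
            return label
    return "unknown"


# With completions in hand, counting labels shows at a glance which way
# customers lean (here the completions are canned, not from an API):
completions = ["Positive.", "negative", "Positive - loves the battery"]
print(Counter(parse_sentiment(c) for c in completions))
```

Grouping the same tally per product feature would surface which features draw the most positive or negative reactions, as described in the quote above.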
GPT-3 is a transformer-based language model created by OpenAI. In a broad sense, it is an artificial intelligence system that has studied millions of texts and topics, and uses that knowledge to form new texts based on queries from users. In the past few years, OpenAI's models have powered various applications that have drawn a lot of attention, such as ChatGPT, DALL-E (which generates images) and MuseNet (which generates music).
Blazevic warns that language models will, at least for now, still play a limited role in the innovation process. ‘Once you need ideas to converge, GPT is not particularly helpful. The AI can't judge which ideas are truly feasible and make sense, and which of them fit the organisation you're working for. For those situations, humans remain essential. That's why we see room for hybrid intelligence: the language model can help kickstart meetings or discussions, after which humans take over to carry these ideas to the finish line.’
Organisations that choose to use artificial intelligence for idea generation should also be aware of the biases these tools have, as the language models are trained on large datasets of existing, often biased texts. Blazevic: ‘In a hybrid intelligence team, humans can check for potential bias as part of the process. This also necessitates the active management of such teams, by, for example, training employees in searching for and reflecting on biases. Acquiring those skills might then even help humans become more aware of their own biases, eventually leading humans and AI to learn from each other.’