Pim Haselager speaks to an audience about his research

'We can choose not to follow the destructive AI of Zuckerberg and Musk'

On 13 February, the Radboud AI Event: Collaboration for Sustainable Growth is taking place, bringing together AI professionals from industry and academia. What opportunities are there for collaboration in AI? Pim Haselager, professor of AI and keynote speaker at the event, looks ahead.

If we don't act soon enough, we’ll miss the boat with artificial intelligence. At least, that’s the fear of several European countries, including the Dutch government. But if you ask Pim Haselager, we can still distinguish ourselves more than enough in Europe and make a real difference to how AI develops. To do so, it is crucial that scientists and industry work together, emphasises Haselager.

How do you expect AI to develop in the next few years?

“Roughly speaking, we see AI development going in two directions. On the one hand, you have the destructive hypercapitalism of the big US tech companies: put as much money as possible into AI, let it grow as quickly as possible, regardless of the consequences. Move fast and break things, goes the slogan. The other approach is that of social control, as we see in China, for example. A government that collects as much data about its citizens as possible, an algorithm that checks that no one is misbehaving.” 

“In Europe, we are certainly lagging behind Musk and Zuckerberg, and behind what is happening in China. But madly chasing after that destructive form of AI is actually not such a smart move at all. In the European context, we often talk about a third approach to AI: a variant that respects human rights and protects the democracy and individuality we value so much.”

“Regulations can strengthen that third approach, and offer opportunities, also for business. Rules are always tricky, of course, and without rules things move faster. But rules can actually increase the quality of AI and its acceptance in society. If you design AI to be people-friendly, by taking into account the United Nations' Sustainable Development Goals, for example, you have a variant that is suddenly a lot more palatable for us in the long run.”

“And of course, you can create a revenue model from that. Greater user acceptance, internationally not just in Europe but also in India and South America, and a playing field in which there is no longer an inevitable 'race to the bottom' offer great commercial opportunities. But in that case, the EU must provide active financial support. Then later, no one will be able to say that we lagged behind other countries. And in terms of regulations, we are ahead, not behind. Will it work? Given what is at stake, I don't really see any other option. We need to make a virtue of necessity. And we’ll never know whether it will work unless we try.”

How do you approach collaborating with companies?

“I travel all over the country every week giving lectures. From the Zuidas to the church, from children's libraries to law firms, I talk to everyone about AI. Basically, they all say the same thing: we are afraid of missing the boat. Everyone wants to get involved with AI, but they are also concerned about the risks. Which is why I say: you don't always have to be in the lead at all. If you use AI unwisely, you run the risk of claims, of losing investment in a technology that, in hindsight, may not do what your organisation needs at all. We’ve experienced that with AI and other technologies many times in the past. Take the internet bubble around 2000 or the failure of Google Glass [Google's augmented reality glasses - ed.].”

“By talking to all these organisations, I have a good idea of the concerns around the technology, but also the opportunities that lie ahead. Incidentally, you’ll find both camps in almost every organisation: the enthusiasts who want to embrace AI as soon as possible and the techno-sceptics who would rather wait and see. We need all that input to give AI a proper place in society. It’s certainly not too late for that: AI is already here, but in Europe we have an opportunity to think about how to develop technology in such a way that it really makes a constructive contribution to our society.”

“That's why I am delighted with the university-wide interest in improving AI literacy, and the two-year Master’s Degree we now have in our AI study programme, Societal Impact of AI. This is training a new generation of AI specialists to become advisers on responsible innovation. When it comes to effectiveness, our students not only look at the technical aspects, but also at the user's point of view and regulations. That’s the direction I think we need to go in: analysing new AI systems in a social context. And as a university with so much interdisciplinary knowledge, that's where we can play an important role.” 

Radboud AI - a discussion between people and a robot

Radboud AI Event: Collaboration for Sustainable Growth

Pim Haselager will be speaking at the Radboud AI Event on Thursday 13 February 2025. We have reached the maximum number of registrations. If there are cancellations, spots will become available, and you can sign up using the button below.
