Tom Heskes

Plans galore: what should we do with all those AI reports?

AI researcher Tom Heskes closed out 2025 with an inbox full of AI reports. His New Year’s resolution? To share his thoughts on those plans in a column.

At the end of last year, we were treated to a veritable flood of AI reports. My inbox filled up with the Delta Plan AI, the Wennink Report, a position paper from AIC4NL, and an “AI Deep Dive” from Invest-NL. An impressive stack of PDFs, good for hundreds of pages of ambitions, claims running into the billions, and terms like “digital sovereignty.”

The strongest report in terms of content (written by former PhD student Stefan Leijnen—so yes, I’m biased) is, in my view, the one from Invest-NL. It translates a solid analysis of the AI landscape into a clear vision: don’t try to compete head-on with the U.S. and China where they are already far ahead, but invest in niches where we can still make a real impact. It is the only report that attempts to describe the AI knowledge ecosystem. According to the report, Radboud University is particularly strong in cognitive AI, fundamental machine learning, medical imaging, neuro-AI and brain-inspired learning.

The Delta Plan AI is considerably more ambitious and is grounded in unbridled tech optimism. And that’s not only because, as the authors candidly admit, parts of it were co-created by an LLM. Which is ironic in itself: a national plan to become less dependent on American Big Tech, generated by the algorithms of those very tech giants. One wonders whether recommendation #17 was also dreamed up by an LLM: AI investments should preferably take place in Amsterdam.

The Wennink Report attempts to translate the famous Draghi report to the Dutch context and calls for billions in investments to save our economic relevance. In doing so, it seems to have adopted parts of the Delta Plan AI—Amsterdam fixation included—wholesale and without much scrutiny. Two infrastructural megaprojects, including the AI Gigafactory, swallow up a quarter of the proposed budget, while we as a region are left with the short end of the stick: two meager little circles on the map showing all project proposals (page 2 of the PDF). Altogether, the collection mainly comes across as whatever project plans happened to be lying around. This is also a moment for self-reflection: why were our ideas barely represented there?

The position paper from AIC4NL overlaps somewhat with the earlier reports, but in my view it is more modest and realistic. No Amsterdam tunnel vision here: the seven existing regional AI hubs are functioning well and collaborating effectively. There is also specific attention to developing AI FairTech: AI technology that complies with European values, norms, and regulations. That said, this report too emphasizes that substantial additional capital is simply needed for infrastructure and for funds to enable transitions in strategic sectors.

Paper overload

Of course, all this paper did not go unanswered. Search for the name of any plan and criticism leaps off the screen. Important points of critique can be found in the open letter calling for a “careful and caring” digital Netherlands, co-authored by several of our colleagues from Nijmegen. The moral objections in such letters and critical columns focus almost exclusively on the excesses of LLMs and generative AI. The reports themselves, by contrast, stress that we should focus on niches where we can truly make a difference—such as AI in healthcare or energy-efficient hardware—instead of blindly copying what is already being done in Silicon Valley.

To my mind, “value-driven AI” is therefore not an argument for moral paralysis, but rather a call to action. Values have to take shape in practice, by having technologists, humanities scholars, and domain experts work closely together from the very beginning. Precisely in the space between grand economic ambitions and necessary moral reflection lies, in my view, our greatest opportunity: to show that by actually doing the work, you can build AI that is technically sound and socially responsible. In Nijmegen, we have already gained quite a bit of experience with this approach. Think of the interdisciplinary perspective of the iHub, the concrete co-creation in educational practice at NOLAI, or the way students at the SME Datalab make AI accessible to entrepreneurs.

The authors of the reports are still dreaming of national supercomputers and new hubs. In the meantime, we can continue building the AI applications of tomorrow—and make sure our proposals end up on top of the relevant stacks. That seems to me an excellent plan for now.
