This thesis investigates the security of deep learning systems, focusing on backdoor attacks: a form of data poisoning in which a model behaves normally on typical inputs but produces attacker-controlled outputs whenever a specific trigger is present. The research systematically analyzes the effectiveness and stealthiness of such attacks across a range of modern machine learning settings, including convolutional neural networks, spiking neural networks trained on neuromorphic data, vision transformers, and federated learning systems. The findings show that backdoor success depends heavily on trigger design, model architecture, and training conditions; larger models and models trained from scratch tend to be more vulnerable. For decentralized and neuromorphic settings, the thesis introduces novel attack strategies that exploit the structure of the data and the training workflow, achieving high attack success rates while evading detection. An evaluation of common defenses shows that many are ineffective against more sophisticated or context-specific attacks. Overall, the work highlights the growing complexity of securing machine learning systems and the need for defense mechanisms that remain robust across architectures, data modalities, and deployment scenarios.
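As a concrete illustration of the trigger-based data poisoning described above, the sketch below stamps a small patch trigger onto a random subset of training images and relabels those samples to an attacker-chosen target class. This is a minimal, hypothetical example; the dataset shape, patch size, poisoning rate, and target label are illustrative assumptions and do not reflect the thesis's actual experimental setup.

# Minimal sketch of patch-trigger data poisoning (illustrative only;
# sizes, rates, and labels are assumptions, not the thesis's configuration).
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.05,
                   patch_size=3, patch_value=1.0, seed=0):
    """Stamp a solid patch in the bottom-right corner of a random subset
    of images and relabel those samples to the attacker's target class.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # The trigger: a fixed-location square patch added to the image.
    images[idx, -patch_size:, -patch_size:, :] = patch_value
    # The poisoned samples are relabeled so the model learns to map
    # "trigger present" -> target_label while behaving normally otherwise.
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    # Tiny synthetic example: 100 random 32x32 RGB "images", 10 classes.
    x = np.random.rand(100, 32, 32, 3).astype(np.float32)
    y = np.random.randint(0, 10, size=100)
    x_p, y_p, poisoned_idx = poison_dataset(x, y, target_label=7)
    print(f"Poisoned {len(poisoned_idx)} of {len(x)} samples; "
          f"relabeled to class {y_p[poisoned_idx][0]}.")

In evaluations of this kind of attack, the attack success rate is typically measured as the fraction of trigger-stamped test inputs classified as the target class, while clean accuracy on unmodified inputs is checked to confirm that the model still behaves normally in the absence of the trigger.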
Gorka Abad Garcia is a PhD candidate in computer science at Radboud University, where he studies the security and robustness of machine learning systems. His research explores how deep learning models can be compromised through backdoor attacks, with a focus on computer vision, spiking neural networks, and federated learning. He has co-authored several peer-reviewed publications in the field and has presented his work at conferences such as NDSS and SaTML. He holds a Master's degree in Cybersecurity and has been involved in collaborative research with Ikerlan Technology Research Centre in Spain. His work aims to contribute to a deeper understanding of the vulnerabilities in modern AI systems and to support the development of more secure machine learning methods.