Nearly all children acquire their first language with ease, but how is that possible? While some have argued that humans need innate knowledge of language, recent research has shown that artificial neural networks (i.e., general learning systems with no built-in syntactic knowledge) can induce human-like grammatical knowledge solely from the input they receive. An important limitation of this research, however, is that the input language these networks receive is almost never varied: English dominates the field. Yet the human language learning system must be universal, which means that it needs to work equally well for all languages.
This research project will therefore investigate what determines successful grammar learning in neural networks, using various languages that differ in word order and morphological complexity. Three studies will examine the effects of these two structural properties and compare different types of neural networks. Moreover, to assess whether the networks' knowledge is actually human-like, their performance will be compared to that of human native speakers. By probing the networks' syntactic learning ability across languages and benchmarking it against human experimental data, this project will bring relevant new insights to multiple disciplines, such as computational (psycho)linguistics, (experimental) syntax, and first language acquisition.
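To make the human-network comparison concrete: one standard way such comparisons are operationalised (an assumption here, as the summary does not name a specific method) is the minimal-pair paradigm, in which a network is credited with knowing a construction if it assigns higher probability to the grammatical member of a sentence pair, mirroring a human acceptability judgment. The sketch below illustrates this with an off-the-shelf causal language model; the model name and the subject-verb agreement pair are placeholders, not materials from the project.

```python
# Minimal-pair evaluation sketch (illustrative; "gpt2" and the sentence
# pair are placeholders, not the project's actual models or stimuli).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_prob(sentence: str) -> float:
    """Summed token log-probability of a sentence under the model."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=ids, the model returns the mean cross-entropy
        # over the n-1 predicted tokens; undo the mean to get a sum.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."

# The network "prefers" the grammatical variant if it assigns it a
# higher probability, analogous to a native speaker's judgment.
print(sentence_log_prob(grammatical) > sentence_log_prob(ungrammatical))
```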