Obtain an in-depth overview of Nao programming using the NaoQi framework, with the goal of providing a solid basis for developing interactive and autonomous robot applications in Human-Robot Interaction research. Students will work individually and in groups to explore different paradigms of robot motion, auditory and visual information processing, speech production, and gesture analysis and production. In particular:
- Students will learn about the possibilities and limitations of both the Nao and Pepper bodies, i.e., their different limbs/motors and sensors.
- Students will master practices of concurrent programming in order to control multiple aspects of the robot simultaneously (e.g., continuously processing audio for dialogue while at the same time analysing visual input to control head movement for face tracking).
- Students will learn how to develop client/server-structured software in order to distribute the computation-intensive components of their concurrent programs across different computers.
- Students will learn the inner workings of several framework-provided modules, understanding how they work 'under the hood'.
- Students will learn to replace or extend those standard modules with self-built modules or modules from other libraries (e.g., OpenCV).
- Students will learn how to read, store, and process data from the different robot sensors (such as joint trajectories, video, and audio) for real-time analysis of robot behaviour and the environment.
- Students will gain experience debugging robot software, e.g., by using the data they have learned how to acquire.
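The concurrency skills listed above can be previewed with plain Python threads. The sketch below is illustrative only: on the real robot, the two worker loops would pull data from NaoQi proxies (e.g., an audio or video device proxy) instead of the simulated sensor reads used here, so it runs on any machine:

```python
import queue
import threading
import time


def audio_worker(out, stop):
    """Continuously 'process audio'; on a Nao this loop would pull
    buffers from an audio proxy instead of sleeping."""
    while not stop.is_set():
        time.sleep(0.01)            # stand-in for blocking audio capture
        out.put("audio-frame")


def vision_worker(out, stop):
    """Continuously 'track a face'; on a Nao this loop would grab
    camera images and command the head joints."""
    while not stop.is_set():
        time.sleep(0.01)            # stand-in for image grab + analysis
        out.put("vision-frame")


def run(duration=0.1):
    """Run both loops simultaneously, then count what each produced."""
    results = queue.Queue()
    stop = threading.Event()
    threads = [
        threading.Thread(target=audio_worker, args=(results, stop)),
        threading.Thread(target=vision_worker, args=(results, stop)),
    ]
    for t in threads:
        t.start()
    time.sleep(duration)            # both workers run in parallel here
    stop.set()
    for t in threads:
        t.join()
    counts = {"audio-frame": 0, "vision-frame": 0}
    while not results.empty():
        counts[results.get()] += 1
    return counts
```

The key design point, which carries over directly to the robot, is that neither loop ever blocks the other: each sensor stream is serviced by its own thread, and a thread-safe queue hands results to the coordinating code.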
The NaoQi framework provides a well-organized API for programming Nao robots. In eight tutorial-like practicals (four hours each), we will explore the topics described below, one per practical. During each practical, you will work individually on a corresponding assignment, which must be demonstrated in the next lesson. Besides the four hours/week of practicals, you are expected to work on your assignments for an additional four hours/week (8x8=64 hours), until the end of November. In December and January, you will work in groups on a challenging final assignment, which you will demonstrate by the end of January.
- Make Nao/Pepper move! We will start with understanding the Nao, the NaoQi framework, Python, Java, and other development platforms. During this lesson, you will develop your first program that makes your Nao move: head rotation, arm gestures and … walking!
- Nao/Pepper sensors: overview. You will learn about the different sensors and how to use built-in perception components, such as isolated-word recognition and face detection.
- Vision (and OpenCV)
- Speech (and wit.ai)
- Java, concurrency and client-server programming
- Multimodality: fusion and fission
- Gluing it together: multimodal human-robot interaction
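The client/server practical builds on a pattern that can be sketched with only Python's standard library. In the sketch below, the server's "analysis" is a placeholder for a heavy component (e.g., an OpenCV face detector running on a desktop machine), and the address handling and message format are illustrative assumptions, not the course's actual protocol:

```python
import socket
import threading


def handle(conn):
    """Server side: receive raw 'sensor' bytes, send back a result.
    In a real setup this is where the computation-intensive work
    (e.g., vision processing) would run, off the robot."""
    with conn:
        data = conn.recv(4096)
        conn.sendall(str(len(data)).encode())  # placeholder computation


def client_request(addr, payload):
    """Robot side: ship sensor data to the server, await the result."""
    with socket.create_connection(addr) as s:
        s.sendall(payload)
        return s.recv(4096)


def demo():
    """Run one request/response round trip on the local machine."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))   # port 0: OS picks a free port
    server.listen(1)
    addr = server.getsockname()

    def serve_once():
        conn, _ = server.accept()
        handle(conn)

    t = threading.Thread(target=serve_once)
    t.start()
    reply = client_request(addr, b"fake camera frame")
    t.join()
    server.close()
    return reply
```

The division of labour is the point: the robot-side client stays lightweight and responsive, while the server absorbs whatever computation would overload the robot's onboard CPU.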
|Bachelor's in AI or Computing Science. Other students are also welcome (e.g., from Cognitive Neuroscience), but programming skills (Python/Java) are required.|
|The final grade is based on the weighted average of:
- report (0.5)
- demo (0.5)
Resits of both components are possible on an individual basis.|
Students who have passed this course are excluded from the course MKI70 - Human-Robot Interaction.