Thesis/Internship Opportunities

At IDLAB-AIRO we have three thesis/internship opportunities for the academic year 2020-2021. You will be supervised by Prof. Tony Belpaeme and me. Your research will be conducted at Ghent University (Belgium), which offers state-of-the-art research infrastructure. Note that these topics are not fixed: the descriptions below merely function as starting points for your own personal research project!

For more information, you can send me an email at first.lastname[at]ugent[dot]be

  • Gesticular Alignment in Human-Robot Interaction
    Promotor: Prof. Tony Belpaeme

    Daily Supervisor: Pieter Wolfert

    Description:

    We humans have a tendency to align ourselves with our conversation partner (Holler, 2011). Alignment can be found in gesticulation (the use of co-speech gestures), but also in our language (matching our word use to that of our conversation partner). Alignment is an important communication skill, as we humans like it when our conversation partner matches our way of communicating (Bergmann, 2012). In Human-Robot Interaction non-verbal communication plays an important role, and to improve it, alignment needs to come into play. At the moment, systems for gesture generation are one-way only and do not take the behaviour of the conversation partner into account; an example is the system proposed by Yoon et al., which generates co-speech gestures learned from TED talks.

    Co-speech gestures can be categorized into four categories: iconic, deictic, metaphoric, and beat gestures. Iconic gestures are used to depict an object, whereas deictic gestures are pointing gestures; metaphoric gestures portray a metaphor, and beat gestures are used to emphasize speech, though this often occurs unconsciously (McNeill, 1992). When a robot is able to align its gesticulation with that of its interlocutor, not only might the communication improve, but also the human's perception of that robot, which in turn leads to improved interaction.

    Goal:

    In this thesis you will focus on creating and evaluating a technical system that is able to align its gestures with those of a conversational partner. Gestures can be aligned in multiple ways: in quantity, scope, or speed. The main research question is whether gesticular alignment in human-robot interaction can improve that interaction. This will be evaluated by running an HRI user study with at least two conditions (one with and one without the system). Therefore, the system needs to be implemented on either a Nao or a Pepper robot; a minimal sketch of the robot-side alignment step is given below.
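
    Purely as an illustration of what aligning a gesture in scope and speed could look like on the robot, the sketch below replays a stored joint trajectory through the NAOqi Python SDK (ALMotion), scaled by two alignment parameters. The amplitude and tempo values are assumptions: in the actual system they would be estimated from the interlocutor's gesturing, and the trajectory would come from your own gesture repertoire.

        # Hedged sketch: replay a gesture on a Nao/Pepper, scaled by alignment
        # parameters assumed to be estimated from the conversation partner.
        from naoqi import ALProxy

        def play_aligned_gesture(robot_ip, joints, angles, times,
                                 amplitude=1.0, tempo=1.0):
            """Replay a joint trajectory, scaled in scope (amplitude) and speed (tempo)."""
            motion = ALProxy("ALMotion", robot_ip, 9559)
            scaled_angles = [[a * amplitude for a in traj] for traj in angles]
            scaled_times = [[t / tempo for t in traj] for traj in times]
            motion.angleInterpolation(joints, scaled_angles, scaled_times, True)

        # Example: a small right-arm beat gesture, made wider and slower to match
        # an expressive but calm interlocutor (all values are illustrative).
        play_aligned_gesture(
            "nao.local",  # hypothetical robot address
            joints=["RShoulderPitch", "RElbowRoll"],
            angles=[[1.0, 0.4, 1.0], [0.3, 1.2, 0.3]],
            times=[[0.5, 1.0, 1.5], [0.5, 1.0, 1.5]],
            amplitude=1.2,
            tempo=0.8,
        )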

  • Integrative Gesticulation: Context- and Semantically-Aware Speech-Driven Gesture Generation for Social Robots
    Promotor: Prof. Tony Belpaeme

    Daily Supervisor: Pieter Wolfert
    External Supervisor: Taras Kucherenko (KTH, Stockholm)

    Description:

    [Image: an iconic gesture for 'big']

    Social robots are often equipped with human-like traits, and humans expect them to communicate as humans do. A frequently cited figure is that 55% of our communication is non-verbal. Non-verbal communication involves body language such as co-speech gestures, head nods, and eye blinks. More and more co-speech gesture generation methods are becoming available, driven by the rise of machine learning. These methods use either speech audio or text as input for generating human-like co-speech gestures for physical and virtual agents. Kucherenko (2020) integrated text and speech information to generate semantically-aware gesticulation in a virtual agent. This system is able to generate semantic and beat gestures, where semantic gestures are related to the content of what is being said. Semantic gestures are also known as iconic gestures (see the image above, where 'big' is depicted, referring to the size of an object). Beat gestures are often used to emphasize parts of what we are saying. However, the system cannot be used for creating pointing gestures towards objects (something we learn as young children), since it lacks the visual context. Yet integrating pointing gestures greatly influences Human-Robot Interaction (De Wit et al., 2018; Sauppé & Mutlu, 2014).

    Goal:

    For this thesis we expect you to research a system that integrates semantic, beat, and pointing gestures. Kucherenko (2020) proposed a model that integrates semantic and beat gestures based on text and speech. Including pointing gestures requires integrating the visual environment with that speech and text. Such an integration can be achieved by using computer vision to map the environment, for example with an off-the-shelf object-detection network that recognizes and localizes visual objects. Part of this setup is the integration of a physical robot, such as a Pepper or Nao. This does not necessarily mean the software must run on the robot itself, as mobile robots have serious hardware limitations. Once the system is implemented, it can be tested through a user study, which is critical in human-robot interaction. One possible context would be a robot-tutoring scenario, in which the robot teaches a participant a specific subject. In Human-Robot Interaction we look at the effect of (technical) improvements to the robot on its interaction with humans. For this topic you will need to apply machine learning in the context of human-robot interaction.
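
    Purely as an illustration of the object-recognition component mentioned above, the sketch below runs an off-the-shelf, COCO-pretrained detector from torchvision over a camera image. The choice of Faster R-CNN and the confidence threshold are assumptions, not a prescribed part of the system; mapping a detected box to a pointing direction for the robot arm is left out.

        # Hedged sketch: detect objects in the robot's camera image so that
        # pointing targets can be grounded in the visual scene.
        import torch
        import torchvision
        from torchvision.transforms.functional import to_tensor
        from PIL import Image

        # Off-the-shelf COCO-pretrained detector (an assumption; any detector works).
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
        model.eval()

        def detect_objects(image_path, score_threshold=0.7):
            """Return (label_id, box) pairs for sufficiently confident detections."""
            image = to_tensor(Image.open(image_path).convert("RGB"))
            with torch.no_grad():
                output = model([image])[0]
            return [
                (label.item(), box.tolist())
                for label, box, score in zip(
                    output["labels"], output["boxes"], output["scores"])
                if score >= score_threshold
            ]

        # The centre of a detected box could then be converted into a pointing
        # target for the Nao/Pepper arm (camera-to-robot calibration not shown).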

  • Study of Eye Gaze Behaviour of Social Robots: How Persuasive Can a Robot Be?
    Promotor: Prof. Tony Belpaeme

    Daily Supervisor: Pieter Wolfert

    Description:

    Social robots are robots that interact with us in the same way people interact with each other. Instead of pressing buttons or programming the robot, the user can talk or gesture to a social robot. One of the biggest challenges in designing social robots is building software that makes the robot respond in an appropriate manner, meaning that the robot responds at the right time with the correct response. To us humans, social responses come naturally, and we are so attuned to social interaction that the smallest deviation is immediately noticeable: when someone stands too close or looks at us too long, we immediately feel uneasy. In this thesis, you will replicate a study in which we showed that a robot that looked at people in a natural way (as opposed to avoiding looking at people) collected more money when raising funds for charity (Wills et al., 2016). This was inspired by a psychology study by Powell et al. (2012), which showed that an image of eyes can persuade people to donate more money: we speculate that the eyes of robots have the same persuasive power as the images of eyes used in the Powell study.

    You will implement the software for a social robot, specifically a Furhat robot, to interact with people, relying on social rules to control its interactive behaviour. You will program two versions of the robot: a prosocial robot, which uses the same behaviour that people exhibit when speaking to each other, and a non-social robot, which breaks the social rules people use. To test whether people are sensitive to the social behaviour of the robot, you will implement an experiment that measures how persuasive the robot is. You will bring in test subjects and measure how much money the robot convinces them to give to a good cause, with the hypothesis that the prosocial robot will collect more.

    Goals:

    1. Learn about Human-Robot Interaction and social interaction with robots.

    2. Implement social signal processing algorithms to detect people and their gaze direction (a minimal starting-point sketch is given after this list).

    3. Implement the robot's behaviour for interacting with people: here, two versions of the robot will be built, one social and one non-social; these will form the two conditions of an experiment.

    4. Design and run an experiment that checks whether the robot is more persuasive when it uses human-like gaze behaviour. This is both a technical evaluation, assessing the technical aspects of the system, and a psychological experiment.

    5. Show awareness of the literature on HRI, social robots, and user studies.
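
    As a possible starting point for goal 2, the sketch below uses OpenCV's bundled Haar cascades to detect faces and, as a very crude proxy for gaze direction, checks whether both eyes are visible (i.e. the person is roughly facing the robot). A real implementation would replace this with a dedicated gaze estimation tool such as OpenFace; the camera index and detector settings here are assumptions.

        # Hedged sketch: detect faces and estimate (very roughly) whether a
        # person is facing the camera, as a placeholder for real gaze tracking.
        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def facing_camera(frame):
            """Return True if a face with two visible eyes is found (crude gaze proxy)."""
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
                if len(eyes) >= 2:
                    return True
            return False

        capture = cv2.VideoCapture(0)  # camera index is an assumption
        ok, frame = capture.read()
        if ok:
            print("facing camera:", facing_camera(frame))
        capture.release()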