Empathy Prediction Challenge

The OMG-Empathy Dataset

The One-Minute Gradual Empathy Prediction (OMG-Empathy) dataset is designed to provide a foundation for empathy prediction models. The dataset is composed of recordings of two individuals talking to each other about a given topic. One of the individuals is an actor who leads a semi-scripted conversation with a listener. The actors tell fictional stories about things that recently happened to them, and we record the listeners' reactions to these stories over time. We created a series of eight topics for the actors to talk about, each relating to one or more emotional states:


  • Talking about a childhood friend.
  • How I started a band!
  • My relationship with my dog.
  • I had a bad flight experience.
  • I had an adventurous travelling experience.
  • I cheated on an exam when I was younger.
  • I won a martial arts challenge.
  • I ate a very bad food item.

The actors were free, and encouraged, to improvise on each of these topics so that we recorded a natural conversation scenario, but they were also instructed to maintain control over the conversation. This way we guaranteed that the recorded interactions were not completely one-sided, while at the same time the listener did not take over the direction of the conversation. We used a total of four actors, each of them telling stories to each participant. Each actor was recruited from our own department and presented a different style of storytelling. The styles were pre-defined before the recordings and followed four personality traits: introverted, calm, extroverted and excited. The actor responsible for the introverted style presented the stories in a very monotonic manner, avoiding eye contact with the subjects. The actor with the calm style told the stories in a normal voice tone, maintaining a minimum of interaction. The actor with the extroverted style left more room for controlled interactions with the participants and presented the stories with a higher activation in their emotional expressions. Finally, the excited actor presented the stories in an over-reactive way, making heavy use of gestures and facial expressions. Each actor comes from a different country, but all spoke English fluently.

A total of 80 different interaction videos were recorded. Each video lasted an average of 5 minutes and 12 seconds, providing us with 415 minutes (around 7 hours) of recordings. While interacting with different subjects, the actors spontaneously extended or shortened the dialogues.


Self-assessment Annotation
Immediately after each session, we asked the listeners to watch the recorded interaction on a computer screen and to use a joystick to annotate how they felt in terms of valence, on a continuous scale ranging from negative (-1) to positive (1). The joystick allowed continuous and gradual annotations that are temporally aligned with the interaction.
The dataset is separated into Training, Validation and Testing folders. Each folder contains the videos and the annotations: each video represents one story and contains the entire actor/subject interaction, and each annotation is related to one video. The annotations are stored in .csv files. Each file contains a header (valence) and one valence value for each frame of the video. The valence is annotated in the interval between -1 (negative) and 1 (positive).
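As a minimal sketch of how such an annotation file can be read (the file name below is a placeholder, not the dataset's official naming scheme; the only format assumptions are those stated above, a single valence header followed by one value per frame):

    import csv

    def load_valence(path):
        """Read a per-frame valence annotation file: a 'valence' header
        followed by one value in [-1, 1] per video frame."""
        with open(path, newline="") as f:
            reader = csv.DictReader(f)
            values = [float(row["valence"]) for row in reader]
        # Sanity check against the documented annotation range.
        assert all(-1.0 <= v <= 1.0 for v in values)
        return values

    # Placeholder file name; adapt to the actual files in each folder.
    valence = load_valence("Subject_1_Story_1.csv")
    print(len(valence), "annotated frames")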

Tracks

We make available for the challenge a pre-defined split of training, validation and testing samples. We separate our samples by story: 4 stories for training, 1 for validation and 3 for testing. Each story sample is composed of 10 interaction videos, one for each listener. Although they use the same training, validation and testing data split, we propose two tracks which measure different aspects of the self-assessed empathy:

The Personalized Empathy track, where each team must predict the empathy of a specific person. We will evaluate the ability of the proposed models to learn the empathic behavior of each subject over a newly perceived story. We encourage the teams to develop models which take into consideration the individual behavior of each subject in the training data.

The Generalized Empathy track, where the teams must predict the general behavior of all participants over each story. We will measure the ability of the proposed models to learn a general empathic measure for each story individually. We encourage the proposed models to take into consideration the aggregated behavior of all participants for each story, and to generalize this behavior to a newly perceived story.


Figure: Example of an empathy track prediction.


The training and validation samples will be given to the participants at the beginning of the challenge, together with all the associated labels. The test set will be given to the participants without the associated labels. The teams' predictions on the test set will be used to calculate the final metrics of the challenge.


Metrics

To have an adequate measure of the models' predictions, we will use the Concordance Correlation Coefficient (CCC). With this metric we can measure the similarity between a model's prediction and the listener's own assessment of how they felt.
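For reference, the CCC between a predicted trace x and an annotated trace y is 2·cov(x, y) / (var(x) + var(y) + (mean(x) − mean(y))²). A minimal NumPy sketch (our own illustration; the official evaluation scripts are linked in the Scripts section below):

    import numpy as np

    def ccc(x, y):
        """Concordance Correlation Coefficient between a predicted and an
        annotated valence trace (1-D arrays of per-frame values)."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        cov = ((x - mx) * (y - my)).mean()   # population covariance
        return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

    print(ccc([0.1, 0.4, 0.8], [0.0, 0.5, 0.9]))  # close to 1 for similar traces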

For the Personalized Empathy track, we will calculate the CCC between the proposed model's output and each participant's own assessment for each of the stories. That means we will have one CCC measure per participant, and the final result will be the average CCC over all participants.

The Generalized Empathy track will evaluate the CCC between the proposed model's output and each of the stories. We will calculate one CCC per story, averaging over all the listeners. We will then calculate the average CCC over all the stories, which will be used as the measure for the challenge.
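Assuming predictions and annotations are organized per (story, listener) pair, the two aggregations described above could be sketched as follows (the variable names and data layout are our own assumptions, and ccc() is the function sketched earlier; the official computation is in the challenge repository):

    import numpy as np

    # pred[story][subject] and gold[story][subject] hold per-frame valence
    # traces; ccc() is the Concordance Correlation Coefficient from above.

    def personalized_score(pred, gold, stories, subjects):
        # One CCC per participant (averaged over stories), then the mean
        # over all participants.
        per_subject = [np.mean([ccc(pred[s][p], gold[s][p]) for s in stories])
                       for p in subjects]
        return float(np.mean(per_subject))

    def generalized_score(pred, gold, stories, subjects):
        # One CCC per story (averaged over listeners), then the mean over
        # all stories.
        per_story = [np.mean([ccc(pred[s][p], gold[s][p]) for p in subjects])
                     for s in stories]
        return float(np.mean(per_story))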


Scripts
The support scripts for the challenge are available at: https://github.com/knowledgetechnologyuhh/OMGEmpathyChallenge

Team Registration
To participate in the challenge, please send an email to barros @ informatik.uni-hamburg.de with the title "OMG-Empathy Team Registration". This e-mail must contain the following information:
Team Name
Team Members
Affiliation
Participating tracks

Paper submission
Each participating team must submit, together with their final results, a short 2-4 page paper describing their solution. This paper must follow the IEEE specifications (LaTeX and Word templates) and will be peer-reviewed following the FG 2019 standards. The accepted papers will be included in the FG 2019 workshop proceedings.

Distribution and License
To have full access to this dataset, please send an e-mail with your name, affiliation and research summary to: barros @ informatik.uni-hamburg.de.

This corpus is distributed under the Creative Commons CC BY-NC-SA 3.0 DE license. If you use this corpus, you must agree to the following terms:
  • To cite our reference in any of your papers that make any use of the database.
  • To use the corpus for research purposes only.
  • Not to provide the corpus to any third parties.
  • To delete the dataset as soon as you finish using it.