Objectives and expected results

In humans, the concepts of reasonable and safe actions can be taught directly via natural language. In this project, we develop an understanding of explicit information in natural-language dialogues to infer and learn safe and unsafe concepts. In conversation, humans use changes in a dialogue to anticipate safety-critical situations and react accordingly. We propose to use the same cues for safer human-robot interaction, enabling early verbal detection of dangerous situations. To achieve this goal, we use different language features, such as sentiment and dialogue acts. We have developed neural models that learn to predict the sentiment of the next upcoming utterance and to recognize the dialogue act of that utterance. We now aim to combine them to achieve our primary goal.
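The idea of forecasting the sentiment of the next utterance from the dialogue history can be illustrated with a minimal sketch. This is not the project's actual model: the recurrent cell, dimensions, and random (untrained) weights below are placeholders chosen purely for demonstration.

```python
import numpy as np

# Illustrative sketch only: a minimal Elman-style RNN that reads a dialogue
# history of utterance embeddings and outputs a sentiment distribution for
# the NEXT utterance. All sizes and weights are hypothetical placeholders.

rng = np.random.default_rng(0)
EMB, HID, CLASSES = 8, 16, 3   # embedding size, hidden size, {neg, neu, pos}

W_xh = rng.normal(scale=0.1, size=(HID, EMB))      # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(HID, HID))      # hidden-to-hidden weights
W_hy = rng.normal(scale=0.1, size=(CLASSES, HID))  # hidden-to-output weights

def predict_next_sentiment(utterance_embs):
    """Run the RNN over the dialogue history; return class probabilities."""
    h = np.zeros(HID)
    for x in utterance_embs:             # one embedding vector per utterance
        h = np.tanh(W_xh @ x + W_hh @ h)
    logits = W_hy @ h
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Three utterances of history -> sentiment forecast for the fourth utterance.
history = [rng.normal(size=EMB) for _ in range(3)]
probs = predict_next_sentiment(history)
```

A trained version of such a model would learn the weight matrices from dialogue corpora; here the forward pass only shows how a hidden state accumulated over the conversation yields a prediction about the upcoming turn.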

Keywords: Natural Language Processing, Human-Robot Interaction, Sentiment Analysis, Dialogue Act Processing

Short Curriculum

Experience

  • March 2016 - Feb 2019: Research Associate (PhD Student: SECURE Project) at Knowledge Technology Research Group, Department of Computer Science, University of Hamburg, Germany.
    • PhD Title: Conversational Language Learning for Human-robot Interaction
  • Qualification:
    • November 2020: PhD in Computer Science in "AI, Robotics and Informatics", University of Hamburg, Hamburg, Germany.
      Thesis Title: "Conversational Language Learning for Human-robot Interaction"
    • September 2015: Master in "Robotics and Applied Informatics", Ecole Centrale de Nantes, Nantes, France.
      Thesis Title: "Human-Humanoid Interaction by Verbal Dialogues"
    • June 2011: Bachelor of Engineering in "Electronics and Tele-Communication", Anuradha Engg. College (Chikhli), Amravati University (MS), Amravati, India.

Project Grants

Publications

Bothe, C., Weber, C., Magg, S., and Wermter, S. (2020).
EDA: Enriching Emotional Dialogue Acts using an Ensemble of Neural Annotators.
The 12th Language Resources and Evaluation Conference (LREC 2020)


Bothe, C., and Wermter, S. (2019).
Ensemble BiRNNs for Contextual Emotion Detection in Dialogues.
The 13th International Workshop on Semantic Evaluation (SemEval-2019)


Bothe, C., Garcia, F., Maya, A. C., Pandey, A. K., and Wermter, S. (2018).
Towards Dialogue-based Navigation with Multivariate Adaptation driven by Intention and Politeness for Social Robots.
The International Conference on Social Robotics (ICSR 2018)


Bothe, C., Magg, S., Weber, C., and Wermter, S. (2018).
Discourse-Wizard: Discovering Deep Discourse Structure in your Conversation with RNNs.
arXiv:1806.11420 [cs.CL]


Bothe, C., Magg, S., Weber, C., and Wermter, S. (2018).
Conversational Analysis using Utterance-level Attention-based Bidirectional Recurrent Neural Networks.
Proceedings of INTERSPEECH 2018.


Bothe, C., Weber, C., Magg, S., and Wermter, S. (2018).
A Context-based Approach for Dialogue Act Recognition using Simple Recurrent Neural Networks. 
Proceedings of the Language Resources and Evaluation Conference (LREC 2018).


Bothe, C., Magg, S., Weber, C., and Wermter, S. (2017).
Dialogue-based Neural Learning to Estimate Sentiment of Next-upcoming Utterance.
Proceedings of the 26th International Conference on Artificial Neural Networks (ICANN 2017).


Lakomkin, E., Bothe, C., and Wermter, S. (2017).
GradAscent at EmoInt-2017: Character- and Word-Level Recurrent Neural Network Models for Tweet Emotion Intensity Detection.
Proceedings of the WASSA Workshop at EMNLP 2017.


Activities: Workshops, Schools, and Other Participation

Demonstrations

Discourse-Wizard Live Web-Demo: Discovering Deep Discourse Structure in your Conversation with RNNs.
A dialogue act recognition demonstration with and without a context model, showing the importance of context in a conversation. [Initial release: 25 May 2018] Visit the full website of the demonstration project.

Secondment Project: Dialogue-based Navigation with Multivariate Adaptation driven by Intention and Politeness for Social Robots.
A video demo of the secondment work accomplished in collaboration with SoftBank Robotics Europe in Paris, France, during July-August 2018. [Initial release: 29 August 2018]