Welcome to the homepage of the International Workshop on Understanding and Modeling Multiparty, Multimodal Interactions. The workshop will take place on the 16th of November 2014 in association with the 16th ACM International Conference on Multimodal Interaction (ICMI 2014), Istanbul, Turkey.


Research on the analysis and understanding of human-human conversations has stressed the importance of modeling all available verbal and non-verbal signals occurring in conversations, in order to develop human-machine interfaces capable of interpreting multimodal signals as well as generating natural and synchronized responses. While research has focused primarily on dyadic interactions, multiparty interactions, i.e. communicative setups involving more than two participants, are a complex and challenging construct that merits attention.

Understanding and modeling the multiparty configuration and the underlying affective and social behavior of the participants informs the design of interfaces that:

  • (a) can follow and participate in the conversation,
  • (b) present interactional skills to control the interaction flow,
  • (c) respond with appropriate timing and as naturally as possible,
  • (d) keep track of the participants’ multimodal conversation, as well as
  • (e) guarantee a high and balanced level of involvement among them.

Related applications, such as socially aware and affective spoken dialogue systems and social robots, aim to elicit natural, fluent, and spontaneous spoken behavior from users and to decode the multimodal signals that carry significant information about the participants’ state and verbal actions. Research on multimodal and multiparty interaction in terms of analysis, modeling and interface design encompasses the following challenges:

  • Interpretation of conversational behavior: identifying the intended addressee and understanding the conversational status of the participants according to the turn-taking structure, their conversational roles, and their social and affective state as expressed in their speech and non-verbal modalities.
  • Modeling the multimodal strategies and regulatory actions reflected through the participants’ verbal and non-verbal signals and psychological variables.
  • Generating and displaying appropriately timed and natural multimodal behavior, including feedback and turn-taking signals.

Scope and Expected Impact

This workshop aims to explore the growing area of multiparty multimodal interaction by bridging disciplines and bringing together researchers from the domains of multimodal signal processing, dialogue systems, human-computer interaction, human-robot interaction, multimodal conversation analysis, and multimodal user interfaces.

Topics discussed will highlight recent developments and adopted methodologies in the analysis and modeling of multiparty interaction and the design and implementation principles of related human-machine interfaces, identify potential limitations and ways of overcoming them, and conclude by shaping future lines of research.

Workshop topics include, but are not limited to:

  • Models of multimodal & multiparty human-human, human-computer, human-robot interaction.
  • Multiparty and multimodal dialogue modeling and systems.
  • Modeling engagement in multiparty interactions.
  • Modeling group/social relationships and dynamics.
  • Models of multiparty collaboration.
  • Design principles and best practices in human-machine interfaces tackling multiple modalities in multiparty interaction.
  • Evaluation methodologies for multiparty interactions.
  • Multiparty and multimodal turn-taking models.
  • Multi-human & multi-machine models of multiparty interaction.
  • Creating resources of multimodal & multiparty interactions (data capture, corpus annotation, etc.).
  • Multimodal social and affective aspects in multiparty interaction.