BAUM-2: a multilingual audio-visual affective face database


Erdem C. E., Turan Ç., Aydın Z.

MULTIMEDIA TOOLS AND APPLICATIONS, vol.74, no.18, pp.7429-7459, 2015 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 74 Issue: 18
  • Publication Date: 2015
  • DOI Number: 10.1007/s11042-014-1986-2
  • Journal Name: MULTIMEDIA TOOLS AND APPLICATIONS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.7429-7459
  • Abdullah Gül University Affiliated: Yes

Abstract

Access to audio-visual databases that contain sufficient variety and are richly annotated is essential for assessing the performance of algorithms in affective computing applications that require emotion recognition from face and/or speech data. Most databases available today were recorded in tightly controlled environments, contain mostly acted emotions, and lack speech data. We first present a semi-automatic method that can extract audio-visual facial video clips from movies and TV programs in any language. The method is based on automatically detecting and tracking a face in a movie until the face is occluded or a scene cut occurs. We also created a video-based database, named BAUM-2, which consists of annotated audio-visual facial clips in several languages. The collected clips simulate real-world conditions by containing various head poses, illumination conditions, accessories, temporary occlusions, and subjects spanning a wide range of ages. The proposed semi-automatic affective clip extraction method can easily be used to extend the database with clips in other languages. We also created an image-based facial expression database, named BAUM-2i, from the peak frames of the video clips. Baseline image- and video-based facial expression recognition results using state-of-the-art features and classifiers indicate that facial expression recognition under challenging, close-to-natural conditions remains quite difficult.
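The core of the extraction method, as described in the abstract, is to start a clip when a face is detected and end it when the face is lost or a scene cut occurs. The sketch below illustrates that idea only; the paper's actual detector, tracker, and thresholds are not given here, so OpenCV's Haar-cascade face detector stands in for the detection-and-tracking step, a histogram-correlation test stands in for scene-cut detection, and the CUT_THRESHOLD and min_len values are assumed for illustration.

```python
# Minimal sketch of the clip-extraction idea: extend a clip frame by frame
# until the face is lost (a proxy for occlusion) or a scene cut is detected.
# The cascade, the correlation threshold, and min_len are illustrative
# assumptions, not the authors' actual pipeline.
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
CUT_THRESHOLD = 0.5  # assumed: correlation below this marks a scene cut


def detect_face(gray):
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return faces[0] if len(faces) else None


def frame_hist(gray):
    hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
    return cv2.normalize(hist, hist).flatten()


def extract_clips(video_path, min_len=10):
    """Yield (start_frame, end_frame) spans that contain a detected face."""
    cap = cv2.VideoCapture(video_path)
    clip_start, prev_hist, idx = None, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = frame_hist(gray)
        cut = (prev_hist is not None and
               cv2.compareHist(prev_hist, hist,
                               cv2.HISTCMP_CORREL) < CUT_THRESHOLD)
        face = detect_face(gray)
        if clip_start is None:
            if face is not None:
                clip_start = idx              # face appeared: open a clip
        elif cut or face is None:             # scene cut or face lost: close it
            if idx - clip_start >= min_len:
                yield (clip_start, idx)
            clip_start = idx if (face is not None and not cut) else None
        prev_hist, idx = hist, idx + 1
    if clip_start is not None and idx - clip_start >= min_len:
        yield (clip_start, idx)               # flush a clip open at video end
    cap.release()
```

In practice such an automatic pass would only propose candidate spans; the "semi-automatic" framing in the abstract implies a manual verification and annotation step afterwards.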