Multimodal Behavioral Analysis in the Wild: Advances and Challenges presents the state of the art in behavioral signal processing using different data modalities, with a special focus on identifying the strengths and limitations of current technologies. The book focuses on audio and video modalities, while also emphasizing emerging modalities such as accelerometer and proximity data. It covers tasks at different levels of complexity, from low-level tasks (speaker detection, sensorimotor links, source separation), through mid-level tasks (conversational group detection, addresser and addressee identification), to high-level tasks (personality and emotion recognition), providing insights into how to exploit inter-level and intra-level links.
This is a valuable resource on the state of the art and future research challenges of multimodal behavioral analysis in the wild. It is suitable for researchers and graduate students in the fields of computer vision, audio processing, pattern recognition, machine learning and social signal processing.
Table of Contents
1. Multimodal open-domain conversations with robotic platforms
2. Audio-motor integration for robot audition
3. Audio source separation into the wild
4. Designing audio-visual tools to support multisensory disabilities
5. Audio-visual learning for body-worn cameras
6. Activity recognition from visual lifelogs: State of the art and future challenges
7. Lifelog retrieval for memory stimulation of people with memory impairment
8. Integrating signals for reasoning about visitors' behavior in cultural heritage
9. Wearable systems for improving tourist experience
10. Recognizing social relationships from an egocentric vision perspective
11. Complex conversational scene analysis using wearable sensors
12. Detecting conversational groups in images using clustering games
13. We are less free than how we think: Regular patterns in nonverbal communication
14. Crowd behavior analysis from fixed and moving cameras
15. Towards multi-modality invariance: A study in visual representation
16. Sentiment concept embedding for visual affect recognition
17. Video-based emotion recognition in the wild
18. Real-world automatic continuous affect recognition from audiovisual signals
19. Affective facial computing: Generalizability across domains
20. Automatic recognition of self-reported and perceived emotions