Affective computing is an emerging field at the intersection of artificial intelligence and behavioral science. It is the study and development of systems that recognize, interpret, process, and simulate human emotions, and it has recently seen significant advances from exploratory studies to real-world applications.
Multimodal Affective Computing offers readers a concise overview of the state of the art and emerging themes in affective computing, including a comprehensive review of existing approaches in applied affective computing systems and social signal processing. It covers affective facial expression and recognition, affective body expression and recognition, affective speech processing, affective text and dialogue processing, affect recognition using physiological measures, computational models of emotion and their theoretical foundations, and affective sound and music processing.
The book identifies future directions for the field and summarizes a set of guidelines for developing next-generation affective computing systems that are effective, safe, and human-centered. It is an informative resource for academicians, professionals, researchers, and students at engineering and medical institutions working in the areas of applied affective computing, sentiment analysis, and emotion recognition.
Table of Contents
Chapter 1 Affective Computing
1.1. Introduction
1.2. What is Emotion?
1.2.1. Affective Human-Computer Interaction
1.3. Background
1.4. The Role of Emotions in Decision Making
1.5. Challenges in Affective Computing
1.5.1. How Can Many Emotions Be Analyzed in a Single Framework?
1.5.2. How Can Complex Emotions Be Represented in a Single Framework or Model?
1.5.3. Is the Chosen Theoretical Viewpoint Relevant to Other Areas of Affective Computing?
1.5.4. How Can Physiological Signals Be Used to Anticipate Complicated Emotions?
1.6. Affective Computing in Practice
1.6.1. Avatars or Virtual Agents
1.6.2. Robotics
1.6.3. Gaming
1.6.4. Education
1.6.5. Medical
1.6.6. Smart Homes and Workplace Environments
- Conclusion
- References
2.1. Introduction
2.2. Affective Computing and Emotion
2.2.1. Affective Human-Computer Interaction
2.2.2. Human Emotion Expression and Perception
2.2.2.1. Facial Expressions
2.2.2.2. Audio
2.2.2.3. Physiological Signals
2.2.2.4. Hand and Gesture Movement
2.3. Recognition of Facial Emotion
2.3.1. Facial Expression Fundamentals
2.3.2. Emotion Modeling
2.3.3. Representation of Facial Expression
2.3.4. Facial Emotion's Limitations
2.3.5. Techniques for Classifying Facial Expressions
- Conclusion
- References
3.1. Introduction
3.2. Emotion Theory
3.2.1. Categorical Approach
3.2.2. Evolutionary Theory of Emotion by Darwin
3.2.3. Cognitive Appraisal and Physiological Theory of Emotions
3.2.4. Dimensional Approaches to Emotions
- Conclusion
- References
4.1. Introduction
4.2. Affective Information Extraction and Processing
4.2.1. Information Extraction from Audio
4.2.2. Information Extraction from Video
4.2.3. Information Extraction from Physiological Signals
4.3. Studies on Affect Information Processing
4.4. Evaluation
4.4.1. Types of Errors
4.4.1.1. False Acceptance Ratio
4.4.1.2. False Rejection Ratio
4.4.2. Threshold Criteria
4.4.3. Performance Criteria
4.4.4. Evaluation Metrics
4.4.4.1. Mean Absolute Error (MAE)
4.4.4.2. Mean Square Error (MSE)
4.4.5. ROC Curves
4.4.6. F1 Measure
- Conclusion
- References
5.1. Introduction
5.2. Multimodal Information Fusion
5.2.1. Early Fusion
5.2.2. Intermediate Fusion
5.2.3. Late Fusion
5.3. Levels of Information Fusion
5.3.1. Sensor or Data-Level Fusion
5.3.2. Feature Level Fusion
5.3.3. Decision-Level Fusion
5.4. Major Challenges in Information Fusion
- Conclusion
- References
6.1. Introduction
6.2. The Benefits of Multimodal Features
6.2.1. Noise in Sensed Data
6.2.2. Non-Universality
6.2.3. Complementary Information
6.3. Feature Level Fusion
6.4. Multimodal Feature-Level Fusion
6.4.1. Feature Normalization
6.4.2. Feature Selection
6.4.3. Criteria for Feature Selection
6.5. Multimodal Fusion Framework
6.5.1. Feature Extraction and Selection
6.5.1.1. Extraction of Audio Features
6.5.1.2. Extraction of Video Features
6.5.1.3. Extraction of Peripheral Features from EEG
6.5.2. Dimension Reduction and Feature-Level Fusion
6.5.3. Emotion Mapping to a 3D VAD Space
6.6. Multiresolution Analysis
6.6.1. Motivations for the Use of Multiresolution Analysis
6.6.2. The Wavelet Transform
6.6.3. The Curvelet Transform
6.6.4. The Ridgelet Transform
- Conclusion
- References
7.1. Introduction
7.2. The Challenges in Facial Emotion Recognition
7.3. Noise and Dynamic Range in Digital Images
7.3.1. Characteristic Sources of Digital Image Noise
7.3.1.1. Sensor Read Noise
7.3.1.2. Pattern Noise
7.3.1.3. Thermal Noise
7.3.1.4. Pixel Response Non-Uniformity (PRNU)
7.3.1.5. Quantization Error
7.4. The Database
7.4.1. Cohn-Kanade Database
7.4.2. JAFFE Database
7.4.3. In-House Database
7.5. Experiments With the Proposed Framework
7.5.1. Image Pre-Processing
7.5.2. Feature Extraction
7.5.3. Feature Matching
7.6. Results and Discussions
7.7. Results Under Illumination Changes
7.8. Results Under Gaussian Noise
7.8.1. Comparison With Other Strategies
- Conclusion
- References
8.1. Introduction
8.2. Recognition of Spontaneous Affect
8.3. The Database
8.3.1. eNTERFACE Database
8.3.2. RML Database
8.4. Audio-Based Emotion Recognition System
8.4.1. Experiments
8.4.2. System Development
8.4.2.1. Audio Features
8.5. Visual Cue-Based Emotion Recognition System
8.5.1. Experiments
8.5.2. System Development
8.5.2.1. Visual Features
8.6. Experiments Based on the Proposed Audio-Visual Cues Fusion Framework
8.6.1. Results
8.6.2. Comparison to Other Research
- Conclusion
- References
9.1. Introduction
9.1.1. Electrical Brain Activity
9.1.2. Muscle Activity
9.1.3. Skin Conductivity
9.1.4. Skin Temperature
9.2. Multimodal Emotion Database
9.2.1. DEAP Database
9.3. Feature Extraction
9.3.1. Feature Extraction from EEG
9.3.2. Feature Extraction from Peripheral Signals
9.4. Classification and Recognition of Emotion
9.4.1. Support Vector Machine (SVM)
9.4.2. Multi-Layer Perceptron (MLP)
9.4.3. K-Nearest Neighbor (k-NN)
9.5. Results and Discussion
9.5.1. Emotion Categorization Results Based on the Proposed Multimodal Fusion Architecture
- Conclusion
- References
10.1. Introduction
10.2. Affect Representation in 2D Space
10.3. Emotion Representation in 3D Space
10.4. 3D Emotion Modeling in VAD Space
10.5. Emotion Prediction in the Proposed Framework
10.5.1. Multimodal Data Processing
10.5.1.1. Prediction of Emotion from a Visual Cue
10.5.1.2. Prediction of Emotion from a Physiological Cue
10.5.2. Ground Truth Data
10.5.3. Emotion Prediction
10.6. Feature Selection and Classification
10.7. Results and Discussions
- Conclusion
- References
- Subject Index
- Preface
- List of Contributors
Chapter 1 Overview, Category and Ontology of Assistive Devices
- Arun Kumar G. Hiremath and Nirmala C.R.
- Introduction
- Scope of the Assistive Technology
- Smart Self-Management as a Means to Empower With Assistive Technology
- Who Adopts Assistive Technology?
- The Emergence of Assistive Technology
- Professional Practice in Assistive Technology
- The Features of Assistive Technology
- Categories
- No-Technology Devices
- Low-Technology Devices
- Mid and High Technology Devices
- Design Considerations for AT
- Evaluation of Functional Capabilities of Assistive Devices
- Possible Outcomes With AT
- Feature Matching
- Ontology of Assistive Devices
- General Purpose Assistive Technologies
- Performance Areas
- Assistive Technology for Manipulation and Control of the Environment
- Issues Associated With Assistive Technology Practice
- Attempts to Maximize the Accessibility and Affordability of Assistive Technology
- Research Trends and Future Research Directions
- Conclusion
- Consent for Publication
- Conflict of Interest
- Acknowledgment
- References
- Meenu Chandel and Manu Sood
- Background
- Introduction
- Accessibility for Different Categories of PwDs
- Visually Impaired Individuals
- Physically Challenged Individuals
- Deaf and/or Hearing-Impaired Individuals
- Hardware and Software Accessibility for PwDs
- Hardware Options
- Software Options
- Assistive Technology
- Disabilities and Web Accessibility
- Disabilities and ICT Accessibility
- Frequency of Using ICT Facilities
- Challenges Constraining Access to and Use of ICTs by PwDs
- Inadequate Friendliness
- Ineffective Training Provisions
- Power Supply Outages
- Outdated Ict Infrastructure
- Shortage of ICT Experts and Technicians
- Internet Connectivity
- Results of Shortage of ICT Facilities
- Recommendations and Suggestions
- Conclusion
- Consent for Publication
- Conflict of Interest
- Acknowledgement
- References
Chapter 3 Computer Vision-Based Assistive Technology for Blind and Visually Impaired People: A Deep Learning Approach
- Roopa G.M., Chetana Prakash and Pradeep N.
- Introduction
- The Global Assistive Technology Community and Its Impacts on People With Disabilities
- Present-Day Scenario
- General Design Ideas and the Usability of Daily Items
- Evolution of Assistive Technologies
- Assistive Technologies: Functional Framework
- Hard-Soft Technologies
- Object Recognition
- Background Theory
- Object Detection Algorithms
- SIFT (Scale-Invariant Feature Transform) Algorithm
- SURF (Speeded-Up Robust Features)
- OCR (Optical Character Recognition)
- YOLO (You Only Look Once)
- R-CNN
- Gaps Identified
- Existing Assistance Solutions for Blind People
- Primary Objective of Computer Vision
- Methodology Proposed
- YOLOv3 Architecture
- Experimental Setup
- Results and Discussion
- System Work-Flow for Object Detection
- Smart Reading System for Visually Impaired People Using Tesseract
- Flow Process of Tesseract
- Future Research Directions
- Conclusion
- Consent for Publication
- Conflict of Interest
- Acknowledgment
- References
- Annu Rani, Vishal Goyal and Lalit Goyal
- Introduction
- Disability
- Types of Disabilities
- Blindness
- Low Vision
- Hearing Disability
- Dwarfism
- Intellectual Disability
- Autism Spectrum Disorder (ASD)
- Mental Illness
- Locomotor Disability
- Leprosy Cured Persons
- Muscular Dystrophy (MD)
- Chronic Neurological Conditions
- Specific Learning Disability
- Multiple Sclerosis (MS)
- Speech and Language Disability
- Thalassemia
- Hemophilia
- Sickle Cell Disease
- Multiple Disabilities, Including Deaf-Blindness
- Acid Attack
- Parkinson's Disease (PD)
- Cerebral Palsy (CP)
- Common Barriers Faced by People With Disabilities
- Communication Problem
- Physical Obstacles
- Social Obstacles
- Attitudinal Barriers
- Transportation Obstacles
- Principles for Providing Assistive Devices
- Availability
- Accessibility
- Affordability
- Adaptability
- Acceptability
- Quality
- Assistive Technologies for Home Relaxation and Care for Disabled People
- Mobility Aids
- Listening and Hearing Aids
- Cognitive Devices
- Comforting Aids
- Limit Motor Skills Aids
- Vision Aids
- Home Security and Safety
- Daily Living Aids
- Computer Access Aids
- Mobile Apps for All Disabilities
- Benefits of Assistive Technology Devices in Individual's Life
- Conclusion
- Consent for Publication
- Conflict of Interest
- Acknowledgement
- References
- Rakesh Kumar, Lalit Goyal and Vishal Goyal
- Introduction
- Facts About Indian Sign Language
- Communication Between Deaf and Hearing Communities
- English Text to Indian Sign Language Translation System
- English-ISL Lexicon
- Text Parser Module to Parse English Sentences
- Grammatical Rules for Transformation of English to ISL Sentence
- Eliminator Module for Removal of Undesired Words
- Lemmatization and Synonym Replacement
- Sign Animation Using Avatar
- Announcements System for Railway Stations
- Announcements System for Airports
- Announcements System for Bus Stands
- Conclusion and Future Work
- Consent for Publication
- Conflict of Interest
- Acknowledgement
- References
- Jestin Joy, Kannan Balakrishnan and M Sreeraj
- Introduction
- Background
- Sign Language Recognition
- Sensor-Based System
- Vision-Based Systems
- Challenges and Motivation of Sign Language Recognition
- Commonly Used Sensors
- Different Recognition Models
- Sign Language Generation
- Data Science-Based AAC Solutions
- Conclusion and Future Directions
- Consent for Publication
- Conflict of Interest
- Acknowledgement
- References
- Bhagvan Kommadi
- Introduction
- Accessibility for Different Disabilities
- Critical Elements - Accessibility Ecosystem