China Automotive Multimodal Interaction Development Research Report, 2023 combs through the interaction modes of mainstream cockpits, the application of interaction modes in key vehicle models launched in 2023, the cockpit interaction solutions of suppliers, and the multimodal interaction fusion trends.
A review of the interaction modes and functions of the models launched over the past year shows that active, anthropomorphic and natural interaction has become the main trend. In single-modal interaction, the control scope of mainstream modes such as touch and voice has expanded from inside the car to outside it, and in-car applications of novel modes such as fingerprint and electromyography recognition are increasing; in multimodal fusion interaction, combinations such as voice + head posture/face/lip language and face + emotion/smell are becoming available in vehicles, aiming to create more active and natural human-vehicle interaction.
1. Single-modal interaction develops in depth.
- Haptic interaction: cockpits increasingly feature larger and more numerous screens. The wider application of smart surface materials extends the haptic sensing scope to doors, windows, seats and other components, and haptic feedback technology is gradually being introduced;
- Voice interaction: enabled by large AI models, voice interaction is becoming more intelligent and more emotionally aware. The introduction of lip movement recognition, voiceprint recognition and other technologies improves the accuracy of voice interaction and expands its control scope from inside the car to outside it;
- Visual interaction: the scope of face/gesture recognition based on visual technology is expanding to body recognition, including head posture, arm movements and body actions;
- Olfactory interaction: originally used mainly to purify air and remove odors, olfactory interaction now also enables cockpit sterilization and disinfection, and supports linkage of the fragrance system with cockpit scenes.
Case 1: voice control of the car extends from inside to outside.
- Typical models: Changan Nevo A07, Jiyue 01
- Typical functions: voice commands from outside the car to control doors, windows, parking assist, etc.
Equipped with the 'SIMO' voice assistant, Jiyue 01 supports fully offline voice control in all zones, so full-process voice interaction remains available even under weak or no network conditions. It recognizes a command within 500 milliseconds and responds within 700 milliseconds. Outside the car, voiceprint recognition allows the driver and passengers to operate the air conditioning, audio, lights, windows, doors, rear tailgate, charging cover and other functions by voice, and supports voice-controlled parking from outside the car.
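The latency targets and network fallback described here lend themselves to an offline-first design: an on-device recognizer always produces a result, and a cloud result is used only if it arrives within the recognition budget. Below is a minimal Python sketch of that idea; the class names, canned transcripts and timing logic are illustrative assumptions, not Jiyue's actual implementation:

```python
import time

RECOGNITION_BUDGET_S = 0.5   # target: recognize within ~500 ms (respond ~700 ms)

class LocalASR:
    """Stand-in for an on-device recognizer that needs no network."""
    def transcribe(self, audio: bytes) -> str:
        return "open the charging cover"          # canned result for the sketch

class CloudASR:
    """Stand-in for a cloud recognizer; may be slow or unreachable."""
    def transcribe(self, audio: bytes, timeout: float) -> str | None:
        return None if timeout <= 0 else "open the charging cover"

def recognize(audio: bytes, local: LocalASR, cloud: CloudASR, online: bool) -> str:
    """Offline-first: the local result is the floor; use the cloud
    only if it answers inside the recognition budget."""
    start = time.monotonic()
    text = local.transcribe(audio)                # always available, even offline
    if online:
        remaining = RECOGNITION_BUDGET_S - (time.monotonic() - start)
        cloud_text = cloud.transcribe(audio, timeout=remaining)
        if cloud_text is not None:
            text = cloud_text                     # prefer the cloud when it is fast
    return text

print(recognize(b"\x00", LocalASR(), CloudASR(), online=False))
```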
Case 2: voiceprint recognition finds wider application.
- Typical models: Li L7, Hycan A06/V09
- Typical functions: identify drivers and passengers to provide targeted services
The VOICE ID voiceprint recognition of Hycan A06/V09 reliably identifies valid users and commands and serves as the entrance to HYCAN ID, giving users access to a rich smart ecosystem with 100+ entertainment applications. Moreover, based on voiceprint recognition, the system actively filters out interfering sounds to improve recognition accuracy at the driver's seat.
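How a voiceprint gate like this might decide which enrolled speaker a command belongs to can be sketched with embedding similarity: commands whose voiceprint matches no enrolled profile are simply ignored. The threshold, profiles and vectors below are invented for illustration; no Hycan interface is implied:

```python
import math

SIMILARITY_THRESHOLD = 0.80   # hypothetical acceptance threshold

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify(embedding: list[float], profiles: dict[str, list[float]]) -> str | None:
    """Return the enrolled user whose voiceprint best matches, if any."""
    best_user, best_score = None, SIMILARITY_THRESHOLD
    for user, ref in profiles.items():
        score = cosine(embedding, ref)
        if score >= best_score:
            best_user, best_score = user, score
    return best_user   # None = unknown speaker, so the command is ignored

profiles = {"driver": [0.9, 0.1, 0.4], "passenger": [0.2, 0.8, 0.1]}
print(identify([0.88, 0.12, 0.42], profiles))   # -> "driver"
```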
Case 3: myoelectric interaction comes into commercial use in cars.
- Typical model: Voyah Passion
- Typical functions: micro-gesture control inside and outside the car
2. Multimodal fusion creates active interaction.
Currently, the multimodal fusion enabled by automakers includes but is not limited to voice + lip movement recognition, voice + face recognition, voice + gesture recognition, voice + head posture, face + emotion recognition, face + eye tracking, and fragrance + face + voice recognition. Among these, multimodal voice interaction is mainstream and is supported by the models mentioned above, such as Changan Nevo A07, Jiyue 01, Li L7, and Hycan A06/V09.
Case 1: voice + head posture interaction. WEY Blue Mountain DHT PHEV combines voice with head posture, offering a simple and intuitive interaction mode.
When the driver engages in a voice conversation, the camera in the cockpit of Blue Mountain captures the driver's head movements, allowing a yes/no reply by nodding or shaking the head. For example, when using voice to control navigation, the driver can select a planned route by nodding or shaking the head.
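The fusion logic here amounts to resolving a pending yes/no prompt from whichever channel answers first. A minimal sketch, with hypothetical event names and the (assumed) design choice that an explicit spoken answer outranks a head gesture:

```python
from enum import Enum, auto

class Head(Enum):
    NOD = auto()
    SHAKE = auto()
    NONE = auto()

def resolve_confirmation(speech: str | None, head: Head) -> bool | None:
    """Fuse the two channels; speech wins if both are present."""
    if speech is not None:
        return speech.strip().lower() in ("yes", "ok", "confirm")
    if head is Head.NOD:
        return True
    if head is Head.SHAKE:
        return False
    return None   # still waiting for an answer

# e.g. the driver nods instead of answering a route-selection prompt aloud
print(resolve_confirmation(speech=None, head=Head.NOD))   # -> True
```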
Case 2: face + emotion recognition. LIVAN 7, ARCFOX Kaola and other models integrate emotion recognition into the face recognition function to provide active interaction and enhance the experience.
The multimodal intelligent recognition Face-ID system of LIVAN 7 supports lip movement recognition and emotion recognition, and remembers the personalized settings, such as voice, seats, rearview mirrors, ambient light and trunk, associated with each account. It can also select suitable music according to the user's expression.
Directly facing the rear row, the camera on the B-pillar of ARCFOX Kaola monitors a child in real time. When the child smiles, a snapshot is taken automatically and sent to the center console screen; when the child cries, soothing music plays automatically and the surface of the smart seat moves in a breathing rhythm to calm the child. The camera can also be linked with the in-car radar to determine whether the child is asleep; if so, sleep mode is activated automatically: seat ventilation is turned on, the air conditioning temperature is adjusted, and the audio and ambient lighting are synchronized to a gentle rhythm.
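The Kaola behavior reads as a rule table from detected child states to bundles of cabin actions, with camera and radar fused for the sleep decision. A hedged sketch of that structure, using hypothetical state and action names that mirror the text above:

```python
def child_monitor_actions(state: str) -> list[str]:
    """Map a detected child state to the cabin actions described above."""
    rules = {
        "smiling": ["take_snapshot", "send_to_center_console_screen"],
        "crying":  ["play_soothing_music", "start_seat_breathing_rhythm"],
        "asleep":  ["enable_sleep_mode", "turn_on_seat_ventilation",
                    "adjust_ac_temperature", "sync_audio_and_ambient_light"],
    }
    return rules.get(state, [])

def is_asleep(camera_eyes_closed: bool, radar_motion_low: bool) -> bool:
    """Camera and in-car radar must agree before sleep mode triggers."""
    return camera_eyes_closed and radar_motion_low

print(child_monitor_actions("asleep" if is_asleep(True, True) else "awake"))
```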
Case 3: face + smell. NIO EC7, LIVAN 7 and other models link the driver monitoring system with the fragrance system to improve driving safety (a brief linkage sketch follows the list):
- When NIO EC7 detects driver fatigue, it automatically releases a refreshing fragrance to help ensure driving safety;
- When the camera on the A-pillar of LIVAN 7 detects a drowsy driver, it automatically releases a refreshing fragrance and gives a voice prompt.
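A minimal sketch of this DMS-to-fragrance linkage, assuming a hypothetical fatigue score and threshold (neither automaker publishes these interfaces):

```python
FATIGUE_THRESHOLD = 0.7   # assumed trigger level for the sketch

def on_dms_frame(fatigue_score: float, has_voice_prompt: bool) -> list[str]:
    """Return the linked actions for one driver-monitoring frame."""
    actions: list[str] = []
    if fatigue_score >= FATIGUE_THRESHOLD:
        actions.append("release_refreshing_fragrance")   # both models
        if has_voice_prompt:
            actions.append("play_alert_voice_prompt")    # LIVAN 7 behavior
    return actions

print(on_dms_frame(0.82, has_voice_prompt=True))
```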
3. Foundation models and multimodal fusion will facilitate the introduction of AI Agent into cars.
Large AI models are evolving from single-modal toward multimodal, multi-task fusion. Whereas a single-modal model can process only one type of data, such as text, images or speech, a multimodal model can process and understand multiple types, including vision, hearing and language, and can therefore better understand and generate complex information.
As multimodal foundation models continue to develop, their capabilities will improve significantly. This gives AI Agents stronger perception and environment understanding for more intelligent, autonomous decisions and actions, and creates new possibilities for their application in vehicles, opening broader prospects for future intelligent development.
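Conceptually, such an in-car agent runs a perceive-decide-act loop over fused modalities. The toy loop below is only meant to make that shape concrete; the model call is a stub, and no real foundation-model API or automaker implementation is implied:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    speech: str               # what the occupant said
    cabin_image_summary: str  # what the cabin camera sees (text stand-in)

def multimodal_model(obs: Observation) -> str:
    """Stub: a real system would send both modalities to one model."""
    if "tired" in obs.speech or "eyes half closed" in obs.cabin_image_summary:
        return "suggest_rest_stop"
    return "no_action"

def agent_step(obs: Observation) -> str:
    """Perceive -> decide; execution would dispatch the action to the car."""
    return multimodal_model(obs)

print(agent_step(Observation("I'm tired", "driver eyes half closed")))
```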
The Spark Cockpit OS, developed by iFlytek on the basis of its Spark Model, supports multiple interaction modes such as voice, gesture, eye tracking and DMS/OMS. The Spark Car Assistant enables multi-intent recognition through deep understanding of context, providing more natural human-machine interaction. The iFlytek Spark Model, first installed in the EXEED Sterra ES, will bring five new experiences: Vehicle Function Tutor, Empathy Partner, Knowledge Encyclopedia, Travel Planning Expert, and Physical Health Consultant.
AITO M9, to be launched in December 2023, comes with the HarmonyOS 4 IVI system. Xiaoyi, the intelligent assistant in HarmonyOS 4, is connected to Huawei's Pangu Model, which comprises a natural language model, a visual model, and a multimodal model. The combination of HarmonyOS 4, Xiaoyi and the Pangu Model further enhances ecosystem capabilities such as device collaboration and AI scenarios, and provides diverse interaction modes, including voice recognition, gesture control and touch control, through multimodal interaction technology.
Table of Contents
1 Overview of Multimodal Interaction
2 Human-Computer Interaction Based on Touch
3 Human-Computer Interaction Based on Hearing
4 Human-Computer Interaction Based on Vision
5 Human-Computer Interaction Based on Smell
6 Human-Computer Interaction Based on Biometrics
7 Multimodal Interaction Application by OEMs
8 Multimodal Interaction Solutions of Suppliers
9 Multimodal Interaction Summary and Trends
Companies Mentioned
- Aptiv
- Cipia Vision
- Cerence
- Continental
- iFlytek
- SenseTime
- ADAYO
- Desay SV
- ArcSoft Technology
- AISpeech
- Horizon Robotics
- ThunderSoft
- PATEO
- Joyson Electronics
- Huawei
- Baidu
- Tencent
- Banma Network
- MINIEYE
- Hikvision
Methodology