Automotive AI Agent product development: How to enable “cockpit endorser” via foundation models?
According to OpenAI's five-level taxonomy of AI, AI Agent sits at L3 on the AI development path:
Limited by interaction modes and tool-usage capabilities, the foundation models popular in 2023 reach L2 (Reasoners) at most. Building automotive agents is therefore a more appropriate goal for automotive AI systems: by actively invoking intelligent features and calling multiple tools and foundation models, an agent shores up the weak links of foundation models in application scenarios and further raises the cockpit's intelligence level.
Agent is the endorser of emotional cockpits
'Emotional cockpit' has been talked about for years, but actually realizing it only began with the introduction of foundation models into vehicles. Under specific trigger conditions, a conventional voice assistant chats with the user through a preset emotional corpus, but it cannot adapt to human dialogue logic in real chat scenarios. After being applied in vehicles, an agent integrating multiple foundation model bases can recognize the environment more accurately, and more tool library interfaces further enhance its generalization capability to cope with chat and Q&A across diversified scenarios, truly realizing the warm companionship of the 'cockpit endorser'.
The design of mainstream emotional interaction scenarios focuses on emotion recognition, user memory, and behavior arrangement. Some OEMs and Tier 1s have also launched technologies or products to enhance the emotional value of agents:
For example, Xiaoai Tongxue’s 'emotional dialogue system' is built in three steps:
The mixed-strategy counseling model comprises three key components: a mental state-enhanced encoder, a mixed strategy learning module, and a multi-factor-aware decoder.
The Institute of Digital Games at the University of Malta proposes the Affectively Framework, which establishes an emotional model and adopts both behavior reward and affective reward mechanisms during training, helping agents better understand human emotions and interact with humans more naturally.
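To make the dual-reward idea concrete, below is a minimal sketch of how a behavior reward and an affective reward might be blended during training. The weighting scheme and the normalized arousal values are illustrative assumptions, not the published Affectively Framework implementation.

```python
# Hedged sketch: blending a task (behavior) reward with an affective reward.
# The weights and helper functions are assumptions for illustration only.

def behavior_reward(task_completed: bool) -> float:
    """Reward for completing the user's actual request (e.g., playing a song)."""
    return 1.0 if task_completed else 0.0

def affective_reward(predicted_arousal: float, target_arousal: float) -> float:
    """Reward for steering the user's modeled emotional state toward a target.
    Both values are assumed to be normalized to [0, 1]."""
    return 1.0 - abs(predicted_arousal - target_arousal)

def combined_reward(task_completed: bool,
                    predicted_arousal: float,
                    target_arousal: float,
                    affect_weight: float = 0.3) -> float:
    """Blend the two signals; affect_weight is a tunable hyperparameter."""
    return ((1.0 - affect_weight) * behavior_reward(task_completed)
            + affect_weight * affective_reward(predicted_arousal, target_arousal))

# Example: the agent finished the task but left the user slightly stressed.
print(combined_reward(True, predicted_arousal=0.8, target_arousal=0.4))  # ≈ 0.88
```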
Sore points that need to be solved to improve user experience
Imagine an intelligent cockpit that not only understands and executes the owner's instructions but also anticipates the owner's needs, like a thoughtful personal assistant. Wouldn't that excite car owners? Compared with buying a traditional car and having to explore each function on one's own, everyone wants a cockpit 'endorser' that manages all cockpit functions after they just say a few words. Agent is a time-saving, trouble-free solution.
Currently, most agents introduced into vehicles still serve as assistants and companions that list functions for specific scenarios. Yet compared with foundation models, agents feature greater potential, motivated autonomy, and outstanding tool-using capabilities; they fit better with the label of 'active intelligence' and can even make up for the limitations of foundation models in practical applications.
There is, however, still a long way to go before automotive agents become truly 'active and intelligent' and deliver the experience value users expect. An agent needs to be more precise in active perception, data processing, and state recognition; accurately understand the environment; judge the real needs of the people in the car; and then adopt corresponding strategies. One of the challenges lies in accurately judging user needs: unlike ordinary passive interaction, active intention recognition has no voice command to work from. When recognizing environment/occupant/vehicle states, vector feature matching may not yield a description close enough to the current scenario, and the preset solution may not satisfy the real intentions of the people in the car.
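The vector-matching step described above can be pictured as embedding the current cabin context and comparing it against preset scenario embeddings, acting only when similarity clears a confidence threshold. The sketch below is a hypothetical illustration; the scenario names, vectors, and threshold are all assumptions.

```python
# Hedged sketch of confidence-gated active intent recognition.
import numpy as np

SCENARIOS = {
    "suggest_rest_stop":    np.array([0.9, 0.1, 0.3]),  # e.g., long drive + yawning
    "recommend_music":      np.array([0.2, 0.8, 0.5]),  # e.g., idle + good mood
    "open_window_slightly": np.array([0.4, 0.3, 0.9]),  # e.g., stale cabin air
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_scenario(context_vec: np.ndarray, threshold: float = 0.85):
    """Return the best-matching preset scenario, or None when no match is
    confident enough -- in which case the agent should stay quiet or ask,
    rather than become a 'guessing machine'."""
    best_name, best_score = None, -1.0
    for name, proto in SCENARIOS.items():
        score = cosine(context_vec, proto)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

print(match_scenario(np.array([0.85, 0.15, 0.35])))  # -> "suggest_rest_stop"
```

The key design choice is the threshold: below it, the agent falls back to silence or a clarifying question instead of executing a preset action that may miss the occupant's real intention.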
At present, most recommended functions merely execute preset instructions. This limits the agent's 'active and intelligent' capabilities and leads to frequent sore points in the reasoning link. If the agent fails to accurately understand the current scenario, it may not make recommendations as expected, for instance recommending music or navigation at the wrong time. The end result is a degraded user experience, with the agent becoming a 'guessing machine' in users' eyes.
In addition, the agent also has perception shortcomings when receiving voice commands. According to the publisher's incomplete statistics on sore points in automotive agent use cases reported by car owners, the most frequent are wake-up failure, recognition error, and false wake-up.
Among the 120 cases, wake-up failure, recognition error, and false wake-up are mentioned 19, 18, and 17 times, accounting for 16%, 15%, and 14% respectively. Other sore points include the unavailability of see-and-speak, semantic clarification, and continuous commands, inability to recognize dialects, and delayed response. Sore points in the voice link total 89, or 74.2% of all cases in this survey.
Furthermore, unreasonable agent architecture/scenario design causes a further range of problems, including irrational scenario trigger conditions, secondary wake-up of foundation models, failure of long/short-term memory, and autonomous recommendations based on owners' habits that fail to meet expectations. These respectively reflect the agent's limitations in scenario setting, architecture deployment, the memory module, and the reflection module.
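For orientation, the long/short-term memory split cited above can be sketched as two stores with different lifetimes: a bounded buffer of recent dialogue turns and a persistent store of habits. This structure and its eviction policy are illustrative assumptions, not any OEM's actual design.

```python
# Hedged sketch of a split long-/short-term agent memory module.
from collections import deque
import time

class AgentMemory:
    def __init__(self, short_term_size: int = 20):
        self.short_term = deque(maxlen=short_term_size)  # recent dialogue turns
        self.long_term = {}  # persistent habits/preferences, keyed by topic

    def remember_turn(self, utterance: str) -> None:
        self.short_term.append((time.time(), utterance))

    def remember_habit(self, topic: str, value: str) -> None:
        # e.g., remember_habit("seat_heating", "level 2 on cold mornings")
        self.long_term[topic] = value

    def recall(self, topic: str):
        """Prefer a stored long-term habit; fall back to recent context."""
        if topic in self.long_term:
            return self.long_term[topic]
        return [u for _, u in self.short_term]
```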
In summary, user sore points are concentrated in the perception and reasoning links:
- Perception: wake-up failure, false wake-up, recognition error, unavailability of see-and-speak, delayed response, etc.
- Reasoning: object recognition error, autonomous recommendation failing to meet user expectations, etc.
Quick-response multi-agent framework
To enable all the functions of the 'endorser' in the cockpit, designing the agent's service framework for diversified scenarios is critical. The agent framework is relatively flexible in construction: the simplest 'receiver + executor' architecture can be used, or a more complex multi-agent architecture can be built (a minimal sketch of the former follows after this passage). The design principle is simple: any framework that solves user problems in a specific scenario is a good framework design. As a qualified 'cockpit endorser', an automotive agent not only needs to act as an independent thinker that makes decisions and solves problems on its own, but must also quickly and naturally adopt human behavior patterns, acting like a human.
A typical example is NIO's Nomi. It uses a multi-agent architecture, calling different tools in different scenarios and using multiple agents with different functions to perform specific duties and jointly complete the process of understanding needs, making decisions, executing tasks, and reflecting for iteration. The multi-agent architecture allows Nomi not only to respond quickly but also to react more naturally, like a human. Its seamless integration with other vehicle functions brings a smoother experience.
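As a baseline for comparison with the multi-agent designs discussed next, here is a minimal sketch of the 'receiver + executor' pattern mentioned above. The command table and parsing logic are hypothetical; a real receiver would use an NLU model or a foundation model.

```python
# Hedged sketch of the simplest 'receiver + executor' agent architecture.
from typing import Callable, Dict

class Receiver:
    """Parses a raw utterance into a command name (stand-in for a real NLU model)."""
    def parse(self, utterance: str) -> str:
        return "play_music" if "music" in utterance.lower() else "unknown"

class Executor:
    """Maps command names to concrete actions."""
    def __init__(self):
        self.actions: Dict[str, Callable[[], str]] = {
            "play_music": lambda: "Playing your driving playlist.",
            "unknown": lambda: "Sorry, could you rephrase that?",
        }

    def run(self, command: str) -> str:
        return self.actions.get(command, self.actions["unknown"])()

receiver, executor = Receiver(), Executor()
print(executor.run(receiver.parse("Put on some music")))
```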
Compared with single-agent systems, multi-agent systems are better suited to executing complex instructions. They are like a small community in which each 'agent' has its own tasks but can cooperate to complete more complex ones. For example, one agent is responsible for understanding your instructions, another for making decisions, and dedicated agents perform the tasks. This design makes automotive AI Agent systems more flexible and lets them handle more diverse tasks. For example, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia proposed a multi-agent system that uses both collaboration agents and execution agents:
The entire agent framework is divided into six modules: Understanding & Interaction, Reasoning, Tool Use, Multi-Agent Collaboration, Reflection, and Alignment. It embraces mainstream agent design patterns and covers the entire process from active perception, reasoning and decision-making, and tool calling to generation and execution, reflection and iteration, and alignment with human values. The framework features a multi-agent system in which different agents play different roles (distribution/decision/actuation) across the process, making the best use of each agent to improve task execution efficiency.
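To make the distribution/decision/actuation split concrete, below is a hypothetical sketch of one request flowing through three role agents. The class names, intents, and tool names are illustrative assumptions, not the CSIRO design.

```python
# Hedged sketch of a distribution -> decision -> actuation agent pipeline.
class DistributionAgent:
    def route(self, request: str) -> dict:
        """Understand the request and package it for the decision agent."""
        intent = "navigation" if "navigate" in request.lower() else "chat"
        return {"intent": intent, "raw": request}

class DecisionAgent:
    def plan(self, task: dict) -> list:
        """Turn an intent into an ordered list of tool calls."""
        if task["intent"] == "navigation":
            return ["geocode_destination", "plan_route", "start_guidance"]
        return ["small_talk_reply"]

class ActuationAgent:
    def execute(self, steps: list) -> None:
        for step in steps:
            print(f"executing tool: {step}")  # a real system would call vehicle APIs

# One request flowing through all three roles:
distributor, decider, actuator = DistributionAgent(), DecisionAgent(), ActuationAgent()
task = distributor.route("Navigate to the nearest charging station")
actuator.execute(decider.plan(task))
```

The point of the split is that each role can be upgraded or scaled independently, which is what lets a multi-agent system handle more diverse tasks than a single monolithic agent.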
In addition, in diversified scenarios, the agent's deployment method and tool-calling capabilities also affect whether user needs can be executed quickly and accurately. Take NIO's Nomi as an example:
Nomi Agents are deployed on both the end (in-vehicle) side and the cloud side: the end-side model runs in the vehicle, while NomiGPT runs in the cloud. Deeply integrated with SkyOS, the end-side model can call atomic capabilities in time and schedule resources (data, vehicle control hardware/software, etc.) across domains to speed up response. NomiGPT in the cloud connects to more cloud tool resource interfaces, further enhancing Nomi Agents’ tool-calling capability. The Nomi Agents architecture sits in the SkyOS middleware layer; combined with SkyOS, this makes calling atomic APIs, hardware/software, and data more natural, coordinated, and faster.
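The end/cloud split described above amounts to a latency-driven dispatch decision: latency-critical vehicle controls stay on the end side, while open-ended requests go to the cloud model. The sketch below illustrates this routing; the skill names and dispatch rule are assumptions, not Nomi's actual logic.

```python
# Hedged sketch of end-side vs. cloud-side dispatch.
END_SIDE_SKILLS = {"open_window", "set_seat_heating", "adjust_ac"}

def dispatch(intent: str, payload: dict) -> str:
    if intent in END_SIDE_SKILLS:
        # End side: call atomic vehicle APIs directly for minimal latency.
        return f"[end-side] executed {intent} with {payload}"
    # Cloud side: richer tool interfaces, higher latency tolerated.
    return f"[cloud] forwarded '{intent}' to the cloud model with {payload}"

print(dispatch("open_window", {"position": "half"}))
print(dispatch("plan_weekend_trip", {"preferences": "seaside"}))
```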
Table of Contents
1 Overview of Automotive AI Agent
2 Key Issues in Development of Automotive AI Agent Products - User Sore Points and Technical Difficulties
3 OEMs’ AI Agent Investment, Development, and Operation
4 Automotive AI Agent Suppliers and Their Supply Relationships
Companies Mentioned
- Chery
- Geely
- Li Auto
- NIO
- Xiaomi
- Zeekr
- Neta
- BAIC
- Huawei
- AISpeech
- Zhipu
- Tinnove
- Lenovo
Methodology