
Automotive AI Agent Product Development and Commercialization Research Report, 2024


  • 223 Pages
  • August 2024
  • Region: China, Global
  • Research In China
  • ID: 6039996

Automotive AI Agent product development: How to enable “cockpit endorser” via foundation models?

According to OpenAI's five-level taxonomy of AI, the AI Agent sits at L3 (Agents) on the AI development path:

Limited by their interaction modes and tool-use capabilities, the popular foundation models of 2023 reached L2 (Reasoners) at most. Building automotive agents is therefore a more practical goal for automotive AI systems: by proactively invoking intelligent features and multiple tools/foundation models, an Agent shores up the weak links in applying foundation models to in-vehicle scenarios, further raising the intelligence level of the cockpit.

The Agent is the endorser of the emotional cockpit

The 'emotional cockpit' has been talked about for years, but actually realizing it only began with the introduction of foundation models into vehicles. Under specific trigger conditions, a conventional voice assistant chats with the user from a preset emotional corpus, but it cannot follow human dialogue logic in real conversations. Once applied in vehicles, an Agent integrating multiple foundation model bases can recognize the environment more accurately, and a wider set of tool-library interfaces further enhances its ability to generalize across diverse chat and Q&A scenarios, truly delivering the warm companionship of the 'cockpit endorser'.

The design of mainstream emotional interaction scenarios focuses on emotion recognition, user memory, and behavior orchestration. Some OEMs and Tier 1 suppliers have also launched technologies or products to enhance the emotional value of Agents:

For example, Xiaoai Tongxue's 'emotional dialogue system' is built in three steps:

The mixed-strategy emotional-guidance model consists of three key components: a mental state-enhanced encoder, a mixed-strategy learning module, and a multi-factor-aware decoder.
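The three-component pipeline described above can be sketched as a minimal toy in Python. Everything here is an illustrative assumption (the word lists, strategy names, and templates are invented stand-ins, not the production system); the point is only the data flow: encoder enriches the input with a mental-state signal, the strategy module outputs a *mixture* over response strategies rather than one hard label, and the decoder conditions on both.

```python
STRATEGIES = ("comfort", "question", "suggest")

def mental_state_enhanced_encode(utterance: str) -> dict:
    # Toy "mental state enhancement": attach a crude mood score to the tokens.
    negative = {"sad", "tired", "angry", "lonely", "stressed"}
    tokens = utterance.lower().split()
    mood = -sum(tok in negative for tok in tokens)
    return {"tokens": tokens, "mood": mood}

def mixed_strategy(state: dict) -> dict:
    # Output a probability mixture over response strategies instead of a
    # single hard choice -- the core of "mixed-strategy learning".
    if state["mood"] < 0:
        return {"comfort": 0.6, "question": 0.3, "suggest": 0.1}
    return {"comfort": 0.2, "question": 0.3, "suggest": 0.5}

def multi_factor_decode(state: dict, mixture: dict) -> str:
    # Condition on both the mental state and the strategy mixture;
    # here we simply realize the highest-weight strategy from a template.
    top = max(mixture, key=mixture.get)
    templates = {
        "comfort": "That sounds hard. I'm here with you.",
        "question": "Do you want to tell me more about it?",
        "suggest": "How about some relaxing music?",
    }
    return templates[top]

def reply(utterance: str) -> str:
    state = mental_state_enhanced_encode(utterance)
    return multi_factor_decode(state, mixed_strategy(state))
```

A real system would replace each stage with a learned model, but keeping the strategy output as a distribution is what lets the decoder blend, say, comforting and questioning behavior.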

The Institute of Digital Games at the University of Malta has proposed the Affectively Framework, which establishes an emotion model and combines behavior rewards with affective rewards during training, helping Agents better understand human emotions and interact with humans more naturally.
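The dual-reward idea can be sketched as simple reward shaping: the training signal blends a task ("behavior") reward with an "affective" reward scoring how closely the agent tracks a target emotion trace. The weighting scheme and signal definitions below are assumptions for illustration, not the framework's actual API.

```python
def blended_reward(behavior_reward: float,
                   affective_reward: float,
                   affect_weight: float = 0.5) -> float:
    # Convex combination: affect_weight = 0 trains a purely task-driven
    # agent; affect_weight = 1 optimizes only against the emotion model.
    assert 0.0 <= affect_weight <= 1.0
    return (1.0 - affect_weight) * behavior_reward + affect_weight * affective_reward

def affective_reward_from_trace(predicted: float, target: float) -> float:
    # Reward closeness between the agent's predicted affect value and a
    # human-annotated target, mapped into [0, 1].
    return 1.0 - min(abs(predicted - target), 1.0)
```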

Pain points that must be solved to improve user experience

Imagine an intelligent cockpit that not only understands and executes the owner's instructions but also anticipates the owner's needs, like a thoughtful personal assistant. Wouldn't that excite car owners? Compared with buying a traditional car and exploring every function on one's own, everyone wants a cockpit 'endorser' that manages all cockpit functions after just a few words. An Agent is exactly such a time-saving, trouble-free solution.

Most Agents currently deployed in vehicles still serve as assistants and companions that list functions for specific scenarios. Yet compared with bare foundation models, Agents offer greater potential, motivated autonomy, and outstanding tool-use capabilities; they better fit the label of 'active intelligence' and can even compensate for the limitations of foundation models in practical applications.

There is, however, still a long way to go before automotive agents become truly 'active and intelligent' and deliver the experience users expect. An Agent must be more precise in active perception, data processing, and state recognition so that it accurately understands the environment, judges the real needs of the people in the car, and then adopts the corresponding strategy. One key challenge is the Agent's accurate judgment of user needs: unlike ordinary passive interaction, active intention recognition has no voice command to rely on. When recognizing the state of the environment, occupants, and vehicle, vector feature matching may fail to retrieve a description close enough to the current scenario, and the preset response may not match the occupants' real intentions.
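Vector feature matching with an explicit confidence threshold is one way to let the agent abstain rather than guess. The following is a minimal sketch under stated assumptions: the scenario vectors are made-up 3-dimensional feature stand-ins (e.g. fatigue, cabin heat, time pressure), and the scenario names are hypothetical.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical preset scenarios and their feature vectors.
SCENARIOS = {
    "suggest_rest_stop":   (0.9, 0.1, 0.1),
    "cool_down_cabin":     (0.1, 0.9, 0.1),
    "reroute_to_beat_jam": (0.1, 0.1, 0.9),
}

def match_scenario(observation, threshold=0.8):
    # Return the best-matching preset scenario, or None if nothing is
    # close enough -- abstaining avoids the "guessing machine" failure mode.
    best, score = max(
        ((name, cosine(observation, vec)) for name, vec in SCENARIOS.items()),
        key=lambda pair: pair[1],
    )
    return best if score >= threshold else None
```

The threshold trades proactivity against false triggers: raise it and the agent intervenes less but more accurately.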

At present, most recommendation features merely execute preset instructions. This limits the Agent's 'active and intelligent' capabilities and causes frequent pain points in the reasoning link. For example, if the Agent misreads the current scenario, it may not make recommendations as expected, such as suggesting music or navigation at the wrong time. The end result is degraded user experience: to users, the Agent becomes a 'guessing machine'.

In addition, Agents show perception shortcomings when receiving voice commands. According to the publisher's (incomplete) statistics on pain points reported by car owners in automotive agent use cases, the most frequent pain points are wake-up failure, recognition error, and false wake-up.

Among the 120 cases, wake-up failure, recognition error, and false wake-up are mentioned 19, 18, and 17 times, accounting for roughly 16%, 15%, and 14% respectively. Other pain points include unavailable see-and-speak, failed semantic clarification and continuous commands, inability to recognize dialects, and delayed response. In total, 89 pain points fall in the voice link, or 74.2% of this statistical survey.
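The survey arithmetic above can be reproduced directly from the quoted counts (120 cases total):

```python
TOTAL_CASES = 120
top_counts = {"wake-up failure": 19, "recognition error": 18, "false wake-up": 17}

# Rounded percentage share of each top pain point: 16%, 15%, 14%.
shares = {name: round(100 * n / TOTAL_CASES) for name, n in top_counts.items()}

# All voice-link pain points together: 89 of 120 cases -> 74.2%.
voice_link_share = round(100 * 89 / TOTAL_CASES, 1)
```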

Furthermore, unreasonable Agent architecture or scenario design causes a further range of problems: irrational scenario trigger conditions, secondary wake-up of foundation models, failure of long/short-term memory, and autonomous recommendations that follow the owner's habits yet fail to meet expectations. These respectively reflect the Agent's limitations in scenario setting, architecture deployment, the memory module, and the reflection module.

In summary, user pain points are concentrated in the perception and reasoning links:

  • Perception: wake-up failure, false wake-up, recognition error, unavailable see-and-speak, delayed response, etc.
  • Reasoning: object recognition errors, autonomous recommendations that fail to meet user expectations, etc.

Quick-response multi-agent framework

To enable all the functions of the cockpit 'endorser', the design of the Agent's service framework across diverse scenarios is critical. The framework can be built quite flexibly: the simplest 'receiver + executor' architecture will do, or a more complex multi-agent architecture can be adopted. The design principle is simple: any framework that solves the user's problem in a given scenario is a good one. As a qualified 'cockpit endorser', an automotive Agent must not only think independently, making decisions and solving problems on its own, but also adopt human behavior patterns quickly and naturally, acting like a human.

A typical example is NIO's Nomi. It uses a multi-agent architecture, calling different tools in different scenarios and letting multiple agents with different functions each perform their own duties while jointly completing the loop of understanding needs, making decisions, executing tasks, and reflecting for iteration. The multi-agent architecture lets Nomi respond quickly and react more naturally, like a human, and its seamless integration with other vehicle functions makes the experience smoother.

Compared with single-agent systems, multi-agent systems are more suitable for executing complex instructions. They are like a small community in which each 'agent' has its own tasks but can cooperate to complete more complex work. For example, one agent is responsible for understanding your instructions, another for making a decision, and special agents for performing tasks. This design makes automotive AI Agent systems more flexible and allows them to handle more diverse tasks. For example, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) of Australia proposed a multi-agent system that uses both collaboration agents and execution agents:
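The "small community" division of labor above can be sketched as a minimal understand → decide → execute chain. The intents, plans, and actions below are illustrative assumptions, not any vendor's actual implementation; the point is that each agent owns one stage and an orchestrator wires them together.

```python
class UnderstandingAgent:
    # Maps a raw user command to an intent label.
    def run(self, command: str) -> str:
        text = command.lower()
        if "music" in text:
            return "play_music"
        if "window" in text:
            return "open_window"
        return "chitchat"

class DecisionAgent:
    # Maps an intent to an ordered plan of steps.
    def run(self, intent: str) -> list:
        plans = {
            "play_music": ["pick_playlist", "set_volume", "start_playback"],
            "open_window": ["check_speed", "lower_window"],
            "chitchat": ["generate_reply"],
        }
        return plans[intent]

class ExecutionAgent:
    # Executes each planned step (here, just reports completion).
    def run(self, plan: list) -> str:
        return "; ".join(f"done:{step}" for step in plan)

def orchestrate(command: str) -> str:
    # Each agent performs its own duty; together they complete the
    # understand -> decide -> execute loop.
    intent = UnderstandingAgent().run(command)
    plan = DecisionAgent().run(intent)
    return ExecutionAgent().run(plan)
```

A production system would add a reflection agent that inspects the execution result and feeds corrections back into the next round, closing the iteration loop the text describes.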

The entire Agent framework is divided into six modules: Understanding & Interaction, Reasoning, Tool Use, Multi-Agent Collaboration, Reflection, and Alignment. It embraces mainstream Agent design patterns and covers the entire process from active perception, reasoning and decision-making, and tool calling to generation and execution, reflection and iteration, and alignment with human values. In this multi-agent system, different Agents play different roles (distribution/decision/actuation) across the process, making the best use of each Agent to improve task-execution efficiency.

In addition, across diverse scenarios, the Agent's deployment method and tool-calling capability also determine whether user needs can be executed quickly and accurately. Take NIO's Nomi as an example:

Nomi's agents are deployed on both the vehicle (end) side and the cloud side: the end-side model runs in the vehicle, while NomiGPT runs in the cloud. Deeply integrated with SkyOS, the end-side model can invoke atomic capabilities promptly and schedule resources (data, vehicle-control hardware/software, etc.) across domains to speed up response. NomiGPT in the cloud connects to more cloud tool interfaces, further enhancing the agents' tool-calling capability. The agent architecture sits in the SkyOS middleware layer; combined with SkyOS, calling atomic APIs, hardware/software, and data becomes more natural, coordinated, and faster.
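The end/cloud split described above amounts to a routing policy: latency-critical vehicle control stays with the end-side model, while open-ended requests go to the cloud model with its richer tool interfaces. The intent names below are hypothetical placeholders, not NIO's actual taxonomy.

```python
# Latency-critical control intents kept local (illustrative set).
END_SIDE_INTENTS = {"open_window", "set_ac", "adjust_seat", "wiper_on"}

def route(intent: str) -> str:
    # End-side: fast, directly schedules vehicle resources via the OS.
    # Cloud: slower round trip, but can call many more tool interfaces.
    return "end-side" if intent in END_SIDE_INTENTS else "cloud"
```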

Table of Contents

1 Overview of Automotive AI Agent
1.1 Definition of Agent
1.2 Development History of Agent
1.3 Foundation Models Regain Vitality Using the Agent Concept
1.4 Differences between Foundation Models, Agents, and AIGC
1.5 Automotive AI Agent Product Definition
1.6 Automotive AI Agent based on Multi-agent System: Module Design
1.6 Automotive AI Agent based on Multi-agent System: Component Functions
1.6 Automotive AI Agent based on Multi-agent System: Component Characteristics
1.7 Automotive AI Agent Reference Architecture (by Functional Module and Component)
1.7 Automotive AI Agent Reference Architecture (by Deployment Level)
1.8 Agent Architecture Case (1): Original Diagram of NIO (Nomi) Architecture
1.8 Agent Architecture Case (1): Original Diagram of NIO (Nomi) Deployment
1.8 Agent Architecture Case (1): NIO (Nomi) Module Design
1.8 Agent Architecture Case (1): NIO (Nomi) Module Design - Multimodal Perception
1.8 Agent Architecture Case (1): NIO (Nomi) Module Design - Command Distribution
1.8 Agent Architecture Case (1): NIO (Nomi) Module Design - Scenario Customization and Creation Process
1.8 Agent Architecture Case (1): Highlights of NIO (Nomi)
1.8 Agent Architecture Case (2): Original Diagram of Li Auto (Lixiang Tongxue) Architecture
1.8 Agent Architecture Case (2): Li Auto (Lixiang Tongxue) Module Design
1.8 Agent Architecture Case (2): Li Auto (Lixiang Tongxue) Supporting Facilities - Data/Training Platform
1.8 Agent Architecture Case (2): Li Auto (Lixiang Tongxue) Supporting Facilities - Reasoning Engine
1.8 Agent Architecture Case (3): Original Diagram of Xiaomi (Xiaoai Tongxue) Architecture
1.8 Agent Architecture Case (3): Xiaomi (Xiaoai Tongxue) Module Design
1.8 Agent Architecture Case (4): Zeekr Agent Module Design
1.8 Agent Architecture Case (5): Original Diagram of Neta Agent Architecture Deployment
1.8 Agent Architecture Case (5): Neta Agent Module Design
1.8 Agent Architecture Case (6): Original Diagram of BAIC Agent Architecture Deployment
1.8 Agent Architecture Case (6): BAIC Agent Module Design
1.8 Agent Architecture Case (7): Huawei (Pangu Agent) Module Design
1.8 Agent Architecture Case (8): Original Diagram of AISpeech Agent Architecture Deployment
1.8 Agent Architecture Case (8): AISpeech Agent Module Design
1.8 Agent Architecture Case (9): Original Diagram of Lenovo Agent Architecture Deployment
1.8 Agent Architecture Case (10): Original Diagram of Zhipu Agent Architecture Deployment
1.8 Agent Architecture Case (10): Zhipu Agent Module Design
1.8 Agent Architecture Case (11): Original Diagram of Tinnove Agent Architecture Deployment
1.8 Agent Architecture Case (11): Tinnove Agent Module Design
1.9 Agent Architecture Design Process: Framework Selection
1.9 Agent Architecture Design Process: Tool Calling Method
1.10 Comparison of Automotive AI Agent Architecture
2 Key Issues in Development of Automotive AI Agent Products - User Pain Points and Technical Difficulties
2.1 Classification of Automotive AI Agent Scenario: Typical Commands in Different Scenarios
2.1 Classification of Automotive AI Agent Scenario: Case (1) NIO
2.1 Classification of Automotive AI Agent Scenario: Case (2) Li Auto
2.1 Classification of Automotive AI Agent Scenario: Case (3) Xiaomi
2.2 Automotive AI Agent Scenario Design Case (1) Q&A Scenario
2.2 Automotive AI Agent Scenario Design Case (2) Q&A Scenario
2.2 Automotive AI Agent Scenario Design Case (3) Mobility Scenario
2.2 Automotive AI Agent Scenario Design Case (4) Chat Scenario
2.2 Automotive AI Agent Scenario Design Case (5) Chat Scenario
2.2 Automotive AI Agent Scenario Design Case (6) Chat Scenario
2.2 Automotive AI Agent Scenario Design Case (7) Q&A/Office Scenario
2.3 User Pain Points in Different Agent Usage Scenarios: Summary
2.4 User Pain Points (1): Vehicle Control Scenario
2.4 User Pain Points (2): Mobility Scenario
2.4 User Pain Points (3): Q&A Scenario
2.4 User Pain Points (4): Entertainment Scenario
2.5 Agent Technical Difficulties
2.6 Agent Technology Case: Intent Recognition (Case 1)
2.6 Agent Technology Case: Intent Recognition (Case 2)
2.6 Agent Technology Case: Intent Recognition (Case 3)
2.6 Agent Technology Case: Intent Recognition (Case 4)
2.6 Agent Technology Case: Reasoning Acceleration (Case 1)
2.6 Agent Technology Case: Reasoning Acceleration (Case 2)
2.6 Agent Technology Case: Reasoning Acceleration (Case 3)
2.6 Agent Technology Case: Streaming Voice (Case 1)
2.6 Agent Technology Case: Streaming Voice (Case 2)
2.6 Agent Technology Case: Streaming Voice (Case 3)
2.6 Agent Technology Case: Emotional Interaction (Case 1)
2.6 Agent Technology Case: Emotional Interaction (Case 2)
2.6 Agent Technology Case: Emotional Interaction (Case 3)
2.7 Agent Technology Trends (1): Two Keys to Achieving Active Intelligence
2.7 Agent Technology Trends (2):
2.7 Agent Technology Trends (3): Two Mainstream Design Methods for Emotional Anthropomorphism
3 OEMs’ AI Agent Investment, Development, and Operation
3.1 Comparison of Automotive AI Agent Development Support
3.2 OEMs’ Planning for Automotive AI Agents
3.3 Comparison between Three Automotive AI Agent Development Modes: Advantages/Disadvantages
3.3 Comparison between Three Automotive AI Agent Development Modes: Cost
3.4 Position Setting of OEMs’ AI Agent Team
3.4 Case of OEMs’ AI Agent Team Position Setting (1): Positions Recruited by Chery AI Agent Team
3.4 Case of OEMs’ AI Agent Team Position Setting (2): Positions Recruited by Geely AI Agent Team
3.4 Case of OEMs’ AI Agent Team Position Setting (3): Positions Recruited by Li Auto AI Agent Team
3.4 Case of OEMs’ AI Agent Team Position Setting (4): Positions Recruited by NIO AI Agent Team
3.4 Case of OEMs’ AI Agent Team Position Setting (5): Positions Recruited by Xiaomi AI Agent Team
3.5 AI Agent Development Cycle and Operation Mode
3.6 AI Agent Business: OEMs’ Profit Model
3.6 AI Agent Business: Suppliers’ Profit Model
3.6 AI Agent Business: Suppliers’ Charging Standards
3.7 Commercial Development Trends of Automotive AI Agents
4 Automotive AI Agent Suppliers and Their Supply Relationships
4.1 Cockpit Base Foundation Model: Model Configurations
4.1 Cockpit Base Foundation Model: Selection Reference Factors
4.2 Cockpit Base Foundation Model Suppliers
4.2 Cockpit Base Foundation Model Suppliers (10)
4.3 Industry Chain of Vector Database Suppliers
4.4 Comparison between Vector Database Products: Chinese Vector Databases
4.4 Comparison between Vector Database Products: Foreign Vector Databases
4.5 Vector Database Supplier Cases
4.6 Comparison between Voice ASR Module Suppliers
4.7 ASR Module Supplier Cases
4.8 Cockpit Data Collection Sensors: Mainstream Configurations/Data Collection Regulations
4.9 Sensor Data Processing Cases
4.9 Sensor Data Processing Cases (4)

Companies Mentioned

  • Chery
  • Geely
  • Li Auto
  • NIO
  • Xiaomi
  • Zeekr
  • Neta
  • BAIC
  • Huawei
  • AISpeech
  • Zhipu
  • Tinnove
  • Lenovo
