Functional safety research: under "Equal Rights for Intelligent Driving," safety of the intended functionality (SOTIF) design is crucial
As Chinese new energy vehicle manufacturers promote "Equal Rights for Intelligent Driving," the stakes of safety design are rising: when a high-level autonomous driving system is in operation, only 1-2 seconds may elapse between the system issuing a takeover request and an actual collision. The importance of safety of the intended functionality (SOTIF) design by OEMs is therefore self-evident, and mandatory industry standards, laws, and regulations are essential. In the case of the functional safety standard ISO 26262, for example, accountability mechanisms can compel OEMs to take safety design seriously.
In recent years, OEMs and suppliers have placed greater emphasis on functional safety certification. According to statistics compiled from public information, Chinese companies obtained 134 functional safety certifications in 2024, including 52 functional safety product certifications (up from 44 in 2023).
In addition to functional safety certification, and driven by the formal implementation of SOTIF standards, more than 20 OEMs and suppliers over the past two years, including Great Wall Motor, FAW Hongqi, Changan, GAC, Horizon Robotics, Jingwei Hirain, Huawei, Desay SV, and SenseAuto, have deployed SOTIF processes and obtained pre-certification, laying a safety foundation for their further deployment of autonomous driving systems.
In terms of regulation, ISO is incorporating AI into functional safety standards.
On the regulation front, in December 2024, the International Organization for Standardization (ISO) officially released ISO/PAS 8800:2024 Road Vehicles - Safety and Artificial Intelligence. This standard aims to manage and enhance the safety of AI systems in road vehicles and to provide a comprehensive safety framework and guidelines for the ever wider adoption of AI technology in the automotive sector.
The core content of ISO/PAS 8800 includes AI safety lifecycle management, safety requirements for AI systems, design and verification processes, AI system safety analysis, and data-related safety considerations. Its implementation will help OEMs, component suppliers, and software developers systematically identify and manage potential risks in AI-related technology applications, thereby improving the overall safety of automotive products.
Additionally, ISO plans to include safety requirements for AI systems in the third edition of ISO 26262, scheduled for release in 2027. This will cover failure mode identification for deep learning models, safety mechanism design, and verification methods.
The new third edition will require OEMs to establish a full lifecycle management system for AI development, covering transparency and traceability in data collection, model training, deployment verification, and other stages. For example, formal verification will be required to bound neural network outputs and ensure their determinism, and safety cases will have to be established for AI components.
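As an illustration of what formally bounding neural network outputs can mean in practice, below is a minimal Python sketch of interval bound propagation over a toy fully connected ReLU network. The layer sizes, weights, and input tolerances are illustrative assumptions for the example, not requirements taken from the standard.

```python
import numpy as np

def interval_bound_propagation(weights, biases, x_lo, x_hi):
    """Propagate an input box [x_lo, x_hi] through a ReLU MLP.

    Returns guaranteed lower/upper bounds on every output, a simple
    formal argument that outputs stay within a known range for any
    input inside the box.
    """
    lo, hi = np.asarray(x_lo, float), np.asarray(x_hi, float)
    for i, (W, b) in enumerate(zip(weights, biases)):
        W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
        new_lo = W_pos @ lo + W_neg @ hi + b   # affine interval lower bound
        new_hi = W_pos @ hi + W_neg @ lo + b   # affine interval upper bound
        if i < len(weights) - 1:               # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Illustrative toy network: 2 inputs -> 3 hidden units -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]

# For inputs perturbed within +/-0.1 of a nominal operating point,
# compute the range the output is guaranteed to stay inside.
lo, hi = interval_bound_propagation(weights, biases, x_lo=[0.9, -0.1], x_hi=[1.1, 0.1])
print("guaranteed output range:", lo, hi)
```

Such guaranteed ranges can then be compared against the acceptance limits recorded in the safety case for the AI component.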
Furthermore, in January 2024, SC 42, the joint ISO and IEC committee that develops international standards for artificial intelligence (AI), released ISO/IEC TR 5469:2024 Artificial Intelligence - Functional Safety and AI Systems. The technical report aims to bridge the differences between traditional functional safety development processes and the technical characteristics and development processes of AI, and to enable the gradual application of AI in functional safety systems. It covers the application and usage levels of AI in safety-related systems, the components of AI technology, the unique technical characteristics and risks AI introduces compared with non-AI technology, how to apply AI in functional safety systems, how to use non-AI technology to ensure the safety of AI-controlled systems, and practical techniques for designing and developing safety-related functions using AI systems.
Suppliers Roll Out Functional Safety Solutions for AI Systems
Facing challenges in AI system safety, suppliers such as Bosch and NVIDIA have introduced AI system safety solutions.
For intelligent driving, Bosch has proposed an AI Safety mechanism. Its Chinese and global teams have applied years of AI safety expertise, spanning pre-research, practical processes, methodologies, and tools, to every stage of the functional safety development cycle for high-level intelligent driving solutions, including data selection, model safety, and model verification, so as to ensure the safety of AI-driven driving systems in all aspects.
Bosch has also introduced an innovative, systematic, and structured solution, the Machine Learning Development V-Model Process, which builds on the traditional system/software development V-model and extends it with a data-driven approach, referred to as the Data-Driven Engineering (DDE) process.
DDE provides a systematic process for ML system development, featuring a flexible and scalable operational design domain (ODD) analysis method. It standardizes data management methods for ML system development and provides infrastructure for safety analysis, testing, verification, and functional iteration of ML systems.
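As a rough illustration of the ODD and data-coverage bookkeeping such a process relies on, the hypothetical Python sketch below models an ODD as finite attribute sets and checks whether individual scenes and whole datasets fall within it. The attribute names and values are invented for the example; real ODD taxonomies are far richer.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical, simplified ODD: each attribute lists the conditions the
# system is designed to handle.
ODD = {
    "road_type": {"highway", "urban"},
    "weather":   {"clear", "rain"},
    "lighting":  {"day", "night"},
}

@dataclass
class Scene:
    road_type: str
    weather: str
    lighting: str

def in_odd(scene: Scene) -> bool:
    """True if every attribute of the scene lies inside the declared ODD."""
    return all(getattr(scene, attr) in allowed for attr, allowed in ODD.items())

def odd_coverage(dataset: list[Scene]) -> float:
    """Fraction of ODD attribute combinations represented in the dataset,
    a crude proxy for the data-coverage checks a DDE-style process needs."""
    combos = set(product(*ODD.values()))
    seen = {(s.road_type, s.weather, s.lighting) for s in dataset if in_odd(s)}
    return len(seen) / len(combos)

dataset = [Scene("highway", "clear", "day"), Scene("urban", "rain", "night")]
print(in_odd(Scene("highway", "snow", "day")))        # False: outside the ODD
print(f"ODD coverage: {odd_coverage(dataset):.0%}")   # 2 of 8 combinations
```

Coverage gaps flagged this way can then drive targeted data collection and the next functional iteration.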
With the support of AI foundation models, each stage of the functional safety process in vehicle function development, including hazard identification, risk assessment, functional safety concept definition, system design, and safety implementation, can benefit from AI.
For example, in the hazard identification phase, AI and LLMs can assist by analyzing vast datasets, historical accident records, and industry reports. They process unstructured data, such as natural language documents, to extract insights that traditional methods might overlook and to flag potential hazards that human reviewers could miss.
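A minimal, hypothetical sketch of this idea is shown below: an LLM is prompted to extract hazard candidates from an unstructured incident report. The prompt wording, the placeholder model name, and the use of the OpenAI Python client are illustrative assumptions, and any such output would still require review by qualified safety engineers.

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Illustrative prompt asking for HARA-style hazard candidates as JSON.
PROMPT = (
    "You are assisting a hazard analysis and risk assessment (HARA) study.\n"
    "From the incident report below, list potential vehicle-level hazards as a\n"
    "JSON array of objects with fields 'hazard', 'operating_situation' and\n"
    "'possible_harm'.\n\nReport:\n{report}"
)

def extract_hazards(report_text: str) -> list[dict]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT.format(report=report_text)}],
    )
    # Naive parsing for illustration; production use would need structured
    # output handling and human sign-off on every extracted hazard.
    return json.loads(response.choices[0].message.content)

report = ("During a highway lane change in heavy rain, the perception stack "
          "briefly lost track of a motorcycle in the adjacent lane.")
for hazard in extract_hazards(report):
    print(hazard["hazard"], "->", hazard["possible_harm"])
```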
In October 2024, Jingwei Hirain launched its self-developed HIRAIN FuSa AI Agent, a functional safety agent that can automatically conduct hazard analysis and risk assessment for functional safety analysis targets, set safety goals, perform safety analysis, derive safety requirements, and continuously carry out R&D testing and verification to ensure vehicle safety.
At GTC 2025, NVIDIA announced NVIDIA Halos, a full-stack, comprehensive safety system for autonomous vehicles that brings together NVIDIA’s lineup of automotive hardware and software safety solutions with its cutting-edge AI research in AV safety.
Halos is a holistic safety system on three different but complementary levels. At the technology level, it spans platform, algorithmic and ecosystem safety. At the development level, it includes design-time, deployment-time and validation-time guardrails. And at the computational level, it spans AI training to deployment, using three powerful computers - NVIDIA DGX for AI training, NVIDIA Omniverse and NVIDIA Cosmos running on NVIDIA OVX for simulation, and NVIDIA DRIVE AGX for deployment.
Serving as an entry point to Halos is the NVIDIA AI Systems Inspection Lab, which allows automakers and developers to verify the safe integration of their products with NVIDIA technology. The AI Systems Inspection Lab has been accredited by the ANSI National Accreditation Board for an inspection plan integrating functional safety, cybersecurity, AI safety and regulations into a unified safety framework.
The NVIDIA DRIVE AI Systems Inspection Lab also complements the missions of independent third-party certification bodies, including technical service organizations such as TÜV SÜD, TÜV Rheinland and exida, as well as vehicle certification agencies such as VCA and KBA. It dovetails with recent significant safety certifications and assessments of NVIDIA automotive products.
Table of Contents
1 Status Quo and Development Trends of Vehicle Functional Safety
2 Status Quo and Related Scenario Cases of Vehicle SOTIF
3 Standards and Policies Concerning Vehicle Functional Safety and SOTIF
4 Development of Vehicle Functional Safety and SOTIF Certifications
5 Functional Safety Requirements, Design and Cases of Major Automotive Components and Systems
6 Functional Safety and SOTIF Layout of OEMs
Companies Mentioned
- SGS Group
- TÜV Rheinland
- TÜV SÜD
- DNV
- UL Solutions
- DEKRA
- ResilTech
- Bureau Veritas (BV)
- Exida
- Changan
- GAC Group
- Great Wall Motor
- Geely
- IM Motors
- NIO
- XPeng
- Li Auto
- BMW
- Mercedes-Benz
- Ford
- Volvo
- Jingwei HiRain
- VECTOR
- Bosch
- Continental
- eSOL
- Synopsys
- CICV
- Saimo Technology
- Worthy Technology
- OMNEX
- PARASOFT
- MUNIK
- SafenuX
Methodology