
As driving automation progresses to SAE Levels 3 and 4, Safety of the Intended Functionality (SOTIF) becomes more critical, especially in scenarios where the system operates without driver supervision for extended periods.

This discussion aims to explore how foreseeable misuse, human factors, and machine learning (ML) uncertainties should be systematically considered in the design and validation of L3/L4 systems.

Key focus areas include:

  • Methods to identify and model triggering conditions for foreseeable misuse.
  • The role of human behavior and misunderstanding in shared control or fallback situations.
  • Challenges in uncertainty representation within ML-based perception and decision systems.
  • Integration of these factors into simulation frameworks or scenario-based testing aligned with ISO 21448.
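To make the last point concrete, scenario-based testing under ISO 21448 is often framed as sampling concrete scenarios from a logical scenario's parameter space and checking a hazard criterion. The sketch below is a minimal, illustrative example: the parameter ranges, the pedestrian-crossing scenario, and the stopping-distance criterion are my own assumptions, not taken from the standard.

```python
import random

# Hypothetical logical scenario: pedestrian crossing ahead of an L3/L4
# vehicle. All parameter ranges are illustrative assumptions.
random.seed(42)

def sample_concrete_scenario():
    """Draw one concrete scenario from the logical scenario's parameter space."""
    return {
        "ego_speed_mps": random.uniform(5.0, 20.0),   # ego vehicle speed
        "ped_distance_m": random.uniform(10.0, 60.0), # distance to pedestrian
        "friction": random.uniform(0.3, 1.0),         # road surface condition
    }

def is_hazardous(s, reaction_time_s=1.2, g=9.81):
    """Crude surrogate criterion: can the ego vehicle stop in time?"""
    v = s["ego_speed_mps"]
    stopping = v * reaction_time_s + v**2 / (2 * s["friction"] * g)
    return stopping > s["ped_distance_m"]

scenarios = [sample_concrete_scenario() for _ in range(1000)]
hazardous = [s for s in scenarios if is_hazardous(s)]
print(f"{len(hazardous)} of {len(scenarios)} sampled scenarios are hazardous")
```

In a real SOTIF workflow this naive Monte Carlo sampling would typically be replaced by importance sampling or optimization-based falsification to concentrate effort on the boundary of the hazardous region.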

I'm interested in methodologies, frameworks, or real-world examples that help address these safety-critical aspects.
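On the uncertainty-representation point: one commonly cited technique for exposing epistemic uncertainty in ML-based perception is Monte Carlo dropout, where dropout is kept active at inference and the spread over repeated stochastic forward passes is used as an uncertainty signal. The toy sketch below uses a random single-layer "model" purely as a stand-in; the weights, input, and dropout rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained perception head: one dense layer + softmax
# over 3 object classes, fed by an 8-dimensional feature vector.
W = rng.normal(size=(8, 3))
x = rng.normal(size=8)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def mc_forward(x, n_samples=200, p_drop=0.3):
    """Repeated stochastic forward passes with dropout active (MC dropout)."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(x.shape) > p_drop          # Bernoulli dropout mask
        probs.append(softmax((x * mask / (1 - p_drop)) @ W))
    return np.array(probs)

samples = mc_forward(x)
mean_p = samples.mean(axis=0)                        # predictive mean
entropy = -np.sum(mean_p * np.log(mean_p + 1e-12))   # predictive uncertainty

print("mean class probabilities:", np.round(mean_p, 3))
print("predictive entropy:", round(float(entropy), 3))
```

A downstream planner could, for example, treat detections whose predictive entropy exceeds a threshold as candidates for a minimal-risk maneuver, which is one way such uncertainty estimates feed into L3/L4 fallback logic.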
