This section presents an example of a validation approach for a camera-based ALKS (automated lane keeping system) realized as a V-ECU (virtual electronic control unit, https://www.prostep.org/fileadmin/downloads/WhitePaper_V-ECU_2020_05_04-EN.pdf) at vehicle level by means of a scenario-based test. The approach is purely simulation based and focuses on the technical aspects of SOTIF (safety of the intended functionality, ISO DPAS 21448), namely achieving a positive risk balance for unknown risks. The presented user journey is intended as an example only; the workflow, terms, and roles presented have no guiding character. Subsequently, examples for the emergency brake assist (AEB) and the automated lane keeping system (ALKS) illustrate this user journey in detail.
User Journey
The basic test idea is to compare the capabilities of an ALKS and of a human driver on a closed-loop software-in-the-loop (SIL) test bench. The SUT (system under test) is a camera-based ALKS closed-loop system, i.e. a software stack in the form of a V-ECU. The driver, the vehicle, and the driving environment are virtual and are implemented as models together with scenario and map data.
The test goal is to find unknown risks by exploring and generating scenarios and, finally, to provide evidence of a positive risk balance for the ALKS compared to a human driver in these scenarios.
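The closed-loop structure described above can be sketched in a few lines. This is a minimal, purely illustrative sketch with assumed interfaces and a toy proportional controller standing in for the real V-ECU; the actual camera, vehicle, and environment models are far richer.

```python
from dataclasses import dataclass

@dataclass
class CameraFrame:
    lane_offset_m: float  # lateral offset to the lane center, as seen by the camera model

@dataclass
class VehicleState:
    lateral_pos_m: float  # lateral deviation of the vehicle from the lane center
    speed_kph: float

def alks_controller(frame: CameraFrame) -> float:
    """Toy stand-in for the ALKS V-ECU: steer proportionally to the lane offset."""
    K_P = 0.5  # proportional gain (assumed value, for illustration only)
    return K_P * frame.lane_offset_m

def vehicle_model(state: VehicleState, steering: float, dt: float) -> VehicleState:
    """Toy lateral dynamics: the steering command directly shifts the vehicle."""
    return VehicleState(state.lateral_pos_m + steering * dt, state.speed_kph)

def run_closed_loop(initial_offset_m: float, steps: int, dt: float = 0.1) -> VehicleState:
    """One simulation run: environment -> camera -> V-ECU -> vehicle, repeated."""
    state = VehicleState(lateral_pos_m=initial_offset_m, speed_kph=55.0)
    for _ in range(steps):
        frame = CameraFrame(lane_offset_m=-state.lateral_pos_m)  # environment feeds the sensor
        steering = alks_controller(frame)                        # V-ECU computes actuation
        state = vehicle_model(state, steering, dt)               # vehicle model closes the loop
    return state
```

With these toy dynamics the vehicle converges back to the lane center, which is the kind of behavior the real test bench observes and evaluates per scenario.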
Supporting Information
As supporting information and data, the system requirements, the ODD (operational design domain), a scenario database, credible simulation models, and qualified tools (e.g. a simulator) are required to define and implement the test specification. In this example the system requirements state that the focus is on evidence that the ALKS does not introduce unreasonable risks compared to a competent human driver. The ODD covers speeds below 60 kph and assumes no pedestrians on the road (e.g. a motorway). It targets the use of ALKS in passenger vehicles only.
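The example ODD can be made machine-checkable so that every generated scenario is validated against it before execution. The following sketch encodes the three constraints above as a predicate; the field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ScenarioConditions:
    ego_speed_kph: float       # speed of the ego vehicle in the scenario
    pedestrians_present: bool  # whether the scenario contains pedestrians on the road
    vehicle_category: str      # e.g. "passenger_car", "truck" (naming assumed)

def within_odd(c: ScenarioConditions) -> bool:
    """True if the scenario conditions fall inside the example ODD:
    speed below 60 kph, no pedestrians, passenger vehicles only."""
    return (c.ego_speed_kph < 60.0
            and not c.pedestrians_present
            and c.vehicle_category == "passenger_car")
```

A scenario generator would call such a predicate to filter out-of-ODD parameter combinations before they reach the test bench.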
Input Prerequisites
All the supporting information is turned into a complete and consistent test specification with the following content:
- The test design specification is a “document specifying the features to be tested and their corresponding test conditions” [ISO 29119]
- The test case specification is a “documentation of a set of one or more test cases” [ISO 29119]
- It specifies (concrete) test cases consisting of concrete scenarios and one or more evaluation criteria
- E.g. use TTC (time to collision) and collision events as metrics, including expected results (e.g. TTC never below 1.0 s).
- Vary scenario/map parameters (e.g. road curvature)
- Create new scenarios by reordering existing scenarios
- The test procedure specification is a “document specifying one or more test procedures, which are collections of test cases to be executed for a particular objective [in execution order]” [ISO 29119]. Additionally, the test procedure specification may also specify zero or more test bench configurations that shall be used to execute the test case(s).
- Test bench configuration: e.g. SIL environment, scalable (cloud), qualified tools
- Definition of the SUT (also called the test object): e.g. ALKS as V-ECU Level 1, Version 8.3
Processing
The processing phase is divided into the steps test preparation, test execution, and test evaluation. During preparation, all specified metrics, models, and scenarios are put in place according to the given test specification. During test execution, many simulation runs are performed, each on a new concrete scenario. The concrete scenarios are created by parameter variation (e.g. of road curvature) of logical scenarios/maps.
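Deriving concrete scenarios from a logical scenario by parameter variation amounts to enumerating combinations over the parameter ranges. A minimal sketch, with assumed parameter names (road curvature and initial speed):

```python
import itertools

def concrete_scenarios(curvatures_1pm, speeds_kph):
    """Cross-product of parameter ranges: one concrete scenario per combination.
    Each concrete scenario fixes every open parameter of the logical scenario."""
    return [{"road_curvature_1pm": c, "initial_speed_kph": v}
            for c, v in itertools.product(curvatures_1pm, speeds_kph)]
```

In practice, sampling strategies beyond a full cross-product (e.g. search-based or importance sampling) keep the number of simulation runs tractable; the cross-product shown is the simplest case.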
During test evaluation, the specified metrics are applied to the simulation results to produce the test results and to generate aggregated information across all test results.
Outputs
Test execution and test evaluation produce test results and further data for analysis. In this way, unintended ALKS behavior is detected, can be analyzed, and can finally be corrected by the ALKS development team. New scenarios that turned out to be important but were "unknown" prior to this testing are an additional result. They are fed into the scenario database and deployed in subsequent testing activities to continuously measure the positive risk balance (KPI).
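The aggregation into a positive-risk-balance KPI can be sketched as the fraction of scenarios in which the ALKS performed at least as safely as the human-driver reference. The result structure and the "critical events" comparison are assumptions for illustration; real KPIs would be defined in the test specification.

```python
def risk_balance_kpi(results):
    """Fraction of scenarios where the ALKS caused no more critical events
    (e.g. collisions, TTC violations) than the human-driver reference model."""
    better_or_equal = sum(1 for r in results
                          if r["alks_critical_events"] <= r["driver_critical_events"])
    return better_or_equal / len(results)
```

Tracking this value over successive test campaigns, as new "unknown" scenarios enter the database, makes the continuous measurement of the risk balance concrete.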