EVOLVING LANDSCAPES OF COLLABORATIVE TESTING FOR ADAS & AD

A Blueprint for New ADAS/AD Test Strategies

Requirements-based Vehicle-in-the-Loop Testing

This section presents an example of a validation approach for a camera-based ALKS (automated lane keeping system) as a V-ECU (virtual electronic control unit, https://www.prostep.org/fileadmin/downloads/WhitePaper_V-ECU_2020_05_04-EN.pdf) at vehicle level by means of a scenario-based test. It is purely simulation-based and focuses on the technical aspects of SOTIF (safety of the intended functionality, ISO/PAS 21448) to establish a positive risk balance with respect to unknown risks. The presented user journey is intended as an example only; the workflow, terms, and roles described here are not prescriptive. Subsequently, examples for the automatic emergency braking (AEB) system and the automated lane keeping system (ALKS) illustrate this user journey in detail.

User Journey
The basic test idea is to compare the capabilities of the ALKS with those of a human driver on a closed-loop software-in-the-loop (SIL) test bench. The SUT (system under test) is the camera-based ALKS software stack, integrated as a V-ECU in a closed-loop simulation. The driver, the vehicle, and the driving environment are virtual and are implemented as models together with scenario and map data.

The test goal is to uncover unknown risks by exploring and generating scenarios and, finally, to provide evidence of a positive risk balance for the ALKS compared to a human driver in these scenarios.

Supporting Information
As supporting information and data, the system requirements, the ODD (operational design domain), a scenario database, credible simulation models, and qualified tools (e.g. a simulator) are required to define and implement the test specification. In this example, the system requirements state that the focus is on providing evidence that the ALKS does not introduce unreasonable risks compared to a competent human driver. The ODD is limited to speeds below 60 kph and to roads on which no pedestrians are expected (e.g. motorways), and the ALKS is intended for passenger vehicles only.
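To make such ODD constraints checkable during scenario selection and test evaluation, they can be captured in machine-readable form. The following is a minimal sketch in Python; the class and field names (OddDefinition, max_speed_kph, etc.) are illustrative assumptions and do not correspond to any standard or specific tool.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OddDefinition:
    """Illustrative, machine-readable capture of the example ODD."""
    max_speed_kph: float = 60.0           # ALKS active only below 60 kph
    road_types: tuple = ("motorway",)     # roads where no pedestrians are expected
    pedestrians_expected: bool = False
    vehicle_categories: tuple = ("passenger_car",)

    def admits(self, scenario_speed_kph: float, road_type: str) -> bool:
        """Return True if a concrete scenario lies inside the ODD."""
        return scenario_speed_kph < self.max_speed_kph and road_type in self.road_types

# Example: filter candidate scenarios against the ODD before test execution
odd = OddDefinition()
print(odd.admits(scenario_speed_kph=55.0, road_type="motorway"))  # True
print(odd.admits(scenario_speed_kph=80.0, road_type="motorway"))  # False
```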

Input Prerequisites
All the supporting information is turned into a complete and consistent test specification with the following content:

  • The test design specification is a “document specifying the features to be tested and their corresponding test conditions” [ISO 29119].
  • The test case specification is a “documentation of a set of one or more test cases” [ISO 29119]. It specifies (concrete) test cases consisting of concrete scenarios and one or more evaluation criteria, for example:
    – Use TTC (time to collision) and collision events as metrics, including expected results (e.g. TTC never below 1.0 s); a sketch of such a criterion follows after this list.
    – Vary scenario/map parameters (e.g. road curvature).
    – Create new scenarios by reordering existing scenarios.
  • The test procedure specification is a “document specifying one or more test procedures, which are collections of test cases to be executed for a particular objective [in execution order]” [ISO 29119]. Additionally, the test procedure specification may specify zero or more test bench configurations to be used to execute the test case(s).
  • Test bench configuration: e.g. SIL environment, scalable (cloud), qualified tools
  • Definition of the SUT (also called test object): e.g. ALKS as V-ECU Level 1, Version 8.3
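As a sketch of such a concrete evaluation criterion, the following Python snippet checks the TTC metric against the expected result named above (TTC never below 1.0 s) for one simulation run. The function names and the result structure are illustrative assumptions, not part of ISO 29119 or of any specific test tool.

```python
from typing import Iterable

TTC_THRESHOLD_S = 1.0  # expected result from the test case specification

def min_ttc(ttc_trace_s: Iterable[float]) -> float:
    """Minimum time-to-collision observed over one simulation run."""
    return min(ttc_trace_s)

def evaluate_run(ttc_trace_s: Iterable[float], collision_occurred: bool) -> str:
    """Apply the evaluation criteria of the test case to one run."""
    if collision_occurred:
        return "failed: collision event"
    if min_ttc(ttc_trace_s) < TTC_THRESHOLD_S:
        return f"failed: TTC fell below {TTC_THRESHOLD_S} s"
    return "passed"

# Example: a (fabricated) TTC trace sampled from one simulation run
print(evaluate_run(ttc_trace_s=[4.2, 2.8, 1.6, 1.2], collision_occurred=False))  # passed
```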

Processing
The processing phase is divided into the steps of test preparation, test execution, and test evaluation. During test preparation, all the specified metrics, models, and scenarios are put in place according to the given test specification. During test execution, many simulation runs are performed, each on a new concrete scenario; the concrete scenarios are created by parameter variation (e.g. of road curvature) of logical scenarios/maps.
During test evaluation, the specified metrics are applied to the simulation results to produce the test results and to generate aggregated information across all test results. The sketch below illustrates this flow.
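As an illustration of this flow, the following Python sketch varies one parameter of a logical scenario (road curvature), runs each resulting concrete scenario through a simulator, evaluates it, and aggregates the results. The simulator interface (run_simulation) is a placeholder assumption; in practice this would be the qualified SIL tool chain defined in the test bench configuration.

```python
import random

def run_simulation(curvature_1_per_m: float) -> dict:
    """Placeholder for one closed-loop SIL run of a concrete scenario.

    A real implementation would drive the qualified simulator; here we
    only return a fabricated minimum TTC so the sketch is runnable.
    """
    return {"min_ttc_s": random.uniform(0.5, 5.0), "collision": False}

def processing(num_runs: int = 100) -> dict:
    # Test preparation: parameter variation of the logical scenario/map
    curvatures = [random.uniform(0.0, 0.01) for _ in range(num_runs)]  # 1/m

    # Test execution: one simulation run per concrete scenario
    results = [run_simulation(c) for c in curvatures]

    # Test evaluation: apply the metrics and aggregate across all runs
    failed = [r for r in results if r["collision"] or r["min_ttc_s"] < 1.0]
    return {"runs": num_runs, "failed": len(failed),
            "pass_rate": 1.0 - len(failed) / num_runs}

print(processing())
```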

Outputs
Test execution and test evaluation produce test results and further data for analysis. In this way, unintended ALKS behavior is detected, can be analyzed, and can finally be corrected by the ALKS development team. New scenarios that turned out to be relevant but were “unknown” prior to this testing are an additional result. They are fed into the scenario database and used in subsequent testing activities to continuously measure the positive risk balance (KPI), as sketched below.
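How the positive risk balance KPI is computed is not prescribed here; one simple, purely illustrative operationalization is to compare the ALKS and a human driver model on the same set of scenarios, e.g. via the fraction of scenarios in which the ALKS performs at least as well as the driver model on the chosen metric. The following sketch assumes per-scenario minimum-TTC values for both; the values are fabricated.

```python
# Per-scenario minimum TTC [s] for the ALKS and for a human driver model
# (fabricated values; in practice these come from the test evaluation step)
alks_min_ttc   = {"scn_001": 2.4, "scn_002": 1.8, "scn_003": 0.9}
driver_min_ttc = {"scn_001": 2.1, "scn_002": 1.2, "scn_003": 1.4}

# Illustrative KPI: share of scenarios where the ALKS is at least as safe
# as the driver model on this metric
better_or_equal = [s for s in alks_min_ttc if alks_min_ttc[s] >= driver_min_ttc[s]]
kpi = len(better_or_equal) / len(alks_min_ttc)
print(f"positive risk balance KPI: {kpi:.2f}")  # 0.67

# Scenarios where the ALKS fell short are candidates for the scenario
# database ("previously unknown" scenarios) and for further analysis
new_scenarios = [s for s in alks_min_ttc if s not in better_or_equal]
print(new_scenarios)  # ['scn_003']
```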

Requirements concerning Test Specifications
The table below lists some requirements and characteristics of the presented example to allow a quick comparison to other user journeys based on common criteria.

Requirement | Evaluation
Requires scenario | Yes, the basic technique of the presented example is to generate and explore scenarios
Requires hardware | No, pure simulation-based approach
Requires coordination of tools and models | Yes, evaluation, exploration, and scenario generation need to be coordinated across test software, SIL simulator, test management software, and the scenario database
Requires exchangeability among test instances | At least the newly created scenarios go into the pool of “known scenarios” for other test activities
Configuration of stubs/drivers/mock-ups | Not applicable
Kind of interface between test and scenario | The test explores and generates new scenarios
Covered by best practice, regulations, and standards | Yes, the approach is based on ISO/PAS 21448, safety of the intended functionality (SOTIF)

Examples
Three examples of testing driving functions in different ways within scenario-based testing are described below. The focus is on the relationship between test cases and scenarios.

Focuses of Example 1

  • Strong coupling of test case and scenario
  • Parameterization of logical scenarios and logical test cases as reusable, independent units
  • Trigger from the scenario to the test case to execute an action
  • Trigger from the test case to the scenario to execute the next maneuver steps (see the sketch after this list)
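The following Python sketch illustrates this kind of trigger-based coupling: scenario and test case call back into each other when a trigger condition is reached or a check has been performed. The class and method names are purely illustrative and do not correspond to any specific tool or standard API.

```python
class Scenario:
    """Logical scenario that notifies the test case when a trigger condition fires."""
    def __init__(self):
        self.test_case = None
        self.step = 0

    def run_until_trigger(self):
        # Placeholder for simulating until e.g. a cut-in vehicle reaches a set distance
        print(f"scenario: maneuver step {self.step} reached trigger condition")
        self.test_case.on_trigger(self)      # trigger: scenario -> test case

    def next_maneuver_step(self):            # trigger: test case -> scenario
        self.step += 1
        print(f"scenario: continuing with maneuver step {self.step}")

class TestCase:
    """Test case that executes its action and then lets the scenario continue."""
    def on_trigger(self, scenario: Scenario):
        print("test case: executing action / evaluating criterion")
        scenario.next_maneuver_step()

# Wire the two tightly coupled artifacts together and run one trigger cycle
scenario, test_case = Scenario(), TestCase()
scenario.test_case = test_case
scenario.run_until_trigger()
```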

Focuses of Example 2

  • Reuse of artifacts
  • One test case is executed with different scenarios
  • One scenario is executed with different test cases (see the sketch below)
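This reuse can be pictured as executing the cross-product of independent test cases and scenarios. The following Python sketch is illustrative only; the names and the simple loop do not reflect a particular test management tool.

```python
from itertools import product

test_cases = ["check_min_ttc", "check_lane_keeping"]       # reusable test cases
scenarios  = ["cut_in_left", "slow_lead_vehicle", "curve"]  # reusable scenarios

# Each test case can be executed with every scenario, and vice versa
for test_case, scenario in product(test_cases, scenarios):
    print(f"executing {test_case} on {scenario}")
```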

Conclusion
The analysis of the different examples provides the following findings:

  1. In some cases, test cases and scenarios are more or less tightly coupled with each other. The concrete type of coupling is still partly tool-specific and proprietary and thus not transferable between tools. For better transferability and maintainability, the coupling could be standardized. Existing standards, such as OpenDRIVE, OpenSCENARIO, XIL, and OSI, might be used and extended.
  2. Test case and scenario are two independent artifacts, yet parameters are varied in both of them. Parameter variation should therefore take place across the two artifacts. The parameter variation as currently defined in OpenSCENARIO is thus not sufficient and should be reconsidered.