EVOLVING LANDSCAPES OF COLLABORATIVE TESTING FOR ADAS & AD

A Blueprint for New ADAS/AD Test Strategies

Hardware Reprocessing / Data Replay

This section introduces data replay (DR) testing. DR is an open-loop test methodology in which recorded data sets are replayed to the interfaces of a system under test (SUT) and the responses of the SUT are evaluated against reference or ground truth (GT) data. This methodology can be deployed on multiple test platforms, either as software data replay (SW-DR) or as hardware data replay (HW-DR).
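As a minimal sketch of the open-loop principle, the following Python snippet replays recorded frames to a SUT and scores its detections against GT labels. The frame layout, the detect() callback, and the IoU threshold are illustrative assumptions, not part of any specific tool chain.

    # Minimal open-loop data replay sketch (SUT interface and frame format are assumed).
    from dataclasses import dataclass
    from typing import Callable, List, Sequence, Tuple

    Box = Tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

    @dataclass
    class Frame:
        sensor_data: bytes      # recorded raw sensor payload
        gt_boxes: List[Box]     # ground-truth object boxes for this frame

    def iou(a: Box, b: Box) -> float:
        """Intersection over union of two axis-aligned boxes."""
        ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union > 0 else 0.0

    def replay_and_score(frames: Sequence[Frame],
                         detect: Callable[[bytes], List[Box]],
                         iou_threshold: float = 0.5) -> float:
        """Feed recorded frames to the SUT and return the positive detection rate."""
        matched, total_gt = 0, 0
        for frame in frames:
            # Open loop: the SUT output never influences the replayed input.
            detections = detect(frame.sensor_data)
            for gt in frame.gt_boxes:
                total_gt += 1
                if any(iou(gt, det) >= iou_threshold for det in detections):
                    matched += 1
        return matched / total_gt if total_gt else 1.0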

User Journey
The following figure presents the user journey for the validation of an environment perception component with data replay. The phases of this journey can be generalized beyond environment perception components to the whole ADAS/AD function spectrum. Depending on the nature of the SUT, the test platform can take the shape of a purely software test platform (SW-DR) or, once the SUT is deployed on the target system on chip (SoC), a hardware test platform (HW-DR):

The test journey starts with the requirements definition. In this phase the requirements engineer and the test manager define the scope of the DR test campaign. Once the SUT success and failure criteria are identified, such as the percentage of positive object detections in this example, the operational design domain (ODD) of the function must be clearly defined. This is an important step, because the data management software uses the ODD criteria to select the relevant data sets out of the petabytes of recorded data.
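To illustrate this selection step, the sketch below filters a recording catalogue by ODD attributes. The catalogue schema and the attribute names (road_type, daytime, weather) are hypothetical placeholders for whatever metadata the actual data management software provides.

    # Hypothetical ODD-based selection from a recording catalogue (illustrative only).
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class Recording:
        uri: str               # location of the recorded data set in the data lake
        tags: Dict[str, str]   # ODD-related metadata, e.g. road type, daytime, weather

    def select_by_odd(catalogue: List[Recording],
                      odd: Dict[str, List[str]]) -> List[Recording]:
        """Keep only recordings whose tags satisfy every ODD criterion."""
        return [rec for rec in catalogue
                if all(rec.tags.get(key) in allowed for key, allowed in odd.items())]

    # Example ODD: highway driving in daylight, dry or cloudy weather.
    odd_criteria = {
        "road_type": ["highway"],
        "daytime": ["day"],
        "weather": ["dry", "cloudy"],
    }
    # selected = select_by_odd(catalogue, odd_criteria)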

The test designer then takes on the task of selecting the right data sets out of the data lake. In addition, he/she creates the test cases for each category of these data sets, selects the DR test platform (SW-DR or HW-DR test station), and selects the SUT version (always the latest SUT version or a fixed released version). He/she is also responsible for grouping the multiple test cases into a test suite so that the results of the individual test executions are aggregated in a single viewpoint.
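One possible way to capture these design decisions is a small suite description such as the sketch below. The field names (platform, sut_version, dataset_uri) and the example URIs are assumptions meant to mirror the text, not a specific test management schema.

    # Illustrative test suite description grouping test cases per data set category.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TestCase:
        name: str
        dataset_uri: str       # recorded data set selected from the data lake

    @dataclass
    class TestSuite:
        name: str
        platform: str          # "SW-DR" or "HW-DR"
        sut_version: str       # e.g. "latest" or a fixed released version
        test_cases: List[TestCase] = field(default_factory=list)

    suite = TestSuite(
        name="highway_day_perception",
        platform="SW-DR",
        sut_version="latest",
        test_cases=[
            TestCase("tc_highway_day_001", "datalake://recordings/hw_day/0001"),
            TestCase("tc_highway_day_002", "datalake://recordings/hw_day/0002"),
        ],
    )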

Once the test specification is complete, the test engineer takes over the automated test execution. The tests can be executed sequentially or in parallel. Parallel execution is preferred in order to shorten the test cycle and, correspondingly, the time to market. The results are then presented in a test report with all test result artifacts attached to it, such as the execution log data and the test metadata. Moreover, a preview tool enables test debugging in case of a test failure.
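The sketch below illustrates parallel execution of the test cases with an aggregated result report. The run_test_case() stub and the pass criterion stand in for the real test platform and evaluation logic and are purely illustrative.

    # Illustrative parallel execution of DR test cases with an aggregated report.
    from concurrent.futures import ThreadPoolExecutor
    from typing import Dict, List, Tuple

    def run_test_case(name: str, dataset_uri: str) -> Dict[str, object]:
        """Placeholder for one DR test execution; the real platform would replay the data set."""
        detection_rate = 0.0   # would come from the replay evaluation, e.g. replay_and_score()
        return {"test_case": name, "dataset": dataset_uri,
                "detection_rate": detection_rate,
                "verdict": "PASS" if detection_rate >= 0.9 else "FAIL"}

    def run_suite(test_cases: List[Tuple[str, str]], max_workers: int = 4) -> Dict[str, object]:
        """Execute all test cases in parallel and aggregate the verdicts into one report."""
        with ThreadPoolExecutor(max_workers=max_workers) as pool:
            results = list(pool.map(lambda tc: run_test_case(*tc), test_cases))
        return {"passed": sum(r["verdict"] == "PASS" for r in results),
                "failed": sum(r["verdict"] == "FAIL" for r in results),
                "results": results}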

Requirements concerning Test Specifications
The table below lists some requirements and characteristics of the presented example to allow a quick comparison to other user journeys based on common criteria.

Requirement: Requires scenario
Evaluation: No. A scenario is not required. A selection of recorded data sets out of the data lake, according to the criteria of the ODD, is used.

Requirement: Requires hardware
Evaluation: Depending on the test target, SW-DR or HW-DR test stations can be selected. For functional testing, especially at the beginning of the software development, SW-DR is preferable because of its ease of scalability and parallelism. HW-DR is preferred in later development cycles for function testing as well as for robustness testing in different traffic scenarios. Both test platforms are usable for failure insertion on bus/network communication as well as on sensor data streams (see the sketch after this table).

Requirement: Requires coordination of tools and models
Evaluation: Coordination of the data replay test platform, the replay data management, and the test management.

Requirement: Requires exchangeability among test instances
Evaluation: A single SUT can be tested with multiple data sets, making the exchangeability of test artifacts across different test instances relevant.

Requirement: Configuration of stubs/drivers/mock-up
Evaluation: Optional.

Requirement: Kind of interface between test and scenario
Evaluation: Does not apply.
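As a rough illustration of failure insertion on a replayed sensor data stream, the sketch below drops or corrupts selected frames before they reach the SUT. The fault types and rates are arbitrary examples, not a defined fault catalogue of any test platform.

    # Illustrative failure insertion on a replayed sensor data stream (fault types are examples).
    import random
    from typing import Iterable, Iterator

    def inject_faults(frames: Iterable[bytes],
                      drop_rate: float = 0.01,
                      corrupt_rate: float = 0.01,
                      seed: int = 42) -> Iterator[bytes]:
        """Yield the replayed frames with occasional drops or bit flips for robustness testing."""
        rng = random.Random(seed)
        for frame in frames:
            roll = rng.random()
            if roll < drop_rate:
                continue                          # simulate a lost frame on the sensor link
            if roll < drop_rate + corrupt_rate and frame:
                corrupted = bytearray(frame)
                corrupted[rng.randrange(len(frame))] ^= 0xFF   # simulate a corrupted payload byte
                yield bytes(corrupted)
            else:
                yield frame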