EVOLVING LANDSCAPES OF COLLABORATIVE TESTING FOR ADAS & AD

A Blueprint for New ADAS/AD Test Strategies

Requirements-Based Testing on Closed-Loop HIL + Requirements-Based Testing on SIL

This section presents requirements-based testing with scenarios for hardware-in-the-loop (HIL) and software-in-the-loop (SIL) environments. The following user journey gives a high-level view of the workflow.

The focus of the examples is on the interaction between test cases, scenarios, metrics, and test-environment-specific conditions.

User Journey
In this exemplary user journey, the different phases of the validation of ADAS/AD functions through requirements-based testing are shown. The focus is on integration tests for HIL, SIL, and possibly also model-in-the-loop (MIL) platforms:

Requirements are the starting point for testing driving functions. The test designer creates the test specification according to the requirements; the test specification contains the information about the test sequence.

Based on the test specification, all building blocks necessary for test execution are created, or already existing artifacts are reused. These include the scenario (optional), the test case, the metrics, and the test-bench-specific preconditions.
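To make the relationship between these building blocks concrete, the following minimal sketch shows how a test case could bundle a requirement link, an optional scenario reference, a metric, and test-bench-specific preconditions. All names (TestCase, Scenario, min_ttc_metric, the requirement ID, the file paths) are illustrative assumptions, not part of any specific toolchain or standard.

```python
# Illustrative sketch only: names and structure are assumptions, not a real toolchain API.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Scenario:
    """Reference to a scenario artifact, e.g. a scenario file (optional for a test case)."""
    path: str
    parameters: dict = field(default_factory=dict)


@dataclass
class TestCase:
    """Bundles the requirement link, an optional scenario, metrics, and preconditions."""
    requirement_id: str                    # traceability link to the requirement
    scenario: Optional[Scenario]           # optional, depends on the SUT
    metrics: list                          # evaluation functions over recorded signals
    preconditions: dict                    # test-bench-specific setup (SIL or HIL)


def min_ttc_metric(records: dict, threshold_s: float = 1.5) -> bool:
    """Example metric: the minimum time-to-collision must stay above a threshold."""
    return min(records["ttc"]) > threshold_s


# Hypothetical cut-in test case referencing a scenario and a metric.
cut_in_test = TestCase(
    requirement_id="REQ-ACC-042",
    scenario=Scenario(path="scenarios/cut_in.xosc", parameters={"ego_speed_kmh": 100}),
    metrics=[min_ttc_metric],
    preconditions={"platform": "SIL", "stubs": ["radar_mockup"]},
)
```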

Ideally, test executions are triggered by the availability of new versions of the driving functions. The results include the test report with the executed test procedure, records of scenario executions, further records (e.g. bus communication), and further metadata (e.g. software data, hardware data, additional test data) for traceability.
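The sketch below illustrates how such traceability metadata could be collected for a single test run. The field names, bench identifiers, and file names are assumptions chosen for illustration; real toolchains define their own report schemas.

```python
# Minimal sketch of a traceability record attached to a test report (assumed schema).
import datetime
from typing import Optional


def build_report_metadata(sut_version: str, test_case_id: str,
                          scenario_path: Optional[str]) -> dict:
    """Collect the metadata that makes a test result reproducible and traceable."""
    return {
        "test_case": test_case_id,
        "sut_software_version": sut_version,   # run triggered by a new driving-function version
        "scenario": scenario_path,             # None for tests executed without a scenario
        "hardware": {"bench_id": "HIL-07", "ecu_sample": "B2"},   # hypothetical HIL data
        "records": ["bus_trace.blf", "scenario_run.log"],         # e.g. bus communication logs
        "executed_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }


metadata = build_report_metadata("acc_v2.3.1", "REQ-ACC-042", "scenarios/cut_in.xosc")
```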

Requirements concerning Test Specifications

• Requires scenario: SIL: optional, depends on the SUT. HIL: optional, depends on the SUT (example: ESP-ECU).
• Requires hardware: SIL: hardware for the SIL environment (e.g. cloud server). HIL: SUT, restbus, and test bench definition and configuration required.
• Requires coordination of tools and models (toolchain orchestration): optional.
• Requires exchangeability among test instances: test criteria, models, scenarios, reports.
• Configuration of stubs/drivers/mock-ups: optional.
• Kind of interface between test and scenario: optional, depends on the relationship between test case and scenario (see examples).
• Covered by best practice, regulations, and standards: n.a.
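The following sketch translates two of the entries above (hardware requirements and stub/mock-up configuration) into hypothetical test-bench configurations. The keys and values are assumptions for illustration only, not a standardized configuration format.

```python
# Hypothetical test-bench configurations illustrating the requirements listed above.

SIL_BENCH = {
    "platform": "SIL",
    "compute": "cloud-server",                    # hardware for the SIL environment
    "stubs": ["radar_mockup", "camera_mockup"],   # configured stubs/drivers/mock-ups
    "scenario_required": False,                   # optional, depends on the SUT
}

HIL_BENCH = {
    "platform": "HIL",
    "sut": "ESP-ECU",                             # real ECU as system under test
    "restbus": "restbus_simulation.cfg",          # restbus configuration
    "bench_config": "hil_bench_07.yaml",          # test bench definition and configuration
    "scenario_required": False,                   # optional, depends on the SUT
}
```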

Focus of Use Case 4:

  • Testing of driving functions without scenarios, e.g. open-loop component testing in the field of SIL/MIL
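A minimal sketch of such a scenario-less, open-loop component test is shown below: recorded or synthetic inputs are replayed into the component and its outputs are checked directly. The component name, the signal names, and the tolerance are hypothetical placeholders.

```python
# Sketch of an open-loop SIL/MIL component test without a scenario (assumed names).

def lane_detection(camera_frame: dict) -> dict:
    """Placeholder for the component under test (would be the real SIL/MIL model)."""
    return {"lane_offset_m": camera_frame["ground_truth_offset_m"]}


def test_lane_offset_open_loop():
    # Replay recorded frames; no closed-loop scenario or environment simulation involved.
    recorded_frames = [
        {"ground_truth_offset_m": 0.1},
        {"ground_truth_offset_m": 0.3},
    ]
    for frame in recorded_frames:
        result = lane_detection(frame)
        # Open-loop check: output compared against recorded ground truth.
        assert abs(result["lane_offset_m"] - frame["ground_truth_offset_m"]) < 0.05


test_lane_offset_open_loop()
```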

Conclusion
The use cases are structurally very similar for HIL and SIL. Reuse of test cases and scenarios is possible but not trivial, since test-environment-specific steps are easily built into the artifacts unintentionally. A keyword-based approach can mitigate this problem by separating the general, keyword-based test specification from its implementation: depending on the platform (SIL, MIL, HIL), the corresponding keyword implementations are assigned.
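The sketch below illustrates this separation, assuming a simple keyword registry: the test specification only uses abstract keywords, while the SIL and HIL bindings differ. The keyword names and the registry structure are assumptions for illustration, not a prescribed framework.

```python
# Minimal sketch of a keyword-based test layer with platform-specific bindings (assumed names).

KEYWORD_IMPLEMENTATIONS = {
    "SIL": {
        "set_ego_speed": lambda kmh: print(f"[SIL] set model input ego speed to {kmh} km/h"),
        "check_brake_request": lambda: True,   # would read from the simulated bus model
    },
    "HIL": {
        "set_ego_speed": lambda kmh: print(f"[HIL] inject ego speed {kmh} km/h via restbus"),
        "check_brake_request": lambda: True,   # would read from the real bus interface
    },
}


def run_keyword_test(platform: str) -> bool:
    """The test specification stays identical; only the keyword bindings change per platform."""
    kw = KEYWORD_IMPLEMENTATIONS[platform]
    kw["set_ego_speed"](100)
    return kw["check_brake_request"]()


run_keyword_test("SIL")   # same specification executed against the SIL bindings
run_keyword_test("HIL")   # ... and against the HIL bindings
```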