EVOLVING LANDSCAPES OF COLLABORATIVE TESTING FOR ADAS & AD

A Blueprint for New ADAS/AD Test Strategies

Fault-Injection Testing at the MIL Level

This user journey describes how fault-injection testing can already be applied at the model-in-the-loop (MIL) level and in which contexts it can supplement the classic fault-injection testing activities performed at the hardware level.

Motivation
Fault-injection testing serves two main purposes. First, it checks whether functionalities that are intended to be fault-tolerant indeed withstand the fault. Second, it analyzes how non-fault-tolerant functionalities behave when they fail due to a specific fault. Some measures are implemented in hardware, e.g. via redundancy, and therefore cannot be tested other than through hardware-in-the-loop (HIL) tests. For those measures that are part of the functional model, however, MIL testing is not only available but offers significant benefits when performed in advance of the HIL test.

As with requirements-based model-in-the-loop testing, fault-injection testing at the model level aims to find flaws in the functional model. This distinguishes it from later testing activities, in particular hardware-in-the-loop testing, which are carried out at a stage where the system under test is already assumed or even required to be free of errors. The main reason is that MIL testing takes place very shortly after or even during model development. Hence, feedback cycles are relatively short and error-correction costs are much lower than in HIL testing. Furthermore, fault-injection testing is very effective and highly automatable at the MIL level, allowing a system to be tested against hundreds of faults per minute with perfectly reproducible results.

There are limitations, of course. For example, MIL testing cannot simulate faults in components that are not part of the environment model, such as the failure of a physical sensor. In these cases, only their expected impact on the model can be simulated, which can be sufficient for test-driven development but not for proper system validation. Likewise, hardware robustness tests that involve applying physical stress to the devices can hardly be carried out at the MIL level.

User Journey
As explained above, MIL fault-injection testing is well suited to evaluate the behavior of a functional model in case of a fault, mainly its reaction to that fault. Measures such as fallback modes or even fault-tolerant controls are often realized in software and are already part of the functional model from which the code is derived or generated. In these cases, what is nominally termed a “fault” is nothing more than another valid input for the model under test. A different scenario is a fault that occurs inside the model under test. Both types of faults, external and internal, can be simulated and evaluated with appropriate MIL test tools.

Usually, the basis of fault-injection testing is a document that catalogs and specifies different types of faults along with the effects they may, must, and must not have on the functional system.
From this documentation, the test engineer derives semiformal definitions of specific faults he/she wants to test the system against. These can be external or internal faults or even combinations of both kinds. Based on the definitions, concrete test specifications can be derived. These often follow a generic pattern:

  1. Initialize the system and drive it to a certain state via regular stimulation
  2. Inject a fault
    a. to the environment model or
    b. to the stimulation of the model under test or
    c. into the model under test
  3. Continue stimulation and capture the system’s behavior under the fault

Besides the stimulation, the specification should contain the expected reaction to the fault in terms of expected internal and output signals.
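A minimal, self-contained Python sketch of this pattern is shown below. The DummySimulation class, the signal names, and the fixed-step loop are illustrative assumptions standing in for a real MIL tool and its test harness, not any particular tool’s API.

```python
from dataclasses import dataclass

class DummySimulation:
    """Trivial stand-in for a MIL tool: one input signal that normally follows
    the regular stimulation, and one output derived from it."""
    def __init__(self):
        self.signals = {"model/input": 0.0, "model/output": 0.0}
        self.overrides = {}

    def override_signal(self, path, value):
        self.overrides[path] = value

    def step(self, stimulus):
        self.signals["model/input"] = self.overrides.get("model/input", stimulus)
        self.signals["model/output"] = 2.0 * self.signals["model/input"]

@dataclass
class FaultSpec:
    target_signal: str      # where the fault is injected (case a., b., or c.)
    faulty_value: float
    injection_step: int     # step index at which the fault becomes active

def run_fault_injection_test(sim, fault, n_steps=20):
    log = []
    for k in range(n_steps):
        if k == fault.injection_step:            # 2. inject the fault
            sim.override_signal(fault.target_signal, fault.faulty_value)
        sim.step(stimulus=float(k))              # 1./3. regular stimulation
        log.append(dict(sim.signals))            # capture the behavior under the fault
    return log

log = run_fault_injection_test(DummySimulation(),
                               FaultSpec("model/input", 0.0, injection_step=10))
```

In a real setup, the captured log would also include the expected internal and output signals defined in the specification, so that the evaluation can be automated later on.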

Fault-injection testing is often applied in a closed-loop manner, i.e. the input ports of the system under test are stimulated by an environment model that in return consumes its outputs. In most cases, neither the environment model nor the model under test provides any fault-injection mechanisms. The test tool thus needs to locate the variables/signals where the fault shall be injected and override them with the specified faulty value. In cases a. and c. above, the tool must be able to override not only the functional model’s inputs but arbitrary signals inside both models’ hierarchies.
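The following sketch illustrates one way a test tool could resolve hierarchical signal paths and override arbitrary signals in either model. The nested dictionary is an assumed, simplified stand-in for a real tool’s model/signal tree; the path syntax and signal names are illustrative.

```python
def resolve(signal_tree, path):
    """Walk a hierarchical path such as 'env_model/sensors/distance_m'."""
    *parents, leaf = path.split("/")
    node = signal_tree
    for name in parents:
        node = node[name]
    return node, leaf

def inject(signal_tree, path, faulty_value):
    """Override the addressed signal with the specified faulty value
    (cases a. and c.: arbitrary signals, not only the model's inputs)."""
    node, leaf = resolve(signal_tree, path)
    if leaf not in node:
        raise KeyError(f"Unknown signal: {path}")
    node[leaf] = faulty_value

signal_tree = {
    "env_model": {"sensors": {"distance_m": 40.0}},
    "model_under_test": {"controller": {"integrator_state": 0.0}},
}
inject(signal_tree, "env_model/sensors/distance_m", 0.0)                    # external fault
inject(signal_tree, "model_under_test/controller/integrator_state", 1e6)   # internal fault
```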

During simulation, all relevant data must be captured. This encompasses, in particular, the (regular) stimulation, all variables/signals into which a fault is injected, and all output and internal signals that give relevant insight into the behavior of the system under the fault.

As with requirements-based MIL testing, an automated evaluation of the recorded signals against the expected signals can be performed in the post-processing step, followed by the extraction/generation of conveniently readable test reports.
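As a sketch of such a post-processing evaluation, the following compares a recorded signal sample-by-sample against its expected counterpart, assuming both are available as (time, value) lists on a common time grid; the signal values and the tolerance are illustrative.

```python
def compare_to_expected(recorded, expected, abs_tolerance):
    """Return (verdict, deviations) for a recorded signal checked sample-by-sample
    against its expected counterpart."""
    deviations = [
        (t_rec, value_rec, value_exp)
        for (t_rec, value_rec), (_, value_exp) in zip(recorded, expected)
        if abs(value_rec - value_exp) > abs_tolerance
    ]
    return ("passed" if not deviations else "failed"), deviations

# Example: the fallback mode is expected to hold the output at 0.0 after the fault.
recorded = [(0.0, 0.01), (0.1, 0.02), (0.2, 0.75)]
expected = [(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)]
verdict, deviations = compare_to_expected(recorded, expected, abs_tolerance=0.1)
print(verdict, deviations)   # "failed", one deviating sample at t = 0.2 s
```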

Example: Adaptive Cruise Control (ACC)
Consider an adaptive cruise control system that is tested against an environment model which provides the ACC model under test with its required inputs, including a simulated sensor value for the distance to the preceding vehicle.

The fault tolerance specification states that a (sudden) failure of the distance sensor must not cause the ACC to trigger a hard (potentially hazardous) deceleration.

In this example, the fault is injected into the stimulation of the model under test inside the closed loop (type b. of the above enumeration). Based on the electrical characteristics of the sensor, the test engineer identifies two logical signal values that would likely be fed to the system upon sensor failure: 0 and NaN (not a number).

The engineer provides a parameterized test specification that accelerates the lead car to 40 km/h within 8 seconds and keeps its speed constant thereafter. The ACC is enabled, so the ego car follows the lead car at a distance of 40 m. Two seconds after the final speed of 40 km/h has been reached, the fault is injected by instantly overriding the distance value with the parameter. The simulation is continued for another 5 seconds. During the entire time, at least the following signals are captured: lead car speed, ego car speed, lead car distance, and acceleration. Note that some of these are internal signals of the environment model.
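Expressed as code, such a parameterized specification could look like the following sketch; the dataclass, the signal names, and the speed-profile helper are assumptions for illustration rather than an actual test notation.

```python
from dataclasses import dataclass

def lead_car_speed_kmh(t_s):
    """Lead car accelerates to 40 km/h within 8 s and keeps that speed afterwards."""
    return 40.0 * min(t_s / 8.0, 1.0)

@dataclass
class AccFaultInjectionSpec:
    distance_fault_value: float                  # test parameter: 0.0 or float("nan")
    fault_signal: str = "stimulation/lead_car_distance_m"
    fault_time_s: float = 10.0                   # 2 s after 40 km/h has been reached
    post_fault_duration_s: float = 5.0
    captured_signals: tuple = (
        "env/lead_car_speed_kmh",                # internal signal of the environment model
        "env/ego_car_speed_kmh",
        "stimulation/lead_car_distance_m",
        "acc/acceleration_mps2",
    )
```

The total simulation time follows from fault_time_s plus post_fault_duration_s, i.e. 15 seconds in this example.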

Two test case implementations are generated with the respective parameter values 0 and NaN and executed automatically. For the evaluation of the test, the acceleration output of the ACC model is scanned for values below the legal deceleration threshold, i.e. for hard decelerations. After that, a fault-injection test report can be generated and, depending on the findings, passed back to the model engineers or to the archive.
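A hedged sketch of this instantiation and evaluation step is given below; the hard-deceleration limit of -3.5 m/s² and the example log are placeholders for illustration, and the evaluation simply scans the recorded acceleration output after the injection time.

```python
# Assumed placeholder for the permissible deceleration limit; not a normative value.
HARD_DECELERATION_LIMIT_MPS2 = -3.5

def evaluate_acc_test(parameter, fault_time_s, log):
    """Scan the ACC acceleration output recorded after fault injection for
    values below the permissible deceleration threshold (hard braking)."""
    violations = [
        (t, signals["acc/acceleration_mps2"])
        for t, signals in log
        if t >= fault_time_s
        and signals["acc/acceleration_mps2"] < HARD_DECELERATION_LIMIT_MPS2
    ]
    return {"parameter": parameter,
            "verdict": "passed" if not violations else "failed",
            "violations": violations}

# Two test case implementations (parameter = 0 and NaN); in practice the logs would
# come from executing the parameterized specification on the MIL simulation platform.
example_log = [(10.0, {"acc/acceleration_mps2": -0.8}),
               (10.1, {"acc/acceleration_mps2": -5.2})]
for parameter in (0.0, float("nan")):
    print(evaluate_acc_test(parameter, fault_time_s=10.0, log=example_log))
```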

Requirements concerning Test Specifications
The table below lists some requirements and characteristics of the presented example to allow a quick comparison to other user journeys based on common criteria.


Requirement: Evaluation
Requires scenario: No
Requires hardware: Simulation platform (usually a PC, server, or computing cluster)
Requires coordination of tools and models: Yes; test-harness creation and feedback to evaluation algorithms (usually both automated)
Requires exchangeability among test instances: Not required but beneficial
Configuration of stubs/drivers/mock-up: In large parts automatable/derivable from the model
Kind of interface between test and scenario: Scenarios may include fault definitions that can be used as a basis for fault-injection MIL testing