A Blueprint for New ADAS/AD Test Strategies

Requirements-Based Testing (MIL)

This user journey describes a requirements-based generic model-in-the-loop (MIL) testing process as applicable in model-based development processes. In this context, it is irrelevant whether the model artifacts are used for the generation of productive controller code or merely for the purpose of simulation. In both cases, subsequent development steps require a fault-free model to build upon.

MIL testing is most effective when applied shortly after, or even during, active model development. In contrast to hardware-in-the-loop (HIL) testing, which serves to verify a system’s function as comprehensively as possible, MIL testing is available much earlier in the development process and comes at significantly lower cost. It is therefore intended to find flaws in a (potentially still incomplete) functional model and immediately provide the documentation needed for remedy, rather than to prove the accuracy of a fully finished system.

User Journey
In requirements-based testing, tests are often derived directly from the requirements of the system under test.

Executing a MIL test always involves simulating the model under test on an appropriate simulator, usually part of the modeling tool, the test tool, or a standalone tool, often equipped with supplementary libraries and toolboxes. For the test implementation, however, there are two options: it can run as part of the simulation along with the model, or it can run externally and communicate with the simulator and the model via the test harness. In the latter case, the test tool is responsible for synchronizing both components. This is possible if it has full control over the simulation time, i.e. it can pause and resume the simulation or even execute it stepwise, and thus keep the test script perfectly synchronized with the model.

During execution, the simulator or the harness captures the relevant output data of the model. Capturing and storing the entire input data as well is recommended, because it allows the correctness of the stimulation to be verified ex-post and thus relieves the test tool and the test implementation of that responsibility.

In the context of white-box testing, the model coverage, i.e. the ratio of executed model parts over their total number, aggregated over the entire test suite, is a good metric of the suite’s comprehensiveness. If it is too low – 100% coverage is a frequent requirement – either further tests must be added to the suite or dead parts within the model should be eliminated.
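The stepwise synchronization and the coverage metric described above can be sketched as follows. The `Simulator` class and its API are illustrative assumptions, not the interface of any real tool:

```python
class Simulator:
    """Minimal stand-in for a model simulator that the test tool can
    drive stepwise, giving it full control over simulation time."""

    def __init__(self, model, step_size):
        self.model = model          # callable: (time, inputs) -> outputs
        self.step_size = step_size  # simulation step in seconds
        self.time = 0.0

    def step(self, inputs):
        """Advance the simulation by one step and return the outputs."""
        outputs = self.model(self.time, inputs)
        self.time += self.step_size
        return outputs


def run_test(simulator, stimulate, duration):
    """External test script: stimulates the model step by step and
    records all input and output data for ex-post inspection."""
    trace = []
    while simulator.time < duration:
        t = simulator.time
        inputs = stimulate(t)
        outputs = simulator.step(inputs)
        trace.append((t, inputs, outputs))
    return trace


def model_coverage(executed_parts, all_parts):
    """Coverage: ratio of executed model parts over their total number."""
    return len(set(executed_parts) & set(all_parts)) / len(all_parts)
```

Because the test script calls `step` itself, stimulation and observation stay perfectly aligned with simulation time, which is exactly the synchronization property described above.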

The test engineer can inspect the captured data against the previously specified acceptance criteria. Either this is done manually, or the test tool performs the evaluation automatically, creating a test result and report. If the report documents a mismatch between the test specification and the behavior of the model observed during simulation, it must be assumed that the model does not meet the requirements from which the test specification was derived. Failed test reports should therefore be fed back to the departments responsible for model specification and development.
In many cases, traceability of the original requirements throughout all test artifacts is desired: for each test case, test implementation, test execution, and test result, it is known which requirement(s) it validates against, and, conversely, for each requirement, which test artifacts cover it.
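A minimal sketch of such bidirectional traceability, using invented requirement and test IDs purely for illustration:

```python
from collections import defaultdict


class TraceabilityMatrix:
    """Bidirectional mapping between requirements and test artifacts
    (test cases, implementations, executions, results)."""

    def __init__(self):
        self.req_to_tests = defaultdict(set)
        self.test_to_reqs = defaultdict(set)

    def link(self, requirement_id, test_id):
        """Record that a test artifact validates against a requirement."""
        self.req_to_tests[requirement_id].add(test_id)
        self.test_to_reqs[test_id].add(requirement_id)

    def tests_for(self, requirement_id):
        """All test artifacts covering the given requirement."""
        return sorted(self.req_to_tests[requirement_id])

    def requirements_for(self, test_id):
        """All requirements the given test artifact validates against."""
        return sorted(self.test_to_reqs[test_id])
```

Querying in both directions is what makes it possible to answer, e.g., which requirements are affected by a failed test and which requirements still lack test coverage.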

Example: Test on Adaptive Cruise Control
Consider a simplified adaptive cruise control (ACC) system as part of the ADAS functionality of a modern vehicle. One of its requirements shall be “At a speed of more than 70 km/h, when the preceding vehicle is more than 25 km/h slower than the ego vehicle and less than 80 m ahead, the ACC triggers a warning sound.”

The ACC system is developed in a model-based fashion where productive controller code is auto-generated from a functional model and linked against an auxiliary, handwritten library. The ACC model has input ports for the current ego speed (Vego) and for the distance to the preceding vehicle (dpred). It provides an output for the aforementioned warning sound.
The test engineer derives an open-loop test specification from the above requirement. To provoke the ACC’s sound trigger, dpred must drop below 80 m at a rate of at least 25 km/h ≈ 6.94 m/s. The engineer formulates the following specification, which uses a drop rate of 10 m/s:

  • Stimulate Vego with a constant value of X km/h, where X is calibratable.
  • Stimulate dpred with a constant value of 90 m for 2 seconds. Then ramp down to 70 m within 2 seconds and keep that value for a further 2 seconds.

The expected output depends on the calibration. For X > 70 km/h, the sound trigger must rise from false to true exactly 3 seconds into the test, when dpred falls below the 80 m threshold. For X ≤ 70 km/h, the trigger must stay false for the entire test. Further initial conditions are not required in this simple scenario. Three calibrations are specified: {X=100}, {X=60}, and {X=0}.
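The stimulation profile and the expected behavior above can be written down as plain functions. This is a hedged sketch; the function names and units are my own:

```python
def dpred_stimulus(t):
    """Distance to preceding vehicle in metres: constant 90 m for 2 s,
    ramp down to 70 m within 2 s (i.e. at 10 m/s), then hold for 2 s."""
    if t < 2.0:
        return 90.0
    if t < 4.0:
        return 90.0 - 10.0 * (t - 2.0)
    return 70.0


def expected_warning(t, x_kmh):
    """Oracle derived from the requirement: for X > 70 km/h the warning
    must be active once dpred has fallen below the 80 m threshold,
    i.e. from t = 3 s onwards; for X <= 70 km/h it stays false."""
    if x_kmh <= 70.0:
        return False
    return dpred_stimulus(t) < 80.0
```

Of the three specified calibrations, only {X=100} ever expects a rising trigger; {X=60} and {X=0} expect the warning to remain false throughout.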

The test harness is created automatically for the given model under test. The test tool derives one test script per calibration, executes the scripts, and evaluates the results against the specified expected behavior. The generated report contains the actual data used for stimulation, so its validity depends only on the soundness of the simulator.
If the captured results deviate from the expected behavior, the report can be passed back to the model developer, who can use the recorded data to trace the failure. Otherwise, it can be archived.
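The automated evaluation step can be sketched as a sample-by-sample comparison of the captured trace against an oracle; the data layout is an assumption for illustration:

```python
def evaluate_run(trace, oracle):
    """Compare each captured (time, output) sample against the expected
    output from the oracle; return a verdict and the mismatches that
    would go into the failure report."""
    mismatches = [(t, got, oracle(t)) for t, got in trace if got != oracle(t)]
    return ("PASS" if not mismatches else "FAIL", mismatches)
```

Keeping the mismatching samples (time, actual, expected) in the report is what lets the model developer trace a failure back without re-running the simulation.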

With proper tooling, MIL testing allows the model under test to be simulated embedded in an environment or plant model with less effort than software-in-the-loop (SIL) or HIL testing.

In the above example of an ACC controller, such an environment model would also contain, and thus simulate, the remaining components of the ego vehicle. It could, for example, calculate dpred from the ego speed, which in turn is derived from the ACC’s output values for brakes and throttle, and feed this back to the ACC’s input. This way, the indirect influence of the outputs of the model under test on its own stimulation can be tested and evaluated.
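A minimal closed-loop sketch under heavily simplified assumptions: the plant integrates the controller’s acceleration command into the ego speed and the relative speed into dpred, feeding both back to the controller. All dynamics, names, and parameters are illustrative, not part of any real ACC:

```python
def closed_loop(acc_controller, v_pred, v_ego0, dpred0, dt=0.1, duration=10.0):
    """Co-simulate ACC and plant: each step, the plant turns the
    controller's acceleration command (m/s^2) into a new ego speed
    and updates dpred from the relative speed (Euler integration)."""
    v_ego, dpred = v_ego0, dpred0
    trace = []
    for k in range(int(round(duration / dt))):
        accel = acc_controller(v_ego, dpred)       # controller output
        v_ego = max(0.0, v_ego + accel * dt)       # plant: speed update
        dpred = max(0.0, dpred + (v_pred - v_ego) * dt)  # plant: distance
        trace.append((k * dt, v_ego, dpred))
    return trace


# Illustrative controller: brake at 3 m/s^2 while closer than 80 m.
def brake_below_80(v_ego, dpred):
    return -3.0 if dpred < 80.0 else 0.0
```

Running this with a slower preceding vehicle shows the feedback effect described above: the controller’s braking output slows the ego vehicle, which in turn changes dpred, the controller’s own input.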

Requirements concerning Test Specifications
The table below lists some requirements and characteristics of the presented example to allow a quick comparison to other user journeys based on common criteria.


Requirement | Evaluation
Requires scenario | Optional
Requires hardware | Simulation platform (usually a PC, server, or computing cluster)
Requires coordination of tools and models | Yes: test-harness creation and feedback to evaluation algorithms (usually both automated)
Requires exchangeability among test instances | Not required, but beneficial
Configuration of stubs/drivers/mock-ups | In large parts automatable/derivable from the model
Kind of interface between test and scenario | Only indirect, via system requirements that manifest themselves in a specific scenario

Focus of Use Case 4:

  • Testing of driving functions without scenarios, e.g. open-loop component testing in the field of SIL/MIL