EVOLVING LANDSCAPES OF COLLABORATIVE TESTING FOR ADAS & AD

A Blueprint for New ADAS/AD Test Strategies

AI Specific Aspects

The algorithm paradox

Although the boundaries of artificial intelligence have tended to shift over time, a core objective of AI research has been to automate or replicate intelligent behavior. Conditions encountered while driving are arbitrarily complex and effectively infinite-dimensional. As such, manually encapsulating and defining generalized rules that dictate safe and effective driving is impossible. Through its ability to learn complex rules automatically from data, Machine Learning has emerged as the major paradigm for creating ADAS/AD systems. For highly automated vehicles, especially at SAE Level 4 or 5, this means AI applications can enable the processing, selection or extraction, and interpretation of data during tests in real time, while simultaneously monitoring themselves.

Machine Learning (ML) Modules in ADAS/AD Software Stack
In the ADAS/AD software stack, ML is usually applied, to varying degrees, to scene understanding and ego-vehicle planning.

  • Scene understanding/perception: Understanding the world and recreating it in a model. This involves the two further steps of perception and behavior prediction
  • Motion/trajectory planning: Navigating using the model as a proxy for the world

Source: Waymo
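The two module types above can be sketched as a minimal perception → prediction → planning pipeline. This is purely illustrative: all class names, function names, and thresholds below are assumptions for the sketch, not taken from any real ADAS/AD stack.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class DetectedObject:
    kind: str        # e.g. "vehicle", "pedestrian"
    position: tuple  # (x, y) in ego coordinates, metres

def perceive(sensor_frame: dict) -> List[DetectedObject]:
    """Scene understanding: turn raw sensor data into a world model."""
    return [DetectedObject(o["kind"], tuple(o["pos"])) for o in sensor_frame["objects"]]

def predict(objects: List[DetectedObject]) -> List[DetectedObject]:
    """Behavior prediction: extrapolate each object one step ahead (toy constant offset)."""
    return [DetectedObject(o.kind, (o.position[0] + 1.0, o.position[1])) for o in objects]

def plan(predicted: List[DetectedObject]) -> str:
    """Motion planning: decide an action using the model as a proxy for the world."""
    close = any(abs(o.position[1]) < 1.5 and o.position[0] < 10.0 for o in predicted)
    return "brake" if close else "cruise"

frame = {"objects": [{"kind": "pedestrian", "pos": [8.0, 0.5]}]}
print(plan(predict(perceive(frame))))  # -> brake
```

The point of the sketch is the interface boundary: the planner never sees raw sensor data, only the world model that perception and prediction produce, which is exactly where ML modules slot into the stack.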

Development of ML Modules
As shown in the figure, ML modules are created through the following steps, in order: data collection, labeling, data splitting, training, and evaluation.
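The five steps above can be mirrored in a toy end-to-end loop. Everything here is a stand-in under stated assumptions: the "data" is a synthetic scalar feature, labeling is rule-based (in practice it would be human or automated annotation), and the "model" is a single-parameter threshold classifier.

```python
import random

random.seed(0)

# 1) Data collection: raw samples (one scalar "feature" per sample)
raw = [random.uniform(0, 10) for _ in range(200)]

# 2) Labeling: attach ground truth (rule-based here, for illustration only)
labeled = [(x, int(x > 5.0)) for x in raw]

# 3) Data split: hold out part of the data for evaluation
random.shuffle(labeled)
train, test = labeled[:150], labeled[150:]

# 4) Training: fit the single parameter of a threshold classifier
best_t = min((t * 0.1 for t in range(100)),
             key=lambda t: sum((x > t) != y for x, y in train))

# 5) Evaluation: accuracy on the held-out split only
accuracy = sum((x > best_t) == y for x, y in test) / len(test)
print(f"threshold={best_t:.1f} accuracy={accuracy:.2f}")
```

The separation in step 3 is the part that carries over to real ADAS/AD development: evaluation data must never leak into training, otherwise the evaluation says nothing about behavior on unseen scenarios.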

Impact of ML on Traditional ADAS/AD Testing Methodologies
ML development for ADAS/AD needs diverse driving data where, ideally, every scenario that might show up after deployment in the ODD of interest is present in a statistically significant way in the data set. However, the lack of a credible method to measure and leverage diversity coverage has necessitated brute-force testing to ensure nothing is left out. In practice, this has manifested in two ways:

  • On-road testing → high mileage
  • Simulation → combinatorial variation of parameter values to create scenarios. The gold standard for this is scenario-based testing, which is expanded on in the following figure
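The combinatorial variation mentioned above can be sketched with a Cartesian product over parameter values. The parameter names and value sets here are invented for an abstract cut-in scenario; they are not taken from any standard or real test catalogue.

```python
from itertools import product

# Illustrative parameter space for an abstract cut-in scenario (assumed values)
parameters = {
    "ego_speed_kmh":  [80, 100, 120],
    "cut_in_gap_m":   [5, 10, 20, 40],
    "weather":        ["clear", "rain", "fog"],
    "lead_decel_ms2": [0, 2, 4, 6, 8],
}

# Every combination of parameter values becomes one concrete scenario
concrete_scenarios = [
    dict(zip(parameters, values)) for values in product(*parameters.values())
]
print(len(concrete_scenarios))  # 3 * 4 * 3 * 5 = 180 concrete scenarios
```

Even this tiny four-parameter example yields 180 concrete scenarios; realistic abstract scenarios with dozens of continuous parameters are what drive the count into the millions, which motivates the ML-based trimming discussed later in this section.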

Suggestions for ML-Specific Testing Methodologies
While there has been significant progress in testing ML modules for ADAS/AD, current methods are far from perfect. A few broad areas where considerable progress is still needed are:

  • Rigorous processes that align ODD and intended functionalities with data specification, data and ML architecture selection, training, evaluation, and monitoring
  • Data specification: it has to align with the ODD and intended functionalities. OpenLabel might be useful here. However, the challenge is how OpenLabel can be made versatile enough for features of all kinds (visual features, spatial relationships, temporal relationships, and causality)
  • Mapping data to the ODD: ensuring representativeness and sufficient coverage
  • Unmodeled concepts: potential solutions might include keeping OpenXOntology open and extensible to include more objects. OpenLabel may support unknown objects, for example by containerizing any object that does not fit an existing description
  • Data augmentation: generating synthetic data to fill gaps in data diversity. This means artificially extending the training data where it is lacking, e.g. by resizing, rotation, or brightness changes. Domain adaptation/randomization further helps in this regard, as it is similar to concrete scenario generation for prediction, e.g. random vehicle textures (sometimes not even present in the data set). The whole process can improve robustness, and it may be extended to all input modalities. The main difficulty is identifying the relevance of each variation dimension (e.g. contours might matter more than textures)
  • Need for realistic graphics to validate perception-based ML modules in simulation, alongside data redundancy strategies such as sensor fusion and ensemble concepts. Redundancy always helps, analogous to an airplane that keeps three redundant systems running at any point in time
  • A coverage argument for the ODD, where data is used for both the training and the evaluation of ML modules
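The augmentation operations named in the list above (resizing, rotation, brightness) can be illustrated on a toy grayscale "image". This is a minimal pure-Python sketch standing in for real image-processing libraries; the two operations shown are a horizontal flip (geometric) and a brightness scale (photometric).

```python
def flip_horizontal(img):
    """Geometric augmentation: mirror each pixel row."""
    return [row[::-1] for row in img]

def adjust_brightness(img, factor):
    """Photometric augmentation: scale intensities, clipped to the 8-bit range."""
    return [[min(255, int(p * factor)) for p in row] for row in img]

# Toy 2x3 grayscale image (intensity values 0..255)
image = [[10, 20, 30],
         [40, 50, 60]]

augmented = [image, flip_horizontal(image), adjust_brightness(image, 1.5)]
print(augmented[1])  # [[30, 20, 10], [60, 50, 40]]
print(augmented[2])  # [[15, 30, 45], [60, 75, 90]]
```

Each augmented copy keeps the original label, so the training set grows without new annotation effort; the open question from the bullet above is which of these variation dimensions actually matter for the module under test.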

ML Can Also Be Used to Make Traditional Testing Better

  • Modern techniques like OpenScenario allow mapping an infinite-dimensional domain (highways, for example) into millions of concrete scenarios by enabling easy definition of abstract scenarios with constraints. These concrete scenarios are too numerous to evaluate exhaustively, and many of them are irrelevant in the ODD of interest
  • ML can take cues from driving data and help trim the millions of scenarios down to the few that are relevant to a given ODD
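One simple way to realize the trimming idea above is to keep only generated scenarios that lie close to what was actually observed in driving logs. A real system might use learned density or relevance models; in this sketch a nearest-neighbour distance threshold stands in for "relevance", and the feature choice (ego speed, gap to lead vehicle) and threshold are assumptions.

```python
def distance(a, b):
    """Euclidean distance between two scenario parameter vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Each point: (ego speed in km/h, gap to lead vehicle in m) -- assumed features
observed_logs = [(82, 12), (95, 18), (118, 35)]
generated = [(80, 10), (100, 20), (120, 40), (30, 2), (200, 100)]

# Keep a generated scenario only if it is near some observed driving situation
relevant = [s for s in generated
            if min(distance(s, o) for o in observed_logs) < 15.0]
print(relevant)  # scenarios far from any observed data are dropped
```

Note the trade-off this encodes: scenarios far from the logs are cheap to discard, but some of them may be exactly the rare edge cases the ODD still requires, so a pure proximity filter would need to be combined with the coverage arguments discussed earlier.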

Challenges to Increasing Trust in AI

  • AVs are safety-critical and therefore have to be provably safe, which builds the trust of all stakeholders, including government and the public. To this end, AI-based systems must be explainable, so that their behavior can be understood and assessed.
  • They have to be robust against adversarial attacks and against perturbations in the input data that could occur naturally, such as light snow or soiling on the sensor.
  • Behavior should stay within the bounds of the specification of the intended functionalities.
  • Perceived safety is important because, no matter how safe certain vehicle behaviors are, things like sudden jerks due to braking or acceleration, or aggressive overtaking, have the potential to scare users.
  • V&V strategies for ML-specific aspects must be defined and must improve trust in AI.
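The robustness requirement above can be made concrete as a perturbation smoke test: feed the module slightly noised inputs and check that its decision does not flip. The "module" here is a toy time-gap rule, and the noise magnitudes are arbitrary assumptions; a real test would perturb sensor data and use validated tolerance bounds.

```python
import random

def classify(speed_ms, gap_m):
    """Toy decision rule: brake when the time gap to the lead vehicle is under 2 s."""
    return "brake" if gap_m / max(speed_ms, 0.1) < 2.0 else "cruise"

random.seed(1)
baseline = classify(10.0, 30.0)  # time gap 3.0 s -> "cruise"

# Perturb the inputs with small Gaussian noise (assumed magnitudes) and check
# that the decision stays the same across all trials
stable = all(
    classify(10.0 + random.gauss(0, 0.2), 30.0 + random.gauss(0, 0.5)) == baseline
    for _ in range(100)
)
print(baseline, stable)
```

A test like this only probes stability around one operating point; the input points and noise models would themselves need to be derived from the ODD and the perturbations (snow, soiling, adversarial patterns) the system must tolerate.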