Abstract
Cyber–physical systems are commonly highly configurable. Testing such systems is particularly challenging because they comprise numerous heterogeneous components that can be configured and combined in different ways. Despite a plethora of work investigating software testing in general and software product line testing in particular, variability in tests and how it is handled in industrial cyber–physical systems is not well understood. In this paper, we report on a multiple case study with four companies maintaining highly configurable cyber–physical systems, examining their testing practices with particular attention to variability. Based on the results of the multiple case study, we conducted an interactive survey with experienced engineers from eight companies, including the initial four, and we reflect on the lessons learned. We conclude that experience-based selection of configurations for testing is currently predominant. We learned that variability modeling techniques and tools are not utilized and that dependencies between configuration options are, at best, only partially modeled using custom artifacts such as spreadsheets or configuration files. Another finding is that companies have the need and desire to cover more configuration combinations with automated tests. Our findings raise many questions of interest to the scientific community and motivate future research.