Hanna Taller is a content creator for PTC’s ALM Marketing team. She is responsible for increasing brand awareness and driving thought leadership for Codebeamer. Hanna is passionate about creating insightful content centered around ALM, life sciences, automotive technology, and avionics.
Developers the world over are relying on simulation to accelerate the delivery of safe self-driving cars. While using virtual scenarios certainly has its limitations, it’s one of the best ways to train autonomous algorithms to ensure safety in real-world traffic.
Autonomous driving technology is the single most important area of innovation that automotive development companies are currently investing in. As the development of Advanced Driver Assistance Systems (ADAS) slowly gives way to self-driving technologies, automotive companies need to find new ways to ensure the safety and reliability of these novel products.
Experts in autonomous driving claim that 8 billion kilometres of accident-free driving is required before a self-driving algorithm can be considered safe enough to hit the road as a commercial product. Covering that distance via physical test drives would require a gargantuan effort, which obviously translates into enormous costs for the developer. The solution? Mobility innovators are increasingly relying on virtual simulation testing to perfect their autonomous driving algorithms.
Virtual scenarios – real solutions?
Simulation testing is widely used in commercial aviation, and some claim that it has contributed to making flying one of the safest forms of travel.
Building virtual “practice” environments and traffic situations for testing self-driving algorithms can greatly contribute to their cost-efficient development. Using simulation, developers can scale their testing efforts to ensure safety while keeping costs low, as simulation provides virtually limitless training data for the neural networks that make autonomous driving possible.
But virtual scenarios have other benefits, too. For instance, it is easy to generate edge cases in simulation: traffic situations that are rarely encountered during physical driving tests, but that the autonomous algorithm has to be able to handle without issues.
Similarly, simulation enables testing the algorithm under all kinds of weather and road conditions and traffic characteristics, and even with the partial or full failure of self-driving sensors. For instance, sun glare, thick fog, or night-time lighting can affect the way cameras, LiDARs, and radars work in an autonomous car. Preparing for such situations is crucial to ensuring safety, and these attributes are relatively easy to model in a simulated environment. With simulation, developers can test and repeat these extreme situations any number of times so as to achieve “deterministic running”, or consistent responses from the system.
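To make this more concrete, here is a minimal sketch of what such a sweep over simulated conditions might look like, assuming a Python-scriptable simulator. The Conditions class and run_simulation function are hypothetical placeholders rather than any real simulator’s API; the point is the parameter grid and the fixed seed that makes every run repeatable.

```python
# Illustrative sketch only: sweeping a toy scenario over simulated conditions.
# Conditions and run_simulation are hypothetical placeholders, not the API of
# any particular simulator.
import itertools
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class Conditions:
    weather: str         # e.g. "clear", "fog", "heavy_rain"
    lighting: str        # e.g. "day", "night", "sun_glare"
    lidar_healthy: bool  # simulate partial sensor failure

def run_simulation(conditions: Conditions, seed: int) -> bool:
    """Placeholder for a simulator run; the fixed seed keeps the outcome repeatable."""
    rng = random.Random(f"{seed}:{conditions}")  # string seeds are accepted
    # A real run would execute the driving stack against the scenario and
    # report whether it completed without a safety violation.
    return rng.random() > 0.05

for weather, lighting, lidar_ok in itertools.product(
    ("clear", "fog", "heavy_rain"), ("day", "night", "sun_glare"), (True, False)
):
    conditions = Conditions(weather, lighting, lidar_ok)
    # Re-running with the same seed reproduces the exact outcome ("deterministic running").
    results = [run_simulation(conditions, seed=42) for _ in range(3)]
    assert len(set(results)) == 1, "identical seeds should give identical outcomes"
    print(conditions, "->", "PASS" if results[0] else "FAIL")
```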
How simulation works
To start simulation testing, developers first build the virtual environment by mapping or importing real-world driving scenarios and populating them with characters and artifacts (trees, road signs, etc.). Based on these, innovative VR autonomous driving simulators can automatically generate alternative scenarios with different weather and road conditions, lighting, and so on.
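As a rough illustration, the sketch below shows one way a hand-built base scenario could be expanded into environmental variants automatically. The Scenario class and its fields are assumptions made for the sake of the example; real scenario formats (such as ASAM OpenSCENARIO) define their own schemas.

```python
# Hypothetical sketch of scenario authoring and automatic variant generation.
# The Scenario class and its fields are illustrative only.
from dataclasses import dataclass, replace
from itertools import product
from typing import List, Tuple

@dataclass(frozen=True)
class Scenario:
    name: str
    actors: Tuple[str, ...]   # pedestrians, vehicles, cyclists placed in the scene
    props: Tuple[str, ...]    # trees, road signs, parked cars, and so on
    weather: str = "clear"
    lighting: str = "day"
    road_surface: str = "dry"

base = Scenario(
    name="urban_intersection_01",
    actors=("pedestrian_crossing", "oncoming_vehicle"),
    props=("stop_sign", "tree_row"),
)

def generate_variants(scenario: Scenario) -> List[Scenario]:
    """Expand one hand-built scenario into many environmental variants."""
    return [
        replace(
            scenario,
            name=f"{scenario.name}_{weather}_{lighting}_{surface}",
            weather=weather, lighting=lighting, road_surface=surface,
        )
        for weather, lighting, surface in product(
            ("clear", "fog", "rain"), ("day", "night"), ("dry", "wet", "icy")
        )
    ]

print(len(generate_variants(base)), "variants generated from one base scenario")
```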
According to the best practices of a leading autonomous technology innovator, testing itself happens in three distinct categories of virtual scenarios.
Verification scenarios
Certain new features (or new pieces of code) are tested in tailored verification scenarios. These are built specifically to verify the performance and correct functioning of the feature being inspected. Consequently, developers may run verification scenarios as frequently as several times a day to test the functionality being delivered.
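In practice, that kind of cadence usually means the verification scenarios are wired into an automated test suite that runs on every change. The sketch below shows what this might look like with pytest; the scenario names, the run_scenario hook, and the acceptance thresholds are all illustrative assumptions.

```python
# Sketch of tailored verification scenarios wired into an automated test
# suite so they can run several times a day (for example, on every commit).
# Scenario names, the run_scenario hook, and thresholds are assumptions.
import pytest

VERIFICATION_SCENARIOS = [
    "lane_keep_straight_road",
    "lane_keep_gentle_curve",
    "lane_keep_faded_markings",
]

def run_scenario(name: str) -> dict:
    """Placeholder: a real version would launch the simulator, execute the
    lane-keeping feature under test, and collect metrics."""
    return {"max_lateral_error_m": 0.12, "collisions": 0}

@pytest.mark.parametrize("scenario", VERIFICATION_SCENARIOS)
def test_lane_keeping_feature(scenario):
    metrics = run_scenario(scenario)
    assert metrics["collisions"] == 0
    assert metrics["max_lateral_error_m"] < 0.3  # illustrative acceptance threshold
```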
Real-world scenarios
Moving towards a more holistic system view of the self-driving solution, real-world virtual scenarios are used to test the whole self-driving stack. When road testing uncovers a problem the algorithm can’t handle safely, for example, developers can reproduce that situation and repeat testing until safe functioning is ensured.
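One way to picture this: a problematic event recorded during a road test is converted into a repeatable regression scenario that the full stack can be re-run against. The RoadTestEvent structure and pass criteria below are simplified assumptions, not any real tool’s data model.

```python
# Illustrative sketch: turning a problematic road-test event into a repeatable
# regression scenario. Structures and pass criteria are simplified assumptions.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RoadTestEvent:
    description: str
    gps_trace: List[Tuple[float, float]]  # recorded ego trajectory
    other_actors: List[str]               # vehicles/pedestrians observed around the event

def to_regression_scenario(event: RoadTestEvent) -> dict:
    """Reconstruct the recorded situation so the full self-driving stack can be
    re-run against it until it is handled safely."""
    slug = event.description.lower().replace(",", "").replace(" ", "_")
    return {
        "name": f"regression_{slug}",
        "ego_route": event.gps_trace,
        "actors": event.other_actors,
        "pass_criteria": {"collisions": 0, "hard_braking_events": 0},
    }

event = RoadTestEvent(
    description="unprotected left turn, occluded cyclist",
    gps_trace=[(47.4979, 19.0402), (47.4981, 19.0405)],
    other_actors=["cyclist_from_right"],
)
print(to_regression_scenario(event)["name"])
```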
Fault injection tests
As the top level of verification to ensure functional safety, the algorithm is tested in virtual situations where equipment (subsystem) failures or other adverse conditions may affect its functioning. A good example is a failure of the self-driving car’s sensors, which may occur due to physical damage, software faults, or extreme weather conditions. Naturally, the system is expected to operate safely even if these hazardous situations occur.
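A minimal sketch of the idea, assuming the perception stack consumes a stream of camera frames: faults are injected by dropping frames at a controlled, repeatable rate, and the stack is then expected to keep behaving safely. The names here are illustrative, not a real simulator API.

```python
# Minimal fault-injection sketch, assuming the perception stack consumes a
# stream of camera frames. Names are illustrative, not a real simulator API.
import random
from typing import Iterator, Optional

def camera_frames(n: int) -> Iterator[dict]:
    """Placeholder for a simulated camera feed."""
    for i in range(n):
        yield {"frame_id": i, "image": f"<frame {i}>"}

def inject_dropout(frames: Iterator[dict], drop_rate: float, seed: int = 7) -> Iterator[Optional[dict]]:
    """Randomly drop frames to mimic physical damage, glare, or software faults."""
    rng = random.Random(seed)  # seeded so the fault pattern is repeatable
    for frame in frames:
        yield None if rng.random() < drop_rate else frame

dropped = sum(1 for f in inject_dropout(camera_frames(1000), drop_rate=0.2) if f is None)
print(f"{dropped} of 1000 frames dropped; the driving stack must still behave safely")
```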
Through these various types of testing, developers of autonomous technology are able to build a vast library of virtual scenarios with many different variations (including differences in driving culture, wildlife, weather characteristics, etc.). These scenarios may then be reused in the future to increase the robustness of testing.
Limitations of simulation
Overall, simulation is a safe and cost-efficient way to train and test self-driving algorithms, but it does have its limitations. For one, it’s important to note that simulation is not sufficient on its own to train (and to verify the safety of) self-driving systems. No matter how realistic virtual scenarios are, simulation just can’t replace real-world testing.
In simulation testing, the quality of input data is enormously important. Developers have to consider a huge variety of edge cases that may occur in real life – a good example is “When the Pumpkins Take a Stroll”, i.e. children clad in Halloween costumes roaming the streets. Such outlier scenarios are hard to prepare for, but they need to be taken into account before autonomous cars are allowed on our roads.
Start Your Free Trial of Codebeamer
Simplify complex product and software engineering at scale. Start your free trial of the Codebeamer open platform that extends ALM functionalities with product line configuration capabilities and provides unique configurations for complex processes.
Get Started