Technical Paper 2020-01-0737
Published 2020-04-14

Using Reinforcement Learning and Simulation to Develop Autonomous Vehicle Control Strategies

While the use of machine learning in autonomous vehicle development has increased significantly in the past few years, reinforcement learning (RL) methods have only recently been applied. Convolutional Neural Networks (CNNs) have become common for their powerful object detection and identification capabilities and have even provided end-to-end control of an autonomous vehicle. However, one of the requirements of a CNN is a large amount of labeled data to inform and train the neural network. While data is becoming more accessible, these networks remain sensitive to the data format and collection environment, which makes the use of others' data more difficult. In contrast, RL develops solutions in a simulation environment through trial and error, without labeled data. Our research builds upon previous work in RL and Proximal Policy Optimization (PPO), and on the application of these algorithms to 1/18th-scale cars, by extending this control strategy to a full-sized passenger vehicle. Using this method of unsupervised learning, our research demonstrates the ability to learn new control strategies in a simulated environment without the need for large amounts of real-world data. The use of simulation environments for RL is important because the unsupervised learning methodology requires many trials to learn the desired behavior. Running these trials in the real world would be expensive and impractical; the simulation, however, enables solutions to be developed at low cost and in little time, as the process can be accelerated beyond real time. The simulation environment provides high-fidelity vehicle dynamics modeling as well as rendering capability for domain adaptation, which guarantees a successful simulation-to-real-world transfer. Traditional control algorithms are applied to the learned strategies to ensure a proper mapping onto the physical vehicle. This approach results in a low-cost, low-data solution that enables control of a full-sized, self-driving passenger vehicle.
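The paper does not include source code, but the workflow described above can be sketched in a few lines. The sketch below is a hypothetical illustration, assuming the open-source gymnasium and stable-baselines3 libraries (neither is named in the paper); the LaneKeepingEnv toy environment, its reward shaping, and the actuator limits in to_vehicle_command are invented stand-ins for the high-fidelity simulator and the traditional low-level controllers described in the abstract.

# Hypothetical sketch of the workflow described in the abstract: a PPO agent
# learns a normalized steering/throttle policy in a simulated vehicle
# environment, and the learned actions are then mapped to physical actuator
# commands. Library choices (gymnasium, stable-baselines3) and every
# environment detail are assumptions, not taken from the paper.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO


class LaneKeepingEnv(gym.Env):
    """Toy stand-in for the high-fidelity vehicle dynamics simulator."""

    def __init__(self):
        # Observation: [lateral offset (m), heading error (rad), speed (m/s)]
        self.observation_space = spaces.Box(
            low=np.array([-5.0, -np.pi, 0.0], dtype=np.float32),
            high=np.array([5.0, np.pi, 40.0], dtype=np.float32),
        )
        # Action: normalized [steering, throttle], each in [-1, 1]
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32)
        self.state = None
        self.steps = 0

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.steps = 0
        self.state = np.array(
            [self.np_random.uniform(-1.0, 1.0), 0.0, 10.0], dtype=np.float32
        )
        return self.state, {}

    def step(self, action):
        steer, throttle = np.clip(action, -1.0, 1.0)
        offset, heading, speed = self.state
        dt = 0.05  # integration step (s); a real study would query the simulator here
        heading = float(np.clip(heading + 0.5 * steer * dt, -np.pi, np.pi))
        offset += speed * np.sin(heading) * dt
        speed = float(np.clip(speed + 3.0 * throttle * dt, 0.0, 40.0))
        self.state = np.array([offset, heading, speed], dtype=np.float32)
        self.steps += 1
        # Reward: stay near the lane center with a small heading error.
        reward = 1.0 - abs(offset) - 0.5 * abs(heading)
        terminated = bool(abs(offset) > 4.0)   # left the lane
        truncated = self.steps >= 500          # episode time limit
        return self.state, float(reward), terminated, truncated, {}


def to_vehicle_command(action, max_steer_deg=30.0, max_throttle=0.6):
    """Map the normalized policy output to physical actuator commands; on the
    real vehicle this is where traditional low-level controllers take over."""
    steer, throttle = np.clip(action, -1.0, 1.0)
    return {"steering_deg": float(steer) * max_steer_deg,
            "throttle": max(0.0, float(throttle)) * max_throttle}


if __name__ == "__main__":
    env = LaneKeepingEnv()
    model = PPO("MlpPolicy", env, verbose=0)
    model.learn(total_timesteps=50_000)   # trial-and-error learning in simulation
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
    print(to_vehicle_command(action))

Because the policy outputs are normalized, the final mapping step is where conventional low-level controllers on the physical vehicle take over, which mirrors the abstract's point that traditional control algorithms handle the transfer of the learned strategy to the full-sized car.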

We also recommend:

TECHNICAL PAPER: Vehicle Velocity Prediction and Energy Management Strategy Part 2: Integration of Machine Learning Vehicle Velocity Prediction with Optimal Energy Management to Improve Fuel Economy (2019-01-1212)

TECHNICAL PAPER: Control Synthesis for Distributed Vehicle Platoon Under Different Topological Communication Structures (2019-01-0494)

TECHNICAL PAPER: Noise Analysis and Modeling with Neural Networks and Genetic Algorithms (2000-05-0291)
