Autonomous Driving Analysis, Summary and Conclusion


Autonomous Driving Neural Network Training

Experiment Analysis

Limitations

  • Hardware constraints limited the maximum processing speed (without GPU Coder).
  • Initial data quality was poor due to randomized manual steering input.
  • Better results were obtained after switching to discrete, consistent control signals.
  • However, simultaneous recording, steering, and saving caused performance bottlenecks due to the Jetson Nano’s limited resources; a buffering sketch that separates recording from saving follows below.
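One way to mitigate this bottleneck is to decouple recording from saving: frames and steering labels are buffered in memory during the drive and written to disk only afterwards, so that disk I/O does not compete with the control loop. The sketch below is a minimal illustration, not the project's actual recording script; the camera object, run length, output folder, and the readSteeringValue helper are assumptions.

```matlab
% Minimal sketch (assumed setup): buffer frames and steering labels in RAM
% during the drive, then write them to disk after the run has finished.

cam     = webcam(1);            % USB camera, assumed reachable from MATLAB
nFrames = 500;                  % length of one recording run (assumed)
outDir  = 'recordings';         % output folder (assumed)

frames = cell(nFrames, 1);      % in-memory frame buffer
labels = zeros(nFrames, 1);     % discrete steering value per frame

for k = 1:nFrames
    frames{k} = snapshot(cam);         % grab the current camera image
    labels(k) = readSteeringValue();   % hypothetical helper for the gamepad input
end

% Save everything only after the drive has finished.
if ~isfolder(outDir), mkdir(outDir); end
for k = 1:nFrames
    imwrite(frames{k}, fullfile(outDir, sprintf('frame_%04d.png', k)));
end
writematrix(labels, fullfile(outDir, 'steering_labels.csv'));
```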


Training Results

  • Models trained on discrete steering values showed significantly better results than regression-based approaches.
  • Grayscale recordings increased model stability.
  • Calibrated images helped reduce the steering error caused by lens distortion.
  • The project switched to **classification** using **discrete steering values**, with the parameters shown in the figure below; a code sketch follows the figure caption.
Fig. 13: Camera setup for image recording during driving
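The following is a minimal training sketch, assuming the recorded frames have already been undistorted (e.g. with undistortImage), converted to grayscale (rgb2gray), resized to the assumed input resolution, and sorted into one subfolder per discrete steering class. Folder names, layer sizes, and training options are illustrative assumptions, not the parameters from Fig. 13.

```matlab
% Minimal classification sketch with MATLAB's Deep Learning Toolbox.
% Folder layout, input size, network architecture, and options are assumptions.

imds = imageDatastore('recordings_labeled', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[imdsTrain, imdsVal] = splitEachLabel(imds, 0.8, 'randomized');

inputSize  = [120 160 1];                     % calibrated grayscale frames (assumed size)
numClasses = numel(categories(imds.Labels));  % e.g. left / straight / right

layers = [
    imageInputLayer(inputSize)
    convolution2dLayer(3, 16, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, 'Stride', 2)
    convolution2dLayer(3, 32, 'Padding', 'same')
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(numClasses)
    softmaxLayer
    classificationLayer];

options = trainingOptions('adam', ...
    'MaxEpochs', 15, ...
    'MiniBatchSize', 32, ...
    'ValidationData', imdsVal, ...
    'Plots', 'training-progress');

net = trainNetwork(imdsTrain, layers, options);
save('trainedNet.mat', 'net');                % reused later for deployment
```

Treating steering as a small set of classes rather than a continuous regression target matches the observation above that discrete labels produced more stable training results.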

More details of further work on autonomous driving can be found here [].

Summary and Outlook

This experiment aimed to use the Jetson Nano and MATLAB to complete various tasks, from "Familiarization with the existing framework" to "Autonomous Driving". Although the final goal of the car driving several laps in the right lane could not be attained, there is plenty of room for future improvements in the following areas:

  • Improving the training method (more data, different training algorithms such as reinforcement learning, etc.).
  • Expanding the dataset with more diverse environmental conditions.
  • Flashing the trained model directly onto the Jetson Nano using GPU Coder (see the deployment sketch at the end of this section).
  • Using a more accurate gamepad for recording steering input.
  • Improving automated labeling and refining the PD controller parameters for faster driving without loss of robustness.
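For the GPU Coder item above, one possible deployment route is sketched below, using GPU Coder together with the MATLAB Coder Support Package for NVIDIA Jetson platforms. The entry-point function name, network file, image size, board IP address, credentials, and build folder are assumptions, not values from the project. The entry-point function would live in its own file:

```matlab
% predictSteering.m -- assumed entry-point function for code generation.
function classIdx = predictSteering(img) %#codegen
% Classify one preprocessed grayscale frame into a discrete steering class index.
persistent net
if isempty(net)
    net = coder.loadDeepLearningNetwork('trainedNet.mat');   % trained network (assumed file)
end
scores = predict(net, img);     % class scores from the CNN
[~, classIdx] = max(scores);    % index of the most likely steering class
end
```

A separate script would then connect to the board and generate CUDA code for that entry point:

```matlab
% Connect to the Jetson Nano (IP address, user name, and password are assumptions).
hwobj = jetson('192.168.0.10', 'nano', 'password');

% GPU Coder configuration targeting the Jetson with the cuDNN library.
cfg                     = coder.gpuConfig('exe');
cfg.Hardware            = coder.hardware('NVIDIA Jetson');
cfg.Hardware.BuildDir   = '~/autonomous_driving';       % remote build folder (assumed)
cfg.DeepLearningConfig  = coder.DeepLearningConfig('cudnn');
cfg.GenerateExampleMain = 'GenerateCodeAndCompile';     % auto-generate a main() for the executable

% Generate, cross-compile, and build the executable on the board.
codegen -config cfg predictSteering -args {zeros(120,160,1,'uint8')} -report
```

Running the network on the Jetson's GPU in this way could also help with the processing-speed limitation noted in the Limitations section.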