Track: Sensors and Sensing
Abstract
During the past five years there have been many advances in sensor and compute technology that have made the development of autonomous driving systems possible. Typical sensors in an autonomous vehicle's perception system include cameras, LiDARs, ultrasonic sensors, and radars. Owing to their relatively low cost, cameras have been the preferred sensor for many automated systems. Perception and classification algorithms based on Machine Learning and Deep Learning have shown steadily improving accuracy in detecting and classifying objects. Unfortunately, because Machine Learning and Deep Learning algorithms mimic human learning and execution capabilities, it is still not possible to achieve 100% accuracy. One of the goals of autonomous driving is to deliver greater safety in and around on-road vehicles. Recent updates to the automotive functional safety standard, ISO 26262:2018, describe the safety mechanisms to be used for such camera-based perception systems under the umbrella of Safety of the Intended Functionality (SOTIF). This paper decomposes a camera-based autonomous perception system into its components, analyzes the sources of errors using FMEA and FMEDA, develops diagnostic coverage countermeasures to reduce residual errors, and provides an initial safety assessment of the prototype.