Lecture: Why is it so hard to make self-driving cars? (Trustworthy autonomous systems)
Why is self-driving so hard? Despite the enthusiastic involvement of big technology companies and investments of many billions of dollars, the optimistic predictions that self-driving cars were "just around the corner" have proven utterly wrong.
I argue that these difficulties emblematically illustrate the challenges raised by the vision for trustworthy autonomous systems. These are critical systems intended to replace human operators in complex organizations, and they differ fundamentally from other intelligent systems such as game-playing robots or intelligent personal assistants. They must understand dynamically changing situations in unpredictable environments. They must manage many different, potentially conflicting goals and plan actions to achieve them. Last but not least, they must interact safely with human operators.
I discuss the complexity limitations inherent to autonomous behavior as well as to integration in complex cyber-physical and human environments. I argue that traditional model-based critical systems engineering techniques fall short of meeting this complexity challenge. I also argue that the emerging end-to-end AI-enabled solutions currently being developed by industry fail to provide the strong trustworthiness guarantees required.
I conclude that building trustworthy autonomous systems goes far beyond the current AI vision, and I advocate a new scientific and engineering foundation to address this unique and groundbreaking challenge.