Indian American Researchers Develop Model for Safer Self-Driving Cars
January 28, 2019, 10:29 (Image source: The New York Times)
To keep driverless vehicles from making dangerous errors in the real world, a team of Indian American researchers has developed a new model that uses human input to uncover Artificial Intelligence (AI) "blind spots" in self-driving cars.
Developed by MIT and Microsoft researchers, the model identifies instances in which autonomous systems have "learned" from training examples that don't match what actually happens in reality. Engineers could use the model to improve the safety of AI systems such as driverless vehicles and autonomous robots.
The AI systems powering driverless cars are trained extensively in virtual simulations to prepare the vehicles for nearly every event on the road.
"The model helps autonomous systems better know what they don't know," said first author Ramya Ramakrishnan from Computer Science and Artificial Intelligence Laboratory at MIT. "Many times, when these systems are deployed, their trained simulations don't match the real-world setting [and] they could make mistakes, such as getting into accidents.
"The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors," explained Ramakrishnan.
But every now and then the car makes an unexpected error in the real world, because an event occurs that should, but doesn't, alter the car's behavior. The researchers validated their method using video games, with a simulated human correcting the learned path of an on-screen character. The next step is to integrate the model into conventional training and testing approaches for autonomous cars and robots that learn from human feedback.
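To make the general idea concrete, here is a minimal, hypothetical sketch in Python of how human corrections might be compared against a learned policy to flag candidate "blind spots." The function names, the toy policy, and the simulated human below are invented for illustration only; this is not the researchers' actual model.

```python
# Hypothetical sketch: flag states as candidate "blind spots" when a
# (simulated) human frequently corrects the agent's learned action.

from collections import defaultdict

def learned_policy(state):
    # Toy policy "trained in simulation": always keep driving.
    return "go"

def human_correction(state):
    # Simulated human feedback: some real-world events require stopping.
    return "stop" if state == "ambulance_ahead" else "go"

def find_blind_spots(observed_states, disagreement_threshold=0.5):
    """Estimate, per state, how often the human overrides the policy."""
    disagreements = defaultdict(int)
    visits = defaultdict(int)
    for state in observed_states:
        visits[state] += 1
        if learned_policy(state) != human_correction(state):
            disagreements[state] += 1
    # A state is a candidate blind spot if the human corrects it often.
    return {s for s in visits
            if disagreements[s] / visits[s] >= disagreement_threshold}

if __name__ == "__main__":
    observed = ["clear_road", "ambulance_ahead", "clear_road", "ambulance_ahead"]
    print(find_blind_spots(observed))  # -> {'ambulance_ahead'}
```

In this toy version, the system simply counts disagreements between its own policy and the human's corrections and treats frequently corrected states as places where its simulation-trained behavior cannot be trusted.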
Co-authors on the papers are Julie Shah, an associate professor in the Department of Aeronautics and Astronautics and head of CSAIL's Interactive Robotics Group; and Ece Kamar, Debadeepta Dey, and Eric Horvitz, all from Microsoft Research. "When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently," said Ramakrishnan.
-Sowmya Sangam