
Beyond self-driving simulations: teaching machines to learn

Training and testing self-driving cars in simulators can help deploy safer vehicles for real-world testing. File photo.

By Sonia Turosienski, KAUST News

Teaching has the power to test the limits of one's knowledge. Teaching algorithms to learn through machine learning is making it possible for cars to do away with human drivers in the near future, but it has also opened up new questions about the limits of our knowledge of the brain and learning.

Bernard Ghanem, KAUST associate professor of electrical engineering in the University's Visual Computing Center and principal investigator of the Image and Video Understanding Lab, applies machine learning to computer vision problems such as automated navigation (for example, self-driving cars and self-flying unmanned aerial vehicles, or UAVs), video content search and object tracking, among other topics.

The team from the University's Image and Video Understanding Lab tested their algorithm on the roads at KAUST. Photo by Sarah Munshi.

"Our brains can do complicated tasks like activity and object recognition with low energy consumption, yet we are doing research to understand how we can replicate this functionality in a machine. I find it very interesting that we are aiming to imitate a system that we have inside [of] us. It basically means we don't really understand how our brains work," explained Ghanem.

Man vs. machine

"Creativity is a specific intelligence that is central to what it means to be human. A creative algorithm would be real artificial intelligence—possessing the ability to create. Humans are able to transfer skills from task to task, evolving knowledge without direct training. That's creative because something was learned without being taught. We'd love to reach creativity in machine learning, but we're not at that point yet," Ghanem said.

When developing machine learning algorithms with the ability to create, researchers must first understand what mechanism underlies the transfer of knowledge from one task to another. There are a variety of challenges to overcome, but Ghanem surmises that researchers may need to question the fundamentals: Is deep learning the right technique? What type of learning does this skill require? And even if we assume deep neural networks are the best method, how do you actually teach transfer?

Safety has recently become a top concern for programmers and manufacturers. File photo.

While algorithms are beginning to outperform humans at our own games, most machine learning still begins with human input. Selecting a reward function, the signal that tells an algorithm how well it is doing, is a crucial part of programming it.

"It comes down to design choice. We have to start with the question [of] what reward function do we need to program in order to get our desired outcome? This can be a very complex question to answer," Ghanem said.

Despite being inspired by the brain's ability to perform complicated tasks, Ghanem doesn't think human cognitive strategies and processes are necessarily the only—or even most efficient—outcome of machine learning. The brain is one example of a successful model that machine learning wants to imitate, but, as Ghanem noted, "You can reach the peak of a mountain through many, many roads. It's about reaching the peak."

Driving tests for self-driving cars

Since Google and other companies started making impressive leaps forward in self-driving cars, the field of autonomous vehicles has experienced an explosion of investment, interest and technological innovation. A recent incident resulting in a fatality in Arizona involving an Uber self-driving car, however, brought the issue of safety to the fore. Ghanem explained that the industry is now wondering whether there should be a testing process before self-driving cars are allowed to perform field tests on the road. He and his team are well placed to deliver a solution.

KAUST Ph.D. student Matthias Mueller from the University's Image and Video Understanding Lab test drives the simulator on campus. Photo by Sarah Munshi.

The team spent the past year developing intelligent automated navigation algorithms (for example, those used in self-driving cars) that learn in simulation, teaching the vehicle to react properly in dangerous or unusual situations before hitting the road. Some of this work was developed in collaboration with Intel Labs in Germany and tested on the roads near the company's research center in Munich. The same remote-controlled car was also tested on KAUST roads.

The team from the KAUST Image and Video Understanding Lab used the video game engine Unreal Engine to create realistic and highly customizable simulations of road conditions for self-driving cars. Photo courtesy of Bernard Ghanem.

Ghanem's simulator is built on Unreal Engine, a photo-realistic computer game engine. The team designed a typical street setting and placed a simulated car on the road. The simulator can generate any scenario the designers want the algorithm to learn from and be tested on. Once the algorithm learns how to, for example, avoid pedestrians crossing the street, in a sense "passing" that part of its driving test, the team transfers the learned behavior to the real-world self-driving car.
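The article does not specify how the team describes scenarios to the simulator, but conceptually each test case pairs an environment with an event the car must handle and a criterion for "passing." A minimal sketch of such a scenario description, with entirely hypothetical field names and values:

```python
# Hypothetical scenario description for a driving simulator.
# Field names and values are illustrative, not the team's actual format.

pedestrian_crossing_test = {
    "map": "typical_street",            # street layout built in the game engine
    "weather": "clear_noon",
    "ego_vehicle": {
        "spawn_point": (0.0, 0.0),
        "target_speed_kmh": 40,
    },
    "events": [
        {
            "type": "pedestrian_crossing",
            "trigger_distance_m": 20,   # pedestrian steps out 20 m ahead
            "walking_speed_mps": 1.4,
        }
    ],
    "pass_criteria": {
        "collisions": 0,                # the car must not hit the pedestrian
        "max_lane_offset_m": 1.0,       # and must stay roughly in its lane
    },
}
```

Because a scenario is just data, the same pedestrian crossing can be replayed with different speeds, weather and lighting until the algorithm handles all of them.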

"Many machine learning algorithms learn through reward and punishment incurred through episodes of a scenario. So, teaching it not to hit a bicyclist requires the self-driving car to be in such a scenario during training. For this reason, it's better and safer to be doing the basic learning in a simulator," Ghanem concluded. Proprietary testing processes exist, but no standardized measures of safety exist industry-wide.

Cars are not the only method of transport getting an autonomous update. Ghanem and postdoctoral fellow Silvio Giancola are collaborating with Boeing to develop machine learning algorithms for airplanes that would allow them to land and taxi without pilot intervention.
