A Driverless Future

Today’s technologies would have seemed like magic to people just a few decades ago, but the ideas behind them are often far from new. The promise of a driverless future, for example, may seem to many like it arrived in the last decade, but it has been both “just around the corner” and symbolic of the future for the past century.

The first driverless ground vehicle technically appeared in 1904: a radio-controlled tricycle developed by Leonardo Torres-Quevedo. In the 1920s, remote-controlled “phantom autos”, reportedly operated from up to five miles away, drove through Ohio. The concept of a self-driving, or ‘autonomous’, car then entered the mainstream in 1939, in an exhibit at the New York World’s Fair that imagined what America would look like in 1960.

Since early prototypes debuted in the 1960s and 70s, the capabilities of autonomous vehicles have slowly improved alongside developments in parallel technologies. Today, the basic hardware is well established – most autonomous vehicles combine radar, cameras, LIDAR, GPS, and so on – and rapid advances in computing power have significantly improved the software side by making deep neural networks far more practical.

When the driverless future does become a reality, it could cause paradigm shifts at multiple levels of society. It’s about more than convenience: it could democratize transportation, reduce emissions, help to improve agricultural yields, and more.

The barrier to getting there is safety. One of the main reasons it’s so difficult to build a commercial product is that it’s no longer just about demonstrating that the technology works – it’s about guaranteeing that it works safely and reliably. And it’s not only the driving itself: the surrounding infrastructure, along with the risks of hacking and privacy breaches, matters just as much.

So how long will it be until we get this peace of mind and, when we do, will people still retain some level of control? How do self-driving cars even work, for that matter?

In this episode, we discuss all of this and more with a “rockstar” of autonomous vehicles: Nvidia’s Justyna Zander. We explore why the driverless future has been slower to arrive than expected, the future of autonomous transport and its benefits, and the differences between a machine-based driver and a human.


About the guest

California-based Dr Justyna Zander is an engineer, researcher, designer, public speaker, author, and programmer... among other things. She is often cited as one of the most powerful women in engineering – an expert in artificial intelligence and machine learning with a total of six degrees, including a PhD, an MSc, and two BSc degrees in Computer Science and Electrical Engineering. Justyna has worked at Intel, Harvard University, and the White House, speaks five languages, and is also a licensed Zumba instructor. She currently works as Global Head of Verification and Validation | Simulation Architecture for Autonomous Driving at Nvidia.


Episode Quotes

  • "The moment of autonomy is here, let's say in about 25 years or so. You're just going to roll into your car, and it's going to take you wherever you want to go. And you can focus on whatever occupies your mind at the moment ... you'll get this, this guarantee that it's going to be safe. So all of a sudden, it does not depend on your mental state or on your driving ability. It's just going to happen as a safe thing that you take for granted. That's actually a very useful advantage for humanity."
  • "25 years ago, we used to use a lot of controllers and software was built to control the controlling units. Nowadays, the software is built differently. Nowadays, autonomous driving is built using neural networks. It's a different computer science paradigm than it used to be."
  • "Basically, your car is like a robot. It's basically a robot that is performing a certain task for you. Now, this robot has to sense the way humans sense, so if you compare the driverless car to a human, it would be the sensors on the car that are perceiving the environment around itself ... for humans, the sensors are eyes, ears, whereas for the car, the sensors are cameras, radars, or LIDARs, depending on the technology that is in use."
  • "In artificial intelligence people say that this [an autonomous vehicle] is the most complicated system humans have ever designed."
  • "There's a difference between doing things in abstraction and building stuff. When I encountered electrical engineering and mechanical engineering, to me, it was like: 'finally, you can actually build stuff' ... that was a discovery to me."
