In the past five or six years, artificial intelligence has evolved rapidly, thanks in part to its close connection to gaming. In this episode of Mastering Innovation on SiriusXM Channel 132, Business Radio Powered by The Wharton School, Danny Lange, Vice President of AI and Machine Learning at Unity Technologies, discusses the gaming platform’s foray into AI, its new partnership with Google’s DeepMind, and the differences between biological and artificial intelligence.
When experiments in artificial intelligence began in the 1950s, the field’s initial promise was to create a synthetic reproduction of the human brain. More recently, the introduction of smartphones, increased processing power, and big data has kicked the pace of machine learning development into high gear. Lange joined Unity at the end of 2016 after leading the development of General Motors’ OnStar in the 1990s and managing machine learning at Microsoft, Amazon, and Uber. Using the realistic world-building capabilities of Unity’s gaming platform, his team has been able to experiment with and train AI through millions of repeated simulations.
An excerpt of the interview is transcribed below. Listen to more episodes here.
Transcript
Nicolaj Siggelkow: Unity Technologies is a game development platform that’s used by half of the world’s mobile games. Maybe you can tell our listeners a little bit more about Unity and what you’re doing there.
Danny Lange: Unity is the most popular gaming platform today. It’s widely used. What, then, is more natural than to start thinking about putting some of the latest AI capabilities at the fingertips of all those game developers and allowing them to create more impressive and innovative games?
Siggelkow: Someone with your CV can basically pick their job, right? What attracted you to Unity?
Lange: You ask in a very nice way. Some other people ask differently and say, “What are you doing there? Why didn’t you go to Facebook or somewhere else?” It’s actually the best place to be. I’m not just pitching Unity; the fact is that people overlook gaming as the number one driver of leading-edge AI.
When you work on AI at Amazon and you try to do book recommendations, it’s a very abstract thing. If you’re trying to build a self-driving car? Well, that’s a really dangerous thing. When you look at gaming, it’s a virtual biodome of the real world that we live in. You can basically simulate all kinds of scenarios that are natural to us as human beings and experiment with the algorithms that can solve these problems. That is what puts gaming and the game engine at the core of leading-edge AI.
“Gaming allows us to create these massive simulations of realistic 3D problems in the world we live in.” – Danny Lange
Siggelkow: So it’s simulation as a facilitator for AI. Should I put it that way, gaming as a big simulation?
Lange: Yes. Gaming allows us to create these massive simulations of realistic 3D problems in the world we live in. How do you navigate a maze? How do we solve a problem that has sequences? How do you find the key to the treasure chest, then take what’s in the chest and bring it somewhere and trade it for something else? All those things are real-world things, but in games, you can do it at high speed on thousands of servers and challenge your algorithms to the absolute edge of what is possible.
Siggelkow: AI has been around for a long time, but it’s only in the last couple of years that we’ve seen really amazing progress. What has been the key application driver of AI? My list would be games, autonomous driving, expert systems in medicine, or conversational platforms. Do you think it has been games pushing AI in the last five or six years?
Lange: Gaming has been the leader, but I would also say that the reason everybody is talking about AI is that it has really spilled over into the rest of society. One of the key things that happened was that people realized AI was not about having super-genius human beings make the computer smart. What we looked at instead was whether, if we just let the computer experience enough through simulations, it could learn how to deal with real-world situations. Say I want to build a self-driving car: what people think is that I’m going to drive the car around a lot, and then eventually it is going to learn to drive on a real street.
What we’re doing is we build a game-like situation: we build a city, we put pedestrians in there, we put other cars in there — cars that drive, cars that are parked. Then you put your virtual self-driving car in there and you train it on millions, if not billions, of virtual miles on roads in cities that do not exist. That’s the way you train these systems, the onboard computer in the car, to deal with real-world situations. That shift is what has happened over the last five years.
Siggelkow: It’s using 3D virtual reality to create virtual training sets for AI?
Lange: Yes, absolutely.
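As a concrete illustration of the training loop Lange describes, where an agent improves purely through repeated trials in a simulated world, here is a minimal sketch in Python. It uses a toy gridworld maze and tabular Q-learning rather than an actual Unity scene or the ML-Agents toolkit; the maze layout, reward values, and hyperparameters are illustrative assumptions, not details from the interview.

```python
# A toy stand-in for the idea in the interview: instead of hand-coding behavior,
# let an agent learn by running many episodes in a simulated world.
# Hypothetical example -- not Unity or ML-Agents code.

import random

# 0 = free cell, 1 = wall; the agent starts at (0, 0) and must reach the goal.
MAZE = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    """Simulated environment: apply an action, return (next_state, reward, done)."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < len(MAZE) and 0 <= c < len(MAZE[0])) or MAZE[r][c] == 1:
        return state, -1.0, False          # bumped into a wall: small penalty
    if (r, c) == GOAL:
        return (r, c), 10.0, True          # reached the goal: reward, episode ends
    return (r, c), -0.1, False             # ordinary move: small step cost

# Tabular Q-learning: Q[state][action] estimates the long-term value of each action.
Q = {(r, c): [0.0] * len(ACTIONS)
     for r in range(len(MAZE)) for c in range(len(MAZE[0]))}
alpha, gamma, epsilon = 0.1, 0.95, 0.2     # illustrative hyperparameters

for episode in range(5000):                # thousands of simulated episodes
    state, done = START, False
    while not done:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(len(ACTIONS))
        else:
            a = Q[state].index(max(Q[state]))
        nxt, reward, done = step(state, ACTIONS[a])
        # Standard Q-learning update toward the observed reward plus estimated future value.
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# After training, follow the greedy policy from the start to the goal.
state, path = START, [START]
for _ in range(50):                        # safety cap on the demo rollout
    if state == GOAL:
        break
    a = Q[state].index(max(Q[state]))
    state, _, _ = step(state, ACTIONS[a])
    path.append(state)
print("Learned path:", path)
```

In a full-scale setting, the hand-coded step function would be replaced by a rich 3D simulation and the lookup table by a neural network, but the learn-by-repeated-trial loop is the same basic idea.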
“At the end of the day, the only real intelligence we know is our own intelligence, the biological intelligence.” – Danny Lange
Siggelkow: I just saw that you engaged in a partnership with DeepMind. Is that the direction the partnership is going?
Lange: Yeah. The DeepMind relationship is very interesting. We’ll be partnering with DeepMind to enable their development of virtual worlds and environments so that they can develop new algorithms and solve problems in those environments at scale. The fantastic thing about DeepMind is that they realized from the beginning that biology plays an important role here. A lot of people are always asking me, “What is artificial intelligence? How do we define it?” We get into this philosophical discussion of, “What is intelligence?”
At the end of the day, the only real intelligence we know is our own intelligence, the biological intelligence. What the founders of DeepMind realized early on was that if we look over the shoulder of nature, start looking at some of the challenges that biological organisms may meet, and try to see whether the computer can learn to solve those tasks or overcome those obstacles, then we are replicating and relearning some of the things that nature has figured out, but doing it in computers.
Siggelkow: That was the early promise of neural networks, right? “Let’s recreate a synthetic brain in which the neurons talk to each other.” This promise goes back to the ’50s and ’60s. What has changed? Is it new processing power and data availability that have finally made these long-standing ideas fruitful?
Lange: I’m almost tempted to say yes, it is as easy and simple as that. We tend to think that the world we live in was always like this, but think about it: a little over a decade ago, there was no concept of apps as we know them today. That came literally with the iPhone. What do apps do? They track where we are every second of the day; they capture data all the time.
A little over 10 years ago, apps, this new phenomenon, were not around. They didn’t capture data. They didn’t capture location. That’s the data side. The other thing is that few people realize that the mobile phone you carry is likely to have the computing power of thousands of supercomputers from the late ’90s. Computing power and data are the new thing, and that’s why all this has happened over the last decade.
About Our Guest
Dr. Danny B. Lange is VP of AI and Machine Learning at Unity Technologies. Formerly, Danny was Head of Machine Learning at Uber, where he led an effort to build the world’s most versatile Machine Learning platform to support Uber’s rapid growth. With the help of this branch of Artificial Intelligence, including Deep Learning, Uber can provide even better service to its customers. Previously, Danny was the General Manager of Amazon Machine Learning – an AWS product that offers Machine Learning as a Cloud Service. Prior to Amazon, Danny was Principal Development Manager at Microsoft, where he led a product team focused on large-scale Machine Learning for Big Data. Danny started his career as a Computer Scientist at IBM Research and has a Ph.D. in Computer Science from the Technical University of Denmark.