Most of us are familiar with the algorithms that recommend what we should buy on Amazon or watch on Netflix, but as computers grow more sophisticated, how many of us really grasp the influence they may someday have over our lives? In this episode of Mastering Innovation on SiriusXM Channel 132, Business Radio Powered by The Wharton School, Wharton professor Kartik Hosanagar, author of A Human’s Guide to Machine Intelligence, walks through the evolution of artificial intelligence and points to the developments that lie ahead.
According to Hosanagar, the performance of algorithms and machines boils down to two ingredients: well-designed algorithms and large quantities of data. By teaching machines to identify patterns in the latter, we’re moving away from codifying every step and toward the machine learning abilities that will lead to complex skills like autonomous driving. Hosanagar describes the limitations of today’s “narrow” AI, suggests that the future may bring “true” AI, and discusses how we humans can stay relevant when AI becomes that smart.
An excerpt of the interview is transcribed below. Listen to more episodes here.
Transcript
Kartik Hosanagar: In terms of how algorithms work today, it’s a combination of two approaches. It used to be that the sequence of steps in an algorithm was completely determined by an engineer. You and I have done this before, where we’ve written a lot of software, and we would have to come up with every condition that the software might encounter: “If this happens, then you do this. If that happens, then you do this.” That works reasonably well for many tasks, but for truly complex tasks, it’s hard to come up with all of those steps. If I asked you, Harbir, to give me all the rules for driving a car, you could spend hours giving me rules, but if I unleash that car, it’s going to have an accident in 15 minutes because there’s a lot of knowledge you have that is tacit and cannot be easily expressed. It’s actually known as Polanyi’s Paradox.
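The hand-coded, rule-based approach Hosanagar describes can be sketched in a few lines. The scenario, function name, and rules below are invented for illustration; the point is that the engineer must anticipate every condition, and anything not covered falls through:

```python
# A toy rule-based "expert system" for a driving decision. Every condition
# must be written out by an engineer in advance; a situation with no matching
# rule is simply not handled.
def driving_action(light: str, pedestrian_ahead: bool) -> str:
    if pedestrian_ahead:
        return "brake"
    if light == "red":
        return "stop"
    if light == "green":
        return "go"
    # An unanticipated situation (say, a flashing yellow light) has no rule.
    return "unknown"

print(driving_action("red", False))       # stop
print(driving_action("flashing", False))  # unknown -- the rules don't cover it
```

The `"unknown"` branch is the brittleness Hosanagar points to: for truly complex tasks, the list of rules never ends.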
Harbir Singh: Tacit knowledge. Let’s talk about that for just a second. You’re really talking about how we make judgments. A good example of tacit knowledge is somebody trying to teach someone else how to ride a bicycle. It’s impossible to codify. First of all, it is impossible to articulate that knowledge fully because there are many intuitive things that people learn from what is essentially an apprenticeship model. Let me ask you this question: Can an AI model teach someone how to ride a bicycle, or is it only for particular codifiable knowledge?
Hosanagar: It’s very task dependent, Harbir. You brought up riding a bike, and that’s really interesting because a close analogy may be driving a car as well. People have tried to build these kinds of algorithms.
“The hope is that, much like a child learns to recognize faces, people, or objects, the machine will do the same.” – Kartik Hosanagar
Singh: But in a car, there is the acceleration, and all the controls are really mechanical. Here, it’s the human being making the adjustments. If you cycle too slowly, you fall. Another one is a tennis coach knowing that the ball was hit wrong and not hit solidly from the sound on the strings. I’m just trying to explore the boundaries of the knowledge we can get a machine to absorb.
Hosanagar: Right. Can it absorb all of these very subtle things?
Singh: Yes. We define expert-level people with more than 10,000 hours [of practice], but that’s really in tasks which are not necessarily codifiable. An expert using codified knowledge, in this case a machine with much more than programmed steps (decision rules, as you said), might get there.
Hosanagar: Yes, I’m going to say machines can do that, but the caveat I’m going to provide is that it’s not going to come in the form of codified rules, just to reiterate what you said. Another example is people working on things like face recognition algorithms.
Singh: That’s a very interesting example. Tell me more. I’ve come across that before, but you may have the newest version of it.
Hosanagar: There, again, we used to struggle when we tried to codify the rules for face recognition. Certainly, if you’re writing software that does taxes, you can do it with rules because the task is very codifiable, but not with face recognition. The old way of building AI, what we called expert systems (essentially just a series of rules), could not handle those kinds of tasks because what we mostly use for face recognition is tacit knowledge. We can’t express the rules for that. Riding a bike or even coaching, as you were mentioning, are other good examples of that.
What has happened in the last few years is that we said, “Okay. Is there another way to build AI without giving it rules, without codifying the rules?” That other way is machine learning. We will, in fact, not provide any knowledge. We will just give the system a lot of data and ask it to recognize patterns in the data. The hope is that, much like a child learns to recognize faces, people, or objects, the machine will do the same. As a child, you initially see a four-legged animal. Somebody says, “That’s a cat.” The next time you see one, you say, “That’s a cat,” and somebody says, “No, that’s a dog.” Now, you suddenly realize whiskers matter. The shape of the face matters.
“It’s essentially all about seeing more examples and recognizing the patterns. That’s modern machine learning.” – Kartik Hosanagar
Singh: It’s a labeling and matching pattern recognition process. Kartik, you were saying something very interesting about a child learning how to recognize patterns. Then, from there, how can we mimic that?
Hosanagar: The child now sees more examples. I mentioned that, initially, the child doesn’t know the difference between a cat and dog, but once they see enough examples, they understand the difference. Next, let’s say the child sees a photograph of a tiger and says, “That’s a cat.” Then, you say, “Well, that is of the same family, but it’s not the same as a domesticated cat. This is a wild cat,” and then, the child understands that. It’s essentially all about seeing more examples and recognizing the patterns. That’s modern machine learning.
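The learn-from-labeled-examples idea in the cat/dog story can be sketched as a minimal nearest-neighbor classifier: no rules are written, only labeled examples provided, and a new case takes the label of the example it most resembles. The feature names and values below are invented for illustration:

```python
# Minimal machine learning sketch: 1-nearest-neighbor classification.
# No rules are coded; the "knowledge" is just labeled examples, and a new
# input is given the label of the closest one.
# Each example: ((whisker_length_cm, face_roundness_0_to_1), label)
examples = [
    ((6.0, 0.9), "cat"),
    ((7.0, 0.8), "cat"),
    ((2.0, 0.4), "dog"),
    ((1.5, 0.3), "dog"),
]

def classify(features):
    """Return the label of the closest labeled example (1-NN)."""
    def dist(a, b):
        # Squared Euclidean distance between feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: dist(ex[0], features))
    return label

print(classify((5.5, 0.85)))  # cat
print(classify((1.8, 0.35)))  # dog
```

As in the child analogy, adding more labeled examples (say, tigers as a separate label) refines the distinctions the system can draw, without anyone writing new rules.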
About Our Guest
Kartik Hosanagar is the John C. Hower Professor of Technology and Digital Business and a Professor of Marketing at The Wharton School of the University of Pennsylvania. Kartik’s research focuses on the digital economy, in particular the impact of analytics and algorithms on consumers and society, Internet media, Internet marketing, and e-commerce.
Kartik has been recognized as one of the world’s top 40 business professors under 40. He is a ten-time recipient of MBA or Undergraduate teaching excellence awards at the Wharton School. His research has received several best paper awards. Kartik cofounded and developed the core IP for Yodle Inc, a venture-backed firm that was acquired by Web.com. Yodle was listed by Inc. Magazine among America’s fastest growing private companies. He is a cofounder of SmartyPal Inc. He has served on the advisory boards of Milo (acq. by eBay) and Monetate and is involved with many other startups as either an investor or board member. His past consulting and executive education clients include Google, American Express, Citi and others. Kartik was a co-host of the SiriusXM show The Digital Hour. He currently serves as a department editor at the journal Management Science and has previously served as a Senior Editor at the journals Information Systems Research and MIS Quarterly.
Kartik graduated at the top of his class with a Bachelor’s degree in Electronics Engineering and a Master’s in Information Systems from the Birla Institute of Technology and Science (BITS, Pilani), India, and he has an MPhil in Management Science and a PhD in Management Science and Information Systems from Carnegie Mellon University.
Mastering Innovation is live on Thursdays at 4:00 p.m. ET. Listen to more episodes here.