Adoption of Predictive Analytics: Impact of Model Interpretability

Kartik Hosanagar, Operations, Information, and Decisions, The Wharton School

Abstract: One of the most important trends in business in recent years has been the growth of Big Data and predictive analytics. The trend started with traditional analytics and the emergence of decision support systems. With advances in machine learning (ML), systems can now ingest large amounts of data, learn how human decision-makers have made decisions in the past, and make decisions autonomously, achieving human-level or superhuman performance on many tasks. The success of these systems can be seen in applications ranging from autonomous cars and medical diagnosis systems such as Watson to robo-advisors and job-screening software.

Despite the success of these systems, one concern with many machine learning systems has been their lack of interpretability. Some of the best-performing machine learning systems, such as those based on deep learning, are often the most opaque. It is often unclear which variables the algorithm considered, which ones most influenced a decision, and whether the underlying associations are causal. The lack of interpretability matters in light of well-documented failures, including racial bias in job-screening software, gender bias in online ad targeting, and racial bias in criminal sentencing software. Such failures pose significant reputational and legal risks to firms. Furthermore, in business settings, the lack of interpretability also hurts the adoption of predictive analytics by managers who don’t trust a system whose decision process they don’t understand.
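To make the gap concrete, consider a minimal sketch of one common post-hoc interpretability probe, permutation importance. The model, data, and features below are synthetic illustrations chosen for this sketch, not systems or datasets referenced in the proposal. Such a probe can rank features by their influence on held-out accuracy, but it does not explain the model's internal decision process or establish causality.

```python
# A minimal sketch of permutation importance on an opaque model.
# All data and model choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "applicant screening" style data: 1,000 rows, 8 features.
X, y = make_classification(n_samples=1000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a non-linear ensemble model whose internals are hard to read.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in held-out
# accuracy. Large drops flag influential features, but the ranking
# says nothing about whether the associations are causal.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```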

This research proposal seeks to evaluate how firms trade off the risks of ML-based decision systems (potential biases, lack of interpretability) against their performance.