Research Spotlight: Walter W. Zhang on the “Black Box” of Algorithmic Decision-Making 

Algorithms shape countless decisions in our digital lives, powering everything from personalized ads to loan approvals. Many of the most powerful algorithms are so-called “black box” algorithms, meaning that no one, not even the developers building them or the businesses deploying them, fully understands how they work. 

While these hyper-complex algorithms can deliver highly targeted results (like offering you a discount on an item you’ve been browsing, nudging you to finally make a purchase), their opacity can have serious downsides. If an algorithm rejects your credit card application or offers you a higher mortgage rate, you probably won’t accept “I don’t know why that happened” as an answer.

In this interview, Wharton’s Professor Walter Zhang discusses this trade-off in the context of increasing calls to regulate black box algorithms. His new research explores the costs of dialing back algorithmic complexity and what that might mean for businesses and the people they serve. 

Professor Walter W. Zhang

Q: Your research responds to new regulations in Europe, like GDPR and the AI Act, which require companies to make algorithmic decisions more transparent, especially in high-stakes areas like finance and hiring. These new regulations posit that consumers have a “right to explainability,” something a black box algorithm cannot currently provide.

The core issue is that many of today’s targeting systems are too complex for a human to understand, but they make real decisions that affect people’s lives: credit limits, mortgage approvals, or credit scores. In some cases, they can even influence legal outcomes, like bail decisions or trial recommendations. These are serious outcomes, and they’re being driven by systems no one fully understands. The pushback from regulators is understandable: if we don’t understand how an algorithm makes decisions, how can we trust it?

These new European laws—and ones being debated in the U.S., like the proposed California AB 2930—call for algorithmic decisions to be “explainable.” In my research, I take that literally, asking what happens if firms are required to use only simple rules or decision trees for their algorithmic targeting. In other words, what happens when you replace the highly complex algorithmic model with a straightforward sentence? For example: if you live in Philadelphia, you get 10 percent off.  
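To make the contrast concrete, here is a minimal sketch (not from the paper) of what “replacing the black box with a sentence” can look like in practice. It assumes scikit-learn and synthetic data, with a gradient-boosted model standing in for the opaque targeting system and a two-level decision tree standing in for the explainable rule; the feature names and numbers are hypothetical.

```python
# Illustrative only: contrast a "black box" targeting model with a
# depth-limited decision tree that can be read as a plain-language rule.
# Data, features, and model choices are hypothetical, not from the paper.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for customer data: features might be browsing history,
# location, or past purchases; the label is "responds to a discount offer".
X, y = make_classification(n_samples=5000, n_features=10, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Complex, opaque model: strong pattern recognition, hard to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("Black box AUC:", roc_auc_score(y_test, black_box.predict_proba(X_test)[:, 1]))

# "Explainable" alternative: a two-level tree that reads like a sentence,
# e.g. "if feature_3 <= 0.4, offer the discount".
simple_rule = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_train, y_train)
print("Simple rule AUC:", roc_auc_score(y_test, simple_rule.predict_proba(X_test)[:, 1]))
print(export_text(simple_rule, feature_names=[f"feature_{i}" for i in range(10)]))
```

The gap between the two accuracy scores is, loosely, the kind of profitability cost the research quantifies: the simple rule can be stated in one breath, but it usually leaves predictive power on the table.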

By doing this, I can calculate the economic impact of these regulations. The trade-off is between profitability and explainability. The black box models are great at pattern recognition, but not at transparency. In most of the U.S., firms are still allowed to use black box methods. But if regulations push them toward using simpler tools, they’ll likely take a hit to profitability. 

Q: Profitability is one major trade-off. What other trade-offs should firms and regulators consider? 

The important thing is that there’s no one-size-fits-all answer. Sometimes banning black box targeting makes sense. Other times, it doesn’t. It really depends on the specifics of the market. 

For example, when there’s more competition between firms, black box targeting can actually be better for consumers. In competitive markets, firms are trying to offer better deals to attract customers, and using the most accurate algorithmic model helps them do that. Even though these models are opaque, the pressure of competition can lead firms to use them in a way that can be socially beneficial. 

However, in a monopoly or less competitive market, the dynamic changes. A firm might use that same black box model to extract as much surplus as possible from the consumer. In those cases, the lack of transparency can become harmful, especially if the model is using protected-class information to discriminate directly among consumers.

It’s also important to note that there isn’t a direct link between transparency and fairness. While simple rules may seem fairer in theory, in practice they can perform worse—and sometimes even lead to more discrimination, depending on how the data is structured. There are situations where a complex model can be less discriminatory than a simple rule, because it can make more nuanced, individualized distinctions. It’s counterintuitive, but it shows why this trade-off is so tricky. 

Q: What does explainability mean for consumers? Why should it matter to them? 

If an algorithm offers a fair and helpful recommendation (say, a discount on something you actually want), most consumers are happy. The algorithm has effectively solved a search problem for them. But when the outcome feels unfair—like being denied a mortgage or getting a lower credit limit for no clear reason—people want explanations.

There have been high-profile cases highlighting this. In the U.S., a family of color had their home appraised at a lower value than when a white friend posed as the owner. There was also the Apple Card controversy, where a husband and wife applied for the same credit card and, despite having similar backgrounds, the husband was given a much higher credit limit.

These biases aren’t intentional, but they arise because companies lack insight into how their models make decisions. Right-to-explanation laws aim to prevent discrimination on protected attributes like race or gender by making algorithms more transparent and easier to audit.

Q: You also calculate the cost of not complying with regulations like GDPR, citing examples of high-profile companies that chose to eat the fines to avoid losing revenue. Can you explain why you included that?

Firms should comply with regulation. But in practice, enforcement is uneven. Regulators tend to target large players like Apple or Facebook because fines are calculated as a percentage of global revenue, so those cases carry the largest penalties.

Smaller firms with lower visibility might face a dilemma if the cost of compliance exceeds the expected cost of a fine. For example, some websites still don’t ask for cookie consent, even though that’s required under GDPR. Maybe they’re not tracking anything, but in some cases, they probably are.

So, part of my paper explores a counterfactual scenario: how does a firm respond when faced with this dilemma? And, more importantly, how can regulators set fines or enforcement probabilities to ensure compliance?
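A stylized version of that calculus, with made-up numbers that are not from the paper, looks like this: a firm complies when the expected penalty (enforcement probability times fine) outweighs the extra profit it would earn by keeping the opaque system.

```python
# Hypothetical numbers, purely to illustrate the compliance calculus described
# above; the paper's actual model and figures are not shown here.
def complies(extra_profit_from_black_box, enforcement_probability, fine):
    """A firm complies when the expected penalty outweighs the profit gain."""
    expected_penalty = enforcement_probability * fine
    return expected_penalty >= extra_profit_from_black_box

# Suppose opaque targeting earns an extra $2M a year.
print(complies(2_000_000, 0.10, 25_000_000))  # True: $2.5M expected fine > $2M gain
print(complies(2_000_000, 0.05, 25_000_000))  # False: at 5% enforcement, non-compliance pays
```

The same inequality can be read the other way around: for a given fine, a regulator needs the enforcement probability to be at least the profit gain divided by the fine, which is exactly the kind of lever the research lets regulators reason about.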

To be clear, this is not about dodging regulations. It’s beneficial for both parties. Firms can evaluate the impact of these regulations on their bottom line, and regulators can design smarter enforcement strategies and improve their oversight. My research provides a framework to measure the localized impact of right-to-explanation laws, allowing everyone to make more informed decisions.