We start by helping corporations understand the history of housing and finance in the United States and how all of our housing and finance policies have been enacted through a racial lens. You can’t start from ground zero in developing a system and assume that system is going to be fair. You have to develop it in a way that utilizes antiracist technologies and methodologies.
McIlwain: Can we still realistically make a dent in this problem using the technological tools at our disposal? If so, where do we start?
Rice: Yes. Once the dust of the 2008 financial crisis had settled a bit and we looked up, it was as if the technology had overtaken us. So we decided that if we couldn’t beat it, we would join it. We spent a lot of time learning how algorithm-based systems work, how AI works, and we have actually come to the point where we think we can now use technology to help diminish discriminatory outcomes.
If we understand how these systems manifest bias, we can get in the innards, hopefully, and then de-bias those systems, and build new systems that infuse the de-biasing techniques within them.
But when you think about how far behind the curve we are, it’s daunting to consider all the work that needs to be done, all the research that needs to be done. We need more Bobbys of the world. And there’s all the education that needs to be done so that data scientists understand these issues.
Rice: We’re trying to get regulators to understand how systems manifest bias. We really don’t have a body of examiners at regulatory agencies who understand how to conduct an exam of a lending institution to ferret out whether its systems (its automated underwriting system, its marketing system, its servicing system) are biased. But the institutions themselves can develop organizational policies that help.
The other thing that we have to do is really increase diversity in the tech space. We have to get more students from various backgrounds into STEM fields and into the tech space to help enact change. I can think of a number of examples where just having a person of color on the team made a profound difference in terms of increasing the fairness of the technology that was being developed.
McIlwain: What role does policy play? I get the sense that, just as civil rights organizations were behind the industry in understanding how algorithmic systems work, many of our policymakers are behind the curve. I don’t know how much faith I would place in their ability to serve as an effective check on the current system, or on the new AI systems quickly making their way into the mortgage arena.
McIlwain: I remain skeptical. For now, for me, the magnitude of the problem still far exceeds both our collective human will and the capabilities of our technology. Bobby, do you think technology can ever help?
Bartlett: I have to answer that with the lawyerly “It depends.” What we see, at least in the lending context, is that you can eliminate the source of bias and discrimination observed in face-to-face interactions through some sort of algorithmic decision making. The flip side is that if it’s improperly implemented, you could end up with a decision-making apparatus that is as bad as a redlining regime. So it really depends on the execution, the type of technology, and the care with which it is deployed. But a fair lending regime operationalized through automated decision making? I think that’s a really challenging proposition, and I think the jury is still out.