‘Weapons of Math Destruction’ with Cathy O’Neil

Cathy O’Neil is founder of the algorithmic consulting company ORCAA and author of the New York Times bestseller ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’. At the IoAI we seek out expert perspectives and insights into questions at the cutting edge of AI policy. The views of these experts do not necessarily reflect the Institute’s own position.

Working on Wall Street in the midst of the 2008 financial crash led Cathy O’Neil to uncover what she would go on to term “Weapons of Math Destruction”: algorithms whose “opacity, scale and damage” combine to pose a special threat.

“The crash made it all too clear that mathematics, once my refuge, was not only deeply entangled in the world’s problems but also fuelling them”. She identified algorithms that were “designed to fail but let people make a lot of money” and found that “people trust in them because they have mathematical authority”. They possessed what she calls the “authority of the inscrutable”.

O’Neil quit her job at a hedge fund and struck out on a different path to uncover more WMDs, spanning a range of industries from recidivism technologies with racial bias to hiring algorithms which favoured men. Everywhere she looked, algorithms had the potential to entrench bias and exacerbate inequality.

O’Neil tells us these harms cannot be defended on the basis of ‘accuracy’, as some working in tech may claim.

“There is often this argument from the designers of algorithms that there is this trade-off between accuracy and bias. I want to push back against this framing because the accuracy is often the problem. In fact, we don’t want it to be accurate. Because being accurate often means it is more biased. If you think about recidivism algorithms, which I write about in the book, accuracy means accurately predicting who policemen will arrest in the future, which as we know is extremely racist. So, would we want to trade some accuracy for some fairness? We would. Going with accuracy is going with unfairness.”

Accuracy, she says, is simply “too good a name”.

When writing the book, she feared she “was the only person who was scared about this kind of stuff” but now that has changed, and such discussions have entered the mainstream.

She is happy to see this shift, but not all algorithmic harms are equally apparent, and not all harms receive the attention they deserve.

“There is all sorts of scepticism about facial recognition, and I’m really glad to see it. I’d like to see more of that kind of thing. But it will be hard because facial recognition is obvious. It’s in your face. It’s visceral.”

Most of the algorithmic harms O’Neil exposes are not like this. They are invisible. This is a problem, and one O’Neil thinks is the hardest for politicians to grapple with.

“Politicians respond to their constituents moaning and groaning about something that is hurting them. That is likely not going to happen as it normally would in this situation because many of the harms are invisible to those who are harmed. The people who didn’t get the job because they didn’t know the job existed, they aren’t going to complain to their representative that they have been harmed. But they have.”

This means a significant part of the task ahead must be “making the harm visible”. This is what O’Neil means when she talks about the value of transparency. Transparency will not solve the problem, but it can help us pinpoint the harm.

“Transparency is not a panacea. One of the reasons I want transparency is I want people to know they are under the scrutiny of a risk scoring system. I think people should know they are being measured by an opaque system.”

Key to her work as an algorithmic auditor is asking the question: “for whom does this fail?” Who is disadvantaged by the results of the algorithm? In practice, asking this basic question of algorithmic developers can reveal some disappointing answers.

“If you ask that question, you would be surprised how many of the algorithmic failures we’ve already witnessed are embarrassing. If you just think about the facial recognition software that works for white men but not for black women, you just think: they never asked that question? For whom does this fail? Because if they had asked that they would have seen this problem before claiming it was great. Many of the algorithmic mistakes we now know about would have been resolved before deployment if people had just asked that basic question. I think they just ignore it. They don’t want to deal with it. They don’t think they have to. That’s what I would suggest for policymakers: a requirement to consider, for whom does this fail.”
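Her auditing question lends itself to a concrete check. The sketch below is purely illustrative, and is not O’Neil’s or ORCAA’s own tooling: it assumes a labelled evaluation set with a group attribute, and simply breaks a system’s failure rate down by group so that “for whom does this fail?” gets a numerical answer before deployment. All names and data are hypothetical placeholders.

```python
# Minimal, hypothetical sketch: disaggregate an algorithm's errors by group
# so that "for whom does this fail?" has a concrete answer before deployment.
from collections import defaultdict

def failure_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    # Failure rate per group: errors divided by total evaluated cases.
    return {group: errors[group] / totals[group] for group in totals}

# Toy evaluation data (placeholder groups and labels).
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]
print(failure_rates_by_group(results))
# {'group_a': 0.0, 'group_b': 0.666...} -- the gap is the answer
# to "for whom does this fail?"
```

A large gap between groups, as in the toy output above, is exactly the kind of otherwise invisible harm O’Neil argues should be surfaced before a system is released.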

But what do we do once we have made the harms visible? Once these discriminatory biases are exposed, O’Neil is clear that broad principles of AI ethics are not enough: “They’re too broad. They’re almost useless”.

To say an algorithm must be fair is not enough. “Fairness doesn’t even make sense as a word. You’ve got to say what it means in a given context. The work isn’t saying that [fairness] is important. The work is figuring out what that means in a given context. It needs to be effectively played out in a bunch of contexts, in high stakes contexts”.

“My feeling about this, strongly, is that auditing algorithms is extremely bespoke right now. It’s very tailored to the context. No general principle can address that really.”

What we are dealing with “is not an abstract question, it is an actual question that we have to answer.”

While the topic has certainly become more mainstream since the publication of ‘Weapons of Math Destruction’ in 2016, there is still a long way to go. And it is in this contemporary context that O’Neil has a final plea.

“For companies who are using these high stakes algorithms, ask for whom does this fail. Show us the dead bodies by the side of the road. Even if they don’t know that they are dead. Show it to us. Because then we can get a handle on how these things work. As long as the harm is invisible, they might as well be working perfectly for all anyone knows, but we know they are not perfect. So, we need to get a handle on the harm.”
