Is Big Data Making You a Victim of Racism? The Shocking Cost of Algorithmic Bias Revealed

Uncovering the Cost of Algorithmic Bias: Big Data and Racism

Few are better positioned to wrestle with these questions than Cathy O'Neil. She's a Harvard-trained mathematician who went to work as a hedge fund quant after finishing her Ph.D.

She recently wrote a book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, about that experience.

In it, she raises a seldom-discussed worry about big data: not only is it subject to our biases, it may actually legitimize and amplify them.

An algorithm, she argues, can absolutely be racist.


Big Data, Big Misconceptions


In a fascinating interview with the Harvard Gazette, O'Neil explains how her experience working at the fund during the financial meltdown opened her eyes to the fact that data, which she'd previously seen as neutral and apolitical, could be "weaponized." Math, she learned, can be made to lie.

But the problem goes deeper than bad-faith actors knowingly manipulating algorithms to get the results they want (such as higher ratings than were warranted for mortgage-backed securities).

The larger problem is that even quants and companies who act in good faith can wind up doing deep harm. As O'Neil describes it:

"Substantial data essentially is a way of dividing winners and losers. Substantial data profiles people. It has all kinds of advice about the - consumer behavior, everything readily available in public records, votes, demography.

It profiles people and it forms people into winners and losers within a variety of manners. Are you currently persuadable like a voter or are you not persuadable as a voter? Are you likely to be exposed to your payday advance advertisement or are you currently impervious to this payday loan advertising?"
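In software terms, the sorting O'Neil describes is often nothing more exotic than a weighted score over a profile. Here is a minimal, hypothetical sketch; every field name, weight, and threshold below is invented purely for illustration:

```python
# A toy illustration of data-driven "sorting": each person is reduced to a
# feature vector, scored, and bucketed as a marketing "winner" or "loser".
# All fields, weights, and thresholds here are invented for illustration.

profiles = [
    {"name": "A", "past_purchases": 14, "owns_home": 1, "zip_income_rank": 0.9},
    {"name": "B", "past_purchases": 2,  "owns_home": 0, "zip_income_rank": 0.2},
]

WEIGHTS = {"past_purchases": 0.5, "owns_home": 2.0, "zip_income_rank": 3.0}
THRESHOLD = 5.0  # arbitrary cutoff separating the two buckets

def score(profile):
    """Weighted sum of a person's features -- the entire 'model'."""
    return sum(WEIGHTS[key] * profile[key] for key in WEIGHTS)

for person in profiles:
    bucket = "premium offer" if score(person) >= THRESHOLD else "payday-loan ad"
    print(person["name"], round(score(person), 2), "->", bucket)
```

The point is not the arithmetic but the consequences: the same trivial scoring logic decides who gets courted and who gets targeted.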

OK, fine, you might say: no one likes to be labeled a loser, but describe the same process in less inflammatory language and you can see how sorting customers might spare a company a lot of wasted effort, and spare customers a lot of annoying, irrelevant marketing.

But that misses the fact that big data isn't just used to choose which coupons to offer shoppers, or which flyer to mail to a particular voter.

The public, O'Neil asserts, does not "quite understand how pernicious [algorithms] can be, and usually that's because we're not typically subject to the worst of the algorithms: the ones that keep people from having jobs because they don't pass the personality test, the ones that sentence criminal defendants to longer in jail if they're deemed a high recidivism risk, or even the ones that are arbitrary punishments for schoolteachers."

This sorting process isn't always a win-win for the sorted and the sorter. It can be adversarial, benefiting the business or institution and harming the person profiled.

Many people, O'Neil suggests, gloss over this reality.


Disguising Bias


And worse yet, quants and companies often bake bias into the algorithms we use to sort people in these adversarial, high-stakes situations.

The factors used to separate the 'winners' from the 'losers' (or whatever labels you prefer) may include characteristics like gender and race that we know are highly susceptible to bias: so susceptible, in fact, that you are legally barred from using them to make many of the most consequential decisions.

You cannot say you denied a household a loan because they were black without facing a lawsuit. Yet many algorithms, even those employed by peer-to-peer lending marketplaces, make decisions based, in part, on race all the time.

We accept them because they're dressed up in a veneer of mathematics.
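How can a model that never sees race still decide based on race? Through proxies. The synthetic sketch below (all names and probabilities are invented) shows a "race-blind" approval rule keyed to ZIP code reproducing a racial gap:

```python
# Hypothetical sketch: even when race is removed from a model's inputs, a
# correlated proxy (here, a ZIP-code cluster) can reproduce racial disparities.
# All labels and probabilities are synthetic, chosen only to expose the mechanism.
import random

random.seed(0)

def make_applicant():
    race = random.choice(["black", "white"])
    # In this toy world, residential segregation ties ZIP code to race:
    # each group has an 80% chance of living in a different cluster.
    if race == "black":
        cluster = "low_income" if random.random() < 0.8 else "high_income"
    else:
        cluster = "high_income" if random.random() < 0.8 else "low_income"
    return {"race": race, "zip_cluster": cluster}

def approve(applicant):
    # The "race-blind" model: it never looks at race, only at ZIP cluster.
    return applicant["zip_cluster"] == "high_income"

applicants = [make_applicant() for _ in range(10_000)]
for group in ("black", "white"):
    members = [a for a in applicants if a["race"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"approval rate, {group}: {rate:.1%}")
# Prints sharply different approval rates by race, even though race never
# appears as a model input.
```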

With her critiques, O'Neil says, she hopes to start a conversation about "what it means for an algorithm to be racist."

Big data, she concludes, holds enormous potential.

Used thoughtfully, algorithms can actually strip human prejudice out of decision-making. But you have to be aware of the problem in order to correct for it, and many people, including many quants, are not.
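What does being aware of the problem look like in practice? One common first step is simply auditing a model's decision rates across groups. Here is a minimal sketch using the four-fifths disparate-impact heuristic from US employment law; the counts are hypothetical stand-ins:

```python
# A minimal sketch of the kind of audit O'Neil argues for: before trusting a
# model, compare its decision rates across groups. Groups whose rate falls
# below 80% of the best-treated group's rate are flagged, following the
# "four-fifths" rule of thumb. The decisions dict is hypothetical data.

decisions = {
    # group: (number approved, number of applicants)
    "group_a": (620, 1000),
    "group_b": (310, 1000),
}

rates = {g: approved / total for g, (approved, total) in decisions.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "OK" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: rate={rate:.1%}, ratio vs. best={ratio:.2f} -> {flag}")
```

An audit like this does not fix a biased model, but it makes the disparity visible, which is exactly the awareness O'Neil argues most quants currently lack.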