
Moral Algorithms

“I’m sorry, Dave. I’m afraid I can’t do that.”

Thus spake HAL 9000 in Stanley Kubrick’s 1968 masterpiece 2001: A Space Odyssey. The computer saw Dave’s request as jeopardizing the mission and acted accordingly. HAL’s algorithms were moral.

Not to be outdone by science fiction, Congress last year introduced something called the Algorithmic Accountability Act, a novel attempt to hold computer programs accountable for immoral behavior. The New York Times (5/7/19) labeled it “The Legislation that Targets the Racist Impacts of Tech.” The bill seems to have died, but the ideas behind it are worth exploring.

The NYT article begins with a helpful description of the task faced by the algorithm designer:

When creating a machine-learning algorithm, designers have to [decide] what data to train it on, what specific questions to ask… These choices leave room for discrimination… against people who have been discriminated against in the past. For example, training an algorithm to select potential medical students on a data set that reflects longtime biases against women and people of color may make these groups less likely to be admitted… “White male doctors in, white male doctors out.”

This all seems totally reasonable. But is it? An example may help. Fishermen are subject to all sorts of limits on what they can keep. Currently, in North Carolina waters, fishermen are not supposed to retain King mackerel shorter than 24 inches in length. At a given age, female Kings tend to be larger than males.

It is likely, therefore, that a law-abiding fisherman—or an algorithm—will catch and keep more females than males, even though he is paying no attention to sex. Obviously, he (or it) cannot be accused of discriminating by sex. The sex disparity reflects a disparity in the criterion measure between the sexes. It is intrinsic, not extrinsic; built-in, not imposed by the fisherman.
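To make the point concrete, here is a minimal sketch in Python. The length distributions are invented for illustration, not real fishery data; the only assumption doing any work is that females run larger at a given age.

```python
# A minimal sketch with invented length distributions (not real fishery data):
# a sex-blind 24-inch minimum-size rule still yields a kept catch that skews
# female, simply because females tend to run longer at a given age.
import random

random.seed(0)

def simulate_catch(n=10_000, min_keep_inches=24.0):
    kept = {"male": 0, "female": 0}
    for _ in range(n):
        sex = random.choice(["male", "female"])
        mean_length = 26.0 if sex == "female" else 23.0   # assumed size difference
        length = random.gauss(mean_length, 3.0)
        if length >= min_keep_inches:                     # the rule never looks at sex
            kept[sex] += 1
    return kept

print(simulate_catch())   # roughly {'male': 1850, 'female': 3750}: females dominate
```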

The same caveat applies to apparently racist decisions by an algorithm. If the criteria for making a decision—grades and test scores for admission to medical school, for example—do not include, say, gender, then a result that shows gender disparities may well be the result of real differences rather than gender discrimination. These differences may be associated with gender, like the King mackerel length differences. But the gender disparity is a secondary effect. If there is “bias” in the fisherman’s choice, it is caused by the fish, not the fisherman.

The same may be true for the medical school applicants, although it is harder to know for sure, since each applicant’s gender is known to the school making the selection. Even if gender is excluded from the inputs to the selection algorithm, there may be other reliable cues: Did the applicant attend a girls’ school? Does the applicant claim to be a football player? Does the applicant excel at athletics or at writing? The point, then, is not “gender bias or no gender bias,” but whether these non-gender factors are reliable indicators of success in medical school, which is a difficult question to answer.

Reflect for a moment on what “training” entails. The algorithm, like the admissions office, is presented with a set of variables on past applicants: age, grades, test scores, recommendation letters—and sex/gender. For the algorithm, the next step is to correlate these variables, in an opaque, “black box” fashion, with some measure of success in medical school: What combination of variables best predicts that success? Once this set of criteria has been developed, the final step is to present the algorithm with a test set of new applicants and let it select those who should be admitted.
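As a deliberately toy illustration of these steps, here is a sketch using scikit-learn's LogisticRegression as a stand-in for whatever model an admissions office might actually use; the features, applicants, and “success” labels are all invented.

```python
# A deliberately toy sketch of the training/selection loop described above.
# The features, applicants, and "success" labels are invented; scikit-learn's
# LogisticRegression stands in for whatever black-box model might really be used.
from sklearn.linear_model import LogisticRegression

# Past applicants: [GPA, admissions-test score]. Sex/gender could be a column
# here too; whether to include it is exactly the design choice at issue.
X_train = [[3.9, 520], [3.2, 480], [3.7, 510], [2.9, 450], [3.8, 515], [3.1, 470]]
y_train = [1, 0, 1, 0, 1, 0]   # 1 = "succeeded" by whatever measure was recorded

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Final step: score a test set of new applicants and admit the highest-ranked.
X_new = [[3.6, 505], [3.0, 460]]
print(model.predict_proba(X_new)[:, 1])   # higher probability = higher in the admit ranking
```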

There are built-in problems in this process, not least deducing what we mean by “success in college/medical school” from a population of admits that has already been selected according to some criterion (selection bias). Selection bias is a problem for any process, human or machine, that must make decisions on the basis of non-random (pre-selected) data. NBA players are mostly tall, but you might not discover that height is an advantage in basketball by looking just at NBA players.
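A few lines of simulation show the effect. The population, heights, and “ability” scores below are invented, and selecting the top one percent is only a crude stand-in for “making the NBA.”

```python
# A toy simulation of selection bias (range restriction): in the full invented
# population, height strongly predicts basketball "ability," but among the
# pre-selected top performers the relationship nearly disappears.
import numpy as np

rng = np.random.default_rng(0)
height = rng.normal(175, 10, 100_000)                        # cm, invented population
ability = 0.8 * (height - 175) + rng.normal(0, 10, 100_000)  # ability rises with height, plus noise

made_the_cut = ability > np.percentile(ability, 99)          # crude stand-in for "made the NBA"

print(np.corrcoef(height, ability)[0, 1])                    # strong correlation, roughly 0.6
print(np.corrcoef(height[made_the_cut], ability[made_the_cut])[0, 1])  # much weaker, roughly 0.2
```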

This example, in fact, contradicts the NYT article’s suggestion that “…training an algorithm to select potential medical students on a data set that reflects longtime biases against women and people of color may make these groups less likely to be admitted.” Imagine a case of extreme selection bias in which the training data set of past admits contains no women at all. Since females are absent, sex carries no information, and the algorithm must have learned to select admits on the basis of other characteristics. If women are now included in the test set of new applicants, they will have a fair shot, provided their academic and other scores are comparable to the men’s. In other words, bias in the training set need not mean unfair treatment in the future.
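The claim is easy to check on the toy model from above: train on an all-male data set with sex omitted from the inputs, and the model has no way to treat a woman and a man with identical credentials differently. Again, the data and features are invented.

```python
# The "no women in the training set" case, using the same toy setup: sex is not
# an input, so the model literally cannot treat a woman and a man with identical
# credentials differently. (Invented data and features.)
from sklearn.linear_model import LogisticRegression

X_train = [[3.9, 520], [3.2, 480], [3.7, 510], [2.9, 450]]   # past admits: all male
y_train = [1, 0, 1, 0]

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

female_applicant = [3.8, 515]
male_applicant = [3.8, 515]    # identical credentials
scores = model.predict_proba([female_applicant, male_applicant])[:, 1]
print(scores[0] == scores[1])  # True: identical inputs get identical scores
```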

Algorithms, though, do have one key advantage over a human selection committee: Their input can be controlled. If the algorithm is not given information about the sex or gender of the applicant, any disparity that results is unlikely to be a real gender bias.

Financial matters are easier in this respect. Some years ago, the economist Thomas Sowell pointed out, in connection with mortgage-loan “redlining,” that proof of racial bias would be that blacks who actually got loans were better qualified than successful white applicants: their loans should then perform better and have lower default rates, showing that blacks had been held to a higher standard than whites.
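The logic of this “outcome test” can be written out in a few lines. The records and table layout below are invented, and this is only a sketch of the idea, not the method of the study cited next.

```python
# A minimal sketch of the "outcome test" on a hypothetical loan-performance table
# (records and layout invented). If approved black borrowers had been forced to
# clear a higher bar, their loans should default noticeably less often.
loans = [
    {"group": "black", "defaulted": False},
    {"group": "black", "defaulted": True},
    {"group": "white", "defaulted": False},
    {"group": "white", "defaulted": True},
    {"group": "white", "defaulted": False},
    # a real analysis would use thousands of loans and control for loan terms
]

def default_rate(records, group):
    subset = [r for r in records if r["group"] == group]
    return sum(r["defaulted"] for r in subset) / len(subset)

for g in ("black", "white"):
    print(g, default_rate(loans, g))
# A markedly lower default rate among black borrowers would be the signature of a double standard.
```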

A 1994 study[1] showed that this was not true: “empirical findings fail to support the theoretical predictions that observed default rates are relatively lower among minority borrowers or neighborhoods.” These neighborhoods were apparently held to the same standards as others (though published comments questioned how much could be concluded about racial discrimination more generally based on the results).

The same may be true, though much harder to demonstrate, for medical school applicants. If the training data set of successful applicants is the result of discrimination against women, then the “women and people of color” that it contains are likely to be better qualified than the white men. In fact, women now do better at getting into medical school than men. Whether they do better in school than male students is uncertain.

The New York Times article is right in one respect: The problem of detecting, and correcting, bias in an artificial intelligence algorithm is no different from the problem of correcting bias in human beings. The one thing that gives algorithms the edge is that we can control their input. If the algorithm doesn’t “see” race or gender, it is harder to attribute the disparities that result to bias: The cause is probably in the fish, not the fisherman. On the other hand, it is probably just as well that the Algorithmic Accountability Act seems to have died, since algorithms are, if anything, likely to be less biased than the admissions committees they may replace, if only because they can be forced to ignore factors that might be a source of bias.

References

[1] Berkovec, J. A., Canner, G. B., Gabriel, S. A., & Hannan, T. H. (1994). Race, redlining, and residential mortgage loan performance. The Journal of Real Estate Finance and Economics, 9, 263–294. See also Holmes, A., & Horvitz, P. (1994). Mortgage redlining: Race, risk, and demand. The Journal of Finance.
