Should-Read: Iason Gabriel: The case for fairer algorithms: “Software used to make decisions and allocate opportunities has often tended to mirror the biases of its creators, extending discrimination into new domains…
…Job search tools have been shown to offer higher-paid jobs to men, a programme used for parole decisions mistakenly identified more black defendants as ‘high risk’ than other racial categories, and image recognition software has been shown to work less well for minorities and disadvantaged groups…. A better understanding is needed of how bias enters algorithmic decisions…. The data used to train machine learning models is often incomplete or skewed…. Data… frequently contains the imprint of historical and structural patterns of discrimination…. Statistically unbiased and properly coded datasets… may still contain correlations between gender and pay, or race and incarceration, which stem from entrenched patterns of historical discrimination… Against this backdrop, it would be a serious mistake to think that technologists are not responsible for algorithmic bias or to conclude that technology itself is neutral….
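The parole-tool failure Gabriel cites is measurable: it shows up as a gap in false-positive rates between groups. A minimal sketch of that kind of audit, using numpy and entirely hypothetical toy data (the names `y_true`, `y_pred`, and `group` are stand-ins, not from the original):

```python
# Sketch of a per-group false-positive-rate audit, on hypothetical data.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = cases wrongly flagged positive / all truly negative cases."""
    negatives = (y_true == 0)
    if negatives.sum() == 0:
        return float("nan")
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

def audit_by_group(y_true, y_pred, group):
    """Report the false-positive rate separately for each group label."""
    return {g: false_positive_rate(y_true[group == g], y_pred[group == g])
            for g in np.unique(group)}

# Toy data in which the model over-flags group "b" relative to group "a".
rng = np.random.default_rng(0)
group = np.array(["a"] * 500 + ["b"] * 500)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
flip = (group == "b") & (y_true == 0) & (rng.random(1000) < 0.3)
y_pred[flip] = 1  # spurious 'high risk' labels concentrated in group "b"

print(audit_by_group(y_true, y_pred, group))  # reveals the FPR gap
```

A model can look accurate in aggregate while this per-group breakdown exposes exactly the disparity described above, which is why the aggregate number alone is not enough.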
Even when explicit information about race, gender, age and socioeconomic status is withheld from models, part of the remaining data often continues to correlate with these categories, serving as a proxy for them…. Patterns of discrimination intersect with each other, placing particular burdens on groups such as immigrants or single-parent families who conventionally fall outside the ‘protected category’ framework….
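This proxy effect is easy to demonstrate directly: withhold the protected attribute, then check whether the remaining features still predict it. A minimal sketch, assuming scikit-learn and synthetic data (the feature construction and all variable names are illustrative assumptions, not from the original):

```python
# Sketch of a proxy check: can "blinded" features recover the withheld
# protected attribute? An AUC well above 0.5 means they encode it anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
protected = rng.integers(0, 2, size=n)        # e.g. a demographic category
# A feature correlated with it (think postcode or job title), plus noise:
proxy = protected + rng.normal(0, 0.7, size=n)
noise = rng.normal(0, 1, size=(n, 3))         # unrelated features
X = np.column_stack([proxy, noise])           # protected column withheld

X_tr, X_te, z_tr, z_te = train_test_split(X, protected, random_state=0)
clf = LogisticRegression().fit(X_tr, z_tr)
auc = roc_auc_score(z_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC recovering protected attribute from 'blind' features: {auc:.2f}")
```

The point of the exercise: simply deleting the sensitive column does not delete the information, so "fairness through unawareness" fails whenever correlated features remain.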
Be transparent about the limitations of datasets…. Conduct research and develop techniques to mitigate bias…. Deploy responsibly…. Increase awareness…. Research will contribute to our understanding of the problem and potential solutions, but certain principles are already clear. We need new standards of public accountability…. And we need technologists to take responsibility for the impact of their work…