Algorithms are not the impartial arbiters of truth we were hoping for.

In 2018, MIT computer scientist Joy Buolamwini discovered something disturbing: Leading facial recognition systems often failed to identify darker-skinned women, with error rates as high as 34%, while barely missing lighter-skinned men (error rates around 0.8%). It was relatively early days in the AI race, but it was a concerning finding nonetheless. Fast forward to today, and AI models have been repeatedly shown to discriminate based on gender and race in everything from hiring to healthcare.

This week at the HLFF Blog, Andrei Mihai examines the problem of discrimination in AI models.

Check out the full article here: HLFF Blog

Image caption: Image credits: Jr Corpa (CC BY 3.0)