Despite advancements in AI, new research reveals that large language models continue to perpetuate harmful racial biases, particularly against speakers of African American English.
While it may be obvious to you, most people don't have the data literacy to understand this, let alone use that understanding to decide where this technology can or should be deployed and how to counteract the baked-in bias. Unfortunately, as the article mentions, people believe the problem is going away when it is not.
True, and it upsets me because we can't even get baseline agreement from the public that systemic inequality needs correcting.
…yet, simultaneously, we're investing academic effort into correcting symptoms spawned by a problem that many believe doesn't exist.
To put it another way: imagine you're a car mechanic, someone brings you a 1980s vehicle, you diagnose that it's low on oil, and in response the customer says, "Oil isn't real." That's an impasse: conversation not found, user too dumb to continue.
I suppose, to wrap up my whole message in one closing statement: people who deny systemic inequality are braindead, and for whatever reason they were on my mind while reading this article.
The real problem is implicit bias: the kind of discrimination that a reasonable user of a system can't even see. How are you supposed to know that applicants from "bad" neighborhoods are rejected at a higher rate if the system is presented to you as objective? And since AI models don't really explain how they arrive at a decision, you can't even audit them.
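For anyone wondering what an audit would even look like from the outside: you usually can't inspect the model's reasoning, but if you can collect its decisions you can at least compare outcomes across groups. A minimal sketch in Python, where the applicant data, neighborhood labels, and the four-fifths threshold are made up purely for illustration:

```python
# Sketch of an outcome-level bias check: does a "neutral-looking" system
# reject applicants from some neighborhoods at a higher rate?
# All data below is hypothetical, for illustration only.
from collections import defaultdict

# (applicant_neighborhood, model_decision) pairs produced by some opaque system.
decisions = [
    ("neighborhood_a", "accept"), ("neighborhood_a", "accept"),
    ("neighborhood_a", "reject"), ("neighborhood_a", "accept"),
    ("neighborhood_b", "reject"), ("neighborhood_b", "reject"),
    ("neighborhood_b", "accept"), ("neighborhood_b", "reject"),
]

totals = defaultdict(int)
accepts = defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    if decision == "accept":
        accepts[group] += 1

# Acceptance rate per group, then the ratio of the lowest to the highest rate
# (the "four-fifths rule" heuristic often used in disparate-impact checks).
rates = {group: accepts[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: acceptance rate {rate:.0%}")
print(f"disparate-impact ratio: {ratio:.2f} (below 0.8 is commonly treated as a red flag)")
```

It doesn't tell you *why* the model discriminates, only *that* the outcomes are skewed, and even that much requires someone to think of checking in the first place, which is exactly the problem when the system is sold as objective.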
I’ll be curious what they find out.