• leisesprecher@feddit.org
    1 year ago

    The real problem is implicit bias, the kind of discrimination that a reasonable user of a system can’t even see. How are you supposed to know that applicants from “bad” neighborhoods are rejected at a higher rate when the system is presented to you as objective? And since AI models don’t really explain how they arrive at a decision, you can’t even audit them.