A big biometric security company in the UK, Facewatch, is in hot water after their facial recognition system caused a major snafu - the system wrongly identified a 19-year-old girl as a shoplifter.
If it works anything like Apple’s Face ID, twins don’t actually map all that similarly. In the general population, the probability of two people’s underlying facial structures producing a matching map is approximately 1:1,000,000. It’s slightly higher for identical twins, and higher again for prepubescent identical twins.
Yeah, people with totally different facial structures get identified as the same person all the time with the “AI” facial recognition, especially if you’re darker-skinned. Luckily (or unluckily) I’m white as can be.
I’m assuming Apple’s software is a purpose-built algorithm that detects facial features and compares them, rather than a black-box AI where you feed in data and it returns a result. That’s the smart way to do it, but it takes more effort.
All the time, eh? Gonna need a citation on that. And I’m not talking about just one news article that pops up every six months. And nothing that links back to the UCLA’s 2018 misleading “report”.
You assume a lot here. People have this conception that all FR systems are trained black-box models. That’s true for some systems, but not all.
The system I worked with, which ranked near the top of the NIST FRVT reports, did not use a trained AI algorithm for matching.
I’m not doing a bunch of research to prove the point. I’ve been hearing about them being wrong fairly frequently, especially on darker skinned people, for a long time now. It doesn’t matter how often it is. It sounds like you have made up your mind already.
I’m assuming that of Apple because it’s been around for a few years longer than the current AI craze. We’ve been doing facial recognition for decades now, with purpose-built algorithms. It’s not much of a leap to assume that’s what they’re using.
Meaning, 8,000 potential false positives per user globally. About 300 in the US, 80 in Germany, 7 in Switzerland.
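Those per-country figures are just the 1-in-a-million rate from upthread multiplied by round population numbers. A minimal sketch, assuming the same 1:1,000,000 rate and the approximate populations the comment implies:

```python
# Back-of-envelope false-positive counts: population * false-match rate.
# The 1-in-1,000,000 rate comes from the comment upthread, not from any
# vendor spec; populations are the round figures the comment implies.
FALSE_MATCH_RATE = 1 / 1_000_000

populations = {
    "World": 8_000_000_000,
    "US": 300_000_000,
    "Germany": 80_000_000,
    "Switzerland": 7_000_000,
}

for region, pop in populations.items():
    expected = pop * FALSE_MATCH_RATE
    print(f"{region}: ~{expected:.0f} potential false matches per enrolled face")
```

Whether that’s a big or small number depends entirely on what happens to the person who gets flagged.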
Might be enough for Iceland.
Yeah, which is a really good number and allows for near-complete elimination of false matches along this vector.
I promise bro it’ll only starve like 400 people please bro I need this
Who’s getting starved because of this technology?
A single mum with no support network who can’t walk into any store without getting physically ejected, maybe?
You’re perfectly OK with 8000 people worldwide being able to charge you for their meals?
No, you misunderstood. That is a reduction in commonality by a literal factor of one million. Any secondary verification point is sufficient to reduce the false-positive rate to effectively zero.
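The math behind that claim: if the face match and a secondary check fail independently, their false-positive rates multiply. A toy illustration, where the secondary check’s rate is a made-up number for the sake of the example:

```python
# If two checks have independent false-positive rates p1 and p2, the
# chance of both firing on the same innocent person is p1 * p2.
p_face = 1 / 1_000_000   # face-match rate claimed upthread
p_second = 1 / 10_000    # hypothetical secondary check (assumption)

p_both = p_face * p_second
print(f"Combined false-positive rate: 1 in {1 / p_both:,.0f}")
```

The catch, as the replies point out, is that this only holds if a secondary check is actually performed, and the two checks really are independent.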
Like, running a card sized piece of plastic across a reader?
It’d be nice if they were implementing this to combat credit card fraud or something similar, but that’s not how this is being deployed.
Which means the face recognition was never necessary. It’s a way for companies to build a database that will eventually get exploited. 100% guarantee.
And yet this woman was mistaken for a 19-year-old 🤔
Shitty implementation doesn’t mean shitty concept; you’d think a site full of tech nerds would understand that distinction.
Pretty much everyone here agrees that it’s a shitty concept. Doesn’t solve anything and it’s a privacy nightmare.
Well I guess we’re lucky that no one on Lemmy has any power in society.