Does it really work like that? I would say they are not trying to fool any particular test, just getting harder to detect. The goal is to look completely realistic.
Looking completely realistic and being distinguishable from real are competing goals. If you can discern the difference, then it does not look completely realistic.
Another arm in the arms race. The next gen of face generation will have this mastered.
I think what they’re alluding to is generative adversarial networks https://en.m.wikipedia.org/wiki/Generative_adversarial_network where building a better discriminator that can tell a real image from a generated one is exactly how you get a better generator.
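For the curious, here is a minimal sketch of that adversarial loop, assuming PyTorch and a toy 2-D dataset; the network shapes and names (G, D, latent_dim) are illustrative, not from the article:

```python
# Minimal GAN training loop sketch (assumed setup, toy 2-D data).
import torch
import torch.nn as nn

latent_dim = 16

# Generator: maps random noise to a fake "sample" (here just a 2-D point).
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: outputs the probability that its input is real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0      # stand-in for real data
    fake = G(torch.randn(64, latent_dim))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the now slightly better discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))  # generator wants D to say "real"
    loss_g.backward()
    opt_g.step()
```

The point is in the two optimization steps: every improvement to the discriminator immediately becomes training signal for the generator, which is why a better detector tends to yield better fakes.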
This is one of the basic techniques to spot AI fakes: check whether the illumination and shadows are consistent across the image.
The “test” they’re trying to fool is kind of the Turing test: whether humans can tell them apart.
Consistent illumination and shadows are a rabbit hole we really don’t want to hop into.
Outside of very obvious anomalies, even a trained eye will have a hard time discerning what’s going on.