We should not normalize the use of photorealistic AI-generated images of people
I like that if you need a quick drawing of an icon or object, AI can create it. With a prompt, you can have it drawn for you in seconds. DALL-E is the best at this. People who can't create those images themselves no longer need to pay for the rights to stock images or icons.
Everything that is made is inspired by something else.
If I draw a flower, the likelihood that my flower matches someone else's flower is high. If AI draws a flower, the likelihood that its flower matches someone else's flower is just as high. Both the AI and I are drawing inspiration from something else. No one would know the difference unless you told them.
I do not like that you can create a photorealistic image of a person. X's Grok AI does this, and I think it sets a bad precedent. It deceives people into thinking the person in the image actually did what the image shows, and it takes work away from photographers.
A person is unique.
Would you want someone to create a photorealistic image of you performing an action you never performed? It could be as small as a facial expression you didn't make or as large as an obscene gesture you didn't make.
Someone could easily manufacture a scandal using one of these generated images, and the internet would be complicit in those future scandals because we normalized the practice.
If you want to create a photorealistic image of a place, a mountain or a stadium, that's fair game. I don't see a problem with that, but not a person. And right now you can usually tell an image is AI-generated, but soon you won't be able to.