cross-posted from: https://lemmy.cafe/post/29389301
A few days ago X introduced a new AI Image Editing button that lets any user modify images posted by others, even without the original uploader’s consent. Image owners are not notified when edits are made, and the feature is enabled by default with no opt-out option (at least not yet).

Surely you have an example where it’s appropriate for a service to generate nonconsensual deepfakes of people, then? Because last I checked, that’s what this post is about.
And yes, children are people. And yes, it’s been used that way.
Edit: as for guardrails, yes, any service should have them. We all know what Grok’s are, though, coming from Elon “anti-censorship” Musk. I mentioned that ChatGPT also generates images, and it has very strict guardrails. It still makes mistakes, though, and that’s still unacceptable. Also, any amount of local finetuning of these models can accidentally strip their guardrails, so yeah.