cross-posted from: https://lemmy.cafe/post/29389301
A few days ago, X introduced a new AI Image Editing button that lets any user modify images posted by others, even without the original uploader's consent. Image owners are not notified when edits are made, and the feature is enabled by default with no opt-out (at least so far).



Ok, yes, you're right. "Grok, generate me some CSAM" is the same as opening a photo editor and drawing a new real-looking body onto someone's child, in a new pose. Same exact thing. No different at all. Twitter bears no responsibility for running a service that can do this.
You've totally changed the original post's topic and turned it into something obviously unacceptable. There's a line that content in general can cross, AI or not, and any public AI model should absolutely have safety rails / content moderation on its output.
Surely you have an example, then, of when it's appropriate for a service to generate nonconsensual deepfakes of people? Because last I checked, that's what the post's topic is.
And yes, children are people. And yes, it’s been used that way.
Edit: as for guardrails, yes, any service should have them. We all know what Grok's look like, though, coming from Elon "anti-censorship" Musk. I mentioned that ChatGPT also generates images, and it has very strict guardrails. It still makes mistakes, though, and the results are still unacceptable. Also, any amount of local fine-tuning of these models can accidentally strip their guardrails, so yeah.