

Yep, I understand, but I disagree. Maybe it comes from working with so many ESL coders, but I’ll happily accept typo corrections, because it’s not always obvious what the words should be if you’re not steeped in the culture.
“Comments must be accurate” is not a rule you should bend. Bending it even a little leads to lazy programming and shit code.
We could be sure of it if AI curated its inputs, which really isn’t too much to ask.
Watching videos of rape doesn’t create a new victim. But we consider it additional abuse of an existing victim.
So take that video and modify it a bit. Color correct or something. That’s still abuse, right?
So the question is, at what point in modifying the video does it stop being abuse? When you can’t recognize the person? But I think simply blurring the face wouldn’t suffice. So when?
That’s the gray area. AI models are trained on images of abuse (we know such material is in the training data somewhere). So at what point can we say the generated images are okay because the abused person has been removed far enough from the data?
I can’t make that call. And because I can’t make that call, I can’t support the concept.
“without a victim”
It was trained on something.
I upgraded Immich without breaking everything. That’s always a reason to celebrate.
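In case it helps anyone else, here’s a minimal sketch of a low-drama upgrade flow, assuming the stock docker-compose deployment where the image tags come from IMMICH_VERSION in .env (the version number below is just a placeholder; check the release notes for breaking changes before bumping):

```sh
# Pin an explicit version in .env instead of the floating "release" tag,
# so the containers only change when you deliberately bump the number.
# (Placeholder version; substitute the release you've actually vetted.)
echo 'IMMICH_VERSION=v1.106.4' >> .env

# Pull the pinned images and recreate the containers.
docker compose pull
docker compose up -d
```

Pinning the tag is most of the trick: with the default “release” tag, a routine `docker compose pull` can silently jump you across a breaking release.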