It was trained on something.
It can generate combinations of things that it is not trained on, so there is not necessarily a victim. But of course there might be something in there, I won’t deny that.
However, the act of generating something does not create a new victim unless someone’s likeness is used and it is shared? Or is there something ethical here that I am missing?
(Yes, all current AI is basically collective piracy of everyone’s IP, but besides that)
Watching videos of rape doesn’t create a new victim. But we consider it additional abuse of an existing victim.
So take that video and modify it a bit. Color correct or something. That’s still abuse, right?
So the question is: at what point in modifying the video does it stop being abuse? When you can’t recognize the person? But I think simply blurring the face wouldn’t suffice. So when?
That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?
I can’t make that call. And because I can’t make that call, I can’t support the concept.
With this logic, any output of any pic gen AI is abuse… I mean, we can be 100% sure that there is CP in the training data (it would be a very big surprise if not), and all output is a result of all the training data, as far as I understand the statistical behaviour of photo gen AI.
There is no ethical consumption while living a capitalist way of life.
😆 as if this has something to do with that
But to your argument: it is perfectly possible to tune capitalism with laws until it gets veerry social.
I mean, every “actually existing communist country” is at its core still a capitalist system, or how would you argue against that?
ML always there to say irrelevant things
Yes?
We could be sure of it if AI curated its inputs, which really isn’t too much to ask.
Well, AI is by design not able to curate its own training data, but the companies training the models would in theory be able to. It is just not feasible to sanitise this huge stack of data.
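The filtering step itself is simple in principle. Something like this rough sketch (the paths, names and the blocklist are all made up) is roughly what “curating the training data” would mean in practice: check every scraped image against a list of hashes of known abuse material and drop the matches.

```python
# Rough sketch only (paths, names and the blocklist are hypothetical):
# screen a scraped image dump against hashes of known abuse material.
import hashlib
from pathlib import Path

def file_hash(path: Path) -> str:
    # Exact SHA-256 of the file; real moderation pipelines use perceptual
    # hashes so re-encoded or cropped copies still match.
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def filter_corpus(corpus_dir: Path, blocklist: set[str]) -> list[Path]:
    # Keep only images whose hash is not on the blocklist.
    return [p for p in corpus_dir.rglob("*.jpg") if file_hash(p) not in blocklist]
```

But exact hashes only catch files that are already on somebody’s list, re-encoded copies slip straight through unless you move to perceptual hashing, and running even this trivial loop over billions of scraped images is exactly the scale problem I mean.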