• surewhynotlem@lemmy.world · 4 days ago

    Watching videos of rape doesn’t create a new victim. But we consider it additional abuse of an existing victim.

    So take that video and modify it a bit. Color correct or something. That’s still abuse, right?

    So the question is, at what point in modifying the video does it become not abuse? When you can’t recognize the person? But I think simply blurring the face wouldn’t suffice. So when?

    That’s the gray area. AI is trained on images of abuse (we know it’s in there somewhere). So at what point can we say the modified images are okay because the abused person has been removed enough from the data?

    I can’t make that call. And because I can’t make that call, I can’t support the concept.

    • Petter1@lemm.ee · 4 days ago

      With this logic, any output of any pic-gen AI is abuse… I mean, we can be 100% sure that there is CP in the training data (it would be a very big surprise if there weren’t), and every output is a result of all of the training data, as far as I understand the statistical behaviour of image-gen AI.

        • Petter1@lemm.ee · 1 day ago

          😆 As if this has anything to do with that.

          But to your argument: it is perfectly possible to tune capitalism with laws until it becomes veerry social.

          I mean, every “actually existing communist country” is at its core still a capitalist system, or how would you argue against that?

        • Petter1@lemm.ee · 1 day ago

          Well, an AI is by design not able to curate its own training data, but the companies training the models would in theory be able to. It is just not feasible to sanitise such a huge stack of data.
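
          Conceptually, the curation step would look something like this minimal Python sketch (with `is_prohibited` standing in for some hypothetical safety classifier, not a real API). The feasibility problem is that any per-image check like this has to run over billions of scraped images:

          ```python
          # Minimal sketch of pre-training dataset sanitisation.
          # Assumption: is_prohibited() is a hypothetical safety classifier.
          from pathlib import Path
          import shutil

          def is_prohibited(image_path: Path) -> bool:
              # Stub stand-in: a real pipeline would call a trained
              # abuse-detection classifier on the image here.
              return False

          def sanitise_dataset(raw_dir: str, clean_dir: str) -> int:
              """Copy only images that pass the safety check; return how many were kept."""
              out = Path(clean_dir)
              out.mkdir(parents=True, exist_ok=True)
              kept = 0
              for image_path in Path(raw_dir).rglob("*.jpg"):
                  if is_prohibited(image_path):  # flagged material is dropped from the set
                      continue
                  shutil.copy(image_path, out / image_path.name)
                  kept += 1
              return kept
          ```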