

That gives a whole new twist to “you’ll own nothing and be happy”
Yep, essentially. But that’s for the hyperrealistic one.
Open-source models exist and can be forked
Some form of digital signatures for allowed services?
Sure, it will limit the choice of where to legally generate content, but it should work.
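The signature idea could work roughly like this: each licensed generation service signs the content it produces, and anyone can check that tag against the service's published key. This is a minimal sketch of that flow, not an actual scheme; the key, function names, and use of HMAC are all placeholders (a real deployment would use an asymmetric scheme such as Ed25519 with a public verification key, but Python's standard library only ships symmetric primitives).

```python
import hashlib
import hmac

# Placeholder key for illustration only. In a real scheme the service would
# hold a private signing key and publish a public verification key instead.
SERVICE_KEY = b"demo-key-not-real"

def sign_content(content: bytes) -> str:
    """Service side: produce a signature tag for generated content."""
    return hmac.new(SERVICE_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, signature: str) -> bool:
    """Verifier side: check the tag to confirm the content came from an
    allowed service and was not altered afterwards."""
    expected = hmac.new(SERVICE_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

image = b"\x89PNG...generated image bytes..."
tag = sign_content(image)
print(verify_content(image, tag))           # True: signature matches
print(verify_content(image + b"x", tag))    # False: content was altered
```

Verification fails on any modification of the signed bytes, which is the property that would let platforms reject content that doesn't carry a valid tag from a legal service.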
The thing is, banning is also a consequential action.
And based on what we know about similar behaviors, having an outlet is likely to be good.
Here, the EU takes an approach of “banning just in case” while also ignoring the potential implications of such bans.
Aha, I see. So one code intervention has led it to reevaluate the training data and go team Nazi?
“Bizarre phenomenon”
“Cannot fully explain it”
Seriously? Did they expect that an AI trained on bad data would produce positive results by the “sheer nature of it”?
Garbage in, garbage out. If you train AI to be a psychopathic Nazi, it will be a psychopathic Nazi.
That’s exactly how they work. According to many articles I’ve seen in the past, one of the most common models used for this purpose is Stable Diffusion. For all we know, this model was never fed any CSAM material, but it seems to be good enough for people to get off - which is exactly what matters.
I actually do not agree with them being arrested.
While I recognize the identification issue raised in the article, I strongly believe it should be tackled in another way.
AI-generated CSAM might be a powerful tool to reduce demand for content featuring real children. If we keep it legal to view and produce while the actual materials remain illegal, we can push more pedophiles toward what is less harmful - a computer-generated image produced with no children being harmed.
By taking action against AI-generated materials, they make those materials as illegal as the real thing, and give an interested party one less reason to avoid a site where actual children are being abused, perpetuating the cycle and leading to more real-world victims.
I’m afraid Europol is shooting themselves in the foot here.
What should be done is better ways to mark and identify AI-generated content, not a carpet ban and criminalization.
Let whoever happens to crave CSAM (remember: sexuality, however perverted or terrible it is, is not a choice) use the most harmless outlet - otherwise, they may just turn to the real materials, and as ongoing investigations suggest, there’s no shortage of supply or demand on that front. If every option is equally illegal, and some outlet is needed anyway, it becomes easier to escalate, and that’s dangerous.
As sickening as it may sound to us, these people often need something, or else things are quickly gonna go downhill. Give them their drawings.
And if you want to keep both weekend days free, 60 hours in 5 days means 12 hours of work a day; minus 8 hours for sleep you get 4 hours, minus ~2 hours of commute you get 2 hours, and the rest goes to basic cooking and eating. This leaves 0 hours for anything else, including rest or even the other duties that you’ll end up resolving throughout the weekends. This will absolutely kill you in the long run.
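The day-budget arithmetic above can be checked in a few lines (the commute and meal figures are the rough estimates from the comment, not data):

```python
# Daily time budget for a 60-hour, 5-day work week.
HOURS_IN_DAY = 24

work = 60 / 5   # 12 hours of work per day
sleep = 8       # hours of sleep
commute = 2     # rough commute estimate
meals = 2       # basic cooking and eating

free = HOURS_IN_DAY - work - sleep - commute - meals
print(free)  # 0.0 hours left for rest or anything else
```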
If only we had more content not related to “look we’re free!”, “look Linux is freedom”, “free free free!”, “MAGA bad, but we’re independent and free!”, it would be even more awesome (no jab at you, just a bit of frustration)
Also, for those saying “create it yourself” - I do