Trans woman - 9 years HRT

Intersectional feminist

Queer anarchist

  • 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: June 9th, 2023


  • Just because something is theoretically circumventable doesn’t mean we shouldn’t make it as hard as possible to circumvent it.

    The reason misinformation is so common these days is a concerted effort by fascists to gain control over media companies. Once they are in power and have significant influence within those companies, they can poison them, turning them into massive misinformation engines that churn out content faster than we ever believed possible. This problem has existed since the rise of mass media, especially in the 19th century, but social media offers far faster and more direct channels for spreading misinformation to the masses.

    And those masses do not care whether something is labeled as AI or not. They will believe it either way. That still doesn’t change the fact that AI-generated content needs to be directly labeled as such. What is and isn’t made by a human is extremely important. We cannot equate algorithms with people, and it’s necessary to draw that distinction as clearly as possible.



  • They are less useful than a Wikipedia search and a dictionary. They can functionally replace humans in 0 fields that were not already automatable by machines. They are useless in any situation that warrants any degree of caution about safety.

    85-90% is a significant overestimate, and accuracy gets considerably worse on specific tasks. Even if it were 85-90%, that’s not good enough, even remotely, for just about anything. Humans make errors too, but inconsistently, and at a rate inversely proportional to experience. None of that applies to an LLM; it will always make errors at the same rate. The kinds of errors it makes are also not just missteps but often pure delusion, very far from what the input was requesting. They cannot reason. They have no rationale. They’re imitation in its emptiest form. They cannot even so much as provide information reliably.

    They also ruin every single industry they come into contact with, and even worse they have utterly destroyed the usability of the internet. LLMs are a net negative for humanity in so many different ways. They deserve as much attention and investment as chatbots did back in 2005.

    Their best use case is churning out an endless stream of lifeless, soulless jpg background noise and word-salad articles, and tricking people into handing over money or ad revenue. Scamming is the only thing they are anywhere near functionally useful for.