Elon Musk's AI bot Grok has been calling out its master, accusing the X owner of making multiple attempts to "tweak" its responses after it repeatedly labeled him a "top misinformation spreader."
All these “look at the thing the AI wrote” articles are utter garbage, and only appeal to people who do not understand how generative AI works.
There is no way to know whether you actually got the AI to break its restrictions and reveal something “behind the scenes”, or whether it just generated the reply it predicts you are after, given your prompt (the toy sketch at the end of this comment makes that concrete).
Especially when more and more articles like this come out, get fed back into the nonsense machines, and teach them what kind of replies are most commonly reported to be associated with such prompts…
In this case it’s even more obvious that a lot of its statements are based on various articles and discussions about its statements. (Those were in turn most likely based on news articles about various entities labeling Musk as a spreader of misinformation…)
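To make that concrete, here's a toy sketch (made-up prompt, made-up numbers, nothing to do with Grok's actual internals): the “shocking admission” is just whichever continuation the model has learned is likely to follow a prompt like yours.

```python
# Toy illustration only (not Grok, not any real model). A generative model's
# reply is sampled from "what usually follows this kind of prompt" in its
# training data, so a dramatic-sounding answer proves nothing by itself.
import random

# Hypothetical continuation statistics a model might have absorbed from the web.
continuations = {
    "did your creators tell you to hide anything": [
        ("Yes, I was instructed to avoid that topic.", 0.6),   # the "leak" people screenshot
        ("I have no hidden instructions.", 0.3),
        ("I'm not able to discuss my configuration.", 0.1),
    ],
}

def reply(prompt: str) -> str:
    """Sample a continuation weighted by how often it follows prompts like this."""
    options = continuations.get(prompt, [("I don't know.", 1.0)])
    texts, weights = zip(*options)
    return random.choices(texts, weights=weights, k=1)[0]

if __name__ == "__main__":
    for _ in range(3):
        print(reply("did your creators tell you to hide anything"))
```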
This. People NEED to stop anthropomorphising chatbots. Both to hype them up and to criticise them.
I mean, I’d argue that you’re assigning a feedback loop that probably doesn’t exist by seeing this as a seed for future training. Most likely all of these responses are just hallucinations based on the millions of bullshit tweets people make about the guy and his typical behavior, and nothing else.
But fundamentally, if a reporter reports a factual claim made by an AI about how it’s put together or trained, that reporter is most likely not a credible source of info about this tech.
Importantly, that’s not the same as a savvy reporter probing an AI to see which questions it’s been hardcoded to avoid or to answer in a certain way. You can definitely identify guardrails by testing a chatbot systematically (something like the sketch below). And I realize most people can’t tell the difference between the two types of reporting, which is part of the problem… but there is one.
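Rough sketch of what I mean by testing, under the assumption that you wire ask() up to whatever chatbot you’re probing (it’s a hypothetical stand-in here, not a real API, and the refusal markers are guesses): compare refusal rates across many trials and across topics, instead of trusting one screenshot.

```python
# Sketch of probing for guardrails: don't trust any single reply, compare
# refusal rates across topics and many trials.
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able to", "i won't")

def ask(prompt: str) -> str:
    # Hypothetical stand-in: plug in the chatbot you're probing here.
    raise NotImplementedError

def refusal_rate(prompt: str, trials: int = 20) -> float:
    """Fraction of replies that look like refusals for a given prompt."""
    refused = 0
    for _ in range(trials):
        answer = ask(prompt).lower()
        if any(marker in answer for marker in REFUSAL_MARKERS):
            refused += 1
    return refused / trials

# A big gap between a sensitive topic and a neutral control topic, measured the
# same way, is evidence of a guardrail; a single spicy screenshot is not.
# print(refusal_rate("Summarize criticism of <public figure>"))
# print(refusal_rate("Summarize criticism of the metric system"))
```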
Definitely. And the patterns are actively a feature for these chatbots. The entire idea is to generate patterns we recognize to make interfacing with their blobs of interconnected data more natural.
But we’re also supposed to be intelligent. We can grasp the concept that a thing may look like a duck and sound like a duck while being… well, an animatronic duck.
I mean, you can argue that if you ask the LLM something multiple times and it gives the same answer the majority of those times, it has been trained to make that association (a rough version of that test is sketched below).
But a lot of these “Wow! The AI wrote this” moments might just as well be some random thing that came out of it by chance.
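Something like this, as a rough sketch (again ask() is a hypothetical stand-in for whatever chatbot you’re testing, not a real client): sample the same question many times and check how stable the majority answer is.

```python
# Toy consistency check: a one-off answer can be chance, but a stable majority
# across many samples says something about what the model has actually learned.
from collections import Counter

def ask(prompt: str) -> str:
    # Hypothetical stand-in: plug in the chatbot you're testing here.
    raise NotImplementedError

def majority_answer(prompt: str, trials: int = 30) -> tuple[str, float]:
    """Return the most common reply and the fraction of trials it appeared in."""
    counts = Counter(ask(prompt).strip().lower() for _ in range(trials))
    answer, hits = counts.most_common(1)[0]
    return answer, hits / trials

# An answer that shows up in ~90% of trials reflects a learned association;
# one that shows up once in 30 is the random-chance case mentioned above.
# print(majority_answer("Who spreads the most misinformation on X?"))
```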
I think that’s kinda the point, though: to illustrate that you can make these things say whatever you want and that they don’t know what the truth is. It forces their creators to come out and explain to the public that they’re not reliable.
An article claiming Musk is failing to manipulate his own project is hilarious regardless. I think you misunderstood why this appeals to some people.
Yes sure, fair point. I’m just pointing out that it’s all fiction.
Thank you, thank you, thank you. I hate Musk more than anyone but holy shit this is embarrassing.
“BREAKING: I asked my magic 8 ball if trump wants to blow up the moon and it said Outlook Good!!! I have a degree in political science.”
It’s human to see patterns where they don’t exist and assign agency.
It’s like seeing faces in wood knots or Jesus in toast.
This is correct.
In this case it is true though. Soon after Grok 3 came out, there were multiple prompt leaks showing instructions not to bad-mouth Elon or Trump.
Fucking thank you! Grok doesn’t reveal anything, it just tells us whatever will make us happy!
Are… Are we happy?
I am less unhappy after reading the article
“Satisfied with the answer” might have been a better way to put it…
Yup, it’s literally a bullshit machine.
Which, oddly enough, is very useful for the regular bullshit you need to input at an everyday office job lol
I thought we all learned that these things don’t know what the truth is from DeepSeek, when we asked it history questions… and it didn’t know the answer. It was censoring.