

I’d say it ends when you can’t predict, with 100% accuracy 100% of the time, how an entity will react to a given stimulus. With current LLMs, if I run one with the same input it will always do the same thing. And I mean really the same input, not putting the same prompt into ChatGPT twice and getting different results because there’s an additional random number generator I don’t have access to.
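To make the determinism point concrete, here’s a rough sketch (assuming the Hugging Face transformers library and the small gpt2 checkpoint, just for illustration): with greedy decoding there’s no randomness at all, and even with sampling the run-to-run variation only comes from the RNG seed you don’t control in a hosted chat UI.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The question of machine free will", return_tensors="pt")

# Greedy decoding: no randomness, so repeated runs are byte-identical.
out1 = model.generate(**inputs, max_new_tokens=20, do_sample=False)
out2 = model.generate(**inputs, max_new_tokens=20, do_sample=False)
assert torch.equal(out1, out2)  # same input -> same output, every time

# Sampling only differs between runs because of the RNG; pin the seed
# and it becomes deterministic again.
torch.manual_seed(0)
s1 = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=1.0)
torch.manual_seed(0)
s2 = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=1.0)
assert torch.equal(s1, s2)
```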
So I’d go with no at the moment, because I can easily get an LLM to contradict itself repeatedly in incredibly obvious ways.
I had a long ass post, but I think it comes down to this: we don’t know what consciousness or self-awareness even are, and we just kind of collectively agree upon it when we think we see it, sort of like how morality is pretty much a mutable group consensus.
The only way I think we could be truly sure would be to stick it in a simulated environment and see how it reacts over a few thousand simulated years to figure out whether it’s one of the following:
Now personally I think that test is likely impractical, so we’re probably going to default to “it’s conscious when it can convince the majority of people that it’s conscious for a sustained period”… So I guess it has free will when it can start, or at least spark, a large grassroots civil rights movement?