• 0 Posts
  • 9 Comments
Joined 2 years ago
Cake day: June 21st, 2023



  • A false statement would be me saying, without knowing, that a light I cannot see and have never seen, which is currently red, is actually green. I am just as likely to be right as to be wrong; it comes down to chance.

    A lie would be me knowing that the light I am currently looking at is red and saying that it is actually green. No chance involved; I did it intentionally, and the only outcome of my decision to act was that I spoke a falsehood.

    AIs can generate false statements, yes, but they are not capable of lying. Lying requires cognition, which LLMs, by their own admission and by the admission of the companies developing them, are at the very least not currently capable of. Personally, I believe it is likely they never will be.







  • I think the important point is that LLMs as we understand them do not have intent. They are fantastic at producing output that appears to meet the requirements set out in the input text, and when they actually do meet those requirements, instead of just seeming to, they can provide genuinely helpful information. But it is very easy not to immediately recognize the difference between output that merely looks correct, which satisfies the purpose of the LLM, and output that actually is correct, which satisfies the purpose of the user.