• Tar_Alcaran@sh.itjust.works · 2 days ago

    … damn, these results are shockingly good for a robot that was given a few English sentences. With zero human effort put in, these models reproduced all the basic functionality of the game in question?

    Obviously not. The models were trained on the open web, and that includes the code of hundreds of Pacman clones on GitHub, all accompanied by exactly the info that was used in the prompt.

    Considering they had perfect training data, the output is shockingly bad.

    • mindbleach@sh.itjust.works · 1 day ago

      Okay, so what’s an “astronaut riding a horse” example? What would demonstrate how the model actually works, without people scoffing like it can only copy-paste whole things that already exist?

      • Tar_Alcaran@sh.itjust.works · 1 day ago

        > without people scoffing like it can only copy-paste whole things that already exist?

        I’m actually scoffing because it’s not even capable of doing that very well.

        It’s a pretty simple question to answer, though; it just requires sanitized training data. Don’t include any Pacman code in the corpus, and then ask the model for Pacman.
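
        Something like the naive version below would do it (the paths and the keyword screen are made up for illustration; a real pipeline would want proper near-duplicate detection against known repos, not a regex):

        ```python
        import re
        from pathlib import Path

        # Hypothetical layout: raw scraped source files live in corpus/,
        # and the filtered copy goes to corpus_clean/.
        CORPUS_DIR = Path("corpus")
        CLEAN_DIR = Path("corpus_clean")

        # Crude keyword screen for Pacman clones.
        PACMAN_RE = re.compile(r"pac[\s_-]?man|power\s*pellet|blinky|pinky|inky|clyde",
                               re.IGNORECASE)

        kept = dropped = 0
        for path in CORPUS_DIR.rglob("*.py"):
            text = path.read_text(errors="ignore")
            if PACMAN_RE.search(text):
                dropped += 1  # suspected Pacman clone: keep it out of the training set
                continue
            out = CLEAN_DIR / path.relative_to(CORPUS_DIR)
            out.parent.mkdir(parents=True, exist_ok=True)
            out.write_text(text)
            kept += 1

        print(f"kept {kept} files, dropped {dropped} suspected Pacman clones")
        ```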

          • mindbleach@sh.itjust.works · 1 day ago

          Oh sure, let’s just retrain a zillion-parameter model on 99.999% of its original corpus. I’m sure that’s easier than making up a game.

          Does any one of these AI outputs look like a specific GitHub project? Because we already have the technology to just copy-paste a whole thing. It’s called git. This is doing something different.