• WalnutLum@lemmy.ml · 4 days ago

    I think most ML experts (the ones who weren’t being paid out the wazoo to say otherwise) have been saying we’re on the tail end of the LLM technology sigmoid curve. (Basically, if you treat an LLM as a stochastic index, the real measure of training-algorithm quality is query accuracy per training datum.)

    Even with DeepSeek’s methodology, you see smaller and smaller returns on additional training input (rough sketch of the idea below).
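
    To make that concrete, here’s a toy sketch (my own illustration, not any published scaling law) of what a sigmoid relationship between training data and query accuracy implies: past the inflection point, the accuracy gained per additional training datum collapses. The midpoint and steepness values are made up.

    ```python
    import math

    def accuracy(n_tokens: float, midpoint: float = 1e12, steepness: float = 1.5) -> float:
        """Hypothetical sigmoid: query accuracy as a function of log10(training tokens)."""
        x = math.log10(n_tokens)
        m = math.log10(midpoint)
        return 1.0 / (1.0 + math.exp(-steepness * (x - m)))

    for n in (1e9, 1e10, 1e11, 1e12, 1e13, 1e14):
        delta = n * 0.01  # small relative step to estimate the local slope
        marginal = (accuracy(n + delta) - accuracy(n)) / delta  # accuracy gained per extra token
        print(f"{n:.0e} tokens: accuracy={accuracy(n):.3f}, marginal gain per token={marginal:.3e}")
    ```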

    • MDCCCLV@lemmy.ca · 4 days ago

      At this point it’s useful for doing some specific things, so the way to make it great is to make it cheap and accessible. Being able to run it locally would be way more useful (minimal sketch below).
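
      For anyone curious, a minimal local-inference sketch using llama-cpp-python (one option among several; Ollama and similar tools work too). The model path is a placeholder: you’d download a quantized GGUF checkpoint yourself, and the context/thread settings are just examples to tune for your hardware.

      ```python
      from llama_cpp import Llama

      # Placeholder path to a locally downloaded, quantized GGUF model file.
      llm = Llama(
          model_path="./models/some-7b-model.Q4_K_M.gguf",
          n_ctx=4096,    # context window size
          n_threads=8,   # tune to your CPU
      )

      out = llm(
          "Summarize the trade-offs of running an LLM locally in two sentences.",
          max_tokens=128,
      )
      print(out["choices"][0]["text"])
      ```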

      • makyo@lemmy.world · 3 days ago

        100% this. Wouldn’t it be something if they weren’t overtly running their companies to replace all of us? I feel like focusing instead on creating great personal assistants that make our lives easier in various ways would get a lot of support from the public.

        And don’t get me wrong, these LLMs are already great at helping people, but that’s clearly not the end goal of OpenAI or any of the others.