Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, an expert RoN modder, and the creator of RoN:EE’s community patch (CBP).

(header photo by Brian Maffitt)

  • 0 Posts
  • 4 Comments
Joined 2 years ago
Cake day: June 17th, 2023

  • So they literally agree not using an LLM would increase your framerate.

    Well, yes, but the point is that while you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.

    Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?

    If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.

    As for how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.


  • It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game, since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM, for whatever that’s worth:

    Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.

    When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures — your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)
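
    For a rough sense of what an 8B-parameter local instruct model looks like in practice, below is a minimal sketch using Hugging Face transformers and a Llama-3.1-8B-Instruct checkpoint; both are assumptions on my part, since NVIDIA hasn’t said which Llama variant or runtime G-Assist actually uses. The point it illustrates is that the GPU is only busy while generate() runs, which is the brief dip the quote describes.

    ```python
    # Minimal local-inference sketch (NOT G-Assist itself). Assumes the
    # meta-llama/Llama-3.1-8B-Instruct checkpoint and a CUDA-capable GPU;
    # G-Assist's actual model/runtime aren't public beyond "Llama-based, 8B".
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.float16, device_map="cuda"
    )

    # The GPU only does inference work while generate() runs; before and
    # after those few seconds, a game or other GPU-heavy app has the
    # hardware to itself again.
    messages = [{"role": "user",
                 "content": "How do I lower my GPU temperature while gaming?"}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
    ```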


  • Eh, I think that one’s mostly on the community / players giving up games as soon as anything bad happens (making the 30-70 and 40-60 games where you still have decent odds of winning more like 5-95 games, which becomes a self-fulfilling prophecy), plus regular players getting better over time (mistakes and misplays are more likely to be punished, and leads are more likely to be capitalized on).

    The give-up culture wasn’t as bad much earlier in the game’s life, at least in my NA-centric exposure to solo queue.