LLM Determinism Debate
The cluster focuses on debates about whether Large Language Models (LLMs) produce deterministic outputs, highlighting issues like inherent randomness, sampling methods, seeds, floating-point variations, and execution environments that lead to inconsistent results for identical inputs.
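The summary above mentions sampling methods and seeds as sources of output variation. As an editorial illustration (not code from any comment in the cluster), here is a minimal sketch of temperature sampling over a token's logits, showing that greedy decoding (temperature 0) is deterministic at this layer while sampling is only reproducible if the seed is fixed; the function name and logit values are hypothetical:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, seed=None):
    """Pick a token index from raw logits.

    temperature == 0 -> greedy argmax (deterministic).
    temperature > 0  -> softmax sampling; reproducible only
    when the seed is fixed.
    """
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    rng = random.Random(seed)
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

logits = [2.0, 1.9, 0.1]
print(sample_next_token(logits, temperature=0))            # always index 0
print(sample_next_token(logits, temperature=1.0, seed=42)) # same every run with seed=42
```

Note that this only covers sampling-level randomness; as comments below point out, floating-point and execution-environment effects can vary the logits themselves even before sampling.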
Sample Comments
That only works if you have some level of determinism.
The LLMs most of us are using have some element of randomness in every token selected, which is non-deterministic. You can attempt to corral that, but statistically, with enough iterations, it may produce nonsensical, unintentional, dangerous, or opposite solutions/answers/actions, even if you have system instructions defining otherwise and a series of LLMs checking themselves. Be sure that you fully understand this. Even if you could make it fully deterministic, it would be deterministic…
Are you trying to say I'm old? Machines are deterministic; LLMs are very much not.
So in other words it's not perfectly deterministic at all?
Interesting -- is there any impact from LLM outputs not being deterministic?
I think the better statement is likely "LLMs are typically not executed in a deterministic manner", since you're right that there are no non-deterministic properties inherent to the models themselves that I'm aware of.
Not sure that things are that deterministic.
Or it's just non-deterministic, like with every LLM.
Why wouldn't it be deterministic?
This won’t ever work as long as LLMs are non-deterministic.
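Several comments distinguish the model itself from how it is executed. One concrete reason identical inputs can diverge even with greedy decoding is that floating-point addition is not associative, so a reduction summed in a different order (for example, under a different GPU kernel schedule or batch size) can give a slightly different result. A minimal sketch of this effect on IEEE-754 doubles:

```python
# Floating-point addition is not associative: the same mathematical
# sum evaluated in two groupings yields different bit patterns.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c
right = a + (b + c)
print(left == right)   # False
print(left, right)     # 0.6000000000000001 0.6
```

In a deep network, tiny discrepancies like this can be amplified layer by layer and occasionally flip an argmax, which is why "deterministic model" does not imply "deterministic execution".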