Local LLM Support
This cluster covers users expressing interest in, asking about, and discussing support for running Large Language Models (LLMs) locally on personal hardware instead of relying on cloud services such as OpenAI or ChatGPT. Commonly cited motivations are privacy, cost, and capability.
Sample Comments
There are local LLMs but my understanding is that they are not as good.
Nice project, any plans to make it work with local LLMs rather than "open"AI?
Anything that you wouldn't have run on chatgpt/claude and would need a local LLM for? Or are you using a local LLM because you have the option to run it locally?
Any plans for local LLMs? Thanks for making this!
Not all LLM models are remote. You can do just fine with local ones.
Locally-running LLMs might be good enough to do a decent job at this point... or soon will be.
I wonder if they have plans for allowing the usage of a locally hosted LLM?
Great, but personally I am interested in locally runnable LLMs instead of sending data to a cloud service like ChatGPT.
You can always run models locally. Local models will become cheaper and faster over time. Don't panic just yet.
Is there any product that allows me to plug in my own open-source LLM and run it locally?