Open Source AI Debate
The cluster discusses the tension between open source AI models and proprietary ones from companies like OpenAI, focusing on safety concerns, business model threats, potential regulation, and the shift from openness to control.
Sample Comments
If open source AI becomes good enough, would this model hold? I guess they will try to shut down the open models as they come close?
Who cares, if it is an open source model and the weights are available. The nightmare scenario is that AI is behind some paywall and some entity can decide what goes in and what goes out.
OpenAI owns the intelligence until it doesn't, and an open source model is good enough.
It's one thing for, eg, OpenAI to decide a model is too dangerous to release. I don't really care, they don't owe anyone anything. It's more that open source is going to catch up, and it's a slippery slope into legal regulation that stifles innovation, competition, and won't meaningfully stop hackers from getting these models.
It’s MIT licensed; go run it on your own hardware if you are worried about data leaks etc. That’s what breaks OpenAI’s and Anthropic’s business models. Anyone can offer services using these models, hence they are commodities.
Who makes money from closed-source AI models?
There's been some discussion below on OpenAI not being open. Contrary to most arguments, I have to share that I like their current approach. We can all easily imagine the sort of negative things that could come out of the misuse of their models, and it is their responsibility to ensure it drips into the public domain rather than just putting it out there. It also allows them to more carefully consider any implications that they might have missed during development.
Good grief! People are okay with it when OpenAI and Google do it, but as soon as open source providers do it, people get defensive about it...
I don't want to be too cynical, but OpenAI used to be more open too until they decided releasing weights was too dangerous (/not profitable enough?), what guarantee is there that Eleuther doesn't also close their doors at some point?
Question to AI/ML folks: Is there no comparable open source model? Is the future going to be controlled by big corporations who own the models themselves? If models are so computationally intensive to produce, does it mean that the more computational power a company has, the better its models will be?