RAG in LLMs
This cluster centers on discussions of Retrieval-Augmented Generation (RAG): whether it is necessary, how it is implemented, its limitations and alternatives, and tools such as RAGFlow and llama_index in the context of large language models.
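For readers new to the term, the retrieve-then-generate loop behind RAG can be sketched in a few lines. This is an illustrative toy, not the API of RAGFlow, llama_index, or any other library: word-overlap scoring stands in for embedding similarity, and the prompt is handed to whatever LLM you use.

```python
import re

def tokenize(text):
    """Lowercase and split into word tokens (toy stand-in for embeddings)."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query and return the top-k.
    A real RAG stack would use vector similarity over a document index."""
    scored = sorted(corpus,
                    key=lambda doc: len(tokenize(doc) & tokenize(query)),
                    reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Stuff the retrieved passages into the prompt so the model answers
    from them rather than only from its training data."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using the context."

# Hypothetical mini-corpus for illustration.
corpus = [
    "RAG retrieves relevant documents and adds them to the prompt.",
    "Fine-tuning bakes knowledge into model weights.",
    "Vector databases store embeddings for similarity search.",
]

query = "How does RAG use documents?"
prompt = build_prompt(query, retrieve(query, corpus))
```

The resulting `prompt` would then be sent to the LLM; the point several sample comments below circle around is that this retrieval step is what distinguishes RAG from simply relying on what the model memorized during training.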
Activity Over Time
Top Contributors
Keywords
Sample Comments
Wouldn't this just be foundational model + RAG in the limit?
Doesn't Claude already use RAG on the backend?
Aren't the LLMs already trained on the whole web? No need for RAG, in theory.
Interesting.. would you like to share some technical details? It did not seem that you used RAG here?
Trust me bro, you don't need RAG, just stuff your entire codebase into the prompt (also we charge per input token teehee)
Do you get meaningful insights with current RAG solutions?
Have you tried RAG on the docs?
Is RAG just a fancy term for sticking an LLM in front of a search engine?
How is this different than using RAG with my own data?
No need for separate RAG tools anymore?