LLM Fine-Tuning

This cluster focuses on discussions about fine-tuning large language models (LLMs), including techniques, use cases, tools, costs, alternatives and related methods such as in-context learning and LoRA, and comparisons to prompting and retraining.
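
As a concrete illustration of one technique the cluster discusses, here is a minimal LoRA sketch, assuming Hugging Face transformers and peft; the base model, rank, and target modules are illustrative choices, not recommendations.

    # Minimal LoRA sketch, assuming Hugging Face transformers + peft.
    # "gpt2" and all hyperparameters below are illustrative only.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    lora = LoraConfig(
        r=8,                        # adapter rank
        lora_alpha=16,              # scaling factor
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # typically a small fraction of all weights

The adapter trains only the injected low-rank matrices, which is why LoRA is often raised in these threads as a cheaper middle ground between prompting and full fine-tuning.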

Trend: ➡️ Stable (0.6x)
Category: AI & Machine Learning
Comments: 4,176
Years Active: 12
Top Authors: 5
Topic ID: #8414

Activity Over Time

2011: 1
2016: 2
2017: 10
2018: 16
2019: 50
2020: 77
2021: 44
2022: 115
2023: 1,807
2024: 1,008
2025: 997
2026: 51

Keywords

OP, LLM, GGU, ML, RAG, T5, openai.com, turing.com, JavaScript, fine-tuning, models, training, LoRA, tuned, tune

Sample Comments

serjester Nov 9, 2025 View on HN

Did you try fine tuning the LLMs?

oezi Mar 19, 2023 View on HN

Don't forget the ability to finetune the LLM.

amelius Dec 27, 2024 View on HN

Can't we just finetune the model based on the LLM's output? Has anyone tried it?
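
Fine-tuning one model on another's output is essentially knowledge distillation, and it has been tried widely. A hedged sketch of the data-collection half, assuming the OpenAI Python SDK; the teacher model name and the prompts.txt input file are placeholders.

    # Hedged sketch: collect teacher-model outputs as fine-tuning data.
    # Assumes the OpenAI Python SDK; prompts.txt is a placeholder file.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    with open("prompts.txt") as f:
        prompts = [line.strip() for line in f if line.strip()]

    with open("distill_data.jsonl", "w") as out:
        for prompt in prompts:
            response = client.chat.completions.create(
                model="gpt-4o",  # teacher model; any strong model works
                messages=[{"role": "user", "content": prompt}],
            )
            # Store prompt/completion pairs in the JSONL format most
            # fine-tuning pipelines accept.
            out.write(json.dumps({
                "prompt": prompt,
                "completion": response.choices[0].message.content,
            }) + "\n")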

simonw Jul 21, 2025 View on HN

Have you had any success finetuning models? What did you do?

Sabinus Aug 9, 2025 View on HN

What do you think the main use case for fine tuning small language models is?

leobg May 15, 2023 View on HN

No mention of training / fine tuning it through transformers?

Is there a well-established tool-chain for finetuning these models?
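
The most commonly cited toolchain in these threads is Hugging Face's transformers Trainer with the datasets library. A minimal causal-LM fine-tuning sketch, assuming a local train.txt of raw text; the model choice and hyperparameters are illustrative.

    # Hedged sketch of a common fine-tuning toolchain:
    # Hugging Face transformers Trainer + datasets.
    from datasets import load_dataset
    from transformers import (
        AutoModelForCausalLM, AutoTokenizer,
        DataCollatorForLanguageModeling, Trainer, TrainingArguments,
    )

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # train.txt is a placeholder: one training document per line.
    dataset = load_dataset("text", data_files={"train": "train.txt"})
    tokenized = dataset["train"].map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
        batched=True, remove_columns=["text"],
    )

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()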

rcxdude Aug 22, 2025 View on HN

There are LLM finetunes which do this, but it is very far from watertight.

A possible alternative to fine-tuning is in-context learning, especially if you are using a model with long context where you can provide a lot of examples. Models can do one/few-shot learning, but in-context learning improves the more examples you give. You could experiment cheaply with Claude Haiku to see if this works for you.
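
For reference, few-shot in-context learning amounts to packing labeled examples into the prompt. A sketch assuming the Anthropic Python SDK; the model id, examples, and labels are placeholders, so check the current docs for available Haiku models.

    # Hedged sketch of few-shot in-context learning via the Anthropic SDK.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # Labeled examples packed into the prompt; accuracy tends to improve
    # as you add more of them (context window permitting).
    examples = [
        ("Great product, works as advertised.", "positive"),
        ("Broke after two days.", "negative"),
    ]
    prompt = "\n\n".join(f"Review: {t}\nLabel: {l}" for t, l in examples)
    prompt += "\n\nReview: Shipping was slow but the item is fine.\nLabel:"

    message = client.messages.create(
        model="claude-3-haiku-20240307",  # example model id; may be outdated
        max_tokens=5,
        messages=[{"role": "user", "content": prompt}],
    )
    print(message.content[0].text)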

CuriouslyC Oct 30, 2025 View on HN

In the interest of transparency, you should update your post with the model you fine-tuned; it matters.