Discussion about this post

Pawel Jozefiak

"Start with prompting, advance to RAG, then consider fine-tuning only when earlier approaches prove insufficient" - this ladder is the answer to 90% of "should I fine-tune" questions I hear.

The mistake I keep seeing: teams jump to fine-tuning because it feels more technical and impressive, not because prompting or RAG actually failed them. Fine-tuning a model that was never given proper context is optimizing the wrong layer entirely.

One thing worth adding to the framework: the hybrid approach works, but evaluation complexity compounds fast. You need separate metrics for whether RAG retrieves the right context AND whether the fine-tuned model handles that context correctly. Debug surface area doubles overnight.
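To make the "separate metrics" point concrete, here is a minimal sketch of scoring the two layers independently. Everything in it (`eval_cases`, `retrieve`, `generate`, the toy documents) is a hypothetical stand-in, not anyone's real pipeline; in practice `retrieve` would hit a vector store and `generate` would call the fine-tuned model.

```python
# Sketch: score retrieval and generation separately so a failure
# can be attributed to the right layer. All names are hypothetical.

eval_cases = [
    {"query": "refund window?", "gold_doc": "policy_refunds", "expected": "30 days"},
    {"query": "support hours?", "gold_doc": "policy_support", "expected": "9-5 weekdays"},
]

def retrieve(query):
    # Stand-in retriever returning (doc_id, text).
    fake_index = {
        "refund window?": ("policy_refunds", "Refunds accepted within 30 days."),
        "support hours?": ("policy_faq", "See our FAQ."),  # wrong doc, on purpose
    }
    return fake_index[query]

def generate(query, context):
    # Stand-in for the fine-tuned model answering from context.
    return "30 days" if "30 days" in context else "unknown"

retrieval_hits = 0
answer_hits = 0
for case in eval_cases:
    doc_id, text = retrieve(case["query"])
    retrieval_hits += doc_id == case["gold_doc"]          # did RAG fetch the right doc?
    answer_hits += generate(case["query"], text) == case["expected"]  # did the model use it?

retrieval_rate = retrieval_hits / len(eval_cases)
answer_rate = answer_hits / len(eval_cases)
print(f"retrieval hit rate: {retrieval_rate:.0%}, answer accuracy: {answer_rate:.0%}")
```

The point of keeping the two counters separate: when answer accuracy drops, the retrieval hit rate tells you immediately whether to debug the retriever or the model.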
