Update: it turns out we are using Axolotl in this course. I do think closing the gap between doing this and what frontier model-training labs do is non-trivial; maybe there will come a time when I actually face that harsh truth.


I joined Hamel et al’s LLM training course (which later turned into a conference), hoping to reinforce what I knew and learn something new about fine-tuning language models, given that I had previously fine-tuned only one model, using Axolotl. Oddly, what convinced me wasn’t the $1k (at the time) in credits, but the chance to learn and practice the instrumentation part of the training pipeline that Hamel discussed in his tweets.

Anyway, I will just toss my notes here.
