Following Hugging Face’s Zephyr recipe


Finding good training hyperparameters for new LLMs is always difficult and time-consuming. With Zephyr Gemma 7B, Hugging Face seems to have found a good recipe for fine-tuning Gemma. They used a combination of distilled supervised fine-tuning (SFT) and DPO, similar to what they did for the original Zephyr based on Mistral 7B. However, training Gemma with DPO on consumer hardware is challenging because of its memory consumption.

In this article, I first review the recipe used by Hugging Face to train Zephyr Gemma 7B. Then, I show how to use this recipe with Unsloth, a framework implementing various optimizations for fast and memory-efficient training. The method presented in this article has a peak memory consumption of 19 GB of VRAM and a total training time of only 8 hours. In other words, DPO training for Gemma is possible on consumer hardware.
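To give an idea of why Unsloth helps here, the following is a minimal sketch of loading Gemma 7B quantized to 4-bit with Unsloth's FastLanguageModel. The model ID, sequence length, and dtype handling are illustrative assumptions, not necessarily the exact settings used for the results above.

```python
# Minimal sketch: loading Gemma 7B in 4-bit with Unsloth.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-7b",  # assumed model ID
    max_seq_length=2048,           # illustrative value
    load_in_4bit=True,             # 4-bit quantization to fit in consumer VRAM
    dtype=None,                    # let Unsloth pick float16/bfloat16 automatically
)
```

Loading the weights in 4-bit is what brings the peak memory consumption down to a range that a single 24 GB consumer GPU can handle.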

Supervised Fine-tuning (SFT)

DPO requires a reference model that has been trained with supervised fine-tuning (SFT) on an instruction dataset. Hugging Face also released this SFT model:

For SFT, they used deita-10k, a small instruction dataset of 9.5k examples:

All the examples in this dataset were generated by a wide variety of LLMs (GPT-4, GPT-3.5, Claude, Vicuna, Llama 2, Mistral 7B, Zephyr, etc.). For SFT training, they used a specific chat data format that we will also use.
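As an illustration, here is a minimal sketch of loading this dataset and formatting it with a chat template. The dataset ID, split name, column name, and tokenizer checkpoint below are assumptions; check the Zephyr Gemma model card for the exact ones used by Hugging Face.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed dataset ID and split; the SFT split of deita-10k on the Hub.
dataset = load_dataset("HuggingFaceH4/deita-10k-v0-sft", split="train_sft")
# Assumed tokenizer checkpoint that ships the chat template used for SFT.
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-gemma-sft-v0.1")

def format_example(example):
    # Each example is expected to hold a list of chat "messages";
    # apply_chat_template concatenates them into one training string.
    return {"text": tokenizer.apply_chat_template(example["messages"], tokenize=False)}

dataset = dataset.map(format_example)
```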

Hugging Face used the hyperparameters referenced in this configuration file from their alignment handbook. They didn't use LoRA or quantization, which means they probably trained Zephyr Gemma on many A100/H100 GPUs. Note: the model card mentions "16 devices" but doesn't say what these devices are.

To run this recipe on consumer hardware, we will use LoRA and quantization, i.e., QLoRA. I’ll detail the LoRA configuration in the next section.
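As a quick preview, here is a minimal sketch of what a QLoRA setup looks like with Unsloth: LoRA adapters attached on top of the 4-bit model loaded earlier. The rank, alpha, and target modules below are illustrative placeholders, not the configuration detailed in the next section.

```python
from unsloth import FastLanguageModel

# Load the 4-bit base model (same as in the earlier sketch).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="google/gemma-7b",  # assumed model ID
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters on top of the quantized weights (QLoRA).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                           # LoRA rank (illustrative placeholder)
    lora_alpha=16,                  # LoRA scaling factor (illustrative)
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,  # trades compute for lower VRAM usage
)
```

Only the small LoRA matrices are trained while the 4-bit base weights stay frozen, which is what makes both SFT and DPO fit within the 19 GB of VRAM mentioned above.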