Model Stock: All we need is just a few fine-tuned models
Paper: arXiv:2403.19522
This is a merge of pre-trained language models created using mergekit.
Two intermediate models were first created using the Model Stock merge method, with NousResearch/Meta-Llama-3.1-8B as a base; the results were then combined using the NuSLERP merge method.
The following models were merged into the helpful intermediate model using the Model Stock merge method, with NousResearch/Meta-Llama-3.1-8B as a base:
The following models were merged into the immersive intermediate model using the Model Stock merge method, with NousResearch/Meta-Llama-3.1-8B as a base:
Finally, the two intermediate models were merged using the NuSLERP merge method.
The following YAML configurations were used to produce this model:

- ./config-helpful.yaml
- ./config-immersive.yaml
- ./config-basemerge.yaml
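The contents of those configuration files are not reproduced above. As an illustration only, a Model Stock stage in mergekit's YAML syntax generally takes the following shape; the model names below are placeholders, not the actual ingredients of this merge:

```yaml
# Illustrative Model Stock stage (in the spirit of config-helpful.yaml).
# The fine-tuned model names here are hypothetical placeholders.
models:
  - model: placeholder/fine-tune-a
  - model: placeholder/fine-tune-b
  - model: placeholder/fine-tune-c
merge_method: model_stock
base_model: NousResearch/Meta-Llama-3.1-8B
dtype: bfloat16
```

A final stage in the spirit of config-basemerge.yaml would then set `merge_method: nuslerp` and list the two intermediate merges, typically with per-model `weight` parameters.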