# KaidenRp2400_12b_v1

This model was merged using mergekit: we first merged one group of models into an intermediate (Merge 1), then a second group into another intermediate (Merge 2), and finally merged the two intermediates together.

The intermediate merges are published as [kainatq/KaidenRp2400_12b_v1_m1](https://huggingface.co/kainatq/KaidenRp2400_12b_v1_m1) and [kainatq/KaidenRp2400_12b_v1_m2](https://huggingface.co/kainatq/KaidenRp2400_12b_v1_m2).

## Usage

Follow the usage instructions for mistral-nemo-base-2407 and prompt with the ChatML format.
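
A minimal inference sketch with 🤗 Transformers is below. It assumes the merged tokenizer carries a ChatML chat template (the merges use `tokenizer_source: union`); if it does not, format the prompt with ChatML tags (`<|im_start|>` / `<|im_end|>`) manually. Sampling settings are illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kainatq/KaidenRp2400_12b_v1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# ChatML-style conversation, rendered via the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful roleplay assistant."},
    {"role": "user", "content": "Introduce yourself in character."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```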

## Merge Configuration

### Merge 1

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
tokenizer_source: union
parameters:
  density: 0.5
  weight: 1.0
models:
  - model: Gryphe/Pantheon-RP-1.5-12b-Nemo
    parameters:
      weight: 0.33
  - model: kainatq/RP-king-12b-II
    parameters:
      weight: 0.33
  - model: elinas/Chronos-Gold-12B-1.0
    parameters:
      weight: 0.34
```
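
All three steps use the `dare_ties` merge method. Conceptually, each fine-tune's delta from the base model is randomly pruned down to the configured `density` and rescaled (DARE), the pruned deltas are combined under a sign-consensus rule (TIES), and the weighted result is added back to the base. The sketch below is a toy illustration for a single tensor under those assumptions, not mergekit's actual implementation:

```python
import torch

def dare_ties_tensor(base: torch.Tensor, finetuned: list[torch.Tensor],
                     weights: list[float], density: float = 0.5) -> torch.Tensor:
    """Toy per-tensor DARE-TIES merge (illustrative simplification)."""
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = ft - base                        # task vector vs. the base model
        keep = torch.rand_like(delta) < density  # DARE: keep ~`density` of entries
        delta = delta * keep / density           # rescale the survivors
        deltas.append(w * delta)
    stacked = torch.stack(deltas)
    # TIES-style sign consensus (simplified: real TIES elects signs by magnitude).
    elected_sign = torch.sign(stacked.sum(dim=0))
    agrees = torch.sign(stacked) == elected_sign
    merged_delta = (stacked * agrees).sum(dim=0)
    return base + merged_delta
```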

### Merge 2

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
tokenizer_source: union
parameters:
  density: 0.5
  weight: 1.0
models:
  - model: mergekit-community/MN-Sappho-g2-12B
    parameters:
      weight: 0.33
  - model: nbeerbower/Nemoties-ChatML-12B
    parameters:
      weight: 0.33
  - model: pbevan11/Mistral-Nemo-Baseline-SFT
    parameters:
      weight: 0.34
```

### Final merge

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-Nemo-Base-2407
tokenizer_source: union
parameters:
  density: 0.5
  weight: 1.0
models:
  - model: kainatq/KaidenRp2400_12b_v1_m1
    parameters:
      weight: 0.40
  - model: kainatq/KaidenRp2400_12b_v1_m2
    parameters:
      weight: 0.40
  - model: mergekit-community/MN-Sappho-j-12B
    parameters:
      weight: 0.20
```
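
To reproduce the pipeline, save each config above to its own YAML file and run mergekit on them in order (the `mergekit-yaml <config> <out_dir>` CLI is the usual route). Below is a sketch assuming mergekit's Python API; the file and output names are illustrative, and the final config's `m1`/`m2` references would need to point at your local output directories (or the published intermediates):

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Run the three merges in order: Merge 1, Merge 2, then the final combination.
steps = [
    ("merge1.yaml", "./KaidenRp2400_12b_v1_m1"),  # illustrative file names
    ("merge2.yaml", "./KaidenRp2400_12b_v1_m2"),
    ("final.yaml", "./KaidenRp2400_12b_v1"),
]
for config_path, out_path in steps:
    with open(config_path, encoding="utf-8") as fp:
        config = MergeConfiguration.model_validate(yaml.safe_load(fp))
    run_merge(
        config,
        out_path=out_path,
        options=MergeOptions(cuda=torch.cuda.is_available(), copy_tokenizer=True),
    )
```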