PPO-aligned TinyLlama-1.1B, trained with the WoN DeBERTa reward model on openbmb/UltraFeedback.

payelb/UltraFeedback_openbmb_TinyLlama-1.1B_aligned_with_WoN_deberta_RM

Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0

Alignment dataset: openbmb/UltraFeedback

Reward model: payelb/UltraFeedback_openbmb_reward-model-deberta-v3-base_1k_fixed_WoN

Method: PPO alignment with LoRA adapters.
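For reference, PPO optimizes a clipped surrogate objective. A minimal per-sample sketch in plain Python (illustrative only; the actual training used a full PPO trainer with LoRA adapters, and `clip_eps=0.2` here is just the common default, not necessarily this run's value):

```python
def ppo_clip_loss(ratio: float, advantage: float, clip_eps: float = 0.2) -> float:
    """Per-sample PPO clipped surrogate loss.

    ratio     -- pi_new(a|s) / pi_old(a|s), the policy probability ratio
    advantage -- estimated advantage for the sampled action
    """
    unclipped = ratio * advantage
    # Clamp the ratio so a single update cannot move the policy too far
    clipped = max(1.0 - clip_eps, min(1.0 + clip_eps, ratio)) * advantage
    # PPO maximizes min(unclipped, clipped); the loss is its negation
    return -min(unclipped, clipped)
```

With `ratio=1.5` and a positive advantage, the clipped term dominates, capping the incentive to push the policy further in that direction.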

Notes:

  • Reward normalization and clipping enabled
  • KL control enabled
  • pad_token_id/eos_token_id explicitly set
  • DeBERTa RM loaded on a single device (no device_map='auto')
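The first two notes can be sketched as follows: batch reward whitening with clipping, and an adaptive KL controller in the style used by common PPO trainers. This is a hedged illustration; the hyperparameter values (`clip=5.0`, `target=6.0`, `horizon=10000`) are generic defaults, not necessarily the ones used for this model:

```python
import statistics

def whiten_and_clip_rewards(rewards, clip=5.0, eps=1e-8):
    """Normalize a batch of rewards to zero mean / unit variance, then clip outliers."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [max(-clip, min(clip, (r - mean) / (std + eps))) for r in rewards]

class AdaptiveKLController:
    """Adapt the KL penalty coefficient toward a target KL between the
    policy and the reference model (Ziegler et al.-style control)."""

    def __init__(self, init_kl_coef=0.2, target=6.0, horizon=10000):
        self.kl_coef = init_kl_coef
        self.target = target
        self.horizon = horizon

    def update(self, current_kl, n_steps):
        # Proportional error, clipped to [-0.2, 0.2] for stability:
        # raise the penalty when KL overshoots the target, lower it otherwise
        error = max(-0.2, min(0.2, current_kl / self.target - 1.0))
        self.kl_coef *= 1.0 + error * n_steps / self.horizon
        return self.kl_coef
```

Whitening keeps reward scale stable across batches, while the KL controller keeps the PPO policy from drifting too far from the base TinyLlama model.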