---
language: en
license: apache-2.0
library_name: pytorch
tags:
- deep-reinforcement-learning
- reinforcement-learning
- DI-engine
- Pendulum-v1
benchmark_name: OpenAI/Gym/ClassicControl
task_name: Pendulum-v1
pipeline_tag: reinforcement-learning
model-index:
- name: MuZero
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pendulum-v1
      type: Pendulum-v1
    metrics:
    - type: mean_reward
      value: -280.77 +/- 446.69
      name: mean_reward
---
# Play **Pendulum-v1** with **MuZero** Policy

## Model Description
<!-- Provide a longer summary of what this model is. -->

This implementation applies **MuZero** to the OpenAI/Gym/ClassicControl **Pendulum-v1** environment using [LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine).

**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details can be found in the paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).

## Model Usage
### Install the Dependencies
<details close>
<summary>(Click for Details)</summary>

```shell
# install huggingface_ding
git clone https://github.com/opendilab/huggingface_ding.git
pip3 install -e ./huggingface_ding/
# install environment dependencies if needed
pip3 install DI-engine[common_env,video]
pip3 install LightZero
```
</details>
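
After installation, a quick sanity check along these lines can confirm that both toolkits import correctly. This is a minimal sketch; it assumes the PyPI distribution names `DI-engine` and `LightZero` used in the commands above.

```python
# Minimal installation check: import both toolkits and print their installed versions.
from importlib.metadata import version

import ding   # provided by the DI-engine package
import lzero  # provided by the LightZero package

print("DI-engine:", version("DI-engine"))
print("LightZero:", version("LightZero"))
```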

### Git Clone from Hugging Face and Run the Model
<details close>
<summary>(Click for Details)</summary>

```shell
# run with the trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import MuZeroAgent
from ding.config import Config
from easydict import EasyDict
import torch

# Load the model from files git-cloned from Hugging Face
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
# Instantiate the agent
agent = MuZeroAgent(
    env_id="Pendulum-v1", exp_name="Pendulum-v1-MuZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent's performance
agent.deploy(enable_save_replay=True)
```
</details>
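
If you only want to evaluate the pretrained checkpoint, the extra training step can simply be skipped. The following trimmed-down sketch reuses the same files and API as `run.py` above.

```python
from lzero.agent import MuZeroAgent
from ding.config import Config
from easydict import EasyDict
import torch

# Load the pretrained weights and config exactly as in run.py above.
policy_state_dict = torch.load("pytorch_model.bin", map_location=torch.device("cpu"))
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict)
agent = MuZeroAgent(
    env_id="Pendulum-v1", exp_name="Pendulum-v1-MuZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Evaluate the pretrained policy and save a replay video.
agent.deploy(enable_save_replay=True)
```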

### Run the Model Using huggingface_ding
<details close>
<summary>(Click for Details)</summary>

```shell
# run with the trained model
python3 -u run.py
```
**run.py**
```python
from lzero.agent import MuZeroAgent
from huggingface_ding import pull_model_from_hub

# Pull the model from the Hugging Face Hub
policy_state_dict, cfg = pull_model_from_hub(repo_id="OpenDILabCommunity/Pendulum-v1-MuZero")
# Instantiate the agent
agent = MuZeroAgent(
    env_id="Pendulum-v1", exp_name="Pendulum-v1-MuZero", cfg=cfg.exp_config, policy_state_dict=policy_state_dict
)
# Continue training
agent.train(step=5000)
# Render the new agent's performance
agent.deploy(enable_save_replay=True)
```
</details>

## Model Training

### Train the Model and Push It to the Hugging Face Hub
<details close>
<summary>(Click for Details)</summary>

```shell
# train your own agent
python3 -u train.py
```
**train.py**
```python
from lzero.agent import MuZeroAgent
from huggingface_ding import push_model_to_hub

# Instantiate the agent
agent = MuZeroAgent(env_id="Pendulum-v1", exp_name="Pendulum-v1-MuZero")
# Train the agent
return_ = agent.train(step=int(500000))
# Push the model to the Hugging Face Hub
push_model_to_hub(
    agent=agent.best,
    env_name="OpenAI/Gym/ClassicControl",
    task_name="Pendulum-v1",
    algo_name="MuZero",
    github_repo_url="https://github.com/opendilab/LightZero",
    github_doc_model_url=None,
    github_doc_env_url=None,
    installation_guide='''
pip3 install DI-engine[common_env,video]
pip3 install LightZero
''',
    usage_file_by_git_clone="./muzero/pendulum_muzero_deploy.py",
    usage_file_by_huggingface_ding="./muzero/pendulum_muzero_download.py",
    train_file="./muzero/pendulum_muzero.py",
    repo_id="OpenDILabCommunity/Pendulum-v1-MuZero",
    platform_info="[LightZero](https://github.com/opendilab/LightZero) and [DI-engine](https://github.com/opendilab/di-engine)",
    model_description="**LightZero** is an efficient, easy-to-understand open-source toolkit that merges Monte Carlo Tree Search (MCTS) with Deep Reinforcement Learning (RL), simplifying their integration for developers and researchers. More details can be found in the paper [LightZero: A Unified Benchmark for Monte Carlo Tree Search in General Sequential Decision Scenarios](https://huggingface.co/papers/2310.08348).",
    create_repo=True
)
```
</details>

**Configuration**
<details close>
<summary>(Click for Details)</summary>

```python
exp_config = {
    'main_config': {
        'exp_name': 'Pendulum-v1-MuZero',
        'seed': 0,
        'env': {
            'env_id': 'Pendulum-v1',
            'continuous': False,
            'manually_discretization': True,
            'each_dim_disc_size': 11,
            'collector_env_num': 8,
            'evaluator_env_num': 3,
            'n_evaluator_episode': 3,
            'manager': {
                'shared_memory': False
            }
        },
        'policy': {
            'on_policy': False,
            'cuda': True,
            'multi_gpu': False,
            'bp_update_sync': True,
            'traj_len_inf': False,
            'model': {
                'observation_shape': 3,
                'action_space_size': 11,
                'model_type': 'mlp',
                'lstm_hidden_size': 128,
                'latent_state_dim': 128,
                'self_supervised_learning_loss': True
            },
            'use_rnd_model': False,
            'sampled_algo': False,
            'gumbel_algo': False,
            'mcts_ctree': True,
            'collector_env_num': 8,
            'evaluator_env_num': 3,
            'env_type': 'not_board_games',
            'action_type': 'fixed_action_space',
            'battle_mode': 'play_with_bot_mode',
            'monitor_extra_statistics': True,
            'game_segment_length': 50,
            'transform2string': False,
            'gray_scale': False,
            'use_augmentation': False,
            'augmentation': ['shift', 'intensity'],
            'ignore_done': False,
            'update_per_collect': 200,
            'model_update_ratio': 0.1,
            'batch_size': 256,
            'optim_type': 'Adam',
            'learning_rate': 0.003,
            'target_update_freq': 100,
            'target_update_freq_for_intrinsic_reward': 1000,
            'weight_decay': 0.0001,
            'momentum': 0.9,
            'grad_clip_value': 10,
            'n_episode': 8,
            'num_simulations': 50,
            'discount_factor': 0.997,
            'td_steps': 5,
            'num_unroll_steps': 5,
            'reward_loss_weight': 1,
            'value_loss_weight': 0.25,
            'policy_loss_weight': 1,
            'policy_entropy_loss_weight': 0,
            'ssl_loss_weight': 2,
            'lr_piecewise_constant_decay': False,
            'threshold_training_steps_for_final_lr': 50000,
            'manual_temperature_decay': False,
            'threshold_training_steps_for_final_temperature': 100000,
            'fixed_temperature_value': 0.25,
            'use_ture_chance_label_in_chance_encoder': False,
            'use_priority': True,
            'priority_prob_alpha': 0.6,
            'priority_prob_beta': 0.4,
            'root_dirichlet_alpha': 0.3,
            'root_noise_weight': 0.25,
            'random_collect_episode_num': 0,
            'eps': {
                'eps_greedy_exploration_in_collect': False,
                'type': 'linear',
                'start': 1.0,
                'end': 0.05,
                'decay': 100000
            },
            'cfg_type': 'MuZeroPolicyDict',
            'reanalyze_ratio': 0,
            'eval_freq': 2000,
            'replay_buffer_size': 1000000
        },
        'wandb_logger': {
            'gradient_logger': False,
            'video_logger': False,
            'plot_logger': False,
            'action_logger': False,
            'return_logger': False
        }
    },
    'create_config': {
        'env': {
            'type': 'pendulum_lightzero',
            'import_names': ['zoo.classic_control.pendulum.envs.pendulum_lightzero_env']
        },
        'env_manager': {
            'type': 'subprocess'
        },
        'policy': {
            'type': 'muzero',
            'import_names': ['lzero.policy.muzero']
        }
    }
}
```
</details>
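
To adjust a few of these hyperparameters before training, one option is to load the shipped configuration and override the relevant fields, as in the sketch below. The keys come from the dump above; the experiment name and the reduced values are only illustrative, and it is assumed that `MuZeroAgent` accepts a `cfg` without a pretrained state dict, mirroring the `run.py`/`train.py` usage shown earlier.

```python
from lzero.agent import MuZeroAgent
from ding.config import Config
from easydict import EasyDict

# Load the configuration shipped with this repository, then override a few fields.
cfg = EasyDict(Config.file_to_dict("policy_config.py").cfg_dict).exp_config
cfg.main_config.policy.num_simulations = 25  # fewer MCTS simulations per step
cfg.main_config.policy.batch_size = 128      # smaller batches for quick experiments
cfg.main_config.policy.cuda = False          # set to True if a GPU is available

agent = MuZeroAgent(env_id="Pendulum-v1", exp_name="Pendulum-v1-MuZero-custom", cfg=cfg)
agent.train(step=10000)
```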

**Training Procedure**
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
- **Weights & Biases (wandb):** [monitor link](<TODO>)

## Model Information
<!-- Provide the basic links for the model. -->
- **Github Repository:** [repo link](https://github.com/opendilab/LightZero)
- **Doc:** [Algorithm link](<TODO>)
- **Configuration:** [config link](https://huggingface.co/OpenDILabCommunity/Pendulum-v1-MuZero/blob/main/policy_config.py)
- **Demo:** [video](https://huggingface.co/OpenDILabCommunity/Pendulum-v1-MuZero/blob/main/replay.mp4)
<!-- Provide the size information for the model. -->
- **Parameters total size:** 13553.29 KB
- **Last Update Date:** 2023-12-21

## Environments
<!-- Address questions around what environment the model is intended to be trained and deployed at, including the necessary information needed to be provided for future users. -->
- **Benchmark:** OpenAI/Gym/ClassicControl
- **Task:** Pendulum-v1
- **Gym version:** 0.25.1
- **DI-engine version:** v0.5.0
- **PyTorch version:** 2.0.1+cu117
- **Doc:** [Environments link](<TODO>)
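
As a quick check that the local Gym installation exposes the task with the interface this configuration expects, the following minimal sketch (assuming the Gym 0.25.1 API listed above) prints the observation and action spaces:

```python
import gym

# Pendulum-v1 has a 3-dimensional observation and a 1-dimensional continuous action;
# the configuration above discretizes the action into 11 bins
# (manually_discretization=True, each_dim_disc_size=11).
env = gym.make("Pendulum-v1")
print(env.observation_space)  # Box with shape (3,) -> matches observation_shape: 3
print(env.action_space)       # Box with shape (1,) -> discretized to action_space_size: 11
env.close()
```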