torchrl.trainers.algorithms.configs.trainers.PPOTrainerConfig
- class torchrl.trainers.algorithms.configs.trainers.PPOTrainerConfig(collector: Any, total_frames: int, optim_steps_per_batch: int | None, loss_module: Any, optimizer: Any, logger: Any, save_trainer_file: Any, replay_buffer: Any, frame_skip: int = 1, clip_grad_norm: bool = True, clip_norm: float | None = None, progress_bar: bool = True, seed: int | None = None, save_trainer_interval: int = 10000, log_interval: int = 10000, create_env_fn: Any = None, actor_network: Any = None, critic_network: Any = None, num_epochs: int = 4, _target_: str = 'torchrl.trainers.algorithms.configs.trainers._make_ppo_trainer')[source]
Configuration class for the PPO (Proximal Policy Optimization) trainer.
This class defines the configuration parameters used to build a PPO trainer, including required fields and optional fields with sensible defaults.
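The `_target_` field points at a factory function (`_make_ppo_trainer`), which suggests a Hydra-style instantiation pattern: the config object carries a dotted import path, and instantiation resolves that path and calls the factory with the remaining fields as keyword arguments. The following is a minimal self-contained sketch of that pattern; `DemoTrainerConfig` and the `instantiate` helper below are hypothetical stand-ins, not the real torchrl or Hydra API.

```python
from dataclasses import dataclass
from importlib import import_module
from typing import Any

# Hypothetical stand-in illustrating the Hydra-style ``_target_`` convention:
# the config stores a dotted path to a factory; instantiation resolves the
# path and calls the factory with the remaining fields as keyword arguments.
@dataclass
class DemoTrainerConfig:
    total_frames: int
    num_epochs: int = 4
    # A real PPOTrainerConfig would point at
    # 'torchrl.trainers.algorithms.configs.trainers._make_ppo_trainer'.
    _target_: str = "builtins.dict"

def instantiate(cfg) -> Any:
    """Minimal sketch of Hydra-style instantiation (not the real Hydra API)."""
    kwargs = {k: v for k, v in vars(cfg).items() if k != "_target_"}
    module_path, _, attr = cfg._target_.rpartition(".")
    factory = getattr(import_module(module_path), attr)
    return factory(**kwargs)

cfg = DemoTrainerConfig(total_frames=1_000_000)
obj = instantiate(cfg)  # here resolves to builtins.dict and returns a dict
```

With the real config, the same mechanism would call `_make_ppo_trainer` and return a fully constructed PPO trainer instead of a plain dict.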