torchrl.trainers.algorithms.configs.collectors.AsyncDataCollectorConfig

class torchrl.trainers.algorithms.configs.collectors.AsyncDataCollectorConfig(create_env_fn: ConfigBase = <factory>, policy: Any = None, policy_factory: Any = None, frames_per_batch: int | None = None, init_random_frames: int | None = 0, total_frames: int = -1, device: str | None = None, storing_device: str | None = None, policy_device: str | None = None, env_device: str | None = None, create_env_kwargs: dict | None = None, max_frames_per_traj: int | None = None, reset_at_each_iter: bool = False, postproc: ConfigBase | None = None, split_trajs: bool = False, exploration_type: str = 'RANDOM', set_truncated: bool = False, use_buffers: bool = False, replay_buffer: ConfigBase | None = None, extend_buffer: bool = False, trust_policy: bool = True, compile_policy: Any = None, cudagraph_policy: Any = None, no_cuda_sync: bool = False, weight_updater: Any = None, _target_: str = 'torchrl.collectors.aSyncDataCollector')[source]

Configuration class for the asynchronous data collector.
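The config is a dataclass whose _target_ points at the collector class, so it can be populated with keyword arguments from the signature above and resolved with Hydra's instantiate. The following is a minimal sketch under stated assumptions: the GymEnvConfig import path, its env_name parameter, and the concrete values are illustrative placeholders, not taken from this page.

from hydra.utils import instantiate

from torchrl.trainers.algorithms.configs.collectors import AsyncDataCollectorConfig
from torchrl.trainers.algorithms.configs.envs import GymEnvConfig  # assumed env config class/path

# Environment config passed as create_env_fn; instantiate() resolves nested configs recursively.
env_cfg = GymEnvConfig(env_name="CartPole-v1")  # hypothetical parameters

collector_cfg = AsyncDataCollectorConfig(
    create_env_fn=env_cfg,
    frames_per_batch=200,       # frames returned per collection batch
    total_frames=10_000,        # stop after this many frames (-1 means run indefinitely)
    init_random_frames=0,       # no warm-up phase with random actions
    exploration_type="RANDOM",  # exploration strategy used during collection
    device="cpu",
)

# instantiate() builds the object named by _target_ (torchrl.collectors.aSyncDataCollector),
# returning a collector ready to be iterated over.
collector = instantiate(collector_cfg)

In practice a policy (or policy_factory) config would normally be supplied as well; it is omitted here to keep the sketch short.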
