LogScalar¶
- class torchrl.trainers.LogScalar(key: NestedKey = ('next', 'reward'), logname: str | None = None, log_pbar: bool = False, include_std: bool = True, reduction: str = 'mean')[source]¶
Generic scalar logger hook for logging any tensor value in a batch.
This hook can log any scalar value from the collected batch data, including rewards, action norms, done states, and any other metric. It automatically handles masking and computes the mean and standard deviation.
- Parameters:
key (NestedKey) – the key where to find the value in the input batch. Can be a string for simple keys or a tuple for nested keys. Defaults to torchrl.trainers.trainers.REWARD_KEY (= ("next", "reward")).
logname (str, optional) – the name of the metric to be logged. If None, the key is used as the log name. Defaults to None.
log_pbar (bool, optional) – if True, the value will be logged on the progress bar. Defaults to False.
include_std (bool, optional) – if True, the standard deviation of the value is also logged. Defaults to True.
reduction (str, optional) – the reduction to apply. Can be "mean", "sum", "min", or "max". Defaults to "mean".
Examples
>>> # Log training rewards
>>> log_reward = LogScalar(("next", "reward"), "r_training", log_pbar=True)
>>> trainer.register_op("pre_steps_log", log_reward)
>>> # Log action norms
>>> log_action_norm = LogScalar("action", "action_norm", include_std=True)
>>> trainer.register_op("pre_steps_log", log_action_norm)
>>> # Log done states (as percentage)
>>> log_done = LogScalar(("next", "done"), "done_percentage", reduction="mean")
>>> trainer.register_op("pre_steps_log", log_done)
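The masked reduction described above can be sketched in plain Python. This is a minimal illustration of the reduction semantics (mean/sum/min/max plus an optional standard deviation over unmasked entries), not torchrl's actual implementation; the `masked_scalar_log` helper is hypothetical:

```python
import statistics

def masked_scalar_log(values, mask=None, reduction="mean", include_std=True):
    """Reduce a list of scalar values, optionally filtered by a boolean mask.

    Mirrors the reductions LogScalar accepts: "mean", "sum", "min", "max".
    """
    if mask is not None:
        # Keep only the entries whose mask flag is True.
        values = [v for v, m in zip(values, mask) if m]
    reducers = {"mean": statistics.mean, "sum": sum, "min": min, "max": max}
    if reduction not in reducers:
        raise ValueError(f"Unsupported reduction: {reduction!r}")
    out = {"value": reducers[reduction](values)}
    if include_std and len(values) > 1:
        # Sample standard deviation of the unmasked values.
        out["std"] = statistics.stdev(values)
    return out

# Rewards 1.0 and 3.0 are valid; 100.0 is masked out before reducing.
print(masked_scalar_log([1.0, 3.0, 100.0], mask=[True, True, False]))
```

The masked-out entry never reaches the reducer, which is why padding values in a batch do not skew the logged statistics.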
- register(trainer: Trainer, name: Optional[str] = None)[source]¶
Registers the hook in the trainer at a default location.
- Parameters:
trainer (Trainer) – the trainer where the hook must be registered.
name (str) – the name of the hook.
Note
To register the hook at another location than the default, use register_op().
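The distinction between default and explicit registration might look like the following sketch. `MiniTrainer` and `MiniLogScalar` are stand-ins so the snippet runs without torchrl; the real classes are Trainer and LogScalar, and the hook-point names mirror the examples above:

```python
class MiniTrainer:
    """Minimal stand-in for torchrl's Trainer hook registry (illustration only)."""

    def __init__(self):
        self.ops = {}  # maps hook-point name -> list of registered hooks

    def register_op(self, dest, op):
        self.ops.setdefault(dest, []).append(op)


class MiniLogScalar:
    """Stand-in hook; the real LogScalar reads `key` from the collected batch."""

    def __init__(self, key, logname=None):
        self.key = key
        self.logname = logname or str(key)

    def register(self, trainer, name=None):
        # Default location: the "pre_steps_log" hook point, as in the examples above.
        trainer.register_op("pre_steps_log", self)


trainer = MiniTrainer()
# Default location via register():
MiniLogScalar(("next", "reward"), "r_training").register(trainer)
# Explicit location via register_op():
trainer.register_op("post_steps_log", MiniLogScalar("action", "action_norm"))
```

Calling `register()` is a convenience; `register_op()` gives full control over which hook point the logger is attached to.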