TD1Estimator¶
- class torchrl.objectives.value.TD1Estimator(*args, **kwargs)[source]¶
\(\infty\)-Temporal Difference (TD(1)) estimate of the advantage function.
- Keyword Arguments:
  - gamma (scalar) – exponential mean discount.
  - value_network (TensorDictModule) – value operator used to retrieve the value estimates.
  - average_rewards (bool, optional) – if True, rewards will be standardized before the TD is computed.
  - differentiable (bool, optional) – if True, gradients are propagated through the computation of the value function. Default is False.

    Note: the proper way to make the function call non-differentiable is to decorate it in a torch.no_grad() context manager/decorator or to pass detached parameters for functional modules.

  - skip_existing (bool, optional) – if True, the value network will skip modules whose outputs are already present in the tensordict. Defaults to None, i.e. the value of tensordict.nn.skip_existing() is not affected.
  - advantage_key (str or tuple of str, optional) – [Deprecated] the key of the advantage entry. Defaults to "advantage".
  - value_target_key (str or tuple of str, optional) – [Deprecated] the key of the value target entry. Defaults to "value_target".
  - value_key (str or tuple of str, optional) – [Deprecated] the value key to read from the input tensordict. Defaults to "state_value".
  - shifted (bool, optional) – if True, the value and next value are estimated with a single call to the value network. This is faster but is valid only if (1) the "next" value is shifted by a single time step (which is not the case with multi-step value estimation, for instance) and (2) the parameters used at time t and t+1 are identical (which is not the case when target parameters are to be used). Defaults to False.
  - device (torch.device, optional) – the device where the buffers will be instantiated. Defaults to torch.get_default_device().
  - time_dim (int, optional) – the dimension of the input tensordict that corresponds to time. If not provided, defaults to the dimension marked with the "time" name if any, and to the last dimension otherwise. Can be overridden during a call to value_estimate(). Negative dimensions are considered with respect to the input tensordict.
  - deactivate_vmap (bool, optional) – whether to deactivate vmap calls and replace them with plain for loops. Defaults to False.
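To make the estimated quantity concrete, here is a minimal pure-Python sketch of the TD(1) recursion: the value target at each step is the discounted sum of the subsequent rewards, bootstrapped with the value of the state following the last step, and the advantage is that target minus the current state value. The function name and the scalar backward loop are illustrative only; they are not the library's (vectorized) implementation:

```python
def td1_targets(rewards, dones, next_value, gamma):
    """Compute TD(1) value targets for a single trajectory.

    rewards, dones: per-step lists; next_value: bootstrap value of the
    state that follows the last step; gamma: discount factor.
    """
    targets = [0.0] * len(rewards)
    # Walk the trajectory backwards, bootstrapping from next_value.
    # A done flag truncates the discounted sum at that step.
    running = next_value
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * (0.0 if dones[t] else running)
        targets[t] = running
    return targets

# Three steps, no termination, bootstrap value 2.0:
targets = td1_targets([1.0, 1.0, 1.0], [False, False, False], 2.0, 0.9)
# targets[2] = 1 + 0.9 * 2.0 = 2.8; targets[1] = 1 + 0.9 * 2.8 = 3.52; ...
# The advantage subtracts the current state values (here 0.5 per step):
advantages = [g - v for g, v in zip(targets, [0.5, 0.5, 0.5])]
```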
- forward(tensordict=None, *, params: TensorDictBase | None = None, target_params: TensorDictBase | None = None)[source]¶
Computes the TD(1) advantage given the data in tensordict.
If a functional module is provided, a nested TensorDict containing the parameters (and if relevant the target parameters) can be passed to the module.
- Parameters:
  tensordict (TensorDictBase) – A TensorDict containing the data (an observation key, "action", ("next", "reward"), ("next", "done"), ("next", "terminated") and the "next" tensordict state as returned by the environment) necessary to compute the value estimates and the TDEstimate. The data passed to this module should be structured as [*B, T, *F], where B are the batch size(s), T the time dimension and F the feature dimension(s). The tensordict must have shape [*B, T].
- Keyword Arguments:
params (TensorDictBase, optional) – A nested TensorDict containing the params to be passed to the functional value network module.
target_params (TensorDictBase, optional) – A nested TensorDict containing the target params to be passed to the functional value network module.
- Returns:
  An updated TensorDict with the advantage and value_error keys as defined in the constructor.
Examples
>>> import torch
>>> from torch import nn
>>> from tensordict import TensorDict
>>> from tensordict.nn import TensorDictModule
>>> from torchrl.objectives.value import TD1Estimator
>>> value_net = TensorDictModule(
...     nn.Linear(3, 1), in_keys=["obs"], out_keys=["state_value"]
... )
>>> module = TD1Estimator(
...     gamma=0.98,
...     value_network=value_net,
... )
>>> obs, next_obs = torch.randn(2, 1, 10, 3)
>>> reward = torch.randn(1, 10, 1)
>>> done = torch.zeros(1, 10, 1, dtype=torch.bool)
>>> terminated = torch.zeros(1, 10, 1, dtype=torch.bool)
>>> tensordict = TensorDict({"obs": obs, "next": {"obs": next_obs, "done": done, "reward": reward, "terminated": terminated}}, [1, 10])
>>> _ = module(tensordict)
>>> assert "advantage" in tensordict.keys()
The module supports non-tensordict (i.e. unpacked tensordict) inputs too
Examples
>>> value_net = TensorDictModule(
...     nn.Linear(3, 1), in_keys=["obs"], out_keys=["state_value"]
... )
>>> module = TD1Estimator(
...     gamma=0.98,
...     value_network=value_net,
... )
>>> obs, next_obs = torch.randn(2, 1, 10, 3)
>>> reward = torch.randn(1, 10, 1)
>>> done = torch.zeros(1, 10, 1, dtype=torch.bool)
>>> terminated = torch.zeros(1, 10, 1, dtype=torch.bool)
>>> advantage, value_target = module(obs=obs, next_reward=reward, next_done=done, next_obs=next_obs, next_terminated=terminated)
- value_estimate(tensordict, target_params: TensorDictBase | None = None, next_value: torch.Tensor | None = None, time_dim: int | None = None, **kwargs)[source]¶
Gets a value estimate, usually used as a target value for the value network.
If the state value key is present under tensordict.get(("next", self.tensor_keys.value)), this value will be used without recurring to the value network.

- Parameters:
tensordict (TensorDictBase) – the tensordict containing the data to read.
target_params (TensorDictBase, optional) – A nested TensorDict containing the target params to be passed to the functional value network module.
next_value (torch.Tensor, optional) – the value of the next state or state-action pair. Exclusive with target_params.
**kwargs – the keyword arguments to be passed to the value network.
- Returns:
  a tensor corresponding to the state value.
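The shortcut described above (reusing a value already stored in the input instead of calling the value network) can be sketched in plain Python. The function name, the dictionary layout, and the one-step bootstrap used here (instead of the full TD(1) recursion, for brevity) are illustrative assumptions, not the library's API:

```python
def value_estimate_sketch(data, compute_value, gamma, next_value=None):
    """Return bootstrapped value targets, reusing stored next values if any.

    data: {"reward": [...], "done": [...], "next_state_value": list or None}
    compute_value: fallback callable producing per-step next-state values.
    next_value: optional precomputed next values, exclusive with the fallback.
    """
    if next_value is None:
        stored = data.get("next_state_value")
        # Reuse the values already present in the data if any; otherwise
        # fall back to a call to the value function.
        next_value = stored if stored is not None else compute_value(data)
    # One-step bootstrap: target_t = r_t + gamma * (1 - done_t) * V(s_{t+1})
    return [
        r + gamma * (0.0 if d else v)
        for r, d, v in zip(data["reward"], data["done"], next_value)
    ]
```

Passing an explicit `next_value` (or having the values already stored) means the value function is never invoked, which is the behavior the paragraph above describes.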