MaskedOneHotCategorical¶
- class torchrl.modules.MaskedOneHotCategorical(logits: torch.Tensor | None = None, probs: torch.Tensor | None = None, mask: torch.Tensor = None, indices: torch.Tensor = None, neg_inf: float = -inf, padding_value: int | None = None, grad_method: ReparamGradientStrategy = ReparamGradientStrategy.PassThrough)[source]¶
MaskedOneHotCategorical distribution.
Reference: https://www.tensorflow.org/agents/api_docs/python/tf_agents/distributions/masked/MaskedCategorical
- Parameters:
logits (torch.Tensor) – event log probabilities (unnormalized).
probs (torch.Tensor) – event probabilities. If provided, the probabilities corresponding to masked items will be zeroed and the probabilities re-normalized along their last dimension.
- Keyword Arguments:
mask (torch.Tensor) – A boolean mask of the same shape as logits/probs where False entries are the ones to be masked. Alternatively, if sparse_mask is True, it represents the list of valid indices in the distribution. Exclusive with indices.
indices (torch.Tensor) – A dense index tensor representing which actions must be taken into account. Exclusive with mask.
neg_inf (float, optional) – The log-probability value assigned to invalid (out-of-mask) indices. Defaults to -inf.
padding_value (int, optional) – The padding value in the mask tensor. When sparse_mask == True, the padding_value will be ignored.
grad_method (ReparamGradientStrategy, optional) –
Strategy used to gather reparameterized samples.
ReparamGradientStrategy.PassThrough will compute the sample gradients by using the softmax-valued log-probability as a proxy for the sample gradients.
ReparamGradientStrategy.RelaxedOneHot will use torch.distributions.RelaxedOneHotCategorical to sample from the distribution.
Examples:
>>> torch.manual_seed(0)
>>> logits = torch.randn(4) / 100  # almost equal probabilities
>>> mask = torch.tensor([True, False, True, True])
>>> dist = MaskedOneHotCategorical(logits=logits, mask=mask)
>>> sample = dist.sample((10,))
>>> print(sample)  # no `1` in the sample
tensor([[0, 0, 1, 0],
        [0, 0, 0, 1],
        [1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0],
        [0, 0, 1, 0],
        [0, 0, 1, 0]])
>>> print(dist.log_prob(sample))
tensor([-1.1203, -1.0928, -1.0831, -1.1203, -1.1203, -1.0831, -1.1203,
        -1.0831, -1.1203, -1.1203])
>>> sample_non_valid = torch.zeros_like(sample)
>>> sample_non_valid[..., 1] = 1
>>> print(dist.log_prob(sample_non_valid))
tensor([-inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf, -inf])
>>> # with probabilities
>>> prob = torch.ones(10)
>>> prob = prob / prob.sum()
>>> mask = torch.tensor([False] + 9 * [True])  # first outcome is masked
>>> dist = MaskedOneHotCategorical(probs=prob, mask=mask)
>>> s = torch.arange(10)
>>> s = torch.nn.functional.one_hot(s, 10)
>>> print(dist.log_prob(s))
tensor([   -inf, -2.1972, -2.1972, -2.1972, -2.1972, -2.1972, -2.1972,
        -2.1972, -2.1972, -2.1972])
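The masking semantics above can be sketched with plain torch.distributions. This is a minimal illustration of the idea, not TorchRL's actual implementation: invalid logits are set to neg_inf (here, float -inf) so their probability becomes zero after the softmax, and the masked outcome can never be drawn.

```python
import torch

# Illustrative sketch (an assumption, not TorchRL's source): emulate the
# boolean `mask` by writing -inf into the logits of invalid outcomes.
logits = torch.randn(4) / 100
mask = torch.tensor([True, False, True, True])  # index 1 is invalid

masked_logits = torch.where(
    mask, logits, torch.full_like(logits, float("-inf"))
)
dist = torch.distributions.OneHotCategorical(logits=masked_logits)

sample = dist.sample((10,))
# The masked index (1) has zero probability, so it is never sampled.
assert sample[:, 1].sum().item() == 0
```

A sparse `indices` tensor such as `torch.tensor([0, 2, 3])` describes the same set of valid outcomes as the boolean mask above; the two arguments are exclusive ways of expressing one constraint.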
- rsample(sample_shape: torch.Size | Sequence = None) → torch.Tensor[source]¶
Generates a sample_shape shaped reparameterized sample, or a sample_shape shaped batch of reparameterized samples if the distribution parameters are batched.
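The PassThrough strategy mentioned above can be sketched as a straight-through estimator with plain PyTorch. This is a hedged illustration of the idea (not TorchRL's source code): the forward value is the hard one-hot sample, while gradients flow through the softmax probabilities.

```python
import torch

# Straight-through sketch of ReparamGradientStrategy.PassThrough
# (assumption: illustrative only, not the library's implementation).
logits = torch.randn(4, requires_grad=True)
probs = torch.softmax(logits, dim=-1)

# Draw a hard one-hot sample (no gradient through the draw itself).
hard = torch.distributions.OneHotCategorical(probs=probs.detach()).sample()

# Forward value equals `hard`; backward uses d(probs)/d(logits) as a proxy.
rsample = hard + probs - probs.detach()

loss = (rsample * torch.arange(4.0)).sum()
loss.backward()
assert logits.grad is not None  # gradients reached the logits
```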
- sample(sample_shape: torch.Size | Sequence[int] | None = None) → torch.Tensor[source]¶
Generates a sample_shape shaped sample, or a sample_shape shaped batch of samples if the distribution parameters are batched.
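The sample_shape semantics can be illustrated with a plain torch.distributions one-hot categorical (a sketch of the general torch.distributions shape convention, assumed to carry over here): the returned tensor has shape sample_shape + batch_shape + event_shape.

```python
import torch

# Batched parameters: batch_shape (2, 3), event size 4.
logits = torch.randn(2, 3, 4)
dist = torch.distributions.OneHotCategorical(logits=logits)

# sample_shape (5,) -> result shape (5, 2, 3, 4).
s = dist.sample((5,))
assert s.shape == (5, 2, 3, 4)
```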