MultiAgentConvNet

class torchrl.modules.MultiAgentConvNet(n_agents: int, centralized: bool | None = None, share_params: bool | None = None, *, in_features: int | None = None, device: DEVICE_TYPING | None = None, num_cells: Sequence[int] | None = None, kernel_sizes: Sequence[int | Sequence[int]] | int = 5, strides: Sequence | int = 2, paddings: Sequence | int = 0, activation_class: type[nn.Module] = <class 'torch.nn.modules.activation.ELU'>, use_td_params: bool = True, **kwargs)[source]

Multi-agent CNN.

In MARL settings, agents may or may not share the same policy for their actions: we say that the parameters can be shared or not. Similarly, a network may take the entire observation space (across agents) or on a per-agent basis to compute its output, which we refer to as "centralized" and "non-centralized", respectively.

It expects inputs of shape (*B, n_agents, channels, x, y).

Note

To initialize MARL module parameters with the torch.nn.init module, please refer to the get_stateful_net() and from_stateful_net() methods.
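
For instance, a minimal sketch of custom weight initialization through these methods (assuming get_stateful_net() returns a stateful copy of the network that can be modified in place and written back via from_stateful_net()):

>>> import torch
>>> from torchrl.modules import MultiAgentConvNet
>>> cnn = MultiAgentConvNet(7, centralized=False, share_params=True, in_features=3)
>>> # Pull out a stateful version of the network, re-initialize its
>>> # convolutional weights, then write the parameters back.
>>> stateful_net = cnn.get_stateful_net()
>>> for module in stateful_net.modules():
...     if isinstance(module, torch.nn.Conv2d):
...         torch.nn.init.orthogonal_(module.weight)
>>> cnn.from_stateful_net(stateful_net)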

Parameters:
  • n_agents (int) – number of agents.

  • centralized (bool) – If True, each agent will use the inputs of all agents to compute its output, resulting in an input of shape (*B, n_agents * channels, x, y). Otherwise, each agent will only use its own data as input.

  • share_params (bool) – If True, the same ConvNet will be used to make the forward pass for all agents (homogeneous policies). Otherwise, each agent will use a different ConvNet to process its input (heterogeneous policies).

Keyword Arguments:
  • in_features (int, optional) – the input feature dimension. If left to None, a lazy module is used.

  • device (str or torch.device, optional) – device to create the module on.

  • num_cells (int or Sequence[int], optional) – number of cells of every layer in between the input and output. If an integer is provided, every layer will have the same number of cells. If an iterable is provided, the out_features of each layer will match the content of num_cells.

  • kernel_sizes (int or Sequence[Union[int, Sequence[int]]]) – Kernel size(s) of the convolutional network. Defaults to 5.

  • strides (int or Sequence[int]) – Stride(s) of the convolutional network. If iterable, the length must match the depth defined by the num_cells or depth arguments. Defaults to 2.

  • activation_class (Type[nn.Module]) – activation class to be used. Defaults to torch.nn.ELU.

  • use_td_params (bool, optional) – if True, the parameters can be found in self.params, which is a TensorDictParams object (inheriting both from TensorDict and nn.Module). If False, the parameters are contained in self._empty_net. All things considered, these two approaches should be roughly identical but not interchangeable: for instance, a state_dict created with use_td_params=True cannot be used when use_td_params=False.

  • **kwargs – arguments of ConvNet can be passed to customize it (see the sketch after this list).
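
Extra keyword arguments are forwarded to each agent's ConvNet, and with use_td_params=True the module parameters are gathered under self.params. A minimal sketch (activation_kwargs is assumed here to be a valid ConvNet constructor argument; it is not part of this class's own signature):

>>> import torch
>>> from torchrl.modules import MultiAgentConvNet
>>> cnn = MultiAgentConvNet(
...     7,
...     centralized=False,
...     share_params=False,
...     in_features=3,
...     # forwarded to ConvNet; assumed here to configure the ELU activation
...     activation_kwargs={"alpha": 0.5},
... )
>>> params = cnn.params  # TensorDictParams; inherits from TensorDict and nn.Module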

Examples

>>> import torch
>>> from torchrl.modules import MultiAgentConvNet
>>> batch = (3,2)
>>> n_agents = 7
>>> channels, x, y = 3, 100, 100
>>> obs = torch.randn(*batch, n_agents, channels, x, y)
>>> # Let's consider a centralized network with shared parameters.
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized = True,
...     share_params = True
... )
>>> print(cnn)
MultiAgentConvNet(
    (agent_networks): ModuleList(
        (0): ConvNet(
        (0): LazyConv2d(0, 32, kernel_size=(5, 5), stride=(2, 2))
        (1): ELU(alpha=1.0)
        (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (3): ELU(alpha=1.0)
        (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (5): ELU(alpha=1.0)
        (6): SquashDims()
        )
    )
)
>>> result = cnn(obs)
>>> # The final dimension of the resulting tensor is determined by the layer definition arguments and the shape of the input 'obs'.
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> # Since both observations and parameters are shared, we expect all agents to have identical outputs (e.g. for a value function)
>>> print(all(result[0,0,0] == result[0,0,1]))
True
>>> # Alternatively, a local network with parameter sharing (e.g. a decentralized weight-sharing policy)
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized = False,
...     share_params = True
... )
>>> print(cnn)
MultiAgentConvNet(
    (agent_networks): ModuleList(
        (0): ConvNet(
        (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2))
        (1): ELU(alpha=1.0)
        (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (3): ELU(alpha=1.0)
        (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (5): ELU(alpha=1.0)
        (6): SquashDims()
        )
    )
)
>>> result = cnn(obs)
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> # Parameters are shared but not observations, hence each agent has a different output.
>>> print(all(result[0,0,0] == result[0,0,1]))
False
>>> # Or multiple local networks identical in structure but with differing weights.
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized = False,
...     share_params = False
... )
>>> print(cnn)
MultiAgentConvNet(
    (agent_networks): ModuleList(
        (0-6): 7 x ConvNet(
        (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2))
        (1): ELU(alpha=1.0)
        (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (3): ELU(alpha=1.0)
        (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (5): ELU(alpha=1.0)
        (6): SquashDims()
        )
    )
)
>>> result = cnn(obs)
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> print(all(result[0,0,0] == result[0,0,1]))
False
>>> # Or where inputs are shared but not parameters.
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized = True,
...     share_params = False
... )
>>> print(cnn)
MultiAgentConvNet(
    (agent_networks): ModuleList(
        (0-6): 7 x ConvNet(
        (0): Conv2d(21, 32, kernel_size=(5, 5), stride=(2, 2))
        (1): ELU(alpha=1.0)
        (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (3): ELU(alpha=1.0)
        (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
        (5): ELU(alpha=1.0)
        (6): SquashDims()
        )
    )
)
>>> result = cnn(obs)
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> print(all(result[0,0,0] == result[0,0,1]))
False
