MultiAgentConvNet
- class torchrl.modules.MultiAgentConvNet(n_agents: int, centralized: bool | None = None, share_params: bool | None = None, *, in_features: int | None = None, device: DEVICE_TYPING | None = None, num_cells: Sequence[int] | None = None, kernel_sizes: Sequence[int | Sequence[int]] | int = 5, strides: Sequence | int = 2, paddings: Sequence | int = 0, activation_class: type[nn.Module] = <class 'torch.nn.modules.activation.ELU'>, use_td_params: bool = True, **kwargs)
Multi-agent CNN.

In MARL settings, agents may or may not share the same policy for their actions: we say that the parameters can be shared or not. Similarly, a network may take the entire observation space (across agents) or a per-agent basis for its computation, which we refer to as "centralized" and "non-centralized", respectively.
It expects inputs of shape (*B, n_agents, channels, x, y).

Note: to initialize MARL module parameters with the torch.nn.init module, please refer to the get_stateful_net() and from_stateful_net() methods.

- Parameters:
n_agents (int) – number of agents.
centralized (bool) – if True, each agent uses the inputs of all agents to compute its output, so the convolutional layers see n_agents * channels input channels. Otherwise, each agent only uses its own data as input.
share_params (bool) – if True, the same ConvNet is used for the forward pass of all agents (homogeneous policies). Otherwise, each agent uses a ConvNet with its own weights (heterogeneous policies).
- Keyword Arguments:
in_features (int, optional) – number of input features per agent. If left to None, a lazy module is used.
device (str or torch.device, optional) – device on which to create the module.
num_cells (int or Sequence[int], optional) – number of cells of every layer in between the input and output. If an integer is provided, every layer will have the same number of cells. If an iterable is provided, the layers' out_features will match the content of num_cells.
kernel_sizes (int, Sequence[Union[int, Sequence[int]]]) – kernel size(s) of the convolutional network. Defaults to 5.
strides (int or Sequence[int]) – stride(s) of the convolutional network. If iterable, the length must match the depth, defined by the num_cells or depth arguments. Defaults to 2.
paddings (int or Sequence[int]) – padding(s) of the convolutional layers. Defaults to 0.
activation_class (Type[nn.Module]) – activation class to be used. Defaults to torch.nn.ELU.
use_td_params (bool, optional) – if True, the parameters can be found in self.params, which is a TensorDictParams object (which inherits both from TensorDict and nn.Module). If False, parameters are contained in self._empty_net. All things considered, these two approaches should be roughly identical but not interchangeable: for instance, a state_dict created with use_td_params=True cannot be used when use_td_params=False.
**kwargs – additional arguments to pass to ConvNet to customize each ConvNet.
Examples:
>>> import torch
>>> from torchrl.modules import MultiAgentConvNet
>>> batch = (3, 2)
>>> n_agents = 7
>>> channels, x, y = 3, 100, 100
>>> obs = torch.randn(*batch, n_agents, channels, x, y)
>>> # Let's consider a centralized network with shared parameters.
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized=True,
...     share_params=True
... )
>>> print(cnn)
MultiAgentConvNet(
  (agent_networks): ModuleList(
    (0): ConvNet(
      (0): LazyConv2d(0, 32, kernel_size=(5, 5), stride=(2, 2))
      (1): ELU(alpha=1.0)
      (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (3): ELU(alpha=1.0)
      (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (5): ELU(alpha=1.0)
      (6): SquashDims()
    )
  )
)
>>> result = cnn(obs)
>>> # The final dimension of the resulting tensor is determined by the layer arguments and the shape of the input obs.
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> # Since both observations and parameters are shared, we expect all agents to have identical outputs (e.g. for a value function).
>>> print(all(result[0, 0, 0] == result[0, 0, 1]))
True
>>> # Alternatively, a local network with parameter sharing (e.g. a decentralized weight-sharing policy).
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized=False,
...     share_params=True
... )
>>> result = cnn(obs)
>>> print(cnn)
MultiAgentConvNet(
  (agent_networks): ModuleList(
    (0): ConvNet(
      (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2))
      (1): ELU(alpha=1.0)
      (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (3): ELU(alpha=1.0)
      (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (5): ELU(alpha=1.0)
      (6): SquashDims()
    )
  )
)
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> # Parameters are shared but not observations, hence each agent has a different output.
>>> print(all(result[0, 0, 0] == result[0, 0, 1]))
False
>>> # Or multiple local networks, identical in structure but with differing weights.
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized=False,
...     share_params=False
... )
>>> result = cnn(obs)
>>> print(cnn)
MultiAgentConvNet(
  (agent_networks): ModuleList(
    (0-6): 7 x ConvNet(
      (0): Conv2d(3, 32, kernel_size=(5, 5), stride=(2, 2))
      (1): ELU(alpha=1.0)
      (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (3): ELU(alpha=1.0)
      (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (5): ELU(alpha=1.0)
      (6): SquashDims()
    )
  )
)
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> print(all(result[0, 0, 0] == result[0, 0, 1]))
False
>>> # Or where inputs are shared but not parameters.
>>> cnn = MultiAgentConvNet(
...     n_agents,
...     centralized=True,
...     share_params=False
... )
>>> result = cnn(obs)
>>> print(cnn)
MultiAgentConvNet(
  (agent_networks): ModuleList(
    (0-6): 7 x ConvNet(
      (0): Conv2d(21, 32, kernel_size=(5, 5), stride=(2, 2))
      (1): ELU(alpha=1.0)
      (2): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (3): ELU(alpha=1.0)
      (4): Conv2d(32, 32, kernel_size=(5, 5), stride=(2, 2))
      (5): ELU(alpha=1.0)
      (6): SquashDims()
    )
  )
)
>>> print(result.shape)
torch.Size([3, 2, 7, 2592])
>>> print(all(result[0, 0, 0] == result[0, 0, 1]))
False