Conv3dNet
- class torchrl.modules.Conv3dNet(in_features: int | None = None, depth: int | None = None, num_cells: Sequence[int] | int = None, kernel_sizes: Sequence[int] | int = 3, strides: Sequence[int] | int = 1, paddings: Sequence[int] | int = 0, activation_class: type[nn.Module] | Callable = <class 'torch.nn.modules.activation.ELU'>, activation_kwargs: dict | list[dict] | None = None, norm_class: type[nn.Module] | Callable | None = None, norm_kwargs: dict | list[dict] | None = None, bias_last_layer: bool = True, aggregator_class: type[nn.Module] | Callable | None = <class 'torchrl.modules.models.utils.SquashDims'>, aggregator_kwargs: dict | None = None, squeeze_output: bool = False, device: DEVICE_TYPING | None = None)[source]
A 3D convolutional neural network.
- Parameters:
  - in_features (int, optional) – number of input features. If none is provided, a lazy implementation that automatically retrieves the input size is used.
  - depth (int, optional) – depth of the network. A depth of 1 will produce a network with a single convolutional layer with the desired input size, and an output size equal to the last element of the num_cells argument. If no depth is specified, the depth information should be contained in the num_cells argument (see below). If num_cells is an iterable and depth is specified, both should match: len(num_cells) must be equal to depth.
  - num_cells (int or sequence of int, optional) – number of cells of every layer in between the input and output. If an integer is provided, every layer will have the same number of cells, and the depth will be retrieved from the depth argument. If an iterable is provided, the out_channels of the convolutional layers will match the content of num_cells. Defaults to [32, 32, 32], or to [32] * depth if depth is not None.
  - kernel_sizes (int or sequence of int, optional) – kernel size(s) of the convolutional network. If an iterable is provided, its length must match the depth, defined by the num_cells or depth arguments. Defaults to 3.
  - strides (int or sequence of int) – stride(s) of the convolutional network. If an iterable is provided, its length must match the depth, defined by the num_cells or depth arguments. Defaults to 1.
  - paddings (int or sequence of int) – padding(s) of the convolutional network. If an iterable is provided, its length must match the depth, defined by the num_cells or depth arguments. Defaults to 0.
  - activation_class (Type[nn.Module] or callable) – activation class or constructor to be used. Defaults to ELU.
  - activation_kwargs (dict or list of dicts, optional) – kwargs to be used with the activation class. A list of kwargs of length depth, with one element per layer, can also be provided.
  - norm_class (Type or callable, optional) – normalization class, if any.
  - norm_kwargs (dict or list of dicts, optional) – kwargs to be used with the normalization layers. A list of kwargs of length depth, with one element per layer, can also be provided.
  - bias_last_layer (bool) – if True, the last convolutional layer will have a bias parameter. Defaults to True.
  - aggregator_class (Type[nn.Module] or callable) – aggregator class or constructor to use at the end of the chain. Defaults to SquashDims.
  - aggregator_kwargs (dict, optional) – kwargs for the aggregator_class constructor.
  - squeeze_output (bool) – whether the output should be squeezed of its singleton dimensions. Defaults to False.
  - device (torch.device, optional) – device to create the module on.
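With the default kernel_sizes=3, strides=1, and paddings=0, each convolutional layer shrinks every spatial dimension by 2. The following is a minimal dependency-free sketch (not part of torchrl) of the standard Conv3d output-size formula, which you can use to check what spatial size a given depth leaves for the aggregator; the input size of 16 is an assumption for illustration:

```python
# Sketch (not part of torchrl): per-layer spatial size of a Conv3d stack,
# using the standard formula out = (in + 2*padding - kernel) // stride + 1.
def conv3d_out_size(size, kernel=3, stride=1, padding=0):
    """Output size along one spatial dimension for a single Conv3d layer."""
    return (size + 2 * padding - kernel) // stride + 1

def stack_out_size(size, depth, kernel=3, stride=1, padding=0):
    """Spatial size along one dimension after `depth` identical Conv3d layers."""
    for _ in range(depth):
        size = conv3d_out_size(size, kernel, stride, padding)
    return size

# Conv3dNet defaults (kernel 3, stride 1, padding 0) remove 2 per layer:
print(stack_out_size(16, depth=4))  # 16 -> 14 -> 12 -> 10 -> 8, prints 8
```

This also explains why depth must stay small relative to the input resolution: four default layers already require each input dimension to be at least 9.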
Examples
>>> # All of the following examples provide valid, working Conv3dNets
>>> cnet = Conv3dNet(in_features=3, depth=1, num_cells=[32,])
>>> print(cnet)
Conv3dNet(
  (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (1): ELU(alpha=1.0)
  (2): SquashDims()
)
>>> cnet = Conv3dNet(in_features=3, depth=4, num_cells=32)
>>> print(cnet)
Conv3dNet(
  (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (1): ELU(alpha=1.0)
  (2): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (3): ELU(alpha=1.0)
  (4): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (5): ELU(alpha=1.0)
  (6): Conv3d(32, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (7): ELU(alpha=1.0)
  (8): SquashDims()
)
>>> cnet = Conv3dNet(in_features=3, num_cells=[32, 33, 34, 35])  # defines the depth by the num_cells arg
>>> print(cnet)
Conv3dNet(
  (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (1): ELU(alpha=1.0)
  (2): Conv3d(32, 33, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (3): ELU(alpha=1.0)
  (4): Conv3d(33, 34, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (5): ELU(alpha=1.0)
  (6): Conv3d(34, 35, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (7): ELU(alpha=1.0)
  (8): SquashDims()
)
>>> cnet = Conv3dNet(in_features=3, num_cells=[32, 33, 34, 35], kernel_sizes=[3, 4, 5, (2, 3, 4)])  # defines kernels, possibly rectangular
>>> print(cnet)
Conv3dNet(
  (0): Conv3d(3, 32, kernel_size=(3, 3, 3), stride=(1, 1, 1))
  (1): ELU(alpha=1.0)
  (2): Conv3d(32, 33, kernel_size=(4, 4, 4), stride=(1, 1, 1))
  (3): ELU(alpha=1.0)
  (4): Conv3d(33, 34, kernel_size=(5, 5, 5), stride=(1, 1, 1))
  (5): ELU(alpha=1.0)
  (6): Conv3d(34, 35, kernel_size=(2, 3, 4), stride=(1, 1, 1))
  (7): ELU(alpha=1.0)
  (8): SquashDims()
)
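The default SquashDims aggregator at the end of each example flattens the conv output's trailing dimensions (channels and the three spatial dimensions) into a single feature dimension. As a rough sketch, assuming a hypothetical 16×16×16 input to the first (depth=1) network above, the resulting feature count can be computed by hand:

```python
# Sketch (input size is an assumption, not from the docs): feature count
# produced by the SquashDims aggregator for the depth=1 example network.
channels = 32          # num_cells[-1] in the depth=1 example
spatial = 16 - 2       # assumed 16^3 input; kernel 3, stride 1, padding 0
flat_features = channels * spatial ** 3
print(flat_features)   # 32 * 14 * 14 * 14 = 87808
```

A downstream linear layer consuming this network's output would therefore need in_features matching this flattened count (or use a lazy layer).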