
ConvTranspose1d

class torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)[source]

Applies a 1D transposed convolution operator over an input image composed of several input planes. For details on input arguments, parameters, and implementation, see ConvTranspose1d.

Note

Currently only the QNNPACK engine is implemented. Please set torch.backends.quantized.engine = 'qnnpack'

For special notes, please refer to Conv1d.
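Since not every PyTorch build ships the QNNPACK engine, it can help to check the available engines before selecting one. A minimal sketch using the standard torch.backends.quantized.supported_engines attribute:

```python
# Sketch: select QNNPACK only when the running build actually ships it.
# torch.backends.quantized.supported_engines lists the engines this
# build was compiled with ('none' is always present).
import torch

available = torch.backends.quantized.supported_engines
if 'qnnpack' in available:
    torch.backends.quantized.engine = 'qnnpack'
print(available)
```

Assigning an unsupported name to torch.backends.quantized.engine raises a RuntimeError, so the membership check avoids a hard failure on builds without QNNPACK.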

Variables
  • weight (Tensor) – packed tensor derived from the learnable weight parameter.

  • scale (Tensor) – scalar for the output scale

  • zero_point (Tensor) – scalar for the output zero point

See ConvTranspose2d for other attributes.
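The scale and zero_point attributes listed above are plain Python attributes on the constructed module and can be read directly, while the packed weight is recovered with the module's weight() accessor. A minimal sketch (assumes a PyTorch build with QNNPACK support):

```python
# Sketch: inspecting the quantization attributes of a quantized
# ConvTranspose1d right after construction.
import torch
from torch.ao.nn import quantized as nnq

torch.backends.quantized.engine = 'qnnpack'
m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
print(m.scale)           # output scale; defaults to 1.0
print(m.zero_point)      # output zero point; defaults to 0
print(m.weight().shape)  # unpacked weight tensor
```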

Examples

>>> torch.backends.quantized.engine = 'qnnpack'
>>> from torch.ao.nn import quantized as nnq
>>> # kernel size 3, stride 2
>>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
>>> # larger kernel, stride 2, with padding
>>> m = nnq.ConvTranspose1d(16, 33, 5, stride=2, padding=4)
>>> input = torch.randn(20, 16, 50)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> output = m(q_input)
>>> # the exact output size can also be specified as an argument
>>> input = torch.randn(1, 16, 12)
>>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)
>>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)
>>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)
>>> h = downsample(q_input)
>>> h.size()
torch.Size([1, 16, 6])
>>> output = upsample(h, output_size=input.size())
>>> output.size()
torch.Size([1, 16, 12])
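The module returns a quantized tensor; downstream float code usually needs it converted back, which the standard Tensor.dequantize method does. A short sketch (assumes a PyTorch build with QNNPACK support):

```python
# Sketch: converting the quantized output of ConvTranspose1d back to float.
import torch
from torch.ao.nn import quantized as nnq

torch.backends.quantized.engine = 'qnnpack'
m = nnq.ConvTranspose1d(16, 33, 3, stride=2)
q_input = torch.quantize_per_tensor(torch.randn(20, 16, 50),
                                    scale=1.0, zero_point=0,
                                    dtype=torch.quint8)
q_output = m(q_input)             # quantized tensor (torch.quint8)
f_output = q_output.dequantize()  # float32 tensor, same shape
print(f_output.shape)             # torch.Size([20, 33, 101])
```

The output length follows the usual transposed-convolution formula, (L_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1, which for this configuration gives (50 - 1) * 2 + 2 + 1 = 101.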