FakeQuantizedEmbedding
- class torchao.quantization.qat.FakeQuantizedEmbedding(num_embeddings: int, embedding_dim: int, padding_idx: Optional[int] = None, max_norm: Optional[float] = None, norm_type: float = 2.0, scale_grad_by_freq: bool = False, sparse: bool = False, weight_config: Optional[FakeQuantizeConfigBase] = None, *args, **kwargs)[source]
General embedding layer with fake quantized weights.
The specific target dtype, granularity, scheme, etc. are specified through separate configs for the weights and activations.
Example usage:
    import torch

    from torchao.quantization.qat import FakeQuantizedEmbedding, IntxFakeQuantizeConfig

    # Fake quantize the embedding weights to int4 with symmetric per-group scales.
    # group_size must evenly divide embedding_dim (10 here), so 5 is used.
    weight_config = IntxFakeQuantizeConfig(
        dtype=torch.int4,
        group_size=5,
        symmetric=True,
    )
    # Pass weight_config by keyword: the third positional argument is padding_idx.
    fq_embedding = FakeQuantizedEmbedding(5, 10, weight_config=weight_config)
    fq_embedding(torch.LongTensor([3]))
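The sketch below (not part of the original docs) illustrates what the fake quantization does at runtime: it builds a regular nn.Embedding and a FakeQuantizedEmbedding that share the same weights, then compares their outputs. It assumes IntxFakeQuantizeConfig is importable from torchao.quantization.qat and reuses the config arguments from the example above.

    import torch
    import torch.nn as nn

    from torchao.quantization.qat import FakeQuantizedEmbedding, IntxFakeQuantizeConfig

    torch.manual_seed(0)

    # A plain float embedding and a fake-quantized one sharing identical weights.
    float_embedding = nn.Embedding(5, 10)
    weight_config = IntxFakeQuantizeConfig(dtype=torch.int4, group_size=5, symmetric=True)
    fq_embedding = FakeQuantizedEmbedding(5, 10, weight_config=weight_config)
    with torch.no_grad():
        fq_embedding.weight.copy_(float_embedding.weight)

    indices = torch.LongTensor([0, 3])
    float_out = float_embedding(indices)
    fq_out = fq_embedding(indices)

    # Both outputs stay in the float dtype with shape (2, 10); the fake-quantized
    # output differs by the rounding error of the simulated int4 weights.
    print(float_out.shape, fq_out.shape)
    print(torch.allclose(float_out, fq_out))  # typically False

Because the weights are only fake quantized, the module still trains with ordinary float gradients, which is the point of using it during quantization-aware training before converting to a truly quantized embedding.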