CosineEmbeddingLoss
- class torch.nn.modules.loss.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')[source]
Creates a criterion that measures the loss given input tensors $x_1$, $x_2$ and a Tensor label $y$ with values 1 or -1. Use $y = 1$ to maximize the cosine similarity of two inputs, and $y = -1$ otherwise. This is typically used for learning nonlinear embeddings or semi-supervised learning.
The loss function for each sample is

$$
\text{loss}(x, y) =
\begin{cases}
1 - \cos(x_1, x_2), & \text{if } y = 1 \\
\max(0, \cos(x_1, x_2) - \text{margin}), & \text{if } y = -1
\end{cases}
$$
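As a quick sanity check of the piecewise definition above, the following sketch recomputes the loss by hand with torch.nn.functional.cosine_similarity and compares it to the module with reduction='none'. The shapes, labels, and margin value are illustrative, not part of the API.

>>> import torch
>>> import torch.nn as nn
>>> import torch.nn.functional as F
>>> x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
>>> y = torch.tensor([1, -1, 1, -1])
>>> margin = 0.2
>>> # 1 - cos(x1, x2) where y == 1, max(0, cos(x1, x2) - margin) where y == -1
>>> cos = F.cosine_similarity(x1, x2, dim=1)
>>> manual = torch.where(y == 1, 1 - cos, torch.clamp(cos - margin, min=0))
>>> torch.allclose(manual, nn.CosineEmbeddingLoss(margin=margin, reduction='none')(x1, x2, y))
True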
- Parameters
  - margin (float, optional) – Should be a number from $-1$ to $1$; $0$ to $0.5$ is suggested. If margin is missing, the default value is $0$.
  - size_average (bool, optional) – Deprecated (see reduction). By default, the losses are averaged over each loss element in the batch. Note that for some losses there are multiple elements per sample. If the field size_average is set to False, the losses are instead summed for each minibatch. Ignored when reduce is False. Default: True
  - reduce (bool, optional) – Deprecated (see reduction). By default, the losses are averaged or summed over observations for each minibatch depending on size_average. When reduce is False, returns a loss per batch element instead and ignores size_average. Default: True
  - reduction (str, optional) – Specifies the reduction to apply to the output: 'none' | 'mean' | 'sum'. 'none': no reduction will be applied; 'mean': the sum of the output will be divided by the number of elements in the output; 'sum': the output will be summed. Note: size_average and reduce are in the process of being deprecated, and in the meantime, specifying either of those two args will override reduction. Default: 'mean'. (A short comparison of the three modes follows this list.)
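To make the reduction modes concrete, here is a short, hedged comparison; the tensors are arbitrary, and only the documented 'none', 'mean', and 'sum' options are exercised.

>>> import torch
>>> import torch.nn as nn
>>> x1, x2 = torch.randn(3, 5), torch.randn(3, 5)
>>> y = torch.tensor([1, -1, 1])
>>> per_sample = nn.CosineEmbeddingLoss(reduction='none')(x1, x2, y)
>>> per_sample.shape
torch.Size([3])
>>> torch.isclose(nn.CosineEmbeddingLoss(reduction='mean')(x1, x2, y), per_sample.mean())
tensor(True)
>>> torch.isclose(nn.CosineEmbeddingLoss(reduction='sum')(x1, x2, y), per_sample.sum())
tensor(True)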
- Shape
  - Input1: $(N, D)$ or $(D)$, where N is the batch size and D is the embedding dimension.
  - Input2: $(N, D)$ or $(D)$, same shape as Input1.
  - Target: $(N)$ or $()$.
  - Output: If reduction is 'none', then $(N)$, otherwise scalar. (A small check of these shapes follows this list.)
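The snippet below checks the shape contract listed above. It assumes a PyTorch version that accepts the unbatched $(D)$ inputs with a scalar $()$ target, as documented in the Shape section; all concrete sizes are illustrative.

>>> import torch
>>> import torch.nn as nn
>>> loss = nn.CosineEmbeddingLoss()
>>> loss(torch.randn(3, 5), torch.randn(3, 5), torch.tensor([1., -1., 1.])).shape  # batched, 'mean' -> scalar
torch.Size([])
>>> nn.CosineEmbeddingLoss(reduction='none')(torch.randn(3, 5), torch.randn(3, 5), torch.tensor([1., -1., 1.])).shape
torch.Size([3])
>>> loss(torch.randn(5), torch.randn(5), torch.tensor(1.)).shape  # unbatched -> scalar
torch.Size([])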
Examples
>>> loss = nn.CosineEmbeddingLoss()
>>> input1 = torch.randn(3, 5, requires_grad=True)
>>> input2 = torch.randn(3, 5, requires_grad=True)
>>> target = torch.ones(3)
>>> output = loss(input1, input2, target)
>>> output.backward()
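Since the description notes this criterion is typically used for learning nonlinear embeddings, here is a minimal, hedged training sketch under that assumption; the network, pair data, optimizer, and hyperparameters are all illustrative choices, not prescribed by the API.

>>> import torch
>>> import torch.nn as nn
>>> net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
>>> opt = torch.optim.SGD(net.parameters(), lr=0.1)
>>> loss_fn = nn.CosineEmbeddingLoss(margin=0.5)
>>> a, b = torch.randn(64, 16), torch.randn(64, 16)
>>> y = torch.randint(0, 2, (64,)) * 2 - 1  # +1 for similar pairs, -1 for dissimilar
>>> for _ in range(10):
...     opt.zero_grad()
...     loss_fn(net(a), net(b), y).backward()
...     opt.step()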