PyTorch: nn
Created On: Dec 03, 2020 | Last Updated: Sep 29, 2025 | Last Verified: Nov 05, 2024
A third order polynomial, trained to predict \(y=\sin(x)\) from \(-\pi\) to \(\pi\) by minimizing squared Euclidean distance.
This implementation uses the nn package from PyTorch to build the network. PyTorch autograd makes it easy to define computational graphs and take gradients, but raw autograd can be a bit too low-level for defining complex neural networks; this is where the nn package can help. The nn package defines a set of Modules, which you can think of as neural network layers that produce output from input and may hold some trainable weights.
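As a minimal illustrative sketch (not part of the example script below), a single nn.Linear Module already behaves this way: calling it maps an input Tensor to an output Tensor, and its weight and bias are trainable Tensors registered as parameters:

import torch

layer = torch.nn.Linear(3, 1)            # a Module computing y = x @ W.T + b
out = layer(torch.randn(5, 3))           # calling the Module runs its forward pass
print(out.shape)                         # torch.Size([5, 1])
for name, p in layer.named_parameters():
    print(name, p.shape, p.requires_grad)  # weight (1, 3) and bias (1,), both learnable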
99 289.04107666015625
199 197.71018981933594
299 136.2671661376953
399 94.89392852783203
499 67.00952911376953
599 48.198585510253906
699 35.49624252319336
799 26.91018295288086
899 21.100711822509766
999 17.165796279907227
1099 14.497730255126953
1199 12.686685562133789
1299 11.456018447875977
1399 10.618821144104004
1499 10.048646926879883
1599 9.659878730773926
1699 9.394503593444824
1799 9.213150024414062
1899 9.089075088500977
1999 9.004087448120117
Result: y = 0.011030223220586777 + 0.848137617111206 x + -0.0019028971437364817 x^2 + -0.0921066552400589 x^3
import torch
import math
# Create Tensors to hold input and outputs.
x = torch.linspace(-math.pi, math.pi, 2000)
y = torch.sin(x)
# For this example, the output y is a linear function of (x, x^2, x^3), so
# we can consider it as a linear layer neural network. Let's prepare the
# tensor (x, x^2, x^3).
p = torch.tensor([1, 2, 3])
xx = x.unsqueeze(-1).pow(p)
# In the above code, x.unsqueeze(-1) has shape (2000, 1), and p has shape
# (3,), for this case, broadcasting semantics will apply to obtain a tensor
# of shape (2000, 3)
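
# A quick sanity check of the broadcast result (an added line, not in the
# original script): the columns of xx hold x, x**2 and x**3.
assert xx.shape == (2000, 3)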
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. The Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
# The Flatten layer flattens the output of the linear layer to a 1D tensor,
# to match the shape of `y`.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 1),
    torch.nn.Flatten(0, 1)
)
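
# (Added sketch, not in the original script.) Sequential registers its children
# by index, so the Linear layer's learnable Tensors can be listed like this:
#   for name, p in model.named_parameters():
#       print(name, p.shape)   # prints "0.weight torch.Size([1, 3])" and "0.bias torch.Size([1])"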
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
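
# (Added note, not in the original script.) With reduction='sum' the squared
# errors are summed rather than averaged; for example
#   loss_fn(torch.tensor([1., 2.]), torch.tensor([0., 0.]))
# evaluates to 5.0, since 1**2 + 2**2 = 5.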
learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(xx)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the
    # loss.
    loss = loss_fn(y_pred, y)
    if t % 100 == 99:
        print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
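
# (Added note, not in the original script.) The manual update above is plain
# gradient descent; the same step could be expressed with the optim package,
# e.g. optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate), then
# optimizer.zero_grad(), loss.backward(), optimizer.step() on each iteration.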
# You can access the first layer of `model` like accessing the first item of a list
linear_layer = model[0]
# For linear layer, its parameters are stored as `weight` and `bias`.
print(f'Result: y = {linear_layer.bias.item()} + {linear_layer.weight[:, 0].item()} x + {linear_layer.weight[:, 1].item()} x^2 + {linear_layer.weight[:, 2].item()} x^3')
Total running time of the script: (0 minutes 0.559 seconds)