Note: the book does not explain the code in much detail, so this post adds thorough comments on many of the finer points. Also, the book's source code is meant to be run in Jupyter Notebook and is rather fragmented; here the code is gathered into a single script, polished, and fully tested with VS Code under Python 3.9.18.
Chapter 3 Linear Neural Networks
3.3 Concise Implementation of Linear Regression
import numpy as np
import torch
from torch.utils import data
from d2l import torch as d2l

true_w = torch.tensor([2, -3.4])
true_b = 4.2
features, labels = d2l.synthetic_data(true_w, true_b, 1000)

# Construct a PyTorch data iterator
def load_array(data_arrays, batch_size, is_train=True):  #@save
    # TensorDataset, provided by the torch.utils.data module, is a dataset wrapper
    # that builds a dataset from a sequence of tensors. Here data_arrays is expected
    # to be a tuple holding the input features and the corresponding labels, and
    # *data_arrays uses iterable unpacking to pass its elements as separate arguments.
    dataset = data.TensorDataset(*data_arrays)
    # DataLoader is an iterator that yields batches of data during training or testing.
    return data.DataLoader(dataset, batch_size, shuffle=is_train)
batch_size = 10
data_iter = load_array((features, labels), batch_size)
print(next(iter(data_iter)))  # calling next() returns the iterator's next item and updates its internal state for the following call
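To make the batching behavior concrete, here is a minimal sketch that feeds toy tensors through load_array (X_demo and y_demo are made-up names, purely for illustration):

X_demo = torch.arange(12, dtype=torch.float32).reshape(6, 2)  # 6 samples with 2 features each
y_demo = torch.arange(6, dtype=torch.float32).reshape(6, 1)   # 6 matching labels
demo_iter = load_array((X_demo, y_demo), batch_size=3)
for Xb, yb in demo_iter:
    print(Xb.shape, yb.shape)  # each iteration yields torch.Size([3, 2]) torch.Size([3, 1])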
# Define the model; nn is the abbreviation for neural networks
from torch import nn

net = nn.Sequential(nn.Linear(2, 1))
# Creates a sequential neural network with one linear layer.
# The input size (in_features) is 2, so the network expects inputs with 2 features;
# the output size (out_features) is 1, so the network produces 1 output.
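For comparison, the same model could also be written as a custom nn.Module subclass; a sketch reusing the nn import above (the class name LinearRegression is ours). nn.Sequential is simply the more concise form when the network is a plain chain of layers:

class LinearRegression(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(2, 1)  # the same layer that net wraps above

    def forward(self, X):
        return self.linear(X)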
# Initialize the model parameters
net[0].weight.data.normal_(0, 0.01)  # the trailing underscore in normal_ means the operation is performed in-place, modifying the existing tensor in memory
net[0].bias.data.fill_(0)
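The same initialization can also be expressed with the torch.nn.init helpers; a sketch with the same effect as the two in-place calls above:

nn.init.normal_(net[0].weight, mean=0, std=0.01)
nn.init.zeros_(net[0].bias)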
# Define the mean squared error loss, also known as the squared L2 norm;
# by default it returns the average loss over all samples
loss = nn.MSELoss()  # MSE: mean squared error
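A quick sketch to verify what MSELoss computes, using two made-up tensors: with the default reduction='mean' it is exactly the mean of the squared differences.

y_hat = torch.tensor([[1.0], [2.0]])
y_true = torch.tensor([[1.5], [1.0]])
print(loss(y_hat, y_true))             # tensor(0.6250)
print(((y_hat - y_true) ** 2).mean())  # the same value computed by hand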
# Define the optimization algorithm (still minibatch stochastic gradient descent);
# it updates the parameters of the neural network (net.parameters()) using
# gradients computed during backpropagation
trainer = torch.optim.SGD(net.parameters(), lr=0.03)  # SGD: stochastic gradient descent
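For plain SGD, the update that trainer.step() performs amounts to p ← p − lr · p.grad for every parameter. A rough manual sketch, assuming l.backward() has already populated the gradients (p.grad is None before any backward call):

with torch.no_grad():
    for p in net.parameters():
        p -= 0.03 * p.grad  # the same lr=0.03 as above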
# Training
num_epochs = 3
for epoch in range(num_epochs):
    for X, y in data_iter:
        l = loss(net(X), y)
        trainer.zero_grad()
        l.backward()
        trainer.step()  # updates the model parameters using the computed gradients and the optimization algorithm
    l = loss(net(features), labels)
    print(f'epoch {epoch + 1}, loss {l:.6f}')  # {l:.6f} formats l as a float with 6 digits after the decimal point

w = net[0].weight.data
print('estimation error of w:', true_w - w.reshape(true_w.shape))
b = net[0].bias.data
print('estimation error of b:', true_b - b)
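Once trained, the net can be used directly for prediction; a sketch with a made-up input:

X_new = torch.tensor([[1.0, 2.0]])
print(net(X_new))  # should be close to 2 * 1.0 + (-3.4) * 2.0 + 4.2 = -0.6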