
Contents

1. Reimplementing a feedforward neural network with predefined operators

(1) Reimplementing the binary classification task with PyTorch's predefined operators

(2) Improving the Runner class

(3) Model training

(4) Performance evaluation

2. Adding a hidden layer of 3 neurons, implementing the binary classification again, and comparing with Part 1

3. Choosing the number of hidden layers and the number of neurons in each hidden layer, and searching for the best hyperparameters for the binary classification task. The dataset may be modified as appropriate to make the hyperparameter exploration easier.


1. Reimplementing a feedforward neural network with predefined operators

Click to view the previously implemented feedforward network

(1) Reimplementing the binary classification task with PyTorch's predefined operators

Import the necessary libraries and modules:

from data import make_moons
from nndl import accuracy
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import torch
from Runner2_1 import RunnerV2_2
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

The network structure Model_MLP_L2_V2 is defined as follows:

class Model_MLP_L2_V2(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(Model_MLP_L2_V2, self).__init__()
        # First linear layer
        self.fc1 = nn.Linear(input_size, hidden_size)
        # Initialize weights from a normal distribution
        self.fc1.weight.data = torch.normal(mean=0.0, std=1.0, size=self.fc1.weight.data.size())
        self.fc1.bias.data.fill_(0.0)  # constant-initialize biases to 0
        # Second linear layer
        self.fc2 = nn.Linear(hidden_size, output_size)
        self.fc2.weight.data = torch.normal(mean=0.0, std=1.0, size=self.fc2.weight.data.size())
        self.fc2.bias.data.fill_(0.0)  # constant-initialize biases to 0
        # Logistic (sigmoid) activation function
        self.act_fn = torch.sigmoid
        self.layers = [self.fc1, self.act_fn, self.fc2, self.act_fn]

    # Forward computation
    def forward(self, inputs):
        z1 = self.fc1(inputs)
        a1 = self.act_fn(z1)
        z2 = self.fc2(a1)
        a2 = self.act_fn(z2)
        return a2
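As a quick sanity check, the same two-layer architecture can also be expressed with `nn.Sequential` and probed with a random batch. This is a sketch, not part of the original text, and it omits the custom normal-distribution initialization above:

```python
import torch
import torch.nn as nn

# Equivalent layer stack: nn.Sigmoid() modules stand in for the
# torch.sigmoid calls made in Model_MLP_L2_V2.forward()
model = nn.Sequential(
    nn.Linear(2, 5),
    nn.Sigmoid(),
    nn.Linear(5, 1),
    nn.Sigmoid(),
)
x = torch.randn(4, 2)   # a batch of 4 two-dimensional points
out = model(x)
print(out.shape)        # torch.Size([4, 1])
# the final sigmoid keeps every output in (0, 1)
print(bool(((out >= 0) & (out <= 1)).all()))
```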

Dataset construction and split:

# Construct the dataset
n_samples = 1000
X, y = make_moons(n_samples=n_samples, shuffle=True, noise=0.2)
# Split the dataset
num_train = 640  # number of training samples
num_dev = 160    # number of validation samples
num_test = 200   # number of test samples
# Split according to the sizes above
X_train, y_train = X[:num_train], y[:num_train]  # training set
X_dev, y_dev = X[num_train:num_train + num_dev], y[num_train:num_train + num_dev]  # validation set
X_test, y_test = X[num_train + num_dev:], y[num_train + num_dev:]  # test set
# Reshape the labels to [N, 1]
y_train = y_train.reshape([-1, 1])
y_dev = y_dev.reshape([-1, 1])
y_test = y_test.reshape([-1, 1])

Visualize the generated dataset:

plt.figure(figsize=(5, 5))  # set the figure size
plt.scatter(x=X[:, 0], y=X[:, 1], marker='*', c=y, cmap='viridis')  # draw the scatter plot
plt.xlim(-3, 4)  # set the x-axis range
plt.ylim(-3, 4)  # set the y-axis range
plt.grid(True, linestyle='--', alpha=0.3)  # add a grid
plt.show()  # show the figure
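The `make_moons` helper above comes from a local `data` module. For readers who do not have it, a minimal NumPy stand-in might look like the following. This is an assumption: the real helper's signature and return types may differ; this version returns float32 tensors shaped like the splits used above.

```python
import numpy as np
import torch

def make_moons_np(n_samples=1000, shuffle=True, noise=0.2, seed=0):
    """Minimal stand-in for the local data.make_moons helper (illustrative)."""
    rng = np.random.default_rng(seed)
    n_out = n_samples // 2
    n_in = n_samples - n_out
    # Two interleaving half circles
    theta_out = np.linspace(0, np.pi, n_out)
    theta_in = np.linspace(0, np.pi, n_in)
    outer = np.stack([np.cos(theta_out), np.sin(theta_out)], axis=1)
    inner = np.stack([1 - np.cos(theta_in), 0.5 - np.sin(theta_in)], axis=1)
    X = np.concatenate([outer, inner]).astype(np.float32)
    y = np.concatenate([np.zeros(n_out), np.ones(n_in)]).astype(np.float32)
    # Gaussian jitter around both moons
    X += rng.normal(scale=noise, size=X.shape).astype(np.float32)
    if shuffle:
        idx = rng.permutation(n_samples)
        X, y = X[idx], y[idx]
    return torch.from_numpy(X), torch.from_numpy(y)

X, y = make_moons_np(1000)
print(X.shape, y.shape)  # torch.Size([1000, 2]) torch.Size([1000])
```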

(2) Improving the Runner class

Building on the RunnerV2_1 class implemented in the previous section, the RunnerV2_2 class in this section uses automatic gradient computation during training; when saving the model, it retrieves the parameters with the state_dict method, and when loading, it restores them with the load_state_dict method.

import os
import torch

class RunnerV2_2(object):
    def __init__(self, model, optimizer, metric, loss_fn, **kwargs):
        self.model = model
        self.optimizer = optimizer
        self.loss_fn = loss_fn
        self.metric = metric
        # Track evaluation scores during training
        self.train_scores = []
        self.dev_scores = []
        # Track loss values during training
        self.train_loss = []
        self.dev_loss = []

    def train(self, train_set, dev_set, **kwargs):
        # Switch the model to training mode
        self.model.train()
        # Number of training epochs; defaults to 0 if not given
        num_epochs = kwargs.get("num_epochs", 0)
        # Logging frequency; defaults to 100 if not given
        log_epochs = kwargs.get("log_epochs", 100)
        # Model save path; defaults to "best_model.pdparams" if not given
        save_path = kwargs.get("save_path", "best_model.pdparams")
        # Custom logging function; defaults to None if not given
        custom_print_log = kwargs.get("custom_print_log", None)
        # Track the best metric seen so far
        best_score = 0
        # Run num_epochs epochs of training
        for epoch in range(num_epochs):
            X, y = train_set
            # Model predictions
            logits = self.model(X)
            # Cross-entropy loss
            trn_loss = self.loss_fn(logits, y)
            self.train_loss.append(trn_loss.item())
            # Evaluation metric
            trn_score = self.metric(logits, y)
            self.train_scores.append(trn_score)
            # Automatic gradient computation
            trn_loss.backward()
            if custom_print_log is not None:
                # Print per-layer gradients
                custom_print_log(self)
            # Parameter update
            self.optimizer.step()
            # Clear gradients
            self.optimizer.zero_grad()
            dev_score, dev_loss = self.evaluate(dev_set)
            # Save the model whenever the current metric is the best so far
            if dev_score > best_score:
                self.save_model(save_path)
                print(f"[Evaluate] best accuracy performence has been updated: {best_score:.5f} --> {dev_score:.5f}")
                best_score = dev_score
            if log_epochs and epoch % log_epochs == 0:
                print(f"[Train] epoch: {epoch}/{num_epochs}, loss: {trn_loss.item()}")

    # Evaluation: torch.no_grad() disables gradient computation and storage
    @torch.no_grad()
    def evaluate(self, data_set):
        # Switch the model to evaluation mode
        self.model.eval()
        X, y = data_set
        # Model outputs
        logits = self.model(X)
        # Loss
        loss = self.loss_fn(logits, y).item()
        self.dev_loss.append(loss)
        # Evaluation metric
        score = self.metric(logits, y)
        self.dev_scores.append(score)
        return score, loss

    # Prediction: torch.no_grad() disables gradient computation and storage
    @torch.no_grad()
    def predict(self, X):
        # Switch the model to evaluation mode
        self.model.eval()
        return self.model(X)

    # Save model parameters via model.state_dict()
    def save_model(self, saved_path):
        torch.save(self.model.state_dict(), saved_path)

    # Load model parameters via model.load_state_dict()
    def load_model(self, model_path):
        state_dict = torch.load(model_path, weights_only=True)
        self.model.load_state_dict(state_dict)
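The state_dict save/load pattern used by save_model and load_model can be exercised in isolation. Below is a minimal sketch; the throwaway `nn.Linear` model and temporary path are illustrative only, and `weights_only=True` requires PyTorch 1.13 or later:

```python
import os
import tempfile
import torch
import torch.nn as nn

# Serialize parameters with state_dict(), restore them with load_state_dict()
model = nn.Linear(2, 1)
path = os.path.join(tempfile.mkdtemp(), "best_model.pdparams")
torch.save(model.state_dict(), path)

clone = nn.Linear(2, 1)  # freshly initialized, so its weights differ
clone.load_state_dict(torch.load(path, weights_only=True))
restored = torch.equal(model.weight, clone.weight)
print(restored)  # True
```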

(3) Model training

Instantiate the RunnerV2_2 class and pass in the training configuration. The code is as follows:

# Training configuration
epoch_num = 1000  # number of training epochs
model_saved_dir = "best_model.pdparams"  # model save path
# Network parameters
input_size = 2   # input dimension is 2
hidden_size = 5  # hidden-layer dimension is 5
output_size = 1  # output dimension is 1
# Define the multilayer perceptron model
model = Model_MLP_L2_V2(input_size=input_size, hidden_size=hidden_size, output_size=output_size)
# Define the loss function
loss_fn = F.binary_cross_entropy
# Define the optimizer and set the learning rate
learning_rate = 0.2
optimizer = torch.optim.SGD(params=model.parameters(), lr=learning_rate)
# Define the evaluation metric
metric = accuracy
# Instantiate the RunnerV2_2 class and pass in the training configuration
runner = RunnerV2_2(model, optimizer, metric, loss_fn)
# Train the model (note: train() reads the save path from the "save_path" keyword)
runner.train([X_train, y_train], [X_dev, y_dev], num_epochs=epoch_num, log_epochs=50, save_path=model_saved_dir)
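One detail worth noting before looking at the output: `F.binary_cross_entropy` expects probabilities (i.e. sigmoid outputs) and float targets of the same shape, which is why `forward()` ends with a sigmoid and the labels were reshaped to `[N, 1]` earlier. A small standalone check:

```python
import torch
import torch.nn.functional as F

# Probabilities in [0, 1] paired with float targets of identical shape
probs = torch.tensor([[0.9], [0.1]])
targets = torch.tensor([[1.0], [0.0]])
loss = F.binary_cross_entropy(probs, targets)
# -(log(0.9) + log(0.9)) / 2 ≈ 0.10536
print(round(loss.item(), 4))
```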

Output:

[Evaluate] best accuracy performence has been updated: 0.00000 --> 0.48125
[Train] epoch: 0/1000, loss: 0.7482572793960571
[Evaluate] best accuracy performence has been updated: 0.48125 --> 0.50000
[Evaluate] best accuracy performence has been updated: 0.50000 --> 0.53750
[Evaluate] best accuracy performence has been updated: 0.53750 --> 0.60625
[Evaluate] best accuracy performence has been updated: 0.60625 --> 0.71250
[Evaluate] best accuracy performence has been updated: 0.71250 --> 0.73750
[Evaluate] best accuracy performence has been updated: 0.73750 --> 0.77500
[Evaluate] best accuracy performence has been updated: 0.77500 --> 0.78750
[Evaluate] best accuracy performence has been updated: 0.78750 --> 0.79375
[Evaluate] best accuracy performence has been updated: 0.79375 --> 0.80000
[Evaluate] best accuracy performence has been updated: 0.80000 --> 0.81250
[Train] epoch: 50/1000, loss: 0.4034937918186188
[Train] epoch: 100/1000, loss: 0.36812323331832886
[Train] epoch: 150/1000, loss: 0.3453332781791687
[Evaluate] best accuracy performence has been updated: 0.81250 --> 0.81875
[Evaluate] best accuracy performence has been updated: 0.81875 --> 0.82500
[Evaluate] best accuracy performence has been updated: 0.82500 --> 0.83125
[Evaluate] best accuracy performence h