
Preface:

  • ONNX is an open intermediate model format originally developed by Microsoft and Facebook; ONNX Runtime (ORT for short) is Microsoft's inference engine for ONNX.
  • It lets you take an onnx file as input and run inference on it directly to get results.

Inference with the Python API:

main function:


import cv2
import numpy as np
import onnxruntime

if __name__ == "__main__":
    # build an InferenceSession from the path to the onnx file
    session = onnxruntime.InferenceSession("workspace/yolov5s.onnx", providers=["CPUExecutionProvider"])

    # the actual computation
    image = cv2.imread("workspace/car.jpg")
    image_input, M, IM = preprocess(image)
    pred = session.run(["output"], {"images": image_input})[0]
    boxes = post_process(pred, IM)

    for obj in boxes:
        left, top, right, bottom = map(int, obj[:4])
        confidence = obj[4]
        label = int(obj[6])
        cv2.rectangle(image, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.putText(image, f"{label}: {confidence:.2f}", (left, top + 20), 0, 1, (0, 0, 255), 2, 16)

    cv2.imwrite("workspace/python-ort.jpg", image)
session = onnxruntime.InferenceSession("workspace/yolov5s.onnx", providers=["CPUExecutionProvider"])

This builds an InferenceSession; what you pass in is the path to the onnx file, and the backend chosen for the actual computation here is the CPU.

You could just as well pick CUDA or another execution provider.
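For instance, a session that prefers CUDA and falls back to CPU might look like the sketch below (the GPU provider requires the onnxruntime-gpu build):

import onnxruntime

# check which execution providers this build actually supports
print(onnxruntime.get_available_providers())

# providers are tried in order; the first available one wins
session = onnxruntime.InferenceSession(
    "workspace/yolov5s.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])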

image = cv2.imread("workspace/car.jpg")
image_input, M, IM = preprocess(image)

Then comes the preprocessing.

pred = session.run(["output"], {"images": image_input})[0]
boxes = post_process(pred, IM)

session.run is the actual inference call.

The first argument is the list of output names: it determines which nodes are fetched as outputs, so you pass it the names of the nodes you want.

The second argument is the input dict. If the model has several inputs, each name is paired with its array, e.g. {"input1": input1, "input2": input2, ...}.

The return value is a list of outputs, and we take item 0. That's all there is to it.
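If you are not sure what the names are, the session can report them itself. A minimal sketch (the two-input feed at the end is hypothetical, just to show the shape of the dict):

import onnxruntime

session = onnxruntime.InferenceSession("workspace/yolov5s.onnx", providers=["CPUExecutionProvider"])
print([i.name for i in session.get_inputs()])    # -> ['images']
print([o.name for o in session.get_outputs()])   # -> ['output']

# with a hypothetical two-input model, the feed dict simply grows:
# pred = session.run(["output"], {"input1": input1, "input2": input2})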

Preprocessing:


def preprocess(image, input_w=640, input_h=640):
    # letterbox: scale to fit 640x640 while keeping the aspect ratio, centered
    scale = min(input_h / image.shape[0], input_w / image.shape[1])
    ox = (-scale * image.shape[1] + input_w + scale - 1) * 0.5
    oy = (-scale * image.shape[0] + input_h + scale - 1) * 0.5
    M = np.array([
        [scale, 0, ox],
        [0, scale, oy]], dtype=np.float32)
    IM = cv2.invertAffineTransform(M)   # maps network coords back to the original image

    image_prep = cv2.warpAffine(image, M, (input_w, input_h), flags=cv2.INTER_LINEAR,
                                borderMode=cv2.BORDER_CONSTANT, borderValue=(114, 114, 114))
    image_prep = (image_prep[..., ::-1] / 255.0).astype(np.float32)  # BGR -> RGB, normalize
    image_prep = image_prep.transpose(2, 0, 1)[None]                 # HWC -> 1xCxHxW
    return image_prep, M, IM
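A quick way to convince yourself that M and IM really are inverses of each other is to push a point through both. A sketch, assuming a hypothetical 1080x810 input image:

import numpy as np
import cv2

image = np.zeros((1080, 810, 3), dtype=np.uint8)     # hypothetical original size
_, M, IM = preprocess(image)

pt = np.array([[[100.0, 200.0]]], dtype=np.float32)  # a point in the original image
pt_net  = cv2.transform(pt, M)                       # original -> 640x640 letterbox coords
pt_back = cv2.transform(pt_net, IM)                  # and back again
print(pt_back)                                       # ~ [[[100. 200.]]]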


Post-processing:

def nms(boxes, threshold=0.5):
    keep = []
    remove_flags = [False] * len(boxes)
    for i in range(len(boxes)):
        if remove_flags[i]:
            continue
        ib = boxes[i]
        keep.append(ib)
        for j in range(len(boxes)):
            if remove_flags[j]:
                continue
            jb = boxes[j]
            # skip on class mismatch or image_id mismatch
            if ib[6] != jb[6] or ib[5] != jb[5]:
                continue
            # intersection rectangle: element-wise max of the top-left corners,
            # element-wise min of the bottom-right corners
            cleft,  ctop    = max(ib[0], jb[0]), max(ib[1], jb[1])
            cright, cbottom = min(ib[2], jb[2]), min(ib[3], jb[3])
            cross = max(0, cright - cleft) * max(0, cbottom - ctop)
            union = max(0, ib[2] - ib[0]) * max(0, ib[3] - ib[1]) \
                  + max(0, jb[2] - jb[0]) * max(0, jb[3] - jb[1]) - cross
            iou = cross / union
            if iou >= threshold:
                remove_flags[j] = True
    return keep

def post_process(pred, IM, threshold=0.25):
    # pred: b x n x 85
    boxes = []
    for image_id, box_id in zip(*np.where(pred[..., 4] >= threshold)):
        item = pred[image_id, box_id]
        cx, cy, w, h, objness = item[:5]
        label = item[5:].argmax()
        confidence = item[5 + label] * objness
        if confidence < threshold:
            continue
        boxes.append([cx - w * 0.5, cy - h * 0.5, cx + w * 0.5, cy + h * 0.5,
                      confidence, image_id, label])

    if len(boxes) == 0:   # no detections
        return []
    boxes = np.array(boxes)
    # map boxes from the 640x640 letterboxed image back to the original image
    lr = boxes[:, [0, 2]]
    tb = boxes[:, [1, 3]]
    boxes[:, [0, 2]] = lr * IM[0, 0] + IM[0, 2]
    boxes[:, [1, 3]] = tb * IM[1, 1] + IM[1, 2]

    # left, top, right, bottom, confidence, image_id, label
    boxes = sorted(boxes.tolist(), key=lambda x: x[4], reverse=True)
    return nms(boxes)
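The IoU arithmetic inside nms is easy to sanity-check by hand with two hypothetical boxes:

# two 10x10 boxes overlapping by half: intersection 50, union 100 + 100 - 50 = 150
ib = [0, 0, 10, 10]
jb = [5, 0, 15, 10]
cleft,  ctop    = max(ib[0], jb[0]), max(ib[1], jb[1])    # 5, 0
cright, cbottom = min(ib[2], jb[2]), min(ib[3], jb[3])    # 10, 10
cross = max(0, cright - cleft) * max(0, cbottom - ctop)   # 50
union = 10 * 10 + 10 * 10 - cross                         # 150
print(cross / union)                                      # 0.333...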

Notice that the actual onnxruntime usage is only two lines: one onnxruntime.InferenceSession and one run, and that's it. Everything else is the same as before. This is very convenient, so if you have a model that needs a quick test, onnxruntime is highly recommended.
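Distilled to those two lines, with a random array standing in for a real preprocessed image (hypothetical, just to show the round trip):

import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("workspace/yolov5s.onnx", providers=["CPUExecutionProvider"])
dummy = np.random.rand(1, 3, 640, 640).astype(np.float32)   # stand-in for a preprocessed image
pred = session.run(["output"], {"images": dummy})[0]
print(pred.shape)   # (1, 25200, 85) for yolov5s at 640x640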

Inference with the C++ API:

Inference:

The main function contains nothing but a single call to inference:


int main(){
    inference();
    return 0;
}

So let's go straight to the walkthrough of inference.

auto engine_data = load_file("yolov5s.onnx");   // read the onnx file into memory
Ort::Env env(ORT_LOGGING_LEVEL_INFO, "onnx");   // set the logging level
Ort::SessionOptions session_options;            // session options, analogous to Python's
                                                // onnxruntime.InferenceSession("workspace/yolov5s.onnx", providers=["CPUExecutionProvider"])
auto mem = Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);   // set up the MemoryInfo
session_options.SetIntraOpNumThreads(1);
session_options.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);   // enable extended graph optimizations

Ort::Session session(env, "yolov5s.onnx", session_options);   // create the session with those options
auto output_dims = session.GetOutputTypeInfo(0).GetTensorTypeAndShapeInfo().GetShape();   // fetch the output shape
const char *input_names[] = {"images"}, *output_names[] = {"output"};

int input_batch = 1;
int input_channel = 3;
int input_height = 640;
int input_width = 640;
int64_t input_shape[] = {input_batch, input_channel, input_height, input_width};
int input_numel = input_batch * input_channel * input_height * input_width;
float* input_data_host = new float[input_numel];
auto input_tensor = Ort::Value::CreateTensor(mem, input_data_host, input_numel, input_shape, 4);
// create a tensor that references the data in input_data_host

Preprocessing:

// letter box
auto image = cv::imread("car.jpg");
// scale that fits the image into 640x640 while keeping the aspect ratio
float scale_x = input_width / (float)image.cols;
float scale_y = input_height / (float)image.rows;
float scale = std::min(scale_x, scale_y);
float i2d[6], d2i[6];
i2d[0] = scale;  i2d[1] = 0;  i2d[2] = (-scale * image.cols + input_width + scale - 1) * 0.5;
i2d[3] = 0;  i2d[4] = scale;  i2d[5] = (-scale * image.rows + input_height + scale - 1) * 0.5;

cv::Mat m2x3_i2d(2, 3, CV_32F, i2d);
cv::Mat m2x3_d2i(2, 3, CV_32F, d2i);
cv::invertAffineTransform(m2x3_i2d, m2x3_d2i);   // d2i maps network coords back to the original image

cv::Mat input_image(input_height, input_width, CV_8UC3);
cv::warpAffine(image, input_image, m2x3_i2d, input_image.size(), cv::INTER_LINEAR, cv::BORDER_CONSTANT, cv::Scalar::all(114));
cv::imwrite("input-image.jpg", input_image);

int image_area = input_image.cols * input_image.rows;
unsigned char* pimage = input_image.data;
float* phost_b = input_data_host + image_area * 0;
float* phost_g = input_data_host + image_area * 1;
float* phost_r = input_data_host + image_area * 2;
for(int i = 0; i < image_area; ++i, pimage += 3){
    // note: pimage is BGR; crossing the pointers here leaves the planes in RGB order
    *phost_r++ = pimage[0] / 255.0f;
    *phost_g++ = pimage[1] / 255.0f;
    *phost_b++ = pimage[2] / 255.0f;
}

Creating the output tensor and running inference:

// output shape from the model: batch x numbox x numprob
int output_numbox = output_dims[1];
int output_numprob = output_dims[2];
int num_classes = output_numprob - 5;
int output_numel = input_batch * output_numbox * output_numprob;
float* output_data_host = new float[output_numel];
int64_t output_shape[] = {input_batch, output_numbox, output_numprob};
auto output_tensor = Ort::Value::CreateTensor(mem, output_data_host, output_numel, output_shape, 3);

Ort::RunOptions options;
session.Run(options,
            (const char* const*)input_names,  &input_tensor,  1,
            (const char* const*)output_names, &output_tensor, 1);
// pass the input/output names, tensors and their counts; Run fills output_tensor

Post-processing:

// decode box
vector<vector<float>> bboxes;
float confidence_threshold = 0.25;
float nms_threshold = 0.5;
for(int i = 0; i < output_numbox; ++i){
    float* ptr = output_data_host + i * output_numprob;
    float objness = ptr[4];
    if(objness < confidence_threshold)
        continue;

    float* pclass = ptr + 5;
    int label     = std::max_element(pclass, pclass + num_classes) - pclass;
    float prob    = pclass[label];
    float confidence = prob * objness;
    if(confidence < confidence_threshold)
        continue;

    float cx     = ptr[0];
    float cy     = ptr[1];
    float width  = ptr[2];
    float height = ptr[3];
    float left   = cx - width * 0.5;
    float top    = cy - height * 0.5;
    float right  = cx + width * 0.5;
    float bottom = cy + height * 0.5;

    // map back to the original image with the inverse affine matrix d2i
    float image_base_left   = d2i[0] * left   + d2i[2];
    float image_base_right  = d2i[0] * right  + d2i[2];
    float image_base_top    = d2i[0] * top    + d2i[5];
    float image_base_bottom = d2i[0] * bottom + d2i[5];
    bboxes.push_back({image_base_left, image_base_top, image_base_right, image_base_bottom, (float)label, confidence});
}
printf("decoded bboxes.size = %d\n", (int)bboxes.size());

// nms: sort by confidence, descending
std::sort(bboxes.begin(), bboxes.end(), [](vector<float>& a, vector<float>& b){
    return a[5] > b[5];
});
std::vector<bool> remove_flags(bboxes.size());
std::vector<vector<float>> box_result;
box_result.reserve(bboxes.size());

auto iou = [](const vector<float>& a, const vector<float>& b){
    float cross_left   = std::max(a[0], b[0]);
    float cross_top    = std::max(a[1], b[1]);
    float cross_right  = std::min(a[2], b[2]);
    float cross_bottom = std::min(a[3], b[3]);

    float cross_area = std::max(0.0f, cross_right - cross_left) * std::max(0.0f, cross_bottom - cross_top);
    float union_area = std::max(0.0f, a[2] - a[0]) * std::max(0.0f, a[3] - a[1])
                     + std::max(0.0f, b[2] - b[0]) * std::max(0.0f, b[3] - b[1]) - cross_area;
    if(cross_area == 0 || union_area == 0) return 0.0f;
    return cross_area / union_area;
};

for(int i = 0; i < bboxes.size(); ++i){
    if(remove_flags[i]) continue;

    auto& ibox = bboxes[i];
    box_result.emplace_back(ibox);
    for(int j = i + 1; j < bboxes.size(); ++j){
        if(remove_flags[j]) continue;

        auto& jbox = bboxes[j];
        if(ibox[4] == jbox[4]){
            // class matched
            if(iou(ibox, jbox) >= nms_threshold)
                remove_flags[j] = true;
        }
    }
}
printf("box_result.size = %d\n", (int)box_result.size());

// draw the surviving boxes
for(int i = 0; i < box_result.size(); ++i){
    auto& ibox = box_result[i];
    float left = ibox[0];
    float top = ibox[1];
    float right = ibox[2];
    float bottom = ibox[3];
    int class_label = ibox[4];
    float confidence = ibox[5];
    cv::Scalar color;
    tie(color[0], color[1], color[2]) = random_color(class_label);
    cv::rectangle(image, cv::Point(left, top), cv::Point(right, bottom), color, 3);

    auto name    = cocolabels[class_label];
    auto caption = cv::format("%s %.2f", name, confidence);
    int text_width = cv::getTextSize(caption, 0, 1, 2, nullptr).width + 10;
    cv::rectangle(image, cv::Point(left - 3, top - 33), cv::Point(left + text_width, top), color, -1);
    cv::putText(image, caption, cv::Point(left, top - 5), 0, 1, cv::Scalar::all(0), 2, 16);
}
cv::imwrite("image-draw.jpg", image);

delete[] input_data_host;
delete[] output_data_host;
}

Summary:

As you can see, this is not much different from our earlier yolov5 post-processing. The key point is simply associating output_tensor with the output buffer and input_tensor with the input buffer.
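Incidentally, the Python API has a counterpart to this "pre-associate the buffers, then run" pattern: io_binding. A sketch, reusing session and image_input from the Python example above:

# bind the input/output buffers up front, then run; analogous to the C++ flow
binding = session.io_binding()
binding.bind_cpu_input("images", image_input)   # associate the input array
binding.bind_output("output")                   # let ORT allocate the output
session.run_with_iobinding(binding)
pred = binding.copy_outputs_to_cpu()[0]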

