ValueError: This ORT build has [...] enabled

2023-06-30  小黄不头秃

Error message

ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Problem: the error above is raised when creating an inference session with onnxruntime.

import onnxruntime as ort
import onnx
import numpy as np

# Load the model
model = onnx.load("./resnet18.onnx")
# Check that the model is well formed
onnx.checker.check_model(model)

# 1. Create the session (on ORT >= 1.9 this raises the ValueError above,
#    because the providers parameter is not specified)
session = ort.InferenceSession("./resnet18.onnx")
x = np.random.randn(1, 3, 224, 224).astype(np.float32)  # the input must be numpy float32

# session.run returns a list of output arrays
outputs = session.run(None, input_feed={"input": x})
print(outputs[0].shape)


Solution

# 1. Create the session, explicitly listing the execution providers
session = ort.InferenceSession("./resnet18.onnx", providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'])
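Hard-coding TensorRT or CUDA providers fails on machines where those builds are not installed, so a more portable approach is to intersect a preferred order with what the local build actually offers (via `ort.get_available_providers()`). The helper below is a hypothetical sketch of that idea, `select_providers` is not part of the onnxruntime API:

```python
# Hypothetical helper: keep only the preferred providers that exist in
# this ORT build, in preference order, falling back to CPU.
def select_providers(available, preferred=("TensorrtExecutionProvider",
                                           "CUDAExecutionProvider",
                                           "CPUExecutionProvider")):
    chosen = [p for p in preferred if p in available]
    return chosen or ["CPUExecutionProvider"]

# In practice you would wire it up like this:
#   import onnxruntime as ort
#   providers = select_providers(ort.get_available_providers())
#   session = ort.InferenceSession("./resnet18.onnx", providers=providers)

# CPU-only build: only the CPU provider survives the intersection
print(select_providers(["CPUExecutionProvider"]))
```

This way the same script runs on a GPU box and on a CPU-only box without editing the providers list.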
