Building a Model Inference Framework Based on TF Serving

2020-06-26  georgeguo

1 Introduction to TF Serving

TensorFlow Serving is a high-performance serving system for deploying machine learning models in production. It loads models exported in the SavedModel (pb) format and serves them over gRPC and REST APIs, with built-in support for model versioning and hot reloading of new versions.

2 Saving the model as the pb (SavedModel) format required by TF Serving

Saving a model with a custom signature (here a Keras model is exported to the SavedModel pb format with the SavedModelBuilder API):

import os
import shutil
import tensorflow as tf


def save_pb_model(tf_model, save_path="./pb/wind_lstm/20200627"):
    # Remove any existing export at this path; SavedModelBuilder refuses to
    # write into a non-empty directory.
    if os.path.exists(save_path):
        shutil.rmtree(save_path)

    # Map the model's tensors to the names clients will use in requests.
    # 'version' is assumed to be a custom tensor attached to the model;
    # drop it if your model does not define one.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={'wind_seq': tf_model.input},
        outputs={'predict': tf_model.output, 'version': tf_model.version})
    builder = tf.saved_model.builder.SavedModelBuilder(save_path)

    # Export the current Keras session graph with the 'serve' tag and
    # register the signature under the name 'regression'.
    builder.add_meta_graph_and_variables(
        tf.compat.v1.keras.backend.get_session(),
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            'regression': signature,
        }
    )
    builder.save()
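
The function above can be applied to a trained Keras model. A minimal usage sketch follows; the .h5 file name and the version constant are placeholders, not taken from the original post:

# Hypothetical usage: export a trained Keras LSTM whose input is shaped (None, 2, 10).
# The signature above also exports a custom 'version' tensor, so a stand-in constant
# is attached here; remove or adapt it to match how your model defines it.
model = tf.keras.models.load_model("wind_lstm.h5")   # placeholder file name
model.version = tf.constant([20200627.0])
save_pb_model(model, save_path="./pb/wind_lstm/20200627")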

Inspect the signatures of the exported model:

[root@localhost pb]# saved_model_cli show --dir wind_lstm/20200625/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['regression']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['wind_seq'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2, 10)
        name: lstm_input:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['predict'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 2)
        name: dense/BiasAdd:0
  Method name is: tensorflow/serving/predict


3 Starting TF Serving

3.1 Pulling the TF Serving image

This article uses the TF Serving Docker image. Pull it with:

docker pull tensorflow/serving:latest         # CPU version
docker pull tensorflow/serving:nightly-gpu    # GPU version

3.2 Deploying a single model

Start TF Serving:

docker run -p 8500:8500 \
-v /pb/wind_lstm:/models \
--name wind_predict \
-it -d --entrypoint=tensorflow_model_server tensorflow/serving \
--port=8500  --rest_api_port=8501 \
--enable_batching=true --model_name=wind_lstm --model_base_path=/models

Explanation of the docker parameters:

- -p 8500:8500: publish the container's gRPC port 8500 on the host
- -v /pb/wind_lstm:/models: mount the host directory holding the model's version subdirectories at /models inside the container
- --name wind_predict: name of the container
- -it -d: allocate a TTY and run the container in the background
- --entrypoint=tensorflow_model_server: run tensorflow_model_server as the container's entrypoint, so the flags after the image name are passed to it

Explanation of the tensorflow_model_server parameters:

- --port=8500: gRPC listening port
- --rest_api_port=8501: REST API listening port (not published to the host in this example)
- --enable_batching=true: enable server-side request batching
- --model_name=wind_lstm: model name that clients reference in their requests
- --model_base_path=/models: directory inside the container that contains the numeric version subdirectories

3.3 Deploying multiple models

Define a models.config file
To deploy multiple models, define a models.config file and place all the models under the same directory (see the layout sketch after the config below). The models.config file looks like this:

model_config_list:{
    config:{
      name:"wind_lstm",
      base_path:"/models/wind_lstm",
      model_platform:"tensorflow",
      model_version_policy:{
        all:{}
      }
    },
    config:{
      name:"wind_lstm1",
      base_path:"/models/wind_lstm1",
      model_platform:"tensorflow",
      model_version_policy:{
        all:{}
      }
    }
}
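
With the -v /pb:/models mount used in the next command, the host directory is expected to look roughly like this (the version numbers are examples; version directories must be numeric, each containing saved_model.pb and variables/):

/pb
    models.config
    wind_lstm/
        20200625/
        20200626/
    wind_lstm1/
        20200626/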

Start TF Serving:

docker run -p 8500:8500 -p 8501:8501 \
-v /pb:/models \
--name wind_predicts -it -d \
--entrypoint=tensorflow_model_server tensorflow/serving \
--port=8500  --rest_api_port=8501 \
--enable_batching=true \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60

Explanation of the tensorflow_model_server parameters:

- --model_config_file=/models/models.config: path, inside the container, of the multi-model configuration file
- --model_config_file_poll_wait_seconds=60: re-read the config file every 60 seconds, so models can be added or removed without restarting the server

3.4 QA

Error at startup:

No versions of servable wind_lstm found under base path /models

This error almost always means the model path is not mapped correctly: --model_base_path (or base_path in models.config) must point, inside the container, at a directory that contains at least one numeric version subdirectory such as 20200626/.

4 Accessing the TF Serving service

4.1 REST API access

GET endpoint

A GET request to http://<host>:8501/v1/models/wind_lstm returns the status of every loaded version of the model. A typical response:

{
 "model_version_status": [
  {
   "version": "20200626",
   "state": "AVAILABLE",
   "status": {
    "error_code": "OK",
    "error_message": ""
   }
  },
  {
   "version": "20200625",
   "state": "AVAILABLE",
   "status": {
    "error_code": "OK",
    "error_message": ""
   }
  }
 ]
}
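
A minimal Python sketch of this status query (the host is taken from the gRPC example below, and the requests package is assumed to be installed):

import requests

# Query the status of all loaded versions of the wind_lstm model
resp = requests.get("http://192.168.2.200:8501/v1/models/wind_lstm")
print(resp.json())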

POST endpoint
POST requests are used to call the model's prediction interface at http://<host>:8501/v1/models/wind_lstm:predict. The request body can be written in either of two formats.

Row format: the data to be submitted is placed in an instances list, as JSON.

{
    "instances": [
        {"wind_seq":[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}, 
        {"wind_seq":[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
    ], 
    "signature_name": "regression"
}

Here, signature_name is the signature name that was defined when the model was exported.

Columnar format: the data to be submitted is placed in an inputs object, as JSON.

{
    "inputs": {
        "wind_seq":[
            [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 
            [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
        ]
    }, 
    "signature_name": "regression"
}
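
A minimal Python sketch of the predict call, using the row-format payload above (the host and the requests package are assumptions, as before):

import requests

payload = {
    "signature_name": "regression",
    "instances": [
        {"wind_seq": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
                      [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
    ]
}
# POST to the model's predict endpoint; the reply is typically {"predictions": [...]}
resp = requests.post("http://192.168.2.200:8501/v1/models/wind_lstm:predict", json=payload)
print(resp.json())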

4.2 Python gRPC access

import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc

if __name__ == "__main__":
    # One sequence of shape (2, 10); the leading dimension is the batch,
    # matching the signature's input shape (-1, 2, 10).
    x = [[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]]
    input_data = np.array(x, dtype=np.float32)   # shape (1, 2, 10)

    # 8500 is the gRPC port (--port in the docker run command); 8501 only serves the REST API.
    with grpc.insecure_channel(target='192.168.2.200:8500') as channel:
        stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)
        request = predict_pb2.PredictRequest()
        request.model_spec.name = "wind_lstm"
        request.model_spec.signature_name = "regression"
        request.inputs["wind_seq"].CopyFrom(
            tf.make_tensor_proto(input_data, dtype=tf.float32, shape=input_data.shape))
        response = stub.Predict(request, 5.0)  # 5 second timeout
        print(response)
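
To work with the result as a NumPy array instead of printing the raw protobuf, the output tensor named in the signature can be converted, e.g.:

# Convert the 'predict' output TensorProto to a NumPy array (shape (-1, 2) for this model)
predict = tf.make_ndarray(response.outputs["predict"])
print(predict)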

Note: calling the TF Serving gRPC interface requires installing tensorflow-serving-api:

pip install tensorflow-serving-api==1.15.0 