Converting a TensorFlow Model for Intel OpenVINO

While working with OpenVINO, I got stuck on the PyTorch version of SSD, so I gave up on fixing it and switched to the TensorFlow version.
First, I downloaded the ssdlite_mobilenet_v2_coco model and ran the command below. With an eye toward eventually running it on a Raspberry Pi, I added the --data_type FP16 option.

$ python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --input_model frozen_inference_graph.pb  --data_type FP16 --tensorflow_object_detection_api_pipeline_config pipeline.config --reverse_input_channels --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json
Model Optimizer arguments:
Common parameters:
    - Path to the Input Model:  /home/hajime/Downloads/ssdlite_mobilenet_v2_coco_2018_05_09/frozen_inference_graph.pb
    - Path for generated IR:    /home/hajime/Downloads/ssdlite_mobilenet_v2_coco_2018_05_09/.
    - IR output name:   frozen_inference_graph
    - Log level:    ERROR
    - Batch:    Not specified, inherited from the model
    - Input layers:     Not specified, inherited from the model
    - Output layers:    Not specified, inherited from the model
    - Input shapes:     Not specified, inherited from the model
    - Mean values:  Not specified
    - Scale values:     Not specified
    - Scale factor:     Not specified
    - Precision of IR:  FP16
    - Enable fusing:    True
    - Enable grouped convolutions fusing:   True
    - Move mean values to preprocess section:   False
    - Reverse input channels:   True
TensorFlow specific parameters:
    - Input model in text protobuf format:  False
    - Path to model dump for TensorBoard:   None
    - List of shared libraries with TensorFlow custom layers implementation:    None
    - Update the configuration file with input/output node names:   None
    - Use configuration file used to generate the model with Object Detection API:  /home/hajime/Downloads/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
    - Use the config file:  None
Model Optimizer version:    2020.2.0-60-g0bc66e26ff
[ WARNING ]  
Detected not satisfied dependencies:
    tensorflow: installed: 2.1.0, required: < 2.0.0

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2020.2.120/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_tf.sh
Note that install_prerequisites scripts may install additional components.
The Preprocessor block has been removed. Only nodes performing mean value subtraction and scaling (if applicable) are kept.

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/hajime/Downloads/ssdlite_mobilenet_v2_coco_2018_05_09/./frozen_inference_graph.xml
[ SUCCESS ] BIN file: /home/hajime/Downloads/ssdlite_mobilenet_v2_coco_2018_05_09/./frozen_inference_graph.bin
[ SUCCESS ] Total execution time: 30.14 seconds. 
[ SUCCESS ] Memory consumed: 473 MB.
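
As a side note (not from the original post): --data_type FP16 halves the on-disk size of the weights compared with FP32, which is why it suits a memory-constrained target like the Raspberry Pi. A minimal numpy sketch of the storage difference, with an illustrative tensor shape rather than one taken from the actual model:

```python
import numpy as np

# Dummy weight tensor standing in for a convolution kernel
# (the shape is illustrative, not from ssdlite_mobilenet_v2).
weights_fp32 = np.random.rand(256, 128, 3, 3).astype(np.float32)
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp32.nbytes)  # 1179648 bytes
print(weights_fp16.nbytes)  # 589824 bytes -- exactly half
```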

Two files were generated:
frozen_inference_graph.xml
as the model topology, and
frozen_inference_graph.bin
as the weights.
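A note on two of the conversion flags above, with a numpy sketch (shapes and values are made up for illustration; 300×300 matches the usual SSD input size). --reverse_input_channels bakes an RGB/BGR channel swap into the IR, so a BGR frame as returned by OpenCV can be fed in without swapping at inference time; separately, the Inference Engine expects NCHW layout, so an HWC frame needs a transpose and a batch dimension:

```python
import numpy as np

# Stand-in for a BGR frame as OpenCV would return it (dummy values).
frame = np.random.randint(0, 256, (300, 300, 3), dtype=np.uint8)

# This is what --reverse_input_channels does inside the IR: reverse the
# channel order. With the flag baked in, this step is NOT needed at runtime.
swapped = frame[:, :, ::-1]

# Inference Engine inputs are NCHW: transpose HWC -> CHW, add a batch dim.
blob = frame.transpose(2, 0, 1)[np.newaxis, ...]

print(blob.shape)  # (1, 3, 300, 300)
```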

Comments

  1. […] Using the IR (Intermediate Representation) generated in the previous post, I tried person detection with OpenVINO. […]
