
Unable to read TFLite after conversion #50182

Closed
ankitShuklaDev opened this issue Jun 9, 2021 · 7 comments
Assignees
Labels
comp:lite (TF Lite related issues), TF 2.5 (Issues related to TF 2.5), type:support (Support issues)

Comments

@ankitShuklaDev

Problem Statement

I tried converting a frozen graph (https://github.com/blaueck/tf-mtcnn/blob/master/mtcnn.pb) to TFLite with TF v2.5.0. The conversion succeeded, but I am unable to load the generated TFLite model for inference.

Code for conversion to tflite

import tensorflow as tf

input_arrays = ['input', 'min_size', 'thresholds', 'factor']

output_node_names = ['prob', 'landmarks', 'box']  # Output nodes

graph_def_file = 'mtcnn.pb'


converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(graph_def_file, input_arrays, output_node_names)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops.
    tf.lite.OpsSet.SELECT_TF_OPS  # enable TensorFlow ops.
]

converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()

if __name__ == "__main__":
    with tf.io.gfile.GFile('mtcnn.tflite', 'wb') as f:
        f.write(tflite_model)
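One cheap sanity check on the converter output before trying to load it: a TFLite model is a FlatBuffer whose file identifier `TFL3` sits at byte offset 4, so a truncated or corrupt write fails this check immediately. A minimal sketch (the helper below is hypothetical, not part of any TensorFlow API):

```python
def looks_like_tflite(data: bytes) -> bool:
    # FlatBuffer file identifiers occupy bytes 4..8; TFLite uses b"TFL3".
    return len(data) >= 8 and data[4:8] == b"TFL3"

# Check an in-memory buffer shaped like a TFLite header.
fake_header = b"\x1c\x00\x00\x00TFL3" + b"\x00" * 24
assert looks_like_tflite(fake_header)          # valid identifier
assert not looks_like_tflite(b"not a model")   # missing identifier
```

Note that the error reported below ("Did not get operators or tensors in subgraph 1") is raised after the FlatBuffer parses, so a file that passes this check can still fail to load; a failing check would point instead at a truncated write.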

Code for loading TFLite

    model_path = r'mtcnn.tflite'
    interpreter = tf.lite.Interpreter(model_path=model_path)

Error while loading TFLite

File "C:\Users\a84191678\Anaconda3\lib\site-packages\tensorflow\lite\python\interpreter.py", line 348, in __init__
    _interpreter_wrapper.CreateWrapperFromFile(
ValueError: Did not get operators or tensors in subgraph 1.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): pip
  • TensorFlow version: 2.5.0
  • Python version: 3.8

No CUDA/GPU.

@abattery abattery self-assigned this Jun 9, 2021
@saikumarchalla saikumarchalla removed their assignment Jun 10, 2021
@saikumarchalla saikumarchalla added comp:lite TF Lite related issues TF 2.5 Issues related to TF 2.5 type:support Support issues labels Jun 10, 2021
@PINTO0309
Contributor
PINTO0309 commented Jun 12, 2021

There seems to be no problem on Ubuntu. (I'm just a hobby programmer.)

  • Ubuntu 20.04 host (Docker + Ubuntu 18.04 + Python 3.6)
  • TensorFlow v2.5.0 (source build)

model_from_pb_float32.tflite.tar.gz

$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  pinto0309/openvino2tensorflow:latest
$ cd workdir
$ wget https://github.com/blaueck/tf-mtcnn/raw/master/mtcnn.pb
$ pb_to_tflite \
--pb_file_path mtcnn.pb \
--inputs input,min_size,thresholds,factor \
--outputs prob,landmarks,box
$ python3
>>> import tensorflow as tf
>>> model_path = 'saved_model_from_pb/model_from_pb_float32.tflite'
>>> interpreter = tf.lite.Interpreter(model_path=model_path)
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: Created TensorFlow Lite delegate for select TF ops.
2021-06-12 11:10:20.333686: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-12 11:10:20.336650: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2021/data_processing/dl_streamer/lib:/opt/intel/openvino_2021/data_processing/gstreamer/lib:/opt/intel/openvino_2021/opencv/lib:/opt/intel/openvino_2021/deployment_tools/ngraph/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-06-12 11:10:20.336665: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-06-12 11:10:20.336691: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: ubuntu2004
2021-06-12 11:10:20.336696: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: ubuntu2004
2021-06-12 11:10:20.336721: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
2021-06-12 11:10:20.336759: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 465.19.1
INFO: TfLiteFlexDelegate delegate: 7 nodes delegated out of 46 nodes with 2 partitions.

INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.

INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 97 nodes with 2 partitions.

INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.

INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 116 nodes with 2 partitions.

INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 2 nodes with 0 partitions.

INFO: TfLiteFlexDelegate delegate: 7 nodes delegated out of 63 nodes with 3 partitions.

>>> tf.__version__
'2.5.0'
>>> 

@ankitShuklaDev
Author

Hi @PINTO0309

Thanks for responding. I tried the same on Ubuntu, but it didn't work for me.
[screenshot]

Maybe I am doing something wrong in the script. Could you please share your pb-to-tflite script?

@PINTO0309
Contributor
PINTO0309 commented Jun 14, 2021

@ankitShuklaDev

Maybe I am doing something wrong in the script. Could you please share your pb-to-tflite script?

Sorry. For example,

$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  pinto0309/openvino2tensorflow:latest

$ cd workdir

$ pb_to_saved_model \
--pb_file_path mtcnn.pb \
--inputs input:0,min_size:0,thresholds:0,factor:0 \
--outputs prob:0,landmarks:0,box:0

$ saved_model_to_tflite \
--saved_model_dir_path saved_model_from_pb \
--input_shapes [1,64,64,3] \
--output_no_quant_float32_tflite
$ python3
>>> import tensorflow as tf
>>> model_path = 'tflite_from_saved_model/model_float32.tflite'
>>> interpreter = tf.lite.Interpreter(model_path=model_path)

INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: Created TensorFlow Lite delegate for select TF ops.
2021-06-14 17:10:08.595271: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-14 17:10:08.595945: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2021/data_processing/dl_streamer/lib:/opt/intel/openvino_2021/data_processing/gstreamer/lib:/opt/intel/openvino_2021/opencv/lib:/opt/intel/openvino_2021/deployment_tools/ngraph/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-06-14 17:10:08.596105: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-06-14 17:10:08.596117: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: ubuntu2004
2021-06-14 17:10:08.596122: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: ubuntu2004
2021-06-14 17:10:08.596145: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
2021-06-14 17:10:08.596163: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 465.19.1
INFO: TfLiteFlexDelegate delegate: 7 nodes delegated out of 46 nodes with 2 partitions.

INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.

INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 97 nodes with 2 partitions.

INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.

INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 116 nodes with 2 partitions.

INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 2 nodes with 0 partitions.

INFO: TfLiteFlexDelegate delegate: 7 nodes delegated out of 63 nodes with 3 partitions.

model_float32.tflite.tar.gz
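Note the naming convention in the commands above: pb_to_saved_model takes fully qualified tensor names with an output-slot index (input:0), while tf.compat.v1.lite.TFLiteConverter.from_frozen_graph earlier in the thread takes bare node names (input). If you script both paths, a tiny helper (hypothetical, purely illustrative) avoids mixing them up:

```python
def node_to_tensor_names(node_names, slot=0):
    """Append an output-slot index to bare graph-node names, e.g. 'input' -> 'input:0'."""
    return [f"{name}:{slot}" for name in node_names]

inputs = ["input", "min_size", "thresholds", "factor"]
print(node_to_tensor_names(inputs))
# ['input:0', 'min_size:0', 'thresholds:0', 'factor:0']
```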

A TensorFlow or TensorFlow Lite runtime with the Flex delegate enabled is required. For example, I have committed Flex-delegate-enabled installers for armv7l and aarch64 here:
https://github.com/PINTO0309/Tensorflow-bin

As an example, on an x86 machine you can build such a runtime yourself: after running ./configure, build it with the following Bazel command.

$ git clone -b v2.5.0 https://github.com/tensorflow/tensorflow.git && cd tensorflow
$ ./configure

$ sudo bazel build \
--config=monolithic \
--config=noaws \
--config=nohdfs \
--config=nonccl \
--config=v2 \
--define=tflite_pip_with_flex=true \
--define=tflite_with_xnnpack=true \
//tensorflow/tools/pip_package:build_pip_package

@ankitShuklaDev
Author
ankitShuklaDev commented Jun 14, 2021

Hi @PINTO0309,

Thanks for the swift response. Could you share the pb_to_saved_model and saved_model_to_tflite scripts too?

@PINTO0309
Contributor

@ankitShuklaDev
Author

Thanks a ton @PINTO0309.

