Unable to read TFLite after conversion #50182
Comments
There seems to be no problem on Ubuntu. (I'm just a hobby programmer.)

Attachment: model_from_pb_float32.tflite.tar.gz

```shell
$ xhost +local: && \
  docker run -it --rm \
  -v `pwd`:/home/user/workdir \
  -v /tmp/.X11-unix/:/tmp/.X11-unix:rw \
  --device /dev/video0:/dev/video0:mwr \
  --net=host \
  -e XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR \
  -e DISPLAY=$DISPLAY \
  --privileged \
  pinto0309/openvino2tensorflow:latest

$ cd workdir
$ wget https://github.com/blaueck/tf-mtcnn/raw/master/mtcnn.pb
$ pb_to_tflite \
  --pb_file_path mtcnn.pb \
  --inputs input,min_size,thresholds,factor \
  --outputs prob,landmarks,box
```
```
$ python3
>>> import tensorflow as tf
>>> model_path = 'saved_model_from_pb/model_from_pb_float32.tflite'
>>> interpreter = tf.lite.Interpreter(model_path=model_path)
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
INFO: Created TensorFlow Lite delegate for select TF ops.
2021-06-12 11:10:20.333686: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-06-12 11:10:20.336650: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/intel/openvino_2021/data_processing/dl_streamer/lib:/opt/intel/openvino_2021/data_processing/gstreamer/lib:/opt/intel/openvino_2021/opencv/lib:/opt/intel/openvino_2021/deployment_tools/ngraph/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/tbb/lib::/opt/intel/openvino_2021/deployment_tools/inference_engine/external/hddl/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/omp/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/gna/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/external/mkltiny_lnx/lib:/opt/intel/openvino_2021/deployment_tools/inference_engine/lib/intel64:/usr/local/cuda/compat/lib:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-06-12 11:10:20.336665: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-06-12 11:10:20.336691: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: ubuntu2004
2021-06-12 11:10:20.336696: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: ubuntu2004
2021-06-12 11:10:20.336721: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: Not found: was unable to find libcuda.so DSO loaded into this program
2021-06-12 11:10:20.336759: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 465.19.1
INFO: TfLiteFlexDelegate delegate: 7 nodes delegated out of 46 nodes with 2 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 97 nodes with 2 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 0 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 2 nodes delegated out of 116 nodes with 2 partitions.
INFO: TfLiteFlexDelegate delegate: 0 nodes delegated out of 2 nodes with 0 partitions.
INFO: TfLiteFlexDelegate delegate: 7 nodes delegated out of 63 nodes with 3 partitions.
>>> tf.__version__
'2.5.0'
>>>
```
Hi @PINTO0309, thanks for responding. I tried the same on Ubuntu, but it didn't work for me. Maybe I am doing something wrong in the script. Could you please share your pb-to-tflite script?
Sorry. A TensorFlow or TensorFlow Lite runtime with FlexDelegate enabled is required. For example, I have committed FlexDelegate-enabled installers for armv7l and aarch64 below. For an x86 machine, you can use the . After running
Hi @PINTO0309, thanks for the swift response. Could you share the pb_to_saved_model and saved_model_to_tflite scripts too?
My tool. |
Thanks a ton @PINTO0309. |
Problem Statement
I tried converting a frozen graph (https://github.com/blaueck/tf-mtcnn/blob/master/mtcnn.pb) to TFLite using TF v2.5.0. The conversion was successful, but I am unable to load the generated TFLite model for inference.
Code for conversion to tflite
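The conversion snippet itself was not captured in this scrape. As a stand-in, here is a minimal sketch using `tf.compat.v1.lite.TFLiteConverter.from_frozen_graph`, the TF 2.x entry point for frozen `.pb` files; the tiny constants-only graph built here is hypothetical and substitutes for `mtcnn.pb`, whose real input/output names appear in the commands earlier in the thread:

```python
import tensorflow as tf

# Build a tiny stand-in graph; a constants-only GraphDef is already "frozen".
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, [1, 4], name="input")
    w = tf.constant([[1.0], [2.0], [3.0], [4.0]])
    tf.identity(tf.matmul(x, w), name="prob")

with open("tiny_frozen.pb", "wb") as f:
    f.write(g.as_graph_def().SerializeToString())

# For the real mtcnn.pb the arrays would instead be
# inputs: input,min_size,thresholds,factor / outputs: prob,landmarks,box.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="tiny_frozen.pb",
    input_arrays=["input"],
    output_arrays=["prob"],
)
tflite_model = converter.convert()
with open("model_from_pb_float32.tflite", "wb") as f:
    f.write(tflite_model)
```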
Code for loading TFLite
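The loading snippet was likewise elided; per the session transcript above, it amounts to constructing a `tf.lite.Interpreter` from the model path. A self-contained sketch of the full load-and-invoke sequence, converting a throwaway model in place of the missing `saved_model_from_pb/model_from_pb_float32.tflite` (an assumption of this example):

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the snippet is runnable; the thread instead passes
# model_path='saved_model_from_pb/model_from_pb_float32.tflite'.
@tf.function(input_signature=[tf.TensorSpec([1, 3], tf.float32)])
def double(x):
    return x * 2.0

tflite_bytes = tf.lite.TFLiteConverter.from_concrete_functions(
    [double.get_concrete_function()]).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()   # the reported failure occurs at load time

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.ones(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))   # [[2. 2. 2.]]
```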
Error while loading TFLite
System information
No CUDA/GPU.