TensorFlow Lite selective build results in `_ZN6google8protobuf8internal26fixed_address_empty_stringE` error; build fails with the `--config=monolithic` setup, returning the `Check failed: existing == nullptr (Tensor already registered)` error #60831
Hi @dachshund-ncu, which way are you building, with Docker or without? Please provide exact steps so that we can reproduce your issue. This includes commands prior to the docker commands, the docker commands themselves (if applicable), and commands within the docker container (if applicable).
Hi @pkgoogle, the build steps are presented below. Prepare the docker environment:
Inside the docker environment:
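The original commands did not survive the page scrape. For reference, the Docker-based flow described in the official TensorFlow Lite Android build documentation looks roughly like this (a sketch, not the reporter's actual commands; the model path is a placeholder):

```shell
# On the host: build the image from the Dockerfile shipped with the TF docs
# and start a container with the current directory mounted.
docker build . -t tflite-builder -f tflite-android.Dockerfile
docker run -it -v $PWD:/host_dir tflite-builder bash

# Inside the container: configure, then run the selective-build helper.
# ./configure   # answer the interactive prompts (SDK/NDK paths, etc.)
# bash tensorflow/lite/tools/build_aar.sh \
#   --input_models=/host_dir/model.tflite \
#   --target_archs=arm64-v8a
```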
Hi @dachshund-ncu, thanks for providing detailed reproduction steps, this helps a lot! Following your instructions with some modifications, I was able to build the tflite aar. However, I see you are using a command that depends on your tflite models; since I don't have access to those models, I can't reproduce your issue with them. Building the "fatter" aar will probably allow you to continue, at the cost of a bigger aar file. To build it, install your android tools as stated in the documentation:
Your version is probably fine if you do it line by line, but when I pasted it all at once it seemed to answer "No" to the license prompt. When building the aar, just for arm64-v8a:
Let me know if this works for you, and please close the issue if this is sufficient. If you need help with the specific model cases, please upload the additional models, or at least a minimal toy model that reproduces the issue.
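The commands referenced here were lost in the scrape. A sketch of what is likely meant, based on the documented tooling (the license fix and model path are assumptions, not quotes from the thread):

```shell
# Accept all Android SDK licenses non-interactively, so pasting a block of
# commands cannot accidentally answer "No" to a license prompt.
# Assumption: sdkmanager is on PATH, as in the documented Docker image.
yes | sdkmanager --licenses

# Selective build of the AAR for arm64-v8a only (model path is a placeholder).
bash tensorflow/lite/tools/build_aar.sh \
  --input_models=model.tflite \
  --target_archs=arm64-v8a
```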
Hello, thanks for your response!
Unfortunately I cannot share the input models, but I asked for some dummy models that use the same operators. They result in the same error as stated above. Thank you for your support!
Hi @dachshund-ncu, can you provide the exact steps that led to the above error message? I am unsure what you mean by "the provided solution", i.e. I'm trying to determine: did it fail on the build step? Were you able to build the aar successfully, or did it fail after you integrated it? I'll try to build the aar with your dummy models.
Hi @dachshund-ncu, with your toy model I was able to build fine if I don't include this step: `echo -n "build --config=monolithic" >> .bazelrc`. Do you absolutely need this configuration?
Steps are exactly the same as before, but instead of using
It results in this output:
and no .aar file is produced.
The reason for adding --config=monolithic to the .bazelrc file is to avoid another error:
during running of the android project. This appeared to fix the issue for some other users (see here, here, and here), yet fails to build in my case.
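The workaround itself is just a one-line append to the `.bazelrc` at the root of the TensorFlow checkout. A minimal sketch (using a temp directory as a stand-in for the checkout):

```shell
# Append the monolithic config to .bazelrc, as in the workaround above.
# A temp directory stands in for the TensorFlow source root here.
cd "$(mktemp -d)"
echo "build --config=monolithic" >> .bazelrc
grep monolithic .bazelrc   # prints: build --config=monolithic
```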
Hi @dachshund-ncu, there seems to be an issue with that command such that it doesn't output the aar... I was able to get the output if I made the aar really fat, as in the documentation:
Can you try that, and use that aar for now while I investigate the other part?
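The documented "fat" AAR build referred to here looks roughly like this (a sketch; exact flags vary across TensorFlow versions, so treat this as an assumption rather than a quote from the thread):

```shell
# Build the full (non-selective) AARs covering all four ABIs.
bazel build -c opt --cxxopt='--std=c++17' \
  --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
  //tensorflow/lite/java:tensorflow-lite

bazel build -c opt --cxxopt='--std=c++17' \
  --fat_apk_cpu=x86,x86_64,arm64-v8a,armeabi-v7a \
  //tensorflow/lite/java:tensorflow-lite-select-tf-ops
```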
Hi @pkgoogle, the command provided in the documentation:
produces only the
Hi @pkgoogle, I was wondering whether my issue is still under your consideration.
Hi @dachshund-ncu, I'm still looking at it but was having an internal docker issue which I just resolved, I'll let you know when I have more information. Thanks. |
I tried not editing the dockerfile, but I'm running into the same issue:
It seems to be caused by the combination with the workaround. Hi @terryheo, can you please take a look? Thanks.
I have the same problem. I had to add the workaround for tensorflow 2.13.0, but then I get this build error.
Hi @dachshund-ncu, can you provide reproducible steps for the root cause of the issue?
We shouldn't really rely on work-arounds long-term, and it seems like we fixed the "official way" before, so let's try to actually fix the root cause here.
Dear @pkgoogle, I provide the steps to reproduce this bug below. Prepare the docker environment:
Inside the docker environment:
I also attach a dummy model that contains the operators used.
Hi @dachshund-ncu, I was able to integrate the aar's into my default empty Android project fine ("Hello Android!"). I'm not really exercising any functionality, though, so maybe that's why I'm not running into the issue... How are you integrating the aar's into your project, and how are you using the functionality that causes the error?
This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you. |
Hi @pkgoogle, the test is to run a single inference through the network, and then the exception occurs.
Hi @mszczuj, there's still a lot of missing context; can you please show me the Android Studio code? A full project export of a toy version (one that just loads the model and runs inference) would be the easiest to share.
Dear @pkgoogle, you'll find a dummy android studio project attached. You'll only need to change one line in inseye_tracker_core/build.gradle
to
and run
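The exact Gradle line was lost in the scrape. For context, wiring locally built AARs into a `build.gradle` typically looks like this (an illustration with hypothetical paths, not the project's actual line):

```groovy
dependencies {
    // Hypothetical local-AAR dependencies standing in for the Maven artifact;
    // the paths are placeholders for wherever the built AARs were copied.
    implementation files('libs/tensorflow-lite.aar')
    implementation files('libs/tensorflow-lite-select-tf-ops.aar')
}
```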
Hi @dachshund-ncu, thanks for the info, it helps. @terryheo, I was able to replicate this with @dachshund-ncu's project. I also rebuilt the aar using the official instructions: java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "_ZNK6google8protobuf7Message11GetTypeNameEv" referenced by "/data/app/~~bIkQuSqajVmYEPNnKetv5w==/com.inseye.core.test-7O34SiL9duJWfYnOaIKVsA==/base.apk!/lib/arm64-v8a/libtensorflowlite_flex_jni.so"...
Any update on this issue? I've run into a similar error when building tensorflow select ops via docker on branch r2.13 with my model.
Hi @shsaronian, can you please try with r2.15 or nightly? Also, can you please share your reproduction steps the same way as above; that will give us more data to potentially solve the issue, if it's the same issue. Thanks for your help.
Hi, thanks for responding. I will give r2.15 a try and let you know. Meanwhile, I downgraded to r2.9 and followed the official instructions using docker, and I managed to build successfully. I didn't even need the
@pkgoogle I'm providing my reproduction steps on r2.13 that lead to the above issue.
Just like above, if I execute the build command without the monolithic option, I get the
I am encountering a similar error with 2.13. Does anyone know if 2.15 solves the error? Do you need the
command for 2.15?
I tried with 2.15; the same error occurs.
Issue Type
Bug
Have you reproduced the bug with TF nightly?
No
Source
source
Tensorflow Version
2.13
Custom Code
No
OS Platform and Distribution
Ubuntu 22.04 LTS, Ubuntu 23.04
Mobile device
No response
Python version
No response
Bazel version
6.1.0
GCC/Compiler version
9.4.0
CUDA/cuDNN version
No response
GPU model and memory
No response
Current Behaviour?
I tried to create a selective build of TensorFlow Lite, following this guide.
The build succeeded and I added the tensorflow-lite-select-tf-ops.aar to the android studio project, yet it resulted in an error:
I noticed there are fixes for this problem, e.g. here: one just needs to add the following line:
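(The quoted line was lost in the page scrape; based on the workaround command quoted earlier in this thread, it is presumably:)

```
build --config=monolithic
```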
to the .bazelrc file.
But the build with this configuration fails with the following error:
Standalone code to reproduce the issue
Relevant log output
No response