
Build TensorFlow Lite for Android and reduce binary size to use with C API #50946

Closed
cristianl1251 opened this issue Jul 26, 2021 · 9 comments
Labels: comp:lite (TF Lite related issues) · type:build/install (Build and install issues)

@cristianl1251

Hello,

I would like to build TensorFlow Lite for Android while reducing the binary size, optimizing it based on the operations used by my model. Then I want to use the generated .so files for arm64-v8a and armeabi-v7a in JNI with the C API.

I am able to build TensorFlow Lite for Android as described in the Build Android guide and use it in my Android Studio project. However, the binary files are too big for my application. To reduce the binary size, I followed the instructions in the Reduce Binary Size guide, but as stated in its Known Limitations, it does not support the C API. The generated .aar does not contain the C headers, and when I try to use the .so files in my Android Studio project, none of the C functions are available.

Is there any other way to reduce the TensorFlow Lite binary size for use in JNI with the C API?

Thanks.

cristianl1251 added the type:build/install (Build and install issues) label on Jul 26, 2021
@abattery (Contributor)

Bazel supports the selective build mechanism for the C API:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/c/BUILD#L115

@abattery (Contributor)

The tflite_custom_c_library build rule can generate the selectively built TensorFlow Lite C library.

Saduf2019 added the stat:awaiting response (Status - Awaiting response from author) label on Jul 26, 2021
@cristianl1251 (Author)

Cool, I replaced the existing tflite_custom_c_library target in tensorflow/lite/c/BUILD with:

tflite_custom_c_library(
    name = "selectively_built_c_api_test_lib",
    testonly = 1,
    models = [
        "//tensorflow/lite:testdata/mymodel1.bin",
        "//tensorflow/lite:testdata/mymodel2.bin",
    ],
    visibility = ["//visibility:private"],
)

Then I built armeabi-v7a with

bazel build -c opt --config=android_arm //tensorflow/lite/c:selectively_built_c_api_test_lib

and arm64-v8a with

bazel build -c opt --config=android_arm64 //tensorflow/lite/c:selectively_built_c_api_test_lib
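
For reference, the build outputs land under the bazel-bin convenience symlink; on my setup they appeared in bazel-bin/tensorflow/lite/c/ (exact file names depend on the rule and configuration):

ls bazel-bin/tensorflow/lite/c/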

I tested the generated .so files with my app in Android Studio, and it built successfully.

Binary size reduction:

  • arm64-v8a: 2.5 MB to 1.5 MB
  • armeabi-v7a: 1.9 MB to 1.7 MB

Is there any reason why the armeabi-v7a binary shrinks less than the arm64-v8a one?

Thanks for pointing me in the right direction, @abattery.

@abattery (Contributor)

I haven't looked at the details. It looks like the pruned op kernels consume more space on arm64-v8a.

tensorflowbutler removed the stat:awaiting response (Status - Awaiting response from author) label on Jul 27, 2021
saikumarchalla added the comp:lite (TF Lite related issues) label on Jul 28, 2021
@cristianl1251 (Author)

Hey, I still couldn't make it work.

When I build TensorFlow Lite using the selective build to generate the library and use it with JNI, my app builds in Android Studio. However, when I flash it to the device, it cannot load the library. For the selective build I use:

bazel build -c opt --config=android_arm64 //tensorflow/lite/c:selectively_built_c_api_test_lib

To verify whether the issue was related to the selective build, I also tried building without it. Again, I could build my app in Android Studio using the generated library; however, it did not work when I flashed it to the device. For the default build I use:

bazel build -c opt --config=android_arm64 //tensorflow/lite/c:tensorflowlite_c

Previously, I built the library with tensorflow/lite/tools/build_aar.sh, and I could use the generated library in my app successfully. The working build command:

./tensorflow/lite/tools/build_aar.sh --target_archs=arm64-v8a

The issue is that, as stated in the Reduce Binary Size guide, the selective build does not support the C API, and I need to reduce my binary size.

I have the following questions: what is the difference between bazel build -c opt --config=android_arm64 //tensorflow/lite/c:tensorflowlite_c and ./tensorflow/lite/tools/build_aar.sh --target_archs=arm64-v8a? Am I missing anything in the tensorflowlite_c build rule that would make it behave like ./tensorflow/lite/tools/build_aar.sh?

Thanks!

@abattery (Contributor)

Could you share how the compiled shared object fails in the mobile app? If possible, could you share the error message you got?

@cristianl1251 (Author)

Hey @abattery, thanks for your reply.

> Could you share how the compiled shared object fails in the mobile app? If possible, could you share the error message you got?

Unfortunately, I don't have access to the Java source code of my mobile app. I am just replacing the app's compiled shared object, and the error it gives me is not very descriptive; it is something like WARNING: Could not load libMyLib.so.

On the other hand, I think I figured out my issue. As I said, I don't have access to the Java source code, so the compiled shared object has to keep the specific name that the Java side already loads. My problem was that I was compiling the shared object with the tensorflowlite_c build rule and then renaming the generated file. The app couldn't load the shared object because of the renaming, so I instead renamed the tensorflowlite_c build rule to my_lib_specific_name, which made it generate the shared object with the proper name. This time, it worked with my mobile app.
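
In case it helps anyone else: my guess is that a plain file rename doesn't change the SONAME baked into the .so at link time, which would explain why the loader refused the renamed library. Assuming binutils or the NDK's llvm-readelf is available, the embedded name can be checked with:

readelf -d libMyLib.so | grep SONAME

Renaming the Bazel target instead makes the linker embed the right name from the start.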

Then, I tried to do the same for the selective build in order to reduce the library size. Initially, it did not work because I was trying to use the selectively_built_c_api_test_lib build rule on its own. However, as pointed out here, the correct way of doing it (I think) is to use the tensorflowlite_c build rule, replacing

":c_api",
":c_api_experimental",

with

"selectively_built_c_api_test_lib",

I did that and it worked: the shared object shrank from 2.5 MB to 1.1 MB, and my mobile app runs with it!

Here is the diff with the necessary changes:
[screenshot of the tensorflow/lite/c/BUILD diff; image not reproduced here]
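
In text form, the change looks roughly like this. This is a sketch reconstructed from the description above, assuming tensorflowlite_c is defined with tflite_cc_shared_object as in tensorflow/lite/c/BUILD; my_lib_specific_name is the app-specific placeholder name I mentioned earlier, and all other attributes of the rule stay as they were:

tflite_cc_shared_object(
    name = "my_lib_specific_name",  # renamed from "tensorflowlite_c" so the .so gets the name the app expects
    # ... other attributes of the original rule left unchanged ...
    deps = [
        # ":c_api",               <- removed
        # ":c_api_experimental",  <- removed
        ":selectively_built_c_api_test_lib",
        # ... remaining deps left unchanged ...
    ],
)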

Again, thanks for your help @abattery.

@abattery (Contributor)

Great, thanks for sharing your success!
