Converting LSTM layer to TFLite with float16 fails #61370

Open
sronen71 opened this issue Jul 24, 2023 · 9 comments
Assignees
pkgoogle

Labels
comp:lite (TF Lite related issues)
ModelOptimizationToolkit (TF Model Optimization Toolkit)
RNN (RNN related issues)
stale (This label marks the issue/pr stale - to be closed automatically if no activity)
stat:awaiting response (Status - Awaiting response from author)
TF 2.13 (For issues related to Tensorflow 2.13)
TFLiteConverter (For issues related to TFLite converter)
type:bug (Bug)

Comments

@sronen71

Issue type

Bug

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

2.13.0

Custom code

Yes

OS platform and distribution

Colab

Mobile device

No response

Python version

3.10

Bazel version

No response

GCC/compiler version

No response

CUDA/cuDNN version

No response

GPU model and memory

No response

Current behavior?

The TFLite converter fails to convert a model with an LSTM layer to a float16 target.
It runs for a very long time, consumes RAM up to the system maximum, and then crashes.

Expected behavior: the model should convert.

Workaround: instead of the LSTM layer, wrap an LSTMCell in an RNN layer.
TFLite successfully converts this alternative with the float16 target, as in the sketch below.
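
A minimal sketch of the workaround (shapes and layer sizes here are illustrative, not taken from the gist):

import tensorflow as tf

# Wrap an LSTMCell in tf.keras.layers.RNN instead of using
# tf.keras.layers.LSTM directly.
inputs = tf.keras.Input(shape=(28, 28))
x = tf.keras.layers.RNN(tf.keras.layers.LSTMCell(20), return_sequences=True)(inputs)
outputs = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inputs, outputs)

# float16 post-training quantization converts successfully with this variant.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()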

Standalone code to reproduce the issue

See this gist:
https://colab.research.google.com/gist/sronen71/9b016245f507280f867841a7161fad8d/keras-lstm-fusion-codelab.ipynb

Relevant log output

The program crashes after running out of RAM.
@google-ml-butler google-ml-butler bot added the type:bug Bug label Jul 24, 2023
@sushreebarsa sushreebarsa added comp:lite TF Lite related issues TF 2.13 For issues related to Tensorflow 2.13 labels Jul 25, 2023
@sushreebarsa
Contributor

@sronen71 For LSTM conversion, there is an article and a Colab notebook describing the process; you can follow that Colab gist. Among the four kinds of quantization, you could also try other modes such as dynamic range (a sketch follows below). Hope this helps; please let us know. Thank you!
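
A minimal sketch of dynamic range quantization, assuming `model` is the Keras LSTM model from the gist (unlike the float16 path, no target_spec is set):

import tensorflow as tf

# Dynamic range quantization: weights are quantized to int8 at
# conversion time; activations stay in float.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()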

@sushreebarsa sushreebarsa added the stat:awaiting response Status - Awaiting response from author label Jul 26, 2023
@sronen71
Author
sronen71 commented Jul 26, 2023

@sushreebarsa
I checked that information and the Colab example previously, and based my Colab gist on them.
The original Colab example is currently broken because of a tf-nightly error.
I was able to run it successfully yesterday; however, it only uses the vanilla convert.
When I use the optimize option with the float16 target, as in the Colab gist I attached, the reported defect occurs.

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label Jul 26, 2023
@pkgoogle pkgoogle added RNN RNN related issues TFLiteConverter For issues related to TFLite converter labels Jul 27, 2023
@pjpratik
Contributor

@pkgoogle I was able to reproduce this issue. Please find this gist.

Could you please look into this?

Thanks.

@pjpratik pjpratik assigned pkgoogle and unassigned pjpratik Jul 28, 2023
@pkgoogle pkgoogle added the ModelOptimizationToolkit TF Model Optimization Toolkit label Jul 31, 2023
@pkgoogle

I was able to replicate this with @pjpratik's gist. @abattery, can you please take a look? Thanks.

@pkgoogle pkgoogle added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jul 31, 2023
@devadigapratham

Hi, can I work on this issue?

@pkgoogle

Hi @devadigapratham, we always welcome any contribution from the community. Please let us know if you have any questions.

@barrypitman

Also running into this issue with float16 TFLite conversions.

@LakshmiKalaKadali
Contributor

Hi @sronen71 ,

You can now move to AI-Edge-Torch to resolve your issue. You can find more information here: googleblog.

Here is a simple script for converting an LSTM model:

import torch
import ai_edge_torch

# A two-layer LSTM with input size 10 and hidden size 20.
rnn = torch.nn.LSTM(10, 20, 2)
sample_inputs = (torch.randn(5, 3, 10),)

try:
  # Convert the PyTorch model with AI Edge Torch and export it to TFLite.
  edge_model = ai_edge_torch.convert(rnn.eval(), sample_inputs)
  edge_model.export("rnn.tflite")
  print("Conversion successful!")
except Exception as e:
  print("Error during conversion:", e)

Try visualizing the result in model-explorer as well.
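
For example (a minimal sketch; assumes the ai-edge-model-explorer package is installed):

import model_explorer

# Launch the Model Explorer UI on the exported TFLite file.
model_explorer.visualize("rnn.tflite")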

Please try them out and let us know if this resolves your issue. If you still need further help, feel free to open a new issue at the respective repos.

@LakshmiKalaKadali LakshmiKalaKadali added stat:awaiting response Status - Awaiting response from author and removed stat:awaiting tensorflower Status - Awaiting response from tensorflower labels Jun 14, 2024
@github-actions

This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.

@github-actions github-actions bot added the stale This label marks the issue/pr stale - to be closed automatically if no activity label Jun 22, 2024
Projects
None yet
Development

No branches or pull requests

8 participants