converting LSTM layer to tflite with float16 fails #61370
Comments
@sronen71 For LSTM conversion, there is an article and a Colab describing the process, and you can follow the Colab gist to do it. Of the 4 kinds of quantization, you could also try other ways such as dynamic range. Hope this helps; please let us know. Thank you!
@sushreebarsa
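As a reference for the dynamic range suggestion above, here is a minimal sketch of dynamic range quantization with the TFLite converter (the small Dense model is a placeholder standing in for the reporter's LSTM network):

```python
import tensorflow as tf

# Placeholder model; substitute your own LSTM-based network here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4),
])

# Dynamic range quantization: weights are quantized to 8 bits,
# while activations remain in float at inference time.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # returns the model as a flatbuffer (bytes)
```

The resulting `tflite_model` bytes can be written to disk and loaded with the TFLite interpreter as usual.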
Hi, can I work on this issue?
Hi @devadigapratham, we always welcome any contribution from the community. Please let us know if you have any questions.
Also running into this issue for float16 tflite conversions.
Hi @sronen71, you can now move to AI-Edge-Torch to resolve your issue; you can find more information here: googleblog. Here is a simple script for converting an LSTM model:

import torch
import ai_edge_torch

rnn = torch.nn.LSTM(10, 20, 2)
sample_inputs = (torch.randn(5, 3, 10),)

try:
    # Convert the PyTorch model and export it as a TFLite flatbuffer.
    edge_model = ai_edge_torch.convert(rnn.eval(), sample_inputs)
    edge_model.export("rnn.tflite")
    print("Conversion successful!")
except Exception as e:
    print("Error during conversion:", e)

Try visualizing the result in model-explorer as well. Please try this out and let us know if it resolves your issue. If you still need further help, feel free to open a new issue at the respective repos.
This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.
Issue type
Bug
Have you reproduced the bug with TensorFlow Nightly?
Yes
Source
source
TensorFlow version
2.13.0
Custom code
Yes
OS platform and distribution
Colab
Mobile device
No response
Python version
3.10
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
No response
GPU model and memory
No response
Current behavior?
The TFLite converter fails to convert a model containing an LSTM layer to a float16 target.
It runs for a very long time, increases RAM consumption until all available system memory is exhausted, then crashes.
Expected behavior: the conversion should succeed.
Workaround: wrap an LSTMCell in an RNN layer instead of using the LSTM layer.
TFLite successfully converts this alternative with the float16 target.
Standalone code to reproduce the issue
Relevant log output