Issues: tensorflow/tensorflow

Issues list

INT4 and other low-precision conversion support status
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TFLiteConverter, type:feature
#64193 opened Mar 21, 2024 by AIWintermuteAI
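
For context on what this request extends: TFLite's existing full-integer path quantizes to int8 using a representative dataset for calibration. A minimal sketch of that baseline flow, with a placeholder SavedModel path and random data standing in for real calibration samples:

    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Placeholder calibration data; substitute real input samples.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    tflite_model = converter.convert()
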
Full Integer Quantization Issue with Multiple Signatures in TensorFlow Lite
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.15, TFLiteConverter, type:performance
#62996 opened Feb 20, 2024 by lxzheng
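
For reference, a SavedModel with several signatures can be converted by passing signature_keys to from_saved_model, and each converted signature is then exercised through the interpreter's get_signature_runner. A rough sketch; the path and signature names are placeholders:

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model(
        "/path/to/saved_model",                        # placeholder path
        signature_keys=["serving_default", "encode"])  # placeholder signature names
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    tflite_model = converter.convert()

    # Each converted signature gets its own runner in the interpreter.
    interpreter = tf.lite.Interpreter(model_content=tflite_model)
    runner = interpreter.get_signature_runner("serving_default")
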
TFLite Converter: add the ability to exclude some ops from quantization
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, TFLiteConverter, type:feature
#62923 opened Feb 8, 2024 by adamp87
Model conversion crashes when feeding data during quantization
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.15, TFLiteConverter, type:bug
#62701 opened Dec 27, 2023 by houcheng
Internal quantize ops don't match external quantization
Labels: comp:lite, comp:ops, ModelOptimizationToolkit, stale, stat:awaiting response, stat:awaiting tensorflower, TF2.14, TFLiteConverter, type:bug, type:performance
#62530 opened Dec 1, 2023 by EClemMarq
PadV2 constant_values tensor not quantized using 16x8 quantization mode
Labels: comp:lite, ModelOptimizationToolkit, stale, stat:awaiting response, stat:awaiting tensorflower, TF 2.15, TFLiteConverter
#62499 opened Nov 29, 2023 by riestmo-nxp
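
The 16x8 mode referenced here (int16 activations, int8 weights) is selected through the experimental ops set. A minimal sketch of that configuration, with a placeholder path and random calibration data:

    import numpy as np
    import tensorflow as tf

    def representative_dataset():
        # Placeholder calibration data; substitute real input samples.
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
    ]
    tflite_model = converter.convert()
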
[RNN] TFLite converter segfaults with GRU models
Labels: comp:lite, ModelOptimizationToolkit, regression issue, stale, stat:awaiting response, TF2.14, TFLiteConverter, type:performance
#62281 opened Oct 30, 2023 by CNugteren
Quantization produces a large scale coefficient, which prevents the model from being loaded
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, TFLiteConverter, type:bug
#62196 opened Oct 23, 2023 by FabianSchuetze
Error with concatenation when converting a QAT model to a TFLite model using EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, TFLiteConverter, type:bug, type:feature
#62014 opened Sep 29, 2023 by DerryFitz
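
The setup this report describes combines quantization-aware training with the int16-activation ops set. A rough sketch of that combination, assuming tensorflow_model_optimization is installed and using a tiny stand-in Keras model (the report concerns this combination failing when the model contains concatenation):

    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # Tiny stand-in for the real model under test.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(4, input_shape=(8,)),
    ])

    # Wrap the model for quantization-aware training; fine-tuning would happen here.
    qat_model = tfmot.quantization.keras.quantize_model(model)

    converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8
    ]
    tflite_model = converter.convert()
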
TFLite model produces wrong output after fusion optimization
Labels: comp:lite, ModelOptimizationToolkit, stale, stat:awaiting response, stat:awaiting tensorflower, TFLiteConverter, type:bug
#61967 opened Sep 25, 2023 by dengyinlin
Why aren't all of my conv2d layer weights int8 when converting with dynamic range quantization?
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, TFLiteConverter, type:bug
#61896 opened Sep 18, 2023 by christian-steinmeyer
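
For reference, dynamic range quantization is requested with only the DEFAULT optimization flag and no representative dataset; it stores the weights of supported ops as int8 while activations remain float at inference time. A minimal sketch with a placeholder path:

    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_saved_model("/path/to/saved_model")  # placeholder path
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables dynamic range quantization
    tflite_model = converter.convert()

    with open("model_dynamic_range.tflite", "wb") as f:
        f.write(tflite_model)
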
Non-compliant TFLite model when converting a "MultiHeadAttention" layer
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, TFLiteConverter, type:bug
#61796 opened Sep 5, 2023 by kobygold
Full integer quantization not possible with grouped convolution
Labels: ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, TFLiteConverter, type:bug
#61760 opened Aug 31, 2023 by justlike-prog
Looking for selective post-training quantization with 8-bit weights and 16-bit activations
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.13, type:feature
#61720 opened Aug 28, 2023 by Hrayo712
Segmentation fault (core dumped) when converting Whisper with int8 quantization
Labels: comp:lite, ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.12, TFLiteConverter, type:bug
#61695 opened Aug 25, 2023 by SantiagoMoreno-UdeA
Converting an LSTM layer to TFLite with float16 fails
Labels: comp:lite, ModelOptimizationToolkit, RNN, stale, stat:awaiting response, TF 2.13, TFLiteConverter, type:bug
#61370 opened Jul 24, 2023 by sronen71
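
Float16 post-training quantization is selected by adding tf.float16 to the converter's supported types. A minimal sketch of the configuration this report describes, using a small stand-in LSTM model:

    import tensorflow as tf

    # Small stand-in for the LSTM model described in the report.
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(16, input_shape=(10, 8)),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    converter.target_spec.supported_types = [tf.float16]
    tflite_model = converter.convert()
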
TensorFlow Lite Converter wraps unpack operator with dequantize/quantize
Labels: ModelOptimizationToolkit, stat:awaiting tensorflower, TF 2.12, TFLiteConverter, type:bug
#61323 opened Jul 19, 2023 by willisacs-arm
[TFLite] Accumulator and bias types coherence for int16x8 FC operator
Labels: comp:lite, comp:lite-kernels, stat:awaiting tensorflower, type:bug
#53763 opened Jan 14, 2022 by Tessil
DSP Overflow - high pixel values are being clamped when running on DSP
Labels: stat:awaiting tensorflower, TF 2.7, TFLiteConverter, type:support
#53552 opened Dec 27, 2021 by aviaisr
Disable unroll batch matmul pass
Labels: TFLiteConverter, type:feature
#47425 opened Feb 26, 2021 by WindQAQ
TFLite converter does not convert dilated conv to single conv op when spatial dimension is dynamic
Labels: comp:lite, stat:awaiting tensorflower, TF 2.4, TFLiteConverter, type:bug
#46822 opened Feb 1, 2021 by WindQAQ