
Op support request: Matmul with constant left hand side #66727

Open

gustavla opened this issue Apr 30, 2024 · 2 comments
Labels: comp:lite (TF Lite related issues)

@gustavla (Contributor)

System information

  • Samsung Galaxy S23 / Android 13 / Snapdragon® 8 Gen 2 | SM8550
  • TFLite 2.16.1 (stock)

Standalone code to reproduce the issue
Code to generate model:

import tensorflow as tf
import keras

input_shape = [16, 1]
output_shape = [13, 1]  # expected result: (13, 16) @ (16, 1) -> (13, 1)

tf_input = keras.Input(input_shape[1:], batch_size=input_shape[0])


class MyMatMul(keras.layers.Layer):
    def call(self, tf_input):
        # Matmul with a constant left-hand side; the converter lowers this
        # to BATCH_MATMUL (see the delegate warning in the log below).
        tf_output = tf.matmul(tf.ones((13, 16)), tf_input)
        return tf_output


tf_output = MyMatMul()(tf_input)

model = keras.Model(inputs=[tf_input], outputs=[tf_output])

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

Any other info / logs

Runtime log (executed on https://aihub.qualcomm.com/):

[01/May/2024:05:54:09 +10:00: profiler/warning] [job_id: jz5wjvy4g] [model.tflite] [tflite] TfLiteGpuDelegate Init: BATCH_MATMUL: Not supported batched mat mul case

Full log:

[01/May/2024:05:54:08 +10:00: profiler/info] -=- Tungsten Initializing -=-
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.board.platform = kalama
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.boot.hardware = qcom
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.boot.hardware.platform = 
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.system.build.id = TP1A.220624.014
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.system.build.version.release = 13
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.hardware = qcom
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.hardware.chipname = 
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.board = kalama
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.brand = samsung
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.device = dm1q
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.build.fingerprint = samsung/dm1quew/qssi:13/TP1A.220624.014/S911U1UES1AWC9:user/release-keys
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.manufacturer = samsung
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.model = SM-S911U1
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.product.name = dm1quew
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.soc.manufacturer = QTI
[01/May/2024:05:54:08 +10:00: profiler/info] Android system property: ro.soc.model = SM8550
[01/May/2024:05:54:09 +10:00: profiler/info] [Manager] DeviceManager::DeviceManager
[01/May/2024:05:54:09 +10:00: profiler/info] [Manager] findAvailableDevices
[01/May/2024:05:54:09 +10:00: profiler/info] NNAPI devices: nnapi-reference
[01/May/2024:05:54:09 +10:00: profiler/info] GPU device: Qualcomm Adreno (TM) 740
[01/May/2024:05:54:09 +10:00: profiler/info] OpenGL Version: OpenGL ES 3.2 V@0676.13 (GIT@9ab6a2b2d8, Ifc633ebcef, 1674564387) (Date:01/24/23)
[01/May/2024:05:54:09 +10:00: profiler/info] OpenCL Version: OpenCL C 3.0 Adreno(TM) 740
[01/May/2024:05:54:09 +10:00: profiler/info] -=- Tungsten Running Task: Loading -=-
[01/May/2024:05:54:09 +10:00: profiler/info] Detected chipset 2807, made by 2000.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Loading tflite model Models/model.tflite
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Malloc VM size before: 16452.0 kB, allocated: 14059.8 kB, slack: 2392.2 kB.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Current memory baseline range: 58267.8-60660.0 kB.
[01/May/2024:05:54:09 +10:00: profiler/debug] [job_id: jz5wjvy4g] [model.tflite] Runtime metadata not found in Models/model.tflite/trt_metadata.json or Models/model.tflite/trt_metadata.pb
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] TF Lite version 2.16.1. Loading model from Models/model.tflite.
[01/May/2024:05:54:09 +10:00: profiler/debug] [job_id: jz5wjvy4g] [model.tflite] Mapping resource file in Models/model.tflite
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Loaded model. Minimum TF Lite version = 2.3.0.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] No delegates specified; using compute unit=cpu_and_gpu.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] [tflite] Initialized TensorFlow Lite runtime.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] GPUV2 delegate requested. OpenCL detected.
[01/May/2024:05:54:09 +10:00: profiler/debug] [job_id: jz5wjvy4g] [model.tflite] Enabling delegate cache in dir=/data/user/0/ai.tetra.tungsten/cache/1714506848918/ai.tetra.runtime/0.6.0/model.tflite_8945450969824422876_1714506836981/gpuv2.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] [tflite] Created TensorFlow Lite delegate for GPU.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] [tflite] Replacing 1 out of 1 node(s) with delegate (TfLiteGpuDelegateV2) node, yielding 1 partitions for the whole graph.
[01/May/2024:05:54:09 +10:00: profiler/warning] [job_id: jz5wjvy4g] [model.tflite] [tflite] TfLiteGpuDelegate Init: BATCH_MATMUL: Not supported batched mat mul case
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] [tflite] Created 0 GPU delegate kernels.
[01/May/2024:05:54:09 +10:00: profiler/warning] [job_id: jz5wjvy4g] [model.tflite] [tflite] TfLiteGpuDelegate Prepare: delegate is not initialized
[01/May/2024:05:54:09 +10:00: profiler/warning] [job_id: jz5wjvy4g] [model.tflite] [tflite] Node number 1 (TfLiteGpuDelegateV2) failed to prepare.
[01/May/2024:05:54:09 +10:00: profiler/warning] [job_id: jz5wjvy4g] [model.tflite] [tflite] Restored original execution plan after delegate application failure.
[01/May/2024:05:54:09 +10:00: profiler/error] [job_id: jz5wjvy4g] [model.tflite] Delegate GPUV2/OpenCL failed to prepare. Retrying without this delegate.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] TfLite XNNPACK delegate requested.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] [tflite] Created TensorFlow Lite XNNPACK delegate for CPU.
[01/May/2024:05:54:09 +10:00: profiler/warning] [job_id: jz5wjvy4g] [model.tflite] [tflite] failed to delegate BATCH_MATMUL node #0. Delegation of latest operators must be enabled
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Applied 0 delegates. Model will run using built-in kernels.
[01/May/2024:05:54:09 +10:00: profiler/debug] [job_id: jz5wjvy4g] [model.tflite] Saving delegate selection for subsequent steps.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Malloc VM size after: 16472.0 kB, allocated: 14339.7 kB, slack: 2132.3 kB.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Status Successfully Loaded Cold with t = 30997 us and usage: before = 60660.0 kB; peakBefore = 60660.0 kB; mallocUnusedBefore = 2392.2 kB; after = 61108.0 kB; peakAfter = 60808.0 kB; mallocUnusedAfter = 2132.3 kB; increase = 0.0-707.9 kB; peak = 148.0-2540.2 kB
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Saving results to /storage/emulated/0/Android/data/ai.tetra.tungsten/files/Results/job_jz5wjvy4g/job_jz5wjvy4g_results.bin
[01/May/2024:05:54:09 +10:00: profiler/info] -=- Tungsten Running Task: Loading -=-
[01/May/2024:05:54:09 +10:00: profiler/info] Detected chipset 2807, made by 2000.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Loading previously saved results in /storage/emulated/0/Android/data/ai.tetra.tungsten/files/Results/job_jz5wjvy4g/job_jz5wjvy4g_results.bin
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Loading tflite model Models/model.tflite
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Malloc VM size before: 16028.0 kB, allocated: 14825.3 kB, slack: 1202.7 kB.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Current memory baseline range: 61873.3-63076.0 kB.
[01/May/2024:05:54:09 +10:00: profiler/debug] [job_id: jz5wjvy4g] [model.tflite] Runtime metadata not found in Models/model.tflite/trt_metadata.json or Models/model.tflite/trt_metadata.pb
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] TF Lite version 2.16.1. Loading model from Models/model.tflite.
[01/May/2024:05:54:09 +10:00: profiler/debug] [job_id: jz5wjvy4g] [model.tflite] Mapping resource file in Models/model.tflite
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Loaded model. Minimum TF Lite version = 2.3.0.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Applied 0 delegates. Model will run using built-in kernels.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Malloc VM size after: 16480.0 kB, allocated: 15353.9 kB, slack: 1126.1 kB.
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Status Successfully Loaded Warm with t = 4159 us and usage: before = 63076.0 kB; peakBefore = 63076.0 kB; mallocUnusedBefore = 1202.7 kB; after = 63480.0 kB; peakAfter = 63384.0 kB; mallocUnusedAfter = 1126.1 kB; increase = 0.0-480.7 kB; peak = 308.0-1510.7 kB
[01/May/2024:05:54:09 +10:00: profiler/info] [job_id: jz5wjvy4g] [model.tflite] Saving results to /storage/emulated/0/Android/data/ai.tetra.tungsten/files/Results/job_jz5wjvy4g/job_jz5wjvy4g_results.bin
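
In case it helps triage, a possible interim workaround (an untested sketch) is to move the constant operand to the right-hand side, since the converter can often fuse a constant-RHS matmul into FULLY_CONNECTED, which the GPU delegate does support. Mathematically, A @ x == transpose(transpose(x) @ transpose(A)):

import tensorflow as tf
import keras

tf_input = keras.Input((1,), batch_size=16)


class MyMatMulRhs(keras.layers.Layer):
    def call(self, x):
        # (13, 16) @ (16, 1) rewritten as ((1, 16) @ (16, 13))^T so that the
        # constant operand ends up on the right-hand side.
        a = tf.ones((13, 16))
        y = tf.matmul(x, a, transpose_a=True, transpose_b=True)  # (1, 13)
        return tf.transpose(y)  # (13, 1)


model = keras.Model(inputs=[tf_input], outputs=[MyMatMulRhs()(tf_input)])

Whether this actually lowers to FULLY_CONNECTED (rather than BATCH_MATMUL plus transposes) depends on the converter version, so the resulting .tflite graph should be inspected before relying on it.
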
@gustavla gustavla added the comp:lite TF Lite related issues label Apr 30, 2024
@sawantkumar

Hi @gustavla,

I ran the code you provided to create the tflite model and I am getting the error below. Could you share the tflite model itself?


AttributeError                            Traceback (most recent call last)
Cell In[6], line 21
     19 # Convert the model.
     20 converter = tf.lite.TFLiteConverter.from_keras_model(model)
---> 21 tflite_model = converter.convert()
     23 # Save the model.
     24 with open('model.tflite', 'wb') as f:

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\lite.py:1175, in _export_metrics.<locals>.wrapper(self, *args, **kwargs)
   1172 @functools.wraps(convert_func)
   1173 def wrapper(self, *args, **kwargs):
   1174     # pylint: disable=protected-access
-> 1175     return self._convert_and_export_metrics(convert_func, *args, **kwargs)

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\lite.py:1129, in TFLiteConverterBase._convert_and_export_metrics(self, convert_func, *args, **kwargs)
   1127 self._save_conversion_params_metric()
   1128 start_time = time.process_time()
-> 1129 result = convert_func(self, *args, **kwargs)
   1130 elapsed_time_ms = (time.process_time() - start_time) * 1000
   1131 if result:

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\lite.py:1641, in TFLiteKerasModelConverterV2.convert(self)
   1637 if saved_model_convert_result:
   1638     return saved_model_convert_result
   1640 graph_def, input_tensors, output_tensors, frozen_func = (
-> 1641     self._freeze_keras_model()
   1642 )
   1644 graph_def = self._optimize_tf_model(
   1645     graph_def, input_tensors, output_tensors, frozen_func
   1646 )
   1648 return super(TFLiteKerasModelConverterV2, self).convert(
   1649     graph_def, input_tensors, output_tensors
   1650 )

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\convert_phase.py:215, in convert_phase.<locals>.actual_decorator.<locals>.wrapper(*args, **kwargs)
    213 except Exception as error:
    214     report_error_message(str(error))
--> 215     raise error from None

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\convert_phase.py:205, in convert_phase.<locals>.actual_decorator.<locals>.wrapper(*args, **kwargs)
    202 @functools.wraps(func)
    203 def wrapper(*args, **kwargs):
    204     try:
--> 205         return func(*args, **kwargs)
    206     except ConverterError as converter_error:
    207         if converter_error.errors:

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\lite.py:1582, in TFLiteKerasModelConverterV2._freeze_keras_model(self)
   1573 # If the model's call is not a tf.function, then we need to first get its
   1574 # input signature from model_input_signature method. We can't directly
   1575 # call trace_model_call because otherwise the batch dimension is set
   1576 # to None.
   1577 # Once we have better support for dynamic shapes, we can remove this.
   1578 if not isinstance(self._keras_model.call, _def_function.Function):
   1579     # Pass keep_original_batch_size=True will ensure that we get an input
   1580     # signature including the batch dimension specified by the user.
   1581     # TODO(b/169898786): Use the Keras public API when TFLite moves out of TF
-> 1582     input_signature = _model_input_signature(
   1583         self._keras_model, keep_original_batch_size=True
   1584     )
   1586 # TODO(b/169898786): Use the Keras public API when TFLite moves out of TF
   1587 func = _trace_model_call(self._keras_model, input_signature)

File ~\AppData\Local\Programs\Python\Python312\Lib\site-packages\tensorflow\lite\python\tflite_keras_util.py:84, in model_input_signature(model, keep_original_batch_size)
     82     input_specs = input_specs[0][0]
     83 else:
---> 84     input_specs = model._get_save_spec(  # pylint: disable=protected-access
     85         dynamic_batch=not keep_original_batch_size)
     86 if input_specs is None:
     87     return None

AttributeError: 'Functional' object has no attribute '_get_save_spec'

@sawantkumar sawantkumar added the stat:awaiting response Status - Awaiting response from author label May 23, 2024
@gustavla (Contributor, Author)

Sorry, I think the repro script requires TensorFlow 2.15.
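
For anyone hitting the same AttributeError on TF 2.16+, a possible (untested) way to run the script there is to opt into legacy Keras before importing TensorFlow; this assumes the failure comes from Keras 3's Functional model no longer having _get_save_spec:

import os
# Untested sketch: force the Keras 2 implementation (requires the tf-keras
# package to be installed) so the TFLite Keras converter path still finds
# Functional._get_save_spec. Must be set before tensorflow is imported.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf
from tensorflow import keras  # resolves to tf-keras when the flag is set

The rest of the repro script would then need keras.Input, keras.layers.Layer, etc. to come from tensorflow.keras rather than a bare "import keras", which still resolves to Keras 3.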

Here is the model: https://qaihub-public-issues.s3.us-west-2.amazonaws.com/tflite/66727_model.tflite

@google-ml-butler google-ml-butler bot removed the stat:awaiting response Status - Awaiting response from author label May 29, 2024
@sawantkumar sawantkumar removed the WIP label Jun 11, 2024