Provide a list of supported XLA operations like TensorFlow Lite #14798
Also fix a TODO in XlaOpRegistry to filter by the types allowed by the OpDef. Also see tensorflow#14798 PiperOrigin-RevId: 177986664
We now have auto-generated tables listing the supported ops on CPU and GPU. Unlike the TFLite docs, we don't have a breakdown starting from the Python APIs; the tables above are based on the op names in the GraphDef. At the moment, if we wanted the Python API breakdown, we'd need to build it manually, and that seems unlikely to stay up to date. I hope the tables above are still useful, though.
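To illustrate what "based on the op names in the GraphDef" means, here is a minimal sketch of collecting the unique op names from a graph. `NodeDef` here is a hypothetical stand-in for the real protobuf message, which carries an `op` field per node; this is not the actual table-generation tool.

```python
# Sketch only: NodeDef is a hypothetical stand-in for the real
# tensorflow.NodeDef protobuf message, which has an "op" field.
from dataclasses import dataclass

@dataclass
class NodeDef:
    name: str
    op: str

def ops_used(nodes):
    """Unique op names appearing in a graph, sorted for a docs table."""
    return sorted({n.op for n in nodes})

graph = [
    NodeDef("x", "Placeholder"),
    NodeDef("y", "MatMul"),
    NodeDef("z", "MatMul"),  # duplicates collapse to one table row
]
print(ops_used(graph))  # → ['MatMul', 'Placeholder']
```

A real implementation would iterate `graph_def.node` from a serialized GraphDef instead of a hand-built list.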
Thanks!
@tatatodd @joker-eph @MarkDaoust Do you know who is going to regenerate the tables mentioned by @tatatodd? It seems they were last updated in 2018.
It seems it was introduced many years ago by @caisq with caisq@4b0a236 |
@lamberta @mihaimaruseac Do you know what kind of internal infra is going to "regularly" run this?
The TFLite page doesn't get regular updates either: https://www.tensorflow.org/lite/guide/ops_compatibility. That xla command still works. One solution would be to integrate this into the API-reference generator and add an XLA column to the https://www.tensorflow.org/api_docs/python/tf/raw_ops page: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docs/generate2.py#L104. tensorflow.org/xla comes from tensorflow/compiler/xla/g3doc/; maybe someone there would have an interest in pushing this through.
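The "XLA column" idea above can be sketched as a plain table-building step. This is a hypothetical illustration, not the real generate2.py logic: `annotate_ops` and both input sets are assumptions introduced here for the example.

```python
# Hypothetical sketch of adding an "XLA supported" column to an op
# listing, as one might for the tf.raw_ops docs page. The function
# name and the example op sets are invented for illustration.

def annotate_ops(all_ops, xla_supported):
    """Return (op_name, supported_flag) rows for a docs table."""
    return [
        (op, "yes" if op in xla_supported else "no")
        for op in sorted(all_ops)
    ]

rows = annotate_ops(
    all_ops={"MatMul", "Add", "SegmentMax"},
    xla_supported={"MatMul", "Add"},
)
# rows → [('Add', 'yes'), ('MatMul', 'yes'), ('SegmentMax', 'no')]
```

In practice, `xla_supported` would come from the same auto-generated CPU/GPU tables mentioned earlier, so the docs column would stay in sync with whatever regenerates those tables.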
Is this orchestrated by publicly available GitHub Actions or by internal scripts?
At least, can we reopen this ticket and add the XLA label?
It's an internal tool that runs those. |
Thanks. So it is probably a bit hard to contribute a PR with only OSS/GitHub visibility.
Yes. It's possible that just integrating it into generate2.py with |
I meant: could it be tested locally, given that we have no visibility into the CI logs?
If anyone gets it working locally, then it's my job to make sure it works in the CI.
Yes. When the orchestration/environment doesn't have public visibility, we need extra docs on how to test this locally if we want to collect community contributions.
Can we find an owner? I don't know if @tatatodd is still on this project.
@MarkDaoust We could make some progress with #56510
Just curious: are there any plans to keep the XLA operator information up to date in TensorFlow's documentation? Thanks!
@ganler As you can see, I cannot make progress on my PR at #56510 /cc @cheshire @theadactyl
TensorFlow Lite provides a list of currently supported ops here, and I wonder if XLA could also have such a list. It's rough to develop and train a model with the full TensorFlow Python API only to get stuck during AOT compilation because of missing op kernels in the tf2xla bridge.