
Provide a list of supported XLA operations like TensorFlow Lite #14798

Open
carlthome opened this issue Nov 22, 2017 · 18 comments
Assignees
Labels
comp:xla XLA stat:awaiting tensorflower Status - Awaiting response from tensorflower type:docs-bug Document issues

Comments

@carlthome
Contributor

TensorFlow Lite provides a list of currently supported ops here and I wonder if XLA could also have such a list. It's rough to develop and train a model with the full TensorFlow Python API only to get stuck during AOT compilation because of missing op kernels in the tf2xla bridge.

@aselle aselle added stat:awaiting response Status - Awaiting response from author stat:awaiting tensorflower Status - Awaiting response from tensorflower type:docs-bug Document issues and removed stat:awaiting response Status - Awaiting response from author labels Nov 29, 2017
caisq pushed a commit to caisq/tensorflow that referenced this issue Dec 7, 2017
Also fix a TODO in XlaOpRegistry to filter by the types allowed by the OpDef.

Also see tensorflow#14798

PiperOrigin-RevId: 177986664
@tatatodd
Contributor

We now have some auto-generated tables listing the supported ops on CPU and GPU:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/tf2xla/g3doc/cpu_supported_ops.md
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/tf2xla/g3doc/gpu_supported_ops.md

Unlike the TFLite docs, we don't have a breakdown starting from the Python APIs; the above tables are based on the op names in the GraphDef. At the moment, if we wanted the Python API breakdown, we'd need to do that manually, and that seems unlikely to remain up-to-date. I hope the above tables are still useful though.
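Since the tables are keyed by GraphDef op names, one can do a quick local check by parsing the op names out of the generated Markdown and diffing them against the ops in a GraphDef (collected with `{node.op for node in graph_def.node}`). A minimal sketch, assuming each table row begins with a backticked op name; the exact table format may differ between TensorFlow versions:

```python
import re

def supported_ops_from_markdown(md_text):
    """Extract op names from a tf2xla supported-ops Markdown table.

    Assumes each table row begins with a backticked GraphDef op name,
    e.g. `Abs` | `T={double,float}` -- the exact format may differ
    between TensorFlow versions.
    """
    ops = set()
    for line in md_text.splitlines():
        m = re.match(r"^\s*\|?\s*`([A-Za-z0-9_]+)`", line)
        if m:
            ops.add(m.group(1))
    return ops

def unsupported_graph_ops(graph_op_names, supported):
    """Return the ops in a graph that have no tf2xla kernel listed."""
    return sorted(set(graph_op_names) - supported)

# Example with a made-up two-row table; in practice md_text would be
# the contents of cpu_supported_ops.md and graph_op_names would come
# from {node.op for node in graph_def.node}.
table = "`Abs` | `T={double,float}`\n`Add` | `T={double,float,int32}`\n"
print(unsupported_graph_ops(["Add", "Abs", "FancyNewOp"],
                            supported_ops_from_markdown(table)))
# -> ['FancyNewOp']
```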

@carlthome
Contributor Author

Thanks!

@bhack
Contributor
bhack commented Feb 22, 2022

@tatatodd @joker-eph @MarkDaoust Do you know who is going to re-generate the tables mentioned by @tatatodd? It seems the last time they were updated was in 2018.

@bhack
Contributor
bhack commented Feb 22, 2022

It seems it was introduced many years ago by @caisq with caisq@4b0a236

@bhack
Contributor
bhack commented Feb 22, 2022

@lamberta @mihaimaruseac Do you know what kind of internal infra is going to "regularly" run

`bazel run -c opt -- tensorflow/compiler/tf2xla:tf2xla_supported_ops`

to update the Markdown tables?

@MarkDaoust
Member

The tflite page doesn't get regular updates either: https://www.tensorflow.org/lite/guide/ops_compatibility

That xla command still works.

One solution would be to integrate this into the API-reference generator and add an XLA column to the https://www.tensorflow.org/api_docs/python/tf/raw_ops page:

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docs/generate2.py#L104

tensorflow.org/xla comes from tensorflow/compiler/xla/g3doc/; maybe someone there would have an interest in pushing this through.

@bhack
Contributor
bhack commented Feb 22, 2022

One solution would be to integrate this into api-reference generator, add an XLA column to the https://www.tensorflow.org/api_docs/python/tf/raw_ops page:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docs/generate2.py#L104

Is this orchestrated by publicly available GitHub Actions or by internal scripts?
If not:
Is the bazel target tensorflow/compiler/tf2xla:tf2xla_supported_ops available in the wheel?
Is the wheel installed in the orchestrated environment of docs/generate2.py?

@bhack
Contributor
bhack commented Feb 22, 2022

At least, can we reopen this ticket and also add the XLA label?

@MarkDaoust MarkDaoust reopened this Feb 22, 2022
@MarkDaoust MarkDaoust added the comp:xla XLA label Feb 22, 2022
@MarkDaoust
Member

can we reopen this ticket adding also the XLA label?
Done.

It's an internal tool that runs those.
They're run from the target version's GitHub branch, with bazel available, so just calling that bazel command and merging the output into that raw-ops table would work.

@bhack
Contributor
bhack commented Feb 22, 2022

It's an internal tool that runs those.
They're run from the target version's GitHub branch, with bazel available, so just calling that bazel command and

Thanks, so it is probably a bit hard to contribute a PR with only OSS/GitHub visibility.

@MarkDaoust
Member

Yes.

It's possible that just integrating it into generate2.py with `subprocess.check_output(['bazel', 'run', '-c', 'opt', '--', 'tensorflow/compiler/tf2xla:tf2xla_supported_ops'])` could get the job done.
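A rough sketch of that integration, with the bazel invocation isolated so the parsing and merging logic can be exercised without a TensorFlow source checkout. Both the assumption that the tool prints its Markdown table to stdout and the yes/no column shape are illustrative guesses, not the actual generate2.py API:

```python
import re
import subprocess

def fetch_tf2xla_table():
    """Run the tf2xla tool from a TensorFlow source checkout.

    Assumes the tool writes its Markdown table to stdout; requires
    bazel on PATH, so tests should stub this out rather than call it.
    """
    return subprocess.check_output(
        ["bazel", "run", "-c", "opt", "--",
         "tensorflow/compiler/tf2xla:tf2xla_supported_ops"],
        text=True)

def xla_op_names(table_text):
    """Collect op names from table rows starting with a backticked name."""
    names = set()
    for line in table_text.splitlines():
        m = re.match(r"^\s*\|?\s*`([A-Za-z0-9_]+)`", line)
        if m:
            names.add(m.group(1))
    return names

def add_xla_column(raw_ops_rows, xla_ops):
    """Append an XLA yes/no column to (op_name, doc_url) rows."""
    return [row + ("yes" if row[0] in xla_ops else "no",)
            for row in raw_ops_rows]

# In generate2.py this would be:
#   xla_ops = xla_op_names(fetch_tf2xla_table())
# Here a fake two-row table stands in for the tool's output.
xla_ops = xla_op_names("`Abs` | `T={float}`\n`Add` | `T={int32}`\n")
rows = [("Abs", "tf/raw_ops/Abs"), ("Zeta", "tf/raw_ops/Zeta")]
print(add_xla_column(rows, xla_ops))
# -> [('Abs', 'tf/raw_ops/Abs', 'yes'), ('Zeta', 'tf/raw_ops/Zeta', 'no')]
```

Keeping the subprocess call in its own function is what makes the rest testable in CI environments where bazel is unavailable.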

@bhack
Contributor
bhack commented Feb 22, 2022

I meant: could it be tested locally when we have no visibility into the CI logs?

@MarkDaoust
Member

If anyone gets it working locally then it's my job to be sure it works in the CI.

@bhack
Contributor
bhack commented Feb 22, 2022

Yes, when the orchestration/environment doesn't have public visibility, we need extra docs on how to test this locally if we want to collect community contributions.
As the TFLite Markdown has also been on hold since 2020, we could ping the TFLite team too.

@bhack
Contributor
bhack commented Mar 11, 2022

Can we find an owner? As I don't know if @tatatodd is still on this project.

@bhack
Contributor
bhack commented Jun 20, 2022

@MarkDaoust We could take some steps forward with #56510

@ganler
Contributor
ganler commented Sep 27, 2022

Just curious, are there any plans to keep the XLA operator information up to date in TensorFlow's documentation? Thanks!

@bhack
Contributor
bhack commented Sep 27, 2022

@ganler As you can see, I cannot make progress on my PR at #56510

/cc @cheshire @theadactyl

6 participants