What is the right way to use coverage.py with Tensorflow? #33759
Comments
Has there been any progress on this on the TF side? This is an important feature for TF in production.
I second that request. This issue needs discussion and maybe special addressing on the …
Does anyone want to take on writing a coverage.py plugin? I can help with the coverage.py side of things.
The issue mentioned above (Sujit-O/pykg2vec#123) pointed me to the function …
With TensorFlow 2.5, even if I run tf.config.experimental_run_functions_eagerly(True) directly before my tests, coverage is not reported correctly. I want to write tests for a custom loss function around CRFs for a production model, where adequate test coverage is key. Currently I am stuck having to exclude this file from coverage.py.
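For reference, the workaround I tried looks like the sketch below (assuming pytest; tf.config.run_functions_eagerly is the current, non-experimental spelling of the call above):

```python
# conftest.py -- run tf.function bodies eagerly so they execute in their
# original source files, where coverage.py can see them.
import pytest
import tensorflow as tf


@pytest.fixture(autouse=True)
def run_tf_eagerly():
    tf.config.run_functions_eagerly(True)
    yield
    tf.config.run_functions_eagerly(False)  # restore the default for other tests
```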
I'm the coverage.py maintainer. I'd be glad to work closely with someone from the TensorFlow side to find a solution to this problem.
Hi @nedbat, I found some articles on unit testing and code coverage for TensorFlow code; they indicate use of the tf.test API. Attaching them below for reference.
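For illustration, a minimal test with that API might look like the sketch below (the loss function is a toy stand-in; note that tf.test by itself does not address the AutoGraph line-mapping problem):

```python
import tensorflow as tf


def mse(y_true, y_pred):
    # Toy loss, included only to keep the example self-contained.
    return tf.reduce_mean(tf.square(y_true - y_pred))


class MseTest(tf.test.TestCase):
    def test_identical_inputs_give_zero(self):
        y = tf.constant([1.0, 2.0, 3.0])
        self.assertAllClose(mse(y, y), 0.0)


if __name__ == "__main__":
    tf.test.main()
```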
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
Bump; this is still an issue.
Ok! Reopening as requested.
@fchollet Any further progress or thoughts on this, please? Coverage reports not being accurate is a production concern; essentially any decorated function shows up as a miss. Could we please look into prioritising this work with @gsakkis? While this does work for some use cases …
I think the easiest fix is to have a global way to fully disable AutoGraph.

Edit: one hacky way (it depends on internals) that does look to work:

```python
# conftest.py -- relies on TensorFlow internals, so the import paths
# may break across versions.
import pytest
from tensorflow.python.autograph.core import config
from tensorflow.python.autograph.core.config_lib import DoNotConvert


@pytest.fixture(autouse=True)
def disable_autograph_in_coverage() -> None:
    # Prepend a rule telling AutoGraph never to convert your module, so its
    # functions keep executing from their original source files.
    config.CONVERSION_RULES = (DoNotConvert("your_module"),) + config.CONVERSION_RULES
```

Scanning the TF tests, this approach looks to be used there. A public API for adjusting conversion rules would work too, although I think for test cases a simple global flag to fully turn off tf_convert would be enough.
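For reference, TensorFlow already exposes a public per-function escape hatch, tf.autograph.experimental.do_not_convert, which avoids the internal imports above at the cost of annotating each function. A minimal sketch (squared_error is a hypothetical example function):

```python
import tensorflow as tf


@tf.function
@tf.autograph.experimental.do_not_convert
def squared_error(y_true, y_pred):
    # AutoGraph leaves this body alone, so the lines that do run are
    # attributed to this file rather than a generated temp file. Note the
    # body only executes while tf.function traces it, so coverage may still
    # miss branches not taken during tracing.
    return tf.reduce_mean(tf.square(y_true - y_pred))


# Usage: the body above executes (once) during tracing.
print(squared_error(tf.constant([1.0, 2.0]), tf.constant([1.0, 0.0])))
```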
I apologize if this is the wrong way to ask this question. I'm the maintainer of coverage.py, for measuring code coverage in Python projects. A user wrote an issue for me: nedbat/coveragepy#856
After digging into it, I see that his tf.keras.Model.call() function is not executed directly, but is transformed into a temporary file, and executed there. So coverage.py reports that his code is unexecuted, even though he can see the effects of its execution.
I also see that the transformed code has an ag_source_map__ parameter which can be used to map back from the transformed code to the original code. A coverage.py plugin could use that information to report coverage usefully. My questions are: …
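To make the plugin idea concrete, here is a rough skeleton of coverage.py's file-tracer plugin interface that such a mapping would plug into. Everything AutoGraph-specific here is hypothetical: the filename heuristic and the SOURCE_MAP table stand in for whatever ag_source_map__ actually provides, and a complete plugin would also need a file_reporter.

```python
# autograph_coverage_plugin.py -- a skeleton, not a working plugin.
import coverage

# Hypothetical table that would be built from ag_source_map__:
# {(generated_file, generated_line): (original_file, original_line)}
SOURCE_MAP = {}


class AutographPlugin(coverage.CoveragePlugin):
    def file_tracer(self, filename):
        # Claim only AutoGraph-generated files (placeholder heuristic).
        if "autograph_generated" in filename:
            return AutographFileTracer(filename)
        return None


class AutographFileTracer(coverage.FileTracer):
    def __init__(self, filename):
        self._filename = filename

    def source_filename(self):
        # Report the original file the generated code came from.
        for (gen_file, _), (orig_file, _) in SOURCE_MAP.items():
            if gen_file == self._filename:
                return orig_file
        return self._filename

    def line_number_range(self, frame):
        # Map the executing generated-file line back to the original line.
        mapped = SOURCE_MAP.get((self._filename, frame.f_lineno))
        lineno = mapped[1] if mapped else frame.f_lineno
        return lineno, lineno


def coverage_init(reg, options):
    reg.add_file_tracer(AutographPlugin())
```

Such a plugin would then be enabled with `plugins = autograph_coverage_plugin` in the `[run]` section of .coveragerc.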