

Standardize layer input tests #199

Closed
zaqqwerty opened this issue Apr 9, 2020 · 7 comments
@zaqqwerty (Contributor)

There is much duplication among the layer input checks; testing would be simpler if this logic were abstracted into a separate function.

@zaqqwerty (Contributor, Author)

We should also iterate over backends in the layer input tests.

@zaqqwerty zaqqwerty self-assigned this Apr 15, 2020
@zaqqwerty (Contributor, Author)

Basically, this should change the layers to enforce the appropriate data shapes themselves, rather than relying on backend error messages propagating upwards.

@MichaelBroughton (Collaborator) commented Apr 15, 2020

Could you try to nail down what the core issue is here? I'm having some trouble following what each of these things has to do with the others.

Re your points:

  1. We probably could have separate functions that do Keras init sanity checks, type checks, etc.

  2. We don't need to iterate over backends in the layer input tests, since the errors and functionality we are testing should not depend on the backend at all. (We do this at the op level, which is what Keras layers turn into once they are crawled by the autographer.)

  3. Shape checking is something that belongs at the op level. Shape inference contexts use these checks when linking tf.functions together. There is no need for us to have duplicate shape error checks, nor can these checks be done completely at the Keras level, since the graph is still missing some information at that point; only once it starts running is ALL the needed shape information available. That said, I agree that discerning whether an error is supposed to come from the Keras level or the op level can sometimes be tricky.

@zaqqwerty (Contributor, Author)
  1. Separate functions make sense here because the boilerplate on the circuit executor layers is not simply error checking, but also expansion of various possible inputs to common types and shapes. Since this code should be identical across all four executor layers for their inputs, symbol_names, and symbol_values arguments, it would be better practice for the code to stand alone.

  2. Currently the sampled_expectation layer tests include error tests that are specific to the Python backend, so the tests do depend on the backend, not just the op; for example, the "bytes-like" regex.

  3. I agree the full shape checking belongs in the op.
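The shared expansion logic described in point 1 might look roughly like the following sketch. The helper names are hypothetical and this is not the actual input_checks.py implementation; it only illustrates coercing the common argument forms to one canonical shape so all four executor layers can share it:

```python
def expand_symbol_names(symbol_names):
    """Canonicalize symbol_names to a list of unique strings.

    Hypothetical helper; not the real tensorflow/quantum input_checks API.
    """
    if not isinstance(symbol_names, (list, tuple)):
        raise TypeError("symbol_names must be a list or tuple.")
    names = [str(n) for n in symbol_names]
    if len(set(names)) != len(names):
        raise ValueError("symbol_names must be unique.")
    return names


def expand_symbol_values(symbol_values, n_symbols):
    """Coerce symbol_values to a batch of float rows of length n_symbols,
    raising one uniform error on any shape mismatch."""
    if symbol_values and not isinstance(symbol_values[0], (list, tuple)):
        # Promote a single example to a batch of size one.
        symbol_values = [symbol_values]
    batch = [[float(v) for v in row] for row in symbol_values]
    if any(len(row) != n_symbols for row in batch):
        raise ValueError(
            "each row of symbol_values must have length %d" % n_symbols)
    return batch
```

With helpers like these, a layer's call path validates once up front (e.g. `expand_symbol_values([0.5, 1.5], 2)` yields `[[0.5, 1.5]]`) instead of each layer duplicating the checks inline.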

@MichaelBroughton (Collaborator)

Offline discussion that cleared things up:

  1. The discrepancy between op error strings coming from different backends (C++ vs. Python) should be fixed for the sampled expectation layer.
  2. While those are being fixed, it would also make sense to improve the checks on the inputs, symbol_names, and symbol_values arguments of the execution-type layers.
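Once point 1 lands, the layer tests can assert one error message regardless of backend. A minimal sketch of such a test, using toy stand-in backends (the function names are invented for illustration; the real layers dispatch to TFQ ops):

```python
import re

# Toy stand-ins for the C++ and Python sampling backends. After the fix,
# both raise the SAME message, so tests need no backend-specific
# "bytes-like" regex.
def cpp_backend_sample(symbol_values):
    if not isinstance(symbol_values, list):
        raise TypeError("symbol_values must be convertible to a float tensor.")
    return symbol_values


def python_backend_sample(symbol_values):
    if not isinstance(symbol_values, list):
        raise TypeError("symbol_values must be convertible to a float tensor.")
    return symbol_values


def assert_error_consistent(backends, pattern, bad_input):
    """Check that every backend rejects bad_input with a matching message."""
    for backend in backends:
        try:
            backend(bad_input)
        except TypeError as err:
            assert re.search(pattern, str(err)), backend.__name__
        else:
            raise AssertionError("%s did not raise" % backend.__name__)


assert_error_consistent(
    [cpp_backend_sample, python_backend_sample],
    r"float tensor", "not-a-list")
```

In a real test suite this loop would live inside a parameterized test case, with the backend list supplied as the parameter.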

@MichaelBroughton (Collaborator)

Has this been resolved now that we have input_checks.py in layers?

@github-actions bot commented Aug 1, 2020

This issue has not had any activity in a month. Is it stale?

@github-actions github-actions bot closed this as completed Sep 1, 2020
jaeyoo pushed a commit to jaeyoo/quantum that referenced this issue Mar 30, 2023