[Autograph] Inconsistent behaviour with lambda variable in loop #56089
See more at keras-team/keras-cv#432 /cc @mdanatg
As referenced in the official Python FAQ, lambdas defined inside a loop close over the loop variable itself, not the value it had when the lambda was created.

I think it's because we pass the loop variable through a function argument, which is enough to avoid the closure aliasing: in the code above, the lambda would close over the local copy of the iterate held by the loop body function. This only happens for the for loop; the while loop shares its variables through nonlocal instead.
This means that the fix is also to avoid passing the iterate as an argument to the loop body, and instead rely on the nonlocal mechanism used for the other loop variables.
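For reference, the FAQ's late-binding behavior is easy to reproduce in plain Python (a minimal sketch, no TensorFlow involved), along with the default-argument idiom that captures the current value:

```python
# All three lambdas close over the same variable `i`, so they all
# print its final value, not the value at the time they were created.
fns = []
for i in range(3):
    fns.append(lambda: print(i))
for f in fns:
    f()  # 2, 2, 2

# Binding `i` as a default argument captures its current value instead.
fns = []
for i in range(3):
    fns.append(lambda i=i: print(i))
for f in fns:
    f()  # 0, 1, 2
```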
Yes, it's what I suspected:

```python
from tensorflow.python.autograph.impl import api

ag__ = api._TRANSPILER.get_extra_locals()['ag__']  # pylint:disable=protected-access

def tf__test_for():
    with ag__.FunctionScope('test', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
        fns = []

        def get_state():
            return ()

        def set_state(block_vars):
            pass

        def loop_body(itr):
            i = itr
            ag__.converted_call(ag__.ld(fns).append, (ag__.autograph_artifact((lambda : ag__.ld(print)((ag__.converted_call(ag__.ld(str), (ag__.ld(i),), None, fscope))))),), None, fscope)

        i = ag__.Undefined('i')
        ag__.for_stmt(ag__.converted_call(ag__.ld(range), (3,), None, fscope), None, loop_body, get_state, set_state, (), {'iterate_names': 'i'})

        def get_state_1():
            return ()

        def set_state_1(block_vars):
            pass

        def loop_body_1(itr_1):
            f = itr_1
            ag__.converted_call(ag__.ld(f), (), None, fscope)

        f = ag__.Undefined('f')
        ag__.for_stmt(ag__.ld(fns), None, loop_body_1, get_state_1, set_state_1, (), {'iterate_names': 'f'})

def tf__test_while():
    with ag__.FunctionScope('test_while', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
        fns = []
        i = 0

        def get_state():
            return (i,)

        def set_state(vars_):
            nonlocal i
            (i,) = vars_

        def loop_body():
            nonlocal i
            ag__.converted_call(ag__.ld(fns).append, (ag__.autograph_artifact((lambda : ag__.ld(print)(ag__.ld(i)))),), None, fscope)
            i = ag__.ld(i)
            i += 1

        def loop_test():
            return (ag__.ld(i) < 3)

        ag__.while_stmt(loop_test, loop_body, get_state, set_state, ('i',), {})

        def get_state_1():
            return ()

        def set_state_1(block_vars):
            pass

        def loop_body_1(itr):
            f = itr
            ag__.converted_call(ag__.ld(f), (), None, fscope)

        f = ag__.Undefined('f')
        ag__.for_stmt(ag__.ld(fns), None, loop_body_1, get_state_1, set_state_1, (), {'iterate_names': 'f'})

tf__test_for()
print("====")
tf__test_while()
```

Output:

```
0
1
2
====
3
3
3
```

And modify your while example to correctly get the output we "expect":

```python
import tensorflow as tf

def test():
    fns = []
    i = 0
    while i < 3:
        fns.append(lambda i=i: print(i))
        i += 1
    for f in fns:
        f()

test()

tf.autograph.set_verbosity(0, True)

@tf.function(autograph=True)
def test():
    fns = []
    i = 0
    while i < 3:
        fns.append(lambda i=i: print(i))
        i += 1
    for f in fns:
        f()

test()
```

Output:

```
0
1
2
0
1
2
```
@mdanatg I've manually rewritten the transformation output. Do we need to produce something like this for the for loop?

```python
from tensorflow.python.autograph.impl import api

ag__ = api._TRANSPILER.get_extra_locals()['ag__']  # pylint:disable=protected-access

def tf__test_for():
    with ag__.FunctionScope('test', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
        fns = []
        i = 0

        def get_state():
            return (i,)

        def set_state(block_vars):
            nonlocal i
            (i,) = block_vars

        def loop_body(itr):
            nonlocal i
            ag__.converted_call(ag__.ld(fns).append, (ag__.autograph_artifact((lambda : ag__.ld(print)((ag__.converted_call(ag__.ld(str), (ag__.ld(i),), None, fscope))))),), None, fscope)
            i = ag__.ld(i)
            i += 1

        ag__.for_stmt(ag__.converted_call(ag__.ld(range), (3,), None, fscope), None, loop_body, get_state, set_state, (), {'iterate_names': 'i'})

        def get_state_1():
            return ()

        def set_state_1(block_vars):
            pass

        def loop_body_1(itr_1):
            f = itr_1
            ag__.converted_call(ag__.ld(f), (), None, fscope)

        f = ag__.Undefined('f')
        ag__.for_stmt(ag__.ld(fns), None, loop_body_1, get_state_1, set_state_1, (), {'iterate_names': 'f'})
```
Yes, something like that. And then the loop_body function would be a regular thunk.
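To make the mechanism concrete, here is a self-contained plain-Python sketch; the for_stmt/set_state below are simplified stand-ins for AutoGraph's real operators, not the actual API:

```python
# Simplified stand-ins for AutoGraph's loop machinery, to show the behavior:
# the iterate flows through set_state rather than a loop_body parameter,
# so loop_body is a plain thunk.
def for_stmt(iterable, body, set_state):
    for value in iterable:
        set_state((value,))
        body()

def test():
    fns = []
    i = None

    def set_state(block_vars):
        nonlocal i
        (i,) = block_vars

    def loop_body():  # a regular thunk: no iterate argument
        fns.append(lambda: print(i))

    for_stmt(range(3), loop_body, set_state)
    for f in fns:
        f()  # every lambda closes over the shared `i`: prints 2, 2, 2

test()
```

With the thunk form, all the lambdas alias the shared `i`, so graph mode would print 2, 2, 2 and match eager Python.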
Do we want to handle this case here? (tensorflow/python/autograph/converters/control_flow.py, lines 170 to 199 at 44e84ce)
Yes, I think we do.
I don't know if it makes sense, but I've changed the condition to:

```python
if s in live_in or s in live_out or s in nonlocals or \
        (s not in live_in and s not in live_out):
```

With that change, this:

```python
import tensorflow as tf

tf.autograph.set_verbosity(0, True)

@tf.function
def test_b():
    fns = []
    for i in range(3):
        fns.append(lambda: print(i))
    for f in fns:
        f()

test_b()
```

prints:

```
2
2
2
```

And this:

```python
import tensorflow as tf

tf.autograph.set_verbosity(0, True)

@tf.function
def test_b():
    fns = []
    for i in range(3):
        fns.append(lambda i=i: print(i))
    for f in fns:
        f()

test_b()
```

prints:

```
0
1
2
```

So the output is correct, but the transformation is surely still quite ugly:

```python
# coding=utf-8
def tf__test_b():
    with ag__.FunctionScope('test_b', 'fscope', ag__.ConversionOptions(recursive=True, user_requested=True, optional_features=(), internal_convert_user_code=True)) as fscope:
        fns = []

        def get_state():
            return (i,)

        def set_state(vars_):
            nonlocal i
            (i,) = vars_

        def loop_body(itr):
            nonlocal i
            i = itr
            ag__.converted_call(ag__.ld(fns).append, (ag__.autograph_artifact((lambda : ag__.ld(print)(ag__.ld(i)))),), None, fscope)

        i = ag__.Undefined('i')
        ag__.for_stmt(ag__.converted_call(ag__.ld(range), (3,), None, fscope), None, loop_body, get_state, set_state, ('i',), {'iterate_names': 'i'})

        def get_state_1():
            return (f,)

        def set_state_1(vars_):
            nonlocal f
            (f,) = vars_

        def loop_body_1(itr_1):
            nonlocal f
            f = itr_1
            ag__.converted_call(ag__.ld(f), (), None, fscope)

        f = ag__.Undefined('f')
        ag__.for_stmt(ag__.ld(fns), None, loop_body_1, get_state_1, set_state_1, ('f',), {'iterate_names': 'f'})
```
P.S. As a side note, the loop_scoping test appears to be commented out in tensorflow/python/autograph/tests/BUILD (lines 26 to 27 at f2edfb6):

```
# # Scoping and modularity
# reference_test(name = "loop_scoping_test")
```

If it is activated, some loop_scoping tests fail.
Ah, that's not intended. Can you add it to the build file, mark the failing ones with self.skipTest, and then file an issue to get them to pass? For the transformation, I'm not sure; the rules are quite finicky, and I'm not sure I'd change them without extensive testing. It's likely still safer to manually add the iterate to the list of loop_vars.
Side note - I just realized the limitations section of AutoGraph does seem to document this case: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#variables-closed-over-by-lambda-functions
Hello, I have just discovered this deprecation warning in my app logs. Is there anything I should look for in my code, or is this a generic warning that displays to anyone? To be more precise, does the fact that I am seeing this warning mean that I am actually using something which is now deprecated?
@clime We hope to align with the Python behavior after the deprecation.
My question, rather, is whether seeing that warning necessarily means that I am triggering the deprecated behavior (i.e. that I am using a lambda somewhere in a @tf.function or similar).
Do you have a small code gist to reproduce this?
Sorry, I don't understand what I should be reproducing. I just have a pretty big code base, and I want to know how relevant the warning is for me :).
I guess the answer is here: 6197fa3
If it was internal, someone introduced a new deprecation case internally, ignoring the warning.
Are there any instructions on how to hide this warning? :P
It is suppressed in nightly and in the next release.
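In the meantime, lowering AutoGraph's verbosity may reduce the noise; this is a sketch using the same set_verbosity call that appears earlier in this thread, and whether it hides this particular warning depends on the TF version:

```python
import tensorflow as tf

# Assumption: the deprecation warning goes through AutoGraph's own logging,
# in which case lowering the verbosity reduces what gets printed.
tf.autograph.set_verbosity(0)
```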
Issue Type: Bug
Source: source
Tensorflow Version: master
Custom Code: No
OS Platform and Distribution: No response
Mobile device: No response
Python version: No response
Bazel version: No response
GCC/Compiler version: No response
CUDA/cuDNN version: No response
GPU model and memory: No response

Current Behaviour?

test_b is wrongly working "as expected" in graph mode.
Standalone code to reproduce the issue
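The reproduction snippet itself is not preserved here; presumably it is the test_b example discussed in the thread above, along these lines:

```python
import tensorflow as tf

@tf.function
def test_b():
    fns = []
    for i in range(3):
        # Each lambda closes over the loop iterate `i`.
        fns.append(lambda: print(i))
    for f in fns:
        f()

test_b()  # graph mode prints 0, 1, 2; eager Python would print 2, 2, 2
```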