
Branch 152141388 #8958

Merged
merged 82 commits on Apr 4, 2017
Changes from 1 commit
Commits (82)
a68d644
Add more test cases for DivideTwoScalarsS32.
tensorflower-gardener Mar 31, 2017
e4a8dc8
Allow variable reuse in function.
tensorflower-gardener Mar 31, 2017
78ead4e
Only change ._variables_created in template after inner function succ…
malcolmreynolds Mar 31, 2017
52267c1
Add more test cases for scalar s32 remainder.
tensorflower-gardener Mar 31, 2017
35e9035
TESTFIX: LinearOperatorFullMatrix placeholder test was not using a
langmore Mar 31, 2017
7ce8fb4
[tf learn estimators] Bugfix to rnn_common following rollback of RNNC…
ebrevdo Mar 31, 2017
9731b5e
Add a SliceProcessor specialized for const inputs, used if a constant…
Mar 31, 2017
50be7aa
Migrate trees, models, testutils, and resources libraries to boosted_…
tensorflower-gardener Mar 31, 2017
bbd2047
[XLA:HLO] Minor fix for Clamp shape inference, and add some tests.
tensorflower-gardener Mar 31, 2017
47cd4cd
Use a simpler threshold in MultiplyWithoutOverflow
girving Mar 31, 2017
93e822e
Fixes a bug where heads/pre-canned estimators were not exporting prop…
Mar 31, 2017
0fe523a
tfdbg doc: fix code blocks under numbered bullets
caisq Mar 31, 2017
997ffb5
Add default saver option to CheckpointSaverHook and improve docstrings.
tensorflower-gardener Mar 31, 2017
ac933c9
Fix issue in installing latest nightly tensorflow pip wheel in ubuntu…
caisq Mar 31, 2017
bf02d3c
Add an optional activation function to the OutputProjectionWrapper an…
tensorflower-gardener Mar 31, 2017
89ed6b4
Adds area under precision recall curve for binary and multiclass heads.
Mar 31, 2017
a05668a
Makes GraphRunner a class to explicitly control its lifetime.
vinuraja Mar 31, 2017
08a3e36
Add nccl to tf.contrib.
tensorflower-gardener Mar 31, 2017
7b735b0
Adding bazel clean prior to running tests in run_pip.sh
tensorflower-gardener Mar 31, 2017
d922630
Fix inequalities in Estimator.train().
tensorflower-gardener Mar 31, 2017
8eae27d
Remove old doc generator.
martinwicke Mar 31, 2017
844efb9
Remove obsolete dimension restriction on tf.transpose
girving Mar 31, 2017
ad078f8
Fix condition for skipping sampling by treating sample_ids as a boole…
adarob Mar 31, 2017
afe9e40
[XLA] Don't crash when analyzing ops with no layout contained in a fu…
tensorflower-gardener Mar 31, 2017
4b11964
Reshape input tensors to match output_rank for `_input_from_feature_c…
tensorflower-gardener Mar 31, 2017
4991664
Make GridLSTM and BidirectionalGridLSTM use dynamic batch size.
tensorflower-gardener Mar 31, 2017
697da92
Add a public `clone()` method to AttentionCellWrapper.
ebrevdo Mar 31, 2017
fe416cc
Fix links in docs.
tensorflower-gardener Mar 31, 2017
945a84b
Add link to SignatureDef documentation for TensorFlow Serving from Sa…
sukritiramesh Mar 31, 2017
17878b9
[TF:XLA] Add a VLOG() to log the operators registered by the XlaOpReg…
hawkinsp Mar 31, 2017
9a92e63
Fix typo.
tensorflower-gardener Apr 1, 2017
6f6e590
Refactor factorization_ops_test in contrib/factorization.
tensorflower-gardener Apr 1, 2017
4b7e06d
Add an option to color graph explorer nodes by XLA cluster.
tensorflower-gardener Apr 1, 2017
512c21c
Don't prune nodes that are driven by control dependencies to avoid po…
benoitsteiner Apr 1, 2017
e4265b4
Handle stack of TensorArray with empty elements. Previously caused a …
adarob Apr 1, 2017
b1b09a3
Change calls to use status.Update.
Apr 1, 2017
643573d
Fix bug where we ignore errors in RecvOutputsAsync.
Apr 1, 2017
00d7258
Fix reference to proto2 in boosted_trees.
tensorflower-gardener Apr 1, 2017
01fcc46
[XLA:HLO] Change HloModule to ensure computation names are unique.
tensorflower-gardener Apr 1, 2017
4fe6860
Don't fold nodes that have no outgoing edges.
Apr 1, 2017
406199e
Set cpu family, model and frequency in the op info.
Apr 1, 2017
09cc9f8
Java: Bump version to 1.1.0-rc1 in Maven POMs
asimshankar Apr 2, 2017
13dd442
Remove unused transcription array.
tensorflower-gardener Apr 2, 2017
4f1273a
Adding check_types option to nest's assert_same_structure, map_struct…
tensorflower-gardener Apr 3, 2017
ca881f4
Fix unbound <table> tag.
tensorflower-gardener Apr 3, 2017
663eaa0
Fix the circular dependency tf/contrib/tensorboard <--> tf/tensorboard.
Apr 3, 2017
2fa9750
Add backprop of betainc op w.r.t. the x argument
ebrevdo Apr 3, 2017
c64831d
Improve docstring for eval_metrics.
tensorflower-gardener Apr 3, 2017
5359288
Increase kMaxRecursionDepth.
tensorflower-gardener Apr 3, 2017
a585ccb
Add support for running an end of training hook, specified as a
tensorflower-gardener Apr 3, 2017
24c9cd7
Expand MVN broadcast logic to broadcast batch shape between loc/scale…
jvdillon Apr 3, 2017
b63e9b5
Fix build broken by cl/151008918 by adding missing #include.
tensorflower-gardener Apr 3, 2017
fd2c8bc
Remove extra blank lines from auto-generated Python documentation.
tensorflower-gardener Apr 3, 2017
185cb83
Update ops-related pbtxt files.
tensorflower-gardener Apr 3, 2017
e8dc966
Go: Update generated wrapper functions for TensorFlow ops.
tensorflower-gardener Apr 3, 2017
c1ab70c
Make ones_like python op call ones_like kernel.
Apr 3, 2017
40c26b9
Adding Python3.6 support to ci_param_build script.
tensorflower-gardener Apr 3, 2017
242ecde
Update ops-related pbtxt files.
tensorflower-gardener Apr 3, 2017
51296e5
Expand contrib build to more than just python.
tensorflower-gardener Apr 3, 2017
0864bed
Simplified the testImportIntoNamescope test
benoitsteiner Apr 3, 2017
e18c7a5
Fix a bunch of compiler warnings.
Apr 3, 2017
01f0329
Add kernel_methods package to contrib
petrosmol Apr 3, 2017
65b5ca4
Splitting scalar strict test into an internal and external version.
tensorflower-gardener Apr 3, 2017
6a5b37e
Add a xla tf graph utility to treat xla representation as tf graph to…
tensorflower-gardener Apr 3, 2017
98602f5
Update pypi description to reflect that our manylinux1 wheel does not…
tensorflower-gardener Apr 3, 2017
0fde061
[TF distributions] Add Binomial CDF support.
ebrevdo Apr 3, 2017
32a54fd
Automated rollback of change 152048597
Apr 3, 2017
8e833fb
Eliminating some 64 bit to 32 bit silent downconversions.
Apr 4, 2017
d139cf3
Register OnesLike kernel in XLA.
Apr 4, 2017
438c13e
Add support for creating objects inside the resource manager of an XL…
tensorflower-gardener Apr 4, 2017
b657f5a
Modify the TensorBoard text dashboard so that it can support non-scal…
Apr 4, 2017
7a825ba
Add function test to tfcompile.
skye Apr 4, 2017
c3d9905
Change the state_is_tuple default to True for the remaining 3 cells: …
tensorflower-gardener Apr 4, 2017
a02a331
Fix the build in contrib/factorization.
tensorflower-gardener Apr 4, 2017
73be7de
Enable windows cmake build log print all the time.
tensorflower-gardener Apr 4, 2017
91685c1
tfdbg: exclude some nodes with apparently uninitialized tensors in te…
caisq Apr 4, 2017
ac3c768
[TF:XLA] Failures from ComputationBuilder::IsConstant() and Computati…
hawkinsp Apr 4, 2017
af21dee
Adds alt text for images of equations in mnist beginners tutorial.
tensorflower-gardener Apr 4, 2017
e227115
Adds a function in ModelFnOps to create an equivalent EstimatorSpec. …
Apr 4, 2017
9c4124c
Fix two ongoing breakages in Jenkins postsubmit
caisq Apr 4, 2017
0873aa5
Fix all 64/32 bit warnings in core/common_runtime.
Apr 4, 2017
5ee21f2
fixing merge conflicts
rohan100jain Apr 4, 2017
Allow variable reuse in function.
Change: 151807504
tensorflower-gardener committed Mar 31, 2017
commit e4a8dc831dbf2894c79659d50aea73999c1ff173
7 changes: 7 additions & 0 deletions tensorflow/python/framework/function.py
@@ -299,6 +299,7 @@ def getvar(self,
              shape=None,
              dtype=None,
              initializer=None,
+             reuse=None,
              trainable=True,
              collections=None,  # pylint: disable=redefined-outer-name
              use_resource=None,
@@ -319,6 +320,7 @@ def getvar(self,
           shape=shape,
           dtype=dtype,
           initializer=initializer,
+          reuse=reuse,
           trainable=trainable,
           collections=collections,
           use_resource=use_resource)
@@ -886,6 +888,11 @@ def foo(x, y):
   default graph. Because the addition of the function into the graph
   is deferred, the decorator can be used anywhere in the program.
 
+  Any variables created inside of the function are hoisted into the outer graph.
+  Note that the variables are created in the variable scope that was active
+  during the first call to the function. Subsequent function calls will refer to
+  the same set of variables.
+
   Definitions of functions are frozen in a graph as soon as the graph is used to
   create a session. Therefore, nodes using the function must be created in the
   graph before the corresponding session is created.
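
The docstring paragraph added above describes the behavior this commit enables: variables created inside a defined function are hoisted into the outer graph, created under the variable scope active at the first call, and shared by later calls. A minimal usage sketch of that behavior, not part of the diff, written against the TF 1.x API that the new tests below exercise (the names Foo, v, vs1, vs2 are illustrative):

    import tensorflow as tf
    from tensorflow.python.framework import function

    @function.Defun(tf.float32)
    def Foo(x):
      # get_variable inside the function creates the variable in the outer
      # graph, under the variable scope active at the first call site.
      v = tf.get_variable("v", shape=[10], dtype=tf.float32,
                          initializer=tf.ones_initializer())
      return x + v

    inp = tf.placeholder(tf.float32, shape=[10])
    with tf.variable_scope("vs1"):
      out1 = Foo(inp)  # first call: creates vs1/v
    with tf.variable_scope("vs2"):
      out2 = Foo(inp)  # later call: refers to the same vs1/v, no new variable

    print([v.name for v in tf.global_variables()])  # expected: ['vs1/v:0']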
52 changes: 52 additions & 0 deletions tensorflow/python/framework/function_test.py
@@ -722,6 +722,58 @@ def Bar(x):
     y = Bar(array_ops.zeros([1, 2, 3]))
     self.assertAllEqual(y.get_shape().as_list(), [1, 1, 2, 3])
 
+  def testVariableReuse(self):
+    def LinearWithReuse(input_tensor, reuse=None):
+      size = input_tensor.shape.dims[1]
+      with variable_scope.variable_scope("linear", reuse=reuse):
+        w = variable_scope.get_variable("w", shape=[size, size],
+                                        dtype=input_tensor.dtype)
+      return math_ops.matmul(input_tensor, w)
+
+    @function.Defun(dtypes.float32)
+    def Foo(inputs):
+      inputs = array_ops.reshape(inputs, [32, 100])
+      hidden = LinearWithReuse(inputs)
+      return LinearWithReuse(hidden, reuse=True)
+
+    input_op = array_ops.placeholder(shape=[32, 100], dtype=dtypes.float32)
+    output_op = Foo(input_op)
+
+    global_vars = variables.global_variables()
+    self.assertEqual(len(global_vars), 1)
+    self.assertEqual(global_vars[0].name, "linear/w:0")
+
+    with session.Session() as sess:
+      sess.run(variables.global_variables_initializer())
+      output_val = sess.run(output_op,
+                            feed_dict={input_op: np.random.rand(32, 100)})
+      self.assertEqual(output_val.shape, (32, 100))
+
+  def testFunctionCallInDifferentVariableScopes(self):
+    @function.Defun(dtypes.float32)
+    def Foo(inputs):
+      var = variable_scope.get_variable("var", shape=[10], dtype=dtypes.float32,
+                                        initializer=init_ops.ones_initializer())
+      return inputs + var
+
+    input_op = array_ops.placeholder(shape=[10], dtype=dtypes.float32)
+    with variable_scope.variable_scope("vs1"):
+      out1_op = Foo(input_op)
+
+    with variable_scope.variable_scope("vs2"):
+      out2_op = Foo(input_op)
+
+    global_vars = variables.global_variables()
+    self.assertEqual(len(global_vars), 1)
+    self.assertEqual(global_vars[0].name, "vs1/var:0")
+
+    with session.Session() as sess:
+      sess.run(variables.global_variables_initializer())
+      out1, out2 = sess.run([out1_op, out2_op],
+                            feed_dict={input_op: np.linspace(1, 10, 10)})
+      self.assertAllEqual(out1, np.linspace(2, 11, 10))
+      self.assertAllEqual(out2, np.linspace(2, 11, 10))
+
 
 class FunctionsFromProtos(test.TestCase):
 
5 changes: 4 additions & 1 deletion tensorflow/python/ops/variable_scope.py
@@ -904,6 +904,7 @@ def get_variable(self,
                    dtype=None,
                    initializer=None,
                    regularizer=None,
+                   reuse=None,
                    trainable=True,
                    collections=None,
                    caching_device=None,
@@ -920,6 +921,8 @@ def get_variable(self,
       partitioner = self._partitioner
     if custom_getter is None:
       custom_getter = self._custom_getter
+    if reuse is None:
+      reuse = self._reuse
 
     full_name = self.name + "/" + name if self.name else name
     # Variable names only depend on variable_scope (full_name here),
@@ -942,7 +945,7 @@
 
     return var_store.get_variable(
         full_name, shape=shape, dtype=dtype, initializer=initializer,
-        regularizer=regularizer, reuse=self.reuse, trainable=trainable,
+        regularizer=regularizer, reuse=reuse, trainable=trainable,
         collections=collections, caching_device=caching_device,
        partitioner=partitioner, validate_shape=validate_shape,
        use_resource=use_resource, custom_getter=custom_getter)
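
Taken together, the change threads a per-call reuse argument from the function graph's getvar custom getter (function.py) through VariableScope.get_variable (variable_scope.py), falling back to the scope's own reuse flag when the argument is None. That is what lets a variable_scope(..., reuse=True) opened inside a Defun share weights with an earlier call. A hedged sketch mirroring the new testVariableReuse test above (TF 1.x API; linear and TwoLayers are illustrative names, not part of the diff):

    import tensorflow as tf
    from tensorflow.python.framework import function

    def linear(x, reuse=None):
      # With this change, an explicit reuse=True is honored even when the call
      # happens inside a function body.
      with tf.variable_scope("linear", reuse=reuse):
        w = tf.get_variable("w", shape=[100, 100], dtype=x.dtype)
      return tf.matmul(x, w)

    @function.Defun(tf.float32)
    def TwoLayers(x):
      x = tf.reshape(x, [32, 100])
      h = linear(x)                 # first call creates linear/w
      return linear(h, reuse=True)  # second call reuses linear/w

    inp = tf.placeholder(tf.float32, shape=[32, 100])
    out = TwoLayers(inp)
    print([v.name for v in tf.global_variables()])  # expected: ['linear/w:0']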