
Restoring the Model #40

Open
leifsyliongka opened this issue May 25, 2017 · 3 comments

@leifsyliongka
Hello, I'm trying out your code. When I run the "Test" and "Visualize attention results" steps I get the error below. Could it be that the sample model is already outdated? Thank you.

Log

2017-05-25 09:05:29,733 root  INFO     loading data
2017-05-25 09:05:29,738 root  INFO     phase: test
2017-05-25 09:05:29,738 root  INFO     model_dir: model
2017-05-25 09:05:29,738 root  INFO     load_model: True
2017-05-25 09:05:29,738 root  INFO     output_dir: results
2017-05-25 09:05:29,738 root  INFO     steps_per_checkpoint: 500
2017-05-25 09:05:29,738 root  INFO     batch_size: 1
2017-05-25 09:05:29,738 root  INFO     num_epoch: 1000
2017-05-25 09:05:29,739 root  INFO     learning_rate: 1
2017-05-25 09:05:29,739 root  INFO     reg_val: 0
2017-05-25 09:05:29,739 root  INFO     max_gradient_norm: 5.000000
2017-05-25 09:05:29,739 root  INFO     clip_gradients: True
2017-05-25 09:05:29,743 root  INFO     valid_target_length inf
2017-05-25 09:05:29,743 root  INFO     target_vocab_size: 39
2017-05-25 09:05:29,743 root  INFO     target_embedding_size: 10.000000
2017-05-25 09:05:29,743 root  INFO     attn_num_hidden: 128
2017-05-25 09:05:29,743 root  INFO     attn_num_layers: 2
2017-05-25 09:05:29,743 root  INFO     visualize: True
2017-05-25 09:05:29,743 root  INFO     buckets
2017-05-25 09:05:29,744 root  INFO     [(16, 32), (27, 32), (35, 32), (64, 32), (80, 32)]
input_tensor dim: (?, 1, 32, ?)
CNN outdim before squeeze: (?, 1, ?, 512)
CNN outdim: (?, ?, 512)
2017-05-25 09:07:04,718 root  INFO     Reading model parameters from model/translate.ckpt-47200
Traceback (most recent call last):
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/lrs/Attention-OCR/src/model/model.py", line 204, in __init__
    self.saver_all.restore(self.sess, ckpt.model_checkpoint_path)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1428, in restore
    {self.saver_def.filename_tensor_name: save_path})
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 767, in run
    run_metadata_ptr)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _run
    feed_dict_string, options, run_metadata)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1015, in _do_run
    target_list, options, run_metadata)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1035, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.NotFoundError: Tensor name "embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_1/basic_lstm_cell/biases" not found in checkpoint files model/translate.ckpt-47200
         [[Node: save/RestoreV2_33 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_33/tensor_names, save/RestoreV2_33/shape_and_slices)]]

Caused by op u'save/RestoreV2_33', defined at:
  File "src/launcher.py", line 146, in <module>
    main(sys.argv[1:], exp_config.ExpConfig)
  File "src/launcher.py", line 142, in main
    session = sess)
  File "/home/lrs/Attention-OCR/src/model/model.py", line 198, in __init__
    self.saver_all = tf.train.Saver(tf.all_variables())
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1040, in __init__
    self.build()
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 1070, in build
    restore_sequentially=self._restore_sequentially)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 675, in build
    restore_sequentially, reshape)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 402, in _AddRestoreOps
    tensors = self.restore_op(filename_tensor, saveable, preferred_shard)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/training/saver.py", line 242, in restore_op
    [spec.tensor.dtype])[0])
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/ops/gen_io_ops.py", line 668, in restore_v2
    dtypes=dtypes, name=name)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2327, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/home/lrs/tf101/local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1226, in __init__
    self._traceback = _extract_stack()

NotFoundError (see above for traceback): Tensor name "embedding_attention_decoder/attention_decoder/multi_rnn_cell/cell_1/basic_lstm_cell/biases" not found in checkpoint files model/translate.ckpt-47200
         [[Node: save/RestoreV2_33 = RestoreV2[dtypes=[DT_FLOAT], _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_save/Const_0, save/RestoreV2_33/tensor_names, save/RestoreV2_33/shape_and_slices)]]
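A quick way to see what the checkpoint actually contains is to list its stored variable names and compare them with the name in the error message ("embedding_attention_decoder/.../basic_lstm_cell/biases"). A minimal sketch, assuming TensorFlow 1.x and the model/translate.ckpt-47200 path from the log above:

```python
# Minimal sketch (TensorFlow 1.x): list the variable names stored in the
# checkpoint so they can be compared with the name the graph fails to find.
import tensorflow as tf

ckpt_path = "model/translate.ckpt-47200"  # path taken from the log above

reader = tf.train.NewCheckpointReader(ckpt_path)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print("{} {}".format(name, shape))
```

If the names printed here differ from the ones the restore op asks for, the checkpoint was written by a different (older) TensorFlow/graph version than the one building the current model.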
@BeSlower

@leifsyliongka Same problem. Did you fix this error?

@leifsyliongka
Author
leifsyliongka commented May 30, 2017

@BeSlower I was eventually able to run the code. The error comes from the bundled sample model being out of date with respect to the current script, so the variable names no longer match. You'll have to train your own model.
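Retraining is the safe fix. For completeness, if someone wanted to try reusing the old checkpoint anyway, a possible (untested) workaround is to build a Saver with an explicit name mapping from the current graph's variables to the names the checkpoint actually stores. This is a hedged sketch, not part of the repo; `old_name()` below is a hypothetical placeholder and must be filled in from the names listed with the snippet above. If shapes or the graph layout changed too, only retraining will work.

```python
# Hedged sketch: restore an old checkpoint whose variables were saved under
# different names, by giving tf.train.Saver an explicit
# {checkpoint_name: graph_variable} mapping.
import tensorflow as tf

def old_name(var):
    # Hypothetical rename rule; adjust to match the checkpoint's real names.
    return var.op.name.replace("basic_lstm_cell/biases",
                               "BasicLSTMCell/Linear/Bias")

# Assumes the Attention-OCR model graph has already been constructed in the
# default graph, so tf.global_variables() returns its variables.
var_list = {old_name(v): v for v in tf.global_variables()}
saver = tf.train.Saver(var_list)

with tf.Session() as sess:
    saver.restore(sess, "model/translate.ckpt-47200")
```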

@13354236170

Same problem here. When I trained my own model, it worked.
