Commit 73534c0

checkpoints_keep_amount -> checkpoints_max_keep and added to base_config under train.

dekked committed Oct 31, 2018
1 parent 2b25c89 commit 73534c0
Showing 3 changed files with 5 additions and 1 deletion.
2 changes: 2 additions & 0 deletions luminoth/models/fasterrcnn/base_config.yml

@@ -31,6 +31,8 @@ train:
   save_timeline: False
   # The frequency, in seconds, that a checkpoint is saved.
   save_checkpoint_secs: 600
+  # The maximum number of checkpoints to keep
+  checkpoints_max_keep: 1
   # The frequency, in number of global steps, that the summaries are written to
   # disk.
   save_summaries_steps:
2 changes: 2 additions & 0 deletions luminoth/models/ssd/base_config.yml

@@ -24,6 +24,8 @@ train:
   save_timeline: False
   # The frequency, in seconds, that a checkpoint is saved.
   save_checkpoint_secs: 600
+  # The maximum number of checkpoints to keep
+  checkpoints_max_keep: 1
   # The frequency, in number of global steps, that the summaries are written to disk
   save_summaries_steps:
   # The frequency, in secs, that the summaries are written to disk.
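Since the option now lives in `base_config.yml` under `train`, a user config can override it there. The snippet below is a hypothetical user config fragment (the filename and the value 5 are illustrative, not from this commit), assuming Luminoth's usual behavior of merging a user config over the base config:

```yaml
# custom.yml (hypothetical) -- keep the 5 most recent checkpoints
# instead of the default 1 from base_config.yml.
train:
  checkpoints_max_keep: 5
```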
2 changes: 1 addition & 1 deletion luminoth/train.py

@@ -108,7 +108,7 @@ def run(config, target='', cluster_spec=None, is_chief=True, job_name=None,
     model_saver = tf.train.Saver(
         set(tf.global_variables()) - set(slot_variables),
         name='model_saver',
-        max_to_keep=config.get('checkpoints_keep_amount', 1),
+        max_to_keep=config.train.get('checkpoints_max_keep', 1),
     )

     # Create saver for loading pretrained checkpoint into base network
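The `train.py` change matters beyond the rename: the old code called `get()` on the top-level config, but the option is defined under the `train` section, so the old lookup would always fall back to the default. A minimal sketch with plain dicts (standing in for Luminoth's actual config object, which also supports attribute access like `config.train`) illustrates the difference:

```python
# Stand-in for a parsed config where the user raised the limit to 5.
# (Plain dicts here; Luminoth's real config object behaves similarly
# for .get() lookups.)
config = {
    "train": {
        "save_checkpoint_secs": 600,
        "checkpoints_max_keep": 5,
    }
}

# Old lookup: wrong key name at the wrong level, so the user's setting
# is silently ignored and the default of 1 is always used.
old_value = config.get("checkpoints_keep_amount", 1)

# New lookup: the renamed key, read from the `train` section.
new_value = config["train"].get("checkpoints_max_keep", 1)

print(old_value, new_value)  # 1 5
```

The resulting value is passed to `tf.train.Saver(max_to_keep=...)`, which caps how many recent checkpoint files the saver retains on disk.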
