Does anyone know how to replace EXT_CKPT to train BertExtAbs?
I used this command:
!python train.py -task abs -mode train -bert_data_path ../bert_data/cnndm -dec_dropout 0.2 -model_path ../models -sep_optim true -lr_bert 0.002 -lr_dec 0.2 -save_checkpoint_steps 2000 -batch_size 140 -train_steps 2000 -report_every 50 -accum_count 5 -use_bert_emb true -use_interval true -warmup_steps_bert 2000 -warmup_steps_dec 10000 -max_pos 512 -visible_gpus 0 -log_file ../logs/abs_bert_cnndm -load_from_extractive ../models
and then I got this error:
[2022-01-15 14:50:25,555 INFO] Namespace(accum_count=5, alpha=0.6, batch_size=140, beam_size=5, bert_data_path='../bert_data/cnndm', beta1=0.9, beta2=0.999, block_trigram=True, dec_dropout=0.2, dec_ff_size=2048, dec_heads=8, dec_hidden_size=768, dec_layers=6, enc_dropout=0.2, enc_ff_size=512, enc_hidden_size=512, enc_layers=6, encoder='bert', ext_dropout=0.2, ext_ff_size=2048, ext_heads=8, ext_hidden_size=768, ext_layers=2, finetune_bert=True, generator_shard_size=32, gpu_ranks=[0], label_smoothing=0.1, large=False, load_from_extractive='../models', log_file='../logs/abs_bert_cnndm', lr=1, lr_bert=0.002, lr_dec=0.2, max_grad_norm=0, max_length=150, max_pos=512, max_tgt_len=140, min_length=15, mode='train', model_path='../models', optim='adam', param_init=0, param_init_glorot=True, recall_eval=False, report_every=50, report_rouge=True, result_path='../results/cnndm', save_checkpoint_steps=2000, seed=666, sep_optim=True, share_emb=False, task='abs', temp_dir='../temp', test_all=False, test_batch_size=200, test_from='', test_start_from=-1, train_from='', train_steps=2000, use_bert_emb=True, use_interval=True, visible_gpus='0', warmup_steps=8000, warmup_steps_bert=2000, warmup_steps_dec=10000, world_size=1)
[2022-01-15 14:50:25,555 INFO] Device ID 0
[2022-01-15 14:50:25,555 INFO] Device cuda
[2022-01-15 14:50:25,585 INFO] Loading bert from extractive model ../models
Traceback (most recent call last):
  File "train.py", line 122, in <module>
    train_abs(args, device_id)
  File "/content/PreSumm/src/train_abstractive.py", line 273, in train_abs
    train_abs_single(args, device_id)
  File "/content/PreSumm/src/train_abstractive.py", line 303, in train_abs_single
    bert_from_extractive = torch.load(args.load_from_extractive, map_location=lambda storage, loc: storage)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 594, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 230, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 211, in __init__
    super(_open_file, self).__init__(open(name, mode))
IsADirectoryError: [Errno 21] Is a directory: '../models'
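For context: the traceback shows that the value of -load_from_extractive is passed straight to torch.load, which expects a path to a checkpoint file, not a directory, hence the IsADirectoryError for '../models'. A minimal sketch of the difference, assuming the extractive run saved a checkpoint such as ../models/model_step_18000.pt (the exact filename is an assumption and depends on your -save_checkpoint_steps and training run):

import torch

# Passing the directory reproduces the error from the traceback above:
# torch.load("../models", map_location=lambda storage, loc: storage)
# -> IsADirectoryError: [Errno 21] Is a directory: '../models'

# Pointing at a concrete checkpoint file is what train.py expects.
# The filename below is an assumption; use whichever model_step_*.pt
# your extractive training actually wrote into ../models.
ckpt_path = "../models/model_step_18000.pt"
ckpt = torch.load(ckpt_path, map_location=lambda storage, loc: storage)
print(ckpt.keys() if isinstance(ckpt, dict) else type(ckpt))  # quick sanity check of what was saved

So EXT_CKPT should be replaced by the path to one such .pt file, e.g. -load_from_extractive ../models/model_step_18000.pt, rather than by the model directory.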