How to run run_infer.py? I met a problem #91
Comments
It looks like there are some spelling errors in your command line args. Could you please send them again with code formatting so we can read them more easily?
Sorry, let me rephrase: the process finished with exit code 1. How should I set the path for --model_path? Or did I pass the wrong arguments to run_infer.py?
I am copying your command line arguments here, as that makes them easier to interpret and refer to:

```
python run_infer.py --gpu="0,1" --nr_types=6 --type_info_path="type_info.json" --batch_size=64 --model_mode=oright --model_path="./logs/01/net_epoch=50.tar" --nr_inference_workers=8 --nr_post_proc_workers=16 wsi --input_dir="./Dataset/CoNSeP/Test/Images/" --output_dir="./Dataset/sample_tiles/pred/" --save_thumb --save_mask
```

Your input is the CoNSeP dataset, which consists of standard image tiles, not WSIs, so you should not be using WSI mode. You have also selected an invalid model mode - this should be `original`. Please refer to this as an example of how to use the command line arguments in tile inference mode. You will also see that you do not have the correct number of nuclei types for the CoNSeP dataset - this should be 5.
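Putting those corrections together, a tile-mode invocation would look something like the sketch below. It reuses the paths from the command above (adjust them to your setup), switches to `tile` mode with `--model_mode=original` and 5 nuclei types, and drops `--save_thumb`/`--save_mask`, which are WSI-mode options:

```
python run_infer.py \
    --gpu="0,1" \
    --nr_types=5 \
    --type_info_path="type_info.json" \
    --batch_size=64 \
    --model_mode=original \
    --model_path="./logs/01/net_epoch=50.tar" \
    --nr_inference_workers=8 \
    --nr_post_proc_workers=16 \
    tile \
    --input_dir="./Dataset/CoNSeP/Test/Images/" \
    --output_dir="./Dataset/sample_tiles/pred/"
```

The type_info.json should also describe 5 entries (background plus the 4 grouped CoNSeP classes) to match `--nr_types=5`.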
@zerodohero For everyone's sake, please read what you typed as the command.
Also, the internal PyTorch error clearly indicates what is wrong with it; please read it and try to understand it.
I am only closing the thread because this doesn't seem to be going anywhere when you don't seem to understand what we said. As far as the problem goes, the cause is in the input format, and we have already pointed it out.
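For anyone who lands here with the same size-mismatch error: every `conva.weight` in the trace below is 5x5 in the checkpoint but 3x3 in the freshly built network, which is exactly the `original`-mode vs `fast`-mode difference, and the type head (`decoder.tp.u0.conv.weight`) has 5 output channels, i.e. `--nr_types=5`. A minimal sketch for inspecting a checkpoint before running inference; it assumes the weights are nested under a `desc` key, as this repo's training checkpoints appear to be, so adjust if yours differ:

```python
import torch

# Load the checkpoint on CPU so no GPU is needed for inspection.
ckpt = torch.load("./logs/01/net_epoch=50.tar", map_location="cpu")

# Assumption: this repo's training checkpoints nest the weights under "desc".
# Fall back to the raw object if that key is absent.
state_dict = ckpt["desc"] if isinstance(ckpt, dict) and "desc" in ckpt else ckpt

for name, tensor in state_dict.items():
    # A "module." prefix means the model was saved in data-parallel mode.
    short = name.replace("module.", "", 1)
    # 5x5 decoder "conva" kernels imply original mode, 3x3 imply fast mode;
    # the first dim of the tp head gives the nr_types the checkpoint expects.
    if short.endswith("conva.weight") or short == "decoder.tp.u0.conv.weight":
        print(short, tuple(tensor.shape))
```

For the checkpoint in this issue, this would print `decoder.tp.u0.conv.weight (5, 64, 1, 1)`, confirming the weights expect `--model_mode=original` and `--nr_types=5`.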
Hello, I trained this network on the CoNSeP dataset and obtained the training weights for each epoch (...). How should I use these weights? I tried to run run_infer.py like this:

```
--gpu="0,1" --nr_types=6 --type_info_path="type_info.json" --batch_size=64 --model_mode=oright --model_path="./logs/01/net_epoch=50.tar" --nr_inference_workers=8 --nr_post_proc_workers=16 wsi --input_dir="./Dataset/CoNSeP/Test/Images/" --output_dir="./Dataset/sample_tiles/pred/" --save_thumb --save_mask
```

However, the following error is displayed:

```
/home/dc2-user/anaconda3/envs/pytorch160/bin/python /data0/hover_net-master/run_infer.py --gpu=0,1 --nr_types=6 --type_info_path=type_info.json --batch_size=64 --model_mode=oright --model_path=./logs/01/net_epoch=50.tar --nr_inference_workers=8 --nr_post_proc_workers=16 wsi --input_dir=./Dataset/CoNSeP/Test/Images/ --output_dir=./Dataset/sample_tiles/pred/ --save_thumb --save_mask
|2021-01-19|08:35:52.802| [INFO] .... Detect #GPUS: 2
WARNING: Detect checkpoint saved in data-parallel mode. Converting saved model to single GPU mode.
Traceback (most recent call last):
File "/data0/hover_net-master/run_infer.py", line 181, in
infer = InferManager(**method_args)
File "/data0/hover_net-master/infer/base.py", line 27, in init
self.__load_model()
File "/data0/hover_net-master/infer/base.py", line 68, in __load_model
net.load_state_dict(saved_state_dict, strict=True)
File "/home/dc2-user/anaconda3/envs/pytorch160/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for HoVerNet:
size mismatch for decoder.tp.u3.conva.weight: copying a param with shape torch.Size([256, 1024, 5, 5]) from checkpoint, the shape in current model is torch.Size([256, 1024, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.0.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.1.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.2.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.3.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.4.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.5.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.6.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u3.dense.units.7.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u2.conva.weight: copying a param with shape torch.Size([128, 512, 5, 5]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for decoder.tp.u2.dense.units.0.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u2.dense.units.1.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u2.dense.units.2.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u2.dense.units.3.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.tp.u1.conva.weight: copying a param with shape torch.Size([64, 256, 5, 5]) from checkpoint, the shape in current model is torch.Size([64, 256, 3, 3]).
size mismatch for decoder.tp.u0.conv.weight: copying a param with shape torch.Size([5, 64, 1, 1]) from checkpoint, the shape in current model is torch.Size([6, 64, 1, 1]).
size mismatch for decoder.tp.u0.conv.bias: copying a param with shape torch.Size([5]) from checkpoint, the shape in current model is torch.Size([6]).
size mismatch for decoder.np.u3.conva.weight: copying a param with shape torch.Size([256, 1024, 5, 5]) from checkpoint, the shape in current model is torch.Size([256, 1024, 3, 3]).
size mismatch for decoder.np.u3.dense.units.0.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.1.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.2.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.3.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.4.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.5.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.6.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u3.dense.units.7.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u2.conva.weight: copying a param with shape torch.Size([128, 512, 5, 5]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for decoder.np.u2.dense.units.0.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u2.dense.units.1.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u2.dense.units.2.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u2.dense.units.3.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.np.u1.conva.weight: copying a param with shape torch.Size([64, 256, 5, 5]) from checkpoint, the shape in current model is torch.Size([64, 256, 3, 3]).
size mismatch for decoder.hv.u3.conva.weight: copying a param with shape torch.Size([256, 1024, 5, 5]) from checkpoint, the shape in current model is torch.Size([256, 1024, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.0.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.1.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.2.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.3.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.4.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.5.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.6.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u3.dense.units.7.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u2.conva.weight: copying a param with shape torch.Size([128, 512, 5, 5]) from checkpoint, the shape in current model is torch.Size([128, 512, 3, 3]).
size mismatch for decoder.hv.u2.dense.units.0.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u2.dense.units.1.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u2.dense.units.2.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u2.dense.units.3.conv2.weight: copying a param with shape torch.Size([32, 32, 5, 5]) from checkpoint, the shape in current model is torch.Size([32, 32, 3, 3]).
size mismatch for decoder.hv.u1.conva.weight: copying a param with shape torch.Size([64, 256, 5, 5]) from checkpoint, the shape in current model is torch.Size([64, 256, 3, 3]).
Process finished with exit code 1
```
How should I set the path to --model_path? Or did I pass the wrong arguments to run_infer.py?