This is the official implementation of "Sketch-SparseNet: A Novel Sparse-convolution-based Framework for Sketch Recognition".

Supported models:
- ResNet 18/34/50/101
- MnasNet
- Mobilenet
- DenseNet
- Swin Transformer
- GNN
- RNN
- Transformer
Supported datasets:
- QuickDraw-414k
- TU-Berlin
- CIFAR
The code is built with the following libraries:
You can modify the config file (e.g., `configs/swin_image.yaml`) to choose or define your model for training.
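As a rough illustration of what such a config might contain, here is a minimal sketch. All key names below are assumptions for illustration only; consult the actual `configs/swin_image.yaml` in this repository for the real schema.

```yaml
# Hypothetical config sketch -- key names are assumptions,
# not this repo's actual schema.
model:
  name: swin_transformer
  num_classes: 345        # QuickDraw uses 345 sketch categories
dataset:
  name: quickdraw
  root: data/quickdraw
train:
  batch_size: 64
  epochs: 50
  optimizer:
    name: adamw
    lr: 0.0005
```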
Distributed training is supported via `torchpack`:
CUDA_VISIBLE_DEVICES=0,1,2,3 torchpack dist-run -np 4 python train_img_single.py configs/swin_image.yaml --run-dir nbenchmark/swin_sce # 4 GPUs
CUDA_VISIBLE_DEVICES=2,3 torchpack dist-run -np 2 python train_img2.py configs/quickdraw/sd3b1_image_stroke.yaml --run-dir nbenchmark/trans/resnet50_quickdraw_image_stroke_sd3b1_norm/
Single-GPU training:
python train_img_single.py configs/swin_image.yaml --run-dir nbenchmark/swin_sce --distributed False
python train_img2.py configs/quickdraw/sd3b1_image_stroke.yaml --run-dir nbenchmark/trans/resnet50_quickdraw_image_stroke_sd3b1_norm/ --distributed False
If you have any problems, feel free to open an issue.