Binary classification of bone CT images into normal/fractured classes using convolutional neural networks (CNNs). A total of 353 images (177 fractured, 176 normal) were used for training, with 20% held out for validation. All images were center-cropped to 400 x 400. Because the dataset is small, training images were augmented with a random horizontal flip (p=0.5).
Variants of VGGNet, AlexNet, and ResNet were trained to classify the images. Of the three, AlexNet achieved the best validation performance and was therefore selected as the final model for predicting the test images.
This was carried out as a course project for "Pattern Recognition and Machine Learning" taught by Prof. Taehoon Shin at Ewha Womans University.
- Mix of fractured and normal bone images
- VGGNet - "Very Deep Convolutional Networks for Large-Scale Image Recognition", 2014 (Karen Simonyan, Andrew Zisserman)
- AlexNet - "ImageNet Classification with Deep Convolutional Neural Networks", 2012 (Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton)
- ResNet - "Deep Residual Learning for Image Recognition", 2015 (Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun)
Final Model
Modified AlexNet with input size 400x400 and Dropout (p=0.8), trained with L2 regularization (lambda=1e-3).
All models were trained and tested with the PyTorch framework.
No pretrained model was used for this project.
| Accuracy | Loss |
|---|---|
Bone Fracture Detection
├── README.md
├── code
│   ├── AlexNet.ipynb
│   ├── AlexNet_Drop.ipynb
│   ├── AlexNet_Drop_L2Reg(1e-3).ipynb
│   ├── AlexNet_Drop_L2Reg(1e-4).ipynb
│   ├── ResNet16.ipynb
│   ├── ResNet26.ipynb
│   ├── VGG9.ipynb
│   └── VGG11.ipynb
└── report.pdf