Hybrid Feature Benchmark for Blood Cell Classification Using ResNet50 and EfficientNetV2 Features with SVM and ANN Classifiers via Unsupervised Segmentation

Authors

  • Ahmad Kholish Fauzan Shobiry, Universitas Negeri Malang
  • Rahma Puspitasari, Universitas Negeri Malang

DOI:

https://doi.org/10.56705/ijaimi.v3i2.364

Keywords:

BloodMNIST, Feature Extraction, EfficientNetV2, ResNet50, Machine Learning, Medical Image Classification

Abstract

Automated blood cell classification supports hematological diagnosis by providing objective and efficient analysis, but end-to-end deep learning models often require substantial computational resources, limiting deployment on low-resource clinical devices. This study evaluates whether frozen deep features extracted with EfficientNetV2B0 or ResNet50 provide better separability for the eight BloodMNIST classes, and examines which classical classifier offers the most practical balance of accuracy, model size, and training time. The BloodMNIST dataset (11,959 training, 1,712 validation, and 3,421 test images) is processed with data augmentation and Otsu-based unsupervised segmentation; the resulting masks are replicated into three channels and passed to ImageNet-pretrained CNNs used strictly as frozen feature extractors. The extracted features are classified with a Support Vector Machine tuned by grid search, K-Nearest Neighbors, an Artificial Neural Network, and a Random Forest, with performance assessed through accuracy, precision, recall, and F1-score. EfficientNetV2 with the Support Vector Machine achieves the highest performance (76.8% test accuracy, 75.3% precision, 72.6% recall, and a 73.6% F1-score), while EfficientNetV2 with the Artificial Neural Network provides a comparable 76.2% accuracy and 73.0% F1-score with a compact 2 MB model. These findings highlight a clear trade-off between accuracy, model size, and computational cost, demonstrating that hybrid deep-feature pipelines offer lightweight and effective solutions for blood cell classification in resource-constrained clinical settings.
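As an illustration of the preprocessing step described in the abstract — Otsu-based unsupervised segmentation followed by replicating the binary mask into three channels so it can be fed to an ImageNet-pretrained CNN — the following is a minimal NumPy sketch. The function names (`otsu_threshold`, `segment_to_rgb`) are illustrative, not taken from the paper, and the CNN feature-extraction and classifier stages are omitted.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the Otsu threshold for a 2-D uint8 image by exhaustively
    maximizing the between-class variance over all 256 gray levels."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    cum = np.cumsum(prob)                       # class-0 weight up to level t
    mean = np.cumsum(prob * np.arange(256))     # cumulative intensity mean
    mu_total = mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], 1.0 - cum[t - 1]
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = mean[t - 1] / w0
        mu1 = (mu_total - mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2        # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment_to_rgb(gray):
    """Binarize with Otsu's threshold and replicate the mask into three
    channels, matching the 3-channel input a pretrained CNN expects."""
    t = otsu_threshold(gray)
    mask = np.where(gray >= t, 255, 0).astype(np.uint8)
    return np.stack([mask, mask, mask], axis=-1)
```

In the pipeline the paper describes, the resulting `(H, W, 3)` mask would then be resized to the CNN's input resolution and passed through a frozen EfficientNetV2B0 or ResNet50 backbone, with the pooled features handed to an SVM or ANN classifier.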

Published

2025-11-30