Dynamic Background Subtraction in Moving Object Detection on Modified FCM-CS Algorithm
DOI: https://doi.org/10.56705/ijodas.v5i2.162
Keywords: Deep Learning, Fuzzy Histogram, Object Detection, Threshold
Abstract
This study applies deep learning to background subtraction in video surveillance. Captured frames often contain unwanted background elements, which makes it difficult to separate moving objects from their surroundings accurately. To address this problem, the article introduces the Improved Fuzzy C-Means with Cosine Similarity (FCM-CS) model, designed to identify moving foreground objects in surveillance camera footage. The model is evaluated against current state-of-the-art methods, and the results demonstrate strong performance on the CDnet2014 dataset.
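The abstract's core ingredients, fuzzy c-means clustering combined with a cosine-similarity distance, can be sketched generically as follows. This is not the paper's implementation: the feature vectors, function names, and toy data are assumptions for illustration. Pixels are represented by small feature vectors, standard FCM updates are run with cosine dissimilarity in place of Euclidean distance, and the resulting memberships assign each pixel to a background or foreground cluster.

```python
import numpy as np

def cosine_dist(A, B):
    """1 - cosine similarity between each row of A and each row of B."""
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
    return 1.0 - An @ Bn.T

def fcm_cosine(X, c=2, m=2.0, iters=100, seed=0):
    """Fuzzy C-Means with cosine dissimilarity (illustrative sketch).
    X: (n, d) per-pixel feature vectors. Returns (memberships U, centers)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # each row of U sums to 1
    for _ in range(iters):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        D = np.maximum(cosine_dist(X, centers), 1e-12)
        inv = D ** (-1.0 / (m - 1.0))            # standard FCM membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Toy "frame": 20 background-like and 20 foreground-like pixels, separated by
# color direction rather than magnitude (cosine distance is scale-invariant).
rng = np.random.default_rng(1)
bg = np.tile([1.0, 0.2, 0.1], (20, 1)) + 0.05 * rng.random((20, 3))
fg = np.tile([0.1, 0.2, 1.0], (20, 1)) + 0.05 * rng.random((20, 3))
X = np.vstack([bg, fg])

U, centers = fcm_cosine(X)
labels = U.argmax(axis=1)  # hard assignment: one cluster acts as background, the other as foreground
```

In a real pipeline the feature vectors would typically be per-pixel color histograms accumulated over frames, and the cluster whose center best matches the long-term statistics would be taken as background; the sketch shows only the clustering step.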
References
T. Yu and J. Yang, "Dynamic Background Subtraction Using Histograms Based on Fuzzy C-Means Clustering and Fuzzy Nearness Degree," IEEE Access, 2019.
M. D. Gregorio and M. Giordano, "Change detection with weightless neural networks," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), Jun. 2014, pp. 403–407.
D. Zeng, M. Zhu, and A. Kuijper, "Combining background subtraction algorithms with convolutional neural network," J. Electron. Imaging, vol. 28, no. 1, p. 13011, 2019.
N. Asqah and S. Suana, "Background subtraction challenges in motion detection using Gaussian mixture model," IAES Int. J. Artif. Intell. (IJ-AI), vol. 12, no. 3, 2023.
M. Yasir and Y. Ali, "Review on Real Time Background Extraction: Models, Applications, Environments, Challenges and Evaluation Approaches," 2021.
M. A. Yasir and Y. H. Ali, "Dynamic Background Subtraction in Video Surveillance Using Color-Histogram and Fuzzy C-Means Algorithm with Cosine Similarity," Int. J. Online Biomed. Eng., vol. 18, no. 9, p. 7485, 2022.
M. Hadiuzzaman, N. Haque, F. Rahman, S. Hossain, M. R. K. Siam, and T. Z. Qiu, "Pixel-based heterogeneous traffic measurement considering shadow and illumination variation," Signal Image Video Process., vol. 11, no. 7, pp. 1245–1252, 2017.
D. Zeng and M. Zhu, "Background subtraction using multiscale fully convolutional network," IEEE Access, vol. 6, pp. 16010–16021, 2018.
M. Sultana, A. Mahmood, S. Javed, and S. K. Jung, "Unsupervised deep context prediction for background estimation and foreground segmentation," in Machine Vision and Applications. New York, NY, USA: Springer, 2018, pp. 1–21.
F. Chollet, "Xception: Deep learning with depthwise separable convolutions," in Proc. CVPR, Jul. 2017, pp. 1251–1258.
D. Liang, S. Kaneko, M. Hashimoto, K. Iwata, and X. Zhao, "Co-occurrence probability-based pixel pairs background model for robust object detection in dynamic scenes," Pattern Recognit., vol. 48, pp. 1374–1390, Apr. 2015.
S.-H. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, and P. Torr, "Res2Net: A new multi-scale backbone architecture," IEEE Trans. Pattern Anal. Mach. Intell., vol. 43, no. 2, pp. 652–662, 2021.
S. Xie, R. Girshick, P. Dollár, Z. Tu, and K. He, "Aggregated residual transformations for deep neural networks," in Proc. CVPR, Jul. 2017, pp. 1492–1500.
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. CVPR, Jun. 2016, pp. 770–778.
J. Hu, L. Shen, and G. Sun, "Squeeze-and-excitation networks," in Proc. CVPR, Jun. 2018, pp. 7132–7141.
S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," 2015, arXiv:1502.03167.
J. Shi and J. Malik, "Normalized cuts and image segmentation," Departmental Papers (CIS), p. 107, 2000.
W. Song, J. Zhu, Y. Li, and C. Chen, "Image alignment by online robust PCA via stochastic gradient descent," IEEE Trans. Circuits Syst. Video Technol., vol. 26, no. 7, pp. 1241–1250, 2016.
Y.-X. Wang and H. Xu, "Noisy sparse subspace clustering," J. Mach. Learn. Res., vol. 17, no. 1, pp. 320–360, 2016.
Q. Li, Z. Sun, Z. Lin, R. He, and T. Tan, "Transformation invariant subspace clustering," Pattern Recognit., vol. 59, pp. 142–155, 2016.
J. Shen, P. Li, and H. Xu, "Online low-rank subspace clustering by basis dictionary pursuit," in Proc. Int. Conf. Mach. Learn., 2016, pp. 622–631.
J. M. McHugh, J. Konrad, V. Saligrama, and P.-M. Jodoin, "Foreground-Adaptive Background Subtraction," IEEE Signal Process. Lett., vol. 16, no. 5, 2009.
V. Mahadevan, N. Vasconcelos, N. Jacobson, Y.-L. Lee, and T. Q. Nguyen, "A Novel Approach to FRUC using Discriminant Saliency and Frame Segmentation," IEEE Trans. Image Process., vol. 19, no. 11, pp. 2924–2934, 2010.
L. Maddalena and A. Petrosino, "The SOBS algorithm: What are the limits?" in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit. Workshops, Providence, RI, USA, 2012, pp. 21–26.
T. Bouwmans, L. Maddalena, and A. Petrosino, "Scene background initialization: A taxonomy," Pattern Recognit. Lett., vol. 96, pp. 3–11, 2017.
L. Maddalena and A. Petrosino, "Towards benchmarking scene background initialization," in Proc. Int. Conf. Image Analysis and Processing, Springer, 2015, pp. 469–476.
T. Zhang, S. Liu, C. Xu, and H. Lu, "Mining semantic context information for intelligent video surveillance of traffic scenes," IEEE Trans. Ind. Informat., vol. 9, no. 1, pp. 149–160, 2013.
S. Varadarajan, P. Miller, and H. Zhou, "Spatial mixture of Gaussians for dynamic background modelling," in Proc. 10th IEEE Int. Conf. Advanced Video and Signal Based Surveillance (AVSS), 2013, pp. 63–68.
X. Lu, "A multiscale spatio-temporal background model for motion detection," in Proc. IEEE Int. Conf. Image Processing (ICIP), 2014, pp. 3268–3271.
A. Shimada, H. Nagahara, and R. Taniguchi, "Background modeling based on bidirectional analysis," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2013, pp. 1979–1986.
M. Chen, X. Wei, Q. Yang, Q. Li, G. Wang, and M.-H. Yang, "Spatiotemporal GMM for background subtraction with superpixel hierarchy," IEEE Trans. Pattern Anal. Mach. Intell., 2017.
L. Lim and H. Keles, "Foreground Segmentation Using a Triplet Convolutional Neural Network for Multiscale Feature Encoding," preprint, Jan. 2018.
K. Lim, L. Ang, and H. Keles, "Foreground Segmentation Using Convolutional Neural Networks for Multiscale Feature Encoding," Pattern Recognit. Lett., 2018.
K. Lim, L. Ang, and H. Keles, "Learning Multi-scale Features for Foreground Segmentation," arXiv preprint arXiv:1808.01477, 2018.
W. Zheng, K. Wang, and F. Wang, "Background subtraction algorithm based on Bayesian generative adversarial networks," Acta Automatica Sinica, 2018.
W. Zheng, K. Wang, and F. Wang, "A novel background subtraction algorithm based on parallel vision and Bayesian GANs," Neurocomputing, 2018.
Y. Wang, Z. Luo, and P.-M. Jodoin, "Interactive deep learning method for segmenting moving objects," Pattern Recognit. Lett., 2016.
S. Bianco, G. Ciocca, and R. Schettini, "How far can you get by combining change detection algorithms?" CoRR, abs/1505.02921, 2015.
M. Braham, S. Piérard, and M. Van Droogenbroeck, "Semantic Background Subtraction," in Proc. IEEE ICIP, Sep. 2017.
License
Authors retain copyright and full publishing rights to their articles. Upon acceptance, authors grant Indonesian Journal of Data and Science a non-exclusive license to publish the work and to identify itself as the original publisher.
Self-archiving. Authors may deposit the submitted version, accepted manuscript, and version of record in institutional or subject repositories, with citation to the published article and a link to the version of record on the journal website.
Commercial permissions. Uses intended for commercial advantage or monetary compensation are not permitted under CC BY-NC 4.0. For permissions, contact the editorial office at ijodas.journal@gmail.com.
Legacy notice. Some earlier PDFs may display “Copyright © [Journal Name]” or only a CC BY-NC logo without the full license text. To ensure clarity, the authors maintain copyright, and all articles are distributed under CC BY-NC 4.0. Where any discrepancy exists, this policy and the article landing-page license statement prevail.