A Cardiotocographic Classification using Feature Selection: A Comparative Study

Septian Eko Prasetyo, Pulung Hendro Prastyo, Shindy Arti

Abstract

Cardiotocography is a series of examinations performed to assess fetal health during pregnancy. The examination records the fetal heart rate to determine whether the fetus is in a healthy condition or not; uterine contractions are also recorded to assess the condition of the fetus. Fetal health is classified into three conditions: normal, suspect, and pathological. This paper compares classification algorithms for diagnosing the results of cardiotocographic examination. Experiments were conducted both with and without feature selection: CFS Subset Evaluation, Info Gain, and Chi-Square were used to select the features most relevant to the class. The data set was obtained from the UCI Machine Learning Repository, which is freely available. The performance of each classification algorithm was evaluated using Precision, Recall, F-Measure, MCC, ROC area, PRC area, and Accuracy. The results show that all algorithms provide fairly good classification; however, the combination of the Random Forest algorithm and Info Gain feature selection gives the best result, with an accuracy of 93.74%.
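The pipeline the abstract describes (filter-based feature selection followed by classification) can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: information gain is approximated by mutual information, the data here is synthetic, and the chosen `k=10` is an arbitrary assumption — the paper uses the 3-class UCI Cardiotocography data set.

```python
# Hedged sketch: filter feature selection (information gain approximated by
# mutual information) followed by a Random Forest classifier, mirroring the
# general scheme described in the abstract. Synthetic stand-in data is used
# in place of the UCI Cardiotocography set (21 features, 3 classes).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Synthetic 3-class data with 21 features, echoing the CTG feature count.
X, y = make_classification(n_samples=500, n_features=21, n_informative=8,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Keep the k features with the highest mutual information, then classify.
# k=10 is an illustrative choice, not a value taken from the paper.
model = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=10)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"accuracy: {acc:.3f}")
```

The same skeleton accommodates the other filters compared in the paper by swapping the score function (e.g. `chi2` for Chi-Square on non-negative features).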

Article Details

How to Cite
Prasetyo, S., Prastyo, P., & Arti, S. (2021, March 31). A Cardiotocographic Classification using Feature Selection: A Comparative Study. JITCE (Journal of Information Technology and Computer Engineering), 5(01), 25-32. https://doi.org/10.25077/jitce.5.01.25-32.2021
Section
Articles
