Advancing the Measurement of MOOCs Software Quality: Validation of Assessment Tools Using the I-CVI Expert Framework

Mugi Praseptiawan, Ahmad Naim Che Pee, Mohd Hafiz Zakaria, Agustinus Noertjahyana

Abstract


The growing use of MOOCs in the post-pandemic era, particularly in developing countries, requires valid assessment tools to ensure that software quality meets users' needs. However, several tools are still used without a proper content validation process, which risks producing biased and unrepresentative data. This study evaluates the content validity of an assessment instrument designed to measure software quality dimensions on Massive Open Online Course (MOOC) platforms, particularly in the context of the increased adoption of online learning post-pandemic in developing countries. The instrument comprises 27 statement items representing ten software quality factors: functionality, usability, reliability, performance, security, maintainability, portability, compatibility, support, and integration. Validation was carried out by seven experts in information systems and digital learning. The method used is the item-level content validity index (I-CVI) within a descriptive quantitative approach, with each item assessed on a 5-point Likert scale. An item is declared valid if it obtains an I-CVI score of ≥ 0.79. The analysis showed that 21 items were valid, three required revision (I-CVI between 0.70 and 0.78), and three were invalid (I-CVI < 0.70). The functionality, usability, support, and integration factors had the highest levels of validity, while the security and support dimensions showed greater divergence among the expert assessments. These findings highlight the need for content validation to ensure that MOOC quality indicators are accurate and relevant. The study also points to the need for further validation involving real users and other validation methods, such as Aiken's V or the fuzzy analytic hierarchy process (FAHP), to enhance the reliability and practical relevance of the instrument developed.
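The I-CVI scoring described above can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' actual procedure: it assumes (as is common in CVI practice, though not stated in the abstract) that ratings of 4 or 5 on the 5-point Likert scale are dichotomized as "relevant", and the item labels and ratings are hypothetical.

```python
# Illustrative I-CVI computation for a 7-expert panel.
# Assumption (not from the paper): Likert ratings >= 4 count as "relevant".

def i_cvi(ratings, relevant_min=4):
    """Proportion of experts who rated the item as relevant."""
    return sum(1 for r in ratings if r >= relevant_min) / len(ratings)

def classify(score, valid=0.79, revise=0.70):
    """Apply the cut-offs used in the study: valid / revise / invalid."""
    if score >= valid:
        return "valid"
    if score >= revise:
        return "revise"
    return "invalid"

# Hypothetical ratings from seven experts for three example items.
items = {
    "F1 functionality": [5, 5, 4, 5, 4, 4, 5],  # 7/7 relevant -> 1.00
    "S2 security":      [5, 4, 3, 4, 5, 2, 4],  # 5/7 relevant -> 0.71
    "P1 portability":   [4, 3, 2, 5, 3, 3, 4],  # 3/7 relevant -> 0.43
}
for name, ratings in items.items():
    score = i_cvi(ratings)
    print(f"{name}: I-CVI = {score:.2f} -> {classify(score)}")
```

Note that with seven experts the attainable I-CVI values are multiples of 1/7, so the "needs revision" band of 0.70–0.78 corresponds to exactly 5 of 7 experts agreeing (5/7 ≈ 0.71).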

Keywords


Software Quality; Content Validity; I-CVI; Instrument Validation; MOOC.


References


I. El Mourabit, S. J. Andaloussi, M. Miyara, and O. Ouchetto, “Identification of Online Learning Challenges During the COVID-19 Pandemic in Developing Countries: A Case Study of a Metropolis Faculty of Sciences,” Int. J. Emerg. Technol. Learn., vol. 18, no. 8, pp. 238–258, 2023, doi: 10.3991/ijet.v18i08.36747.

Y. M. Wang, C. L. Wei, W. J. Chen, and Y. S. Wang, “Revisiting the E-Learning Systems Success Model in the Post-COVID-19 Age: The Role of Monitoring Quality,” Int. J. Hum. Comput. Interact., vol. 0, no. 0, pp. 1–16, 2023, doi: 10.1080/10447318.2023.2231278.

J. Shah and M. Khanna, “What Determines MOOC Success? Validation of MOOC Satisfaction Continuance Model,” Vision, Nov. 2022, doi: 10.1177/09722629221131386.

N. A. Shichkov, “Tools and methods for management system processes validation,” no. 1 (100), pp. 176–180, 2024, doi: 10.37493/2307-907X.2024.1.20.

F. A. Fernandes, C. S. C. Rodrigues, E. N. Teixeira, and C. M. L. Werner, “Immersive Learning Frameworks: A Systematic Literature Review,” IEEE Trans. Learn. Technol., vol. 16, no. 5, pp. 736–747, 2023, doi: 10.1109/TLT.2023.3242553.

C. M. Stracke and G. Trisolini, “A systematic literature review on the quality of moocs,” Sustain., vol. 13, no. 11, pp. 1–26, 2021, doi: 10.3390/su13115817.

U. U. E. Mohamed and N. Salleh, “Measuring the Success of Massive Open Online Courses: A Mixed-Method Case Study,” 2021 IEEE Asia-Pacific Conf. Comput. Sci. Data Eng. CSDE 2021, pp. 1–5, 2021, doi: 10.1109/CSDE53843.2021.9718478.

C. Ferreira, A. R. Arias, and J. Vidal, “Quality criteria in MOOC: Comparative and proposed indicators,” PLoS One, vol. 17, no. 12, p. e0278519, Dec. 2022, doi: 10.1371/journal.pone.0278519.

J. Li and J. Liu, “The Application of Triangular FAHP to Evaluate MOOC Quality,” 2022 7th Int. Conf. Intell. Comput. Signal Process. ICSP 2022, pp. 385–390, 2022, doi: 10.1109/ICSP54964.2022.9778700.

F. Polo and J. Kantola, “Tomorrow’s Digital Worker: A Critical Review and Agenda for Building Digital Competency Models,” 2020, pp. 107–115.

C. K. Jha and V. Gupta, “Farmer’s perception and factors determining the adaptation decisions to cope with climate change: An evidence from rural India,” Environ. Sustain. Indic., vol. 10, p. 100112, Jun. 2021, doi: 10.1016/j.indic.2021.100112.

B. Li et al., “A personalized recommendation framework based on MOOC system integrating deep learning and big data,” Comput. Electr. Eng., vol. 106, no. January, p. 108571, 2023, doi: 10.1016/j.compeleceng.2022.108571.

Y. Liu, M. Zhang, D. Qi, and Y. Zhang, “Understanding the role of learner engagement in determining MOOCs satisfaction: a self-determination theory perspective,” Interact. Learn. Environ., vol. 31, no. 9, pp. 6084–6098, Dec. 2023, doi: 10.1080/10494820.2022.2028853.

R. A. Rahmat, S. B. Saidi, and N. S. Mohd Nasir, “Content Validity of Digital Knowledge using CVI Method,” Environ. Proc. J., vol. 9, no. SI20, pp. 21–28, Jul. 2024, doi: 10.21834/e-bpj.v9iSI20.6092.

I. Istyarini, N. Wahzudik, W. Wardi, N. Nurussa’adah, and C. Christian, “Validation of Reflective Model Curriculum Evaluation Instruments,” in Proceedings of the International Conference on Industrial Engineering and Operations Management, Apr. 2021, pp. 3430–3437, doi: 10.46254/SA02.20210953.

V. Shah, S. Murthy, and S. Iyer, “Is My MOOC Learner-Centric? A Framework for Formative Evaluation of MOOC Pedagogy,” Int. Rev. Res. Open Distance Learn., vol. 24, no. 2, pp. 138–161, 2023, doi: 10.19173/irrodl.v24i2.6898.

B. F. Zhu, Y. Zheng, M. Q. Ding, J. Dai, G. B. Liu, and L. T. Miao, “A pedagogical approach optimization toward sustainable architectural technology education applied by massive open online courses,” ARCHNET-IJAR Int. J. Archit. Res., vol. 17, no. 3, pp. 589–607, 2023, doi: 10.1108/ARCH-07-2022-0151.

H. Sebbaq and N. E. El Faddouli, “Towards Quality Assurance in MOOCs: A Comprehensive Review and Micro-Level Framework,” Int. Rev. Res. Open Distrib. Learn., vol. 25, no. 1, pp. 1–23, Mar. 2024, doi: 10.19173/irrodl.v25i1.7544.

S. Li et al., “Quantification and prediction of engagement: Applied to personalized course recommendation to reduce dropout in MOOCs,” Inf. Process. Manag., vol. 61, no. 1, p. 103536, 2024, doi: 10.1016/j.ipm.2023.103536.

D. Galin, Software Quality: Concepts and Practice. Wiley, 2018.

A. L. de S. Lima and F. B. V. Benitti, “UsabilityZero: Can a Bad User Experience Teach Well?,” Informatics Educ., vol. 20, no. 1, pp. 69–84, Mar. 2021, doi: 10.15388/infedu.2021.04.

W. N. W. A. Rahman, H. Zulzalil, I. Ishak, and A. W. Selamat, “Quality model for massive open online course (MOOC) web content,” Int. J. Adv. Sci. Eng. Inf. Technol., vol. 10, no. 1, pp. 24–33, 2020, doi: 10.18517/ijaseit.10.1.10192.

T. I. Kh and B. F. Ibrahim, “A Review: On performance quality aspect assessment for the IoT software applications,” 8th IEC 2022 - Int. Eng. Conf. Towar. Eng. Innov. Sustain., pp. 102–106, 2022, doi: 10.1109/IEC54822.2022.9807536.

S. J. Wafudu, Y. Bin Kamin, and D. Marcel, “Validity and reliability of a questionnaire developed to explore quality assurance components for teaching and learning in vocational and technical education,” Humanit. Soc. Sci. Commun., vol. 9, no. 1, p. 303, Sep. 2022, doi: 10.1057/s41599-022-01306-1.

J. Kasper et al., “MAPPinfo – mapping quality of health information: Validation study of an assessment instrument,” PLoS One, vol. 18, no. 10, p. e0290027, Oct. 2023, doi: 10.1371/journal.pone.0290027.

D. F. Polit, C. T. Beck, and S. V. Owen, “Is the CVI an acceptable indicator of content validity? Appraisal and recommendations,” Res. Nurs. Health, vol. 30, no. 4, pp. 459–467, Aug. 2007, doi: 10.1002/nur.20199.

B. Resnick, S. Zimmerman, D. Orwig, A.-L. Furstenberg, and J. Magaziner, “Model Testing for Reliability and Validity of the Outcome Expectations for Exercise Scale,” Nurs. Res., vol. 50, no. 5, pp. 293–299, Sep. 2001, doi: 10.1097/00006199-200109000-00007.

H. B. Li, H. L. Gu, W. Chen, and Q. K. Zhu, “Improving Massive Open Online Course Quality in Higher Education by Addressing Student Needs Using Quality Function Deployment,” SUSTAINABILITY, vol. 15, no. 22, 2023, doi: 10.3390/su152215678.

Y. Wang, “Where and what to improve? Design and application of a MOOC evaluation framework based on effective teaching practices,” Distance Educ., vol. 44, no. 3, pp. 458–475, Jul. 2023, doi: 10.1080/01587919.2023.2226601.

Z. Xie, S. Tang, C. E. Johnson, L. Xiao, J. Ding, and C. Huang, “Translation, cross-cultural adaptation and validation of the Chinese version of supportive and palliative care indicators tool (SPICT-CH) to identify cancer patients with palliative care needs,” BMC Palliat. Care, vol. 24, no. 1, p. 4, Jan. 2025, doi: 10.1186/s12904-024-01641-x.

Q. Liu, L. Yang, and J. Huang, “Development and Verification of the Evaluation Scale of MOOC Learners’Online Deep Learning Ability,” in Proceedings of the 2023 6th International Conference on Educational Technology Management, Nov. 2023, pp. 192–199, doi: 10.1145/3637907.3637973.

X. Jing et al., “Development, validation, and usage of metrics to evaluate the quality of clinical research hypotheses,” BMC Med. Res. Methodol., vol. 25, no. 1, p. 11, Jan. 2025, doi: 10.1186/s12874-025-02460-1.

S. S. Singh, S. Kumar, S. K. Meena, K. Singh, S. Mishra, and A. Y. Zomaya, “Quantum social network analysis: Methodology, implementation, challenges, and future directions,” Inf. Fusion, vol. 117, p. 102808, 2025, doi: 10.1016/j.inffus.2024.102808.




DOI: https://doi.org/10.52088/ijesty.v5i3.911



Copyright (c) 2025 Mugi Praseptiawan, Ahmad Naim Che Pee, Mohd Hafiz Zakaria, Agustinus Noertjahyana

International Journal of Engineering, Science, and Information Technology (IJESTY) eISSN 2775-2674