Authentic Learning Exercise for Kubernetes Misconfigurations: An Experience Report of Student Perceptions

Shazibul Islam Shamim, Fan Wu, Hossain Shahriar, Anthony Skjellum, and Akond Rahman. In IEEE Conference on Software Engineering Education and Training (CSEE&T), 2025. Pre-print

Machine learning (ML) deployment projects are used by practitioners to automatically deploy ML models. While ML deployment projects aid practitioners, security vulnerabilities in these projects can make ML deployment infrastructure susceptible to security attacks. A systematic characterization of vulnerabilities can aid in identifying activities to secure the ML deployment projects used by practitioners. To characterize such vulnerabilities, we conduct an empirical study of 149 vulnerabilities mined from 12 open source ML deployment projects. From our empirical study, we (i) find that 68 of the 149 vulnerabilities are of critical or high severity; (ii) derive 10 consequences of vulnerabilities, e.g., unauthorized access to trigger ML deployments; and (iii) observe that established quality assurance activities, such as code review, are used in the ML deployment projects. We conclude our paper by providing a set of recommendations for practitioners and researchers. The dataset used for our paper is available online.