Generative AI Risks in UK Cyber Security Education
/ 1 min read
🛡️💻 Generative AI poses significant risks to academic integrity in cyber security education. A study of a Master's-level cyber security programme at a UK Russell Group university reveals high vulnerability to misuse of large language models (LLMs) such as ChatGPT. The research identifies independent project and report-based assessments as particularly at risk, with factors such as block teaching and a diverse international student body exacerbating the problem. To counter these challenges, the paper recommends designing LLM-resistant assessments, deploying detection tools, and fostering an ethical learning environment. These strategies aim to uphold academic standards while equipping students to navigate the complexities of real-world cyber security.
