Policies, procedures, and guidelines: are universities effectively ensuring AI (academic integrity) in the era of generative AI?
Abstract
The objective of this study was to analyze Generative AI guidelines and policies at
Canadian universities, examining how these universities are ensuring academic integrity in the
face of challenges posed by the use of Generative AI tools in academic work. Focusing on assessment
redesign, citation of AI-generated content, and AI detection, the study employed qualitative document analysis
of policies and guidelines from the top twenty Canadian universities according to Times Higher
Education World Rankings. This purposive sampling strategy, centered on leading institutions across
different provinces, aimed to provide a representative overview of best practices and emerging
trends in the development of Generative AI policies and guidelines. The analysis revealed both
commonalities and differences in institutional approaches. While universities generally emphasize
transparency through documentation, updated academic integrity policies, and instructor
autonomy in AI use, they differ in their approaches to AI detection tools, as well as AI
acknowledgment and citation. These results illustrate the varied strategies Canadian universities employ to
address the complexities of Generative AI in academic environments. The study identifies key
recommendations for instructors, students, researchers, and staff, offering a foundation for
developing comprehensive Generative AI guidelines at the university level.