Surveillance Pedagogy: The Psychological and Pedagogical Risks of AI-Based Behavioral Analytics in Digital Classrooms
DOI:
https://doi.org/10.63056/
Keywords:
Surveillance Pedagogy, Psychological and Pedagogical Risks of AI-Based Behavioral Analytics, Digital Classrooms
Abstract
Background: The rise of AI-based surveillance in education has introduced tools that track student behavior, emotions, and attention in real time. Though marketed as innovations for improving learning outcomes, these systems risk compromising student privacy, increasing anxiety, and narrowing pedagogical approaches. As schools adopt such technologies with limited oversight, it is crucial to investigate their broader implications for mental health, teaching practices, and educational equity.

Objectives: This study aimed to investigate the psychological impact of AI-based surveillance on students' mental health, stress, and motivation in digital classrooms; to evaluate how AI-driven behavioral analytics influence pedagogical practices such as teacher decision-making, student engagement, and instructional design; and to explore the ethical, legal, and equity concerns related to data privacy, algorithmic bias, and student consent in educational surveillance systems.

Methods: The study used a qualitative multi-case design to explore the psychological, pedagogical, and ethical impacts of AI-based surveillance in digital classrooms. Three institutions using tools such as facial recognition and emotion AI were purposefully selected, yielding 60–70 participants including students, teachers, and policymakers. Data from interviews, focus groups, and documents were thematically analyzed in NVivo following Braun and Clarke's method. Cross-case analysis revealed both common and context-specific issues around stress, instructional shifts, and data ethics.

Results: Findings reveal that AI surveillance heightens student stress, reduces intrinsic motivation, and fosters performative behaviors. Pedagogically, it reorients teaching toward data compliance and reduces teacher agency. Ethically, the study identifies serious concerns regarding privacy, consent, algorithmic bias, and the disproportionate impact on marginalized learners. Grounded in the emerging framework of surveillance pedagogy, the research calls for a human-centered, transparent, and equity-focused approach to educational AI.

Conclusion: AI-based surveillance in digital classrooms intensifies student anxiety, undermines genuine engagement, and erodes intrinsic motivation. It alters pedagogy by pushing teachers to conform to algorithmic norms, often sidelining professional judgment and diverse learning styles. Ethical risks, such as opaque consent, algorithmic bias, and unequal impacts, threaten fairness and inclusivity. These insights underscore the urgent need for human-centered policies that safeguard dignity, equity, and agency in AI-driven education.
License
Copyright (c) 2025 Muhammad Nawaz, Naila Awan, Sahira Ahmed, Asma Mustafa (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.