-
Exploring Teacher-Chatbot Interaction and Affect in Block-Based Programming
Authors:
Bahare Riahi,
Ally Limke,
Xiaoyi Tian,
Viktoriia Storozhevykh,
Sayali Patukale,
Tahreem Yasir,
Khushbu Singh,
Jennifer Chiu,
Nicholas Lytle,
Tiffany Barnes,
Veronica Catete
Abstract:
AI-based chatbots have the potential to accelerate learning and teaching, but may also have counterproductive consequences without thoughtful design and scaffolding. To better understand teachers' perspectives on large language model (LLM)-based chatbots, we conducted a study with 11 teams of middle school teachers using chatbots for a science and computational thinking activity within a block-based programming environment. Based on a qualitative analysis of audio transcripts and chatbot interactions, we propose three profiles (explorer, frustrated, and mixed) that reflect diverse scaffolding needs. In their discussions, we found that teachers perceived chatbot benefits, such as building prompting skills and self-confidence, alongside risks, including potential declines in learning and critical thinking. Key design recommendations include scaffolding the introduction to chatbots, facilitating teacher control of chatbot features, and suggesting when and how chatbots should be used. Our contribution informs the design of chatbots to support teachers and learners in middle school coding activities.
Submitted 2 March, 2026;
originally announced March 2026.
-
AI-Generated Rubric Interfaces: K-12 Teachers' Perceptions and Practices
Authors:
Bahare Riahi,
Sayali Patukale,
Joy Niranjan,
Yogya Koneru,
Tiffany Barnes,
Veronica Cateté
Abstract:
This study investigates K--12 teachers' perceptions and experiences with AI-supported rubric generation during a summer professional development workshop ($n = 25$). Teachers used MagicSchool.ai to generate rubrics and practiced prompting to tailor criteria and performance levels. They then applied these rubrics to provide feedback on a sample block-based programming activity, followed by using a chatbot to deliver rubric-based feedback for the same work.
Data were collected through pre- and post-workshop surveys, open discussions, and exit tickets. We used thematic analysis to analyze the qualitative data. Teachers reported that they rarely create rubrics from scratch because the process is time-consuming and defining clear distinctions between performance levels is challenging.
After hands-on use, teachers described AI-generated rubrics as strong starting drafts that improved structure and clarified vague criteria. However, they emphasized the need for teacher oversight due to generic or grade-misaligned language, occasional misalignment with instructional priorities, and the need for substantial editing.
Survey results indicated high perceived clarity and ethical acceptability, moderate alignment with assignments, and usability as the primary weakness -- particularly the ability to add, remove, or revise criteria. Open-ended responses highlighted a ``strictness-versus-detail'' trade-off: AI feedback was often perceived as harsher but more detailed and scalable. As a result, teachers expressed conditional willingness to adopt AI rubric tools when workflows support easy customization and preserve teacher control.
Submitted 11 March, 2026;
originally announced March 2026.
-
Humanizing AI Grading: Student-Centered Insights on Fairness, Trust, Consistency and Transparency
Authors:
Bahare Riahi,
Viktoriia Storozhevykh,
Veronica Catete
Abstract:
This study investigates students' perceptions of Artificial Intelligence (AI) grading systems in an undergraduate computer science course (n = 27), focusing on a block-based programming final project. Guided by the ethical principles framework articulated by Jobin (2019), our study examines fairness, trust, consistency, and transparency in AI grading by comparing AI-generated feedback with original human-graded feedback. Findings reveal concerns about AI's lack of contextual understanding and personalization. We recommend that equitable and trustworthy AI systems reflect human judgment, flexibility, and empathy, serving as supplementary tools under human oversight. This work contributes to ethics-centered assessment practices by amplifying student voices and offering design principles for humanizing AI in designed learning environments.
Submitted 22 February, 2026; v1 submitted 7 February, 2026;
originally announced February 2026.
-
SnapClass: An AI-Enhanced Classroom Management System for Block-Based Programming
Authors:
Bahare Riahi,
Xiaoyi Tian,
Ally Limke,
Viktoriia Storozhevykh,
Veronica Catete,
Tiffany Barnes,
Nicholas Lytle,
Khushbu Singh
Abstract:
Block-Based Programming (BBP) platforms, such as Snap!, have become increasingly prominent in K-12 computer science education due to their ability to simplify programming concepts and foster computational thinking from an early age. While these platforms engage students through visual and gamified interfaces, teachers often face challenges in using them effectively and finding all the necessary features for classroom management. To address these challenges, we introduce SnapClass, a classroom management system integrated within the Snap! programming environment. SnapClass was developed iteratively, drawing on established research about the pedagogical and logistical challenges teachers encounter in computing classrooms. Specifically, SnapClass allows educators to create and customize block-based coding assignments based on student skill levels, implement rubric-based auto-grading, and access student code history and recovery features. It also supports monitoring student engagement and idle time, and includes a help dashboard with a raise-hand feature to assist students in real time. This paper describes the design and key features of SnapClass, both those already developed and those still in progress.
Submitted 17 December, 2025;
originally announced December 2025.
-
Comparative Analysis of STEM and non-STEM Teachers' Needs for Integrating AI into Educational Environments
Authors:
Bahare Riahi,
Veronica Catete
Abstract:
There is an increasing imperative to integrate programming platforms within AI frameworks to enhance educational tasks for both teachers and students. However, commonly used platforms such as Code.org, Scratch, and Snap fall short of providing the desired AI features and lack adaptability for interdisciplinary applications. This study explores how educational platforms can be improved by incorporating AI and analytics features to create more effective learning environments across various subjects and domains. We interviewed 8 K-12 teachers and asked about their practices and needs when using block-based programming (BBP) platforms in their classes. We asked about their approaches to assessment, course development and expansion of resources, and student monitoring in their classes. Thematic analysis of the interview transcripts revealed both commonalities and differences in the AI tools needed by the STEM and non-STEM groups. Our results point to advanced AI features that could enhance BBP platforms. Both groups stressed the need for integrity and plagiarism checks, AI adaptability, customized rubrics, and detailed feedback in assessments. Non-STEM teachers also emphasized the importance of creative assignments and qualitative assessments. Regarding resource development, both groups desired AI tools for updating curricula, tutoring libraries, and generative AI features. Non-STEM teachers were particularly interested in supporting creative endeavors, such as art simulations. For student monitoring, both groups prioritized desktop control, daily tracking, behavior monitoring, and distraction prevention tools. Our findings identify specific AI-enhanced features needed by K-12 teachers across various disciplines and lay the foundation for creating more efficient, personalized, and engaging educational experiences.
Submitted 18 September, 2025;
originally announced September 2025.