Trainee teachers made more accurate assessments of learning difficulties after receiving feedback from AI

Example of automated AI-generated feedback on a trainee teacher's assessment of a student with learning difficulties (German, with translated annotations). Credit: Michael Sailer, LMU Munich

A trial in which trainee teachers who were learning to identify students with potential learning difficulties had their work 'marked' by artificial intelligence found that the approach significantly improved their reasoning.

The study, with 178 trainee teachers in Germany, was conducted by a research team led by academics from the University of Cambridge and Ludwig-Maximilians-Universität München (LMU Munich). It provides some of the first evidence that artificial intelligence (AI) could improve teachers' 'diagnostic reasoning': the ability to collect and evaluate evidence about a student and draw appropriate conclusions, so that the student can be given tailored, personalized support.

During the trial, trainees were asked to assess six fictitious 'simulated' students with potential learning difficulties. They were given samples of the students' schoolwork, as well as other information such as behavior records and transcripts of conversations with parents. They then had to decide whether each student had a learning difficulty, such as dyslexia or attention deficit hyperactivity disorder (ADHD), and explain their reasoning.

Immediately after submitting their answers, half of the trainees received a prototypical 'expert solution', written in advance by a qualified professional, to compare with their own. This reflects the practice material that trainee teachers usually receive outside taught classes. The rest received AI-generated feedback, which highlighted the correct parts of their solution and pointed out aspects they could have improved.

After completing the six preparatory exercises, the trainees then took two similar follow-up tests, this time without any feedback. The tests were scored by the researchers, who assessed both the trainees' 'diagnostic accuracy' (whether they had correctly identified cases of dyslexia or ADHD) and their diagnostic reasoning: how well they had used the available evidence to make that judgment.

The average score for diagnostic reasoning among trainees who had received AI feedback during the six preliminary exercises was about 10 percentage points higher than that of trainees who had worked with the pre-written expert solutions.

The reason for this may be the 'adaptive' nature of the AI. Because it analyzed the trainee teachers' own work, rather than asking them to compare it with an expert version, the researchers believe the feedback was clearer. There is no evidence that AI of this type would improve on the individual feedback of a high-quality human tutor or mentor, but the researchers point out that such close support is not always readily available to trainee teachers who want repeated practice, especially those on large courses.

The study was part of a research project within the Cambridge LMU Strategic Partnership. The AI was developed with the support of a team from the Technical University of Darmstadt.

Riikka Hofmann, Associate Professor at Cambridge University's Faculty of Education, said: "Teachers play a vital role in recognizing signs of learning disabilities and difficulties in students and guiding them towards specialists. Unfortunately, many of them also feel they haven't had enough opportunity to practice these skills. The level of personalized guidance that trainee teachers receive on courses in Germany differs from that in the UK, but in both cases it is possible that AI could provide an additional level of individualized feedback to help them develop these essential skills."

Dr Michael Sailer, from LMU Munich, said: "We obviously do not claim that AI should replace teacher-educators: new teachers still need expert advice on how to recognize learning difficulties in the first place. It seems, however, that the AI-generated feedback helped these trainees focus on what they really needed to learn. Where personal feedback is not readily available, this could be an effective substitute."

The study used a natural language processing system: an artificial neural network capable of analyzing human language and locating certain phrases, ideas, hypotheses or assessments in the trainees' text.

It was created from the responses of an earlier cohort of pre-service teachers to a similar exercise. By segmenting and coding these responses, the team “trained” the system to recognize the presence or absence of key points in the solutions provided by the trainees during the trial. The system then selected blocks of pre-written text to give participants appropriate feedback.
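To make this pipeline concrete, here is a minimal Python sketch, under stated assumptions: a classifier flags which expert-defined key points appear in a trainee's answer, and pre-written feedback blocks are assembled accordingly. A toy keyword rule stands in for the study's trained neural network, and all key-point names, keywords and feedback texts are hypothetical, not taken from the published system.

```python
# Illustrative sketch only: a toy stand-in for the study's feedback pipeline.
# The real system used an artificial neural network trained on segmented,
# expert-coded responses; here a simple keyword rule plays the classifier's
# role. All key points, keywords and feedback texts below are hypothetical.

# Each expert-defined key point is paired with two pre-written feedback
# blocks: one confirming it was addressed, one pointing out it was missed.
FEEDBACK_BLOCKS = {
    "names_diagnosis": (
        "You committed to a plausible diagnosis for this student.",
        "You did not commit to a diagnosis; state one explicitly.",
    ),
    "cites_evidence": (
        "You backed your judgment with evidence from the materials.",
        "Try citing concrete evidence, e.g. errors in the work samples.",
    ),
}

# Toy 'classifier': keyword spotting in place of the trained network that
# recognized the presence or absence of each key point in the trainees' text.
KEYWORDS = {
    "names_diagnosis": ["dyslexia", "adhd"],
    "cites_evidence": ["because", "work sample", "transcript"],
}


def detect_key_points(answer: str) -> dict:
    """Return {key_point: present?} for a trainee's written answer."""
    text = answer.lower()
    return {point: any(kw in text for kw in kws)
            for point, kws in KEYWORDS.items()}


def generate_feedback(answer: str) -> str:
    """Assemble feedback from pre-written blocks: confirm key points
    that are present, flag those that are missing."""
    found = detect_key_points(answer)
    return "\n".join(praise if found[point] else hint
                     for point, (praise, hint) in FEEDBACK_BLOCKS.items())


if __name__ == "__main__":
    print(generate_feedback(
        "I suspect dyslexia because the work sample shows frequent "
        "letter reversals."
    ))
```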

In the preparatory exercises and follow-up tasks, trial participants were asked to work either individually or in randomly assigned pairs. Those who worked alone and received expert solutions during the preparatory exercises scored an average of 33% for their diagnostic reasoning in the follow-up tasks; those who had received the AI feedback scored 43%. Similarly, the average score for trainees working in pairs was 35% if they received the expert solution, but 45% if they received the AI feedback.

Training with the AI did not seem to have a major effect on the trainees' ability to correctly diagnose the simulated students. Instead, it appears to have made a difference by helping them sift through the various sources of information they were asked to read and cite specific evidence of potential learning difficulties. This is the main skill most teachers actually need in the classroom: the task of formally diagnosing students falls to specialist teachers, school psychologists and health professionals. Classroom teachers need to be able to communicate and substantiate their observations to specialists when they have concerns, in order to help students access appropriate support.

To what extent AI could be used more broadly to support teachers’ reasoning skills remains an open question, but the research team hopes to undertake further studies to explore the mechanisms that made it effective in this case and assess this wider potential.

Frank Fischer, Professor of Education and Educational Psychology at LMU Munich, said: “In large training programs, which are quite common in areas such as teacher training or medical training, the use of AI to support simulation-based learning could have real value. Implementing complex natural language processing tools for this purpose takes time and effort, but if it helps improve the reasoning skills of future cohorts of professionals, it may well be worth the investment.”

The research is published in Learning and Instruction.


More information:
Adaptive feedback from artificial neural networks facilitates diagnostic reasoning for pre-service teachers in simulation-based learning, Learning and Instruction (2022). DOI: 10.1016/j.learninstruc.2022.101620

Citation: Trainee teachers made more accurate assessments of learning difficulties after receiving feedback from AI (April 11, 2022). Retrieved April 11, 2022 from https://phys.org/news/2022-04-trainee-teachers-sharper-difficulties-feedback.html

This document is subject to copyright. Except for fair use for purposes of private study or research, no part may be reproduced without written permission. The content is provided for information only.
