
Concerns raised over early years students using artificial intelligence to cheat on assignments

Artificial Intelligence (AI) is now so sophisticated that it can ‘learn’ qualifications and write assignments for specific units, an early years tech expert has found.

Tools such as ChatGPT can now draft model answers in a particular writing style, prompting concerns that written work can no longer be relied on to assess knowledge.

Samia Kazi, co-founder of edtech training platform Global Childhood Academy, uploaded an NCFE Level 3 Early Years Educator handbook to a premium version of ChatGPT and told it to use the learning outcomes in the handbook to structure an answer for a unit on child development. The chatbot produced a textbook response, which it was then able to remodel to appear as the work of a student. Written assignments form a key part of assessing a student’s knowledge in childcare qualifications.

‘Students can use AI for all of that, therefore that whole qualification needs to change. This needs to be done yesterday,’ Kazi told Nursery World, adding that ‘the young generation are getting really good at analysing [AI] and putting their own stamp onto it’.

Misuse of AI could 'put children at risk'

AI misuse is a growing concern across all sectors. More than half of UK undergraduates who took part in a Higher Education Policy Institute (HEPI) survey said they use AI to help with essays, with 5 per cent admitting to copying unedited AI-generated text, and early years training providers now say they are coming across students using artificial intelligence to cheat on assignments.

The news has sparked concerns that gaps in knowledge could ultimately put children at risk. Poor practices in safe sleep and weaning, which are both covered in early years qualifications, have already been found to be contributing factors in the recent deaths of babies Genevieve Meehan and Oliver Steeper in nursery settings.

‘The Level 3 is a full and relevant qualification - someone can end up running a setting with one. If someone doesn’t have the knowledge, it could end up putting the children at risk,’ says Judith Saxon, training quality manager at the Early Years Alliance.

Saxon said she had a student who used ChatGPT to write a four-page assignment, which they passed off as their own. The student’s work was picked up because the writing style was different, but as the tech advances, she admits, ‘It is going to get harder and harder to spot.’

According to the training quality manager, tools such as StealthGPT, which aims to generate ‘undetectable AI content’ reflecting the user’s ‘style and intellect’, can even mimic a student’s specific writing style if given access to their past work.

The Alliance tries to mitigate this by ‘getting to know the learner really well’, says Saxon: encouraging students to use personal examples in their work, carrying out extra verbal assessments, and using tools such as AI checkers. Even so, she has found that an AI-generated response can be tweaked ‘until an AI checker says it is human-generated, even though it wasn’t.’

Training companies without such safeguards could find misuse harder to spot, she adds. ‘There’s always a risk when money is involved. Your greatest expense when delivering qualifications is all the staff hours. You could have some unscrupulous [assessors] who just feel they can get people through and if AI misuse is not detected it doesn’t matter – they still get their outcome payments when the learner completes.’

Identifying use of AI

Cheating using AI can, if detected, lead to students being disqualified. Guidance from the Joint Council for Qualifications (JCQ), produced with the support of Ofqual, says students must reference any AI use and show clearly how they have used it.

However, the guidance makes no mention of premium AI tools analysing uploaded documents. It provides a list of potential indicators of AI misuse, including American spelling, a lack of quotes and local knowledge, or writing that is inconsistent with a candidate’s style, but all of these can be overcome with further prompts.

NCFE said it has trained examiners to identify signs of AI use, restricted the use of AI tools within its external assessment system, and provided guidance on AI for centres and learners.

Janet King, sector manager for education and childcare at NCFE, said that ‘formative [ongoing] assessments, combined with the practical application of knowledge in an early years setting, ensures there are no shortcuts to obtaining a full and relevant license to practice.’

She said, ‘Assessment design can also go some way to mitigating the problems generative AI brings. For example, breaking down assessments into smaller tasks that are submitted separately.

‘Where appropriate, the use of AI can form part of the learning process,’ she added, but ‘any learner that submits work which is not their own, or which shows other misuse of AI, will be investigated for malpractice.’

An Ofqual spokesperson said the regulator is aware of the capabilities of premium AI tools, adding, ‘While instances of cheating using AI are low, we continue to assess the risks and are considering appropriate longer-term interventions.’

Ofqual says it requires awarding organisations to ‘assess the risk that AI poses to their assessments, and to monitor for, prevent and take action against cheating’, sharing any relevant information from such cases with other awarding organisations.

It also requests that all awarding organisations tell it how they are identifying and managing AI-related malpractice risks, and describe the support they offer to schools and colleges.

Additionally, awarding organisations have to identify cases of cheating using AI when reporting malpractice to Ofqual.