AmplifyFE – Assessing in the AI era: an AI usage scale in practice

May 22, 2026 | 12:30 PM UK Time

In UK further education, unchecked student use of generative AI in assessments now threatens qualification integrity, just as regulators clarify that AI cannot be the sole determinant of marks.

Key takeaways

  • Since ChatGPT's launch in 2022 and rapid capability jumps through 2025, student use of AI in assignments has surged, prompting educators such as those at Bridgend College to trial custom 'AI usage scales' in 2024 that guide rather than ban use.
  • Without clear guidelines, further education providers risk inconsistent academic standards, potentially invalid qualifications, and equity gaps between AI-savvy students and their peers, especially as Ofqual prohibits AI from acting as the sole marker in regulated qualifications.
  • Tensions persist between treating AI scales as supportive pedagogical tools and treating them as rigid rules, while broader UK developments, such as the DfE's January 2026 generative AI product safety standards, push for safer integration without resolving debates over assessment redesign.

Navigating AI in FE Assessments

Further education in the UK—encompassing colleges delivering vocational and skills-based qualifications—confronts generative AI's disruption to assessment at a pivotal moment. Tools like ChatGPT, now far more capable than in 2022-2023, allow students to generate essays, code, or analyses quickly, eroding traditional evaluation methods built on independent demonstration of skills.

Recent regulatory signals underscore the urgency. Ofqual's 2024 policy and its January 2026 working paper on AI in marking reaffirm that AI cannot serve as the sole determinant of marks in regulated qualifications, citing risks of bias, inaccuracy, and lack of transparency. This rules out full automation but leaves room for supportive uses, while the DfE's January 2026 generative AI product safety standards set expectations for edtech tools in education, focusing on privacy, fairness, and child safeguards, though not mandating assessment redesign.

Practitioners are responding in varied ways. In 2024, Pete Dunford at Bridgend College developed the Bridgend AI Usage Scale as a trial framework for his higher education-level courses within FE, aiming to clarify permissible AI involvement and to foster transparency with quality teams. Two years on, experience points to a shift: treating such scales as enabling tools rather than punitive restrictions can support learning, but doing so requires institutional buy-in to avoid inconsistencies.

Similar frameworks, such as the AI Assessment Scale (updated in 2024), are gaining traction beyond the UK and inform local adaptations. In FE, the stakes include maintaining Ofqual compliance for awarding bodies, avoiding qualification invalidation, and addressing equity: vocational learners, who often come from disadvantaged backgrounds, may lack equal access to AI tools or AI literacy, widening gaps if policies ignore this.

Non-obvious tensions persist. Bans prove unenforceable given AI's ubiquity; permissive approaches risk over-reliance, skill atrophy, or undetected misconduct. Meanwhile, the government's push for AI literacy and its trials (e.g., AI tutoring for disadvantaged pupils) contrasts with its caution on high-stakes assessment, leaving colleges to balance innovation against integrity without full sector-wide consensus.
