What Are We Growing? Trust, Identity, and Education in the Age of Generative AI

March 5, 2026 | 6:00 AM AEDT

Generative AI's ability to instantly forge convincing identities and messages is eroding foundational trust in digital interactions just as regulatory deadlines and escalating fraud losses force organizations to confront the consequences.

Key takeaways

  • Recent advancements in generative AI have made deepfakes and synthetic identities mainstream threats, driving fraud losses projected to reach $40 billion in the US by 2027 and prompting a crisis in verifiable identity.
  • In education and professional settings, overreliance on AI tools risks diminishing critical thinking and academic integrity, while EU AI Act deadlines in 2026 impose strict requirements on high-risk systems used in assessments and admissions to preserve trust.
  • The core tension is between AI's efficiency gains and a less obvious erosion of human wisdom and interpersonal trust: abundant information now coexists with scarce discernment, complicating defenses against manipulation.

The Erosion of Digital Trust

Generative AI has fundamentally altered organizational and societal terrain by enabling the near-instant creation of persuasive content and the convincing fabrication of identities. What once required sophisticated resources now happens at scale with consumer-grade tools, amplifying risks in cybersecurity, fraud, and information integrity.

Deepfakes and synthetic identities have surged as tools for scams, misinformation, and credential fraud. In 2025, fraud experts reported rising encounters with synthetic identity fraud, voice deepfakes, and manipulated video, contributing to escalating financial damages. Deloitte forecasts generative AI-driven fraud losses in the US climbing from $12.3 billion in 2023 to $40 billion by 2027, a 32% compound annual growth rate. High-profile incidents, such as a $25 million transfer authorized after a deepfake video call impersonating a company's CFO, underscore the immediacy of the threat. The problem extends beyond finance into education, where AI-powered identity fraud targets student aid systems: the US Department of Education has flagged nearly 150,000 suspect identities and reported $90 million in losses tied to ineligible applicants.

Education faces parallel pressures. Surveys indicate widespread student use of generative AI for assignments, with concerns that it fosters overreliance, reduces critical thinking, and undermines assessment validity. Faculty reports highlight diminished attention spans, increased cheating, and eroded trust in academic outputs. Institutions grapple with redesigning assessments and policies amid ambiguity, as inadequate governance exposes them to risks in privacy, bias, and equity. The OECD's Digital Education Outlook 2026 warns that unmanaged risks in access, privacy, ethics, and bias could hinder AI's potential to enhance learning.

Regulatory responses add concrete stakes. The EU AI Act, effective since August 2024, prohibits certain practices and imposes obligations phased in through 2026-2027. High-risk AI systems in education—such as those for admissions, evaluations, or cheating detection—face compliance by August 2026, requiring risk management, transparency, and human oversight. General-purpose AI transparency rules took effect earlier, but the looming 2026 deadline for high-risk categories creates urgency for organizations operating in or with EU ties, with non-compliance risking fines and operational restrictions.

Less visible tensions emerge between efficiency and depth. While AI delivers abundant information and personalized support, it often shortcuts the development of wisdom and discernment. Global trust in AI systems has declined from 63% in 2022 to 56% in 2024, while the share of people expressing worry about AI has risen to 62%. In human-AI interactions, over-delegation risks automation bias and weakened skills. Stakeholders face a trade-off: embracing AI for productivity versus safeguarding cognitive autonomy and relational trust. Detection continues to lag innovation, fueling an arms race in which an eroded shared reality complicates governance and response.
