AI Policy Refresh: Practical, Safe and Achievable AI Use for NFPs

March 5, 2026 | 4:00 PM NZDT | Past event

New Zealand's July 2025 launch of its first national AI strategy has thrust not-for-profits into a race to update policies amid surging AI adoption and persistent governance gaps.

Key takeaways

  • New Zealand released its inaugural AI Strategy in July 2025, promoting a light-touch regulatory approach that encourages widespread adoption while relying on existing laws for risk management.
  • While over 80% of nonprofits globally now use AI tools, only 10-24% have formal policies, exposing them to risks like data breaches, bias, and loss of donor trust.
  • For New Zealand NFPs handling sensitive community data, outdated or absent AI policies risk compliance failures under privacy laws and erode credibility with funders demanding ethical safeguards.

AI Governance Urgency for NFPs

New Zealand's government released its first national AI Strategy in July 2025, aiming to accelerate private-sector adoption—including for non-profits—by providing voluntary responsible AI guidance and aligning with OECD principles. This marks a shift after years of lag, as New Zealand was the last OECD country without such a framework, leaving organisations to navigate AI's rapid evolution without clear national direction.

Adoption has surged: globally, 92% of nonprofits now use AI for tasks like drafting and data analysis, yet most operate without structured oversight. In New Zealand, small NFPs and charities face similar patterns, often adopting tools informally amid resource constraints. The Ministry of Business, Innovation and Employment's Responsible AI Guidance for Businesses explicitly includes non-profits, urging trustworthy deployment to address ethical, legal, and social challenges.

The stakes are concrete. Without updated policies, NFPs risk privacy violations under the Privacy Act 2020, especially when handling vulnerable populations' data. High-profile global incidents underscore how even large entities falter; smaller organisations with limited IT capacity are more vulnerable to shadow AI use—unapproved tools creating untracked risks. Funders increasingly scrutinise ethical AI practices, and lack of governance can jeopardise grants or partnerships.

Tensions arise between opportunity and caution. AI promises efficiency for under-resourced NFPs—streamlining operations or enhancing impact measurement—but unchecked use can amplify biases or erode public trust in mission-driven work. New Zealand's light-touch approach avoids heavy regulation but places responsibility on organisations to self-govern, creating a trade-off: faster innovation in exchange for greater self-imposed diligence. Equity for Māori and Pacific communities adds a less obvious dimension, with official guidance stressing inclusive design and deployment to prevent AI from exacerbating existing inequities.

As tools evolve quickly, policies drafted even a year ago may no longer suffice, making regular refreshes essential for NFPs to stay defensible and effective.
