PF TECH Insights
Data Sovereignty · 11 min read · Analysis

AI, Data Sovereignty, and the Modern Non-Profit Tech Stack


Greg Zatulovsky, CPA
February 11, 2026

The public debate about AI risk in the non-profit sector tends to focus on the dramatic: algorithmic bias, deepfakes, autonomous decision-making. These are legitimate concerns. But they are not the risks that are actively costing Canadian non-profits their data, their service continuity, or their operational independence right now.

The risks that are costing them these things are far less dramatic and far more preventable. And AI — deployed with the right architecture — is one of the most effective tools available for addressing them.

I want to ground this in specifics, because vague warnings about "data risk" tend to produce the same response as vague warnings about anything: polite acknowledgement and no change in behaviour.

I have worked at an organization that had to sue a software vendor to get out of a contract. The vendor had failed to deliver the services they had scoped and documented — the failure was real, the evidence was clear, and litigation was the only exit available because there were still several years on the contract and the vendor fought the termination. The root cause was not the vendor's incompetence. It was the procurement process: an agreement signed without adequate evaluation of the vendor's actual delivery capacity, without proper exit clauses, and without a technical assessment of whether the scoped services were achievable. By the time the failure was undeniable, we were locked in. Getting out cost us time, money, and operational disruption that the organization absorbed on top of everything else it was managing.

I have also volunteered for an organization that experienced a ransomware attack. The data loss was significant. The service disruption was real. Donors could not be contacted. Service users could not be reached. Programs that depended on operational data had to pause. The recovery was expensive and incomplete, because backups that were supposed to exist either did not or had not been tested.

Neither of these is a hypothetical. Both happened inside organizations doing meaningful community work, staffed by capable people who were not thinking about cybersecurity because they were thinking about their mission. That is not a criticism. It is the reality of operating under the overhead myth — when every dollar spent on infrastructure feels like a dollar taken from programs, the first thing that gets cut is the infrastructure that protects everything else.

In the AI era, both of these risk categories have expanded significantly.

The vendor lock-in risk is now compounded by the proliferation of AI tools. Many of them are free at the entry level — and as the sector has learned to say about free products: if it is free, you are the product. The data your team inputs into a freemium AI tool to draft grant proposals, summarize meeting notes, or analyze donor trends is, in many cases, being used to train the model. The privacy policies that govern this are long, frequently updated, and almost never reviewed by the finance director who approved the tool in a budget meeting.

The cybersecurity risk is compounded by AI in a different way. AI-powered attacks are faster, more targeted, and more convincing than what came before. Phishing emails that used to be identifiable by their grammar errors are now indistinguishable from legitimate communications. Social engineering that once required a skilled human operator can now be automated at scale. The attack surface has expanded and the attacker capability has increased simultaneously.

The sector's response to both of these trends, in too many organizations, is indecision. They know the risks exist. They do not know how to evaluate them. They freeze. And frozen is not safe — it just means that the decisions are being made by default rather than by design.

Data sovereignty is a governance decision that happens to have technology implications — not the other way around.

At its core, data sovereignty means your organization controls where your data resides, who can access it, under what legal framework it is protected, and under what conditions you can leave any given vendor without losing what you need. It means the answers to these questions are documented, understood by leadership, and actually enforceable — not just stated in a vendor's terms of service.

For Canadian non-profits, this has specific regulatory dimensions. PIPEDA establishes baseline privacy requirements for how personal information is handled in the course of commercial activities. Many provinces have their own privacy legislation with stricter requirements. Health data, children's data, and certain social service data carry additional frameworks. The question of whether your donor data, service user records, or volunteer information can legally be processed by a U.S.-based AI model operating under U.S. law is not a question most organizations have formally answered.

The good news — genuinely — is that compliance is achievable. It requires intention, not complexity. And it does not require your organization to become a cybersecurity operation.

Every architecture decision we have made in building PF TECH's own infrastructure — and in designing TERN — is anchored to sovereignty principles. I want to make these concrete, because concrete examples are more useful than principles alone.

A practical starting point for any organization is not a technology replacement project. It is an audit.

Inventory every application your team uses. For each one, answer four questions: What data does it hold? Where is that data physically stored? What is the vendor's data breach history? And what would it take to leave?

Most organizations that go through this exercise are surprised by what they find. Sensitive operational data in tools that nobody officially approved. Personal information in cloud applications with no data residency controls. Critical workflows dependent on a single vendor with no documented exit path.
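The audit above can be kept in something as simple as a spreadsheet, but the logic is worth making explicit. Here is a minimal sketch of the inventory as a data structure, with a check that flags any application where one of the four questions has no answer. The field names and example entries are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class Application:
    """One row of the application inventory. Field names are illustrative."""
    name: str
    data_held: str        # What data does it hold?
    storage_region: str   # Where is that data physically stored? ("unknown" if unanswered)
    breach_history: str   # What is the vendor's data breach history? ("unknown" if unanswered)
    exit_path: str        # What would it take to leave? ("none" if undocumented)

def sovereignty_gaps(apps):
    """Return every application where any of the four questions is unanswered."""
    unanswered = {"unknown", "none", ""}
    return [
        app for app in apps
        if app.storage_region.lower() in unanswered
        or app.breach_history.lower() in unanswered
        or app.exit_path.lower() in unanswered
    ]

# Hypothetical inventory entries for illustration.
inventory = [
    Application("Donor CRM", "donor contacts and giving history",
                "Canada", "none reported", "CSV export, 30-day notice"),
    Application("Free AI drafting tool", "grant proposal drafts",
                "unknown", "unknown", "none"),
]

for app in sovereignty_gaps(inventory):
    print(f"Review: {app.name}")
```

The point of the structure is not automation; it is that "unknown" becomes a visible, countable answer rather than a blank cell nobody revisits.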

The second step is intentional procurement. Before adding any new tool — especially any AI tool — evaluate it against sovereignty criteria. Free tools that process sensitive data deserve the most scrutiny, not the least. The cost of a data breach, a vendor dispute, or a ransomware recovery dwarfs the cost of a paid alternative with better privacy architecture.
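One way to make that evaluation repeatable is a simple weighted scorecard applied to every candidate tool before purchase. The criteria and weights below are a sketch of what such a rubric might look like, not a standard — each organization should set its own, but the mechanism of scoring tools consistently is the point:

```python
# Illustrative sovereignty criteria and weights; adjust to your own risk profile.
CRITERIA = {
    "data_residency_in_canada": 3,
    "no_training_on_customer_data": 3,
    "documented_exit_path": 2,
    "breach_notification_terms": 1,
}

def sovereignty_score(answers):
    """answers maps each criterion to True/False; returns (score, max_score)."""
    score = sum(weight for criterion, weight in CRITERIA.items() if answers.get(criterion))
    return score, sum(CRITERIA.values())

# A hypothetical freemium AI tool evaluated against the rubric.
free_tool = {
    "data_residency_in_canada": False,
    "no_training_on_customer_data": False,
    "documented_exit_path": False,
    "breach_notification_terms": True,
}
score, out_of = sovereignty_score(free_tool)
print(f"Sovereignty score: {score}/{out_of}")
```

A tool scoring 1 out of 9 is not automatically disqualified, but it forces the conversation that a free sign-up page is designed to skip.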

The third step is building a minimum viable policy. Not a perfect, comprehensive AI governance framework produced by a committee over eighteen months. A working set of guardrails that lets your organization engage with AI today, responsibly, while the fuller framework develops. Jason Shim of the Canadian Centre for Nonprofit Digital Resilience framed this well at the CPA Ontario Not-for-Profit Conference — the Minimum Viable Policy concept is one of the most useful frameworks I've encountered for getting organizations past the freeze.

Organizations that engage with AI thoughtfully — with a sovereignty lens, a clear policy, and an understanding of where their data goes — are not less competitive than those who adopt everything uncritically. They are more resilient. And the organizations that freeze entirely gain the benefits of neither approach.


About the author

Greg Zatulovsky, CPA

Founder & CEO, PF TECH · 15+ years in non-profit finance, operations & technology

Greg founded PF TECH to give Canadian non-profits access to the same operational infrastructure as the private sector — without the overhead. He writes about AI adoption, financial management, and the practical realities of running a mission-driven organisation.
