The Credibility Problem with AI-Drafted Compliance Documents
For context, on April 2, 2026, the FDA issued what appears to be its first warning letter with a dedicated AI-manufacturing section, to Purolea Cosmetics Lab in Livonia, Michigan. The letter found that AI agents had generated drug specifications, procedures, and master production records, and that the firm used the AI-generated documents without the review CGMP requires. The firm's owner told FDA investigators she had not known process validation was required because the AI agent had not informed her. FDA devoted a stand-alone section of the warning letter to "Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing."
In Canada, CFIA does not operate through warning letters the way FDA does, and much of what we know about its approach comes through inspections, correspondence, enforcement files, and practice around CFIA decision-making. So I thought we would address the Purolea question in a way that lands closer to home: when does a Canadian food operator's preventive control plan become the subject of the same conversation? In our practice over the past year, we have started to read documents that we are confident were generated by AI. The stylistic patterns are immediately recognizable, particularly from less sophisticated users of LLM outputs.
For further context, CFIA has committed to risk-based inspections of more than 2,400 previously uninspected manufactured food establishments by fall 2026. The Agency's Action Plan, issued in response to the Inspector General's review of manufactured food coverage, also commits CFIA to licensing reviews that examine whether the information provided is complete, whether hazards have been identified, whether a PCP is in place where required, and whether food-safety culture is demonstrated; to requiring mandatory activity information from licence holders; and to a shift toward stronger enforcement.
The Disclosure Penalty: In Research and in Regulators' Offices
Recently, a small group of researchers has looked at how readers perceive writing once they learn AI was used, and the work is producing consistent findings. Disclosed AI authorship lowers ratings of trustworthiness, goodwill, competence, and likability, in some studies even when the underlying text is identical. Recent papers, including The Transparency Dilemma: How AI Disclosure Erodes Trust; Penalizing Transparency? How AI Disclosure and Author Demographics Shape Human and AI Judgments About Writing; and Understanding Reader Perception Shifts upon Disclosure of AI Authorship, all find versions of this effect. Researchers have called it the disclosure penalty.
The most straightforward reading is that readers who learn a text was AI-generated infer reduced effort and reduced personal investment by the human attached to it, and discount the text accordingly. The author who is willing to delegate the writing is understood by the reader to have delegated the thinking. Whether the inference is fair in any particular case is beside the point; it is the inference readers are making. If you openly use AI to write your Mother's Day card, your mother may think less of the thought and feeling behind your words.
The same trust problem arises in a CFIA file, whether or not authorship is disclosed. Inspectors are trained to compare submitted documents against the operation in front of them, and they notice when professional-sounding but generic phrasing does not match the facility. Sections formatted so uniformly that they read more like model output than a working document, hazard descriptions that read as templates, and references to control measures the line does not run all create a credibility problem for the regulated party.
To bring it home: the disclosure penalty is an academic finding, but the regulatory consequence of triggering it in a CFIA file is a very real and practical one.
What the Regulator-Regulated Relationship is Built Upon
Some of the documents that appear to have been generated by AI give the impression that the operator is not committed to the safety and validation processes those documents are supposed to represent, that they are not meeting their regulatory obligations in good faith, and that food safety as a culture is not the priority the documents claim it is. If we are reading that, it is likely that an inspector is placing the same credibility discount on the regulated party.
I understand that this is a big inference to draw from the way a stack of paper is written and formatted. But the inference is available because the relationship between CFIA and a licensed food business is not, at bottom, about paperwork. A healthy regulator relationship is built on respect for the regulator's mandate, transparency about how the facility actually operates, candour about gaps and the corrective actions taken to address them, and above all a shared commitment to food safety as a culture rather than a compliance category. So when CFIA engages, it is often seeking to confirm that the PCP is not a form filed once and consulted by no one, and that the traceability records and mock recall notes are not stale. Those documents are evidence that the operator has thought about food safety, has organized their operation around it, and has committed people and resources to making it real.
CFIA's mandate is primarily about public health protection through a regulatory regime built on shared mission. The inspectors who walk the floors are given extraordinary powers of inspection so that they can look for evidence that a facility has internalized that mission. Documents that read as if they were generated for the inspector, rather than by a serious regulated party running a serious operation, create perceived gaps in that shared mission. The credibility discount compounds the issue, particularly when an operator has submitted documents that read less like the operator's own thinking and practices than like output produced in the hope that the inspector will be satisfied to have something resembling a PCP and move on to their next inspection.
Cascading Doubts
An inspector who reads a PCP and is not sure the operator wrote it does not stop at the PCP. If the PCP is generic, an inspector will ask: “Did the mock recall actually happen on the timeline the document claims?” or “Were the people named in the procedure trained for the role?” If the SOPs read as generated, we see questions like “Are the swab schedules on the line being followed, or is the document the only place those schedules exist?” If the corrective action records are uniformly formatted, the question becomes “Was each corrective action thought through individually, or is the pattern just part of a template you didn’t finish filling out?”
These are unpleasant questions that a reasonable inspector would ask after a document fails the credibility test. Generally speaking, CFIA does not assume facts in the operator's favour.
But our point is that, even in the best case, where the AI-drafted document is technically accurate but generic, the operator has paid a relationship cost. So, the next contact with CFIA starts from a worse position than the one before. If a generic PCP leads to inspection findings, enforcement action, complaints, recall management issues, or a poor PCP sub-element assessment, those inputs feed into the regulated party’s licence profile and into the Establishment-based Risk Assessment picture, which informs inspection frequency and scrutiny.
When the Documents are Fiction: Purolea, Mata, and the SFCA/SFCR framework
I started this post with the Purolea letter because it is instructive. So far we have spoken about relatively benign credibility costs. But the worst case is far worse than simply being treated less seriously than an operator would like.
A PCP that describes a process the facility no longer runs, or that omits a process the facility now runs, is not a PCP that is being implemented. SFCR s. 86 requires a licence holder, subject to the narrow exemptions in s. 86(2) and (3), to prepare, keep, and maintain a written preventive control plan that meets the requirements of s. 89, for any activity identified in their licence that they conduct. SFCR s. 47 requires the operator to identify and analyze the biological, chemical, and physical hazards that present a risk of contamination of a food. A PCP that does not reflect the operation likely fails both tests.
If that document is then submitted to CFIA as part of a licence application, an inspection response, or any other regulatory exchange, Safe Food for Canadians Act s. 15, which prohibits both false or misleading statements and the provision of false or misleading information to persons exercising powers, duties, or functions under the Act, becomes the next question. Section 15 is not a prosecutorial novelty; it is the section CFIA regularly reaches for when a regulated party tells the regulator something that does not match the regulator's own observations. In the last year, we have seen several Administrative Monetary Penalties issued under this section, which are designated “very serious violations.” And we have seen an AI-generated PCP that the operator had not reconciled with its operations, a mismatch that can set up exactly this violation.
We haven’t seen any public cases or commentary from CFIA on this point, so we’re using the Purolea warning letter as an operational template for how this conversation might go: AI agents created records; the firm used them without the review CGMP requires; and, when investigators identified gaps, the firm's owner told them she had not known the validation requirement existed because the AI agent had not informed her. FDA's response was that AI is not a substitute for an “authorized human representative” reviewing the document, that Quality Unit responsibility is not reduced when AI is involved, and that failure to review AI-generated content for accuracy and CGMP compliance is itself a 21 CFR 211.22(c) violation.
The CGMP framework FDA was applying in Purolea and the SFCR Part 4 framework CFIA applies in Canada are different and not interchangeable. But the methodology is the same for our illustrative purposes: both require a written control system that maps to the actual operation; both require the operator to demonstrate that the system is implemented; and both rely on a competent person to take responsibility for the document's accuracy. The question of whether AI-generated content meets that standard is the same question on both sides of the border. FDA has answered the question publicly, and CFIA's framework has the tools to answer it the same way.
More broadly, and perhaps why this issue hits home for our firm: the legal system has been making the same point since 2023. Mata v. Avianca was the first widely reported case in which a court sanctioned counsel for filing AI-generated material containing fabricated case citations; the court imposed $5,000 jointly and severally against the lawyers and their firm. There have been others since, both in Canada and around the common law world, and they share the same issue: the aggravator was not the use of an AI tool but the failure to verify the output against reality, and then the failure to come clean once the gap was visible. Canadian courts have applied the same logic, including in Zhang v. Chen, 2024 BCSC 285. The pattern is now established in law and in food regulation alike: a regulated party submitting AI-generated content that does not match reality, and not catching it, is a failure that regulators are increasingly prepared to discourage through penalties.
Again, CFIA has not issued a Canadian Purolea letter, but it has already announced more inspections, tighter licensing conditions, enhanced risk intelligence, and stronger enforcement for manufactured foods.
What to Do
The three action items are straightforward.
First, we see the use of AI for scaffolding, that is, for building out the structure of documents rather than for producing finished compliance content, as a helpful tool. An LLM can produce a defensible structural outline of a PCP, an SOP template that names the categories that need to be addressed, or a list of question prompts that surface assumptions the facility should be auditing. But we note, emphatically, that none of those outputs substitutes for the document a regulator will read.
Second, have a competent person who walks the floor read every line of any AI-assisted document and rewrite the parts that do not reflect the facility. The test for whether a paragraph survives is whether someone who has been on the line in the last 90 days recognizes their operation in the language. If the answer is no, the paragraph is not yet a part of the operation's compliance program. It is a description of someone else's facility.
Third, reread the document before it is submitted with one question in mind: does this read in our voice, or in a generic voice that could be any facility? The disclosure penalty in the academic literature is a finding about reader perception. An inspector's recognition of generic AI output is the same finding, applied to a regulated context, with regulatory consequences attached.
For licence-tied submissions, sign-off is non-trivial. The signatory is attesting to the document's accuracy. The accuracy is the operator's responsibility. It does not transfer to the tool that generated the draft.
There is also real value, in higher-stakes documents, in having an external reader who is trained on the regulator's perspective walk the draft. The point is not to outsource the writing. It is to surface the parts an inspector would notice that an in-house team has stopped seeing.
Looking Forward
Regulation is changing quickly in the LLM era, both publicly and behind the facility door. Internally, we have seen a recognizable pattern, and we know CFIA has seen the same patterns, as have foreign regulators in adjacent jurisdictions and in regulatory regimes close enough to food that the lesson travels in a more public and concrete way. We also know that the Canadian framework already has the levers under the Safe Food for Canadians Act and Regulations: SFCR Part 4 for PCP implementation, SFCR s. 47 for hazard analysis, and SFCA s. 15 for false or misleading statements. And while CFIA's enforcement practices do not generally result in public warning letters or stated positions, we know these discussions are happening.
The question we’re left with, for any regulated party with a written PCP, is whether the document on file would hold up if the next inspector walked the floor with it tomorrow morning, and whether the operator could say honestly that the document is theirs.
Glenford Jameson is the principal of GSJ&Co., Canada's only law firm dedicated to food law. This post is for informational purposes only and does not constitute legal advice.