AI System Disclosure
Last updated: April 25, 2026
In accordance with the EU AI Act (Regulation (EU) 2024/1689), and consistent with our own mission of AI governance transparency, this page discloses how Governer uses AI systems within its platform.
1. AI Systems Used
Governer uses the following AI system as part of its compliance scanning pipeline:
- OpenAI GPT-4o mini (OpenAI, L.L.C.): Used exclusively for legal document quality assessment — specifically, evaluating the adequacy of privacy policies and terms of service text submitted as part of a website compliance scan. This model performs natural language analysis only.
2. What the AI Does and Does NOT Do
The AI DOES:
- Assess the quality and completeness of legal document text (privacy policies, terms of service).
- Identify vague or legally inadequate language in those documents.
- Generate a plain-English executive summary of scan findings.
- Provide site-specific remediation context for identified risk indicators.
The AI does NOT:
- Determine whether a compliance violation exists. All compliance risk indicators are identified by a deterministic rule engine, not by AI; the same URL scanned twice will always produce the same findings (see the sketch after this list).
- Calculate the Trust Score. Scores are computed by a deterministic, purely mathematical scoring function.
- Make any legal determinations or provide legal advice.
- Process personal data. Only anonymised legal document text is sent to OpenAI.
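For illustration only, the sketch below shows how this separation can be expressed in code: the findings list and the Trust Score are pure functions of the scanned content, and the AI summary is attached afterwards as an advisory field it cannot alter. All names (Finding, runRuleEngine, computeTrustScore, scan), the example rule, and the scoring formula are hypothetical, not Governer's actual implementation.

```typescript
// Hypothetical sketch: deterministic findings and score, advisory AI summary.
interface Finding { ruleId: string; description: string }

interface ScanResult {
  findings: Finding[];   // deterministic: rule-engine output only
  trustScore: number;    // deterministic: pure function of the findings
  aiSummary?: string;    // advisory: AI-generated, clearly labelled as such
}

// Deterministic layer: the same page content always yields the same findings.
// The single rule here is an invented example.
function runRuleEngine(pageContent: string): Finding[] {
  const findings: Finding[] = [];
  if (!/privacy policy/i.test(pageContent)) {
    findings.push({
      ruleId: "missing-privacy-policy",
      description: "No privacy policy text detected.",
    });
  }
  return findings;
}

// Deterministic layer: the Trust Score is a pure mathematical function of the
// findings (formula invented for illustration).
function computeTrustScore(findings: Finding[]): number {
  return Math.max(0, 100 - findings.length * 10);
}

// Advisory layer: AI enrichment runs after the deterministic steps and cannot
// change the findings or the score.
async function scan(
  pageContent: string,
  summarise: (findings: Finding[]) => Promise<string>,
): Promise<ScanResult> {
  const findings = runRuleEngine(pageContent);
  const trustScore = computeTrustScore(findings);
  const aiSummary = await summarise(findings); // plain-English summary only
  return { findings, trustScore, aiSummary };
}
```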
3. Risk Classification (EU AI Act)
Under the EU AI Act risk classification framework, Governer's AI-assisted compliance scanning tool is classified as a minimal risk system:
- It does not fall within the high-risk AI system categories listed in Annex III of the EU AI Act (e.g., it is not used in critical infrastructure, education, employment, law enforcement, or administration of justice).
- It does not make binding automated decisions affecting individuals' rights or significant interests.
- All AI outputs are advisory only. A human (the user) reviews and acts on all results.
- The AI component is supplementary to, and cannot override, the deterministic rule engine.
4. Human Oversight
Governer is designed with human oversight as a core principle (consistent with EU AI Act Article 14):
- All compliance risk indicators are deterministically identified — AI enrichment is a secondary, non-binding layer.
- Users can inspect, question, and disregard any AI-generated recommendations.
- Scan results explicitly label which content is AI-generated (executive summary, site-specific advice) versus rule-engine detected (findings, scores); see the sketch after this list.
- No automated enforcement actions are taken. All outputs require human review before any compliance action follows.
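For illustration only, the sketch below shows one way such provenance labelling can be represented. The type names and example entries (including the score value) are hypothetical, not Governer's actual data model.

```typescript
// Hypothetical sketch: every piece of scan output carries its origin.
type ContentSource = "rule-engine" | "ai";

interface LabelledContent {
  source: ContentSource; // "rule-engine" for findings and scores, "ai" for enrichment
  body: string;
}

// Invented example output: findings and the score are rule-engine content;
// the executive summary is AI-generated and labelled accordingly.
const scanOutput: LabelledContent[] = [
  { source: "rule-engine", body: "Finding: cookie banner offers no reject option." },
  { source: "rule-engine", body: "Trust Score: 72/100." },
  { source: "ai", body: "Executive summary: the privacy policy omits data retention periods." },
];

// Rendering surfaces the label so users know which content is advisory and
// requires independent review before acting on it.
for (const item of scanOutput) {
  const label = item.source === "ai" ? "AI-generated" : "Rule engine";
  console.log(`[${label}] ${item.body}`);
}
```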
5. Data Sent to OpenAI
- What is sent: Extracted text from publicly accessible privacy policy and terms of service pages on the scanned website (up to 3,000 characters of each); see the sketch after this list.
- What is NOT sent: Source code, user account data, personal data, authentication credentials, or any data from password-protected pages.
- OpenAI API usage: Under OpenAI's API terms, data submitted via the API is not used to train OpenAI models.
- Data transfer: OpenAI is a US-based processor. Transfers are covered by Standard Contractual Clauses.
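For illustration only, the sketch below shows the kind of payload-limiting step this describes: only the extracted legal-document text, capped at 3,000 characters per document, is assembled for the API call. The constant, type, and function names are hypothetical, not Governer's actual implementation.

```typescript
// Hypothetical sketch: cap the extracted legal text before it leaves the platform.
const MAX_CHARS = 3000; // per document, as disclosed above

interface LegalDocuments {
  privacyPolicyText?: string;  // extracted from the public privacy policy page
  termsOfServiceText?: string; // extracted from the public terms of service page
}

// Only truncated, publicly accessible legal text is included in the payload;
// source code, credentials, and account data are never part of it.
function buildDocumentPayload(docs: LegalDocuments): {
  privacyPolicy: string;
  termsOfService: string;
} {
  return {
    privacyPolicy: (docs.privacyPolicyText ?? "").slice(0, MAX_CHARS),
    termsOfService: (docs.termsOfServiceText ?? "").slice(0, MAX_CHARS),
  };
}
```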
6. Accuracy and Limitations
- AI-generated document quality assessments are probabilistic and may not capture every nuance of a specific legal jurisdiction.
- The AI may occasionally misclassify document quality or produce inaccurate remediation suggestions. Users should verify all AI-generated content independently.
- The executive summary is generated to assist understanding, not to replace professional legal review.
7. Contact
If you have questions about how Governer uses AI, or wish to request information about AI system transparency, contact us at legal@governer.dev.