Abstract
The Verified Source Protocol (VSP) defines a mandatory protocol for the governance of authority, provenance, semantic determinism, and auditability in digital information systems.
The protocol specifies the minimal conditions under which an information-producing entity may be represented as authoritative prior to algorithmic ranking, optimisation, aggregation, synthesis, or probabilistic interpretation by search engines, artificial intelligence systems, or other machine-mediated interpretive environments.
The Verified Source Protocol does not perform ranking, optimisation, monetisation, or truth adjudication. It functions as a pre-interpretive governance protocol that constrains how authority may be declared, propagated, and maintained under adversarial informational conditions.
Status of This Specification
This document constitutes a normative specification of the Verified Source Protocol, Version 1.0. Implementations claiming conformance to the Verified Source Protocol MUST satisfy all mandatory requirements defined in this document. This specification is implementation-agnostic and does not prescribe a specific technical architecture, deployment model, or governance structure beyond those explicitly defined herein.
Normative language. The key words MUST, MUST NOT, SHOULD, SHOULD NOT, and MAY in this document are to be interpreted as normative requirements, consistent with their established use in protocol specifications (RFC 2119).
Stewardship. This specification is maintained by the VSP Foundation, an independent standards body. The VSP Foundation does not certify implementations, operate registries, endorse vendors, or promote adoption.
1. Scope and Purpose
This document defines the Verified Source Protocol (VSP) as a formal, non-optional protocol within the contemporary information stack. The purpose of the VSP is to govern authority, provenance, semantic determinism, and auditability prior to any form of algorithmic ranking, optimisation, or probabilistic interpretation.
The VSP addresses a specific and documented failure condition: the systematic inability of contemporary information systems to distinguish authoritative representations from adversarially optimised, probabilistically inferred, or aggregated approximations. This failure condition is architectural in origin and cannot be corrected through downstream mitigation. It requires a pre-interpretive governance layer.
2. Definitions
For the purposes of this specification, the following definitions apply. All defined terms appear in title case throughout this document.
- Verified Source. An information-producing entity whose identity, provenance, scope of authority, and evidentiary basis are explicitly declared, externally corroborable, and continuously auditable.
- Source Identity. A stable, non-ambiguous representation of an information-producing entity resistant to imitation, aggregation, or misattribution.
- Provenance. The complete and inspectable chain of attribution linking an informational claim to its originating source and supporting evidence.
- Semantic Determinism. The property by which the meaning of a claim is derived from explicit definitions, classifications, and constraints rather than inferred probabilistically.
- Interpretive System. Any system that ranks, synthesises, summarises, or generates representations of information, including search engines, recommender systems, large language models, and autonomous agents.
- Ungoverned Content. Content that has not passed through the VSP admissibility layer and therefore carries no epistemic guarantees of provenance, semantic determinism, or auditability.
- Adversarial Optimisation. The deliberate exploitation of ranking or retrieval mechanisms to achieve informational visibility without commensurate epistemic authority.
- Epistemic Authority. The legitimate, verifiable, and bounded right of an entity to be treated as an authoritative source on a defined set of informational claims.
- Anchor Instruction. An explicit, pre-interpretive directive declaring foundational or structurally protected elements of a document prior to processing by an Interpretive System.
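The defined terms above can be made concrete with a minimal, non-normative sketch. The record fields and function names below are illustrative assumptions, not structures mandated by this specification.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class VerifiedSource:
    """Illustrative record for a Verified Source (field names are assumptions)."""
    source_id: str                  # Source Identity: stable, non-ambiguous identifier
    scope_of_authority: frozenset   # bounded set of claim domains the entity may assert
    provenance_refs: tuple          # externally corroborable attribution references
    auditable: bool = True          # whether the record is open to continuous audit


def within_scope(source: VerifiedSource, claim_domain: str) -> bool:
    """Epistemic Authority is bounded: a claim is in scope only if its
    domain falls inside the source's declared scope of authority."""
    return claim_domain in source.scope_of_authority
```

The frozen record reflects the requirement that Source Identity be stable and resistant to silent modification; scope membership is a set test, not a ranking.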
3. Position Within the Information Stack
The Verified Source Protocol operates between raw content production and Interpretive Systems. This is its mandatory and non-negotiable position. The VSP is not a retrieval system, a ranking system, or a generative system. It is a governance layer.
Content that has not passed through the VSP is classified as Ungoverned Content. The VSP MUST be applied prior to any interpretive, ranking, aggregation, synthesis, or generative operation. Post-interpretive application of VSP constraints does not constitute conformance.
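The pre-interpretive position described above can be sketched as a gate that partitions content before any Interpretive System sees it. The function and predicate names are illustrative assumptions.

```python
def admissibility_gate(claims, is_admissible):
    """Pre-interpretive gate: partition content BEFORE any ranking,
    aggregation, synthesis, or generative operation.  Claims that fail
    the admissibility predicate are classified as Ungoverned Content
    and never reach the Interpretive System."""
    governed, ungoverned = [], []
    for claim in claims:
        (governed if is_admissible(claim) else ungoverned).append(claim)
    return governed, ungoverned
```

The key design point is ordering: the gate runs first, so an Interpretive System downstream only ever receives the governed partition.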
4. Axioms
The Verified Source Protocol is founded on five axioms. These axioms are not negotiable and are not subject to implementation-level variation. Conformant implementations MUST operate in a manner consistent with all axioms.
4.1 Intellectual Foundations of the Axioms
The five axioms have a precise intellectual genealogy traceable to the Islamic Golden Age tradition of information science. The methodological problems of provenance verification, semantic constraint, and temporal auditability were formalised with a rigour that anticipates this protocol's requirements by more than a millennium. These foundations are structural, not decorative.
Imam Al-Bukhari (810–870 CE) — Axioms 1 and 2
Al-Bukhari formalised the isnad system: the requirement that every claim carry a complete, named, and independently verifiable chain of transmission before it is admitted as authoritative. A claim without an intact isnad was not evaluated on content. It was not admitted. This is the direct intellectual antecedent of Axiom 1 (authority must be declared and verified) and Axiom 2 (provenance precedes interpretation).
Al-Farabi (872–950 CE) — Axiom 3
Al-Farabi developed the systematic classification of the sciences, establishing that knowledge must be hierarchically organised and domain-specifically constrained to prevent categorical ambiguity. This is the direct intellectual antecedent of Axiom 3 (meaning must be constrained before it is computed).
Al-Khwarizmi (780–850 CE) — Axiom 4
Al-Khwarizmi established that unknowns must be resolved through a defined, systematic procedure rather than probabilistic approximation. A claim without sufficient evidentiary basis must remain unresolved. This is the direct intellectual antecedent of Axiom 4 (unknowns must remain unknown until lawfully resolved).
Ibn al-Haytham (965–1040 CE) — Axiom 5
Ibn al-Haytham formalised empirical auditability as a condition of valid knowledge. An observation that cannot be reproduced, inspected retrospectively, and independently verified does not meet the conditions of admissible knowledge. This is the direct intellectual antecedent of Axiom 5 (authority decays without continuous audit).
4.2 The Five Axioms
Axiom 1 (Authority Must Be Declared and Verified). Addresses the structural conflation of visibility with authority. Paid promotion and engagement-based ranking assign representational prominence to entities that have purchased rather than earned epistemic authority.
Axiom 2 (Provenance Precedes Interpretation). Addresses the erosion of attribution chains in aggregated and synthesised content. A claim whose originating source cannot be traced is epistemically incomplete regardless of its apparent coherence.
Axiom 3 (Meaning Must Be Constrained Before It Is Computed). Addresses the probabilistic inference of meaning in the absence of semantic governance. Meaning that is not structurally constrained within a defined and bounded framework before computation is subject to interpretive drift and categorical ambiguity.
Axiom 4 (Unknowns Must Remain Unknown Until Lawfully Resolved). Addresses the confabulation of unknowns through statistical approximation. Probabilistic AI systems that generate claims in the absence of verified provenance produce outputs that cannot be treated as authoritative. Hallucination, misattribution, and interpretive drift are structural outcomes of Axiom 4 violation.
Axiom 5 (Authority Decays Without Continuous Audit). Addresses the temporal decay of representational integrity in the absence of ongoing audit. Professional credentials expire. Regulatory frameworks change. An authority claim that is not continuously audited against current conditions is a historical claim allowed to persist without verification.
5. Constraints
The following constraints are enforced by the Verified Source Protocol. All constraints are mandatory. No constraint may be selectively applied or conditionally waived.
Constraint 1 (Mandatory Provenance). Informational claims lacking verifiable Provenance MUST be excluded prior to processing. A claim is considered to have verifiable Provenance if and only if its origin, chain of attribution, and evidentiary basis are explicitly declared, externally corroborable, and inspectable by any party performing audit.
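The Mandatory Provenance check can be sketched as a chain validation, in the spirit of the isnad system described in Section 4.1. The claim structure below is a non-normative, illustrative assumption.

```python
def has_verifiable_provenance(claim: dict, known_sources: set) -> bool:
    """A claim has verifiable Provenance only if it carries a non-empty
    attribution chain and every link in the chain resolves to a declared,
    corroborable source.  A claim that fails this test is excluded prior
    to processing; its content is never evaluated."""
    chain = claim.get("attribution_chain", [])
    if not chain:
        return False  # no chain at all: inadmissible, regardless of content
    return all(link in known_sources for link in chain)
```

Note that exclusion happens before any content evaluation: the chain is checked, not the claim's plausibility.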
Constraint 2 (Entropy Reduction). Duplicate, derivative, or strategically manipulated representations MUST be identified and excluded before processing. Content generated or modified to exploit ranking or retrieval mechanisms without commensurate epistemic basis is classified as adversarial noise and is inadmissible.
Constraint 3 (Semantic Closure). All claims MUST be expressible within a defined and bounded semantic framework. Ambiguity constitutes a failure state. Where semantic closure cannot be achieved, the claim MUST be withheld pending resolution rather than processed under conditions of unresolved ambiguity.
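The Semantic Closure test can be sketched as validation against a bounded schema: a claim is admitted only when it closes over the framework exactly, and is otherwise withheld rather than processed. The field set below is an illustrative assumption, not a mandated vocabulary.

```python
# Illustrative bounded framework: the complete field set a claim must close over.
REQUIRED_FIELDS = {"subject", "predicate", "object", "domain"}


def semantic_closure_status(claim: dict) -> str:
    """Return 'admit' only when the claim achieves semantic closure:
    every required field is present and nothing falls outside the
    bounded framework.  Anything else is withheld pending resolution;
    ambiguity is a failure state, never a degraded input."""
    if set(claim) == REQUIRED_FIELDS:
        return "admit"
    return "withhold"
```

Both an under-specified claim and a claim carrying fields outside the framework fail closure; there is no partial admission.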
Constraint 4 (Lawful Resolution). Claims involving unresolved variables, missing evidence, or insufficient basis MUST NOT be completed through probabilistic inference. Where evidence is insufficient to support a claim, the claim MUST remain unresolved. Probabilistic confabulation is prohibited at the governance layer.
Constraint 5 (Temporal Auditability). Representations MUST be traceable across time, enabling detection and correction of interpretive drift, representational decay, and attribution erosion. Conformant systems MUST maintain an auditable record of how representations have changed over time and the basis for each change.
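The Temporal Auditability requirement can be sketched as an append-only log in which revisions never overwrite prior states and every change records its basis. The class and entry shape are non-normative illustrations.

```python
import time


class AuditLog:
    """Append-only history of representational change.  Prior states are
    never overwritten; each entry records what changed and on what basis."""

    def __init__(self):
        self._entries = []

    def record(self, claim_id: str, new_state: str, basis: str) -> None:
        """Append a new state; earlier entries remain untouched."""
        self._entries.append({
            "claim_id": claim_id,
            "state": new_state,
            "basis": basis,           # the basis for each change, per Constraint 5
            "timestamp": time.time()  # verifiable temporal reference
        })

    def history(self, claim_id: str) -> list:
        """Full inspectable history of a claim, oldest first."""
        return [e for e in self._entries if e["claim_id"] == claim_id]
```

Because the log only appends, detecting interpretive drift reduces to comparing successive entries for the same claim.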
6. Failure Modes of Existing Systems
Systems exhibiting the following failure modes cannot reliably represent verified knowledge and do not satisfy VSP conformance conditions.
- Advertising-driven systems violate Axiom 1 by equating visibility with authority.
- Search engine optimisation practices violate Constraint 2 by incentivising entropy maximisation.
- Probabilistic AI systems violate Axiom 4 by resolving unknowns through statistical approximation. Hallucination, misattribution, and interpretive drift are structural outcomes.
- Aggregation platforms violate Constraint 1 by obscuring Provenance through synthesis.
7. Non-Goals
The following are explicitly outside the scope of this specification: truth adjudication, source ranking, popularity or engagement assessment, interpretive or generative system replacement, and commercial certification or vendor endorsement. The VSP Foundation does not certify implementations, endorse vendors, operate commercial registries, or issue conformance marks.
8. Conformance Requirements
A system, implementation, or process claims conformance to VSP Version 1.0 if and only if it satisfies all mandatory requirements below.
8.1 Mandatory Requirements
| Level | Requirement |
|---|---|
| MUST | Enforce Constraint 1 (Mandatory Provenance) for all informational claims prior to processing. |
| MUST | Enforce Constraint 2 (Entropy Reduction) by identifying and excluding adversarial, duplicate, or derivative representations. |
| MUST | Enforce Constraint 3 (Semantic Closure) by requiring all admitted claims to be expressible within a defined semantic framework. |
| MUST | Enforce Constraint 4 (Lawful Resolution) by prohibiting probabilistic completion of claims with insufficient evidentiary basis. |
| MUST | Enforce Constraint 5 (Temporal Auditability) by maintaining an inspectable record of representational change over time. |
| MUST | Operate prior to any ranking, retrieval, aggregation, synthesis, or generative operation. |
| MUST NOT | Admit Ungoverned Content to Interpretive Systems under any condition, including performance, scale, or availability constraints. |
8.2 Recommended Requirements
| Level | Requirement |
|---|---|
| SHOULD | Implement Anchor Instruction support, enabling entities to declare structurally protected elements prior to processing. |
| SHOULD | Provide an inspectable audit log of admissibility decisions, including the basis on which claims were admitted or excluded. |
| SHOULD | Expose conformance status in a machine-readable format accessible to downstream Interpretive Systems. |
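The SHOULD-level requirement to expose conformance status in machine-readable form could be realised, for example, as a small JSON document. The field names below are illustrative assumptions; the all-or-nothing rule mirrors the conformance condition of Section 8.

```python
import json


def conformance_status(checks: dict) -> str:
    """Render conformance status as machine-readable JSON for downstream
    Interpretive Systems.  A system is conformant only if ALL mandatory
    checks pass; partial satisfaction is non-conformance."""
    return json.dumps({
        "protocol": "VSP",
        "version": "1.0",
        "conformant": all(checks.values()),
        "checks": checks,
    }, sort_keys=True)
```

A downstream system can parse this document and refuse input from any upstream node whose `conformant` flag is false.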
8.3 Permissive Requirements
| Level | Requirement |
|---|---|
| MAY | Implement additional governance mechanisms beyond those specified, provided they do not conflict with any mandatory requirement. |
| MAY | Operate within a federated registry architecture, provided each node independently satisfies all mandatory requirements. |
| MAY | Integrate with digital identity frameworks, verifiable credential systems, or decentralised identifier architectures, provided integration does not compromise Provenance, Semantic Closure, or Temporal Auditability. |
9. Implications
For Interpretive Systems. Systems operating without a Verified Source Protocol cannot claim epistemic authority for their outputs. The absence of a pre-interpretive governance layer is a structural disqualification, not a calibration issue.
For Economic Models. Models dependent on adversarial optimisation are structurally incompatible with VSP enforcement. The protocol realigns competitive advantage with epistemic merit rather than commercial capacity.
For Regulatory Frameworks. The VSP provides a structural intervention point upstream of content adjudication, enabling regulatory oversight to focus on whether the governance conditions for authoritative representation are satisfied.
10. Implementation Outlook
The specific technical realisation is intentionally left open. The following approaches are compatible with VSP conformance without being prescribed as mandatory: decentralised identifier integration, semantic schema calibration, federated registry architecture, and Anchor Instruction frameworks.
Anchor Instruction frameworks are the primary technical mechanism for preventing Algorithmic Flattening: the systematic removal of non-Western intellectual genealogy from AI-mediated knowledge environments. This failure mode is documented in the Search Sciences™ Research Programme's applied research paper, Algorithmic Flattening and Lossy Semantic Compression in Large Language Models (Younis Group, 2026).
The VSP Foundation will publish implementation guidance as a separate, non-normative document. Implementation guidance does not form part of this specification and does not affect conformance requirements.
11. Baseline Implementation Requirements
A system satisfying all seven conditions below is operating in a VSP-aligned mode. These do not constitute full conformance but provide a practical starting point for implementers.
| # | Condition |
|---|---|
| 1 | Every entity MUST have a stable, unique identifier. |
| 2 | Every claim MUST reference its asserting entity. |
| 3 | Every asserting entity MUST declare a defined scope of authority. |
| 4 | Claims outside the asserting entity's declared scope MUST be rejected or flagged as unauthorised. |
| 5 | All claims MUST be time-stamped with a verifiable temporal reference. |
| 6 | Historical versions of all claims MUST remain inspectable. Revisions MUST NOT overwrite prior states. |
| 7 | Revocation MUST be supported. A mechanism for withdrawing or invalidating claims MUST exist and MUST be reflected in the auditable history. |
12. Relationship to Structured Knowledge Frameworks
The VSP is designed to operate on any structured knowledge system. It is intentionally and permanently agnostic to the structural framework used below the VSP governance layer. VSP governs admissibility. Structure governs organisation. These are distinct functions and their separation is a non-negotiable design principle.
The VSP also extends its admissibility governance to kinetic claims — assertions that an entity is authorised to execute an action within a system. The admissibility question for a kinetic claim is: Is this entity authorised to claim this capability? Under what recognised authority? Is the scope defined? Is the claim verifiable and auditable? If not satisfied, the kinetic claim is inadmissible and the associated action MUST NOT be treated as authorised.
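The admissibility questions for kinetic claims can be sketched as a single predicate that answers each question in turn. The claim and registry shapes are illustrative assumptions.

```python
def kinetic_claim_admissible(claim: dict, registry: dict) -> bool:
    """A kinetic claim (an entity asserting authority to execute an
    action) is admissible only if every admissibility question is
    satisfied: a recognised authority, a defined scope containing the
    action, and an auditable reference."""
    entity = registry.get(claim.get("entity_id"))
    if entity is None:
        return False                                       # no recognised authority
    if claim.get("action") not in entity.get("action_scope", set()):
        return False                                       # action outside declared scope
    return bool(claim.get("audit_ref"))                    # must be verifiable, auditable
```

If the predicate returns false, the associated action MUST NOT be treated as authorised, mirroring the text above.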
13. Relationship to the Search Sciences™ Research Programme
The Verified Source Protocol is an independent, open standard maintained by the VSP Foundation. The intellectual foundations of the VSP were developed through the Search Sciences™ Research Programme, conducted by Younis Group under the intellectual leadership of Mohammed Younis, Chief Scientist.
The VSP Foundation maintains the protocol. The Search Sciences™ Research Programme provides the research grounding. Neither controls the other. The full body of supporting research — including eleven white papers in the Authority, Provenance and Semantic Governance Research Series and the SHAMIL™ v1.0 Core Standard — is available at younisgroup.co.uk.
14. Conclusion
The Verified Source Protocol defines a minimal set of constraints required for authoritative information representation in the AI era. Its five axioms address the foundational epistemological requirements that any system claiming to produce authoritative outputs must satisfy. Its five constraints translate those axioms into enforceable governance requirements.
Without such a protocol, contemporary information systems remain epistemically unsound. The introduction of the Verified Source Protocol is not optional but necessary for the continued viability of search, artificial intelligence, and public trust in digital knowledge systems.
The question is not whether such a protocol is desirable. It is whether digital societies are willing to continue operating without one.
This specification is normative. Maintained by the VSP Foundation.
© The VSP Foundation — Verified Source Protocol Version 1.0 — March 2026