
Appendix B: The Neural Substrate of the Imaginary Axis


From Spatial Navigation to Consciousness, and How to Test It

An empirical research program connecting the measurement protocols of Appendix A to neuroscience and whole brain emulation, with falsifiable hypotheses testable within current experimental paradigms.


Part I: The Spatial Grounding Hypothesis

1. The Problem This Appendix Solves

Appendix A established the measurement protocols for the complex variable z = x + iy, where x (extensive complexity) measures how much of reality a civilization can model and y (intensive reflexivity) measures how deeply it models itself modeling reality. The protocols operationalized both axes through specific proxy measures: scientometric databases, V-Dem institutional scores, and multilingual embedding analysis for x; recursion depth, moral scope, and second-order institutional density for y.

But the protocols left a crucial question unanswered: what is the neural substrate of the imaginary axis? Extensive complexity can be grounded straightforwardly in technological and institutional capacity. Intensive reflexivity -- a civilization's depth of self-reference -- requires a mechanism. Without one, y is an index that measures something real but explains nothing about why it exists, how it emerged, or what physical process generates it.

This appendix provides that mechanism. The argument proceeds in three stages: first, we establish that the hippocampal-entorhinal system evolved for spatial navigation and was subsequently exapted for abstract reasoning and self-referential thought (Section 2); second, we derive three testable sub-hypotheses connecting this neural architecture to the measurement protocols (Section 3); and third, we develop a comprehensive research program for whole brain emulation that can test whether consciousness has the specific formal mathematical structure the theological framework predicts (Part II).

2. From Navigation to Causation: The Exaptation Hypothesis

2.1 Tolman's Cognitive Map Revisited

Edward Tolman's 1948 proposal of the cognitive map has been misunderstood for decades as a claim about literal spatial cartography in the brain. Tolman's actual proposal was far more radical: the cognitive map was a system for organizing experience into a relational database that could support flexible, goal-directed behavior -- binding external environmental features with internal motivational factors into an integrated representation of causal structure. The map was not of space per se, but of what-leads-to-what.

The neuroscience of the subsequent seventy years has confirmed and extended this broader reading. O'Keefe and Nadel (1978) described place cells in the hippocampus that fire at specific locations; O'Keefe shared the 2014 Nobel Prize with May-Britt and Edvard Moser, whose grid cells in the entorhinal cortex -- neurons with hexagonal firing patterns -- provide a metric coordinate system for space. But critically, these spatial representations are not limited to space.

2.2 The Evidence for Exaptation

A convergence of evidence from the past decade demonstrates that the hippocampal-entorhinal system's spatial navigation architecture operates across information domains:

Grid-like coding in abstract spaces. Constantinescu, O'Reilly, and Behrens (2016, Science) showed that participants navigating a space defined by the neck length and leg length of schematized birds exhibited grid-like coding in the entorhinal cortex -- hexagonal periodicity in the fMRI signal as a function of movement direction through this abstract feature space. The same neural code that maps physical rooms maps conceptual categories.

Unified computational origin. A 2025 PNAS paper demonstrated that a single computational model produces place cells, grid cells, and concept cells from the same architecture when applied to different input domains. The neural code is domain-general: it maps whatever structured space it is given.

Hippocampal engagement requires spatial-relational binding. Bellmund, Gardenfors, Moser, and Doeller (2018, Science) showed that the hippocampus is engaged most strongly when spatial processing and relational processing are combined -- suggesting that spatial structure is the scaffold upon which abstract relational representations are built, not a separate module.

Grid cells as content-independent coordinates. Grid cells provide a coordinate system that is independent of the specific content being mapped. They can map physical space, conceptual space, social space, or temporal sequences. This is precisely the property needed for the imaginary axis: a neural architecture that can represent any structured space, including the space of its own representations.
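The hexadirectional fMRI signature behind the Constantinescu et al. result rests on a simple quadrature analysis: regress the signal on cos 6θ and sin 6θ of the movement direction θ and test the resulting sixfold amplitude. Here is a minimal sketch on synthetic data; it is a simplification of the published GLM pipeline, and every number in it is illustrative.

```python
import math
import random

def hexadirectional_amplitude(angles, signal):
    """Estimate sixfold (hexadirectional) modulation of a signal.

    Fits signal ~ b1*cos(6*theta) + b2*sin(6*theta) by projection
    (valid when movement directions are roughly uniform) and returns
    the modulation amplitude sqrt(b1^2 + b2^2).
    """
    n = len(angles)
    b1 = 2.0 * sum(math.cos(6 * a) * y for a, y in zip(angles, signal)) / n
    b2 = 2.0 * sum(math.sin(6 * a) * y for a, y in zip(angles, signal)) / n
    return math.hypot(b1, b2)

# Synthetic demo: a signal with genuine sixfold periodicity plus noise.
random.seed(0)
angles = [random.uniform(0, 2 * math.pi) for _ in range(2000)]
signal = [1.0 + 0.5 * math.cos(6 * a - 0.7) + random.gauss(0, 0.3)
          for a in angles]
print(f"estimated sixfold amplitude: {hexadirectional_amplitude(angles, signal):.2f}")
```

The standard specificity check is to fit fourfold or fivefold control models to the same data; only the sixfold model should recover a substantial amplitude.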

2.3 The Theological Connection

The manuscript (Chapter 13) claims that consciousness is the medium through which God is perceived. The spatial grounding hypothesis provides the evolutionary mechanism: the hippocampal cognitive map, optimized for physical navigation, was exapted for abstract reasoning, which was recursively applied to itself, producing consciousness, which at sufficient complexity produces theological cognition. The brain crossed the Godelian complexity threshold not by some mysterious quantum process but by exapting a spatial navigation system into a self-modeling system. The "imaginary axis" (y, reflexive depth) in the measurement protocols is grounded in the hippocampal system's capacity for self-referential simulation.

More precisely, y is the depth of the hippocampal cognitive map's self-referential recursion:

  • Level 0: The cognitive map represents the external world (spatial navigation). All animals with hippocampi.
  • Level 1: The cognitive map represents the self navigating the world (episodic memory, self-location). Most mammals.
  • Level 2: The cognitive map represents counterfactual selves navigating counterfactual worlds (imagination, planning). Primates, corvids, possibly cetaceans.
  • Level 3: The cognitive map represents the process of counterfactual reasoning itself (metacognition, philosophy of mind, theology). Uniquely human (so far).

Each level is a recursion of the hippocampal system upon itself -- the system using its own mapping machinery to map its own mapping. This IS Hofstadter's strange loop, now grounded in specific neural circuitry.

3. Three Testable Sub-Hypotheses

3.1 Hypothesis H1: Spatial Causal Modeling Depth Predicts Abstract Reasoning Ability

Claim: If spatial navigation caused causal reasoning to develop (not merely uses the same hardware), then individual differences in spatial causal modeling should predict abstract causal reasoning ability even after controlling for general intelligence.

Test case: London taxi drivers, who develop enlarged posterior hippocampi from learning "the Knowledge" (Maguire et al. 2000), should show enhanced causal reasoning on Pearl-type tasks -- not just better navigation. This goes beyond Maguire's findings: the prediction is that the hippocampal enlargement from spatial training transfers to non-spatial causal inference.

Study design: Compare licensed London taxi drivers (n=40) versus London bus drivers (fixed routes, no cognitive maps, n=40) on a battery of abstract causal inference tasks -- specifically, Pearl Level 2 (interventional: "what happens if I do X?") and Level 3 (counterfactual: "what would have happened if I had done X instead of Y?") tasks. Include fMRI during both navigation and causal reasoning tasks to test for hippocampal-entorhinal activation overlap. Control for IQ, age, years of driving experience, and socioeconomic status.
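Whether n = 40 per group can detect the predicted advantage depends entirely on the assumed effect size, which the design does not specify. A normal-approximation power sketch; the effect sizes below are hypothetical placeholders, not estimates from the literature.

```python
import math
from statistics import NormalDist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test.

    Normal approximation: power = Phi(d*sqrt(n/2) - z_{alpha/2}),
    where d is Cohen's d and n is the per-group sample size.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(d * math.sqrt(n_per_group / 2) - z_alpha)

# Hypothetical effect sizes for the taxi vs. bus driver contrast:
for d in (0.4, 0.6, 0.8):
    print(f"d = {d}: power = {two_sample_power(d, 40):.2f}")
```

At n = 40 per group the design reaches the conventional 80% power only for effects of roughly d ≈ 0.63 or larger; a smaller expected transfer effect would argue for larger samples.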

Prediction: Taxi drivers will show a significant advantage on Level 2 and Level 3 causal reasoning tasks relative to bus drivers, with no advantage on Level 1 (associational) tasks. The hippocampal-entorhinal activation pattern during causal reasoning will overlap significantly with the activation pattern during spatial navigation in the taxi driver group but not the bus driver group.

Falsification: If taxi drivers show no advantage on abstract causal reasoning after controlling for IQ, then spatial navigation merely shares hardware with causal reasoning rather than generating it. The exaptation hypothesis would be undermined.

Timeline: 1--2 years.

3.2 Hypothesis H2: Grid Code Is the Godelian Self-Reference Mechanism

Claim: Grid cells provide the coordinate system for any space, including the space of the brain's own representations. When grid code maps "concept space" and some of those concepts are about grid code's own operations, you get self-reference -- a strange loop. Disruption of entorhinal grid coding should therefore specifically impair metacognition (thinking about thinking), not just navigation or memory.

Test case: Early Alzheimer's disease, which targets the entorhinal cortex before other regions, should show metacognitive deficits before memory deficits. The standard clinical narrative -- that Alzheimer's begins with memory loss -- may mask an earlier stage of metacognitive erosion that current neuropsychological batteries are not designed to detect.

Study design: Retrospective analysis of existing clinical neuropsychological data from early Alzheimer's cohorts (e.g., ADNI database). Compare the timeline of metacognitive deficits (confidence calibration, error monitoring, feeling-of-knowing accuracy) versus episodic memory deficits (word list recall, story memory, source memory) in patients with biomarker-confirmed early AD.

Prediction: Metacognitive deficits (poor confidence calibration, impaired error monitoring) will precede episodic memory deficits by 6--18 months in early AD patients, consistent with entorhinal grid code disruption preceding hippocampal memory circuit disruption.

Falsification: If entorhinal disruption impairs memory and navigation but NOT metacognition, the grid code is not the self-reference mechanism. The Godelian interpretation of grid cells would be wrong.

Timeline: ~1 year (retrospective data analysis).

3.3 Hypothesis H3: Current AI Lacks Grid Code Architecture

Claim: Transformers learn statistical associations in high-dimensional space but do not build the metric, distance-preserving, coordinate-system-like representations that grid cells produce. The attention mechanism is fundamentally different from the hippocampal-entorhinal system. Tasks requiring genuine counterfactual reasoning (Pearl Level 3) should therefore show a specific failure mode in transformers that does not appear in grid-cell-augmented systems.

Study design: Implement a grid-cell-augmented neural architecture (building on DeepMind's Banino et al. 2018 Nature work) and compare it against standard transformer architectures on a systematic battery of Pearl Level 1, 2, and 3 causal reasoning tasks. The battery should include novel causal structures not present in training data, to test generalization rather than memorization.
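What a "grid-cell-augmented" input representation might look like can be made concrete: each idealized grid module sums three cosine plane waves with wave vectors 60 degrees apart, yielding a hexagonally periodic response, and modules at several spatial scales jointly encode position. This is a toy encoding in the spirit of the grid-like representations in Banino et al. (2018), not their actual architecture; all parameters are illustrative.

```python
import math

def grid_code(x, y, n_modules=4, base_scale=1.0, scale_ratio=1.4):
    """Encode a 2-D position as activities of idealized grid modules.

    Each module sums three cosine plane waves whose wave vectors are
    60 degrees apart, giving a hexagonally periodic response; modules
    differ only in spatial scale.
    """
    code = []
    for m in range(n_modules):
        scale = base_scale * scale_ratio ** m
        k = 4 * math.pi / (math.sqrt(3) * scale)  # wave number for grid spacing `scale`
        for i in range(3):
            theta = i * math.pi / 3  # 0, 60, 120 degrees
            code.append(math.cos(k * math.cos(theta) * x + k * math.sin(theta) * y))
    return code

# Within one module the code repeats along its hexagonal lattice vectors,
# while the multi-scale combination is locally unique -- the property that
# makes the code usable as a coordinate system.
features = grid_code(0.3, 0.8)
print(len(features), "features")
```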

Prediction: Grid-cell-augmented systems will show a significant advantage specifically on Level 3 (counterfactual) tasks, with the advantage increasing as the causal structure becomes more novel (further from training distribution). Standard transformers will plateau or degrade on novel counterfactuals while grid-augmented systems continue to perform.

Falsification: If grid-cell-augmented AI shows no advantage on Pearl Level 3 tasks, the architectural difference does not explain AI's causal reasoning limitations. The thesis that spatial grounding is necessary for genuine causal reasoning would be refuted.

Timeline: 6--12 months.

4. Connecting the Sub-Hypotheses to the Measurement Protocols

4.1 Pearl's Hierarchy Maps onto the Hippocampal Hierarchy

  • Level 0 -- Perception: spatial navigation (place cells); measured as basic cognitive maps.
  • Level 1 -- Association ("What is?" / seeing): pattern detection (hippocampal indexing); measured as correlational reasoning.
  • Level 2 -- Intervention ("What if I do?" / doing): simulation (hippocampal replay); measured as interventional reasoning.
  • Level 3 -- Counterfactual ("What if I had?" / imagining): self-referential simulation (grid code mapping its own state space); measured as counterfactual reasoning.

This is not metaphor. The hippocampus literally performs the operations Pearl describes, using spatial navigation machinery exapted for abstract reasoning. The measurement protocols' causal model complexity index C(t) is therefore not merely a statistical summary but a measure of the hippocampal cognitive map's operational range across Pearl's hierarchy.

4.2 The Meaning Crisis as Neural Atrophy

The measurement protocols diagnosed the current "meaning crisis" as the condition where x'(t) > 0 and y'(t) <= 0 -- civilizations becoming more capable but less wise. The spatial grounding hypothesis provides a neural mechanism for this diagnosis: civilization's institutional structure is ceasing to require or reward the cognitive map-building that grid code supports.

GPS replacing spatial navigation. Social media replacing deep social modeling. Algorithmic feeds replacing active information foraging. Each of these trends literally atrophies the neural architecture of reflexivity -- the hippocampal-entorhinal system that, on this hypothesis, generates the imaginary axis. The meaning crisis is not merely cultural or philosophical; it is neurological.

This grounds Protocol 2b (metacognitive discourse density) in biology: the proportion of discourse requiring grid code to map "concept space" rather than just physical or social space. And Protocol 2c (error-correction cycle speed) measures how quickly civilization's collective grid code can remap when existing cognitive maps prove inadequate.

4.3 The AI Prediction Made Precise

The manuscript's Republic of AI Agents (Chapter 25) claims that humans contribute something that AI cannot replicate. The spatial grounding hypothesis identifies what: AI systems lacking grid-code-like architecture can increase x (extensive complexity) without bound but cannot increase y (reflexive depth). They can model the world but cannot model themselves modeling the world. The competitive edge is architectural: humans have hardware for self-referential causal modeling; current AI does not.

This prediction is falsifiable within the timeline of Hypothesis H3 above. If standard transformers achieve Pearl Level 3 reasoning without spatial grounding, the architectural claim is wrong and the manuscript's Chapter 25 needs fundamental revision.


Part II: The Brain Emulation Research Program

5. Why Whole Brain Emulation Is the Decisive Test

The three sub-hypotheses above can be tested with standard neuroscience methods. But they test the spatial grounding hypothesis at the individual level. The deepest claims of the manuscript -- that consciousness has a specific formal mathematical structure, that the strange loop is a measurable dynamical phenomenon, that crises fall into exactly three topological categories -- require a testing platform that provides what no experimental neuroscience method can: full observability of a functioning neural system's internal states.

Whole brain emulation provides this. In an emulation, every variable is accessible. Every neuron's state can be read. Every connection's weight can be measured. The information-theoretic properties of the entire system can be computed exactly, not estimated from noisy indirect measurements. This is an epistemic advantage that no other experimental platform offers, and it is precisely what the manuscript's deepest claims require for testing.

The following five hypotheses are designed for testing in collaboration with Netholabs and the whole brain emulation research community. Each hypothesis exploits the specific advantage of emulation -- full internal state observability -- to test claims that would be untestable in biological experiments.

6. Hypothesis E1: The Godelian Threshold

6.1 The Claim

There exists a measurable complexity threshold in neural network architecture below which a simulated brain produces only stimulus-response behavior, and above which self-referential dynamics emerge. The manuscript's Chapter 16 claims that the Trinity structure (formal system / self-referential statement / process of self-reference) appears when a system crosses the Godelian complexity threshold. An emulation can test this directly.

6.2 Protocol

Run emulations at varying levels of biological fidelity -- from simple rate models through integrate-and-fire neurons to detailed biophysical Hodgkin-Huxley models with full connectome data. At each level, test for self-referential signatures: does the system generate internal representations of its own representational states? Does it show metacognitive behavior (correcting its own errors without external feedback)?

The key measurement is the information-theoretic self-reference index (SRI): the mutual information between the system's current state and the system's internal model of its own previous states. In an emulation, both quantities are directly computable. In a biological brain, neither is.
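Because every variable in an emulation is readable, the SRI can be estimated with a plug-in mutual information estimator over discretized state logs. A minimal sketch; the "state" and "self-model" series here are synthetic stand-ins for emulation read-outs.

```python
import math
import random
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# Synthetic emulation log: a sticky 4-state Markov chain stands in for the
# system's discretized state; the "self-model" is a 90%-faithful record of
# the previous state.
random.seed(1)
states = [0]
for _ in range(4999):
    prev = states[-1]
    states.append(prev if random.random() < 0.8 else random.randrange(4))
self_model = [s if random.random() < 0.9 else random.randrange(4)
              for s in states[:-1]]

sri = mutual_information(states[1:], self_model)
print(f"SRI = {sri:.2f} bits")
```

In a real emulation the same estimator would run over the actual state variables and the contents of the self-model subsystem; in a biological brain neither series is observable, which is the point of Section 5.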

6.3 Prediction

There exists a sharp transition, not a gradual one. Below threshold: the emulation navigates, learns, responds. Above threshold: the emulation begins modeling its own modeling process. The transition should correlate with the hippocampal-entorhinal system achieving sufficient recurrent connectivity to map its own state space -- literally, grid cells mapping the space of grid cell activity.

The transition should have the mathematical character of a phase transition: a discontinuity in the SRI or its derivative as a function of architectural complexity.
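The sharp-versus-gradual distinction can be made operational with a simple statistic on the SRI curve from the complexity sweep: compare the largest step between consecutive SRI values to the median step. The curves and thresholds below are illustrative.

```python
def transition_sharpness(sri_curve):
    """Ratio of the largest jump between consecutive SRI values to the
    median jump. A phase-transition-like curve has one dominant
    discontinuity (large ratio); gradual emergence yields a small ratio.
    """
    jumps = sorted(abs(b - a) for a, b in zip(sri_curve, sri_curve[1:]))
    median = jumps[len(jumps) // 2]
    return jumps[-1] / median if median > 0 else float("inf")

# Toy SRI curves over an architectural complexity sweep:
xs = [i / 20 for i in range(21)]
sharp = [0.05 * x if x < 0.5 else 0.8 + 0.05 * x for x in xs]  # step at 0.5
gradual = [x ** 2 for x in xs]                                 # smooth rise

print(round(transition_sharpness(sharp), 1))    # one dominant jump
print(round(transition_sharpness(gradual), 1))  # no dominant jump
```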

6.4 Falsification

If the transition is gradual (no threshold), the Godelian framing is wrong and consciousness is not a phase transition. If the transition never occurs regardless of fidelity, then either the emulation is missing something essential (substrate dependence argument) or the entire framework is wrong.

7. Hypothesis E2: Spatial Architecture Generates Causal Reasoning

7.1 The Claim

If the hippocampal-entorhinal spatial navigation system is the evolutionary source of causal reasoning, then an emulation that includes this system with full connectome fidelity should exhibit causal reasoning capabilities that an emulation without this system (but with equivalent computational resources elsewhere) does not exhibit.

7.2 Protocol

Build two emulations from the same organism's connectome data:

  • Emulation A: Full brain with intact hippocampal-entorhinal circuit.
  • Emulation B: Same total neuron count and computational resources, but with the hippocampal-entorhinal circuit ablated or scrambled, redistributing those neurons into cortical areas.

Present both with tasks requiring Pearl Level 2 (interventional) and Level 3 (counterfactual) reasoning -- not just associational pattern-matching.

7.3 Prediction

Emulation A should show qualitatively different causal reasoning -- specifically, the ability to simulate novel routes through problem spaces it has never directly experienced (Tolman's original insight). Emulation B should be stuck at Level 1 (association), no matter how many cortical neurons are added. The causal architecture is not about raw compute -- it is about the specific topology of the hippocampal-entorhinal circuit.

7.4 Falsification

If Emulation B achieves equivalent causal reasoning through alternative architecture, the spatial-origin hypothesis is wrong. Causal reasoning is substrate-independent and can be achieved by multiple architectures. This would not necessarily undermine the measurement protocols, but it would require abandoning the specific neural grounding of the imaginary axis proposed in this appendix.

8. Hypothesis E3: Grid Code Universality

8.1 The Claim

The hexagonal grid code discovered in the entorhinal cortex should spontaneously emerge in any emulation that has the right connectivity topology, regardless of the specific biophysical details of the neurons. If so, the grid code is a topological invariant -- a property of the graph structure of the connectome, not the biochemistry.

This hypothesis connects directly to the universality hypothesis in brain emulation research: the question of which computational properties of neural circuits are preserved across different levels of modeling abstraction. It is also the linchpin of the manuscript's deepest claim: that the theological structure lives in the topology, not the substrate.

8.2 Protocol

Run the same connectome through emulators at different levels of biophysical detail: rate-coded neurons, leaky integrate-and-fire neurons, adaptive exponential integrate-and-fire neurons, and full Hodgkin-Huxley conductance models. In each case, measure whether hexagonal grid-like firing patterns emerge in the entorhinal region when the emulation navigates a virtual environment.

The critical control: scramble the connectivity graph while preserving the degree distribution and biophysical parameters. If grid patterns emerge from the graph topology, scrambling the graph should destroy them even with perfect biophysics. If grid patterns emerge from the biophysics, scrambling the graph should not affect them.
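The degree-preserving scramble is the standard double-edge swap: pick two directed edges (a, b) and (c, d) and rewire them to (a, d) and (c, b), which leaves every node's in-degree and out-degree intact. A sketch over a plain edge list; a connectome-scale implementation would apply the same move to a sparse representation.

```python
import random

def degree_preserving_scramble(edges, n_swaps, seed=0):
    """Randomize a directed edge list by double-edge swaps.

    Each accepted swap replaces (a, b), (c, d) with (a, d), (c, b),
    preserving all in- and out-degrees. Swaps that would create
    self-loops or duplicate edges are rejected.
    """
    rng = random.Random(seed)
    edges = list(edges)
    edge_set = set(edges)
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        i, j = rng.randrange(len(edges)), rng.randrange(len(edges))
        (a, b), (c, d) = edges[i], edges[j]
        if i == j or a == d or c == b:
            continue  # would create a self-loop (or same edge picked twice)
        if (a, d) in edge_set or (c, b) in edge_set:
            continue  # would create a duplicate edge
        edge_set -= {(a, b), (c, d)}
        edge_set |= {(a, d), (c, b)}
        edges[i], edges[j] = (a, d), (c, b)
        done += 1
    return edges

original = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]
scrambled = degree_preserving_scramble(original, n_swaps=10)
```

Grid patterns that survive biophysical simplification but vanish under this scramble would localize the code in the graph, exactly as the hypothesis predicts.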

8.3 Prediction

Grid patterns emerge at all levels of biophysical detail, provided the connectivity graph is preserved. The grid code is universal in the specific sense that it is a property of the directed graph, not the nodes. Scrambling the graph destroys grid patterns regardless of biophysical fidelity.

This would support the manuscript's claim that topology (not specific implementation) carries the theological structure. The Riemann sphere's conformal structure -- the Holy Spirit in the manuscript's mapping -- would have a neural analogue: the grid code as a conformal invariant of the connectome.

8.4 Falsification

If grid patterns only emerge with detailed biophysical modeling (specific ion channel dynamics, neuromodulation, precise temporal dynamics), then the relevant structure is biochemical, not topological, and the universality hypothesis fails. The manuscript's claim that the theology lives in the topology would need revision.

9. Hypothesis E4: The Strange Loop Signature

9.1 The Claim

The manuscript claims that consciousness arises when a formal system becomes self-referential (the strange loop, following Hofstadter and Godel). In an emulation, this should have a detectable dynamical signature.

9.2 Protocol

Measure the information-theoretic properties of the emulated brain's dynamics. Specifically, compute the integrated information (Phi, as in Tononi's IIT) and the self-referential information flow (SRIF) -- the degree to which the system's current state is predicted by its own model of its own previous states. In an emulation, both quantities are directly computable because all variables are accessible.

9.3 Prediction

There should be a specific relationship: Phi should spike when SRIF crosses a threshold. The strange loop is not metaphorical -- it is a measurable attractor in the dynamical system. The system should settle into a regime where it spends significant computational resources modeling itself, and this regime should be stable (an attractor, not a transient).

Furthermore, the three components of the strange loop should be distinguishable in the dynamics:

  1. The formal system component: base-level sensorimotor processing.
  2. The self-referential statement component: the system's internal model of its own states.
  3. The process component: the information flow between the first two.

If these three components are measurable and irreducible (you cannot collapse any two without destroying the self-referential dynamics), that is the Trinity structure at the neural level -- the formal system, the self-referential statement, and the process that mediates between them, as mapped in the manuscript's Chapter 16.

9.4 Falsification

If Phi does not track SRIF; if the dynamics are smoothly graded rather than threshold-based; or if the three components are not distinguishable -- the Godelian/Trinitarian framing is wrong at the neural level. The theology may still work as metaphor, but it would not be structural.

10. Hypothesis E5: Emulation Singularities

10.1 The Claim

The manuscript classifies crises as removable singularities, poles, or essential singularities (Chapter 20). In a brain emulation, we can directly test whether neural dynamics exhibit these three types of perturbation response.

10.2 Protocol

Perturb the emulated brain with various types of disruptions -- simulated lesions, input noise at varying amplitudes, connectivity disruptions of varying scope -- and classify the system's recovery dynamics. Use a systematic perturbation space: vary the location (cortical vs. subcortical vs. hippocampal), magnitude (1% to 50% of connections), and type (deletion, scrambling, noise injection) of disruption.

10.3 Prediction

Three and exactly three qualitative recovery patterns:

Removable singularity: The system's trajectory is temporarily disrupted but returns to its previous attractor basin. The perturbation "looked bad" but the system was never actually destabilized. Analogue: a healthy brain's response to a momentary shock. Mathematically: the limit of the system's state as time approaches and passes the perturbation exists and is continuous.

Pole: The system enters a different attractor basin -- a genuinely new dynamical regime -- but one that is still structured and comprehensible (finite-dimensional). Recovery to a new stable state occurs, but it is a different state. Analogue: recovery from brain injury with altered personality. Mathematically: the function diverges but the order of divergence is finite and measurable.

Essential singularity: The system's dynamics become chaotic -- it visits many attractor basins unpredictably, never settling. The perturbation destroyed the system's dynamical coherence. Analogue: severe psychosis or delirium. Mathematically: the system's state comes arbitrarily close to every value in every neighborhood of the perturbation point (the Casorati-Weierstrass analogue in dynamical systems).
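These three recovery patterns can be operationalized directly on emulation state trajectories. A deliberately minimal 1-D sketch: check whether the late trajectory settles at all, and if so, whether it settles back at the old attractor. The thresholds and toy trajectories are illustrative placeholders.

```python
import statistics

def classify_recovery(baseline, post, settle_tol=0.1, drift_tol=0.1):
    """Classify a post-perturbation trajectory (1-D for simplicity).

    baseline: mean state before the perturbation (the old attractor).
    post: state samples after the perturbation.
    Thresholds are illustrative placeholders, not values from the text.
    """
    tail = post[len(post) // 2:]        # late portion of the trajectory
    if statistics.pstdev(tail) > drift_tol:
        return "essential"              # never settles: incoherent wandering
    if abs(statistics.mean(tail) - baseline) < settle_tol:
        return "removable"              # returns to the old basin
    return "pole"                       # settles, but in a new basin

# Three toy trajectories around a baseline attractor at 0.0:
returns = [1.0 / (t + 1) for t in range(100)]                      # decays to 0
new_state = [2.0 + 1.0 / (t + 1) for t in range(100)]              # decays to 2
chaotic = [((t * 2654435761) % 1000) / 250.0 for t in range(100)]  # wanders

for traj in (returns, new_state, chaotic):
    print(classify_recovery(0.0, traj))
```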

10.4 Falsification

If the recovery dynamics do not cluster into three types; if there is a continuum rather than discrete classes; or if there are more than three qualitative types -- the singularity classification does not map onto neural dynamics and the Laurent series theology is decorative rather than structural.


Part III: Integration and Research Timeline

11. How the Hypotheses Interlock

The eight hypotheses (three from Part I, five from Part II) are not independent. They form a logical chain:

If Hypothesis H1 succeeds (spatial training transfers to causal reasoning), this supports the exaptation claim that grounds the imaginary axis in hippocampal circuitry. If Hypothesis H2 succeeds (grid code disruption impairs metacognition before memory), this identifies the specific neural mechanism of self-reference. If Hypothesis H3 succeeds (grid-augmented AI outperforms transformers on counterfactuals), this validates the architectural prediction about AI.

These three together provide the foundation for the brain emulation program. If the spatial grounding hypothesis is supported at the individual level, the emulation hypotheses become high-priority tests of the framework's deepest structural claims. If E1 succeeds (Godelian threshold exists), this validates the manuscript's formal model of consciousness emergence. If E3 succeeds (grid code is a topological invariant), this validates the claim that theology lives in topology. If E4 succeeds (Trinity structure is measurable), this provides the strongest possible evidence for the framework. If E5 succeeds (three crisis types), this validates the singularity classification.

Conversely, failures at any stage constrain the framework. If H1 fails, the neural grounding of the imaginary axis needs revision (though the measurement protocols would still function as purely institutional indices). If E3 fails, the universality claim needs revision. If E5 fails, the singularity classification is decorative.

12. The Falsification Matrix

  • H1 (spatial-causal transfer). Test: taxi driver study. Confirmed: exaptation claim supported. Falsified: neural grounding needs revision. Timeline: 1--2 years.
  • H2 (grid code = self-reference). Test: Alzheimer's metacognition. Confirmed: Godelian mechanism identified. Falsified: grid code is not the self-reference mechanism. Timeline: ~1 year.
  • H3 (AI lacks grid code). Test: transformer vs. augmented benchmark. Confirmed: architectural prediction validated. Falsified: spatial grounding not necessary for causal reasoning. Timeline: 6--12 months.
  • E1 (Godelian threshold). Test: emulation complexity sweep. Confirmed: consciousness is a phase transition. Falsified: consciousness is gradual emergence. Timeline: 18--48 months.
  • E2 (spatial architecture). Test: ablation emulations. Confirmed: hippocampus necessary for causation. Falsified: multiple architectures suffice. Timeline: 18--48 months.
  • E3 (grid code universality). Test: multi-fidelity emulations. Confirmed: theology lives in topology. Falsified: theology lives in biochemistry. Timeline: 6--24 months.
  • E4 (strange loop signature). Test: information-theoretic analysis. Confirmed: Trinity structure measurable. Falsified: Trinitarian framing wrong at neural level. Timeline: 18--48 months.
  • E5 (emulation singularities). Test: perturbation studies. Confirmed: three crisis types validated. Falsified: singularity classification decorative. Timeline: 36--60 months.

13. Phased Research Timeline

Phase 1 (Months 0--12): Execute Hypothesis H3 (grid-augmented AI benchmark) and begin Hypothesis H2 (Alzheimer's metacognition retrospective). These require the least new infrastructure and provide the fastest signal on whether the framework's core claims have empirical support.

Phase 2 (Months 6--24): Execute Hypothesis H1 (taxi driver study) and begin Hypothesis E3 (grid code universality in emulations). H1 is the strongest behavioral test. E3 is the fastest emulation test and directly connects to the universality hypothesis that motivates much of the brain emulation community.

Phase 3 (Months 18--48): Execute Hypotheses E1 (Godelian threshold), E2 (spatial architecture ablation), and E4 (strange loop signature). These are the most demanding and most consequential tests. They should be attempted only after Phases 1 and 2 provide supporting evidence.

Phase 4 (Months 36--60): Execute Hypothesis E5 (emulation singularities) and integrate all results into a revised version of the manuscript's theoretical framework. E5 is the most speculative hypothesis and should be tested last, when the emulation infrastructure is most mature.

14. What This Changes About the Manuscript

If the research program outlined here produces supporting evidence, the manuscript gains several things it currently lacks:

A neural mechanism for the imaginary axis. The measurement protocols' y(t) would be grounded not just in institutional indices but in the hippocampal-entorhinal system's capacity for recursive self-modeling. The meaning crisis diagnosis (y'(t) <= 0) would have a neurological component: literal atrophy of the neural architecture of reflexivity.

An empirical anchor for the Trinity. If the strange loop signature (E4) reveals three irreducible, measurable components in neural dynamics, the Trinitarian mapping of Chapter 16 moves from analogy to structural claim.

A falsified or validated singularity taxonomy. If perturbation recovery dynamics (E5) cluster into exactly three types, the Laurent series theology of Chapter 20 is structural. If they don't, it's decorative.

A specific, testable prediction about the future of AI. The claim that scaling transformers will not produce genuine causal reasoning -- because the architecture lacks the hippocampal cognitive map -- is falsifiable within 6--12 months. This is the fastest test of any claim in the manuscript.

If the research program produces disconfirming evidence, the manuscript must revise accordingly. This is what the framework demands: the theology must generate novel predictions that turn out to be right, not merely fit existing data post hoc. This appendix is the manuscript's attempt to put its money where its mathematics is.


References

Aronov, D., Nevers, R., & Tank, D. W. (2017). Mapping of a non-spatial dimension by the hippocampal-entorhinal circuit. Nature, 543(7647), 719--722.

Banino, A., Barry, C., Uria, B., et al. (2018). Vector-based navigation using grid-like representations in artificial agents. Nature, 557(7705), 429--433.

Bellmund, J. L. S., Gardenfors, P., Moser, E. I., & Doeller, C. F. (2018). Navigating cognition: Spatial codes for human thinking. Science, 362(6415), eaat6766.

Constantinescu, A. O., O'Reilly, J. X., & Behrens, T. E. J. (2016). Organizing conceptual knowledge in humans with a gridlike code. Science, 352(6292), 1464--1468.

De Brigard, F., et al. (2013). Neural activity associated with episodic memory during counterfactual thinking. Neuropsychologia, 51(3), 556--564.

Hassabis, D., et al. (2007). Patients with hippocampal amnesia cannot imagine new experiences. Proceedings of the National Academy of Sciences, 104(5), 1726--1731.

Hofstadter, D. R. (1979). Godel, Escher, Bach: An Eternal Golden Braid. Basic Books.

Kumaran, D., & Maguire, E. A. (2005). The human hippocampus: Cognitive maps or relational memory? Journal of Neuroscience, 25(31), 7254--7259.

Maguire, E. A., et al. (2000). Navigation-related structural change in the hippocampi of taxi drivers. Proceedings of the National Academy of Sciences, 97(8), 4398--4403.

O'Keefe, J., & Nadel, L. (1978). The Hippocampus as a Cognitive Map. Oxford University Press.

Pearl, J., & Mackenzie, D. (2018). The Book of Why: The New Science of Cause and Effect. Basic Books.

Schacter, D. L., & Addis, D. R. (2007). The cognitive neuroscience of constructive memory. Annual Review of Psychology, 58, 167--194.

Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55(4), 189--208.

Tononi, G. (2008). Consciousness as integrated information: A provisional manifesto. Biological Bulletin, 215(3), 216--242.

Zecevic, M., et al. (2023). Causal parrots: Large language models may talk causality but are not causal. Transactions on Machine Learning Research.