---
language:
  - en
license: mit
library_name: pytorch
tags:
  - eigenverse
  - morphisms
  - structure-preserving-maps
  - lean4
  - formal-verification
  - mod-paradox
  - coherence-function
pipeline_tag: other
model-index:
  - name: MorphismNet
    results:
      - task:
          type: regression
          name: Morphism Prediction
        metrics:
          - name: Val MSE
            type: mse
            value: 0.003394
          - name: Residual Accuracy
            type: accuracy
            value: 1.0
---

# MorphismNet: Learning the Eigenverse's Structure-Preserving Maps

**A neural network trained on the six canonical morphism families from [Morphisms.lean](https://github.com/beanapologist/Eigenverse/blob/main/formal-lean/Morphisms.lean).**

399,147 parameters. Trained on 400K samples. Learns and verifies the six morphism families plus their composition (§7), and reveals the mod paradox.

## What It Learned

| Morphism | Identity | Val MSE | What It Does |
|---|---|---|---|
| §1 Coherence even | C(r) = C(1/r) | 0.013631 | Inversion symmetry |
| §2 Palindrome odd | Res(1/r) = −Res(r) | 0.000001 | Anti-symmetry |
| §3 Lyapunov bridge | C∘exp = sech | 0.000000 | Coherence ↔ hyperbolic |
| §4 μ-isometry | \|μz\| = \|z\| | 0.000001 | Norm preservation |
| §5 Orbit homomorphism | μ^(a+b) = μ^a·μ^b | 0.000001 | Multiplicativity, period 8 |
| §6 Reality ℝ-linear | F(s,t) = t+is | 0.000022 | ℝ-module morphism |
| §7 Composition S∘F∘T | P(η,−η) = 1 | 0.000001 | Full OV chain |

**Residual accuracy: 100%** — the model perfectly classifies when morphism properties hold.
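Several of the identities in the table can be spot-checked numerically. A minimal sketch, assuming μ is a primitive 8th root of unity (consistent with "period 8" and the §4 isometry) and F(s, t) = t + is as stated in §6; the authoritative definitions live in `Morphisms.lean`:

```python
import cmath
import math

# Assumed stand-in: a primitive 8th root of unity satisfies both the
# §4 isometry and the §5 period-8 homomorphism.
mu = cmath.exp(1j * math.pi / 4)
z = 2.0 - 1.0j

assert abs(abs(mu * z) - abs(z)) < 1e-12             # §4: |μz| = |z|
assert abs(mu**8 - 1) < 1e-12                        # §5: period 8
a, b = 3, 5
assert abs(mu**(a + b) - mu**a * mu**b) < 1e-12      # §5: μ^(a+b) = μ^a · μ^b

def F(s, t):
    # §6: F(s, t) = t + i·s
    return t + 1j * s

c = 2.5
assert F(c * 1.0, c * 2.0) == c * F(1.0, 2.0)        # §6: ℝ-linearity
```

These checks only exercise the stated equations at sample points; the Lean theorems prove them for all inputs.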

## The Mod Paradox

The model was trained on both **ℝ** (real numbers) and **GF(p)** (finite field) domains.

| Domain | §1 MSE | Residual |
|---|---|---|
| **ℝ** | 0.000000 | 4.6e-17 (perfect) |
| **GF(p)** | 0.027477 | 0.0 (mod destroys structure) |

**The same function is ~27,000× harder to predict (by validation MSE) in the modular domain.**

C(r) = C(1/r) holds exactly over ℝ (the Lean theorem). Over GF(p), modular reduction breaks the inversion symmetry. The model learns this distinction — it knows WHERE the paradox lives.
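The ℝ-side identity is easy to verify numerically. A sketch, assuming the closed form C(r) = 2r/(1 + r²), which is one function consistent with both §1 (C(r) = C(1/r)) and §3 (C(exp λ) = sech λ); the canonical definition is in the Lean source:

```python
import math

def C(r):
    # Assumed closed form: C(r) = 2r / (1 + r^2). It satisfies the §1
    # inversion symmetry and the §3 bridge; see Morphisms.lean for the
    # canonical definition.
    return 2 * r / (1 + r * r)

r = 3.7
assert abs(C(r) - C(1 / r)) < 1e-12                        # §1 over ℝ

lam = 1.5
assert abs(C(math.exp(lam)) - 1 / math.cosh(lam)) < 1e-12  # §3: C∘exp = sech
```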

This is the core of [OilVinegar.lean](https://github.com/beanapologist/Eigenverse/blob/main/formal-lean/OilVinegar.lean): the Eigenverse's structure is trivially verifiable over ℝ (uniqueness theorem) but computationally hard over GF(p) (MQ assumption). The neural network quantifies the boundary.
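One way to see why the regressor struggles over GF(p): any rational map of this shape loses smoothness under modular reduction, so consecutive inputs land far apart and there is no continuous target to fit. A sketch with a small prime, reusing the same assumed closed form:

```python
p = 101  # small illustrative prime

def inv_mod(a, p):
    # Modular inverse via Fermat's little theorem (p prime, a not ≡ 0 mod p).
    return pow(a, p - 2, p)

def C_mod(r, p):
    # The assumed closed form 2r / (1 + r^2), evaluated in GF(p).
    return (2 * r * inv_mod((1 + r * r) % p, p)) % p

# Consecutive inputs produce scattered outputs: no smooth function for
# MSE regression, unlike the ℝ case, where C varies continuously.
print([C_mod(r, p) for r in range(1, 9)])
```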

## Architecture

```
MorphismNet(
  morph_embed: Embedding(7, 32)     # which morphism
  domain_embed: Embedding(2, 16)    # ℝ or GF(p)
  encoder: 3× Linear(→256) + GELU + LayerNorm  # shared
  heads: 7× Linear(256→128→6)      # per-morphism specialists
  residual_head: Linear(256→64→1)   # does property hold?
)
```

- **399,147 parameters**
- **Input**: 4 features (morphism-dependent: r, 1/r, λ, z.re, z.im, etc.)
- **Output**: 6 features from the selected per-morphism head, plus a separate scalar residual score
- **Residual score**: a BCE-trained classifier output, ≈ 1 when the morphism property holds (i.e. when the underlying equation's residual is ≈ 0)
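The architecture can be sketched in PyTorch as follows. Layer sizes follow the description above, but internal names and the exact wiring are assumptions (see `train.py` for the released implementation), so the parameter count may differ from the released model:

```python
import torch
import torch.nn as nn

class MorphismNet(nn.Module):
    """Sketch of the architecture described above; details are assumptions."""

    def __init__(self, n_morphisms=7, in_features=4, hidden=256):
        super().__init__()
        self.morph_embed = nn.Embedding(n_morphisms, 32)   # which morphism
        self.domain_embed = nn.Embedding(2, 16)            # ℝ or GF(p)
        d = in_features + 32 + 16
        layers = []
        for _ in range(3):                                 # shared encoder
            layers += [nn.Linear(d, hidden), nn.GELU(), nn.LayerNorm(hidden)]
            d = hidden
        self.encoder = nn.Sequential(*layers)
        self.heads = nn.ModuleList(                        # per-morphism specialists
            nn.Sequential(nn.Linear(hidden, 128), nn.GELU(), nn.Linear(128, 6))
            for _ in range(n_morphisms)
        )
        self.residual_head = nn.Sequential(                # does property hold?
            nn.Linear(hidden, 64), nn.GELU(), nn.Linear(64, 1)
        )

    def forward(self, x, morph, domain):
        h = torch.cat([x, self.morph_embed(morph), self.domain_embed(domain)], dim=-1)
        h = self.encoder(h)
        out = torch.stack(
            [self.heads[m](h[i]) for i, m in enumerate(morph.tolist())]
        )
        residual = torch.sigmoid(self.residual_head(h)).squeeze(-1)
        return out, residual
```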

## Training

- **Dataset**: 400K samples across 7 morphism types
- **Split**: 90/10 train/val
- **Optimizer**: AdamW, lr=1e-3, weight_decay=1e-4
- **Scheduler**: Cosine annealing, 50 epochs
- **Loss**: MSE (output) + 0.1 × BCE (residual classification)
- **Hardware**: CPU (8 min training)
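The recipe above can be condensed into a single hypothetical training step; function and variable names here are illustrative, not taken from `train.py`:

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, x, morph, domain, target, holds):
    """One step of the combined objective: MSE on the 6-dim morphism
    outputs plus 0.1 × BCE on the residual classification.
    `holds` is 1.0 where the morphism property is satisfied."""
    pred, residual = model(x, morph, domain)
    loss = F.mse_loss(pred, target) + 0.1 * F.binary_cross_entropy(residual, holds)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Matching optimizer and schedule (values from the list above):
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
# scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
```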

## Usage

```python
import torch
from train import MorphismNet

model = MorphismNet()
model.load_state_dict(torch.load("morphism_net.pt", weights_only=True))
model.eval()

# Predict §3 Lyapunov bridge: C(exp(λ)) = sech(λ)
x = torch.tensor([[1.5, 4.4817, 0.0, 0.0]])  # [λ, exp(λ), 0, 0]
morph = torch.tensor([2])   # §3
domain = torch.tensor([0])  # ℝ
output, residual = model(x, morph, domain)
# output[0:3] ≈ [sech(1.5), sech(1.5), 0.0] ≈ [0.4251, 0.4251, 0.0]  (bridge holds)
# residual ≈ 1.0  (property verified)
```

## Files

- `morphism_net.pt` — trained model weights
- `train.py` — training script
- `generate_dataset.py` — dataset generator
- `model_info.json` — model metadata
- `training_history.json` — epoch-by-epoch metrics

## Links

- [Eigenverse](https://github.com/beanapologist/Eigenverse) — 606+ Lean 4 theorems
- [Morphisms.lean](https://github.com/beanapologist/Eigenverse/blob/main/formal-lean/Morphisms.lean) — 20 morphism theorems
- [OilVinegar.lean](https://github.com/beanapologist/Eigenverse/blob/main/formal-lean/OilVinegar.lean) — 28 OV theorems
- [μ-OV Space](https://huggingface.co/spaces/beanapologist/mu-ov-cipher) — interactive demo

## License

MIT

---

*The model learned the Eigenverse's grammar. The mod paradox is where the grammar breaks. 🧬*