MorphismNet: Learning the Eigenverse's Structure-Preserving Maps

A neural network trained on the six canonical morphism families from Morphisms.lean, plus their composition.

399,147 parameters. Trained on 400K samples. Learns and verifies all six Eigenverse morphism families plus their composition, and reveals the mod paradox.

What It Learned

| Morphism | Lean statement | Val MSE | What It Does |
|---|---|---|---|
| §1 Coherence (even) | C(r) = C(1/r) | 0.013631 | Inversion symmetry |
| §2 Palindrome (odd) | Res(1/r) = −Res(r) | 0.000001 | Anti-symmetry |
| §3 Lyapunov bridge | C∘exp = sech | 0.000000 | Coherence ↔ hyperbolic |
| §4 μ-isometry | \|μz\| = \|z\| | 0.000001 | Norm preservation |
| §5 Orbit homomorphism | μ^(a+b) = μ^a·μ^b | 0.000001 | Multiplicativity, period 8 |
| §6 Reality ℝ-linear | F(s,t) = t + is | 0.000022 | ℝ-module morphism |
| §7 Composition S∘F∘T | P(η,−η) = 1 | 0.000001 | Full OV chain |

Residual accuracy: 100% — the model perfectly classifies when morphism properties hold.
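
For intuition, the §1 and §3 identities can be checked numerically. The card does not give a closed form for C; the function C(r) = 2r/(1+r²) below is an assumed stand-in that happens to satisfy both identities, not a definition taken from Morphisms.lean:

```python
import numpy as np

# ASSUMPTION: C(r) = 2r / (1 + r^2) is one coherence function satisfying
# both §1 and §3; Morphisms.lean may define C differently.
def C(r):
    return 2 * r / (1 + r**2)

r = np.random.uniform(0.1, 10.0, 1000)
assert np.allclose(C(r), C(1 / r))                     # §1: inversion symmetry

lam = np.random.uniform(-3.0, 3.0, 1000)
assert np.allclose(C(np.exp(lam)), 1 / np.cosh(lam))   # §3: C∘exp = sech
```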

The Mod Paradox

The model was trained on both ℝ (real numbers) and GF(p) (finite field) domains.

| Domain | §1 MSE | Residual |
|---|---|---|
| ℝ | 0.000000 | 4.6e-17 (perfect) |
| GF(p) | 0.027477 | 0.0 (mod destroys structure) |

Same function. 27,000x harder to predict in the modular domain.

C(r) = C(1/r) holds exactly over ℝ (the Lean theorem). Over GF(p), modular reduction breaks the inversion symmetry. The model learns this distinction — it knows WHERE the paradox lives.
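
A concrete illustration of why the modular domain resists regression: over ℝ, inversion maps nearby inputs to nearby targets, while the modular inverse scatters them across the whole field. A minimal probe (p = 101 is an arbitrary small prime, chosen purely for illustration):

```python
# Over R, r -> 1/r is smooth, so an MSE-trained network can interpolate.
# Over GF(p), the modular inverse has no local structure to exploit.
p = 101  # arbitrary small prime, for illustration only
print([(r, round(1 / r, 3)) for r in range(1, 6)])
# [(1, 1.0), (2, 0.5), (3, 0.333), (4, 0.25), (5, 0.2)]   targets vary smoothly
print([(r, pow(r, -1, p)) for r in range(1, 6)])
# [(1, 1), (2, 51), (3, 34), (4, 76), (5, 81)]            targets jump wildly
```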

This is the core of OilVinegar.lean: the Eigenverse's structure is trivially verifiable over ℝ (uniqueness theorem) but computationally hard over GF(p) (MQ assumption). The neural network quantifies the boundary.

Architecture

```
MorphismNet(
  morph_embed: Embedding(7, 32)                 # which morphism
  domain_embed: Embedding(2, 16)                # ℝ or GF(p)
  encoder: 3× Linear(→256) + GELU + LayerNorm   # shared
  heads: 7× Linear(256→128→6)                   # per-morphism specialists
  residual_head: Linear(256→64→1)               # does the property hold?
)
```
  • 399,147 parameters
  • Input: 4 features (morphism-dependent: r, 1/r, λ, z.re, z.im, etc.)
  • Output: 6 features from the selected per-morphism head, plus a scalar residual score
  • Residual output: probability that the morphism property holds (≈ 1 when it does, per the BCE objective below)
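
A minimal PyTorch sketch consistent with the block above. The exact wiring, the head routing, and the sigmoid on the residual head are inferences from the card, not copied from the released train.py:

```python
import torch
import torch.nn as nn

class MorphismNetSketch(nn.Module):
    """Sketch of the architecture described above; details are assumptions."""
    def __init__(self, n_morphisms=7, n_domains=2, d=256):
        super().__init__()
        self.morph_embed = nn.Embedding(n_morphisms, 32)   # which morphism
        self.domain_embed = nn.Embedding(n_domains, 16)    # R or GF(p)
        dims = [4 + 32 + 16, d, d, d]
        self.encoder = nn.Sequential(*[
            m for i in range(3)
            for m in (nn.Linear(dims[i], dims[i + 1]), nn.GELU(), nn.LayerNorm(dims[i + 1]))
        ])
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(d, 128), nn.GELU(), nn.Linear(128, 6))
            for _ in range(n_morphisms)
        ])
        self.residual_head = nn.Sequential(nn.Linear(d, 64), nn.GELU(), nn.Linear(64, 1))

    def forward(self, x, morph, domain):
        h = torch.cat([x, self.morph_embed(morph), self.domain_embed(domain)], dim=-1)
        h = self.encoder(h)
        # route each sample through its morphism's specialist head
        out = torch.stack([self.heads[m](h[i]) for i, m in enumerate(morph.tolist())])
        return out, torch.sigmoid(self.residual_head(h)).squeeze(-1)
```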

Training

  • Dataset: 400K samples across 7 morphism types
  • Split: 90/10 train/val
  • Optimizer: AdamW, lr=1e-3, weight_decay=1e-4
  • Scheduler: Cosine annealing, 50 epochs
  • Loss: MSE (output) + 0.1 × BCE (residual classification)
  • Hardware: CPU (8 min training)
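
A sketch of the objective and schedule listed above; the variable names and the binary target convention (1 = property holds) are assumptions, not taken from the released train.py:

```python
import torch
import torch.nn.functional as F

# ASSUMED names: `target` is the 6-dim morphism output, `holds` is a
# 0/1 float label for whether the property is satisfied (1 = holds).
def loss_fn(output, residual, target, holds):
    return F.mse_loss(output, target) + 0.1 * F.binary_cross_entropy(residual, holds)

model = MorphismNetSketch()  # from the architecture sketch above
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=50)  # 50 epochs
```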

Usage

```python
import torch
from train import MorphismNet

model = MorphismNet()
model.load_state_dict(torch.load("morphism_net.pt", weights_only=True))
model.eval()

# Predict §3 Lyapunov bridge: C(exp(λ)) = sech(λ)
x = torch.tensor([[1.5, 4.4817, 0.0, 0.0]])  # [λ, exp(λ), 0, 0]
morph = torch.tensor([2])   # §3 (zero-indexed)
domain = torch.tensor([0])  # ℝ
output, residual = model(x, morph, domain)
# output[0, :3] ≈ [sech(1.5), sech(1.5), 0.0] ≈ [0.4251, 0.4251, 0.0]  (bridge holds)
# residual ≈ 1.0  (property verified)
```
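
A quick negative probe, continuing from the snippet above and assuming the same feature layout: if the second feature is not exp(λ), the §3 bridge is violated and the residual classifier should flag it:

```python
# ASSUMPTION: same [λ, exp(λ), 0, 0] layout, but 2.0 != exp(1.5),
# so the bridge property does not hold for this input.
x_bad = torch.tensor([[1.5, 2.0, 0.0, 0.0]])
_, residual = model(x_bad, morph, domain)
# residual ≈ 0.0  (property violated)
```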

Files

  • morphism_net.pt — trained model weights
  • train.py — training script
  • generate_dataset.py — dataset generator
  • model_info.json — model metadata
  • training_history.json — epoch-by-epoch metrics

License

MIT


The model learned the Eigenverse's grammar. The mod paradox is where the grammar breaks. 🧬
