# SCM Foundations

## What are Signed Common Meadows?

Signed Common Meadows (SCM) are an algebraic structure in which division is totalized: every element has an inverse, and 1/0 yields a single absorptive bottom element (⊥) rather than an exception. ZeroProofML builds on the pioneering work of Jan A. Bergstra and John V. Tucker on meadows and common meadows.
## Mathematical Background

A common meadow extends a field (like ℝ or ℂ) with:

- Total inverse: Every element x has an inverse, including zero: 0^{-1} = ⊥
- Absorptive bottom: ⊥ absorbs all operations: x + ⊥ = ⊥, x · ⊥ = ⊥, ⊥ / x = ⊥
- Closure: Operations remain within the meadow; no exceptions, no NaN propagation

This provides a mathematically rigorous foundation for arithmetic that never breaks.
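As a minimal sketch of these axioms (not the library's implementation; `BOT`, `scm_add`, `scm_mul`, and `scm_inv` are illustrative names), absorption and totality can be modeled with a sentinel value:

```python
# Minimal sketch of absorptive-bottom arithmetic; BOT and the scm_*
# helpers are illustrative names, not the zeroproof API.
BOT = object()  # stands in for the absorptive bottom element ⊥

def scm_add(a, b):
    return BOT if a is BOT or b is BOT else a + b

def scm_mul(a, b):
    return BOT if a is BOT or b is BOT else a * b

def scm_inv(a):
    # 0^{-1} = ⊥; ⊥ itself is absorbed as well
    return BOT if a is BOT or a == 0 else 1.0 / a

assert scm_inv(0.0) is BOT                 # total inverse: 1/0 = ⊥
assert scm_add(1.0, BOT) is BOT            # ⊥ absorbs addition
assert scm_mul(scm_inv(0.0), 5.0) is BOT   # closure: no exception, no NaN
```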
## Weak Sign Structure

Standard common meadows attach no sign to the bottom element. To support applications where the direction of approach to a singularity matters (e.g., robotics, control systems), we layer a sign operator on top:

```
s(z) = {
  z/|z|  if z ≠ 0 and z ≠ ⊥
  0      if z = 0
  ⊥      if z = ⊥
}
```
### Properties

- Real inputs: 4-signed (+1, -1, 0, ⊥)
- Complex inputs: project onto the unit circle (weak sign)
- History-aware: when |z| enters a small singular band (|z| ≤ τ_sign), the operator returns the last observed orientation to prevent oscillation

This orientation tracking is critical for inverse kinematics, collision avoidance, and other geometric applications where sign ambiguity matters.
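A minimal sketch of the real-valued operator with the singular-band hysteresis described above (the `SignTracker` class and its defaults are illustrative, not the zeroproof API; `None` stands in for ⊥):

```python
import math

class SignTracker:
    """Illustrative 4-signed operator with history-aware hysteresis.

    Returns +1.0, -1.0, 0.0, or None (standing in for ⊥). For nonzero
    inputs inside the singular band |z| <= tau_sign, it repeats the last
    observed orientation instead of re-deciding the sign.
    """

    def __init__(self, tau_sign=1e-6):
        self.tau_sign = tau_sign
        self.last_sign = 0.0  # last observed orientation

    def __call__(self, z):
        if z is None:                   # ⊥ in, ⊥ out
            return None
        if z == 0.0:
            return 0.0                  # s(0) = 0
        if abs(z) <= self.tau_sign:
            return self.last_sign       # hold orientation inside the band
        self.last_sign = math.copysign(1.0, z)
        return self.last_sign

s = SignTracker(tau_sign=1e-3)
assert s(2.0) == 1.0
assert s(1e-4) == 1.0    # inside the band: previous orientation is held
assert s(-2.0) == -1.0
```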
## Practical Implementation

### Scalar Values

SCM operations on scalars (`zeroproof.scm.value`, `zeroproof.scm.ops`) return `SCMValue(⊥)` on division by zero or domain errors:

```python
from zeroproof.scm.ops import scm_div
from zeroproof.scm.value import scm_real

x = scm_real(1.0)
y = scm_real(0.0)
result = scm_div(x, y)  # SCMValue(⊥)
```
### Array/Tensor Representation

For arrays and tensors, SCM values are represented as two parts:

- Numeric payload (the actual values)
- Bottom mask (a boolean tensor marking ⊥ entries)

**Important**: The mask is authoritative. Payload values at masked entries are undefined and must be ignored. Use strict decoding to map them to NaN if you need an IEEE sentinel.

```python
import torch

# Masked representation
value = torch.tensor([1.0, 2.0, 3.0])
bottom_mask = torch.tensor([False, True, False])  # second element is ⊥

# Strict decode (for IEEE interop)
decoded = torch.where(bottom_mask, torch.nan, value)
```
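As an illustration of how operations can propagate this representation (a sketch under the payload-plus-mask convention above; `scm_div_masked` is a hypothetical helper, not the zeroproof API):

```python
import torch

def scm_div_masked(num, num_mask, den, den_mask):
    """Sketch of elementwise SCM division on (payload, bottom-mask) pairs.

    Entries where the denominator is zero, or where either input is
    already ⊥, become ⊥ in the output; their payloads are arbitrary
    and must be ignored, per the authoritative-mask rule.
    """
    out_mask = num_mask | den_mask | (den == 0)
    safe_den = torch.where(out_mask, torch.ones_like(den), den)  # avoid IEEE inf/NaN
    return num / safe_den, out_mask

num = torch.tensor([1.0, 2.0, 3.0])
den = torch.tensor([2.0, 0.0, 1.0])
no_bottom = torch.zeros(3, dtype=torch.bool)
value, mask = scm_div_masked(num, no_bottom, den, no_bottom)
# mask == [False, True, False]; decode only at the output boundary
decoded = torch.where(mask, torch.nan, value)
```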
### No Layer-by-Layer Guards

Unlike v0.3, there's no "guard mode" checking at every layer. Singularity handling is pushed to:

- Explicit masks during the forward pass
- A single decode at the output boundary
- Gradient policies during backpropagation

This dramatically simplifies the computational graph.
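A minimal sketch of this flow, assuming each layer maps a (payload, mask) pair to a new pair; the layer and function names are illustrative, not the zeroproof API:

```python
import torch

def rational_layer(x, mask, b=-1.0):
    """Example masked layer computing 1 / (x + b). There is no guard
    branch: singular entries simply widen the mask."""
    den = x + b
    new_mask = mask | (den == 0)
    safe = torch.where(new_mask, torch.ones_like(den), den)
    return 1.0 / safe, new_mask

def forward(x, layers):
    """The mask travels with the payload through every layer."""
    mask = torch.zeros_like(x, dtype=torch.bool)
    for layer in layers:
        x, mask = layer(x, mask)
    return x, mask

x = torch.tensor([0.0, 1.0, 2.0])
y, mask = forward(x, [rational_layer])     # 1/(x - 1): singular at x = 1
decoded = torch.where(mask, torch.nan, y)  # single decode at the boundary
```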
## Fracterm Flattening

Common meadow theory guarantees that any term over the meadow signature can be flattened into a single rational function P(x)/Q(x). ZeroProofML exploits this for Fused Rational Units (FRUs).
### Why Flatten?

Instead of checking for singularities at every division operation in a deep rational network, we:

- Symbolically flatten small rational subgraphs into P(x)/Q(x)
- Check only the final denominator Q(x) for zeros
- Get O(1) singularity detection instead of O(depth)
### Example

Consider a two-layer rational expression:

```
f(x) = ((x + 1) / x) / (x - 1)
```

Flattening produces:

```
P(x) = x + 1
Q(x) = x(x - 1)
```

Now singularities live at x ∈ {0, 1} and can be detected by checking |Q(x)| < τ_infer once.
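The same flattening can be reproduced with SymPy as a sanity check (an illustrative sketch; the library's own symbolic utilities live in `zeroproof.scm.fracterm`):

```python
import sympy as sp

x = sp.symbols('x')
f = ((x + 1) / x) / (x - 1)

# Flatten into a single rational function P(x)/Q(x)
P, Q = sp.fraction(sp.together(f))
print(P)             # x + 1
print(sp.expand(Q))  # x**2 - x, i.e. x*(x - 1)

# O(1) singularity check at inference: |Q(x)| < tau_infer
tau_infer = 1e-9
Q_fn = sp.lambdify(x, Q)

def is_singular(val):
    return abs(Q_fn(val)) < tau_infer

assert is_singular(0.0) and is_singular(1.0)
assert not is_singular(2.0)
```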
### Engineering Constraints

To avoid degree explosion, flattening is restricted to:

- Small rational heads (not whole models)
- Depth limit: L ≤ 5 layers
- Local application: the rest of the network stays in standard SCM or projective form
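To see why the depth limit matters: composing a degree-k rational layer with a degree-d rational input yields a flattened denominator of degree up to k·d, so worst-case degree grows geometrically with depth. A hypothetical budget check (not the `FractermRationalUnit` API; the caps below are illustrative) might look like:

```python
MAX_DEPTH = 5     # matches the L <= 5 constraint above
MAX_DEGREE = 64   # illustrative cap on the flattened denominator degree

def check_flatten_budget(layer_degrees):
    """layer_degrees[i] is the rational degree of layer i; the worst-case
    flattened degree is the product of the per-layer degrees."""
    if len(layer_degrees) > MAX_DEPTH:
        raise ValueError(f"fracterm depth {len(layer_degrees)} > {MAX_DEPTH}")
    degree = 1
    for k in layer_degrees:
        degree *= k  # worst-case degree after composing this layer
        if degree > MAX_DEGREE:
            raise ValueError(f"flattened degree {degree} > {MAX_DEGREE}")
    return degree

check_flatten_budget([2, 2, 2])  # OK: worst-case flattened degree 8
```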
### Implementation

- `zeroproof.scm.fracterm` provides symbolic utilities for representing and simplifying P/Q terms
- `zeroproof.layers.fru.FractermRationalUnit` enforces degree-growth and depth constraints
- Automatic flattening of arbitrary PyTorch subgraphs is a future enhancement
## Terminology
| Term | Definition |
|---|---|
| Bottom (⊥) | Absorptive error element; any operation involving ⊥ yields ⊥ |
| Sign operator s(·) | Maps values to unit magnitude with orientation; returns ⊥ unchanged |
| Weak sign | Sign function extended to complex numbers (unit circle projection) |
| Projective tuple | Pair ⟨N,D⟩ representing the same value as N/D; ⟨1,0⟩ denotes ⊥ |
| Coverage | Fraction of predictions that remain finite (non-⊥) under SCM semantics |
| Fracterm flattening | Rewriting rational subgraphs as a single P(x)/Q(x) for efficient singularity checks |
## Design Principles

The SCM core follows these principles:

- Train on smooth, infer on strict: Use projective tuples or gradient policies for smooth training; enforce strict meadow semantics at deployment
- Algebraic strictness: No epsilon-regularization hacks; division by zero is a valid algebraic state
- Explicit over implicit: Singularity handling via masks and policies, not hidden guard branches
- Safety by construction: Configurable thresholds (τ_infer, τ_train) with explicit gap regions
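As one concrete (hypothetical) reading of the first principle, training can stay on projective tuples ⟨N, D⟩, where no division ever happens and gradients remain finite, with strict decoding applied once at deployment; the function names below are illustrative, not the zeroproof API:

```python
import torch

TAU_INFER = 1e-6  # illustrative strict threshold

def projective_div(n1, d1, n2, d2):
    """Projective-tuple division: ⟨n1,d1⟩ / ⟨n2,d2⟩ = ⟨n1·d2, d1·n2⟩.
    No actual division occurs during training."""
    return n1 * d2, d1 * n2

def strict_decode(n, d, tau=TAU_INFER):
    """Deployment decode: |D| < tau means ⊥ (surfaced here as NaN)."""
    mask = d.abs() < tau
    safe_d = torch.where(mask, torch.ones_like(d), d)
    return torch.where(mask, torch.nan, n / safe_d), mask

# train on ⟨N,D⟩; decode once at the output boundary
n, d = projective_div(torch.tensor([1.0]), torch.tensor([1.0]),
                      torch.tensor([0.0]), torch.tensor([1.0]))
value, bottom = strict_decode(n, d)  # ⟨1,0⟩ denotes ⊥: value=[nan], bottom=[True]
```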
## References

- Bergstra, Jan A., and Alban Ponse. ["Division by zero in common meadows."](https://link.springer.com/chapter/10.1007/978-3-319-15545-6_6) In *Software, Services, and Systems: Essays Dedicated to Martin Wirsing on the Occasion of His Retirement from the Chair of Programming and Software Engineering*, 46-61. Cham: Springer International Publishing, 2015.
## Next Steps

- Training Guide - Learn how to train with projective tuples and SCM losses
- Inference & Deployment - Deploy with strict SCM semantics
- API Reference - Explore SCM operations and layers