# Getting Started with ZeroProofML
ZeroProofML is a PyTorch library that enables neural networks to learn functions with singularities using Signed Common Meadow (SCM) semantics. Division by zero yields an absorptive bottom element (⊥) with sign tracking, rather than raising an exception.
## Installation
### From PyPI (recommended)
```bash
pip install "zeroproofml[torch]"
```
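To sanity-check the install, import the top-level package. The import name `zeroproof` matches the examples below; whether it exposes `__version__` is an assumption, so this snippet falls back gracefully:

```bash
python -c "import zeroproof; print(getattr(zeroproof, '__version__', 'installed'))"
```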
### From Source
- Clone the repository:

  ```bash
  git clone https://github.com/domezsolt/zeroproofml.git
  cd zeroproofml
  ```

- Create a virtual environment and install:

  ```bash
  python -m venv .venv
  source .venv/bin/activate  # On Windows: .venv\Scripts\activate
  pip install -e ".[torch]"
  ```
## Quick Example
```python
import torch
from zeroproof.scm.ops import scm_add, scm_div
from zeroproof.scm.value import scm_real, scm_bottom

# Basic SCM operations
x = scm_real(3.0)
y = scm_real(0.0)
print(scm_div(x, y))             # SCMValue(⊥) - division by zero yields bottom
print(scm_add(scm_bottom(), x))  # SCMValue(⊥) - bottom absorbs addition
```
## Training a Rational Model
```python
import torch
from torch.utils.data import DataLoader, TensorDataset

from zeroproof.inference import InferenceConfig, SCMInferenceWrapper
from zeroproof.layers.projective_rational import ProjectiveRRModelConfig, RRProjectiveRationalModel
from zeroproof.losses.implicit import implicit_loss
from zeroproof.training import SCMTrainer, TrainingConfig

# Create data
x = torch.linspace(-1.0, 1.0, 2048).unsqueeze(-1)
y = 1.0 / (x + 0.1)  # Function with a singularity at x = -0.1
train_loader = DataLoader(TensorDataset(x, y), batch_size=256, shuffle=True)

# Build model
model = RRProjectiveRationalModel(
    ProjectiveRRModelConfig(
        input_dim=1,
        output_dim=1,
        numerator_degree=3,
        denominator_degree=2,
    )
)

# Wrap for strict inference
wrapped = SCMInferenceWrapper(model, config=InferenceConfig(tau_infer=1e-6, tau_train=1e-4))

# Define loss
def loss_fn(outputs, lifted_targets):
    P, Q = outputs
    Y_n, Y_d = lifted_targets
    return implicit_loss(P, Q, Y_n, Y_d)

# Train
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
trainer = SCMTrainer(
    model=model,
    optimizer=optimizer,
    loss_fn=loss_fn,
    train_loader=train_loader,
    config=TrainingConfig(max_epochs=20),
)
trainer.fit()

# Inference with safety masks
wrapped.eval()
decoded, bottom_mask, gap_mask = wrapped(x)
print(f"Bottom detections: {bottom_mask.sum().item()}")
```
## Two Modes of Operation
### SCM Mode (Default)
All operations propagate ⊥ according to the meadow axioms. Gradient policies (clamp, project, reject, passthrough) shape backpropagation near singularities. A short demonstration of ⊥ propagation follows the list below.
Use when:
- You need strict algebraic semantics throughout
- Deploying safety-critical models
- Working with small rational functions
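For example, ⊥ is absorptive: once a division by zero produces it, every downstream operation returns ⊥ as well. This uses only the operations from the Quick Example above:

```python
from zeroproof.scm.ops import scm_add, scm_div
from zeroproof.scm.value import scm_real

# A single division by zero poisons the whole chain.
bad = scm_div(scm_real(1.0), scm_real(0.0))  # ⊥
result = scm_add(bad, scm_real(5.0))         # still ⊥
print(result)                                # SCMValue(⊥)
```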
### Projective Mode (Optional)
Rational heads use homogeneous tuples ⟨N,D⟩ during training for smooth optimization. Ghost gradients flow through denominators approaching zero, and values are decoded to SCM only at inference boundaries; a decode sketch follows the list below.
Use when:
- Training deep rational networks
- Need stable gradients near singularities
- Optimizing complex loss landscapes
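A plain-PyTorch sketch of the decode step (illustrative only, not the library's API): a homogeneous pair ⟨N, D⟩ decodes to N/D when |D| clears a tolerance τ, and is flagged as ⊥ otherwise, mirroring the `tau_infer` threshold used above:

```python
import torch

def decode_projective(N: torch.Tensor, D: torch.Tensor, tau: float = 1e-6):
    """Decode <N, D> to a real value where |D| > tau; flag bottom elsewhere."""
    bottom_mask = D.abs() <= tau
    safe_D = torch.where(bottom_mask, torch.ones_like(D), D)  # avoid 0-division
    value = N / safe_D
    return value, bottom_mask
```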
## What's New in v0.4
Breaking changes from v0.3:
- Single bottom element: No Transreal tags (+∞, −∞, Φ). A single ⊥ represents all singular states.
- Sign tracking: Weak sign operator provides orientation information near singularities.
- Explicit gradient policies: No implicit guards. Choose your policy in `zeroproof.autodiff.policies` (a plain-PyTorch illustration of clamping follows this list).
- Projective training: New ⟨N,D⟩ tuple mode with detached renormalization.
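As a conceptual illustration of what a clamp policy does (plain PyTorch, not the library's API): near a pole, the raw gradient of 1/q blows up, and clamping caps its magnitude:

```python
import torch

# The gradient of 1/q is -1/q**2, which explodes as q -> 0.
q = torch.tensor(1e-6, requires_grad=True)
(1.0 / q).backward()
print(q.grad)                           # ~ -1e12, unusable for an update step
print(q.grad.clamp(min=-1e6, max=1e6))  # a clamp policy bounds the magnitude
```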
## Next Steps
- SCM Foundations - Understand the algebraic basis
- Training Guide - Learn projective learning and loss functions
- Inference & Deployment - Deploy with strict SCM semantics
- API Reference - Explore the full API surface