API Reference

Complete reference for the ZeroProofML v0.4 public API. For conceptual background, see SCM Foundations.

Table of Contents

  • SCM Core
  • Autodiff
  • Layers
  • Losses
  • Training
  • Inference
  • Utilities
  • Backend Integrations
  • Installation by Backend

SCM Core

zeroproof.scm.value

Scalar SCM values for Python/notebook use.

SCMValue

Immutable container carrying either a numeric payload or the absorptive bottom ⊥.

from zeroproof.scm.value import SCMValue, scm_real, scm_bottom

v = scm_real(3.14)
assert not v.is_bottom
assert v.value == 3.14

b = scm_bottom()
assert b.is_bottom

Attributes:

  • value: numeric payload (undefined if is_bottom == True)
  • is_bottom: boolean flag

Factory Functions

scm_real(x: float) -> SCMValue
scm_complex(z: complex) -> SCMValue
scm_bottom() -> SCMValue

zeroproof.scm.ops

Arithmetic and transcendental operations respecting meadow axioms.

Arithmetic

scm_add(a, b) -> SCMValue
scm_sub(a, b) -> SCMValue
scm_mul(a, b) -> SCMValue
scm_div(a, b) -> SCMValue  # b=0 → ⊥
scm_inv(a) -> SCMValue     # a=0 → ⊥
scm_neg(a) -> SCMValue
scm_pow(a, n) -> SCMValue

All accept SCMValue or Python scalars.
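
A quick sketch of these semantics in scalar form (the assertions follow the comments above; illustrative, not exhaustive):

from zeroproof.scm.ops import scm_add, scm_div
from zeroproof.scm.value import scm_bottom, scm_real

# Division by zero yields ⊥ instead of raising or returning inf.
assert scm_div(scm_real(1.0), scm_real(0.0)).is_bottom

# ⊥ is absorptive: once produced, it propagates through arithmetic.
assert scm_add(scm_bottom(), 2.0).is_bottom

# Plain Python scalars are accepted alongside SCMValue.
assert scm_div(1.0, 2.0).value == 0.5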

Transcendental

scm_log(a) -> SCMValue    # a≤0 → ⊥
scm_exp(a) -> SCMValue
scm_sqrt(a) -> SCMValue   # a<0 → ⊥
scm_sin(a) -> SCMValue
scm_cos(a) -> SCMValue
scm_tan(a) -> SCMValue

zeroproof.scm.fracterm

Symbolic rational term manipulation for Fused Rational Units.

from zeroproof.scm.fracterm import FracTerm

# Represent P(x)/Q(x)
term = FracTerm(
    numerator_coeffs=[1.0, 0.0, 1.0],   # x² + 1
    denominator_coeffs=[1.0, -1.0]       # x - 1
)

# Check singularities
singular_points = term.find_zeros_denominator()
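
For the term above, the denominator x − 1 has its single zero at x = 1, so (assuming find_zeros_denominator returns real roots as plain floats):

# The one pole of (x² + 1)/(x − 1) sits at x = 1.
assert singular_points == [1.0]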

Autodiff

zeroproof.autodiff.policies

Gradient handling strategies.

GradientPolicy Enum

from zeroproof.autodiff.policies import GradientPolicy

GradientPolicy.CLAMP        # Zero ⊥ gradients, clamp finite to [-1,1]
GradientPolicy.PROJECT      # Mask gradients on ⊥ paths
GradientPolicy.REJECT       # Always zero gradient
GradientPolicy.PASSTHROUGH  # Debug: propagate through ⊥

Context Manager

from zeroproof.autodiff.policies import GradientPolicy, gradient_policy

with gradient_policy(GradientPolicy.PROJECT):
    loss.backward()

Policy Application

apply_policy(
    gradient: float,
    is_bottom: bool,
    policy: GradientPolicy | None = None
) -> float

# Vectorized version
apply_policy_vector(
    gradients: Tensor,
    bottom_mask: Tensor,
    policy: GradientPolicy | None = None
) -> Tensor
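
How the scalar form behaves under each policy, inferred from the enum descriptions above (a sketch, not a contract):

from zeroproof.autodiff.policies import GradientPolicy, apply_policy

# PROJECT masks gradients on ⊥ paths entirely.
assert apply_policy(5.0, is_bottom=True, policy=GradientPolicy.PROJECT) == 0.0

# CLAMP zeroes ⊥ gradients and clamps finite ones into [-1, 1].
assert apply_policy(5.0, is_bottom=False, policy=GradientPolicy.CLAMP) == 1.0

# PASSTHROUGH propagates the raw gradient even through ⊥ (debug only).
assert apply_policy(5.0, is_bottom=True, policy=GradientPolicy.PASSTHROUGH) == 5.0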

zeroproof.autodiff.projective

Projective tuple operations.

ProjectiveSample

from zeroproof.autodiff.projective import ProjectiveSample

sample = ProjectiveSample(numerator=3.0, denominator=1.0)
# Represents 3.0/1.0 = 3.0

Encoding/Decoding

encode(value: SCMValue) -> ProjectiveSample
# Finite: (x, 1)
# Bottom: (1, 0)

decode(sample: ProjectiveSample, tau: float = 1e-6) -> SCMValue
# |D| < tau → ⊥
# else → N/D
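
A round-trip sketch built from the encodings above:

from zeroproof.autodiff.projective import decode, encode
from zeroproof.scm.value import scm_bottom, scm_real

# Finite values encode as (x, 1) and decode back unchanged.
s = encode(scm_real(3.0))
assert (s.numerator, s.denominator) == (3.0, 1.0)
assert decode(s).value == 3.0

# ⊥ encodes as (1, 0); a denominator below tau decodes back to ⊥.
assert decode(encode(scm_bottom())).is_bottom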

Renormalization

renormalize(
    numerator: Tensor,
    denominator: Tensor,
    gamma: float = 1e-9,
    stop_gradient: callable | None = None
) -> tuple[Tensor, Tensor]

Detached renormalization for projective training. Auto-detects backend (PyTorch/JAX) and applies appropriate stop_gradient.
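
A usage sketch under PyTorch (shapes are arbitrary; the key property is that the rescaling is detached, so N/D and its gradients are unaffected):

import torch

from zeroproof.autodiff.projective import renormalize

N = torch.randn(128, 1, requires_grad=True)
D = torch.randn(128, 1, requires_grad=True)

# Keep magnitudes bounded during projective training; the scale factor
# is computed under stop_gradient, so it contributes no gradient itself.
N_r, D_r = renormalize(N, D, gamma=1e-9)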

Layers

zeroproof.layers.scm_rational

Basic rational layer with SCM semantics.

from zeroproof.layers import SCMRationalLayer

layer = SCMRationalLayer(
    numerator_degree=3,
    denominator_degree=2,
    input_dim=10
)

output, bottom_mask = layer(x)

zeroproof.layers.projective_rational

Projective rational model for training.

from zeroproof.layers.projective_rational import (
    RRProjectiveRationalModel,
    ProjectiveRRModelConfig
)

config = ProjectiveRRModelConfig(
    input_dim=5,
    output_dim=1,
    numerator_degree=4,
    denominator_degree=3,
    hidden_dims=[64, 32]
)

model = RRProjectiveRationalModel(config)
N, D = model(x)  # Returns projective tuple

zeroproof.layers.fru

Fused Rational Units with fracterm flattening.

from zeroproof.layers.fru import FractermRationalUnit

fru = FractermRationalUnit(
    input_dim=3,
    max_depth=5,
    numerator_degree=3,
    denominator_degree=2
)
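
A forward-pass sketch, assuming FRUs follow the same (output, bottom_mask) convention as the other layers (see the PyTorch notes below):

import torch

x = torch.randn(8, 3)  # batch of 8 with input_dim=3, matching the config above
y, bottom_mask = fru(x)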

Losses

Implicit Loss

from zeroproof.losses.implicit import implicit_loss

implicit_loss(
    N: Tensor,           # numerator
    D: Tensor,           # denominator
    Y_n: Tensor,         # target numerator
    Y_d: Tensor,         # target denominator
    gamma: float = 1e-9  # stability constant
) -> Tensor

Cross-product form: (N·Y_d - D·Y_n)² / (sg(D²Y_d² + N²Y_n²) + γ)

Margin Loss

from zeroproof.losses.margin import margin_loss

margin_loss(
    D: Tensor,
    tau_train: float = 1e-4,
    mask_finite: bool = True
) -> Tensor

Penalizes denominators approaching zero: mean(max(0, τ_train - |D|)²)
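
The formula maps directly onto tensor ops; a reference sketch ignoring the mask_finite option:

import torch

def margin_loss_reference(D: torch.Tensor, tau_train: float = 1e-4) -> torch.Tensor:
    # Hinge on the denominator magnitude: only |D| inside the tau band is penalized.
    return torch.clamp(tau_train - D.abs(), min=0.0).pow(2).mean()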

Sign Consistency Loss

from zeroproof.losses.sign import sign_consistency_loss

sign_consistency_loss(
    N: Tensor,
    D: Tensor,
    Y_n: Tensor,
    Y_d: Tensor,
    tau_sing: float = 1e-3
) -> Tensor

Projective cosine for singular targets: 𝟙(|Y_d|<τ_sing) · (1 - cos(⟨N,D⟩, ⟨Y_n,Y_d⟩))

Coverage & Rejection

from zeroproof.losses.coverage import coverage_metric, rejection_loss

# Coverage
coverage_metric(bottom_mask: Tensor) -> float
# Returns 1 - mean(bottom_mask)

# Rejection loss
rejection_loss(
    current_coverage: float,
    target_coverage: float = 0.95
) -> Tensor
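
A numeric sketch of the coverage computation, matching the 1 - mean(bottom_mask) definition above:

import torch

bottom_mask = torch.tensor([False, False, True, False, True])

# 2 of 5 samples are ⊥, so coverage is 0.6.
coverage = 1.0 - bottom_mask.float().mean().item()
assert abs(coverage - 0.6) < 1e-12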

Combined Loss

from zeroproof.training.loss import SCMTrainingLoss

loss_fn = SCMTrainingLoss(
    lambda_margin=0.1,
    lambda_sign=1.0,
    lambda_rejection=0.01,
    tau_train=1e-4,
    tau_sing=1e-3,
    gamma=1e-9
)

total_loss = loss_fn(outputs=(N, D), targets=(Y_n, Y_d))

Training

Target Lifting

from zeroproof.training.targets import lift_targets

lift_targets(
    targets: Tensor,
    tau_sing: float = 1e-3
) -> tuple[Tensor, Tensor]
# Finite: (y, 1)
# ±inf: (±1, 0)
# NaN: (1, 0)
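
The mapping applied elementwise, as a sketch (tensor in/out is assumed; the values follow the comments above):

import math

import torch

from zeroproof.training.targets import lift_targets

targets = torch.tensor([2.0, math.inf, -math.inf, math.nan])
Y_n, Y_d = lift_targets(targets)

# Finite y → (y, 1); ±inf → (±1, 0); NaN → (1, 0).
assert Y_n[0].item() == 2.0 and Y_d[0].item() == 1.0
assert Y_n[1].item() == 1.0 and Y_d[1].item() == 0.0
assert Y_n[2].item() == -1.0 and Y_d[2].item() == 0.0
assert Y_n[3].item() == 1.0 and Y_d[3].item() == 0.0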

SCMTrainer

from zeroproof.autodiff.policies import GradientPolicy
from zeroproof.training import SCMTrainer, TrainingConfig

config = TrainingConfig(
    max_epochs=100,
    gradient_policy=GradientPolicy.PROJECT,
    coverage_threshold=0.90,
    coverage_patience=10,
    use_amp=True,
    tau_train_min=1e-4,
    tau_train_max=1e-4
)

trainer = SCMTrainer(
    model=model,
    optimizer=optimizer,
    loss_fn=loss_fn,
    train_loader=train_loader,
    val_loader=val_loader,
    config=config
)

history = trainer.fit()

Adaptive Sampling

from torch.utils.data import DataLoader
from zeroproof.training.sampler import AdaptiveSampler

sampler = AdaptiveSampler(
    dataset,
    initial_weights=None,
    update_frequency=10
)

loader = DataLoader(dataset, batch_sampler=sampler)

Inference

Strict Inference

from zeroproof.inference import strict_inference, InferenceConfig

config = InferenceConfig(
    tau_infer=1e-6,
    tau_train=1e-4
)

# N, D are the model's projective outputs
decoded, bottom_mask, gap_mask = strict_inference(N, D, config=config)

Returns:

  • decoded: N/D with ⊥ positions undefined
  • bottom_mask: True where |D| < τ_infer
  • gap_mask: True where τ_infer ≤ |D| < τ_train
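
One way to consume the three outputs (a sketch; the NaN placement is a downstream choice, not library behavior):

import torch

# NaN out the ⊥ positions and count samples falling in the tau gap band.
predictions = torch.where(bottom_mask, torch.full_like(decoded, torch.nan), decoded)
n_gap = int(gap_mask.sum())  # tau_infer <= |D| < tau_train: treat with caution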

SCMInferenceWrapper

from zeroproof.inference import SCMInferenceWrapper

wrapped = SCMInferenceWrapper(
    model,
    config=InferenceConfig(tau_infer=1e-6)
)

decoded, bottom_mask, gap_mask = wrapped(x)

Utilities

IEEE Bridge

from zeroproof.scm.value import scm_bottom
from zeroproof.utils.ieee_bridge import from_ieee, to_ieee

# IEEE → SCM
scm_val = from_ieee(float('nan'))  # → ⊥
scm_val = from_ieee(float('inf'))  # → ⊥

# SCM → IEEE
ieee_float = to_ieee(scm_bottom())  # → nan

# Batch conversion
from_ieee_batch(values: list[float]) -> list[SCMValue]
to_ieee_batch(values: list[SCMValue]) -> list[float]
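
A round-trip sketch over a mixed batch, following the scalar rules above (note the mapping is lossy: inf comes back as NaN):

import math

from zeroproof.utils.ieee_bridge import from_ieee_batch, to_ieee_batch

scm_vals = from_ieee_batch([1.5, math.inf, math.nan])
assert [v.is_bottom for v in scm_vals] == [False, True, True]

back = to_ieee_batch(scm_vals)
assert back[0] == 1.5 and math.isnan(back[1]) and math.isnan(back[2])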

Float64 Enforcement

from zeroproof.utils.dtype import ensure_float64

tensor = ensure_float64(tensor)
# Converts to float64 if needed, warns on precision loss

Backend Integrations

NumPy

Vectorized SCM operations with mask propagation:

import numpy as np
from zeroproof.scm.ops import scm_div_numpy

x = np.array([1.0, 2.0, 3.0])
x_mask = np.array([False, False, False])
y = np.array([1.0, 0.0, 1.0])
y_mask = np.array([False, False, False])

result, result_mask = scm_div_numpy(x, y, x_mask, y_mask)
# result_mask[1] == True (division by zero)

Available:

  • scm_add_numpy, scm_sub_numpy, scm_mul_numpy, scm_div_numpy
  • scm_inv_numpy, scm_neg_numpy
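
Masks compose across chained calls; a sketch assuming every scm_*_numpy shares the (values, mask) convention shown for division:

import numpy as np

from zeroproof.scm.ops import scm_add_numpy, scm_div_numpy

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 0.0, 1.0])
no_mask = np.zeros(3, dtype=bool)

# x/y then +x: the ⊥ from the zero division survives the subsequent add.
q, q_mask = scm_div_numpy(x, y, no_mask, no_mask)
z, z_mask = scm_add_numpy(q, x, q_mask, no_mask)
assert z_mask.tolist() == [False, True, False]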

PyTorch

All layers return (output, bottom_mask) tuples:

import torch
from zeroproof.layers import SCMRationalLayer

layer = SCMRationalLayer(numerator_degree=3, denominator_degree=2, input_dim=10)
output, bottom_mask = layer(torch.randn(128, 10))

# Coverage
coverage = (~bottom_mask).float().mean()

# Safe decode
decoded = torch.where(bottom_mask, torch.nan, output)

Vectorized ops:

  • scm_*_torch variants in zeroproof.scm.ops

JAX

Functional SCM operations for JAX:

import jax.numpy as jnp
from zeroproof.scm.ops import scm_mul_jax

x = jnp.array([1.0, 2.0, 3.0])
x_mask = jnp.array([False, False, False])

result, result_mask = scm_mul_jax(x, x, x_mask, x_mask)

For projective training under JAX, use zeroproof.autodiff.projective as a reference and adapt it to JIT-compiled code; a sketch follows.
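
A minimal JIT-compatible adaptation in the spirit of renormalize, using jax.lax.stop_gradient (an illustration, not the library function):

import jax
import jax.numpy as jnp

@jax.jit
def renormalize_jax(N, D, gamma=1e-9):
    # Detached scale keeps magnitudes bounded while leaving N/D (and its
    # gradients) unchanged: no gradient flows through the scale factor.
    scale = jax.lax.stop_gradient(1.0 / (jnp.maximum(jnp.abs(N), jnp.abs(D)) + gamma))
    return N * scale, D * scale

N_r, D_r = renormalize_jax(jnp.array([3.0, 1e12]), jnp.array([1.0, 2e12]))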

Installation by Backend

# PyTorch
pip install zeroproofml[torch]

# JAX
pip install zeroproofml[jax]

# Development (all backends)
pip install zeroproofml[dev,torch,jax]

Next Steps