ZeroProofML 0.5.1: Strict Inference You Can Audit

In ZeroProofML, strict inference means that a trained rational or projective model does not merely return a floating-point value at deployment time. It returns a decoded payload together with explicit masks such as bottom_mask and gap_mask. Those masks say when the model has crossed into a mathematically undefined or unsafe region, instead of hiding that state behind a large finite value, a clipped output, or a later nan_to_num cleanup step. v0.5.1 carries that contract through export, validation, and handoff.
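To make that contract concrete, here is a minimal sketch of the kind of payload strict inference returns. The field names decoded, bottom_mask, and gap_mask come from the release; the StrictOutput container and the toy rational evaluator are illustrative assumptions, not ZeroProofML's actual API.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class StrictOutput:
    """Illustrative container for a strict inference result.

    decoded:     finite values where the model is defined
    bottom_mask: True where the entry reached the common-meadow bottom
    gap_mask:    True where the entry sits inside a configured guard band
    """
    decoded: np.ndarray
    bottom_mask: np.ndarray
    gap_mask: np.ndarray

def strict_rational(x, p, q, gap=1e-6):
    """Evaluate p(x)/q(x) without hiding undefined regions.

    Instead of clipping or a later nan_to_num cleanup, non-finite results
    raise bottom_mask, and near-zero denominators raise gap_mask.
    """
    qx = q(x)
    with np.errstate(divide="ignore", invalid="ignore"):
        raw = p(x) / qx
    bottom = ~np.isfinite(raw)
    gapped = ~bottom & (np.abs(qx) < gap)
    return StrictOutput(np.where(bottom, 0.0, raw), bottom, gapped)

out = strict_rational(np.array([0.5, 1.0, 1.0 + 1e-9, np.inf]),
                      p=lambda x: x + 1.0, q=lambda x: x - 1.0)
print(out.bottom_mask)  # [False  True False  True]
print(out.gap_mask)     # [False False  True False]
```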
The 0.5 line now has a DOI-backed artifact boundary for the paper and reproduction path, with checksum validation and recorded run metadata. That may sound like release plumbing, but it is central to the project. Thresholds, masks, generated datasets, export metadata, and fallback policies all affect how a result should be interpreted. v0.5.1 makes those boundaries more visible by standardizing what gets recorded around runs: manifests, run provenance, environment details, validation reports, checksums, and regenerated summaries.
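As a rough picture of what checksum validation at that boundary involves, the sketch below streams artifacts through SHA-256 and compares them against a recorded manifest. The manifest schema here is invented for illustration; the real ZeroProofML manifest layout may differ.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large artifacts never load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def validate_manifest(manifest_path: Path) -> list[str]:
    """Compare recorded checksums against files on disk; return mismatches.

    Assumes a JSON manifest of the form {"files": {"relative/path": "<sha256>"}};
    the actual manifest schema may differ.
    """
    manifest = json.loads(manifest_path.read_text())
    root = manifest_path.parent
    return [rel for rel, expected in manifest["files"].items()
            if sha256_of(root / rel) != expected]

# Usage: an empty list means every recorded artifact still matches its checksum.
# mismatches = validate_manifest(Path("runs/ik_seed0/manifest.json"))
```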
The benchmark runner has been promoted into a more consistent artifact system. DOSE, RF, and IK runs now share a clearer directory structure with per-seed outputs, aggregated summaries, paired statistics, claim audits, run provenance, metric logs, and optional HTML reports. That makes the benchmark workflow much less fragile. You can rerun, validate, compare, and archive a result without reverse-engineering whatever a domain script happened to write. It also makes the figures more useful: they are tied back to recorded run artifacts and focused manifests.
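A consumer of that structure can, for example, aggregate a metric across per-seed outputs without knowing anything domain-specific. The seed_*/summary.json layout below is a hypothetical stand-in for the actual directory structure.

```python
import json
from pathlib import Path
from statistics import mean, stdev

def aggregate_seeds(run_dir: Path, metric: str) -> dict:
    """Collect one metric across per-seed outputs of a benchmark run.

    Assumes a layout like run_dir/seed_*/summary.json, each holding a flat
    metrics dict; the real ZeroProofML layout and file names may differ.
    """
    values = [json.loads(s.read_text())[metric]
              for s in sorted(run_dir.glob("seed_*/summary.json"))]
    return {"metric": metric,
            "n_seeds": len(values),
            "mean": mean(values),
            "std": stdev(values) if len(values) > 1 else 0.0}

# print(aggregate_seeds(Path("results/ik_rr"), "val_mse"))
```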
Version 0.5.1 also tightens ONNX exports. A downstream runtime needs to know which outputs are decoded values, which outputs are masks, what those masks mean, and how batch axes should be interpreted. The export bundle now carries that metadata explicitly. The bundle describes the strict inference tuple, including decoded, bottom_mask, and gap_mask, alongside input/output signatures, batch-axis semantics, package versions, and mask semantics. This is the most important practical change in the release: the model output carries an explicit strict-inference contract through export and runtime validation, rather than relying on convention.
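A downstream consumer can read that contract back out of the exported bundle before running anything. The onnxruntime calls below are standard API; the specific metadata keys are illustrative stand-ins for whatever the ZeroProofML bundle actually records.

```python
import onnxruntime as ort

def inspect_strict_bundle(model_path: str) -> None:
    """Print the exported strict-inference contract a runtime would consume.

    get_modelmeta() and get_outputs() are real onnxruntime calls; the key
    names that appear in custom_metadata_map (mask semantics, batch-axis
    notes, package versions) depend on what the exporter wrote.
    """
    sess = ort.InferenceSession(model_path)
    meta = sess.get_modelmeta().custom_metadata_map
    for key in sorted(meta):           # e.g. mask semantics, batch-axis notes
        print(f"{key} = {meta[key]}")
    for out in sess.get_outputs():     # expected: decoded, bottom_mask, gap_mask
        print(f"output {out.name}: shape={out.shape}, type={out.type}")
```

A consumer can then refuse to run if the metadata it relies on, such as which output is the mask and what True means, is missing, instead of guessing from output order.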
The stable strict inference output still includes a merged bottom_mask. In common-meadow terms, there is only one absorptive bottom value, ⊥. The stable contract is deliberately simple: this entry decoded to a finite value, or it reached bottom.
The new release also continues the experimental provenance work. Provenance is not part of the stable default inference contract, and it does not introduce new bottom values. It is an opt-in diagnostic layer that partitions entries already marked by bottom_mask according to diagnostic origin, for example entries attributed to non-finite inputs or near-zero denominators versus entries attributed to a semantic rejection path. That distinction can matter for fallback systems: a deployment policy may want to route one diagnostic category to an analytic fallback while rejecting another. The robotics reference path demonstrates that kind of policy without changing the underlying common-meadow semantics: the algebraic bottom is still the same ⊥, while provenance gives operators more context about why the stable mask was raised.
The accompanying routing map is a deterministic policy probe over the generated RR IK dataset. It deliberately includes accepted, analytic-route, and semantic-reject points so the distinction is visible; it is not a new stable inference-output schema.
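As a minimal sketch of the kind of policy such a probe exercises, the routing layer below uses hypothetical provenance tags. Nothing here changes the single algebraic bottom; it only routes entries that bottom_mask has already flagged.

```python
from enum import Enum
import numpy as np

class Provenance(Enum):
    """Illustrative diagnostic origins for entries already flagged by bottom_mask."""
    NONFINITE_INPUT = "nonfinite_input"
    NEAR_ZERO_DENOM = "near_zero_denominator"
    SEMANTIC_REJECT = "semantic_reject"

def route(decoded, bottom_mask, provenance, analytic_fallback):
    """Hypothetical deployment policy layered on top of the stable contract.

    The algebraic bottom is still one value; provenance only partitions it
    for routing. Here, near-zero-denominator bottoms go to an analytic
    fallback, while everything else flagged stays rejected.
    """
    result = decoded.copy()
    rejected = np.zeros_like(bottom_mask)
    for i in np.flatnonzero(bottom_mask):
        if provenance[i] is Provenance.NEAR_ZERO_DENOM:
            result[i] = analytic_fallback(i)
        else:
            rejected[i] = True
    return result, rejected

decoded = np.array([1.0, 0.0, 0.0])
bottom = np.array([False, True, True])
prov = [None, Provenance.NEAR_ZERO_DENOM, Provenance.SEMANTIC_REJECT]
vals, rejected = route(decoded, bottom, prov, analytic_fallback=lambda i: 42.0)
print(vals, rejected)  # [ 1. 42.  0.] [False False  True]
```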
Robotics models are rarely consumed as isolated predictions. They run in loops, under latency budgets, near joint limits, and sometimes near singularities. v0.5.1 adds more structure around that evaluation path. The trajectory evaluator records tracking error, fallback behavior, latency, and joint-limit safety in a reusable summary format. The point is not to turn a small generated trajectory set into a headline benchmark. The point is to make future deployment checks comparable. If a model, threshold, or fallback policy changes, there is now a clearer place to look for the effect.
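Here is a sketch of the shape such a reusable summary might take, with illustrative field names rather than the evaluator's real schema. Reducing per-step logs to a small fixed record is what makes two runs comparable.

```python
from dataclasses import dataclass, asdict
import json
import numpy as np

@dataclass
class TrajectorySummary:
    """Illustrative summary record; the evaluator's actual schema may differ."""
    mean_tracking_error: float
    max_tracking_error: float
    fallback_rate: float        # fraction of steps routed to a fallback
    p95_latency_ms: float
    joint_limit_violations: int

def summarize(err, used_fallback, latency_ms, limit_ok) -> TrajectorySummary:
    """Reduce per-step trajectory logs to the comparable quantities above."""
    return TrajectorySummary(
        mean_tracking_error=float(np.mean(err)),
        max_tracking_error=float(np.max(err)),
        fallback_rate=float(np.mean(used_fallback)),
        p95_latency_ms=float(np.percentile(latency_ms, 95)),
        joint_limit_violations=int(np.sum(~limit_ok)),
    )

summary = summarize(err=np.array([0.01, 0.02, 0.05]),
                    used_fallback=np.array([False, True, False]),
                    latency_ms=np.array([1.1, 1.3, 2.0]),
                    limit_ok=np.array([True, True, True]))
print(json.dumps(asdict(summary), indent=2))  # diffable across runs
```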
Version 0.5.1 also adds an experimental downstream composability report. It follows strict outputs through a small multi-stage pipeline: ingest, serialization, deserialization, aggregation, and decision. It is easy for an upstream model to know that something should be rejected, only for a later API boundary or aggregation step to erase that state. The strict payload can keep reject state and optional provenance attached long enough for the downstream decision to respect it.
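The sketch below shows the failure mode and the fix in miniature: a payload that carries its bottom_mask across a JSON boundary, and an aggregation step that respects rejects instead of averaging them in. The encoding is illustrative, not the report's actual format.

```python
import json
import numpy as np

def serialize(decoded, bottom_mask, provenance=None) -> str:
    """Carry the reject state across an API boundary instead of dropping it.

    A plain JSON encoding is used for illustration; the payload format in
    the composability report may differ.
    """
    return json.dumps({"decoded": decoded.tolist(),
                       "bottom_mask": bottom_mask.tolist(),
                       "provenance": provenance})

def aggregate(payload_json: str) -> dict:
    """Downstream mean that respects upstream rejects rather than averaging them in."""
    payload = json.loads(payload_json)
    decoded = np.asarray(payload["decoded"])
    bottom = np.asarray(payload["bottom_mask"], dtype=bool)
    if bottom.all():
        return {"mean": None, "rejected": int(bottom.sum())}
    return {"mean": float(decoded[~bottom].mean()), "rejected": int(bottom.sum())}

msg = serialize(np.array([1.0, 0.0, 3.0]), np.array([False, True, False]))
print(aggregate(msg))  # {'mean': 2.0, 'rejected': 1}
```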
The new release includes a compact FRU strict-check path as well. For supported FRU expressions, it can flatten a composed rational expression into a checked P/Q pair while retaining denominator provenance: information about which denominator factor explains why an entry reached the common-meadow bottom ⊥. The strict behavior remains tied to the structure of the rational expression, and the export and checking paths can consume that provenance.
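A toy version of that flattening, assuming a coefficient-list representation of polynomials: composing two rational terms produces one P/Q pair whose denominator remembers which factor came from where.

```python
from dataclasses import dataclass
from numpy.polynomial import polynomial as P

@dataclass
class Rational:
    """Illustrative flattened P/Q pair with denominator provenance.

    num/den are coefficient arrays (lowest degree first); den_sources names
    which sub-expressions contributed denominator factors, a rough analogue
    of the denominator-source information the FRU path retains.
    """
    num: list
    den: list
    den_sources: list

def add(a: Rational, b: Rational) -> Rational:
    """Flatten a + b into one checked P/Q: (Pa*Qb + Pb*Qa) / (Qa*Qb)."""
    num = P.polyadd(P.polymul(a.num, b.den), P.polymul(b.num, a.den))
    den = P.polymul(a.den, b.den)
    return Rational(list(num), list(den), a.den_sources + b.den_sources)

# 1/(x-1) + 1/(x+2): the flattened denominator keeps both factor origins,
# so a bottom at x = 1 can be attributed to the first term's denominator.
r = add(Rational([1], [-1, 1], ["x-1"]), Rational([1], [2, 1], ["x+2"]))
print(r.den, r.den_sources)  # [-2.0, 1.0, 1.0] ['x-1', 'x+2']
```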
The ROS 2 work in v0.5.1 is still a beta integration surface, but it now has a lightweight non-ROS smoke path. The repository can check message shapes, node behavior, package layout, and replay helpers without requiring every development machine to be a full ROS workspace.
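A smoke check in that spirit needs nothing beyond NumPy. The invariants below, one mask entry per decoded row and boolean mask dtypes, are illustrative of what the non-ROS path can verify; the helper names are not the package's API.

```python
import numpy as np

def test_strict_message_roundtrip():
    """Smoke-check strict message shapes without a ROS workspace.

    The field layout mirrors the strict tuple (decoded, bottom_mask,
    gap_mask); any ROS 2 bridge for these payloads must preserve it.
    """
    decoded = np.zeros((8, 2), dtype=np.float32)
    bottom = np.zeros(8, dtype=bool)
    gap = np.zeros(8, dtype=bool)

    # One mask entry per row of decoded output, boolean dtype for both masks.
    assert decoded.shape[0] == bottom.shape[0] == gap.shape[0]
    assert bottom.dtype == np.bool_ and gap.dtype == np.bool_

test_strict_message_roundtrip()
```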
Strict inference should not disappear when the model is exported, routed, serialized, or integrated. It should remain visible all the way through the system. ZeroProofML v0.5.1 is a step toward making this operational. The release adds stronger reproducibility boundaries, more inspectable benchmark artifacts, explicit ONNX bundle contracts, experimental provenance-aware routing, experimental downstream payload checks, robotics trajectory summaries, FRU validation hooks, and a cleaner ROS 2 smoke path.