Looking for Problems: Where Should We Test ZeroProofML Next?


15 Jul 2025

We built ZeroProofML to handle a specific mathematical structure: functions that can be expressed as rational expressions P(x)/Q(x), where Q(x) going to zero creates singularities.
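
To make that concrete, here is a minimal sketch of the structure in question: a layer whose output is a learned ratio of polynomials, so a vanishing denominator produces a genuine pole. This is our own illustration in PyTorch, not the ZeroProofML implementation.

```python
import torch
import torch.nn as nn

class RationalHead(nn.Module):
    """Toy rational layer y = P(x) / Q(x). Illustrative only; the actual
    ZeroProofML heads add SCM semantics at the points where Q(x) = 0."""

    def __init__(self, deg_p: int = 3, deg_q: int = 2):
        super().__init__()
        self.p = nn.Parameter(0.1 * torch.randn(deg_p + 1))  # numerator coeffs
        self.q = nn.Parameter(0.1 * torch.randn(deg_q + 1))  # denominator coeffs

    @staticmethod
    def _poly(coeffs: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # Horner evaluation of sum_k coeffs[k] * x**k
        y = torch.zeros_like(x)
        for c in reversed(coeffs):
            y = y * x + c
        return y

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Wherever Q(x) crosses zero this produces a real pole, which a
        # plain MLP with smooth activations can only imitate as a ramp.
        return self._poly(self.p, x) / self._poly(self.q, x)
```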

In our recent experiments, we validated this architecture against the Physics Trinity. The results transformed our understanding of where rational inductive biases shine.

  1. Pharma (Asymptotic): We recovered atomic repulsion walls (1/r^{12}) with >3,000× lower error than standard MLPs.
  2. Electronics (Spectral): We extrapolated resonance peaks (1/Q) with 70% yield where baselines failed completely.
  3. Robotics (Geometric): We showed that rational biases provide 31.8× lower variance in control, even when the singularity is regularized.

But these three domains are just the proof-of-concept. The underlying operator—Signed Common Meadows (SCM)—is a general-purpose tool for embedding rational topology into neural networks. We are genuinely uncertain where else this matters, and we need your help figuring that out.

What makes a good candidate?

We used to ask for "functions with poles." We can now be more specific. You are likely a good candidate if your physics is fighting your neural network in one of these three ways (a numerical sketch of all three shapes follows the list).

  1. The "Soft Wall" (Asymptotic)
  • The Symptom: Your physics has a barrier where forces should go to infinity (like atomic collisions). Your neural network smooths this into a finite ramp. As a result, your simulation allows objects to pass through each other ("Ghost Molecules") under optimization pressure.
  • The Fix: Our Improper SCM head (deg P > deg Q) guarantees super-linear growth (O(x^k)), forcing the network to respect the hard boundary.
  1. The "Clipped Peak" (Spectral)
  • The Symptom: Your data has sharp resonance lines or spectral peaks. Your network smears them out, underestimating the amplitude and losing phase coherence. This happens because standard layers treat the Real and Imaginary parts independently.
  • The Fix: Our Shared-Complex SCM head guarantees phase coherence, physically moving the poles to the stability boundary to capture the sharpest peaks.
  1. The "Panic Spike" (Geometric)
  • The Symptom: Your control system needs to shut down smoothly as it approaches a limit (like a robot arm locking). Instead, your learned controller oscillates or outputs dangerous velocity spikes near the boundary. This is the network's piecewise-linear bias failing to capture the rational damping curve.
  • The Fix: Our Bounded SCM head naturally models the bell-shaped damping topology, providing a deterministic "safety stop" without heuristic clipping.
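
The three heads differ only in the degree and positivity constraints placed on P and Q. The sketch below evaluates the three target shapes with hand-picked coefficients in plain NumPy; these illustrate the shapes each head is constrained to produce, not the trained SCM heads themselves.

```python
import numpy as np

x = np.linspace(0.2, 5.0, 500)

# 1. Improper shape (deg P > deg Q): super-linear growth, i.e. a hard wall.
hard_wall = (x**4 + 1.0) / (x**2 + 0.1)      # behaves like x**2 for large x

# 2. Shared-complex shape: a single complex pole keeps amplitude and phase
#    locked together near resonance (phase coherence).
w0, gamma = 2.0, 0.05
H = 1.0 / (w0**2 - x**2 + 1j * gamma * x)    # classic resonance response
amplitude, phase = np.abs(H), np.angle(H)

# 3. Bounded shape (deg P <= deg Q, Q > 0 everywhere): a bell-shaped
#    damping factor that falls off smoothly instead of spiking.
damping = 1.0 / (1.0 + (x - 2.5) ** 2 / 0.2)
```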

Plausible (but untested) frontiers

We have identified several areas that look mathematically analogous to our success stories but remain completely untested.

  • Power Systems (Voltage Collapse): The equations governing power flow are rational. When a grid approaches voltage collapse, the system Jacobian becomes singular (V → 0). Predicting the distance to collapse is a high-stakes rational extrapolation problem.
  • Quantitative Finance (Correlation Singularities): During market stress, correlation matrices lose rank and the determinant goes to zero. Inverting these matrices to calculate risk (1/det) causes massive instability; a toy demonstration follows this list. Standard models smooth over these crash events, potentially underestimating tail risk.
  • Fluid Dynamics (Shock Waves): Shock waves represent discontinuities in pressure and density. While not simple poles, they are sharp transitions that smooth approximators struggle to capture. Rational approximation theory suggests rational functions approximate step functions far better than polynomials do.
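
Here is that toy demonstration, using a two-asset correlation matrix (our own example, not drawn from our experiments): as the correlation rho approaches 1, the determinant 1 - rho^2 goes to zero and the 1/det factor in the matrix inverse explodes.

```python
import numpy as np

# As two assets become perfectly correlated, det(Sigma) = 1 - rho**2 -> 0,
# so anything built on inv(Sigma), i.e. on 1/det, blows up.
for rho in (0.90, 0.99, 0.999, 0.9999):
    sigma = np.array([[1.0, rho], [rho, 1.0]])
    det = np.linalg.det(sigma)       # rational in rho, vanishing at rho = 1
    cond = np.linalg.cond(sigma)     # conditioning worsens in lockstep
    print(f"rho={rho}: det={det:.1e}  1/det={1 / det:.1e}  cond={cond:.1e}")
```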

Where this DOES NOT help

Honesty is critical. We have found that ZeroProofML adds overhead (1.1× compute) without benefit in:

  • Classification: Softmax is already a ratio (rational in the exponentials) whose denominator is a strictly positive sum, so it can never hit a singularity. It works fine.
  • Computer Vision: Pixels on a grid rarely exhibit asymptotic behavior.
  • Smooth Manifolds: If your function doesn't blow up, go to zero, or become undefined, SCM is overkill.

The Ask

If you encounter division-by-zero errors in training, NaN propagation that breaks your models, or regions where your learned functions plateau incorrectly near mathematical limits, let us know.
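
If you are unsure whether this describes your pipeline, the failure mode takes four lines to reproduce. A minimal PyTorch illustration (ours, not from the repository):

```python
import torch

# A pole in the forward pass becomes NaN gradients, even when the
# offending output is "masked out" by multiplying with zero.
x = torch.tensor([0.0], requires_grad=True)
y = 1.0 / x                   # forward pass: inf at the singularity
loss = (y * 0.0).sum()        # inf * 0 = nan, despite the mask
loss.backward()               # nan propagates into every gradient
print(y, loss, x.grad)        # tensor([inf]) tensor(nan) tensor([nan])
```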

We are particularly interested in collaborations where you have domain expertise and real data, and we can contribute the mathematical framework. Be warned. We might tell you this isn't the right tool for your problem. That is valuable information too.

The repository is open source at github.com/domezsolt/ZeroProofML with examples for all three domains. If you are working with singular functions and willing to experiment, we would like to find out together. Email dome@zeroproofml.com with problem descriptions. Negative results get published too. Science progresses by mapping the boundaries of failure as much as success.

Modified on 06 Jan 2026, after the release of v0.4.0.
