Looking for Problems: Where Should We Test ZeroProofML Next?


15 Jul 2025

We built ZeroProofML to handle a specific mathematical structure: functions that can be expressed or approximated as rational expressions P(x)/Q(x), where Q(x) going to zero creates singularities. Our validation shows it works for robotics inverse kinematics—30-47% error reduction in near-singularity regions, 12× training speedup versus ensembles, deterministic behavior. But inverse kinematics is just one application of this mathematical pattern. We're genuinely uncertain where else this matters, and we need your help figuring that out.

Here's what makes a good candidate problem: you're learning a function with poles (outputs that should genuinely diverge to ±∞ at specific inputs), your current neural network approach creates smooth "dead zones" where it should spike, and accurate behavior near those singularities actually impacts your application performance. Bad candidates: removable singularities that disappear with better problem formulation, smooth functions that look pole-like but aren't, or cases where the singularity is so rare you can just ignore those samples. The framework works best when the underlying mathematics has explicit rational structure—not all division-by-zero problems fit this pattern.
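One quick way to apply this good/bad-candidate test in practice is a crude numeric check: a true pole keeps growing without bound as you approach the suspect input, while a removable singularity levels off. The sketch below is illustrative only (the function name and thresholds are our own, not part of the ZeroProofML API):

```python
import numpy as np

def looks_like_pole(f, x0, eps_grid=(1e-2, 1e-3, 1e-4)):
    """Heuristic check: does |f(x)| keep growing as x -> x0?
    A true pole grows by a large factor at each refinement;
    a removable singularity stays bounded."""
    vals = [abs(f(x0 + e)) for e in eps_grid]
    return all(b > 5 * a for a, b in zip(vals, vals[1:]))

# 1/x has a true pole at 0 -> good candidate for this framework
print(looks_like_pole(lambda x: 1.0 / x, 0.0))        # True
# sin(x)/x has a removable singularity at 0 -> bad candidate
print(looks_like_pole(lambda x: np.sin(x) / x, 0.0))  # False
```

If this kind of check says your singularity is removable, a reformulation of the problem will likely serve you better than a pole-aware layer.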

We've identified several areas that seem plausible but remain completely untested. In computational physics, surrogate models for PDEs with singular source terms might benefit—think learning Green's functions with known pole structure, or physics-informed neural networks representing solutions with asymptotic singularities. In quantitative finance, correlation matrices lose rank during market stress, covariance estimation fails with limited samples, and derivative Greeks diverge near expiry—all potentially rational in structure. Power systems have natural rational functions in power flow equations and face convergence issues near voltage collapse. But "seems plausible" isn't validation. We need real problems, real data, and honest assessment of whether this helps.

Here's where we've found it doesn't help, which is equally important: smooth optimization problems with no actual singularities, classification tasks, sequence modeling, computer vision, or anywhere the mathematical structure isn't fundamentally rational. We also struggled with problems where singularities form dense sets rather than isolated points, and where the pole locations themselves are what you're trying to learn rather than being implicit in the function structure. If your current approach works fine, there's no reason to add complexity.

What we're asking from you: if you encounter division-by-zero errors in training, NaN propagation that breaks your models, or regions where your learned functions plateau incorrectly near mathematical singularities, let us know. Share the problem structure—even if it doesn't work out, understanding why helps refine where this approach applies. The repository is open source at github.com/domezsolt/ZeroProofML with examples for inverse kinematics. We're particularly interested in collaborations where you have domain expertise and real data, and we can contribute the mathematical framework. Be warned: we might tell you this isn't the right tool for your problem. That's valuable information too.

The honest reality is that most PhD research projects find unexpected applications through years of exploration, not through the original intended use case. Inverse kinematics might be our "penicillin killing bacteria" moment—the obvious application that proves the concept works. But where's the broader antibiotic revolution? We don't know yet. If you're working with singular functions and willing to experiment, we'd like to find out together. Email dome@zeroproofml.com with problem descriptions, and we'll be candid about whether we think it's worth trying. Negative results get published too—science progresses by mapping what doesn't work as much as what does.


