Ghost Molecules: Why Neural Networks Fail at Atomic Repulsion

Imagine you are training a neural network to simulate a drug molecule. You feed it gigabytes of quantum chemistry data. It learns the bond lengths. It learns the angles. It achieves state-of-the-art accuracy on the test set. Then you run a simulation. For a few nanoseconds, everything looks fine. Then, suddenly, two carbon atoms drift too close together. Instead of bouncing off each other, they slide through each other. Your simulation crashes. You have created a "Ghost Molecule."
This happened because your neural network failed to learn the most fundamental law of atomic physics: the Pauli Exclusion Principle. Two things cannot occupy the same space at the same time. In our recent paper, we showed why standard Deep Learning architectures create these Ghost Molecules, and how a specific architectural change fixes it, reducing physical violations by a factor of more than 3,000.
The "Soft Wall" Problem
In physics, atomic repulsion is modeled by the Lennard-Jones potential. As the distance r between atoms approaches zero, the energy shoots to infinity (E(r) → ∞ as r → 0). This is a Hard Wall: an infinite energy barrier that no force in the universe can overcome.
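For reference, the standard 12-6 Lennard-Jones potential takes only a few lines to evaluate (with the well depth ε and atom size σ set to 1 for illustration):

```python
def lennard_jones(r: float, epsilon: float = 1.0, sigma: float = 1.0) -> float:
    """Standard 12-6 Lennard-Jones potential: E(r) -> infinity as r -> 0."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

print(lennard_jones(2 ** (1 / 6)))  # the energy minimum: exactly -epsilon
print(lennard_jones(0.3))           # millions of epsilon: the Hard Wall
```

At 30% of the atom diameter the energy is already millions of times the bond well depth, and it keeps diverging; this is the wall a learned potential must reproduce.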
Neural networks (specifically MLPs with ReLU activations) struggle to learn Hard Walls. Even if you feed them physics-aware features such as 1/r, the network combines them (piecewise) linearly, so the learned wall rises far too slowly compared with the true 1/r^12 barrier.
A Soft Wall creates a finite repulsive force. If your simulation optimizer pushes hard enough (e.g. trying to satisfy a bond constraint elsewhere), it can overpower the Soft Wall. The atoms overlap. The physics breaks.
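The finiteness of the Soft Wall can be made concrete: a ReLU MLP taking the raw distance as input is globally Lipschitz, so the force it can exert (the slope of its energy output) is bounded by a constant fixed by the weights. A sketch with a random, untrained two-layer network (illustrative only, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny untrained ReLU MLP mapping distance r to a scalar "energy".
W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)
W2, b2 = rng.normal(size=(1, 16)), rng.normal(size=1)

def mlp_energy(r: float) -> float:
    h = np.maximum(W1 @ np.array([r]) + b1, 0.0)  # hidden ReLU layer
    return (W2 @ h + b2).item()

# Any ReLU network is globally Lipschitz: |dE/dr| can never exceed
# the product of the layers' spectral norms, no matter the input.
max_force = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)
print(f"This network can never push back harder than {max_force:.1f}")
print(f"Energy at r = 0.001: {mlp_energy(0.001):.2f}  (finite: a Soft Wall)")
```

Any spring, constraint, or integrator error that pushes harder than that bound wins, and the atoms overlap.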
The Solution: Improper Rationality
We solved this by changing the Inductive Bias of the output layer. Instead of a linear neuron, we used a Signed Common Meadows (SCM) rational layer.
Crucially, we configured it to be Improper. In rational function theory, an improper rational function is one where the degree of the numerator exceeds the degree of the denominator (deg P > deg Q).
By fixing the numerator degree strictly above the denominator degree in our architecture, we mathematically guaranteed that the repulsive force would grow quadratically (or faster) as the atoms approached. The network didn't have to "learn" to be stiff. It was architecturally incapable of being soft.
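The paper's SCM layer is not reproduced here, but the improper-rational idea can be sketched with plain polynomials. The degrees below (4 over 2) and the coefficients are arbitrary illustrations; in the real layer the coefficients are learned, and SCM additionally gives division a total, signed meaning when the denominator hits zero, which this sketch omits:

```python
import numpy as np

def improper_rational(x, p_coeffs, q_coeffs):
    """Rational output head y = P(x) / Q(x) with deg(P) > deg(Q),
    so the output is forced to diverge as the feature x grows."""
    return np.polyval(p_coeffs, x) / np.polyval(q_coeffs, x)

p = [1.0, 0.0, 0.5, 0.0, 0.0]  # degree-4 numerator (illustrative)
q = [1.0, 0.0, 1.0]            # degree-2 denominator (illustrative)

for r in (1.0, 0.1, 0.01):
    x = 1.0 / r                # feature that blows up on approach
    print(f"r = {r}: predicted energy ~ {improper_rational(x, p, q):.1f}")
```

Because the degree gap is 2, the output grows like x² in the feature x = 1/r; no choice of (finite) coefficients can flatten that asymptote, which is exactly the point.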
The Test
To prove this matters, we designed a stress test called The Tether.
We placed two simulated atoms at a safe distance and attached a virtual spring to one of them. We then dragged it relentlessly toward the other atom, increasing the spring tension until something broke.
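A toy re-creation of this protocol (our own simplified version, not the paper's exact setup): take a true Lennard-Jones wall, cap it at a finite height to mimic a smooth network's extrapolation, and find where each wall balances a spring of increasing stiffness pulling toward full overlap:

```python
import numpy as np

def hard_wall(r):
    """12-6 Lennard-Jones energy: diverges as r -> 0."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def soft_wall(r, height=50.0):
    """The same wall capped at a finite height, mimicking how a
    smooth network extrapolates instead of diverging near r = 0."""
    return np.minimum(hard_wall(r), height)

def closest_approach(energy_fn, k, r_max=2.0):
    """Distance minimizing model energy plus the energy of a spring
    of stiffness k anchored at r = 0 (i.e. pulling toward overlap)."""
    rs = np.linspace(1e-3, r_max, 20000)
    return rs[np.argmin(energy_fn(rs) + 0.5 * k * rs**2)]

for k in (10.0, 1e3, 1e5):
    print(f"k = {k:>8}: hard wall holds at r = "
          f"{closest_approach(hard_wall, k):.3f}, soft wall reaches r = "
          f"{closest_approach(soft_wall, k):.3f}")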
The numbers were staggering. Across 10 random seeds, ZeroProofML reduced the extrapolation error in the dangerous "Core" region, where atoms are nearly overlapping, by >3,200x (Log-MAE 1.33 vs 4.84).
Why this matters for AI Science
We spend a lot of time making neural networks bigger, deeper, and wider. But for scientific problems, Architecture is Destiny. If the underlying physics contains a singularity (like the blow-up in repulsion energy as r → 0), a standard neural network will always underestimate it. It smooths the world because it was designed to classify images, not to simulate fermions. By aligning the architecture with the algebra of the singularity (using Improper Rational functions), we didn't just get lower error. We got Valid Physics. We turned a ghost story into a simulation tool.