Ghost Molecules: Why Neural Networks Fail at Atomic Repulsion

07 Jan 2026

Imagine you are training a neural network to simulate a drug molecule. You feed it gigabytes of quantum chemistry data. It learns the bond lengths. It learns the angles. It achieves state-of-the-art accuracy on the test set. Then you run a simulation. For a few nanoseconds, everything looks fine. Then, suddenly, two carbon atoms drift too close together. Instead of bouncing off each other, they slide through each other. Your simulation crashes. You have created a "Ghost Molecule."

This happened because your neural network failed to learn the most fundamental law of atomic physics: the Pauli Exclusion Principle. Two things cannot occupy the same space at the same time. In our recent paper, we showed why standard Deep Learning architectures create these Ghost Molecules, and how a specific architectural change fixes it, reducing physical violations by a factor of 3,000.

The "Soft Wall" Problem

In physics, atomic repulsion is modeled by the Lennard-Jones potential. As the distance $r$ between atoms approaches zero, the energy shoots to infinity (as $1/r^{12}$). This is a Hard Wall. It is an infinite energy barrier that no force in the universe can overcome.
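
For reference, the standard 12-6 form of this potential, with well depth $\epsilon$ and length scale $\sigma$, is

$$V_{\mathrm{LJ}}(r) = 4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],$$

and it is the $(\sigma/r)^{12}$ term that diverges as $r \to 0$.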

Neural networks (specifically MLPs with ReLU activations) struggle to learn Hard Walls. Even if you feed them the inputs as $1/r$, the network combines them linearly.

  • The Math: Outside its training range, a ReLU MLP extrapolates as $y \approx mx + b$.
  • The Physics: This creates a ramp, not a wall. We call this a "Soft Wall."

A Soft Wall creates a finite repulsive force. If your simulation optimizer pushes hard enough (e.g. trying to satisfy a bond constraint elsewhere), it can overpower the Soft Wall. The atoms overlap. The physics breaks.
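
To make the Soft Wall concrete, here is a minimal, self-contained sketch (ours, not the paper's code) using scikit-learn: a ReLU MLP is fit to the repulsive $1/r^{12}$ branch at safe separations and then queried in the core region it never saw.

```python
# A minimal sketch of the Soft Wall (illustrative only, not the paper's setup).
import numpy as np
from sklearn.neural_network import MLPRegressor

def lj_repulsion(r, eps=1.0, sigma=1.0):
    """Repulsive 1/r^12 term of the Lennard-Jones potential."""
    return 4.0 * eps * (sigma / r) ** 12

rng = np.random.default_rng(0)

# Train only on "safe" separations, r in [0.9, 2.5] sigma.
r_train = rng.uniform(0.9, 2.5, size=2000)
x_train = (1.0 / r_train).reshape(-1, 1)   # input is 1/r, as described above
y_train = lj_repulsion(r_train)

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   max_iter=5000, random_state=0).fit(x_train, y_train)

# Probe the dangerous core region the model never saw.
for r in (0.8, 0.6, 0.5, 0.4, 0.3):
    pred = mlp.predict(np.array([[1.0 / r]]))[0]
    print(f"r = {r:.1f}  true = {lj_repulsion(r):12.1f}  mlp = {pred:12.1f}")

# A ReLU MLP is piecewise linear in its input, so beyond the training range
# its output grows at most linearly in 1/r: a ramp, not a wall.
```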

The Solution: Improper Rationality

We solved this by changing the Inductive Bias of the output layer. Instead of a linear neuron, we used a Signed Common Meadows (SCM) rational layer.

Crucially, we configured it to be Improper. In rational function theory, an improper rational function is one where the degree of the numerator $P$ is greater than that of the denominator $Q$ ($\deg P > \deg Q$).

  • Standard MLP: Extrapolates Linearly ($O(x)$).
  • Improper SCM: Extrapolates Super-Linearly ($O(x^k)$).

By setting $\deg P = 3$ and $\deg Q = 1$ in our architecture, we mathematically guaranteed that the repulsive force would grow quadratically (or faster) as the atoms approached. The network didn't have to "learn" to be stiff. It was architecturally incapable of being soft.
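
Here is a minimal sketch of that degree argument, written as a PyTorch module purely for illustration. The class name and the small denominator guard are our own; the actual SCM layer defines division algebraically (including what happens when $Q(x) = 0$) rather than clamping it.

```python
# Illustrative only: an "improper" rational head with deg P = 3 > deg Q = 1.
# Its leading behaviour in the input x (here, x = 1/r) is ~ (p3/q1) * x^2,
# so it is forced to extrapolate super-linearly; a linear head cannot do this.
import torch

class ImproperRationalHead(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.p = torch.nn.Parameter(0.1 * torch.randn(4))  # p0 + p1 x + p2 x^2 + p3 x^3
        self.q = torch.nn.Parameter(0.1 * torch.randn(2))  # q0 + q1 x

    def forward(self, x):
        num = sum(c * x**i for i, c in enumerate(self.p))
        den = sum(c * x**i for i, c in enumerate(self.q))
        # Crude guard against Q(x) = 0; true SCM semantics handle this exactly.
        return num / (den.abs() + 1e-3)

head = ImproperRationalHead()
x = torch.linspace(1.0, 10.0, 5)
print(head(x))  # magnitude grows ~quadratically in x, never flattens into a linear ramp
```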

The Test

To prove this matters, we designed a stress test called The Tether.

We placed two simulated atoms at a safe distance and attached a virtual spring to one of them. We then dragged it relentlessly toward the other atom, increasing the spring tension until something broke.
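
A rough sketch of that protocol (our reconstruction of the setup described here, with hypothetical function names, not the paper's benchmark code): treat it as a one-dimensional energy minimisation in which a harmonic tether of increasing stiffness pulls one atom onto the other, and check where the total energy bottoms out.

```python
# Illustrative "Tether" stress test. `energy_model` stands in for whichever
# potential is under test (analytic oracle, Steel-Man MLP, or ZeroProofML).
import numpy as np

def lj_repulsion(r, eps=1.0, sigma=1.0):
    """Analytic 1/r^12 wall, used here as the oracle."""
    return 4.0 * eps * (sigma / r) ** 12

def tether_test(energy_model, weight, r_start=2.0):
    """Drag one atom toward the other (anchor at r = 0) with stiffness `weight`.

    Returns the separation that minimises total energy; ~0 means the atoms fused.
    """
    r = np.linspace(0.05, r_start, 4000)              # candidate separations
    total = energy_model(r) + 0.5 * weight * r**2     # wall + harmonic tether
    return r[np.argmin(total)]

for w in (100, 1250, 3000):
    print(f"weight = {w:5d}  equilibrium separation = {tether_test(lj_repulsion, w):.3f}")

# Against the analytic wall the separation shrinks but never reaches zero.
# Against a Soft Wall model, the quadratic tether eventually wins and the
# minimiser collapses toward r = 0: the atoms fuse.
```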

  1. Analytic Physics (Oracle): The atoms never touched. The energy barrier held infinitely.
  2. Steel-Man MLP: The "Soft Wall" held up to a weight of 1,250. Then it collapsed. The atoms fused.
  3. ZeroProofML: We maxed out the test weight at 3,000. The wall held. The atoms remained separate.

The numbers were staggering. Across 10 random seeds, ZeroProofML reduced the extrapolation error in the dangerous "Core" region ($r < 0.5\sigma$) by >3,200x (Log-MAE 1.33 vs 4.84).
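
As a sanity check on that ratio (assuming Log-MAE here denotes the base-10 logarithm of the mean absolute error), the two reported values line up with the headline number:

$$10^{\,4.84 - 1.33} = 10^{3.51} \approx 3{,}200.$$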

Why this matters for AI Science

We spend a lot of time making neural networks bigger, deeper, and wider. But for scientific problems, Architecture is Destiny. If the underlying physics contains a singularity (like $1/r^{12}$), a standard neural network will always underestimate it. It smooths the world because it was designed to classify images, not to simulate fermions. By aligning the architecture with the algebra of the singularity (using Improper Rational functions), we didn't just get lower error. We got Valid Physics. We turned a ghost story into a simulation tool.
