Neurosymbolic AI: The Architecture of a Semantic Neural Network. How to Teach LLMs to Calculate

LLMs fail at elementary math. Corporations spend billions, yet end up bolting calculators onto computing machines of incredible power. Attempts to fix this with Chain-of-Thought prompting, fine-tuning on arithmetic tasks, or context expansion have all failed.
I ran a series of experiments to understand why, and concluded that neural networks are simply not built for discrete arithmetic. Their true strength is continuous transformations.
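A quick way to see this failure mode for yourself (a minimal sketch, not one of the experiments from this article; the model size, training range, and hyperparameters below are illustrative assumptions): train a small MLP to add two numbers drawn from a narrow range, then query it far outside that range.

```python
import torch
import torch.nn as nn

# Toy demonstration: an MLP learns addition as a smooth surface over its
# training range, not as the discrete operation "+", so it interpolates
# well but extrapolates poorly.
torch.manual_seed(0)

model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    x = torch.rand(256, 2) * 10           # training inputs: [0, 10)
    y = x.sum(dim=1, keepdim=True)        # target: exact sum
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    # In-distribution: close to 7.0.
    print(model(torch.tensor([[3.0, 4.0]])))
    # Out-of-distribution: typically far from 700.
    print(model(torch.tensor([[300.0, 400.0]])))
```

Inside the training range the approximation looks perfect; outside it, the error grows without bound, which is exactly the behavior you would expect from a continuous function fitter rather than an arithmetic unit.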
This article describes the implementation of a novel neural network architecture that combines the precision of symbolic AI with the generalization capabilities of LLMs. As always, experiments and code are included.