Embedded AI - Intelligence at the Deep Edge
“Intelligence at the Deep Edge” is a podcast exploring the fascinating intersection of embedded systems and artificial intelligence. Dive into the world of cutting-edge technology as we discuss how AI is revolutionizing edge devices, enabling smarter sensors, efficient machine learning models, and real-time decision-making at the edge.
Discover more on Embedded AI (https://medium.com/embedded-ai) — our companion publication where we detail the ideas, projects, and breakthroughs featured on the podcast.
Help support the podcast - https://www.buzzsprout.com/2429696/support
Pi and the Mirage of Patternicity
In April 2025, a claim began circulating online: pi is gradually increasing around the 7,237th decimal place. A math enthusiast in Cincinnati named April Simons had apparently flagged the anomaly. Prof F.O. Olsday, head of the Number Theory Group at Princeton, was quoted confirming it. Cosmologists were linking it to the accelerating expansion of the universe. The same algorithm, the same hardware, different results. A 4 becoming a 5. Persistent. Inexplicable.
Except that "F.O. Olsday" is a phonetic rearrangement of "Fool's Day." And April Simons was posting from Cincinnati on the first of April.
Pi has not changed. It cannot change. It is a fixed ratio determined by Euclidean geometry, and every one of its digits is as immutable as the definition that produces them. The 7,237th digit was a 4 before April 2025, it was a 4 afterwards, and it will remain a 4 until the heat death of the universe and beyond.
But here is what matters: the joke worked. It worked on humans, and it would work on machines.
This episode examines why both biological and artificial neural networks are structurally vulnerable to detecting patterns in patternless data, a phenomenon with a clinical name: apophenia. We trace the evolutionary logic behind false-positive pattern detection, from Skinner's superstitious pigeons to the fusiform face area that fires on toast. We then show how the same asymmetry, optimising for recall at the expense of precision, is recapitulated in trained neural networks through simplicity bias: the documented tendency of gradient-descent-trained models to latch onto whichever statistical regularity is easiest to extract, regardless of whether it reflects causal structure.
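Simplicity bias is easy to reproduce in miniature. The sketch below (an illustrative toy, not from the episode; feature names and all numbers are invented) trains a logistic regression by plain gradient descent on data with two features: a weak "core" feature that genuinely drives the label, and an easy "shortcut" feature that merely co-occurs with it in training. Gradient descent latches onto the shortcut, and accuracy collapses when the shortcut stops correlating at test time.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Causal but noisy feature: labels are generated from it plus noise.
core = rng.normal(0.0, 1.0, n)
y = (core + rng.normal(0.0, 1.5, n) > 0).astype(float)

# Spurious shortcut: agrees with the label 95% of the time in training
# (think of a watermark that happens to co-occur with one class).
shortcut = np.where(rng.random(n) < 0.95, 2 * y - 1, 1 - 2 * y)
X_train = np.column_stack([core, shortcut])

# Logistic regression fitted with vanilla gradient descent.
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X_train @ w))
    w -= 0.1 * X_train.T @ (p - y) / n

# The easy regularity dominates: the shortcut weight dwarfs the core weight.
print("weights (core, shortcut):", w)

# At test time the shortcut is pure noise, so the model falls apart.
X_test = np.column_stack([core, rng.choice([-1.0, 1.0], n)])
test_acc = np.mean(((X_test @ w) > 0) == y)
print("test accuracy:", test_acc)
```

The model never had to learn the hard, causal feature because the shortcut was statistically cheaper, which is precisely the machine analogue of seeing a face in the toast.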
If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where there is lots more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy us a coffee or just provide feedback. We love feedback!