Embedded AI - Intelligence at the Deep Edge

Why Large Language Models think differently to us

David Such Season 4 Episode 3


This episode explores the world of embeddings, mathematical representations that allow Large Language Models (LLMs) like ChatGPT to “think” in thousands of dimensions. While humans are limited to conceptualizing in three dimensions, LLMs operate in 2048 dimensions or more, using embeddings to encode meaning and capture semantic relationships between words.
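As a rough illustration of what “capturing semantic relationships” means, related words end up as vectors pointing in similar directions, which can be measured with cosine similarity. The tiny 4-dimensional vectors below are invented for this sketch (real model embeddings have 2048 or more dimensions and come from training, not by hand):

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- hand-made stand-ins, not from any
# real model, just to show how similarity is measured.
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1, 0.2]),
    "queen": np.array([0.9, 0.7, 0.2, 0.3]),
    "apple": np.array([0.1, 0.2, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words score higher than unrelated ones.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

The same distance idea scales up to thousands of dimensions, which is how an LLM can treat “king” and “queen” as near neighbours without any human-style understanding.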

The discussion contrasts this form of statistical pattern recognition with the richer, experience-driven reasoning of the human brain. It also introduces a new technique called ‘vec2vec,’ which enables translation between embeddings from different models. While powerful, this raises potential security concerns about reverse-engineering sensitive data from vector databases. The episode sheds light on the impressive capabilities of LLMs, while also questioning what it means for a machine to “understand.”
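To give a flavour of translating between embedding spaces, the sketch below fits a linear map from one synthetic space to another using least squares. Note the simplification: this assumes paired examples across the two spaces, whereas vec2vec itself learns the translation without pairs, so treat this as an illustration of the geometric idea only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two models embedding the same 100 concepts: space B is a
# hidden rotation of space A plus noise. Synthetic data, not real
# model embeddings.
concepts_a = rng.normal(size=(100, 8))                  # "model A", 8-dim
hidden_map = np.linalg.qr(rng.normal(size=(8, 8)))[0]   # unknown rotation
concepts_b = concepts_a @ hidden_map + rng.normal(scale=0.01, size=(100, 8))

# Fit a linear translation A -> B by least squares (simplified: assumes
# paired examples, unlike the unsupervised vec2vec technique).
W, *_ = np.linalg.lstsq(concepts_a, concepts_b, rcond=None)

# Translate a held-out embedding from space A into space B.
new_a = rng.normal(size=(1, 8))
predicted_b = new_a @ W
true_b = new_a @ hidden_map
print(np.linalg.norm(predicted_b - true_b))  # translation error
```

If vectors in one space can be mapped into another this way, embeddings stored in a vector database are not as opaque as they might appear, which is the security concern the episode raises.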


If you are interested in learning more, please subscribe to the podcast or head over to https://medium.com/@reefwing, where you will find much more content on AI, IoT, robotics, drones, and development. To support us in bringing you this material, you can buy me a coffee or simply provide feedback. We love feedback!