Denise Holt, reporting LIVE from IWAI 2023. In this groundbreaking video, Dr. Friston unveils his latest research on Active Inference, shedding light on its potential as a systematic blueprint for Artificial General Intelligence (AGI). Watch as Friston and his VERSES team pioneer a new kind of AI that mimics biological intelligence, making any data 'smart'.
“The collective progress in this field showcases Active Inference’s potential as a systematic blueprint for Artificial General Intelligence (AGI).”
As Chief Scientist at VERSES and pioneer of the Free Energy Principle that underlies Active Inference, Friston and his team have developed an entirely new kind of AI that mimics biological intelligence, able to take any amount of data and make it ‘smart.’
In an exclusive presentation at the recent International Workshop on Active Inference (IWAI 2023), held in Ghent, Belgium, September 13–15, 2023, world-renowned neuroscientist Dr. Karl Friston gave fewer than 100 global Active Inference experts and researchers a glimpse into the future of next-generation artificial intelligence. His talk offered perspective on some of the latest research related to state spaces, universal generative models, belief sharing between agents, active learning, and structured learning.
Dr. Friston is at the forefront of developing ‘Active Inference’ models, revolutionizing how intelligent systems learn and adapt. The information shared at IWAI builds on more than a decade of groundbreaking work and collaboration with fellow researchers at University College London.
Watch the full episode NOW.
The Fabric of Intelligence: State Spaces and Factor Graphs
At the heart of understanding intelligence lies the idea of ‘generative world models’ — probabilistic frameworks that encapsulate the underlying structure and dynamics involved in the creation of sensory data. As Friston explained, developing a generative world model that can be widely applied has been a core pursuit.
The challenge lies in modeling the intricacies of ‘state spaces’ — capturing not just sensory data itself but the hidden variables and control factors that determine how stimuli unfold over time. By incorporating hierarchical and factorial structure, increased temporal depth, and an emphasis on Agent dynamics and control paths, Friston has worked to construct expressive yet interpretable models of complex environments.
As he suggested, the resulting grammar of cognitive machinery — factor graphs equipped with message passing — may provide a foundation for advancing distributed and embedded intelligence systems alike.
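To make “factor graphs equipped with message passing” concrete, here is a minimal sketch: beliefs about a two-state hidden variable are updated by passing messages through a likelihood factor and a transition factor along a chain. The two-state setup and all the numbers are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of message passing on a chain-structured factor graph.
# The factors below (likelihood A, transition B) are invented for illustration.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

# Likelihood factor P(observation | hidden state): rows = states, cols = observations.
A = [[0.9, 0.1],   # state 0 mostly emits observation 0
     [0.2, 0.8]]   # state 1 mostly emits observation 1

# Transition factor P(next state | current state).
B = [[0.7, 0.3],
     [0.3, 0.7]]

def forward_messages(observations, prior):
    """Pass messages left-to-right along the chain, returning filtered beliefs."""
    belief = prior
    beliefs = []
    for obs in observations:
        # Message from the likelihood factor: weight each state by P(obs | state).
        belief = normalize([belief[s] * A[s][obs] for s in range(2)])
        beliefs.append(belief)
        # Message through the transition factor into the next time slice.
        belief = [sum(B[s][s2] * belief[s] for s in range(2)) for s2 in range(2)]
    return beliefs

beliefs = forward_messages([0, 0, 1], prior=[0.5, 0.5])
print(beliefs[0])  # belief after the first observation
```

The same two local operations — multiplying in a factor, then marginalizing through it — are what scale up, in principle, to the richer hierarchical graphs Friston describes.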
Exploring the Nature of State Spaces
A significant portion of the talk focused on state spaces within these generative models. Friston highlighted how understanding these spaces is vital for grasping the dynamics of how our brain predicts and reacts to stimuli.
Friston has focused heavily on developing more complete generative models of these state spaces, generalizing and expanding on partially observed Markov decision processes (POMDPs). His models place special emphasis on representing paths and control variables over time, while accounting for a variety of factors that can operate independently or in conjunction with each other, across different layers or levels of the model. The direct modeling of dynamics and control in his work sheds new light on concepts like intentional planning, as opposed to behavior driven by prior preferences.
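As a toy illustration of why paths and control variables matter, the sketch below rolls a partially observed Markov decision process forward under different control paths and shows that they predict different futures. The two-state, two-action numbers are assumptions for illustration only.

```python
# Toy POMDP-style generative model in which hidden-state transitions
# depend on a chosen control path. All numbers are illustrative assumptions.

# B[u][s][s2] = P(next state s2 | current state s, control u).
B = [
    [[0.9, 0.1], [0.1, 0.9]],  # control 0: tend to stay in the current state
    [[0.1, 0.9], [0.9, 0.1]],  # control 1: tend to switch states
]

def predict_path(prior, control_path):
    """Roll the model forward under a control path, returning state beliefs over time."""
    belief = prior
    trajectory = [belief]
    for u in control_path:
        belief = [sum(B[u][s][s2] * belief[s] for s in range(2))
                  for s2 in range(2)]
        trajectory.append(belief)
    return trajectory

# Different control paths yield different predicted futures from the same start.
stay = predict_path([1.0, 0.0], [0, 0])      # keep staying
switch = predict_path([1.0, 0.0], [0, 1])    # stay, then switch
print(stay[-1], switch[-1])
```

Because futures are indexed by control paths like these, planning can be cast as selecting among paths rather than merely reacting to observations.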
Emergent Communication Systems Among Multiple Agents
Friston showed that when Intelligent Agents pursue ‘active learning’ with a focus on gaining information, this approach inherently leads to the development of common communication methods and linguistic structures. Essentially, as Agents strive to enhance their model’s accuracy and effectiveness by exchanging beliefs, self-organized patterns evolve, spontaneously establishing a basis for mutual understanding and predictability.
Developing methods for seamlessly pooling knowledge and achieving consensus while retaining specialization is an active research direction. The integrated components must balance stability with flexibility, adapting core beliefs slowly while remaining receptive to external influences. As Friston noted, much like human maturational trajectories, childlike malleability fades, yet the integration of new ideas remains possible through niche environments.
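One simple way to picture this balance of stability and flexibility is a pooling rule in which each agent nudges its belief slowly toward the group average. The sketch below assumes a linear opinion pool and three invented starting beliefs; it is an illustration of consensus formation, not the specific scheme from the talk.

```python
# Sketch of belief sharing among agents: each agent moves its categorical
# belief a small step toward the pooled (average) belief each round.
# Agents, beliefs, and the adaptation rate are illustrative assumptions.

def normalize(v):
    s = sum(v)
    return [x / s for x in v]

agents = [
    [0.8, 0.2],   # an agent confident in state 0
    [0.3, 0.7],   # an agent leaning toward state 1
    [0.5, 0.5],   # an undecided agent
]

RATE = 0.2  # slow adaptation: core beliefs move gradually, not all at once

for step in range(50):
    # Pool: average the agents' beliefs (a linear opinion pool).
    pooled = [sum(a[i] for a in agents) / len(agents) for i in range(2)]
    # Each agent takes a small step toward the consensus.
    agents = [normalize([(1 - RATE) * a[i] + RATE * pooled[i] for i in range(2)])
              for a in agents]

print(agents)  # the agents converge on a mutually predictable shared belief
```

The small RATE plays the role of the “childlike malleability” trade-off: a larger rate reaches consensus faster but erases individual specialization sooner.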
Assembling a Universal Generative World Model
A universal ‘world model’ that can be widely applied remains an ambitious goal. Friston discussed two potential approaches — an initially simple model that grows structurally to capture more complexity versus a massively interconnected model that relies more on attentional spotlights. He also noted the potential for composing such models from subgraphs of specialized Intelligent Agents, allowing for parallel development of appropriately scoped pieces. Belief sharing through communication protocols, such as those developed by VERSES AI, enables the learning Agents to converge on a mutually predictable, and thereby more efficient, joint world model.
Belief Sharing Through Active Learning
Belief sharing and its impact on active learning formed another cornerstone of Dr. Friston’s presentation. He revealed how our brains share and update beliefs, a process central to learning and adapting to new environments.
In his talk, Friston demonstrated belief sharing in an example with multiple Intelligent Agents playing a hiding game. This emergent communication system arises naturally from the drive to maximize the information and efficiency of the shared world model.
Information-Theoretic Insights into Active Learning
In a more technical sense, Friston views active learning as a process in which updates to the model are themselves treated as actions aimed at reducing expected free energy. This perspective is rooted in information theory: expected free energy bounds the information gain afforded by updating specific parameters, subject to prior preferences. As a result, the model learns mappings that are both sparse and rich in mutual information, effectively representing structured relationships. Friston illustrated this point in simulation.
Structured Learning: A New Perspective
Finally, structured learning was discussed as a novel approach within the realm of Active Inference. Dr. Friston provided insights into how structured learning frameworks could offer more efficient ways of understanding and interacting with the world.
Structured Learning through Guided Model Expansion
Friston explored active model selection, framing the expansion of a model’s structure as similar to goal-directed optimization through expected free energy. Gradually introducing new elements, such as states or factors, following simple heuristic principles, allows for the automatic expansion of the model’s complexity, to better reflect and improve its alignment with the learning environment. Friston demonstrated applications ranging from unsupervised clustering to the development of distinct, non-overlapping representations in an Agent simulation.
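A crude caricature of guided model expansion: start with one latent state, and add a new state only while doing so lowers a complexity-penalized cost, loosely analogous to Bayesian model comparison. The 1-D clustering task, data, and penalty below are illustrative assumptions, not Friston’s actual procedure.

```python
# Sketch of structure learning by guided expansion: grow the number of
# cluster centers (latent states) only while fit improves more than the
# complexity penalty. Data and penalty are illustrative assumptions.

DATA = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9, 9.8, 10.1]
PENALTY = 1.0  # cost charged per extra cluster (a stand-in for model complexity)

def fit_cost(centers):
    """Sum of squared distances to the nearest center, plus a complexity charge."""
    error = sum(min((x - c) ** 2 for c in centers) for x in DATA)
    return error + PENALTY * len(centers)

def expand(centers):
    """Heuristic proposal: add a new cluster at the worst-explained data point."""
    worst = max(DATA, key=lambda x: min((x - c) ** 2 for c in centers))
    return centers + [worst]

centers = [sum(DATA) / len(DATA)]  # start with a single, maximally simple model
while True:
    candidate = expand(centers)
    if fit_cost(candidate) < fit_cost(centers):
        centers = candidate  # accept the expansion: the richer model wins
    else:
        break  # stop when added complexity no longer pays for itself

print(sorted(centers))  # the model has grown just enough latent states
```

The loop stops on its own once the penalty outweighs the improvement in fit, which is the essential behavior of evidence-driven structure growth.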
Structured Deep Learning
Intelligence requires not just statistical modeling, like what we see in current machine learning and deep learning systems, but also the recognition of structural composition — the part-to-whole patterns and dependencies that characterize natural systems in the real world. Friston outlined strategies for progressively evolving a model’s architecture and improving its modular elements in response to fresh data, all centered around clear goals.
Strikingly, this guided increase in complexity leads to the natural emergence of organized and distinct feature areas, clear categorical structure, and controlled behavior patterns, all from raw multimedia data streams. Such capabilities move beyond today’s passive pattern recognition systems toward flexible, human-like learning: Autonomous Intelligent Systems that interact with and reshape their environments.
The Future of Active Inference AI
Overall, Dr. Karl Friston’s presentation at IWAI 2023 demonstrates that his work continues to push Active Inference AI into new territory through innovations like inductive planning, information-theoretic formalisms, emergent communication systems, and structured learning. As the models, methods, and insights mature, the prospects grow ever stronger for realizing artificial intelligence as multitudes of Intelligent Agents that can learn, act, and cooperate at the level of human intelligence.
While Friston’s presentation was meant only as a glimpse, the clarity of vision and deep intellectual insight behind his work continue to shape the evolution of this next era of AI technologies.
Watch NOW:
Episode Chapters:
00:00 — Introduction by Tim and Karl Friston’s Opening Remarks
06:09 — The Significance of Free Energy in Understanding Life
07:10 — Exploring Markov Blankets and Factor Graphs
09:56 — Hierarchical Generative Models and Time Perception
13:36 — Active Inference and Planning with Expected Free Energy
15:40 — Joint Free Energy and Generalized Synchrony in Physics
16:40 — The Role of Paths and Intentions in Physics and Active Inference
17:35 — Specifying Endpoints of Paths as Goals
30:26 — Communication and Belief Sharing in Federated Systems
31:35 — Generative Models and Perception in Agents
40:19 — Model Comparison and Latent State Discovery
41:25 — Active Model Selection and Handwritten Digit Styles
45:30 — Structured Learning and Object Localization
49:10 — Epistemic Foraging and Confidence in Learning
50:21 — The Dynamics of Information Gain and Learning
51:29 — Conclusion and Q&A Begins
01:19:33 — Emphasis on Constraints in the Free Energy Principle
01:23:40 — Closing Remarks
01:24:55 — Karl Friston’s Reflections on the Field’s Evolution and the Next Generation
Explore the future of AI with Active Inference and gain insights into this revolutionary approach! Visit my blog, and subscribe to my channels and newsletter for more groundbreaking content.