Composable Intelligence

Posted by Xiaming Chen on November 5, 2025

From a physical point of view, everything in the world, from stones to brains, is composed of material units such as quarks and atoms. This perspective inspires us to consider intelligence from a material basis as well. In this article, I present a novel perspective on understanding intelligence, drawn from observations of biological intelligence and the limitations of neural-network-based AI. I call it Composable Intelligence, a new way of thinking about how intelligence might evolve or be continuously constructed from a hierarchical point of view, applicable to both biological and artificial systems.

The Framework

[Figure: composeAI, the layered TCML framework]

Building on this idea, I propose a new framework, Topological Coupling Meta Learning (TCML), illustrated in the figure above, to guide the engineering of such a practical system. From the bottom up, four foundational layers together build intelligence:

  • Representative Carriers: The physical and atomic representations of concepts, facts, reasoning cues, and so on. Although they may take different forms in different intelligent systems, they must exist concretely and possess reactive capabilities, interacting with each other or with other components of the system.

  • Reactive Couplings: The underlying mechanism of continual, evolving learning. New knowledge or experience emerges through the construction of new instances or coupling structures, while gradual forgetting occurs through the decay or disappearance of existing couplings. In this sense, reactive coupling should be self-organizing and open-ended rather than closed.

  • Compositional Meta Structures: Stable and reusable units formed from reactive couplings to perform higher-level and more complex functions such as self-reflection and memory retention. These structures are observable, measurable, and open to functional intervention.

  • Scaling Topology: A topological structure, such as a graph, with nonlinear scaling ability built upon the meta structures. The entire system grows to be both wide and deep, enabling richer and more integrated forms of intelligence.
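To make the four-layer design concrete, here is a minimal toy sketch of how the layers could nest, bottom up. All class names and fields are hypothetical illustrations invented for this post, not an API prescribed by TCML:

```python
from dataclasses import dataclass, field

@dataclass
class Carrier:
    """Representative carrier: a concrete unit holding a concept or cue."""
    name: str
    state: float = 0.0  # carriers are reactive: they respond to stimuli

@dataclass
class Coupling:
    """Reactive coupling: a link among carriers that can form and decay."""
    members: tuple[str, ...]
    strength: float = 1.0

@dataclass
class MetaStructure:
    """Compositional meta structure: a stable, reusable group of couplings."""
    name: str
    couplings: list[Coupling] = field(default_factory=list)

@dataclass
class Topology:
    """Scaling topology: a graph whose nodes are meta structures."""
    nodes: dict[str, MetaStructure] = field(default_factory=dict)
    edges: set[tuple[str, str]] = field(default_factory=set)

    def connect(self, a: str, b: str) -> None:
        self.edges.add((a, b))

# Wiring the layers bottom-up: couplings form meta structures,
# and meta structures become nodes of the scaling topology.
memory = MetaStructure("memory", [Coupling(("cat", "fur"))])
reflection = MetaStructure("self-reflection", [Coupling(("self", "goal"))])
system = Topology({"memory": memory, "self-reflection": reflection})
system.connect("memory", "self-reflection")
```

The point of the sketch is only the containment order: carriers sit inside couplings, couplings inside meta structures, and meta structures inside a topology that can keep growing.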

Representative Carriers

In Composable Intelligence, we regard intelligence as a physical and objective process, something that, like everything else in the world, exists within substance. The core and foundation of intelligence are materials, which I call Representative Carriers. These carriers differ across forms of intelligence: biological neural cells, embedding vectors in artificial neural networks, or symbols in expert AI systems. Although we do not yet fully understand the underlying mechanisms of carriers such as biological neurons, it is fundamentally vital to master how to observe, control, and enhance them.

Understanding or designing a controllable representative carrier for a specific form of intelligence is not trivial, and progress is often limited by the state of the art in inspection technologies. Even today, it is far from clear what the actual representative carriers of human brain intelligence are. Are they stored in biochemical materials? Do they emerge from dynamic activities such as resting and action potentials? Or do they lie in the connectivity patterns of neural networks?

AI research, whether through simulation or imitation, provides us, at least theoretically, with a wide range of tools and resources for experimenting with different designs of representative carriers. Yet after decades of AI development, we have gained practical experience with only two main forms of representative carriers: symbols in expert AI systems, and embedding vectors in machine learning and modern deep neural networks. We have been blinded by short-term achievements and have ignored some of the core problems that truly deserve attention in the long run.

Reactive Coupling

Built upon representative carriers is a layer that performs Reactive Coupling. This conceptual layer is inspired by biological intelligence, where neurons can form new pathways or even new cells (a.k.a. neuroplasticity) to create memories and neural activities. In an abstract sense, a group of representative carriers together with their surrounding materials constitute a coupling structure that reacts to external stimuli. This fundamental idea challenges the current end-to-end training paradigm of artificial neural network architectures, such as GPT models.

Reactive couplings are pervasive in biological systems and intelligence, yet they remain absent from current artificial intelligent systems. One main reason is that AI development in recent decades has been confined to probabilistic, end-to-end training paradigms. I believe that future artificial general intelligence (AGI) will be constructed upon a well-designed, scalable system, much like hardware chips, in which complex macro behaviors emerge from numerous micro structures governed by a limited set of rules. At the current stage, these micro structures and rules may be discovered by LLM-driven technologies and then distilled into a minimal stable set.

Another important property of reactive couplings is their dynamism. They encode specific types of functionality while remaining adaptive to external changes. This forms the foundation of evolution and self-improvement, which is essential for achieving Gödel machines with self-reference.
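As a rough illustration of these dynamics, the following Hebbian-style sketch strengthens couplings whose carriers co-activate, decays the rest until they vanish (forgetting), and lets new couplings emerge among co-active carriers (open-endedness). The update rule and its constants are invented for illustration; they are not part of the framework:

```python
def update_couplings(couplings: dict[tuple[str, str], float],
                     active: set[str],
                     learn: float = 0.5,
                     decay: float = 0.1) -> None:
    """One reactive step: reinforce, decay, and form couplings in place."""
    for pair in list(couplings):
        if pair[0] in active and pair[1] in active:
            couplings[pair] += learn      # reinforce a co-activated coupling
        else:
            couplings[pair] -= decay      # decay an unused coupling
            if couplings[pair] <= 0:
                del couplings[pair]       # the coupling disappears: forgotten
    # open-ended growth: new couplings emerge among co-active carriers
    for a in active:
        for b in active:
            if a < b and (a, b) not in couplings:
                couplings[(a, b)] = learn

couplings = {("bell", "food"): 1.0}
update_couplings(couplings, {"bell", "saliva"})  # a new coupling forms
update_couplings(couplings, set())               # everything decays a little
```

Note that learning here is local and structural (couplings appear, strengthen, and vanish) rather than a global end-to-end gradient update, which is exactly the contrast drawn above.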

Meta Structures and Topology

Recently, compositional generalization has attracted growing attention in the research community as a way to address the limitations of LLMs in reasoning and memorization. In the TCML framework, we propose a similar approach through compositional meta structures, which consist of groups of reactive couplings that are functionally independent.

A fascinating example of meta structures can be found in the cerebral cortex. In human brain research, functional maps of cortical areas have been drawn using fMRI and cognitive experiments. These meta structures can be probed and measured through high-level methods, and can be reused to explore a vast space of topological configurations.

Under this design, we gain an additional dimension of scaling, i.e., topological scaling, beyond the parameter and data scaling that currently empower LLMs. From a structural perspective, parameter scaling in deep neural networks simply increases the number of layers with homogeneous architectures, which can be considered a form of linear scaling. Data scaling follows a similar principle. In contrast, topological scaling opens a new era of nonlinear scaling, where the system’s capacity grows through the richness of reactive couplings, the dynamics of meta structures, and the boundless possibilities of topological transformation.
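A back-of-the-envelope count makes the contrast vivid: stacking homogeneous layers grows capacity along a single axis (depth), while the number of possible wirings among meta structures grows combinatorially. The counting model below, one undirected graph per wiring, is illustrative only, not part of the TCML framework:

```python
from math import comb

def topological_configs(n_meta: int) -> int:
    """Number of undirected wirings among n meta structures:
    each of the C(n, 2) potential edges is either present or absent."""
    return 2 ** comb(n_meta, 2)

for n in (4, 8, 16):
    print(f"{n} meta structures -> {topological_configs(n)} wirings")
# 4 -> 64, 8 -> 268435456, 16 -> 2**120 (about 1.3e36) possible wirings
```

Even before counting the richness of couplings inside each meta structure, the configuration space is already exponential in the number of structures, whereas adding one more identical layer adds exactly one configuration.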

In future articles, I will show how to design such a TCML-like system using neurosymbolic technology.


NOTE: This article was fully written by Dr. Xiaming Chen and grammatically checked and polished by ChatGPT.