On AI, Brain, and Mind Research. Q&A with Danko Nikolić
Q1. Your theory of hierarchical adaptations (practopoiesis) offers a unique biological perspective on functional organization and mind emergence. How do you envision this theoretical framework translating into practical AI system architectures? What fundamental shifts in how we design machine learning systems could emerge from truly understanding biological intelligence organization?
I believe this can be practically applied by adding one more adaptive level to our current AI stack. Today, we have two levels:
- Neural networks – the architectures that process information.
- Learning algorithms – the mechanisms that train those networks.
What’s missing is a third adaptive level that sits between these two. I call the mechanisms at this level stateful gates. These gates transiently rewire the network, operating on a timescale of about a second. By doing this, they make the entire system far more flexible, significantly boosting its intelligence.
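To make the idea concrete, here is a minimal toy sketch in Python. This is my own illustration of the general principle, not code from gating.ai or any published implementation: a fast, non-learned gate state transiently modulates a fixed weight matrix and decays back toward neutral, so the "effective wiring" depends on recent activity.

```python
import numpy as np

class StatefulGate:
    """Toy sketch of a 'stateful gate' (illustrative, hypothetical).

    The slow weights W stand for the learned level; the gate vector is a
    fast middle level that transiently rescales connectivity based on
    recent activity, then decays back toward neutral (1.0)."""

    def __init__(self, n_units, decay=0.9, gain=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_units, n_units))  # slow, learned weights
        self.gate = np.ones(n_units)   # fast state; 1.0 means "no rewiring"
        self.decay = decay             # pull gates back toward 1.0 each step
        self.gain = gain               # how strongly activity opens the gates

    def step(self, x):
        # Effective connectivity = slow weights rescaled per-unit by gate state.
        y = np.tanh((self.W * self.gate[:, None]) @ x)
        # Fast adaptation: gates track recent output magnitude, then decay.
        self.gate = self.decay * self.gate + (1 - self.decay) * (1.0 + self.gain * np.abs(y))
        return y
```

Feeding the same input twice produces different outputs, because the gate state, not the weights, has changed in between. That is the sense in which the network is "transiently rewired" without any training step.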
With stateful gates, we will no longer need huge amounts of training data or massive computational power. This approach allows for leaner, smarter systems. More details can be found at www.gating.ai.
I believe stateful gates are the missing piece of the puzzle we’ve been searching for to build the kind of AI we’ve always dreamed of.
Future AI built on stateful gates—grounded in my theory of how intelligence emerges in biology—will exhibit many of the advantages that living brains still hold over silicon-based systems. These include:
- Compact architectures (“small brains”)
- Low energy demands
- Continual learning
- Learning from very small datasets
- A high capability for abstraction
- The ability to adapt to truly novel situations
Q2. The AI-Kindergarten concept represents a fascinating approach to developing AI systems through environmental interaction and self-directed learning. Can you elaborate on how this differs from current reinforcement learning paradigms? What are the most significant technical and philosophical challenges in creating AI systems that genuinely learn like biological entities do?
From the outside, both reinforcement learning (RL) and AI-Kindergarten might seem similar. In both cases, an agent acts, receives feedback, and uses that feedback to improve future actions.
However, under the hood, they are fundamentally different.
- Reinforcement learning operates with two adaptive levels: the neural network itself and its learning algorithm.
- AI-Kindergarten, on the other hand, is designed to support up to four adaptive levels.
Here’s how the hierarchy looks:
- Neural network – standard connectionist architecture.
- Stateful gates – inserted between the network and the learning mechanism, allowing for transient rewiring and flexible adaptation.
- Learning mechanism – trains the neural network.
- Inductive bias learning mechanism – adjusts the learning algorithms themselves over time, making them increasingly efficient.
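The four-level hierarchy above can be sketched as a set of nested adaptation rules operating at increasingly slow timescales. The code below is my own schematic illustration, with invented function names; it is not taken from AI-Kindergarten or Guided Transfer Learning.

```python
# Schematic sketch of four adaptive levels at different timescales
# (illustrative only; all names are hypothetical).

def network_forward(weights, gates, x):
    """Level 1: the neural network computes with gated weights (fastest)."""
    return [w * g * x for w, g in zip(weights, gates)]

def update_gates(gates, activity, rate=0.5):
    """Level 2: stateful gates -- transient rewiring on a ~1 s scale."""
    return [(1 - rate) * g + rate * abs(a) for g, a in zip(gates, activity)]

def learn_weights(weights, error, lr):
    """Level 3: the learning mechanism slowly adjusts the weights."""
    return [w - lr * error for w in weights]

def adapt_inductive_bias(lr, recent_errors):
    """Level 4 (slowest): a meta-level tunes the learning algorithm
    itself -- here, simply its learning rate -- based on progress."""
    improving = recent_errors[-1] < recent_errors[0]
    return lr * 1.1 if improving else lr * 0.5
```

Each level changes on a slower timescale than the one below it, and each shapes how the level below adapts. That separation of timescales, rather than any particular update rule, is the point of the hierarchy.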
At Robots Go Mental, we’ve already developed one such mechanism for level four, called Guided Transfer Learning, but we see many more possibilities for the future.
The biggest challenge is what I call the “Brute Force Zeitgeist”. Today, the dominant approach is to throw as many resources as possible at a problem—more compute, more data, more memory. While this works to some degree, it is inherently limited because the costs grow astronomically; it is even questionable whether today’s leading AI companies, such as OpenAI or Anthropic, can ever become sustainably profitable. The common mindset is: “Brute force works. Good enough. Why bother with anything else?” This slows progress, because neither society nor the industry is patient enough to explore more efficient, biologically inspired solutions like those offered by AI-Kindergarten.
Q3. Your work on ideasthesia explores how semantic associations influence sensory perception. How might this understanding reshape how we approach multimodal AI systems and human-AI interaction? What implications does this have for developing AI that can truly understand context and meaning rather than just processing patterns?
Ideasthesia began as a new way to understand synesthesia, a phenomenon where the senses appear to mix. For example, a person might see colors when they hear music or associate specific numbers with tastes or textures. Our research at the Max Planck Institute revealed that synesthesia is semantic in nature. It’s not directly caused by sensory input, as traditionally thought, but by concepts. This led me to coin the term ideasthesia, combining “idea” (meaning concept in ancient Greek) and “esthesia” (meaning sensation).
Cognitive science has long established that humans think with concepts. Ideasthesia goes further, showing that even basic sensory experiences are mediated by concepts. This suggests there is no such thing as raw perception or raw experience—everything is filtered through meaning.
For AI, this has profound implications. If we want to achieve true understanding and high-level intelligence, we must first figure out how to represent and process concepts, not just patterns. This revives deep philosophical questions, including John Searle’s Chinese Room argument, and underscores the need to design AI that interacts with concepts in a human-like way, rather than merely manipulating symbols.
Q4. With over 20 years in brain and mind research, you’re working to bridge one of the most fundamental questions in cognitive science. From your perspective, what are the key insights from neuroscience that the AI field is still largely overlooking? How might a deeper understanding of consciousness and subjective experience inform the next generation of AI systems?
One major discovery that neuroscience has not fully appreciated is the role of G-protein-gated ion channels and metabotropic receptors. It is well known that these membrane proteins affect how neurons and dendritic branches process information. What hasn’t been widely recognized is that they may hold the key to higher intelligence: these proteins allow for rapid, temporary rewiring of neural circuits, on the scale of milliseconds to fractions of a second, which enables processes like thinking, perception, attention, and decision-making. I explored this idea in detail in this paper. Unfortunately, the field has been slow to recognize the potential and invest in testing these hypotheses.
These biological mechanisms directly inspired my concept of stateful gates. In the brain, they likely represent the middle adaptive level between long-term plasticity and immediate neural activation. If we want to truly understand both human intelligence and advanced AI, this is where we need to focus our efforts.
Q5. Looking ahead, how do you see the convergence of your neuroscientific insights, practical AI development experience, and theoretical frameworks like practopoiesis shaping the future of artificial intelligence? What would success look like in creating truly biological-like AI, and what are the most promising research directions to get there?
After many years of slow, painstaking work, I finally feel I’m starting to bring everything together—theory, neuroscience, and practical AI. It took a long time because these are incredibly hard problems, but now I expect rapid progress. Fingers crossed!
- Practopoiesis guided the development of AI-Kindergarten, a framework for training biologically inspired AI.
- The identification of metabotropic receptors and G-protein-gated ion channels provided the biological basis for these adaptive capabilities.
- This directly led to the creation of stateful gates for practical AI applications.
I am convinced this is the future of AI. Beyond AI itself, these ideas could also revolutionize how we study human cognition, shedding light on complex phenomena like attention, perception, intelligence, and consciousness. The path forward involves:
- Understanding practopoiesis.
- Exploring the role of this new “in-between” adaptive level.
- Studying implementations—from mathematical models to biological membranes.
- Formulating new hypotheses.
- Rigorously testing them.
The original practopoiesis paper can be found here.
I’m also writing a book on Stateful Gating.

Danko Nikolić
Danko Nikolić is a neuroscientist, cognitive scientist, and theorist dedicated to bridging the gap between brain and mind. He holds degrees in Civil Engineering (1992) and Psychology (1994) from the University of Zagreb, and earned his M.A. (1997) and Ph.D. (1999) in Psychology from the University of Oklahoma.
After joining the Max Planck Institute for Brain Research in 1999, he pioneered parallel recordings from hundreds of neurons in the visual cortex using multi-electrode probes. Over time, his work has spanned electrophysiology, visual cognition, behavioral studies, and the investigation of phenomenal experiences such as synesthesia and ideasthesia.
Nikolić is best known for developing practopoiesis, a theoretical framework explaining how adaptive, intelligent systems—biological or artificial—self-organize and produce mental phenomena. He also introduced ideasthesia, showing that meaning and conceptual processing play a central role in sensory experiences. Through his AI-Kindergarten project, he explores how adaptive AI systems can be “bred and raised” rather than rigidly programmed.
Currently, Nikolić is an entrepreneur building cutting-edge AI technologies as the founder of Robots Go Mental and Gating AI, companies dedicated to advancing machine intelligence beyond the limitations of deep learning.