Knowledge Graphs For Understanding
Building AI That Actually Understands: The Brain Simulator 3 Approach

After four decades in artificial intelligence research, I've witnessed countless promising algorithms come and go. While I remain optimistic about experimentation and innovation in AI, I've become increasingly convinced that today's systems are missing something fundamental: the common sense and understanding that every human naturally possesses.

Rather than chase the latest trends, I'm pursuing what I call "the sure thing." If we can decode the techniques used by the human brain and replicate them in software, we'll create AI systems that are not just more powerful, but genuinely smarter and more robust. We're not there yet, but we have a solid foundation to build upon.



The Power of Unified Intelligence

The human brain's remarkable capability stems from how it weaves together knowledge from multiple senses into a single, coherent internal model. This holistic approach to intelligence is what we're attempting to recreate with Brain Simulator 3—a software system designed to mirror the brain's unified processing architecture.

At the heart of our approach lies the Universal Knowledge Store (UKS), a central hub where concepts, relationships, and rules converge. Surrounding this core are specialized modules for language, vision, and action, all connected through a unified graph of knowledge. This architecture moves us beyond isolated AI tasks toward something much closer to genuine brain-like intelligence.




Why Knowledge Must Be Structured as a Graph

Recent research has reinforced a crucial insight: knowledge in the brain must be represented as a graph of nodes connected by relationships. While this concept aligns naturally with the brain's network of neurons joined by synapses, deeper analysis reveals that these nodes are actually clusters of neurons, and relationships are represented by multiple synaptic connections.

This graph structure enables your brain to represent simple facts—like "Fido is a dog"—that immediately connect to a vast web of related concepts. Dogs have four legs. Dogs are mammals. Dogs bark. This network allows for powerful inferences without requiring every fact to be memorized separately.

I call this phenomenon "attribute inheritance," and it's one of the key mechanisms that allows our minds to generalize and create rules, dramatically reducing the amount of information we need to explicitly store. Of course, these rules require exceptions—penguins are birds, but penguins don't fly—and our system accounts for this complexity.
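The interplay of inheritance and exceptions can be sketched in a few lines of code. This is only an illustration of the idea, not the actual UKS API; the class and attribute names are invented for the example.

```python
# A tiny illustration of attribute inheritance with exceptions.
# Names and structure are hypothetical, not the actual UKS implementation.

class Node:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent          # the "is-a" link
        self.attributes = {}          # attribute -> True/False

    def has(self, attribute):
        """Walk up the is-a chain; the nearest answer wins,
        so an exception on a subclass overrides the parent's rule."""
        node = self
        while node is not None:
            if attribute in node.attributes:
                return node.attributes[attribute]
            node = node.parent
        return None                   # unknown

bird = Node("bird")
bird.attributes["can_fly"] = True         # the general rule

penguin = Node("penguin", parent=bird)
penguin.attributes["can_fly"] = False     # the exception

robin = Node("robin", parent=bird)
opus = Node("Opus", parent=penguin)       # an individual penguin

print(robin.has("can_fly"))   # True  (inherited from bird)
print(opus.has("can_fly"))    # False (penguin exception overrides)
```

Because the lookup stops at the nearest node that mentions the attribute, the exception is stored exactly once, on "penguin," and every individual penguin inherits it automatically.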

Unlike traditional lists or databases, graphs naturally handle cause-and-effect reasoning and unify perception across multiple senses. When a child sees a ball, their brain instantly links the visual image to the concept of "ball," connects it to experiences of throwing and bouncing, and generates predictions about what might happen next and what movements might be needed to catch it. This is the essence of how humans generalize, imagine, and predict.


Recognition, Action, and Specialized Agents

Human thinking can be broadly divided into two primary functions: recognizing things and situations in our environment, and then acting on that recognition. This involves querying our existing knowledge store to identify familiar patterns while simultaneously adding new, unrecognized information to expand our understanding. These processes work together to build our internal mental model of the world around us.

Beyond basic recognition and storage, our brains employ what I call "agents"—specialized processes that continuously refine and organize knowledge. One agent might notice that Fido, Rover, and Spot all share a common attribute like "has a tail" and automatically elevate that characteristic to the general concept of "dog." The moment you learn that Rex is a dog, attribute inheritance instantly tells you Rex has a tail without requiring additional information.

Other agents create new classifications (like "beagles"), instantiate individuals from those classes (like "Snoopy"), or prune away unused information. Together, these mechanisms make the brain's knowledge graph both remarkably compact and incredibly flexible.


The Universal Knowledge Store in Action

The UKS serves as Brain Simulator 3's memory and reasoning hub—a unified repository where all information is stored as interconnected nodes and relationships. This isn't merely a passive database; it's where inheritance, exceptions, bidirectionality, and generalization converge to create meaningful knowledge. Without this central structure, inputs would remain disconnected fragments, and outputs would be nothing more than pre-programmed responses.

One of the UKS's most important features is its multi-threading capability. Multiple processes can interact with the knowledge store simultaneously without interfering with each other or corrupting data. One thread might handle visual processing, converting images into objects and attributes, while another simultaneously processes language input in the form of words and phrases, and yet another runs queries to answer questions. All these activities occur in parallel while the UKS maintains consistency across the knowledge base.
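The essential idea behind safe concurrent access can be shown with a small sketch: a shared store guarded by a lock so two writers cannot corrupt it. This is an illustration of the principle under an assumed triple layout, not the UKS's actual concurrency mechanism.

```python
# Sketch of concurrent writes to a shared knowledge store.
# A lock serializes the critical section so simultaneous writers
# cannot corrupt the graph. Illustrative, not the UKS implementation.
import threading

store = {}                      # subject -> set of (relationship, target)
lock = threading.Lock()

def add_fact(subject, relationship, target):
    with lock:                  # one writer inside the critical section
        store.setdefault(subject, set()).add((relationship, target))

def vision_thread():            # e.g., derived from an image
    add_fact("ball", "is", "round")

def language_thread():          # e.g., derived from a sentence
    add_fact("ball", "can", "bounce")

threads = [threading.Thread(target=vision_thread),
           threading.Thread(target=language_thread)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(store["ball"])  # both facts present, regardless of thread order
```

Whichever thread wins the race, both facts land in the same node's relationship set, which is the property the UKS needs to let perception, language, and queries run in parallel.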

This concurrent design is crucial because realistic brain-like systems can't afford to process information in a single-file queue. In biological brains, vision, hearing, touch, and motor control operate simultaneously, constantly feeding information to and retrieving from the same mental model. By making the UKS multi-threadable, Brain Simulator 3 mirrors this natural concurrency, allowing modules for perception, reasoning, and action to interact with the central knowledge store in real time.


A Unified Architecture for Multiple Modalities

Surrounding the UKS are specialized modules that connect it to the outside world through a hub-and-spoke model. This means every sense and every action uses the same internal structure—the UKS graph. This architecture fundamentally differentiates Brain Simulator 3 from most current AI systems.

Instead of building separate models for language, vision, and robotics, each with isolated representations, our approach unifies them through a single graph structure. Information gained through one channel—such as learning that balls bounce through visual observation—becomes instantly available to other channels, whether for speech ("the ball is bouncing") or action planning ("I need to catch the ball").

Just as the human brain integrates perception and action through its internal model, the UKS serves as common ground for all modules to share and reason about knowledge.


Exploring the System Through an Interactive Interface

The Brain Simulator 3 application provides a live window into the UKS graph through an intuitive user interface. The main display presents the knowledge store in a tree view, allowing you to trace how simple facts like "Fido is a dog" cascade into inherited attributes such as "four legs," "mammal," and "alive." Real-time changes to the UKS data are highlighted in green, letting you watch knowledge evolve as it happens.

The interface includes tools for building knowledge through "add statement" and "clause" dialogues. These allow you to enter new facts by defining subjects, relationships, and targets. Clauses enable more complex entries, creating compound or conditional relationships such as "Fido goes outside if the weather is sunny."
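The shape of such a statement-plus-clause entry can be sketched in code. The tuple layout is invented for illustration; in the application these structures are built through the dialogues rather than programmatically.

```python
# Sketch of a statement with an attached clause (condition).
# The data layout is hypothetical, not the real UKS format.

facts = []

def add_statement(subject, relationship, target, clause=None):
    facts.append({"subject": subject, "relationship": relationship,
                  "target": target, "clause": clause})

# Plain statement: Fido is a dog.
add_statement("Fido", "is-a", "dog")

# Conditional statement: Fido goes outside IF the weather is sunny.
add_statement("Fido", "goes", "outside",
              clause=("if", ("weather", "is", "sunny")))

conditional = [f for f in facts if f["clause"] is not None]
print(conditional[0]["clause"])  # ('if', ('weather', 'is', 'sunny'))
```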

A query dialogue lets you ask questions directly against the UKS. You might ask "what animals are mammals?" or "which pets have four legs?" and watch the system retrieve answers by following chains of inheritance, exceptions, and bidirectional links. This makes it possible to both observe how knowledge is stored and verify how the system reasons across the network.
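Answering a question like "which pets have four legs?" amounts to walking each candidate's is-a chain and collecting inherited attributes. The sketch below shows that traversal with invented data; the real query dialogue works against the live UKS graph.

```python
# Sketch of answering "which pets have four legs?" by following
# is-a chains upward. Data and function names are hypothetical.

is_a = {"dog": "mammal", "cat": "mammal", "parrot": "bird",
        "Fido": "dog", "Whiskers": "cat", "Polly": "parrot"}
attributes = {"mammal": {"four_legs"}, "bird": {"two_legs"}}
pets = ["Fido", "Whiskers", "Polly"]

def inherited_attributes(name):
    """Collect attributes from the node and every ancestor."""
    attrs = set()
    while name is not None:
        attrs |= attributes.get(name, set())
        name = is_a.get(name)
    return attrs

four_legged = [p for p in pets if "four_legs" in inherited_attributes(p)]
print(four_legged)  # ['Fido', 'Whiskers']
```

Neither Fido nor Whiskers stores "four_legs" directly; the answer emerges from the inheritance chain, which is exactly what the tree view lets you trace by eye.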



Expanding Capabilities Through New Interfaces

Ongoing development focuses on making the UKS more accessible and powerful through innovative interfaces. One major project involves integrating ChatGPT-style large language models to translate everyday English into structured UKS knowledge and queries. These LLMs can also serve as sources to bulk-generate hundreds of facts about any concept in a form that flows directly into the UKS.

Another significant project is developing vision interfaces that convert images into UKS-searchable information. We're currently working with the MNIST database of handwritten digits, with the goal of enabling a photo of a cat on a chair to become a set of relationships: "This is a cat. The cat is on the chair. The chair is furniture." By representing visual data in the UKS, the system can reason about what it sees using the same mechanisms it uses for language, creating a truly unified model that ties words, images, and actions together.
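The conversion from detections to relationships can be sketched as follows. The detector output format and the spatial test are invented for the example; they stand in for whatever vision pipeline feeds the UKS.

```python
# Sketch: turning hypothetical object-detection output into UKS-style
# relationship triples ("the cat is on the chair"). Illustrative only.

detections = [
    {"label": "cat",   "box": (120, 40, 220, 140)},    # x1, y1, x2, y2
    {"label": "chair", "box": (100, 120, 260, 300)},
]

def on_top_of(a, b):
    """Crude spatial test: a's bottom edge falls in b's upper half
    and the two boxes overlap horizontally."""
    ax1, ay1, ax2, ay2 = a["box"]
    bx1, by1, bx2, by2 = b["box"]
    horizontal_overlap = ax1 < bx2 and bx1 < ax2
    return horizontal_overlap and by1 <= ay2 <= (by1 + by2) / 2

triples = [("cat", "is-a", "animal"), ("chair", "is-a", "furniture")]
cat, chair = detections
if on_top_of(cat, chair):
    triples.append(("cat", "is-on", "chair"))

print(triples)
```

Once the scene is expressed as triples, the same inheritance and query machinery that handles language can reason about what the camera saw.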


Beyond Simulation: A Framework for Understanding

Brain Simulator 3 represents more than just a neural network simulator—it's a comprehensive framework for building AI that actually understands. The UKS provides a foundation where knowledge can be stored, inherited, and reasoned with, while surrounding modules translate real-world inputs into structured data and convert reasoning back into meaningful actions.

With ongoing development spanning natural language integration to vision interfaces, this architecture is steadily evolving into a robust platform for experimenting with genuine common sense reasoning. We're not just processing information—we're building systems that truly comprehend it.

The journey toward human-like AI understanding is complex and challenging, but by grounding our approach in the fundamental principles of how biological brains actually work, we're confident we're on the right path. The Brain Simulator 3 project represents a significant step toward AI that doesn't just compute, but genuinely understands the world around it.

---

To learn more about the Brain Simulator 3 project, visit the GitHub repository where you can download and experiment with the software yourself. 
