⚠️ Important Notice for Companies Using This in Commercial Products

The code in this repository is released under the MIT License. However, companies that incorporate this code (in whole or in part) into revenue-generating commercial products or services, or that implement any patents held by Moonlight Technologies for commercial gain, must enter into a separate "Enterprise Commercial License".

Common situations that require the enterprise license include incorporating the code into revenue-generating commercial products or services and implementing the covered patents for commercial gain, as described above.

➡️ If this applies to your organization, please contact: ✉️ info@evospikenet.org

Personal use, academic research, proofs-of-concept, and prototype work by startups remain free under the MIT license.

Some build-system components, portions of the documentation, source-code comments, and unit tests were generated with GitHub Copilot.

Because the project is large, documentation may not always be perfectly synchronized with implementation — always verify against the latest source code.

EvoSpikeNet — Distributed Brain Simulation Framework

Last updated: 2026-02-25 Version: v0.1.0

1. Project Overview

EvoSpikeNet is a scalable distributed brain simulation framework inspired by the principles of functional specialization and integration in biological brains. Specialized neural modules (vision, language, motor, etc.) run as independent processes and are dynamically coordinated and integrated by a central Prefrontal Cortex (PFC) module.

A flagship capability is the Q‑PFC feedback loop implemented in the PFC. The PFC measures uncertainty in its own decisions (cognitive entropy), uses that measurement to simulate a quantum‑inspired modulation circuit, and feeds the result back into its neuronal dynamics — an advanced self‑referential control mechanism.
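
The exact interface of the modulation circuit is defined in the source code; purely as an illustration of the idea (all function and variable names below are hypothetical), the following PyTorch sketch measures decision uncertainty as normalized Shannon entropy and converts it into a gain applied back to the neuronal state:

import torch
import torch.nn.functional as F

def cognitive_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Normalized Shannon entropy of the decision distribution (0 = certain, 1 = maximally uncertain).
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    return entropy / torch.log(torch.tensor(float(logits.shape[-1])))

def quantum_like_gain(entropy: torch.Tensor, strength: float = 0.5) -> torch.Tensor:
    # Stand-in for the quantum-inspired modulation circuit: higher uncertainty
    # produces a stronger modulation of the working-memory dynamics.
    return 1.0 + strength * entropy

logits = torch.randn(4, 10)                   # decision logits for 4 working-memory items
gain = quantum_like_gain(cognitive_entropy(logits))
state = torch.randn(4, 128)                   # toy neuronal state
modulated_state = state * gain.unsqueeze(-1)  # fed back into the PFC dynamics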

Asynchronous distributed communication uses the Zenoh pub/sub model to connect simulation nodes. Multi‑process and multi‑node execution is supported to enable truly scalable neuromorphic systems beyond a single device. Gradient synchronization for distributed training is optionally available via torch.distributed (the system can run without explicit initialization of distributed backends).
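
As a minimal sketch of this pub/sub pattern (the key expression and payload are illustrative, and details of the eclipse-zenoh Python API vary slightly between releases):

import time
import zenoh

# Open a session with the default (peer-to-peer) configuration.
session = zenoh.open(zenoh.Config())

# One module subscribes to spike payloads published by another node.
def on_sample(sample):
    print("received on", sample.key_expr)

subscriber = session.declare_subscriber("evospikenet/vision/spikes", on_sample)

# Another process (here, the same one for brevity) publishes serialized spike data.
publisher = session.declare_publisher("evospikenet/vision/spikes")
publisher.put(b"\x00\x01\x01\x00")

time.sleep(1)   # give the subscriber callback a chance to fire
session.close()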

Proof‑of‑Concept Programs

See proof-of-concept/POC_README.md for example PoC programs.

Future Applications

See future_apps/README.md for planned application scenarios.

Documentation: see the docs/ directory and the Additional Documentation section below.

2. Launching the Web UI

The Web UI provides an interactive interface for submitting multimodal prompts, running simulations, monitoring real‑time state and retrieving results. After startup, access the UI at http://localhost:8050.
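
A quick reachability check after startup (a plain HTTP request to the UI address given above; this is not part of the framework itself):

import requests

response = requests.get("http://localhost:8050", timeout=5)
print("Web UI reachable, status:", response.status_code)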

Key features of the distributed brain runtime

Q‑PFC Feedback Loop

The PFC dynamically modulates working memory and processing based on cognitive entropy using a QuantumModulationSimulator. This is a core innovation of the framework.

Spiking Neural Network Language Models

Trimodal Multimodal Models

Visualization and Monitoring

Plugin Architecture

Microservice Architecture

Static Analysis / CI

Load Balancing, Configuration, and Engines

Federated Learning (Flower)

REST API and Python SDK

Web UI

Launch Options (high level)

Multiple launch options are provided. All approaches allow opening the UI at http://localhost:8050 unless otherwise stated.

2.1 NVIDIA NGC Container (GPU‑optimized)

2.2 Recommended: Integrated Docker System

Basic start:

cp .env.example .env
docker compose up

Use the full profile (docker compose --profile full up) to also start the development services, including dev, mkdocs, and the training APIs.

2.3 Legacy shell scripts

2.4 RAG System

See rag-system for RAG deployments (Milvus/Elasticsearch). A typical workflow builds and starts the RAG services, then uploads documents through the provided API, as sketched below.
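
As an illustration of that upload step only (the host, port, and endpoint path below are placeholders, not the documented RAG API; consult rag-system for the real interface):

import requests

# Placeholder URL: replace with the ingestion endpoint documented in rag-system.
RAG_UPLOAD_URL = "http://localhost:8000/documents"

with open("paper.pdf", "rb") as f:
    response = requests.post(RAG_UPLOAD_URL, files={"file": ("paper.pdf", f)})
response.raise_for_status()
print(response.json())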

2.5 Integrated Test Menu

An interactive test menu is available (python3 tests/unit/test_menu.py) to select and run grouped test categories and generate reports.

2.6 Patent Verification Test System

Automated Docker‑based verification for a set of patent implementations (MT25‑EV001 … MT25‑EV030).

3. Large‑Scale Training and Quickstart

EvoSpikeNet provides distributed LLM training pipelines tailored to each node type (for example, vision and language nodes). Example quickstart steps:
# Show config
python scripts/train_llm_models.py --config config/training_config.yaml --show-color-config
# Vision node training (rank 9)
python scripts/train_llm_models.py --config config/training_config.yaml \
--category color_learning --color-level minimum --rank 9 --download-data --gpu
# Language node training (rank 0)
python scripts/train_llm_models.py --config config/training_config.yaml \
--category langtext --rank 0 --gpu
# Or run the provided launcher
./scripts/start_japanese_training.sh

4. Infrastructure as Code (IaC)

Terraform, Ansible, Kubernetes and Docker Compose are integrated for reproducible environment management and deployments. Common helper Makefile targets are provided (make env-setup, make terraform-apply, make docker-up, etc.).

Docker and Environment

Prerequisites: Docker (with docker compose), Docker Compose v2+, NVIDIA Container Toolkit (for GPU usage).

Build images:
docker compose build

Start services (example):
docker compose up

Project Layout

Additional Documentation

See docs/ for detailed technical documents. Key documents include concept guides, SDK documentation, distributed brain system design, testing guides and implementation indexes. Many documents also include English variants where noted.

Contact and Licensing

For commercial licensing or patent questions, contact info@evospikenet.org.


For detailed, section‑by‑section usage and the most recent updates, consult the docs/ directory and the various READMEs across this repository.