⚠️ Important Notice for Companies Using This in Commercial Products
The code in this repository is released under the MIT License. However, companies that incorporate this code (in whole or in part) into revenue-generating commercial products or services, or that practice any patents held by Moonlight Technologies for commercial gain, must enter into a separate Enterprise Commercial License agreement.
Common situations that require the enterprise license:
Embedding the framework (whole or part) into a paid SaaS, application, or service.
Providing AI features to customers for compensation.
Large-scale internal usage that is directly tied to revenue generation.
➡️ If this applies to your organization, please contact: ✉️ info@evospikenet.org
Personal use, academic research, proofs-of-concept, and prototype work by startups remain free under the MIT license.
Some build-system components, portions of the documentation, source-code comments, and unit tests were generated with GitHub Copilot.
Because the project is large, documentation may not always be perfectly synchronized with implementation — always verify against the latest source code.
EvoSpikeNet — Distributed Brain Simulation Framework
Last updated: 2026-02-25 Version: v0.1.0
1. Project Overview
EvoSpikeNet is a scalable distributed brain simulation framework inspired by the principles of functional specialization and integration in biological brains. Specialized neural modules (vision, language, motor, etc.) run as independent processes and are dynamically coordinated and integrated by a central Prefrontal Cortex (PFC) module.
A flagship capability is the Q‑PFC feedback loop inside the PFC: it measures the uncertainty of its own decisions (cognitive entropy), uses that measurement to drive a quantum‑inspired modulation circuit, and feeds the result back into its neuronal dynamics, forming a self‑referential control mechanism.
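For orientation only, here is a minimal PyTorch sketch of this loop: the entropy of a decision distribution drives a bounded gain that rescales membrane state. The entropy-to-gain mapping is a hypothetical stand-in; the real QuantumModulationSimulator works differently.

# Illustrative Q-PFC loop sketch; the entropy-to-gain mapping below is
# a hypothetical stand-in, not the framework's actual circuit.
import torch
import torch.nn.functional as F

def cognitive_entropy(logits: torch.Tensor) -> torch.Tensor:
    # Shannon entropy of the PFC's decision distribution.
    p = F.softmax(logits, dim=-1)
    return -(p * (p + 1e-12).log()).sum(dim=-1)

def quantum_inspired_gain(entropy: torch.Tensor) -> torch.Tensor:
    # Map entropy to a bounded "rotation angle" and read out a gain,
    # loosely mimicking a simulated quantum modulation circuit.
    theta = torch.tanh(entropy)
    return 1.0 + 0.5 * torch.sin(theta)    # gain grows with uncertainty

logits = torch.randn(4, 10)                # 4 decisions over 10 options
gain = quantum_inspired_gain(cognitive_entropy(logits))
membrane = torch.randn(4, 128)
membrane = membrane * gain.unsqueeze(-1)   # feedback into neuronal dynamics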
Asynchronous distributed communication uses the Zenoh pub/sub model to connect simulation nodes. Multi‑process and multi‑node execution is supported to enable truly scalable neuromorphic systems beyond a single device. Gradient synchronization for distributed training is optionally available via torch.distributed (the system can run without explicit initialization of distributed backends).
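As a quick illustration of the pub/sub pattern, a minimal zenoh sketch follows (pip install eclipse-zenoh). The key expression "evospikenet/spikes" is made up for the example, and the Python API differs slightly between zenoh versions.

# Minimal zenoh pub/sub sketch; key expression is hypothetical.
import time
import zenoh

session = zenoh.open(zenoh.Config())

def on_sample(sample):
    # Invoked asynchronously for each published sample.
    print(f"received on {sample.key_expr}")

sub = session.declare_subscriber("evospikenet/spikes", on_sample)
pub = session.declare_publisher("evospikenet/spikes")
pub.put("spike-batch-0")   # payload may be str or bytes
time.sleep(1)              # give asynchronous delivery a moment
session.close()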
Proof‑of‑Concept Programs
See proof-of-concept/POC_README.md for example PoC programs.
Future Applications
See future_apps/README.md for planned application scenarios.
Documentation
Official docs site: https://evospikenet.readthedocs.io — full API reference, tutorials and architecture guides. Built with MkDocs (can be served locally on port 8001).
2. Launching the Web UI
The Web UI provides an interactive interface for submitting multimodal prompts, running simulations, monitoring real‑time state and retrieving results. After startup, access the UI at http://localhost:8050.
Key features of the distributed brain runtime
Zenoh‑based asynchronous pub/sub for robust node-to-node communication.
PFC with ChronoSpikeAttention for task routing and the Q‑PFC feedback loop for self‑modulation.
Hierarchical functional modules (visual, auditory, language, motor, etc.) implemented as parent/child pipelines across ranks/nodes.
Secure communications (PSK + DH key exchange) with AES‑256‑GCM and session key management (forward secrecy); a minimal encryption sketch follows this list. See docs/SECURE_DISTRIBUTED_BRAIN.md.
Spatial cognition and generation pathways (Where dorsal/What ventral) with dedicated spatial nodes. See docs/DISTRIBUTED_BRAIN_SPATIAL_NODES.md.
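For the secure-communications item above, here is a minimal AES‑256‑GCM sketch using the cryptography package. In the real system the session key is derived from the PSK + DH exchange rather than generated locally, and the "node-7" associated data is invented for the example.

# AES-256-GCM sketch; key generation here is a simplification, since
# EvoSpikeNet derives session keys from the PSK + DH exchange.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

nonce = os.urandom(12)                        # unique per message
ciphertext = aesgcm.encrypt(nonce, b"spike tensor bytes", b"node-7")
plaintext = aesgcm.decrypt(nonce, ciphertext, b"node-7")
assert plaintext == b"spike tensor bytes"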
Q‑PFC Feedback Loop
The PFC dynamically modulates working memory and processing based on cognitive entropy using a QuantumModulationSimulator. This is a core innovation of the framework.
Spiking Neural Network Language Models
SpikingEvoTextLM (formerly SpikingTextLM): an snnTorch‑based spiking Transformer with TAS‑Encoding and ChronoSpikeAttention. Backward‑compatible legacy variants remain available for migration.
Trimodal Multimodal Models
SpikingEvoMultiModalLM integrates text, image and audio processing.
Visualization and Monitoring
Real‑time spike clusters, network topology and training curves. Reuses the RAG visualization layout for an integrated UI.
Plugin Architecture
Dynamic plugin system with seven plugin types (Neuron, Encoder, Plasticity, Functional, Learning, Monitoring, Communication). Plugins are discovered automatically via setuptools entry points and follow an initialize → activate → execute → deactivate lifecycle, as in the sketch below.
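A hypothetical Monitoring plugin illustrating that lifecycle. The class shape and the entry-point group name "evospikenet.plugins" are assumptions, not the framework's real API.

# Hypothetical plugin following the initialize -> activate -> execute ->
# deactivate lifecycle; names below are assumptions for illustration.
class SpikeRateMonitor:
    def initialize(self, config: dict) -> None:
        self.window = config.get("window", 100)
        self.counts = []

    def activate(self) -> None:
        self.active = True

    def execute(self, spikes) -> float:
        # Record a batch of spikes and return the windowed firing rate.
        self.counts.append(sum(spikes))
        self.counts = self.counts[-self.window:]
        return sum(self.counts) / max(len(self.counts), 1)

    def deactivate(self) -> None:
        self.active = False

# Registered for automatic discovery via a setuptools entry point, e.g.
# in pyproject.toml (group name assumed):
#   [project.entry-points."evospikenet.plugins"]
#   spike_rate_monitor = "my_pkg.monitors:SpikeRateMonitor"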
Microservice Architecture
Training, Inference, Model Registry and Monitoring split into independent services for scalable deployments. API Gateway exposes a unified entry point (port 8000).
Static Analysis / CI
Integrated checks: Black, isort, Flake8, Pylint, mypy, Bandit, interrogate.
Pre‑commit hooks run many checks before commits. CI runs security scans and docstring coverage checks.
Load Balancing, Configuration, and Engines
Multiple adaptive load‑balancing strategies and instance pooling.
Pydantic‑based typed configuration, environment layering and hot reload.
Multiple SNN core engines (LIF, Izhikevich, EntangledSynchrony) and the recommended encoders SpikingEvoVisionEncoder / SpikingEvoAudioEncoder; a minimal LIF sketch follows this list.
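The snnTorch primitive underlying the LIF engine can be exercised directly. This sketch uses plain snnTorch rather than the EvoSpikeNet engine wrappers, whose interfaces are not shown here.

# Plain snnTorch LIF neuron; EvoSpikeNet's engine wrappers differ.
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)        # leaky integrate-and-fire neuron
mem = lif.init_leaky()           # initial membrane state
inputs = torch.rand(50, 1, 32)   # 50 time steps of input current

spikes = []
for t in range(inputs.size(0)):
    spk, mem = lif(inputs[t], mem)   # integrate, fire, decay
    spikes.append(spk)
print(torch.stack(spikes).sum().item(), "spikes emitted")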
Federated Learning (Flower)
Optional integration for privacy‑aware federated training across nodes.
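A minimal Flower client sketch with toy weight arrays. The Flower client API has evolved across releases, and wiring real EvoSpikeNet models into fit()/evaluate() is omitted here.

# Toy Flower NumPyClient; real training logic is elided.
import flwr as fl
import numpy as np

class EvoClient(fl.client.NumPyClient):
    def __init__(self):
        self.weights = [np.zeros((4, 4), dtype=np.float32)]

    def get_parameters(self, config):
        return self.weights

    def fit(self, parameters, config):
        self.weights = parameters          # load global weights
        # ... local training would update self.weights here ...
        return self.weights, 1, {}

    def evaluate(self, parameters, config):
        return 0.0, 1, {"accuracy": 1.0}   # loss, n_examples, metrics

fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                             client=EvoClient())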
REST API and Python SDK
FastAPI backend with a typed EvoSpikeNetAPIClient SDK, retry/backoff behavior, and Jupyter integration (%evospikenet_connect, %%evospikenet_generate).
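A hedged usage sketch: EvoSpikeNetAPIClient is the SDK's real class name, but the import path, constructor arguments, and generate() call shown below are assumptions; consult the SDK documentation for the actual surface.

# Hypothetical SDK usage; import path and method names are assumptions.
from evospikenet.client import EvoSpikeNetAPIClient  # path assumed

client = EvoSpikeNetAPIClient(base_url="http://localhost:8000",
                              max_retries=3)         # retry/backoff built in
result = client.generate(prompt="Describe the scene", modality="text")
print(result)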
Web UI
Dash‑based multipage app for generating data, training, inference, analysis and system administration. Real‑time monitoring for node state, energy and spike activity.
Launch Options (high level)
Multiple launch options are provided. All approaches allow opening the UI at http://localhost:8050 unless otherwise stated.
2.1 NVIDIA NGC Container (GPU‑optimized)
Use docker-compose.ngc.yml for an NGC-optimized stack and optionally run a Jupyter notebook environment (port 8888).
2.2 Recommended: Integrated Docker System
Copy .env.example to .env and configure BUILD_TARGET / ENABLE_GPU.
Basic start:
cp .env.example .env
docker compose up
Use docker compose --profile full up to also start the development profiles, including dev, mkdocs, and the training APIs.
2.3 Legacy shell scripts
GPU: sudo ./scripts/run_frontend_gpu.sh
CPU: sudo ./scripts/run_frontend_cpu.sh
2.4 RAG System
See the rag-system directory for RAG deployments (Milvus/Elasticsearch). The typical workflow builds and starts the RAG services, then uploads documents via the provided API.
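A hypothetical upload call for that last step; the endpoint path and port are assumptions for illustration, not the service's documented contract.

# Hypothetical document upload to the RAG API; URL is an assumption.
import requests

with open("paper.pdf", "rb") as f:
    resp = requests.post("http://localhost:9000/documents",  # assumed URL
                         files={"file": f})
resp.raise_for_status()
print(resp.json())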
2.5 Integrated Test Menu
An interactive test menu is available (python3 tests/unit/test_menu.py) to select and run grouped test categories and generate reports.
2.6 Patent Verification Test System
Automated Docker‑based verification for a set of patent implementations (MT25‑EV001 … MT25‑EV030).
3. Large‑Scale Training and Quickstart
EvoSpikeNet provides pipelines for distributed LLM training aligned to node types. Example quickstart steps:
# Show config
python scripts/train_llm_models.py --config config/training_config.yaml --show-color-config
# Vision node training (rank 9)
python scripts/train_llm_models.py --config config/training_config.yaml \
--category color_learning --color-level minimum --rank 9 --download-data --gpu
# Language node training (rank 0)
python scripts/train_llm_models.py --config config/training_config.yaml \
--category langtext --rank 0 --gpu
# Or run the provided launcher
./scripts/start_japanese_training.sh
4. Infrastructure as Code (IaC)
Terraform, Ansible, Kubernetes and Docker Compose are integrated for reproducible environment management and deployments. Common helper Makefile targets are provided (make env-setup, make terraform-apply, make docker-up, etc.).
Docker and Environment
Prerequisites: Docker with the Compose v2+ plugin (docker compose), and the NVIDIA Container Toolkit for GPU usage.
Build images:
docker compose build
Start services (example):
docker compose up
Project Layout
evospikenet/ — core framework source
frontend/ — Dash Web UI
tests/ — pytest unit and integration tests
scripts/ — convenience scripts for running, training and tests
examples/ — example programs
Additional Documentation
See docs/ for detailed technical documents. Key documents include concept guides, SDK documentation, distributed brain system design, testing guides and implementation indexes. Many documents also include English variants where noted.
Contact and Licensing
For commercial licensing or patent questions, contact info@evospikenet.org
For detailed, section‑by‑section usage and the most recent updates, consult the docs/ directory and the various READMEs across this repository.