nexus.dragonsource.org · Active

PROJECT NEXUS

The Global Neural Hub

The centralized island architecture connecting 72 subsystems across 9 platforms through a three-pillar database convergence, enriching every AI request with context drawn from 61,271 keyword-destination pairs before routing it to any of three global LLM providers.

Overview

The Convergence Point

Project Nexus is the centralized island architecture at the heart of the DragonSource ecosystem. Hosted on Google Cloud Platform and launched publicly on April 1, 2026, it serves as the single convergence point for every application, subsystem, database, and AI agent across the entire network — 72 subsystems spanning 9 production platforms, unified under one operational core.

At its center is the Tower — the API exchange core where a three-pillar database architecture (Neon × Supabase × Firebase) merges structured relational data, real-time telemetry, and document-based memory into a single contextualized payload. Before any request reaches an LLM endpoint, it is matched against an index of 61,271 keyword-destination pairs drawn from 78 memory parameter files, transforming generic artificial intelligence into deeply personalized, context-aware intelligence.
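As a rough illustration of this enrichment step, the sketch below matches a prompt against keyword-destination pairs and attaches the matched memory destinations before the request would be routed onward. All names here (KeywordIndex, EnrichedRequest, the file paths) are hypothetical, not the actual Nexus API.

```python
from dataclasses import dataclass, field

@dataclass
class EnrichedRequest:
    """A prompt plus the memory destinations matched against it."""
    prompt: str
    context: list = field(default_factory=list)

class KeywordIndex:
    def __init__(self, pairs: dict[str, str]):
        # pairs maps a keyword to the memory "destination" holding its context
        self.pairs = pairs

    def enrich(self, prompt: str) -> EnrichedRequest:
        # Tokenize naively; a real index would use smarter matching
        tokens = {t.strip(".,!?").lower() for t in prompt.split()}
        hits = [dest for kw, dest in self.pairs.items() if kw in tokens]
        return EnrichedRequest(prompt=prompt, context=sorted(hits))

# Two illustrative pairs standing in for the full 61,271-entry index
index = KeywordIndex({
    "invoice": "memory/finance.json",
    "roadmap": "memory/planning.json",
})
req = index.enrich("Show me the Q3 roadmap")
print(req.context)  # → ['memory/planning.json']
```

The point of the sketch is the ordering: enrichment happens before provider routing, so every downstream LLM call already carries the matched context.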

This is not a microservices mesh or an API gateway. This is a neural hub — a topographical island where data flows converge, context is forged, and every system in the ecosystem speaks through a single, authorized voice. Nexus is the infrastructure layer that makes the DragonSource vision possible: AI that knows you, understands context, and operates with institutional memory.

Telemetry

By the Numbers

A comprehensive inventory of every subsystem, memory parameter, AI agent, and database layer that flows through the Neural Hub architecture.

72 — Subsystems — Across 9 Platforms — Full ecosystem integration
9 — Platforms — Production Ecosystem — Dragon through Router
3 — DB Pillars — Foundation Layer — Operations, Runtime & Memory
78 — Memory Files — EVA Context Store — JSON files across 3 zones
61,271 — Keywords — Indexed & Mapped — Keyword-destination pairs
34 — AI Agents — Active Deployment — Social, Executive & Core

Foundation

Three Database Pillars

The geological layers beneath the Island — the bedrock the Storage Containers and Tower are built upon. When a request arrives, the Tower performs a three-pillar fetch, merging all streams into a single contextualized packet.
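A minimal sketch of what such a three-pillar fetch could look like, with async stubs standing in for the real Neon, Supabase, and Firebase client calls (the function names and the merged-packet shape are assumptions for illustration):

```python
import asyncio

async def fetch_neon(user_id: str) -> dict:
    # Stub: structured relational state from the PostgreSQL pillar
    return {"platform_state": "active", "user_id": user_id}

async def fetch_supabase(user_id: str) -> dict:
    # Stub: live telemetry and authentication context
    return {"session": "live", "authenticated": True}

async def fetch_firebase(user_id: str) -> dict:
    # Stub: document-based personal memory
    return {"memory": ["prefers concise answers"]}

async def three_pillar_fetch(user_id: str) -> dict:
    # Query all three pillars concurrently rather than sequentially
    neon, supa, fire = await asyncio.gather(
        fetch_neon(user_id), fetch_supabase(user_id), fetch_firebase(user_id)
    )
    # Merge the three streams into a single contextualized packet
    return {"operational": neon, "runtime": supa, "memory": fire}

packet = asyncio.run(three_pillar_fetch("user-42"))
print(packet["runtime"]["authenticated"])  # → True
```

Running the three queries with `asyncio.gather` keeps the fetch latency close to the slowest single pillar rather than the sum of all three.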

01 — Neon — PostgreSQL

The structured relational backbone. Platform configurations, user management, governance rules, and operational state — queryable, transactional, and strongly typed.

Feeds → Provides the operational context layer for every Tower request — platform state, session data, and configuration parameters.

02 — Supabase — Real-time Layer

The nervous system. Real-time data streams, authentication, live telemetry, and event-driven triggers that keep every subsystem synchronized.

Feeds → Delivers live state and authentication context — ensuring every AI response reflects the current moment.

03 — Firebase — Document Store

The document-centric memory layer. Firestore collections, real-time sync, cloud functions, and the flexible schema that adapts to the evolving ecosystem.

Feeds → Supplies the personal memory and Mind Map data — 61,271 keyword-destination pairs for personalized intelligence.
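One plausible way the 78 memory parameter files could be flattened into a single keyword → destination map at startup is sketched below. The JSON layout ({"keywords": {...}}) and the directory structure are assumptions, not the documented Nexus schema.

```python
import json
import tempfile
from pathlib import Path

def load_mind_map(memory_dir: Path) -> dict[str, str]:
    """Flatten every *.json memory file in a zone into one keyword map."""
    mapping: dict[str, str] = {}
    for path in sorted(memory_dir.glob("*.json")):
        doc = json.loads(path.read_text())
        for keyword, destination in doc.get("keywords", {}).items():
            mapping[keyword.lower()] = destination  # last write wins on collisions
    return mapping

# Demonstrate with a throwaway zone containing one memory file
with tempfile.TemporaryDirectory() as d:
    zone = Path(d)
    (zone / "finance.json").write_text(
        json.dumps({"keywords": {"Invoice": "memory/finance.json"}})
    )
    pairs = load_mind_map(zone)
    print(pairs)  # → {'invoice': 'memory/finance.json'}
```

Loading the map once at startup trades a small amount of memory for constant-time keyword lookups on every request.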

Infrastructure

What Powers the Hub

The production stack behind Nexus — from cloud hosting and serverless compute to the three database pillars and three LLM providers that power every AI interaction.

GCP — Google Cloud Platform — Hosting & Compute
RUNTIME — Cloud Run — Serverless Containers
PILLAR 01 — Neon PostgreSQL — Relational Layer
PILLAR 02 — Supabase — Real-time & Auth
PILLAR 03 — Firebase — Document Store & Functions
LLM — OpenAI — GPT-4o / o1 / o3
LLM — Google Gemini — Gemini 2.5 Pro
LLM — Anthropic — Claude Sonnet 4
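The stack ends with three LLM providers behind one hub, which implies a dispatch layer with a common interface. A hedged sketch of that idea, with stub functions standing in for the real openai, google-genai, and anthropic SDK calls:

```python
from typing import Callable

# Stubs: a production version would call each vendor's SDK here
def call_openai(prompt: str) -> str:
    return f"[gpt] {prompt}"

def call_gemini(prompt: str) -> str:
    return f"[gemini] {prompt}"

def call_anthropic(prompt: str) -> str:
    return f"[claude] {prompt}"

# One registry, one calling convention for all three providers
PROVIDERS: dict[str, Callable[[str], str]] = {
    "openai": call_openai,
    "gemini": call_gemini,
    "anthropic": call_anthropic,
}

def route(provider: str, prompt: str) -> str:
    """Dispatch an enriched prompt to the named provider."""
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](prompt)

print(route("gemini", "hello"))  # → [gemini] hello
```

Keeping the providers behind one `route` function means enrichment and storage logic never need to know which vendor ultimately answers.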