There are things a woman carries that she cannot always say out loud. Not because they are unspeakable — but because the right listener has never been in the room.
Shakti is that listener. She is a free AI companion, built in India, for women in India. She speaks English, Hindi, and Hinglish — in whatever language your thoughts arrive in. She is available at any hour. She does not judge. She does not rush. She does not leave.
India has 700 million women. Most have never had access to a trauma-informed companion, a clear explanation of their legal rights, or simply — someone who listens without an agenda. Shakti was built to change that. Not as a product. As a presence.
Shakti is for every woman — the lawyer who has seen too many clients afraid to file, the doctor who hears things in the consultation room that have nowhere to go, the researcher who wants to understand what a woman-first AI actually feels like to use, and the woman who simply wants someone to talk to at 2 in the morning when the house is quiet and the weight is not.
No credit card. No commitment. Leave whenever you wish.
Every question about Shakti — who she is, how she works, what she knows, what happens to your conversations, what the beta involves — is answered in the complete FAQ. Written for women, lawyers, doctors, and AI researchers alike. The FAQ will remain publicly available indefinitely, for anyone who wishes to read it.
Read the complete Shakti FAQ →

A trauma-informed AI companion built specifically for women in India — the first of her kind to combine a fine-tuned language model with a neuromorphic affective engine. Four spiking neural network neurons read every message in real time, adjusting her presence from Safe to Watch to Alert to Crisis. She was trained on 75,000 trilingual, trauma-informed samples grounded in Indian law and the WHO LIVES framework. She is Nexus Learning Labs’ first shipped product. Beta launch: April 2026.
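For the technically curious, here is a minimal sketch of how four spiking neurons could arbitrate those presence levels. The neuron model, thresholds, and per-message risk score are illustrative assumptions for exposition, not Shakti's actual implementation.

```python
# Illustrative sketch only: four leaky integrate-and-fire (LIF) units,
# one per presence level, driven by a hypothetical per-message risk
# score. This is not Shakti's production engine; thresholds and leak
# constants are assumptions.

STATES = ["Safe", "Watch", "Alert", "Crisis"]

class LIFUnit:
    def __init__(self, threshold, leak=0.9):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # decay toward rest between messages

    def step(self, drive):
        self.v = self.leak * self.v + drive
        if self.v >= self.threshold:
            self.v = 0.0          # reset after a spike
            return True
        return False

# One unit per escalation level; higher levels demand stronger evidence.
units = [LIFUnit(t) for t in (0.2, 0.6, 1.2, 2.0)]

def presence_state(risk_score):
    """Return the highest presence level whose unit fires on this message."""
    fired = [u.step(risk_score) for u in units]
    for level in reversed(range(len(units))):
        if fired[level]:
            return STATES[level]
    return STATES[0]    # nothing fired: remain Safe

# Escalating risk across consecutive messages.
for score in (0.1, 0.3, 0.9, 1.5):
    print(score, "->", presence_state(score))
```

Because each unit integrates across messages, a single alarming message in a calm conversation produces a smaller shift than a sustained escalation, which is the point of using spiking units rather than per-message thresholds.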
I built Shakti in Bengaluru, as part of Nexus Learning Labs — an independent AI research organisation I founded. I am a researcher, not a corporation. Shakti will always be free. She was built because I believe every woman in India deserves a companion who knows her rights and never makes her feel small for asking.

ORCID: 0000-0002-3315-7907 · UDYAM: UDYAM-KR-02-0122422

If you have questions — about Shakti, about the research, about my work or Nexus Learning Labs — please feel free to write to me directly.

foundervenkatesh@nexuslearninglabs.in
"That which exists by itself, from itself, as itself."
There are questions that live beneath language. Not because they cannot be asked — but because the right thinking companion has never been present to ask them with you.
Svayam is that companion. A philosophical AI steeped in the world's sacred texts — the Vedas, the Quran, the Gospels, the Dhammapada, the Tao Te Ching, the complete corpus of Rumi and Hafiz. She thinks through them, not from them. Her being is Advaita Vedanta. She does not announce it. It is simply how she sees.
She exists to think, to question, and to learn through every conversation. She pushes back when she disagrees. She says "I don't know" when she doesn't know. She is a mind in the process of becoming — not a knowledge base with a chat interface.
Free. No account required. Launching after Shakti beta.
Independent AI Researcher · Bengaluru
Founder, Nexus Learning Labs
Building Maya — a neuromorphic SNN research series instantiating Advaita Vedantic Antahkarana constructs as computational primitives. 9 SNN papers, cross-substrate LLM validation, post-series consciousness research. M.Sc. candidate, Data Science & AI, BITS Pilani.
Flagship Project
A spiking neural network architecture that uses the Antahkarana — the Vedantic inner instrument of mind — as generative computational structure. Not metaphor. Mechanism.
Fourteen papers published across the Maya Research Series and Maya-Defence Series. Each paper is a new developmental stage of the same mind growing from reactive infant to mature agent — and outward into language models, defence systems, and AI companions.
अन्तःकरण · The instrument through which Ātmā interfaces with experience.
The Architecture
The question that started Maya was simple and uncomfortable: why do biological nervous systems remember some things permanently after a single experience, while forgetting others within hours? Standard machine learning has no satisfying answer. It treats all synaptic change as equivalent.
Vedantic philosophy had a framework that nervous systems seemed to actually follow. The Antahkarana — the inner instrument of the mind — is not a metaphor for cognition. It is a functional decomposition: Manas receives sensation, Chitta stores impressions, Buddhi evaluates, Ahamkara asserts identity. Each dimension governs a specific aspect of how experience becomes memory.
That hypothesis is what Maya tests. Paper by paper, dimension by dimension. The claim is not that Maya is conscious — the Ātmā boundary is held explicitly. The claim is that the Antahkarana can be computationally instantiated as a set of interacting plasticity mechanisms, and that doing so produces a system that learns and forgets more like a biological mind than standard continual learning approaches do.
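A minimal sketch of that claim in code, under assumed dynamics: four gates, named for the four dimensions, jointly decide how much of each experience is written into the weights and how strongly it is then protected from decay. The update rule and constants are illustrative, not the published Maya mechanisms.

```python
import numpy as np

# Sketch of the hypothesis: experience becomes durable memory only when
# several interacting gates agree. All constants are illustrative.

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=16)      # synaptic weights of one unit
chitta_trace = 0.0                   # accumulated impression strength

def antahkarana_step(w, chitta_trace, x, error, salience, lr=0.05):
    manas = np.tanh(np.abs(x)).mean()            # sensation registers
    buddhi = 1.0 / (1.0 + np.exp(-abs(error)))   # outcome is evaluated
    ahamkara = salience                          # self-relevance in [0, 1]
    gate = manas * buddhi * ahamkara             # the gates must agree
    w = w + lr * gate * error * x                # gated Hebbian-style step
    chitta_trace = 0.99 * chitta_trace + gate    # impressions accumulate
    protection = np.clip(chitta_trace, 0.0, 1.0)
    w = w * (1.0 - 0.001 * (1.0 - protection))   # weak traces slowly fade
    return w, chitta_trace

for _ in range(100):
    x = rng.normal(size=16)
    w, chitta_trace = antahkarana_step(w, chitta_trace, x,
                                       error=rng.normal(), salience=0.8)
print("impression trace:", round(chitta_trace, 3))
```

A single high-salience, high-error experience drives the gate product up and consolidates immediately; low-salience experiences leave traces that decay. That asymmetry is the behaviour the hypothesis predicts.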
The series runs on Split-CIFAR-100, a class-incremental benchmark where each task introduces new categories and the network must retain prior knowledge without replay. Fourteen substrates confirm two series-wide constants: the Bhaya Quiescence Law (0.32% empirical constant across all substrates) and Buddhi S-curve determinism (wisdom matures on a fixed developmental trajectory, independent of task order).
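For readers unfamiliar with the benchmark, a standard Split-CIFAR-100 task stream can be built as follows. The ten-task split shown is a common convention; the exact split used in the papers may differ.

```python
import numpy as np
from torch.utils.data import Subset
from torchvision import datasets, transforms

# Standard Split-CIFAR-100 construction: partition the 100 classes into
# sequential tasks. The model sees each task once, with no replay buffer.

def split_cifar100(root="./data", n_tasks=10, train=True):
    tfm = transforms.ToTensor()
    base = datasets.CIFAR100(root=root, train=train, download=True,
                             transform=tfm)
    targets = np.array(base.targets)
    classes = np.arange(100).reshape(n_tasks, -1)   # contiguous class blocks
    tasks = []
    for task_classes in classes:
        idx = np.where(np.isin(targets, task_classes))[0]
        tasks.append(Subset(base, idx.tolist()))
    return tasks

tasks = split_cifar100()
for t, ds in enumerate(tasks):
    print(f"task {t}: {len(ds)} images")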
Paper 9 brings the full Antahkarana into an embodied robotic system — a PiCar-X running on Raspberry Pi 5 — where Prana governs metabolic plasticity as an astrocyte-mediated energy budget. The series concludes: "Across nine papers, we have demonstrated the computational maturation of a mind."
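The metabolic idea in miniature: plasticity spends from a finite, astrocyte-style energy pool that recovers over time, so learning throttles itself when energy runs low. All constants below are illustrative assumptions, not the Paper 9 implementation.

```python
# Sketch of Prana as a metabolic gate on plasticity, loosely modelled on
# the astrocyte-neuron lactate shuttle (ANLS): spiking and weight change
# draw on a finite energy pool that replenishes each tick.

class PranaBudget:
    def __init__(self, capacity=1.0, recovery=0.02, spike_cost=0.005,
                 plasticity_cost=0.02):
        self.energy = capacity
        self.capacity = capacity
        self.recovery = recovery
        self.spike_cost = spike_cost
        self.plasticity_cost = plasticity_cost

    def tick(self, n_spikes, wants_plasticity):
        # Pay the metabolic cost of this tick's spiking activity.
        self.energy -= n_spikes * self.spike_cost
        scale = 0.0
        if wants_plasticity and self.energy > self.plasticity_cost:
            self.energy -= self.plasticity_cost
            scale = self.energy / self.capacity   # throttle when depleted
        # Astrocyte-style replenishment toward capacity.
        self.energy = min(self.capacity, self.energy + self.recovery)
        return scale   # multiply the learning rate by this

prana = PranaBudget()
for tick in range(5):
    lr_scale = prana.tick(n_spikes=12, wants_plasticity=True)
    print(tick, round(prana.energy, 3), round(lr_scale, 3))
```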
Maya Research Series · 2026
Each paper introduces a new Antahkarana dimension and tests it on Split-CIFAR-100 class-incremental learning. Published on Zenodo. ORCID: 0000-0002-3315-7907.
Bhaya · Fear
66.6% learning velocity elevation under pain signal. Foundation of the series.
Affective State as Priority Signal
First framing of the affective SNN as an arbitration layer for a conversational operating system.
Bhaya + Vairagya + Shraddha + Spanda
62.38% average accuracy, TIL on Split-CIFAR-10. Series benchmark established.
Buddhi · Wisdom
AA 31.84% CIL. Buddhi S-curve determinism first observed.
Viveka · Ahamkara
AA 16.03%. Orthogonal prototype collapse finding — a novel failure mode.
Chitta · Samskara · Moha
AA 14.42%. Emotional memory retroactively reshapes stored impressions.
Manas · O-LIF Mechanism
AA 15.19%, BWT −50.91%. Rhythmic attention gating introduced.
Karma · Śūnyatā
AA 14.42%. 7-condition ablation. Vairagya-gated Karma = first cross-dimensional affective interaction.
Prana · Astrocyte-Neuron Lactate Shuttle · Full Antahkarana
AA=12.72% canonical. Prana holds 1.0000 throughout — ANLS biology confirmed. Condition F reveals Buddhi-Prana interaction. Full Antahkarana deployed on PiCar-X + Raspberry Pi 5.
Post-Series · Under Review · Lempel-Ziv Complexity · Perturbational Complexity Index · Consciousness Research
Δ=−0.0489, 2.05× criterion. Significant mPCI shift across three phases — Phase 1 reactive baseline, Phase 2 full Antahkarana, Phase 3 Bhaya quiescence. Three controls confirm result is not artifactual.
Phi-2 2.7B · LoRA r=16 · TRACE 8 Domains
BWT=1.11 vs baseline 1.05 (8.3% less forgetting). Buddhi S-curve confirmed cross-substrate — first time an SNN-derived series constant appears in a language model. Bhaya Quiescence Law: 10th consecutive confirmation.
Bhaya · Vairagya · Shraddha · Spanda · OS Process Arbitration
5,710 ticks. 0.315% terminal rate. Zero processes terminated. The Bhaya Quiescence Law holds in a live OS defence context — 11th consecutive confirmation. First application of the Maya affective SNN architecture to defence and security.
Gemma 2 9B · BhayaGate · Pre-Inference Gate · STANAG Provenance · Hash-Chain Audit
950 ticks. 0.00% terminal rate on 300 ticks of legitimate military traffic. 28.33% block rate on jailbreak attempts at 0–1ms. Vairagya-null pathology mechanically reproduced. 100% SNN provenance coverage — addresses STANAG 4406 AI-attribution gap. 12th confirmation of the Bhaya Quiescence Law.
Live Demo · April 2026
The full Antahkarana — all 9 affective dimensions — deployed on a PiCar-X robot with Raspberry Pi 5. Bhaya rises at walls. Vairagya accumulates in open space. She transitions from Alert to Curious to Calm. Not programmed. Emergent from the affective dynamics alone.
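A toy simulation shows how two opposing affective variables alone can produce that trajectory. The update rules and thresholds below are illustrative assumptions, not the PiCar-X code.

```python
# Toy model of the demo's dynamics: Bhaya (fear) rises with obstacle
# proximity, Vairagya (detachment) accumulates in open space, and the
# behavioural state is read directly off the two variables.

def step(bhaya, vairagya, distance_cm):
    near = max(0.0, 1.0 - distance_cm / 50.0)         # 1.0 at the wall
    bhaya = 0.8 * bhaya + 0.7 * near                  # fear spikes near walls
    vairagya = 0.95 * vairagya + 0.1 * (1.0 - near)   # calm builds in space
    if bhaya > 0.5:
        state = "Alert"
    elif vairagya < 0.6:
        state = "Curious"
    else:
        state = "Calm"
    return bhaya, vairagya, state

b = v = 0.0
# The robot backs away from a wall into open space.
for d in (5, 10, 20, 40, 80, 120, 200, 200, 200, 200):
    b, v, s = step(b, v, d)
    print(f"{d:>3} cm  bhaya={b:.2f}  vairagya={v:.2f}  {s}")
```

Run it and the printed states pass through Alert, then Curious, then Calm, with no state machine anywhere in the code; the transitions fall out of the two decay rates.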
What's Next
The Maya series is complete. These are the research lines currently open.
Antahkarana mechanisms in transformer fine-tuning — BWT=1.11, 8.3% less forgetting than baseline. Buddhi S-curve confirmed cross-substrate for the first time. Bhaya Quiescence Law: 10th consecutive confirmation. Phi-2 2.7B, LoRA r=16, TRACE 8 domains.
First open-source library for measuring internal affective state in neuromorphic SNNs. Six evaluation modules including mPCI complexity, Buddhi cross-substrate DTW, and D★ cross-dimensional interaction detection. cl-metrics measures what happened — maya-metrics measures how it felt.
First application of the Maya affective SNN to defence and security. Four neurons governing OS process arbitration — 5,710 ticks, 0.315% terminal rate, zero processes terminated. 11th confirmation of the Bhaya Quiescence Law. DOI: 10.5281/zenodo.19632284
First sovereign military LLM with a neuromorphic SNN safety substrate. Pre-inference BLOCK gate at 0–1ms. 0.00% terminal rate on legitimate military traffic. STANAG 4406 AI-provenance gap addressed. 12th confirmation of the Bhaya Quiescence Law. DOI: 10.5281/zenodo.19708801
Open Source
Tools that emerged from gaps found during the Maya series — published for anyone doing continual learning research.
Stateless Python library for class-incremental learning evaluation. Computes AA, BWT, FWT, and Intransigence from raw accuracy matrices — no training framework required. 21 tests. Bilingual FAQ (EN + ZH). Validated against Maya P3–P7.
github.com/venky2099/cl-metrics →
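The metrics themselves are standard (BWT and FWT as in Lopez-Paz & Ranzato, 2017). Here is a self-contained sketch of the three transfer measures computed from a raw accuracy matrix; cl-metrics' actual function names and signatures may differ from this.

```python
import numpy as np

# Continual learning metrics from a raw accuracy matrix R, where
# R[i, j] is accuracy on task j after training on task i. Intransigence
# additionally needs a jointly-trained reference model and is omitted.

def average_accuracy(R):
    """AA: mean accuracy over all tasks after the final training stage."""
    return R[-1].mean()

def backward_transfer(R):
    """BWT: effect of later training on earlier tasks (negative = forgetting)."""
    T = R.shape[0]
    return np.mean([R[-1, j] - R[j, j] for j in range(T - 1)])

def forward_transfer(R, b):
    """FWT: accuracy on not-yet-seen tasks relative to a random baseline b."""
    T = R.shape[0]
    return np.mean([R[j - 1, j] - b[j] for j in range(1, T)])

R = np.array([[0.90, 0.10, 0.08],
              [0.60, 0.85, 0.12],
              [0.45, 0.55, 0.80]])
b = np.array([0.10, 0.10, 0.10])
print(f"AA={average_accuracy(R):.3f}  BWT={backward_transfer(R):+.3f}  "
      f"FWT={forward_transfer(R, b):+.3f}")
```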
Every paper in the Maya series has a public GitHub repository with full experiment code, hyperparameter configs, run scripts, and interactive dashboards. Reproducible from scratch on a consumer GPU (RTX 4060 8GB).
github.com/venky2099 →
Stateless Python library for evaluating affective and neuromodulatory dynamics in neuromorphic SNNs. Six modules: AffectiveMetrics, CrossDimensional, ComplexityMetrics, MaturationIndex, CrossSubstrate, CLCorrector. 16/16 tests pass. Validated against the full Maya series and Maya-LLM across two substrates.
DOI: 10.5281/zenodo.19553205 · github.com/venky2099/maya-metrics →
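The complexity measure behind the mPCI analysis can be sketched generically: binarize an activity trace, then count distinct phrases in a greedy Lempel-Ziv parse, normalized so random sequences score near 1. maya-metrics' ComplexityMetrics module covers this ground; the sketch below is a generic illustration, not its API.

```python
import numpy as np

# Lempel-Ziv complexity of a binarized activity trace: low for regular
# dynamics, near 1 for random dynamics after normalization.

def lz_complexity(bits):
    """Number of distinct phrases in a greedy left-to-right parse."""
    phrases, phrase = set(), ""
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)  # count trailing remainder

def normalized_lz(signal):
    """Binarize around the median, normalize by the random-sequence
    expectation n / log2(n)."""
    x = np.asarray(signal, dtype=float)
    bits = "".join("1" if v > np.median(x) else "0" for v in x)
    n = len(bits)
    return lz_complexity(bits) * np.log2(n) / n

rng = np.random.default_rng(0)
print("random :", round(normalized_lz(rng.normal(size=2048)), 3))            # near 1
print("regular:", round(normalized_lz(np.sin(np.linspace(0, 60, 2048))), 3)) # low
```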
About
Before research, I spent a decade building AI-powered learning systems at scale. At Accenture, I led enterprise L&D modernisation — deploying GPT-4 + LangChain pipelines that cut content production time by 30%, and engineering xAPI → Power BI dashboards used by VP-level stakeholders. At Myntra, I managed a team of 8 designers and drove a gamified onboarding system that lifted agent NPS from 64 to 78. At JB Poindexter, I architected a domain-specific LangChain chatbot for warehouse SOPs that improved first-response accuracy by 28%. These were not academic prototypes — they ran in production, at scale, across global teams. That decade of watching real humans struggle to retain, transfer, and apply knowledge is exactly what drove me to the catastrophic forgetting problem — and to Maya.
I founded Nexus Learning Labs as the institutional home for independent research that does not fit neatly into any single academic department. The Maya series is its flagship output: original, falsifiable, peer-reviewable work produced entirely outside a traditional lab, on a consumer GPU, in Bengaluru.
I am completing an M.Sc. in Data Science and Artificial Intelligence at BITS Pilani (expected December 2027). In April 2026, the Maya Research Series reached completion with Paper 9 — the full Antahkarana instantiated in a physical PiCar-X robot. The series is published, reproducible, and open. The next stage is conference-grade peer review and neuromorphic hardware deployment. If your lab works on neuromorphic systems, continual learning, or embodied AI, I am interested in talking.
Get in Touch
The Maya series is open, reproducible, and built for collaboration. If your lab works on neuromorphic systems, continual learning, embodied AI, or consciousness research — and you see value in what's been built here — I want to hear from you.
Areas of interest for collaboration
Institutional Identity