After a year building production systems with AI (10-12 hours daily, 7 days a week), I've learned: language is the key variable in a Large Language Model. LLMs aren't code generators. They're readers. What you write shapes what they do.
The industry hasn't caught up. Most developers treat prompting as an afterthought. I treat it as craft — the same craft I spent years developing as a writer, a poet, an English major obsessed with precision. Now that craft builds production systems.
Code and poetry are the same discipline wearing different clothes. Both demand precision. Both punish vagueness. Both reward the person who finds the exact right word, the exact right structure, the exact right compression of meaning into form.
The skills transfer completely:
- Reading comprehension → parsing AI output critically
- Rhetorical structure → organizing prompts for maximum effect
- Word choice precision → "use" not "utilize", "show" not "demonstrate"
- Rhythm and variation → avoiding uniform patterns that models exploit
The best prompt engineers will be writers. The best AI architects will be people who understand language at a deep level. That's the bet I'm making with my work.
CORRECTNESS > SPEED
One working implementation beats three debug cycles.
Plan twice, build once.
EVERY LINE IS A LIABILITY
Config > code. Native > custom. Existing > new.
RESEARCH FAILURES, NOT SOLUTIONS
First-hand accounts of what broke > generic tutorials.
CONTEXT IS EVERYTHING
Lean context prevents rot. Fresh beats polluted.
| System | Description |
|---|---|
| HeyContext | AI memory platform with persistent context, psychological insight extraction, and living projects. Full-stack: frontend / backend. |
| Brink | iOS app blending journaling, private AI conversation, and biometric insights. SwiftUI + HealthKit for Apple Watch. |
kernel-plugin — Development intelligence for Claude Code.
Run /kernel-init and KERNEL analyzes your codebase, then generates custom configuration: commands for your workflows, agents for your stack, rules from your patterns. Not templates — tailored artifacts based on what it finds.
14 commands. 13 agents. 10 methodology banks. ~200-token baseline.
Self-evolving: patterns compound, mistakes don't repeat.
| Project | What It Does |
|---|---|
| the-convergence | AI optimization via evolutionary algorithms and agent societies |
| memory-pool | Structured memory architecture for persistent AI context |
| hotagents | Hotkey → screenshot → AI → action |
| neural-polygraph | SAE-based hallucination detection |
| vector-native | Structured syntax for agent coordination |
Six hackathon wins (AWS, Google Cloud, Agno, Wordware).
- An AI's Account: My Processing Core Was Reconstructed — Claude analyzes its own cognitive mode switching under Vector Native protocol.
- Commands Are Cognitive Offloading — Why commands aren't shortcuts; they're compressed workflows.
- Stop Building Chatbots — Why the chat interface is a dead end.
- Semantic Drift is Just Quantum Decoherence — Multi-agent coordination through physics: einselection, energy minimization, quantum locking.
- Why Prompt Engineering Can't Fix Hallucinations — The case for mechanistic intervention.
SAE feature geometry, hallucination detection, why structured prompts work differently at the activation level.
Key finding: a 52% reduction in SAE reconstruction loss with structured versus natural-language syntax. LLMs are vector computers pretending to be text processors.
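As a minimal sketch of what that metric means (toy random weights and stand-in activations; the SAE dimensions, names, and data here are illustrative assumptions, not the actual experimental setup behind the 52% figure), reconstruction loss is the MSE between model activations and their encode-decode round trip through the sparse autoencoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy SAE: random weights standing in for a trained
# sparse autoencoder over residual-stream activations.
d_model, d_hidden = 64, 256
W_enc = rng.normal(0, 0.1, (d_model, d_hidden))
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(0, 0.1, (d_hidden, d_model))
b_dec = np.zeros(d_model)

def sae_reconstruction_loss(acts: np.ndarray) -> float:
    """MSE between activations and their SAE reconstruction."""
    features = np.maximum(acts @ W_enc + b_enc, 0.0)  # ReLU encoder
    recon = features @ W_dec + b_dec                  # linear decoder
    return float(np.mean((acts - recon) ** 2))

# Stand-ins for activations captured on two prompt styles.
acts_structured = rng.normal(0, 1, (32, d_model))
acts_natural = rng.normal(0, 1, (32, d_model))

loss_s = sae_reconstruction_loss(acts_structured)
loss_n = sae_reconstruction_loss(acts_natural)
reduction = 100 * (loss_n - loss_s) / loss_n
print(f"structured={loss_s:.4f} natural={loss_n:.4f} reduction={reduction:.1f}%")
```

A lower loss on one prompt style means the SAE's learned feature dictionary explains those activations more completely, which is the sense in which structured syntax "works differently at the activation level."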
| Repo | Focus |
|---|---|
| universal-spectroscopy-engine | SAE-based interpretability tools |
| experiments | Append-only, queryable experiment specimens |
Languages: Python, TypeScript, Swift
Frameworks: FastAPI, Next.js, SvelteKit
AI: Claude, OpenAI, Sparse Autoencoders
Data: Redis, Polars, PostgreSQL
Infra: Vercel, GCP, Docker