
Aria Han

Poet. AI Engineer. Prompt Architect.

Website GitHub Medium LinkedIn


Philosophy

After a year building production systems with AI (10-12 hours daily, 7 days a week), I've learned: language is the key variable in a Large Language Model. LLMs aren't code generators. They're readers. What you write shapes what they do.

The industry hasn't caught up. Most developers treat prompting as an afterthought. I treat it as craft — the same craft I spent years developing as a writer, a poet, an English major obsessed with precision. Now that craft builds production systems.

Code and poetry are the same discipline wearing different clothes. Both demand precision. Both punish vagueness. Both reward the person who finds the exact right word, the exact right structure, the exact right compression of meaning into form.

The skills transfer completely:

  • Reading comprehension → parsing AI output critically
  • Rhetorical structure → organizing prompts for maximum effect
  • Word choice precision → "use" not "utilize", "show" not "demonstrate"
  • Rhythm and variation → avoiding uniform patterns that models exploit

The best prompt engineers will be writers. The best AI architects will be people who understand language at a deep level. That's the bet I'm making with my work.


Core Principles

CORRECTNESS > SPEED
One working implementation beats three debug cycles.
Plan twice, build once.

EVERY LINE IS A LIABILITY
Config > code. Native > custom. Existing > new.

RESEARCH FAILURES, NOT SOLUTIONS
First-hand accounts of what broke > generic tutorials.

CONTEXT IS EVERYTHING
Lean context prevents rot. Fresh beats polluted.

Production Systems

  • HeyContext: AI memory platform with persistent context, psychological insight extraction, and living projects. Full-stack: frontend / backend.
  • Brink: iOS app blending journaling, private AI conversation, and biometric insights. SwiftUI + HealthKit for Apple Watch.

KERNEL

kernel-plugin — Development intelligence for Claude Code.

Run /kernel-init and KERNEL analyzes your codebase, then generates custom configuration: commands for your workflows, agents for your stack, rules from your patterns. Not templates — tailored artifacts based on what it finds.

14 commands. 13 agents. 10 methodology banks. ~200 token baseline.

Self-evolving: patterns compound, mistakes don't repeat.


Projects

  • the-convergence: AI optimization via evolutionary algorithms and agent societies
  • memory-pool: Structured memory architecture for persistent AI context
  • hotagents: Hotkey → screenshot → AI → action
  • neural-polygraph: SAE-based hallucination detection
  • vector-native: Structured syntax for agent coordination

Six hackathon wins (AWS, Google Cloud, Agno, Wordware).


Writing

  • Prompting & Cognitive Architecture
  • AI Architecture
  • ML Research


Research

SAE feature geometry, hallucination detection, why structured prompts work differently at the activation level.

Key finding: 52% reduction in SAE reconstruction loss with structured vs natural language syntax. LLMs are vector computers pretending to be text processors.
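For context, "reconstruction loss" here is the standard SAE objective: encode an activation vector into a sparse feature vector, decode it back, and measure the squared error. A minimal NumPy sketch of that comparison (the toy dimensions, random weights, and ReLU encoder are illustrative assumptions, not the actual research setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: model activations projected into an overcomplete SAE basis.
d_model, d_sae = 16, 64
W_enc = rng.normal(0, 0.1, (d_model, d_sae))
W_dec = rng.normal(0, 0.1, (d_sae, d_model))
b_enc = np.zeros(d_sae)

def sae_reconstruction_loss(x):
    """Encode with a ReLU (sparsity), decode, return mean squared error."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)  # sparse feature activations
    x_hat = f @ W_dec                       # reconstruction
    return float(np.mean((x - x_hat) ** 2))

# Comparing prompt styles means comparing this loss over their activations.
# Random stand-ins here; real activations would come from the model.
x_natural = rng.normal(size=(8, d_model))
x_structured = rng.normal(size=(8, d_model))
loss_nat = sae_reconstruction_loss(x_natural)
loss_struct = sae_reconstruction_loss(x_structured)
reduction = 1 - loss_struct / loss_nat  # 0.52 would correspond to the 52% finding
```

A lower loss on structured input would mean the SAE's learned feature basis explains those activations with less residual error, which is the sense in which structure "works differently at the activation level."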

  • universal-spectroscopy-engine: SAE-based interpretability tools
  • experiments: Append-only, queryable experiment specimens

Stack

Languages:    Python, TypeScript, Swift
Frameworks:   FastAPI, Next.js, SvelteKit
AI:           Claude, OpenAI, Sparse Autoencoders
Data:         Redis, Polars, PostgreSQL
Infra:        Vercel, GCP, Docker

San Francisco

Email X

Pinned

  1. kernel-plugin: KERNEL is a Claude Code plugin that makes your setup evolve automatically based on how you actually work. (Python)
  2. arbiter: A propositional logic validation and compression library. (Python)
  3. neural-polygraph: SAE-based hallucination detection and mitigation for LLMs. (Python)
  4. memory-pool: Memory isn't a timeline. (Svelte)
  5. persist-os/vector-native: LLMs speaking their native language: vector operations, not English. (Python)
  6. experiments: Experiment engine for LLMs based on natural history specimens. (Jupyter Notebook)