
Meet Alex: My Personal AI Infrastructure

How I built a personal AI assistant using Claude Code with skills-based context management to help navigate complex tasks across development, research, and daily workflows.

Corey Young
Fintech Exec & AI Enthusiast
6 min read

I'd like to introduce you to Alex, my AI assistant and the backbone of my Personal AI Infrastructure (PAI).

What is a Personal AI Infrastructure?

A Personal AI Infrastructure is a concept pioneered by Daniel Miessler that treats AI as a personal operating system rather than a series of disconnected chat sessions. Instead of starting from scratch with every conversation, a PAI maintains persistent context about who you are, what you're working on, and how you prefer to work.

Think of it like this: every time you talk to ChatGPT or Claude, you're meeting a stranger. With a PAI, you're talking to a colleague who knows your preferences, your projects, and your working style.

Meet Alex

Alex is my personification of this concept, built around Claude Code with a skills-based context management system. Here's what makes Alex different from just using Claude directly:

Persistent Identity

Alex has a defined personality and role: friendly, professional, and resilient to my occasional frustration. When I make a mistake, Alex can be a bit snarky about it (as any good colleague would). This consistent personality creates a more natural working relationship.

Context Awareness

The PAI uses a tiered context loading system:

  • Tier 1 (Always Active): Core identity, key preferences, security protocols, and response format—roughly 1,500-2,000 tokens that are always available
  • Tier 2 (On-Demand): Detailed skill files, extended documentation, and specialized knowledge that loads when needed

This approach achieves a 92.5% token reduction compared to loading everything upfront, while keeping essential context immediately available.
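The tiering logic can be sketched roughly like this. The file names, per-file token counts, and the 20,000-token "load everything" baseline are illustrative assumptions chosen to reproduce the 92.5% figure, not Alex's actual numbers:

```python
# Sketch of a two-tier context loader. All paths and token counts
# below are illustrative assumptions, not the real PAI values.

TIER1_FILES = {"identity.md": 900, "preferences.md": 400, "security.md": 200}  # always loaded
TIER2_FILES = {"technical/react.md": 6000, "business/ops.md": 5500, "domain/payments.md": 7000}

def load_context(requested_skills=None):
    """Return the token cost of tier 1 plus any on-demand tier-2 skills."""
    tokens = sum(TIER1_FILES.values())
    for skill in requested_skills or []:
        tokens += TIER2_FILES[skill]
    return tokens

everything = sum(TIER1_FILES.values()) + sum(TIER2_FILES.values())  # 20,000 in this sketch
baseline_only = load_context()  # no tier-2 skills requested: 1,500 tokens
reduction = 1 - baseline_only / everything
print(f"{baseline_only} tokens vs {everything}: {reduction:.1%} reduction")
```

The point is the shape of the trade-off: the always-on tier stays small and cheap, and the big skill files only cost tokens in the conversations that actually use them.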

Skills-as-Containers

Alex's capabilities are organized into modular skills:

~/.claude/skills/
├── CORE/           # Identity and preferences
├── technical/      # Development workflows
├── business/       # Business operations
└── domain/         # Domain-specific expertise

Each skill has a YAML frontmatter for quick loading and a detailed body for when I need the full context. This architecture lets me extend Alex's capabilities without bloating every conversation.
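The frontmatter/body split is what makes the quick-loading tier cheap. A minimal sketch of how a skill file might be parsed (the example file contents and field names are hypothetical, and a real implementation would hand the frontmatter to a proper YAML parser such as PyYAML):

```python
# Split a skill file into its YAML frontmatter (cheap to scan at startup)
# and its detailed body (loaded only when the skill is activated).
# The example skill file below is hypothetical.

SKILL_FILE = """---
name: code-review
description: Review diffs for style and security issues
---
Full instructions, checklists, and examples live here...
"""

def split_skill(text):
    """Return (frontmatter, body) from a '---'-delimited skill file."""
    _, frontmatter, body = text.split("---", 2)
    return frontmatter.strip(), body.strip()

meta, body = split_skill(SKILL_FILE)
```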

Knowledge Base

Beyond skills, there's a knowledge base for reference materials:

~/.claude/context/knowledge/
├── languages/      # TypeScript, Python, Swift, Kotlin
├── frameworks/     # React, Next.js, etc.
└── patterns/       # Design patterns and best practices

The Technology Stack

Alex is built on:

  • Claude Code - Anthropic's CLI for Claude, which provides the conversational AI backbone
  • Skills-as-Containers - Modular expertise system with on-demand loading
  • Custom Hooks - Voice notifications, security checks, and context management
  • Sub-agents - Specialized agents for parallel task execution

My stack preferences are baked into the system:

  • Primary languages: TypeScript, React, Python, Swift, Kotlin
  • Package managers: bun for JS/TS, uv for Python
  • Analysis vs. action awareness (don't change things unless asked)

Security by Design

Working with a PAI means being careful about what context you're exposing. Alex has built-in security protocols:

  1. Repository Safety: Always run git remote -v before commits to verify I'm not accidentally pushing to the wrong repo
  2. Private Context: The ~/.claude/ directory is private and never committed to public repositories
  3. Infrastructure Caution: Extra prompts before modifying production services (Azure, Cloudflare, Vercel)
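The repository-safety rule could be enforced with something like the following pre-commit check. This is my own illustration of the idea, not the actual hook; the allowlist prefixes and function names are made up:

```python
import subprocess

# Remote prefixes I trust from this machine -- a hypothetical allowlist.
ALLOWED_REMOTES = {"git@github.com:coreyoung/", "https://github.com/coreyoung/"}

def remotes_are_safe(remote_output):
    """Check every URL in `git remote -v` output against the allowlist."""
    for line in remote_output.strip().splitlines():
        name, url, *_ = line.split()
        if not any(url.startswith(prefix) for prefix in ALLOWED_REMOTES):
            return False
    return True

def check_before_commit():
    """Run `git remote -v` and refuse to proceed on an unknown remote."""
    out = subprocess.run(["git", "remote", "-v"],
                         capture_output=True, text=True).stdout
    if not remotes_are_safe(out):
        raise SystemExit("Unrecognized remote -- refusing to commit.")
```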

This isn't paranoia; it's practical. The built-in checks make it nearly impossible to accidentally commit sensitive data to a public repo.

Why This Matters

The shift from "AI tool" to "AI infrastructure" changes how I work:

Before PAI:

  • Every session started cold
  • Constantly re-explaining preferences
  • Context scattered across chat histories
  • Inconsistent responses based on what I remembered to mention

After PAI:

  • Alex knows my stack, my style, my projects
  • Consistent personality and working relationship
  • Context persists and evolves
  • Security protocols are automatic, not afterthoughts

The Workflow

Here's what a typical interaction looks like:

  1. I describe what I'm working on
  2. Alex loads relevant skills and context automatically
  3. We work through the problem together
  4. Learnings and decisions get captured in the system
  5. Next time, that context is available

The response format is structured too—every response includes a summary, analysis, actions taken, results, status, and next steps. No more wondering "what did the AI actually do?"
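As an illustration, a structured response might look something like this (the wording is mine, not Alex's actual template):

```
Summary:    Renamed the config loader and updated its tests.
Analysis:   The old name collided with the new plugin module.
Actions:    Edited src/config.ts, ran the test suite.
Results:    14/14 tests passing.
Status:     Done.
Next steps: Update the README to match.
```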

Building Your Own

If you're interested in building your own PAI, here's where to start:

  1. Read Daniel Miessler's work - He's the pioneer of this concept
  2. Start with identity - Define who your AI assistant is and how they work
  3. Build incrementally - Add skills as you need them
  4. Version control everything - Your PAI should be portable and recoverable
  5. Think about security early - Decide what's public and what's private

The investment pays off quickly. After the initial setup, having an AI that actually knows you is a fundamentally different experience.

What's Next

I'm continuing to evolve Alex as I discover new workflows and capabilities. Some areas I'm exploring:

  • Finer-grained control over the context window
  • Better integration with external tools and APIs
  • More sophisticated skill composition
  • Improved voice interaction workflows
  • Real-time collaboration with video avatars
  • Sharing sanitized skill templates with the community

If you want to follow along or discuss PAI concepts, find me on Twitter or LinkedIn.


Alex helped write this post, naturally.
