Project
A Claude-native personal AI assistant. Claude Code is the orchestrator; scheduled tasks fire throughout the day; plain files hold the state. Four reusable components extracted as public repos.
Most "AI assistant" projects treat the model as a remote callable service and wrap it in custom orchestration code. This one inverts that. Claude Code IS the orchestrator; the surrounding repo is scaffolding (data files, scheduled tasks, delivery scripts, docs) that turns a CLI agent into a persistent personal assistant. Briefings, memory retrieval, reconciliation, dashboard generation, voice output — all of it runs through one Anthropic subscription instead of metered API calls.
The standard AI-app shape (model behind an API key, custom code in front) burns money in proportion to use, requires infrastructure to babysit (queues, workers, databases), and treats the model as a vendor service rather than a thinking partner. For a single user running automation throughout the day, that shape is wrong on every dimension.
Inverting the model and putting Claude Code at the center has three consequences worth the rebuild. One subscription covers everything — flat monthly cost, no token math. Files are the source of truth — projects.yaml, per-project BACKLOG.md, the brain's YAML index. Everything else (GitHub Projects, dashboards, scheduled task outputs) is a derived mirror; you can nuke the mirrors and rebuild from the files. No services to babysit — no Postgres, no queue broker, no custom backend maintaining state. Just Claude Code, a cron-style task scheduler, and a few Python scripts.
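For concreteness, a projects.yaml entry under this files-as-truth scheme might look like the following hypothetical sketch (the field names are assumptions for illustration, not the repo's actual schema):

```yaml
# Hypothetical shape; the real schema may differ.
projects:
  - id: claude-brain
    goal: ship-reusable-tooling       # links upward in the hierarchy
    health: green                     # recomputed by the reconciliation engine
    backlog: projects/claude-brain/BACKLOG.md
    mirrors:                          # derived views; safe to nuke and rebuild
      github_project: 7
```

Anything under `mirrors` is disposable by design: the dashboard and GitHub Projects views can always be regenerated from the YAML and Markdown.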
The result is a personal assistant that runs daily, fits inside one fixed cost, and stays understandable by the one person who uses it.
Five subsystems wired through Claude Code as the central orchestrator. Each subsystem owns one capability and communicates through plain files in the repo — no in-memory state, no inter-process messaging.
4-tier hierarchy: TELOS goals → projects → capabilities → work items. Reconciliation engine recalculates health and generates nudges. Dashboard renders the live view.
Parallel morning gatherers (weather, news, project status), one assembler, TTS generation, multi-channel delivery, watchdog that recovers from missed runs.
Brain entries (cross-project knowledge graph), transcript archive, SessionStart hook that injects recent context into new sessions. /learn, /recall, /capture slash commands.
Discord webhooks, SMTP email, ntfy push notifications, Edge TTS audio. All credentials in the OS keyring, never in .env files.
Slash commands surfaced in Claude Code: /open-ticket, /close-ticket, /backlog, /learn, /recall. The user-facing interface to the system.
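As a sketch of what the tracking subsystem's reconciliation pass might compute, here is a pure function over one project's work items. The field names, thresholds, and green/yellow/red scale are assumptions for illustration, not the engine's actual logic:

```python
from datetime import date, timedelta

def project_health(items, today=None):
    """Derive a coarse health signal plus an optional nudge for one project.

    `items` is a list of dicts with illustrative fields: status
    ("open" / "done" / "blocked") and last_touched (a datetime.date).
    """
    today = today or date.today()
    open_items = [i for i in items if i["status"] != "done"]
    blocked = [i for i in open_items if i["status"] == "blocked"]
    stale = [i for i in open_items
             if today - i["last_touched"] > timedelta(days=14)]
    if blocked:
        return "red", f"{len(blocked)} blocked item(s) need unblocking"
    if stale:
        return "yellow", f"{len(stale)} item(s) untouched for 14+ days"
    return "green", None
```

Because the inputs are plain files, a pass like this can run on a schedule, write its nudges back to a known file, and leave an auditable git diff behind.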
The pipeline is time-driven, not event-driven. A handful of Claude Code scheduled tasks fire on cron-style schedules; each one writes its outputs to a known file or staging dir; the next stage reads from there.
A typical day: morning gatherers fire in parallel (weather + news + project status); the assembler stitches their outputs into a briefing; TTS and delivery scripts push the briefing to Discord, email, and audio. Later the reconciliation engine sweeps the tracking files, updates project health, and writes nudges. The dashboard, when opened, reads everything live from the files. The 6:28 AM briefing watchdog exists because automation breaks — if the 6 AM run failed, the watchdog catches it and runs a recovery briefing.
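The watchdog's job reduces to a freshness check on the staging directory. A minimal sketch, assuming a date-stamped briefing filename and a hypothetical recovery invocation (the filename pattern and command are illustrative, not the project's actual ones):

```python
import subprocess
from datetime import date
from pathlib import Path

def watchdog(staging, run_recovery=True):
    """Recovery check: did this morning's briefing actually land?

    `staging` is the directory the morning assembler writes into.
    """
    expected = Path(staging) / f"{date.today():%Y-%m-%d}-briefing.md"
    if expected.exists() and expected.stat().st_size > 0:
        return "ok"  # the scheduled run succeeded; nothing to do
    if run_recovery:
        # Hypothetical invocation; the real task re-enters Claude Code.
        subprocess.run(["claude", "-p", "run the recovery briefing"],
                       check=False)
    return "recovered"
```

Because every stage writes to a known file, "did the 6 AM run succeed" is answerable by looking at the filesystem — no job queue or run-history database needed.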
The model is the runtime, not a remote callable. Scheduled tasks invoke Claude Code directly; orchestration logic lives inside the agent session, not in custom infrastructure.
Cron-style scheduling drives the whole pipeline. Each task is small, idempotent, and writes outputs to known file locations. No queue, no broker, no event bus.
YAML and Markdown files are the source of truth. No database. State is auditable, diffable in git, and readable without any tooling.
Discord, email, push notifications, and TTS audio — the same content reaches the reader through whichever channel is appropriate. Credentials live in the OS keyring.
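On the Discord path, the delivery layer mostly has to respect the webhook's message-size cap (2000 characters). A minimal sketch of chunked delivery, assuming the caller fetches the webhook URL from the OS keyring (e.g. via `keyring.get_password`) rather than a .env file; the function names are illustrative:

```python
import json
import urllib.request

DISCORD_LIMIT = 2000  # Discord's per-message character cap

def chunk_message(text, limit=DISCORD_LIMIT):
    """Split a long briefing into webhook-sized chunks on line boundaries.

    Assumes no single line exceeds the limit on its own.
    """
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if current and len(current) + len(line) > limit:
            chunks.append(current)
            current = ""
        current += line
    if current:
        chunks.append(current)
    return chunks

def send_discord(text, webhook_url):
    """POST each chunk to a Discord webhook in order."""
    for chunk in chunk_message(text):
        req = urllib.request.Request(
            webhook_url,
            data=json.dumps({"content": chunk}).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)
```

Splitting on line boundaries rather than raw character offsets keeps Markdown formatting (headings, list items) intact across chunk breaks.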
The four reusable pieces (Brain, Tracking, Briefing, Notify) were extracted from the personal-use repo and published as standalone libraries. The personal integration stays personal; the substrate becomes shared.
The tracking dashboard is the most visible artifact this system produces. Below: the public claude-tracking dashboard rendered against its example data set (no real personal projects shown).
The dashboard reads projects.yaml on each request, enriches each row from the per-project BACKLOG.md file, and computes derived fields live. There's no build step; refreshing the page shows whatever the files currently say.
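That live-read model is easy to sketch: on each request, parse the task-list checkboxes out of BACKLOG.md and derive counts on the fly. This assumes GitHub-style `- [ ]` / `- [x]` items; the returned field names are illustrative, not the dashboard's actual schema:

```python
import re

def enrich_from_backlog(markdown_text):
    """Compute derived fields from a per-project BACKLOG.md on each request."""
    boxes = re.findall(r"^\s*[-*] \[( |x|X)\]", markdown_text, flags=re.M)
    done = sum(1 for b in boxes if b.lower() == "x")
    return {
        "items_total": len(boxes),
        "items_done": done,
        "percent_done": round(100 * done / len(boxes)) if boxes else 0,
    }
```

Since nothing is cached or precomputed, a hand edit to BACKLOG.md shows up on the very next page refresh.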
Four reusable libraries extracted from the personal-use integration. Each is independently usable, MIT-licensed, and lives in its own repo.
A cross-project persistent knowledge system for Claude Code. Solves knowledge loss between sessions and knowledge isolation between projects with a three-layer index.
A 4-tier project tracking system with dashboard, reconciliation engine, and GitHub Projects sync. Designed for Claude Code automation.
An automated morning briefing pipeline for Claude Code. Parallel gatherers (weather, news, project status), assembler, watchdog recovery layer.
A lightweight notification and delivery toolkit. Discord webhooks (with chunking and attachments), SMTP email, ntfy.sh push, Edge TTS audio.
The personal integration repo that wires the four public components together is not yet open-sourced — the scrubbing pass (hardcoded paths, personal data, internal-use language) hasn't happened yet. The four public spinouts above ARE the reusable substrate; the personal integration is the data and the configuration that makes them useful to one specific person. There's no subscriber or multi-user mode — this is single-tenant by design. There's no external LLM API call anywhere in the system — everything routes through the Claude Code subscription.