What This Track Covers

This page is for AI work that stays close to engineering reality: local model experiments, prompt and evaluation workflows, lightweight automation, and developer tooling that improves how work gets done.
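As a taste of the evaluation-workflow side, here is a minimal sketch of a substring-based prompt check. The cases and the scoring rule are illustrative assumptions, not a methodology from this track:

```python
# Minimal sketch of a prompt-evaluation check: score model outputs
# against expected substrings. Cases and scoring are illustrative only.

def score(output: str, must_include: list[str]) -> float:
    """Fraction of required substrings present in the output (case-insensitive)."""
    if not must_include:
        return 1.0
    hits = sum(1 for needle in must_include if needle.lower() in output.lower())
    return hits / len(must_include)

# Hypothetical cases: each pairs a prompt with substrings a good answer should contain.
CASES = [
    {"prompt": "Name the HTTP verb for idempotent updates.", "must_include": ["PUT"]},
    {"prompt": "What port does Ollama listen on by default?", "must_include": ["11434"]},
]

def run_eval(generate, cases=CASES) -> float:
    """Average score over all cases; `generate` maps a prompt to a model output."""
    return sum(score(generate(c["prompt"]), c["must_include"]) for c in cases) / len(cases)
```

Swapping `generate` for a call to a local model turns this into a tiny regression suite that runs on every prompt change.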

I am especially interested in experiments around Ollama, local inference, productivity assistants, and the cost, privacy, and latency tradeoffs that matter when AI leaves the demo stage.
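For the latency side of that tradeoff, a sketch like the following is enough to get first numbers. It assumes Ollama is serving on its default port 11434 and that a model tagged "llama3" has already been pulled; both are assumptions, not requirements of this page:

```python
# Sketch: time a single non-streaming completion against a local Ollama server.
# Assumes Ollama's default endpoint and a locally pulled model tag.
import json
import time
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def build_request(model: str, prompt: str) -> dict:
    # stream=False returns one JSON object instead of a token stream,
    # which keeps end-to-end latency measurement simple.
    return {"model": model, "prompt": prompt, "stream": False}

def timed_generate(model: str, prompt: str) -> tuple[str, float]:
    """Return (completion text, wall-clock seconds) for one request."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    start = time.perf_counter()
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["response"], time.perf_counter() - start
```

Usage would be along the lines of `timed_generate("llama3", "One sentence on local inference.")`; comparing those timings against a hosted API on the same prompts makes the cost and latency discussion concrete.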

Planned Updates

Local Model Notes

Setup details, limitations, and observations from running local LLM stacks.

Workflow Automation

Small but useful systems for engineering productivity, content workflows, and backend support tasks.

AI Product Thinking

Where AI helps, where it gets in the way, and how to keep experiments grounded in value.