human-centered ai

Building clear, reliable LLM features & agentic workflows.

An ongoing exploration of how to build AI systems that behave in ways humans can understand and rely on. These notes and experiments examine evaluation, agent design, and product patterns for aligning complex models with real-world expectations.

About Alex →

Philosophy

AI systems should be understandable, predictable, and aligned with real human needs. I believe in designing features and workflows that behave reliably, expose their reasoning when possible, and give people meaningful control. Building AI is not just about capability; it is about clarity, constraints, and careful evaluation.

View the 10 principles →

Notebook

Working notes on evaluation, agent design, and the practical realities of building LLM-powered systems. These notes capture early ideas, half-finished thoughts, and frameworks in progress — a place to reason in the open and refine how we build reliable, understandable AI.

View notes →

Experiments

A collection of small, thoughtful AI tools and prototypes that explore retrieval, evaluation, summarization, and interactive visualization. Each project reflects my approach: build, test, refine, and understand how real users interact with AI systems.

View experiments →

About me

Hi, I am Alex Thorpe, an AI Product Leader with a background in innovation platforms, LLM evaluation, and vector-based retrieval systems. I work at the intersection of product, engineering, and research — translating complex problems into practical, thoughtful solutions.

Outside of product work, I build micro-tools, design games, experiment with embeddings, enjoy photography, and explore new ways AI can help people think more clearly and creatively.

Contact →