I get asked a lot about AI. What tools I use, how I stay current, whether it’s really as transformative as everyone says. So I figured I’d pull back the curtain and walk you through what my actual AI workflow looks like. Not the polished version, but the real one. The messy, time-consuming, occasionally frustrating reality of trying to stay AI-fluent in a world that updates faster than any human can keep up with.
My Toolkit (It’s Not Just One Thing)
My primary AI assistant is Claude. It’s where I do most of my thinking, writing, and building. But here’s the thing most people don’t realize: using AI well almost never means using just one tool.
For personal tasks I use Google’s Gemini, particularly the browser extension, which is great for quick in-context work while I’m browsing. And I use ChatGPT regularly, especially for technical architecture discussions when planning out new projects. Each model has different strengths, and I’ve learned to play them off each other.
Here’s what that looks like. Say I’m building a new tool or kicking off a development project. I’ll start in ChatGPT, describe what I’m trying to build, share some reference material, and maybe link to some videos of the technology I’m exploring (a practice known as adding context, or context engineering). I’ll ask it to help me think through the architecture and produce a PRD (Product Requirements Document). Then I’ll take that PRD into Claude Code and ask it to build a plan. I’ll bring Claude’s plan back to ChatGPT in the same thread and say “poke holes in this.” ChatGPT will challenge the architecture, flag gaps, and suggest alternatives. I’ll review that feedback, refine the plan, and then hand it back to Claude Code to start building. No single model has all the answers, and the best results come from triangulating between them.
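The loop above can be sketched in a few lines. This is a minimal, illustrative sketch only: the real workflow runs through the chat UIs, and these stub functions (all hypothetical names) just make the draft → plan → critique → refine shape explicit.

```python
# Hypothetical sketch of the cross-model review loop. Each function is a
# stand-in for a step a human performs in a chat UI, not a real API call.

def draft_prd(idea: str, context: list[str]) -> str:
    """Stand-in for the ChatGPT step: idea plus context becomes a PRD."""
    return f"PRD for {idea} (context: {len(context)} sources)"

def build_plan(prd: str) -> str:
    """Stand-in for Claude Code turning the PRD into a build plan."""
    return f"Plan derived from [{prd}]"

def critique(plan: str) -> list[str]:
    """Stand-in for the 'poke holes in this' pass back in ChatGPT."""
    return [f"Gap found in: {plan}"]

def refine(plan: str, feedback: list[str]) -> str:
    """Fold the critique back into the plan before handing it to Claude Code."""
    return plan + f" (revised after {len(feedback)} comments)"

prd = draft_prd("RAG pipeline", context=["reference doc", "demo video"])
plan = build_plan(prd)
plan = refine(plan, critique(plan))  # one critique/refine round before building
```

The point of writing it this way is that the critique step takes the *plan*, not the PRD, as input: each model reviews the other model’s output rather than its own.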
The Signal-to-Noise Problem
The hardest part of keeping up with AI isn’t the volume of information. It’s the signal-to-noise ratio. There is an overwhelming amount of AI content out there: podcasts, YouTube channels, LinkedIn posts, newsletters, research papers. Most of it is either hype, repetition, or irrelevant to the work we actually do.
I follow over 50 sources across newsletters, LinkedIn, and YouTube. I use a tool called Recall as my AI second brain, essentially my long-term memory for everything I’ve found valuable.
But the real skill isn’t collecting information. It’s filtering it. I try to focus my attention on things that are directly aligned with projects I’m actively working on.
Here’s a concrete example. For a client project, I’ve been building a deterministic RAG system, a pipeline that needs to pull structured information out of complex medical documents containing charts, graphs, tables, and clinical data. That’s a hard problem, and over the past several months two tools crossed my radar that both had strong signal for this work: Landing AI and Google’s LangExtract.
They do different things well. Landing AI excels at extracting information from visually complex documents: scanned forms, tables without gridlines, charts, mixed media layouts, checkboxes. It’s particularly strong in regulated industries like pharma where visual grounding and compliance matter. LangExtract, on the other hand, is built for extracting structured entities from text-heavy documents like clinical notes, legal contracts, and research papers. It’s open-source, model-agnostic, and gives you precise text-level traceability, meaning every extraction maps back to its exact source location in the document.
What made both of these worth my time wasn’t just their individual capabilities. It’s that they’re complementary. You could use Landing AI to parse a complex PDF into structured text or markdown, then feed that output into LangExtract for domain-specific entity extraction. That’s a pipeline that gives you visual document understanding on the front end and schema-enforced, citation-grounded structured data on the back end. For a deterministic RAG system in a medical context, that’s a compelling architectural fit.
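That two-stage shape, and the traceability property that makes it work for medical data, is easy to sketch. Everything below is hypothetical stand-in code, not the Landing AI or LangExtract APIs: stage one is reduced to a trivial parser, and stage two to keyword matching, just to show how every extraction can carry exact character offsets back into the parsed document.

```python
from dataclasses import dataclass

# Hypothetical two-stage pipeline: stage one turns a visually complex
# document into plain text; stage two pulls typed entities out of that
# text, each one recording its exact source offsets.

@dataclass
class Extraction:
    entity_class: str  # e.g. "medication", "dosage"
    text: str          # exact span pulled from the source
    start: int         # character offset into the parsed document
    end: int

def parse_document(raw: bytes) -> str:
    """Stand-in for the visual-parsing stage (Landing AI in this article).
    Here it just decodes bytes; the real tool handles tables, charts,
    checkboxes, and scanned forms."""
    return raw.decode("utf-8")

def extract_entities(text: str, vocabulary: dict[str, list[str]]) -> list[Extraction]:
    """Stand-in for the entity-extraction stage (LangExtract in this
    article). The offsets on each Extraction are the 'text-level
    traceability' property: every result maps back to its source span."""
    found = []
    for entity_class, terms in vocabulary.items():
        for term in terms:
            idx = text.find(term)
            while idx != -1:
                found.append(Extraction(entity_class, term, idx, idx + len(term)))
                idx = text.find(term, idx + 1)
    return sorted(found, key=lambda e: e.start)

doc = parse_document(b"Patient prescribed metformin 500 mg twice daily.")
entities = extract_entities(doc, {"medication": ["metformin"], "dosage": ["500 mg"]})
for e in entities:
    # Traceability check: the recorded span reproduces the extracted text.
    assert doc[e.start:e.end] == e.text
```

The offset check at the end is the part that matters for a deterministic RAG system: if every extraction can be verified against its source span, the pipeline’s output is auditable rather than taken on faith.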
That’s what good signal filtering looks like. I didn’t spend time on these tools because they were trending. I spent time on them because they solved specific problems in a system I was actively building, and understanding how they fit together gave me a clearer picture of the architecture I needed.
When AI can give you the ability to build anything, you have to be disciplined about where you invest your time.
Learning by Building
In all honesty, most of my AI learning happens in my personal time. I try to carve out a few hours during the work week, but there’s always something more immediate to do. So evenings and weekends become my lab. I’ll read about something, decide if it’s interesting, and then spend a few hours building something with it. Building is how I learn best.
Some of those projects use AI as the core engine. Others don’t use AI at runtime at all. My single most useful tool, the one that gets used every day, is a time tracker I built for my wife’s business. She manages multiple clients with different hour allocations, a mix of fixed-cost and time-and-materials engagements. We designed a simple UI together: start a clock, stop a clock, create projects, track remaining hours on fixed-cost work. It’s a traditional web app with a frontend, backend, and database. No agent loop, no LLM in the pipeline. But I built the entire thing using AI as my development partner.
I think that distinction matters. There’s a tendency to think “using AI” means building systems where AI is the runtime engine: chatbots, agents, automated workflows. But using AI to build better tools faster, even traditional ones, is just as powerful. Both approaches exist side by side, and both are valuable.
The Gap Is Bigger Than You Think
If there’s one thing I’d encourage everyone reading this to take away, it’s this: spend more time playing with AI. Not watching demos, not reading about it, but actually using it. Ask it questions. Give it tasks. Break it. Try to make it do something useful for you: fill out a form, research a trip, review the T&Cs for the hire car you just booked for your next vacation.
The gap between AI-aware and AI-fluent is enormous, and the only way to close it is through hands-on experience. The more you use it, the more opportunities you start to see. The more opportunities you see, the more ideas you have for applying it. It compounds on itself. But it starts with just spending time with the tools.
I won’t pretend it’s easy. Keeping up with this space is genuinely like having a second job. But the payoff, in capability, in efficiency, in just understanding where this technology is going and what it means for the work we do, is worth every hour.
— Matt
Tools Mentioned
AI Assistants & Models
- Claude (Anthropic) — Primary AI assistant for thinking, writing, and building
- Claude Code — Command-line development tool for planning and building projects
- ChatGPT (OpenAI) — Technical architecture discussions and PRD generation
- Gemini (Google) — Browser extension for in-context research
Curation & Knowledge Management
- Karakeep — Bookmarking tool, primarily for saving LinkedIn posts
- Recall — AI knowledge database for long-term reference
- Custom Chrome Extension — Built to scrape content and images from LinkedIn posts into a local database
Personal Projects
- AI Council — Custom web page hitting four AI APIs simultaneously for side-by-side comparison
- Personal AI Assistant (Judy) — Email/appointment monitoring, food tracking, voice-controlled home automation
- Time Tracker — Web app built using AI for client hour tracking and project management
Referenced Technologies
- Landing AI — Extraction from visually complex documents (scanned forms, tables, charts, checkboxes)
- LangExtract (Google) — Open-source structured entity extraction for deterministic RAG pipelines
- Agent Loop Architecture — Concept for AI-managed multi-service orchestration