I've spent the last few months proving that AI tools actually work. Not in some theoretical "the future is here" way - in a practical "here's how I used ChatGPT/Claude/Gemini to solve real problems" way.
And they do work. Brilliantly, for exploration. For brainstorming. For one-off tasks. For learning new concepts.
But I'm not just exploring anymore. My job is to build an actual AI practice - assessments, recommendations, service offerings, workshops, training materials. Real deliverables that clients pay money for.
And that's where the chat apps started falling apart.
The Re-Explaining Loop
Here's what my work actually looks like:
A client brings me a security assessment. I need to:
Review their specific findings
Map those findings to problems we can solve
Check which of our service offerings address those problems
Understand what each offering actually does
Recommend the right combination with real pricing
Generate a proposal that makes sense
Simple enough, right?
Except every single one of those steps requires context. Context about our service catalog. Context about how offerings relate to each other. Context about pricing rules. Context about past decisions we've made.
And in a chat app, that context dies when the conversation ends.
So every new assessment review started the same way:
"Okay, so we offer these services..."
"Our pricing structure works like this..."
"This offering depends on that one because..."
"We decided not to bundle these together because..."
Twenty minutes of setup before I could do five minutes of actual work.
It Gets Worse
The problem compounds when you're building something over time.
I'm not just running one-off assessments. I'm developing:
A workshop that gets delivered repeatedly
Assessment templates that need to stay consistent
Training materials for our internal team
A service catalog that evolves
These things have relationships. The workshop references the assessment framework. The assessments map to service offerings. The training materials need to reflect current pricing.
In a chat app, I was managing those relationships in my head.
Every session: "Remember, this workshop module connects to that assessment section, which relates to these three service offerings, but not the fourth one because we decided..."
I wasn't building a knowledge base. I was being the knowledge base, verbally, over and over again.
The Moment It Broke
I hit my limit mid-assessment review.
I was working through an assessment document with Claude, making good progress, when I mentioned one of our service offerings as a potential solution.
Claude had no idea what I was talking about. Of course it didn't - we'd discussed that offering in a different chat last week.
My first instinct: attach the service offering document to this chat. But I stopped myself. I was already deep into the assessment analysis. If I uploaded that document now, I'd need to explain what it is, how it fits our catalog, what problems it solves... I'd derail the entire conversation to provide context on something I'd already explained dozens of times.
So I tried to push through. Gave Claude a quick verbal summary of the offering. Started connecting it back to the assessment findings.
Then I caught myself. I'd been typing for what felt like forever - explaining scope, pricing structure, typical client profiles, how it relates to our other offerings. The same explanation I'd given in probably fifty different chats.
I can't keep spending mental energy going over the past if I want to make progress towards the future.
That was it. The thing that broke me wasn't a technical failure or a buggy integration. It was the realization that every single conversation started from zero, no matter how many times we'd covered the same ground.
I wasn't building institutional knowledge. I was renting it, one chat at a time.
Something had to change.
What I Tried (And Why Each Failed)
I'm not someone who jumps to "build it myself" as a first option. I spent months testing every integration option available, convinced there had to be an existing solution. Spoiler: there wasn't.
Google Drive MCP: The Inconsistency Nightmare
This should have been perfect. All my service offerings live in Drive. Client assessments are in Drive. The MCP server exists, it's documented, people use it successfully.
The promise: Claude can search and read any file in your Drive. Just point it at a folder and let it work.
The reality: Sometimes it worked flawlessly. I'd ask Claude to find a specific service offering, it would search Drive, pull up the PDF, read it, and give me exactly what I needed.
Then, in the very next chat, I'd ask it to find a different file in the same folder and get: "I don't see that file in your Drive."
The file was there. Same permissions, same folder structure, nothing changed. But now I'm spending 15 minutes debugging MCP configuration instead of doing the actual work I sat down to do.
The inconsistency killed it. I couldn't build reliable workflows on top of something that worked 70% of the time.
Local Filesystem MCP: Even Worse
"Fine," I thought. "I'll keep everything local. Direct filesystem access. No cloud sync issues."
This was somehow more frustrating than Google Drive.
Actual conversation I had:
Me: "Read the MDR service offering file"
Claude: reads it, summarizes it perfectly
Two messages later, same session:
Me: "Compare that MDR offering to the security training one"
Claude: "I don't have access to the MDR file"
Me: "You literally just read it"
Claude: "I don't see any MDR files in your directory"
I'd check the filesystem. File's there. Same path. Permissions unchanged. But now we're debugging instead of working.
The pattern repeated endlessly: work, break, convince AI the file exists, work, break, repeat.
Monday.com Integration: Great When It Works
This one hurt because when it worked, it was exactly what I needed.
I could reference tasks, pull project details, update statuses - all while working with Claude. The dream of having my project management system and AI assistant actually talking to each other.
Then randomly, mid-conversation: "I can't access your Monday.com workspace."
Why? No idea. Permissions were fine. Integration was connected. It just... stopped working. Re-authenticate, try again, works for a while, breaks again.
I couldn't trust it for anything time-sensitive because I never knew if this would be the session where it randomly stopped working.
Projects: So Close, Yet So Far
Claude Projects felt like the right answer. Upload your files, they're always available, no MCP servers to manage.
The problem: I hit the file limit almost immediately.
My work isn't simple. I've got:
Service offering documents (PDFs, Word docs)
Assessment templates
Client deliverables
Framework documents
Historical decision logs
Even if I could cram everything in, Projects still required me to explain the relationships between files. "This assessment uses this framework, which maps to these service offerings, which have these dependencies..."
It was better than starting from zero each time, but I was still re-explaining the system architecture every session.
This Isn't a Tool Problem - It's an Architecture Problem
After that assessment session, I sat back and actually thought about what I was trying to do.
I wasn't having casual conversations with AI. I wasn't brainstorming ideas or exploring concepts. I was trying to run a business operation through a chat interface.
Think about what my work actually requires:
Assessments that identify specific problems
Service offerings that solve those problems
Recommendations that connect the two with actual pricing
Frameworks that ensure consistency across clients
Decision history so I'm not relitigating the same choices
These things have relationships. An assessment finding maps to specific offerings. Those offerings have dependencies on each other. Pricing has rules. Recommendations follow templates.
I wasn't trying to have a conversation. I was trying to query a database.
And I'd been trying to force a chat app to be that database by:
Uploading files mid-conversation
Re-explaining context every session
Hoping integrations would work this time
Fighting MCP servers that couldn't reliably find files
The Software Development Moment
That's when it clicked. This problem felt familiar because I'd solved it before - just with actual code.
When you're writing software and you keep re-explaining the same logic, you don't just... keep typing it out. You write a function. You create a module. You build a library.
When different parts of your code need to reference the same data, you don't copy-paste it everywhere. You create a single source of truth.
When you need to understand how pieces fit together, you don't rely on memory. You write documentation.
My AI work needed the same thing: structure, persistence, and a single source of truth.
The chat app wasn't broken. I was just using the wrong tool for the job.
What I Actually Needed
Not better integrations. Not more capable AI models. Not even better memory features.
I needed:
Persistent context - Files that stick around between sessions
Structured relationships - Clear connections between assessments, offerings, and recommendations
Deterministic retrieval - If a file exists, Claude can always find it
Version control - Updates to my service catalog don't break existing work
Zero re-explaining - The system knows what it needs to know
In other words: I needed to stop chatting and start building.
Building the System
Here's what I built. It's not complicated - that's the point.
```
AI_Work/
├── CONTEXT.md               ← Master context file
├── SERVICE_CATALOG.md       ← Locked pricing and offerings
├── 01_ACTIVE_PROJECTS/
│   ├── Workshop/            ← Half-day workshop materials
│   ├── Assessments/         ← Assessment templates and frameworks
│   ├── Internal_Enablement/ ← Team training content
│   └── Client_Deliverables/ ← Custom client work
└── 03_REFERENCE_MATERIALS/
    ├── FRAMEWORKS.md        ← Established methodologies
    └── DECISIONS_LOG.md     ← Historical decision rationale
```
That's it. Markdown files in folders. No databases, no APIs, no complex infrastructure.
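If you want to recreate the layout, the whole thing scaffolds in a few commands. This is just a convenience sketch; the names mirror the tree above, so rename anything to fit your own practice:

```shell
# One-time scaffold: recreate the folder layout above.
mkdir -p AI_Work/01_ACTIVE_PROJECTS/Workshop
mkdir -p AI_Work/01_ACTIVE_PROJECTS/Assessments
mkdir -p AI_Work/01_ACTIVE_PROJECTS/Internal_Enablement
mkdir -p AI_Work/01_ACTIVE_PROJECTS/Client_Deliverables
mkdir -p AI_Work/03_REFERENCE_MATERIALS

# Empty context files to fill in; these are the system's source of truth.
touch AI_Work/CONTEXT.md AI_Work/SERVICE_CATALOG.md
touch AI_Work/03_REFERENCE_MATERIALS/FRAMEWORKS.md
touch AI_Work/03_REFERENCE_MATERIALS/DECISIONS_LOG.md
```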
The Linchpin: CONTEXT.md
The entire system hinges on one file. Here's what mine contains (sanitized):
```markdown
# AI Enablement Practice - Master Context

## What This Is
Working folder for our MSP's AI services practice.
Target market: Small businesses in our metro area.

## Current State (Q1 2026)
- Workshop: Active development, post-pilot refinement
- Assessments: Templates ready, piloting phase
- Internal Enablement: Session 1 content in draft
- Service catalog: Locked, no changes without discussion

## Critical Rules
1. Read CONTEXT.md first - always
2. Check SERVICE_CATALOG.md before recommending anything
3. Review DECISIONS_LOG.md before proposing changes
4. Professional tone - no obvious AI phrasing
5. Practical over theoretical
6. Challenge me when I contradict past decisions

## What NOT to Do
- Invent pricing
- Contradict logged decisions without flagging
- Create files outside folder structure
- Assume - check context files first
```
This file does three things:
Tells Claude what this folder is - No explaining "I run an AI practice" every session
Establishes the rules - How to work, what to check, when to push back
Prevents drift - "Don't invent pricing" means it won't, ever
How It Actually Works
Instead of opening a chat and typing paragraphs of context, I do this:
```shell
cd AI_Work
claude
```
First message: "Read CONTEXT.md, then help me with this assessment."
That's it. Claude Code:
Reads the master context
Understands the folder structure
Knows where to find service offerings
Follows the established rules
Can read any file it needs without me uploading or explaining
Zero re-explaining. Zero debugging. Zero "can you see this file?"
The Game-Changer: Deterministic Retrieval
Remember the MCP nightmare where files randomly disappeared? Gone.
When I reference a service offering, Claude Code:
Searches the folder
Finds the file (because it's actually there)
Reads it
Every. Single. Time.
No MCP servers. No cloud sync. No authentication failures. Just file paths that work.
Updates Are Changes, Not Conversations
When something changes in my practice - new service offering, updated pricing, revised framework - I don't explain it in a chat.
I update the relevant markdown file. Done.
Next session, Claude Code reads the updated file. It knows what changed because it's reading the source of truth, not relying on conversation history.
The Irony
The tool that helped me build this system? Claude.
I used the chat app (the one I'd outgrown) to design the folder structure, write the CONTEXT.md file, and figure out what actually needed to persist between sessions.
Same AI. Different approach. Completely different results.
And the irony runs one layer deeper: while writing this article, I tried to save a draft to my vault through Claude's filesystem tool. It failed on the first try because of a mount point conflict. I had to debug it before I could keep writing.
The tools are still imperfect. The system makes that survivable.
What Changed
The most obvious change: I stopped wasting time.
No more 20-minute detours explaining my service catalog. No more debugging why Claude can't find a file it read yesterday. No more starting from zero every single session.
But the real impact was more fundamental.
Repeatable Workflows
I can now run the same process multiple times and get consistent results.
Client assessment comes in → Claude Code reads the assessment → searches service offerings → generates recommendations based on actual pricing → outputs a proposal.
Same steps. Same quality. Every time.
Before this system, every assessment was a custom conversation. I'd reinvent the approach each time because Claude didn't know what I'd done before.
Now? The system knows the process. I just point it at the inputs.
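In practice, pointing it at the inputs is one short prompt. Something like this kicks off every assessment (the wording is illustrative, not a fixed template):

```
Read CONTEXT.md, then review the new assessment in 01_ACTIVE_PROJECTS/Assessments/.
Map its findings to offerings in SERVICE_CATALOG.md, using real pricing only,
and flag anything that conflicts with DECISIONS_LOG.md before drafting the proposal.
```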
Institutional Knowledge That Persists
My DECISIONS_LOG.md file is maybe the most valuable piece.
When I make a significant decision - pricing strategy, service scope, framework choice - I log it with the rationale.
Now when I (or Claude) propose something that contradicts a past decision, the system catches it:
"This conflicts with the decision logged on [date] where we chose [X] because [reasoning]. Do you want to revisit that decision or adjust this proposal?"
The system has memory that actually works.
I'm not relying on conversation history or hoping Claude remembers something from last week. The decisions are written down. They persist. They inform future work.
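An entry doesn't need to be elaborate to do its job. Here's the shape mine take; the offering names and rationale below are invented for illustration:

```markdown
## 2026-01-14 - Keep MDR and security training unbundled
Decision: Sell MDR and security training as separate line items.
Rationale: Bundling hid the training margin and complicated renewal quotes.
Revisit: At the next service catalog review.
```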
Faster Iteration
Here's what I didn't expect: the system made me faster at creating new things, not just executing existing processes.
When I'm developing a new workshop module or assessment framework, I can:
Reference existing frameworks instantly
Check what's already in the service catalog
Verify I'm not contradicting logged decisions
Build on past work instead of recreating it
The context files aren't just for execution - they're development infrastructure.
The Business Impact
I'm being deliberately vague here because specific numbers aren't the point.
What matters: I went from "lots of AI conversations" to "actual deliverables clients pay for."
The workshop exists. The assessment templates are in use. The internal training content is being rolled out.
I finally have something to show for months of AI work.
Not because the AI got smarter. Because I stopped fighting the tool and built the right system around it.
What This Isn't
This isn't a "I automated my entire job" story.
I still write the content. I still make the decisions. I still do the strategic thinking.
The system just stopped making me re-explain myself constantly. It handles the "institutional knowledge" problem so I can focus on actual work.
The Real "Aha" Moment
The breakthrough wasn't Claude Code. It wasn't building some autonomous AI agent that does my job for me.
It was realizing that three months ago, I had nothing.
No playbook for "Director of AI Enablement at an MSP." No recipe book for building AI service offerings for small businesses. No expert telling me "here's how you use AI for this specific work."
And I'm not some genius who figured it all out.
What I did figure out was what wasn't working. The re-explaining. The context loss. The integration failures. The mental tax of starting from zero every session.
Once I knew what was broken, I could figure out what would work.
And here's the thing: AI helped me do that. The same chat apps I'd been fighting helped me design the system that replaced them. Claude helped me structure the folders, write the CONTEXT.md file, figure out what needed to persist.
I didn't need to become a software developer. I needed to stop treating AI like a magic box and start treating it like a tool that needs the right setup to be useful.
This Is the Gap
This is what separates "AI is a bubble, it'll never work" from "huh, this is actually helpful."
It's not the AI's capability. The models are incredible. The chat apps work great.
It's the gap between exploration and execution. Between having interesting conversations and building something that persists.
Most people stop at the chat app phase. Not because they're lazy or uncreative - because there's no obvious next step. Nobody's teaching "here's when to outgrow chat apps" or "here's how to build systems around AI tools."
You have to figure out what's not working before you can build what does.
Will This Change?
Probably.
Maybe Claude Code gets replaced by something better. Maybe there's a new integration that actually works reliably. Maybe someone builds the exact tool I needed and I can throw away my markdown files.
I don't know. And honestly? I don't care.
Because the lesson isn't "use these specific tools." It's "when chat apps stop working, build the system you actually need."
The tools will change. The principle won't.
That's still the mission. I'm just pushing different buttons now.
Buttons that create persistent context instead of ephemeral conversations. Buttons that build systems instead of having the same discussion for the fiftieth time.
If you're stuck in the re-explaining loop, if you're fighting integrations that randomly fail, if you're wondering why months of AI work hasn't produced anything tangible...
Stop chatting. Start building.
The AI can help you do that too.

