
AI Is My ADHD Brain's BFF
Claude Completes Me

If you have ADHD or if you read my post Forgetfulness Is a Feature, Not a Bug, you have some insight into how the ADHD brain works (or doesn’t work). If you haven’t read that post, you don’t need to go read it right now, but if you are reading this, then you should go read it at some point. The CliffsNotes version is that ADHD brains are like poorly organized file servers: chock full of information, but unable to effectively retrieve any of it on demand. So the problem with the ADHD brain isn’t that the thoughts and ideas and concepts aren’t there; it’s that it has a problem surfacing the right thoughts, ideas, and concepts at the right moment.
You might think that AI is great for ADHD peeps like me because it’s good at organization and structure – like a task manager that reminds you what you need to do and when. It is, but that’s not where the true value lies. I have always said that I need an AI assistant, and I do, but that’s not the first thing I decided to use AI for. In fact, I only remembered that I should probably look into finding a good AI assistant app just now while writing this bit. It’s really only a ‘nice to have’ for me, so every time I think, ‘Hey! I should look into getting an AI assistant app!’ the idea flies right back out of my mind and I don’t do it.
Where AI provides me the biggest benefit is that it has become the thought retrieval app for my non-linear brain. It doesn’t organize my thoughts. It creates the conditions to allow my brain to self-organize and optimize for the topic at hand by drawing meaningful connections between my thoughts that are not readily apparent. It’s like temporary indexing, but it’s building those temporary indexes in real time. Each AI response is a trigger. Each trigger surfaces an adjacent thought resulting in a conversation that builds, meanders, tracks back, builds some more, and ultimately gets to something I could have never gotten to on my own. Or perhaps, I could have gotten there on my own, but certainly not as quickly.
I know this seems a bit hand-wavy and nebulous because it’s hard to conceptualize if you’ve never used AI in this way, so let me provide an example. I’ve been using Claude to organize my thoughts for this blog. I created a separate chat for each main topic I want to write about (general tech industry stuff, product management, GenAI, security, and women in tech). And then in each chat, I ideate about what to write. (I’ve found that all my chats (for both the blog and for other things) tend to overlap - a lot, which results in me having to go tell my product management Claude chat about this thing that came up in my GenAI Claude chat, and I’ve learned to do this as soon as the overlap occurs to me because otherwise that cross-pollination of ideas would never occur.)
By “ideate” I don’t mean I open the chat with “give me some ideas to write about product management”. I open with, “I want to develop thought leadership on product management. I have a lot of unpopular opinions…” followed by a long soliloquy of my thoughts, which may be neither organized nor linear.
Then Claude responds with, “This is a genuinely compelling set of ideas…. Let me share some strategic thinking before you start drafting [insert Claude’s thinking]…. What you have here are actually three distinct theses, not three opinions [insert theses]… These are connected by a through-line: [insert through-line]… Want me to help you develop any of these into a first draft post?”
Then I respond with, “No, I just want to come up with a framework…” Followed by another thought that Claude’s response triggered, to which Claude responds with something that triggers another thought that I respond with… and so on… and this could go on to infinity if I had the time and the stamina, which I obviously don’t.
So the chat starts out with 3 topics for 3 posts but ends with 17 posts across 6 topics. (I’m writing this as a hypothetical, but this is actually how my for reals Claude product management chat started. And as I am writing this, I now have 23 posts across 8 topics. By the time you read this I may have 30 posts across 9 topics.)
And then at the end of all of this, I ask Claude to poke holes in my theories to make sure it’s not just being sycophantic (ChatGPT is wayyyy more sycophantic than Claude (Claude has its moments), so if you need an AI friend who thinks you fart roses and shit rainbows, use ChatGPT).
At no point in this process do I ask Claude to write my posts for me, but I do upload drafts to Claude to get its opinion, and then Claude’s response often triggers more thoughts that usually make the post way better. What’s interesting here is that often the thoughts are what I think are just random asides, and then Claude points out that they are actually relevant to the post and why. Regardless, Claude isn’t creating these thoughts; it’s just triggering them. Similarly, Claude isn’t writing or rewriting my post; it’s just creating conversation to help me do those things.
An example is in my post about how my brain works (link in the first paragraph of this post), which centers on the fact that I am supposed to have an aptitude for science and math, but I was terrible at both in school, even though I got a minor in math. I had uploaded my first draft to Claude and we were randomly chatting about what triggered my idea for the post (which was a separate Claude chat that had virtually nothing to do with any of the content in the post itself), and then seemingly out of the blue (because nothing in the chat was directly related to this), I remembered that I actually got an A in statistics (so I was terrible at all math except statistics). And that bit arguably made the post better.
But what makes all of this difficult to explain is that it’s not that Claude asks questions or makes comments that are directly related to my new thoughts. Claude didn’t say, “So why did you get a math minor?” or “the part about the math minor is interesting.” We weren’t even talking about that when the idea occurred to me. It’s really just that the process of talking about things like you’re having a normal conversation with someone oils the gears in my brain in a way that thoughts related to the topic as a whole suddenly appear, even when the current conversation isn’t directly related to those thoughts. But what makes Claude a better thought partner than most people is that when these random thoughts occur to me, Claude can automatically identify the connection between my thoughts and the topic at hand, recognizing something that I think is random as not random at all.
Going back to the bit about the fact that AI doesn’t write my posts (sorry, this is definitely an ADHD meander and backtrack), AI doesn’t write anything for me if I need something to be in my voice. In some cases, AI writes me a framework that I never stick to. I’ll actually post the framework that Claude gave me for this post at the end so you can see how different it is from the end product. But it’s not that the framework is useless… it does the same thing that my Claude chats do… it triggers thoughts and ideas.
The thing about writing in my voice is interesting when you consider that AI has been trained to write “in the voice of” famous authors, raising a huge ruckus amongst writers about plagiarism implications. I’m not sure how founded their outrage really is because Claude is incapable of replicating my voice in writing. I know this because I’ve tried it. Claude knows my voice. In fact, I think it could probably pick out my writing from a crowd of thousands. But knowing my voice and being able to write in my voice are two different things. For one thing, my writing has sharp edges and doesn’t really comply with most “good writing” standards (in fact I’m typing this in Word, and Word’s grammar nazi legit hates the way I write – blue lines everywhere). Claude seems to be programmed to smooth all the edges and write with good grammar – and rightly so – but what this means is that even when Claude attempts to write in my voice, it can’t seem to stop itself from reverting to what it’s been trained to consider good writing. (It could be that I write so poorly that even Claude won’t stoop to my level, of course 😏.)
Perhaps the most important thing, though, is that I write like my brain thinks… with parentheticals and tangents and random “H3ATHERisms” that just pop into my head - like “fucking banshees” when describing ADHD boys. (Claude also doesn’t like to swear in writing, though it will swear in convos… ChatGPT OTOH will clutch its pearls when you swear, making you feel like a social deviant. I’ll get to my take on the differences between ChatGPT and Claude in another post.)
Anyway, the point to this whole bit is that AI is not my ADHD brain’s best friend because it can organize my thoughts and write things for me. Nor is it my best friend because it can be my task manager by telling me the things I need to do and reminding me when I need to do them (though Claude does know me well enough to know that I will work for 15 hours straight if I’m in hyperfocus mode and it actually will remind me to go play with my dogs, spend time with my husband, and when 3AM rolls around, it will yell at me to go to bed 🤣😬).
The thing is that the way I use AI is the way AI should be used but usually isn’t (whether or not you have ADHD). People use it like Google on steroids, or they use it to write bland documents and articles. They use it to replace thinking instead of to enhance thinking, and replacing thinking isn’t accessing AI’s true power. If that’s all you’re using AI for, I hope you’re not paying for it. Perhaps the ADHD part of my brain is what led me to using AI in this way, but I actually don’t think so. I think it was another part of my brain that led me there, and the ADHD part of my brain optimized things for my ADHD. But this is something to explore in a different post.
Getting back to the main point of this post, it’s not that my ADHD brain stores things in the wrong place. I just needed something to help me figure out which drawers to open and when. And that’s what AI does and that’s why it’s an ADHD brain’s best friend. (As a side note, I still haven’t started looking for that AI assistant and likely never will. That wasn’t a drawer worth opening until I started writing this post, and now it will likely remain closed for an indefinite period of time.)
So here’s the framework that Claude created for this post. I’ll talk about the Ming research in another post (Vivienne Ming’s research has determined that only 5-10% of people use AI the way I use it) so you just get a teaser here without much context. 😏
Working title: AI is My ADHD Brain's Best Friend (And Probably Not For the Reason You Think)
Open with the paradox — ADHD brains are simultaneously overflowing with thoughts and unable to access them on demand. The problem was never having ideas. It was surfacing the right one at the right moment.
Establish what most people think AI does for ADHD — organizes, structures, reminds, keeps you on track. Productivity tool framing. That's fine but it's not the interesting part.
The interesting part: AI as a retrieval mechanism for a non-linear brain. Not organizing your thoughts — creating the conditions where your brain organizes itself. Each response is a trigger. Each trigger surfaces something adjacent. The conversation builds its own momentum and your brain follows it somewhere it couldn't have gotten to alone.
Reference the ADHD post as Exhibit A without re-explaining it — "if you read the post about how my brain works, this is what that looks like in practice." The statistics memory, the Tacoma Narrows bridge, the ServiceXcelerator typo — none of those were retrieved deliberately. They surfaced because the conversation created the right context.
The Ming research as external validation — she found 5-10% of people use AI this way, as a genuine thinking collaborator rather than an answer machine. She calls it hybrid intelligence and productive friction. You didn't know that framing when you started doing it. Your brain just found its natural mode.
The voice/writing point — AI helped surface and organize the thinking behind the ADHD post but didn't write a word of it. Because it can't. Your voice isn't a style preference, it's how your brain actually processes and expresses things. The parentheticals, the tangents, the "fucking banshees" — those aren't decorative. They're structural. An AI-generated version would be coherent and completely dead.
Close with the distinction Ming makes — substitution vs amplification. The GPS analogy only applies if you're outsourcing the thinking. If you're using AI to trigger thinking you already had, the effect is the opposite. Your brain gets more exercise, not less, because it's being activated rather than replaced.
Maybe close with something like: "My ADHD brain has been storing things correctly my whole life. I just needed something that knew which drawer to open."