AI Didn't Erode My Thinking; It Raised My Ceiling

There is a moment, familiar to anyone who has spent years at the edge of a genuinely difficult problem, when the scope of the thing becomes physically uncomfortable. The mind reaches for a concept, which connects to another, which opens onto a third, and suddenly one is standing waist-deep in five disciplines simultaneously, none of them fully mapped, each demanding real depth. The interdisciplinary scholar’s dilemma: the question is real, the terrain is real, but no single field has jurisdiction over it, no journal quite covers it, and no colleague, however accomplished, has lived in exactly this territory before.

This is the loneliness of fractional curiosity. The more comprehensively one understands an interdisciplinary problem, the harder it becomes to find genuine interlocutors. You’re not looking for validation, but friction—someone who has read the same books, recognizes similar conceptual tensions, and can push back with enough knowledge to be useful.

For most of my career, I rarely came across these interlocutors. They existed in the pages of books and the footnotes of dissertations (and, more recently, on Substack), but not in any room I regularly occupied.

The significance of large language models became apparent to me rather slowly. My professional formation is technical—a PhD in computational social science, twelve years building and deploying AI/ML systems for the Intelligence Community—and precisely because of that formation, the initial hype left me cold. Language models are not search engines. They are probabilistic text predictors—albeit extraordinarily sophisticated ones.

The first time I queried an early version of ChatGPT on a genuinely specialized question—looking for scholarship at the intersection of civilizational theory, history, and emerging technology—it returned ten beautifully formatted, plausibly titled papers that I was amazed never to have come across in my research. I checked four of them against Google Scholar before accepting what I already knew: the system was predicting likely paper titles, not searching actual papers.

I used that story as a cautionary example in my AI literacy courses for the better part of a year.

What began to change my thinking was not a technical revelation but an almost comically mundane one: a search for caviar. A last-minute errand before a dinner party, typed into an interface I had largely dismissed, returned the address of a shop two hundred feet from where I was standing, a shop I had passed dozens of times without knowing it existed. A trivial result. But it was accurate, and it was immediate, and it suggested that the question of what these generative technologies are was considerably more complex than my earlier formulation had allowed. It signaled that the category I had built for these systems—confident and fluent fabricators—was too narrow to be correct.

While the caviar incident piqued my interest, it did not make me a regular generative AI user. That took more than a little intrigue. Full adoption of a new technology, for me, requires captivation.

A few weeks later, I was spending an intellectually restless afternoon in my office, bored with the technical work du jour and clicking around Substack for something to light up my mind. It was the kind of afternoon spent returning to that special category of problems that have resisted resolution for years. Favorite problems.

The problem in question was one I had been carrying since the early days of my PhD: how to think coherently about the relationship between technology and civilization at scale—not the tactical question of what any particular technology does, but the deeper longitudinal questions of how tools reshape the epistemic and social conditions of human life, how civilizational patterns emerge across timescales too great for any single research program to observe, and how the macro-scale patterns of rise and decline might be understood through frameworks at the intersection of technology, society, history, and progress.

These were the types of questions I thought I would study in my PhD. But my professors told me these were life works, not dissertations. These are not small questions. They resist the usual scholarly apparatus because they require genuine interdisciplinarity—not the polite borrowing of methods that passes for interdisciplinary work in most institutions, but the full integration of bodies of knowledge.

Of course, I am not the only one asking these questions, and there are existing outlets for this kind of scholarship and thought (Georgetown University, Roots of Progress, and other corners of the internet). There are also scholars who have done the work of rearranging history around non-standard paradigms in an effort to approach these questions from different directions. A quick look at the sources just mentioned will hint at the level of interdisciplinarity we are talking about.

Despite ample reading, I had found no satisfying synthesis and no obvious interlocutor capable of engaging the full scope of these inquiries.

On that particular afternoon, somewhat on impulse, I typed the question into a frontier language model.

What came back was not an answer—answers were never really the point—but something closer to genuine engagement. The language model was familiar with the books and the papers I’d read. It could discuss them with precision, trace their connections, identify the tensions among them, and extend the conversation into territory that had not yet been mapped. It did not merely retrieve; it synthesized. More importantly, it synthesized in real time, conversationally, at a level of engagement that matched the actual depth of the inquiry rather than flattening it to accommodate a less prepared interlocutor.

In that moment, I had a partner. In that moment, the ceiling was raised. A problem that had resisted meaningful progress in my mind for the better part of two decades began, over the course of an afternoon, to yield. Not to be solved—this is not the kind of problem one solves in an afternoon, or likely even a lifetime, as my dissertation committee pointed out—but the AI worked beautifully as an instrument of extended cognition. It was an interlocutor with the range of an emeritus professor in all five disciplines my present curiosity required. And it was wonderful.

This is the experience that the dominant narrative about AI and cognition does not account for. The prevailing discourse on AI’s effects on human thinking is almost entirely a story about skills erosion and AI taking jobs. Students outsourcing their reasoning. Workers losing capacities they no longer exercise. The slow atrophy of cognitive muscle through disuse.

This narrative isn’t necessarily incorrect, but there is nuance to it. In the context of education and the student, the discourse focuses on the relationship of the novice with the AI, reaching for a tool to fill a gap in knowledge structures not yet formed. In the context of the working population, it speaks about the erosion of skills and the need for retention of human judgment, authority, and arbitration. But subject matter expertise and AI is undertheorized compared with the novice case. AI accomplishes something different when the person using it already knows things deeply, has thought about them for years, and is not seeking an answer but a way to think further; a way to raise the ceiling on a problem or domain.

The novice who uses an AI system to bypass the cognitive struggle through which understanding is built is engaged in a fundamentally different act than the expert who uses the same system to extend the reach of a well-developed knowledge structure. The former risks substituting the appearance of knowing for the thing itself. In the latter case, I would argue, the risks are smaller and of a different kind—contamination of an already formed knowledge structure rather than the prevention of its formation.

For decades, philosophers (most famously Andy Clark and David Chalmers) have argued that the mind is not a thing bounded by the skull—that cognitive tools, when sufficiently integrated, become genuine constituents of thought rather than mere instruments of it. The extended mind theory has always been compelling in the abstract. What current generative AI offers is its concrete realization: a tool sophisticated enough to function not as a retrieval mechanism or a calculator, but as a genuine interlocutor—one capable of meeting a prepared mind where it actually is and pushing it forward.

“We shape our tools and thereafter our tools shape us.” - John Culkin

The expert who develops deep knowledge through years of independent struggle, then uses AI to extend that knowledge further, is plainly different from the novice who attempts to shortcut the struggle entirely. But what of the learner who grows up in an AI-saturated epistemic environment—who fills every gap with AI assistance before the underlying knowledge structure has formed? Does that pathway still produce the same kind of expert? I suspect not. These are also questions worth exploring.

Who should explore them? I am calling on the academy, not just for research, but for a fundamental phase change in what it means to learn, to develop foundational knowledge, to know as an expert, and to extend that knowledge further.

I believe the academic system is primed for a new entrant—at minimum, a new institutional posture. And the subject matter experts, the PhDs and postdoctoral researchers, are ready to move alongside their favorite problems in new ways.

These are the questions that the skills erosion narrative, for all its urgency, does not quite reach. The problem with AI and cognition is not simply that it might make people worse at thinking. It is that it might be changing what thinking is and, for that matter, what it means to know something as opposed to merely having access to it.

What this shift means for how we understand knowing, learning, expertise, and the epistemic foundations of education is the subject of the essays that follow. The argument begins here, with a single observation: AI does not do one thing to human cognition. It does different things, to different minds, depending on what those minds already contain and what they aspire to.

In my own experience as a subject matter expert and scholar-practitioner, I find this shift incredibly empowering. For the prepared mind—the scholar or practitioner working inside a domain they already know—I don’t see erosion; I see ceiling. With generative technologies, I see intellectual capacities increasing and worlds of possibility opening.

Future articles in this series will explore AI, knowing, learning, and what it means to think in an age of agentic counterparts and intelligent machines.
