A Year of AI-Assisted Writing
What I Discovered About Process, Voice, and the Unexpected Joy of Writing
I spent most of my 30-year academic career struggling with writing. This was not for lack of effort or training. I completed a PhD that required extensive written work and published roughly 80 academic papers in my pre-AI career. But each felt more painful to write than the last. Every article became an exercise in frustration: I watched colleagues draft fluent prose while I labored over sentences that never quite sounded right.
I now understand that this difficulty stems from my neurodiversity. The specific challenges I face with written language are part of who I am neurologically, not a matter of insufficient practice or inadequate instruction. Beginning in high school, language classes posed persistent challenges. While I earned straight A grades in nearly every other subject, I consistently struggled to reach a passing grade in English and in German, my native language.
This pattern continued throughout my academic career. The only reason I survived in academia was that I operated in STEM fields, where colleagues cared primarily about content rather than prose quality. A paper with clumsy phrasing but solid methodology would pass peer review, but a well-written paper with methodological flaws would not. This arrangement allowed me to build a successful academic career despite my limitations with written language.
Things feel very different now.
The Augmented Educator recently passed its first anniversary, marking a year since I began working with AI-assistance to write in ways I could not before. This milestone seems an appropriate moment to reflect on how my relationship with the written word has grown and to explain the practical details of my AI-assisted writing process.
I need to point out that I published an ethics statement early on, outlining what I believe are the core ethical obligations when using AI-assistance. But I have not yet discussed how I actually work with these tools in practice. Over recent months, this process has consolidated into something more deliberate and reliable. I feel that now is the time to explain how it works.
My Relationship With Text
Over the last year, my relationship with written language has transformed in ways I did not expect. Instead of avoiding any writing task requiring more than a few hundred words, I now genuinely enjoy engaging with texts. Developing articles for the Augmented Educator has become one of my favorite activities. This is not hyperbole or promotional language. It is a simple statement of fact that would have seemed absurd to me two years ago.
I increasingly find myself almost disappointed that the next few blog posts are already scheduled, because I want to turn a new idea into an article immediately. I have arguments I want to make and connections I want to explore. This is typically when I add bonus essays like this one, fitting them into gaps in the schedule while trying not to overburden readers.
The change extends beyond mere willingness to write. I have also developed a surprising knowledge of grammar and textual structure. Who knew that em-dashes and en-dashes were distinct entities with different functions? I never thought about split infinitives as a style choice rather than a grammatical error. And I did not know about hedging language, the rule of three, or structural monotony as concepts one could deliberately manipulate.
I am simultaneously disappointed and excited. Disappointed that tools for making these elements visible did not exist when I was a young academic 35 years ago. How different might my career have been with AI-assistance that made writing mechanics comprehensible? But I am also incredibly excited because I can now bring my ideas to paper in a way people can actually read without hitting walls of awkward phrasing or confusing structure.
The quality of the writing on this blog is not something I could have produced alone. But it is also not something an AI could have produced without me. It is genuinely collaborative work in ways that challenge conventional assumptions about both human creativity and machine capability.
Finding My Voice
After publishing nearly 100 AI-assisted essays on Substack, I am beginning to find my voice in this process. Or perhaps more accurately, I am discovering what “my voice” means when the actual construction of sentences involves AI-assistance.
I recently encountered an article on a major news site that sounded like an article from my blog. Not in content, as it addressed a completely different topic, but in cadence, structure, and approach. The author likely used an AI-assisted writing process similar to mine. My first reaction was curiosity. This is what “my voice” sounds like when constructed with a style guide and a set of preferences about structure and rhythm, combined with a particular way of thinking about ideas.
I do not mind that “my voice” is not entirely “mine” in the traditional sense. This does not trouble me the way it might trouble a literary fiction writer or poet. I care primarily about communicating ideas to others effectively and clearly. I am not a professional creative writer, and I do not think AI-assistance would or even should get me there. Creative writing demands a level of linguistic craft and originality that AI-assistance, at least in its current form, cannot reliably provide.
I am a science communicator, a scholar with a keen interest in making ideas accessible to a broader audience. For this kind of writer, AI-assistance is close to ideal. My value lies in thinking carefully about complex questions, synthesizing information from multiple fields, and developing frameworks that help others understand those questions better. The specific arrangement of words that conveys these ideas matters, but it matters primarily for clarity and readability rather than for aesthetic innovation.
This distinction seems important. Different writing goals require different relationships with AI-assistance. A novelist seeking to develop a distinctive literary voice faces different challenges than a scholar seeking to explain research clearly. Neither goal is more legitimate than the other, but they benefit differently from AI-assistance.
How the Process Begins
My writing process typically starts with a trigger. This might be a comment on one of my posts that raises an interesting question I had not yet considered. It might be a YouTube video that connects to something I have been thinking about. Or it might be a conversation where someone says something that makes me pause and think, “Oh, that’s interesting.”
From there, I first develop a research brief using a deep research model. These models have improved dramatically over the past year, producing more accurate results with fewer hallucinations. I have found that Gemini 3.0 Pro currently produces the most reliable output for my needs, though this assessment is based on anecdotal experiences. Outright hallucinations have become rare, and when they appear, they are typically minor details that do not affect the principal argument.
My recent article about anthropomorphizing AI offers a good example of how this works in practice. I had started binge-watching the “Itchy Boots” channel on YouTube. The channel documents solo motorcyclist Noraly Schoenmaker’s journeys around the world. Noraly rides through remote regions, often for weeks at a time with minimal human contact. The production quality is excellent, and the content is genuinely engaging, with a heartwarming level of radical wholesomeness.
After several episodes, I realized something quite interesting: Noraly was anthropomorphizing her motorcycle. She named it “Frankie,” talked to it and about it, and gave it personality traits. The practice was so casual and natural that it took me some time to even notice it. This was not performed for the camera or played for comedy. It seemed to be how she actually related to the machine.
This interested me because I had previously contemplated a different but related question: why do we criticize anthropomorphizing AI while having no issue with anthropomorphizing other inanimate objects? We name our tools, talk to our computers, treat our smartphones as companions. But when someone refers to an AI as something more than just an engineered system, there is often immediate pushback. The inconsistency seemed worth exploring.
With that question in mind, I asked Gemini Deep Research to develop a brief about the history and theory behind the human tendency to anthropomorphize objects. I wanted to understand the psychological mechanisms, the historical patterns, and the theoretical frameworks scholars used to explain this behavior.
Verification of Research Briefs
It is important to point out that such AI-generated research briefs can never be taken at face value. They require careful verification. While straight fabrications have become quite rare with the newer models, some claims can lack verification through primary sources. This is particularly common when the AI cites internet forums or secondary sources that themselves lack clear sourcing.
In the anthropomorphism research brief, for example, Gemini cited several Reddit posts making fascinating claims about the anthropomorphization of ships in early antiquity. According to these posts, ancient mariners did not just name their ships but developed elaborate rituals around them. The posts were written by people who seemed to be experts in their fields; their comments included technical terminology and very specific historical references. This suggested the claims were likely accurate.
But I am not an expert in maritime history, and I could not verify these assertions through primary sources. I did not have access to the ancient texts being referenced, and even if I did, I lack the linguistic and historical expertise to evaluate them properly. So, I removed those claims from the research brief. It is much more appropriate to narrow the scope of an argument than to include unverifiable assertions, however plausible they might seem.
This represents an interesting challenge with AI-assisted research. The tools are sophisticated enough to find connections and claims that sound authoritative and often are. But they are not yet sophisticated enough to reliably distinguish between well-sourced claims and plausible-sounding assertions that lack solid evidence. That judgment still requires human expertise and careful verification.
At the same time, AI can connect interdisciplinary concepts in ways that are genuinely valuable and not immediately apparent to human researchers working within conventional disciplinary boundaries.
In the same research brief, Gemini linked anthropomorphizing AI with Bruno Latour’s Actor-Network Theory. This theoretical and methodological approach to social theory argues that both human and non-human actors have agency in social networks. Treating AI as an actor with some degree of agency, rather than as a passive tool, aligns closely with Latour’s approach.
I had actually considered this connection before. I even pointed one of my PhD students in that direction, thinking it might make an interesting research project. We did not pursue the idea further because the student’s interests developed in a different direction. And I have seen no one else make this connection explicitly in published work, though it almost certainly exists somewhere in the vast landscape of academic literature.
Seeing this connection emerge in a research brief where I did not explicitly prompt for it was striking. The AI recognized a conceptual parallel between the two domains even though they were not typically discussed together. This is the interdisciplinary synthesis that AI can sometimes perform effectively, connecting ideas across fields in ways that might not occur to human researchers constrained by disciplinary training and convention.
Creating the Recipe
Once I have the research brief, I always write a rough draft manually. This draft takes the form of what I call a “recipe” with absolutely no consideration for writing quality. The goal is simply to get ideas out of my head and into some kind of external form.
Whatever flows from my mind gets written down regardless of semantic elegance, grammatical correctness, or stylistic coherence. Sentences fragment. Ideas jump around. Connections are implied rather than stated. The draft might include phrases like “talk about ships here,” “connect to Latour somehow,” or “this needs an example.” It is purely functional, a map of what I want the final article to accomplish.
I have tried using voice-to-text dictation for this stage. The technology has improved dramatically, and many people swear by it. But for whatever reason, this approach does not work for me. Perhaps it is the same neurodiversity that made traditional writing difficult. I need to actually type the draft or recipe into a basic text editor, watching the words appear on screen as my fingers move.
There is something about the physical act of typing that helps me think through ideas in ways that speaking aloud does not. Writing instructors might find my experience comforting, as it reinforces the idea that writing is a gateway to thinking critically.
Depending on the research brief’s quality and how much restructuring I need to do, this manual draft might be several hundred words or more. If the research brief is already structured in a way that represents my thinking about the final article, the manual draft will be shorter. If I need to substantially restructure content from the research brief, I add more detailed instructions about what information should appear and in what order.
If an article does not need a research brief because it is not based on something requiring extensive background research—like this article, for example—the manual draft can be quite substantial. The manual draft, or “recipe,” for this piece ran over 1,000 words. I needed to capture not just ideas but also the narrative structure I wanted: beginning with personal history, moving through process description, and ending with implications.
AI-Assisted Ghostwriting
I then use this manual draft as a base prompt for Claude 4.5. I have experimented with different AI models at this stage of the process. Claude consistently produces prose that works best for my purposes. It is clear without being simplistic and maintains a consistent tone appropriate for scholarly communication aimed at a broader audience. This assessment is pragmatic rather than ideological. It is based on which outputs require the least revision to match what I am trying to achieve.
In this phase of the process, I instruct Claude to “ghostwrite” an article draft based on my “recipe.” In doing so, I provide two additional system prompts, both of which have evolved considerably over the past year.
The first prompt contains instructions about general scholarly writing. You could call this the prompt that establishes “my voice.” This prompt tells Claude to vary sentence structures and lengths. It instructs the model to avoid overusing em-dashes. It warns against the overuse of parallel structures of three: the tendency to list items in threes simply because that pattern sounds pleasing. And it includes similar guidance about avoiding certain predictable patterns that appear frequently in AI-generated text.
The second prompt is a more detailed style guide containing specific instructions about the style I expect for an Augmented Educator article. This prompt has grown longer over time as I have identified more patterns I want to encourage or discourage. It describes the expected audience of educators and intellectually engaged readers. It specifies the approximate article length and my preference for scholarly prose that avoids bulleted lists in favor of developed paragraphs. And it includes examples of the way I typically structure arguments.
Together, these prompts shape how Claude interprets my rough draft. The model does not simply clean up grammar and fix sentence structure. It translates my fragmentary notes into readable prose while maintaining certain stylistic preferences and avoiding certain predictable patterns.
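I work through chat interfaces rather than code, but for readers who want to see the mechanics, a scripted version of this ghostwriting step might look like the sketch below, using the Anthropic Python SDK. The file names, prompt contents, and model identifier are illustrative assumptions, not my actual setup.

```python
# A sketch of the ghostwriting step using the Anthropic Python SDK.
# File names, prompt contents, and the model identifier are
# illustrative assumptions, not my actual setup.
import anthropic

# The two evolving system prompts: general scholarly voice rules
# and the blog-specific style guide.
voice_prompt = open("prompts/scholarly_voice.txt").read()
style_guide = open("prompts/style_guide.txt").read()

# The manual "recipe": fragmentary notes about content and structure.
recipe = open("drafts/recipe.txt").read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

message = client.messages.create(
    model="claude-sonnet-4-5",  # stand-in for whichever Claude model is current
    max_tokens=8000,            # headroom for a draft of roughly 2,500 words
    system=[                    # both prompts shape every generation
        {"type": "text", "text": voice_prompt},
        {"type": "text", "text": style_guide},
    ],
    messages=[{
        "role": "user",
        "content": "Ghostwrite a complete article draft from this recipe:\n\n" + recipe,
    }],
)

print(message.content[0].text)  # the first draft, ready for review
```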
I typically aim for around 2,500 words per article these days. I know conventional wisdom recommends keeping blog posts shorter. But I need a certain length to develop arguments properly. Complex ideas require space to unfold, and I need room to acknowledge counterarguments, provide examples, introduce theoretical frameworks, and explore implications.
As ironic as it may sound, I consider my articles to be AI-assisted writing for people who actually want to read and enjoy written long-form content.
I have, however, started including voiceovers created with ElevenLabs. These are meant for people who prefer listening to reading. The voiceovers are not perfect. ElevenLabs sometimes struggles with correct intonation, particularly with technical terms or when trying to decipher complex sentence structures. But the voiceovers serve their purpose adequately.
Revising the Draft
After Claude writes the first draft, the article typically takes one of two directions. In roughly half the cases, the proposed draft has not yet reached the level I want. This is not usually a failure of the AI so much as a mismatch between what I specified in my recipe and what I actually meant. I might realize that an argument needs more development or that a section should come earlier in the piece.
In these cases, I take the article and feed it back into Gemini with a request to scrutinize it and recommend improvements. The model has proven quite successful in identifying logical gaps, unclear transitions, or places where claims need more support. It will note when a paragraph makes an assertion without adequate justification, when the connection between two sections is not clear, or when technical terminology appears without sufficient explanation.
I then manually edit these improvement recommendations. This is important: I do not simply accept Gemini’s suggestions wholesale. The model sometimes misunderstands my intent or recommends changes that would alter the argument in ways I do not want. I therefore need to evaluate each recommendation, deciding which ones address genuine problems and which ones reflect the model’s limitations or different priorities.
After editing the recommendations, I return to Claude and ask it to implement them. This usually produces the results I am aiming for. The combination of Claude’s prose capabilities and Gemini’s analytical approach tends to resolve most issues with the draft. Sometimes, though, I need to iterate this review process, repeating it until I am satisfied with the draft.
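Again, I do this through chat interfaces, but one round of the critique-and-revise loop could be scripted roughly as follows, pairing the google-genai SDK for review with the Anthropic SDK for implementation. Model names and prompt wording are assumptions; the manual editing step in the middle is the part that matters.

```python
# One round of the critique-and-revise loop, sketched with the
# google-genai and anthropic SDKs. Model names and prompt wording
# are illustrative; the manual editing step is the point.
import anthropic
from google import genai

gemini = genai.Client()         # reads GOOGLE_API_KEY from the environment
claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY

draft = open("drafts/draft_v1.md").read()

# Step 1: ask Gemini to scrutinize the draft.
review = gemini.models.generate_content(
    model="gemini-2.5-pro",  # stand-in identifier
    contents="Scrutinize this article draft. List logical gaps, unclear "
             "transitions, and claims that need more support:\n\n" + draft,
)

# Step 2: the recommendations get edited by hand before they go anywhere.
open("drafts/recommendations.txt", "w").write(review.text)
input("Edit drafts/recommendations.txt, then press Enter... ")
recommendations = open("drafts/recommendations.txt").read()

# Step 3: ask Claude to implement the vetted recommendations (in practice,
# the two system prompts from the ghostwriting step stay in place;
# omitted here for brevity).
revision = claude.messages.create(
    model="claude-sonnet-4-5",
    max_tokens=8000,
    messages=[{
        "role": "user",
        "content": "Revise this article draft by implementing these editorial "
                   "recommendations:\n\n" + recommendations + "\n\nDRAFT:\n\n" + draft,
    }],
)

open("drafts/draft_v2.md", "w").write(revision.content[0].text)
```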
Once the draft is at a point that I like, I take it into ProWritingAid and make final edits and corrections. This stage often reveals interesting patterns about AI-generated text. Claude makes a surprising number of punctuation errors that need fixing. Commas go missing or land in the wrong places, and semicolons get used where periods would be clearer. And some overly grandiose words usually need to be brought back down to plain language.
Sometimes, entire sentences require restructuring. A sentence might be grammatically correct and even well-written by some standards, but it disrupts the rhythm of the paragraph or creates an awkward transition. I often adjust the transitions between sentences, making sure ideas flow naturally from one to the next rather than feeling like a series of independent statements placed adjacently.
Section titles usually also need adjustment. Claude tends to produce titles that are either too generic (like the dreaded “Moving Forward”) or too elaborate (“Navigating the Complex Landscape of Contemporary Educational Practice”). I prefer titles that are descriptive and specific without being pretentious.
Most importantly, I usually need to remove several sentences or entire paragraphs from the conclusion. Claude lacks access to my other blog posts and tends to end essays with broad statements about the future of AI in education, the importance of maintaining human elements in technological systems, or the need for thoughtful integration of new tools. These conclusions are not wrong, but they risk becoming repetitive across my blog.
Final Source Verification
After completing edits in ProWritingAid, I usually make one final pass through Gemini to double-check the accuracy of claims made in the text. I ask the model to verify specific factual claims: dates, names, descriptions of research findings, or technical details. It is quite rare at this point that Gemini catches something I missed in my manual verification process. But it happens.
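Scripted, that verification pass could be as simple as the sketch below; the prompt wording and model name are again illustrative assumptions rather than my exact practice.

```python
# A sketch of the final fact-checking pass: ask Gemini to enumerate
# verifiable claims and flag anything doubtful. Prompt wording and
# model name are illustrative assumptions.
from google import genai

client = genai.Client()
article = open("drafts/final.md").read()

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="List every verifiable factual claim in this article (dates, names, "
             "research findings, technical details). Mark each as confirmed, "
             "doubtful, or unverifiable, and explain why:\n\n" + article,
)

print(response.text)  # anything flagged goes back into manual review
```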
And as a last step, I add references and links to the article. I do this manually, trying to find primary sources for any substantial claims. If I am discussing a research study, I want to link to the actual study or at least to a detailed account of it in a reputable publication. If I am describing a controversy, I want to point to original reporting rather than secondary or tertiary accounts.
This is particularly important when writing something more academic, such as for a conference presentation. Academic audiences expect careful sourcing and will notice if references are inadequate or inappropriate. But for a blog post, I am usually comfortable with secondary references as long as they come from publications with editorial standards and fact-checking processes, such as major newspapers, established magazines, reputable online publications, or academic outlets.
The Promise of AI-Assistance
This workflow reveals something important about the current state of AI-assisted writing. The process is not about pressing a button and receiving finished prose. It is a complex dance between human judgment and machine capability, requiring multiple tools, iterative refinement, and constant verification.
Each tool in my workflow serves a different function and has different strengths. In my experience, Gemini excels at research and fact-checking, at finding connections across domains and verifying claims against multiple sources. Claude produces the most readable prose, transforming my fragmentary recipes into coherent essays while maintaining stylistic preferences. And ProWritingAid catches mechanical errors and identifies patterns that might not be obvious in isolated sentences but become problematic across an entire piece.
My own editorial judgment determines when something is ready for publication. I decide which of Gemini’s research findings are relevant and trustworthy. I choose which recommendations for improvement to implement. I evaluate whether Claude’s prose successfully captures what I meant. And I make the final decisions about structure, emphasis, and tone.
The result is not writing that could exist without me. If I gave my research brief and recipe to an AI, it would produce something, but it would not produce this particular essay with these particular choices about what to emphasize, which arguments to develop, and which examples to include. At the same time, I could not have produced the text alone. Without AI-assistance, I could not reliably transform my ideas into readable prose at this scale or with this level of polish.
A New Year Is Calling
As 2025 draws to a close, I find myself with a growing list of ideas waiting to be developed into articles for 2026. The topics range widely: practical pedagogical strategies for integrating AI thoughtfully into classroom practice, theoretical questions about creativity and authorship in an age of AI-assistance, copyright controversies and what they reveal about our assumptions regarding intellectual property, and the relationship between AI-capability and human expertise in different domains.
Each idea feels urgent and worth exploring in depth. This is in itself remarkable. A few years ago, having ideas I wanted to write about produced anxiety rather than enthusiasm. The gap between idea and readable text felt insurmountable. Now that gap has narrowed dramatically, and the ideas themselves generate genuine excitement about the prospect of developing written long-form content.
My experience runs directly counter to common fears about AI-assistance. The standard narrative suggests that AI will make people lazy, that it will atrophy skills through disuse, and that offloading cognitive work to machines will diminish human capability. I understand why this narrative has purchase, and there are certain contexts where it might prove accurate. But I represent a different pattern, one that often gets lost in the polarized debates.
AI-assistance has not replaced my thinking or judgment. It has not made me lazy or less engaged with ideas. Instead, it has provided access to a capability I did not have before. I still think deeply about what I write. I still structure arguments, evaluate evidence, and make judgments about what matters. I still revise, reconsider, and refine. The AI has not reduced my cognitive engagement with writing; if anything, it has increased it.
As we move into 2026 and beyond, the conversations about AI-assistance will grow more complex rather than simpler. New capabilities will emerge. Additional concerns will arise. The ethical questions will require sustained attention. What my experience offers is one data point that challenges simple narratives about displacement and obsolescence. For me, AI-assistance has meant access rather than avoidance, engagement rather than laziness, and joy rather than anxiety.
The images in this article were generated with Nano Banana Pro.
P.S. I believe transparency builds the trust that AI detection systems fail to enforce. That’s why I’ve published an ethics and AI disclosure statement, which outlines how I integrate AI tools into my intellectual work.