Ethics & AI: A Statement of Principle and Practice
How AI Tools Support My Educational and Academic Writing
Last Updated: September 18, 2025
As The Augmented Educator, I explore the intersection of human intellect and emerging technology. My guiding principle holds that technology, including Artificial Intelligence, should augment human experience, creativity, and critical thought. This statement outlines how I employ AI systems in creating content for this blog and other platforms, maintaining transparency about both the capabilities and limitations of these tools in my practice.
How I Use AI in My Work
My process remains fundamentally human-led and experience-driven. AI tools function in specific, clearly defined roles within my broader intellectual framework.
As a Scribe and Line Editor: I primarily employ Large Language Models (LLMs), including Gemini, Claude, and ChatGPT, to translate fully formed concepts into clear prose. The core ideas, frameworks, and arguments emerge from my three decades of experience in higher education. These AI systems function as sophisticated line editors, helping me articulate detailed concepts that I have already developed.
The technology proves most useful when I approach it with precise instructions and clear objectives—when the thinking has been completed and only the articulation remains.
As a Creative Sounding Board: LLMs occasionally serve as dialogic partners when I explore alternative perspectives on topics I have already defined. This process resembles a structured thought experiment, one that helps refine arguments and anticipate potential counterpoints. The AI neither generates core topics nor determines my analytical stance.
As a Research Assistant: Specialized AI-powered research tools help identify relevant scholarship that may lie beyond my immediate field, particularly when synthesizing interdisciplinary work. These systems can summarize complex papers and suggest connections between disparate research areas. However, they supplement rather than replace the fundamental scholarly practices of critical analysis and original synthesis.
The Intellectual Foundation of This Work
The intellectual foundation of every article I publish originates entirely from my own knowledge and professional experience. All frameworks, theories, analyses, and conclusions represent the product of my scholarship and reflection.
I maintain complete editorial control over all content. Every piece undergoes thorough review to ensure it reflects my voice, meets my academic standards, and aligns with my intellectual commitments. I take full responsibility for the accuracy, integrity, and ethical implications of everything that appears under my name.
An Evolving Commitment
The landscape of artificial intelligence continues to develop at unprecedented speed. Ethical practices surrounding AI use in academic and educational contexts remain similarly dynamic. I will continue evaluating my processes and updating this statement to reflect changes in both technology and professional norms. Questions about this policy and its implementation are welcome and encouraged.
A Note on Practice and Theory: My writing frequently examines whether LLMs possess genuine creativity or merely simulate it through sophisticated pattern matching. My own practice provides an interesting data point in this debate: I achieve the strongest results when I treat these systems as having no independent creative capacity whatsoever. The more completely I develop my ideas before engaging with AI, the more effectively these tools serve their purpose. This observation doesn't definitively answer questions about AI creativity, but it does suggest that, at least for now, these systems function best as amplifiers of human thought rather than as autonomous creative agents.
On AI Detection: This very statement, developed using the exact process I describe above, registered as "completely human-generated" when processed through standard AI detection tools at the time of publication. This outcome underscores a critical point I make throughout my work: current AI detectors cannot reliably distinguish between human and AI-assisted writing, particularly when AI is used thoughtfully as an editorial tool rather than a content generator. The futility of these detection systems highlights why transparency and honest disclosure must form the foundation of ethical AI use in academic contexts.