AI and Publishing
Preparing for what’s next
Every publisher, editor, and writer knows it: we are crossing some kind of Rubicon in literary creation and publishing. In “AI, Creativity, and the Three-Fold-Steady-State Literary Economy” I describe the problem and project a three-sector equilibrium for the literary (and entertainment) economy. In this essay, I focus on the impact of AI on book and magazine publishing.
To strategize about AI policies for publishing, it’d help to have a formal taxonomy of AI assistance in creative writing. The problem is that there is no universally accepted taxonomy and the landscape is mutating so quickly that what seems obvious in one moment is implausible the next.
The book and journal publisher Elsevier has done some helpful work on this. They distinguish among:
mechanical assistance (language cleanup),
cognitive assistance (idea shaping),
content generation (drafting), and
delegation (AI acting as authorial agent).
Elsevier’s policy is that AI tools may support language and organization, but must not replace human intellectual contribution or responsibility, and must be disclosed when used beyond the surface level.
I think we can split one of these categories (delegation) and add a couple more to reach a seven-fold taxonomy that should be reasonably future-proof, given the steady-state vision of creative literary work outlined in that earlier essay. The defining question at each level is not whether AI was used, but whether it exercised creative agency. (Note: I used AI help to express these seven levels, working between Levels 2 and 3.)
Level 1: Non‑AI Mechanical Tools (Baseline). Spellcheck, grammar correction, basic style linting, find/replace, formatting assistance. No semantic contribution, no new content, no idea shaping. This is universally accepted and already invisible in publishing workflows. Most presses implicitly treat this as “not AI” even when technically it is.
Level 2: AI as Language Polisher. Smoothing sentences for clarity, adjusting tone without changing meaning, grammar and style rewrites that preserve authorial intent. The ideas, structure, voice, and content remain fully human, but AI operates at the sentence surface, like a copyeditor who does not suggest content changes. This is explicitly permitted in most academic and professional publishing policies, with accountability remaining entirely human.
Level 3: AI as Cognitive Assistant. Brainstorming prompts, asking for alternative metaphors, testing how a passage lands with a simulated reader, generating questions, objections, or summaries about the author’s text. AI is like a writing coach, workshop partner, or sounding board, contributing responses, not text that enters the manuscript, and the author decides on everything that survives. This is the most ambiguous boundary and where future pressure will concentrate.
Level 4: AI as Drafting Partner. AI writes paragraphs or scenes that are later revised, AI generates alternative versions of passages, AI supplies raw poetic language later reshaped. AI produces candidate text, and a human edits, selects, reshapes, and integrates; this treats AI like a junior co‑writer whose drafts are heavily rewritten. This is treated as generative contribution in academic contexts and typically requires disclosure if permitted at all.
Level 5: AI as Co‑Author. AI drafts substantial portions of the work, a human acts primarily as editor and curator, voice consistency maintained by the system. In this case, creative agency is shared and authorship becomes ambiguous, like collaborative writing with uneven contributions. This is often prohibited, or allowed only with explicit labeling and governance, because it raises copyright, ethical, and provenance issues.
Level 6: AI as Primary Author (Human‑in‑the‑Loop). AI writes the work end‑to‑end and a human provides prompts, constraints, and approval. This is human responsibility without human creativity, and represents the full delegation of composition. This is generally excluded from literary and scholarly publishing, and treated as machine‑generated content, not authorship.
Level 7: AI as Autonomous Author (Human‑out-of‑the‑Loop). AI writes the work end‑to‑end and human beings ingest the work with no editorial or authorial intent. This is machine responsibility without human creativity or intervention. This is still emerging, technically, but is a vital part of the steady-state entertainment infrastructure.
Many organizations treat Levels 1-3 as assistance and Levels 4-7 as generation/delegation, but both thresholds and designations vary.
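To make the stakes of those thresholds concrete, a publisher's policy can be thought of as a small mapping from taxonomy levels to handling rules. The sketch below is illustrative only: the level labels follow the seven-fold taxonomy above, but the specific cutoffs (assistance through Level 3, disclosure from Level 4, prohibition from Level 6) are hypothetical defaults, not any organization's actual policy.

```python
from dataclasses import dataclass

# The seven levels from the taxonomy above.
LEVELS = {
    1: "Non-AI mechanical tools",
    2: "AI as language polisher",
    3: "AI as cognitive assistant",
    4: "AI as drafting partner",
    5: "AI as co-author",
    6: "AI as primary author (human-in-the-loop)",
    7: "AI as autonomous author (human-out-of-the-loop)",
}

@dataclass
class DisclosurePolicy:
    # Hypothetical thresholds; real publishers draw these lines differently.
    assistance_max: int = 3   # levels at or below count as "assistance"
    prohibited_min: int = 6   # levels at or above are not accepted

    def classify(self, level: int) -> str:
        """Return how a submission at this level is handled under the policy."""
        if level not in LEVELS:
            raise ValueError(f"unknown level: {level}")
        if level >= self.prohibited_min:
            return "prohibited"
        if level > self.assistance_max:
            return "generation: disclosure required"
        return "assistance: no disclosure needed"

policy = DisclosurePolicy()
print(policy.classify(2))  # assistance: no disclosure needed
print(policy.classify(5))  # generation: disclosure required
```

Shifting a single threshold value reframes the whole policy, which is one way to see why the same taxonomy yields such different postures across publishers.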
Across publishing, driven by the recognition that detection arms races have no good off-ramp, organizations are moving away from outright bans toward alternatives:
functional distinctions (editing vs. generation),
accountability rules (humans remain responsible),
disclosure thresholds (only above certain levels), or
process‑based evaluation rather than detection.
There are notable assertions of the strict No-AI posture (see the Groke Literary Magazine at https://thegroke.org/ai-use-policy) and flashy instances of a permissive “do anything you want with AI” posture (see the Alliance of Independent Authors, ALLi, at https://selfpublishingadvice.org/ai-practical-guidelines/). The spectrum of positions is real, and the struggle is real, too.
How publishers land on this issue affects everything from contracting to editing, and from author relationships to a press’s public profile. Are editors trusting authors without verification? Building staged processes to evaluate manuscript development? Relying on dubious AI detectors plagued with false positives and false negatives? Shutting down submissions from people not already known and trusted?
That’s where the discussion begins for publishers. Here are a few questions to consider.
Should a publisher hold one standard across all imprints, or do different genres and imprint missions require different policies?
What about in-house editorial and marketing labor? Should a publisher use AI for copyediting? metadata? blurbs? marketing? sensitivity checks? fact-checking? permissions letters?
What should publishers put in author contracts? Originality warranties to protect against plagiarism and rights uncertainty if AI is used? Disclosure obligations?
Given that AI detection is unreliable, and process-based verification is expensive and time-consuming, how does a publisher set up affordable process-oriented editing when manuscripts usually arrive in full-draft form?
These questions will be answered in many different ways in the interim, until the literary economy trifurcates into the three sectors I described in “AI, Creativity, and the Three-Fold-Steady-State Literary Economy.” After that, the pressure on publishers of human literature will ease. Writing from the AI-human symbiosis will dominate the market for nonfiction, fiction, and poetry. Well, let it dominate.
In the small crucible that is the human literary sector, something quiet and beautiful will arc forward into the uncertain future. Human authors will train like athletes to master their own pages. Provenance will be determined through relationships and open-drafting processes. And, without fear of deception, we will continue to treasure human writing the way we always have.


