Ungeneratable Writing
The year is 2025, and AI is everywhere: on the news, on social media feeds, on advertising billboards, in countless grant proposals and startup pitches striving for relevance, on your phone, laptop, and maybe your refrigerator. Compensation packages for the very best AI scientists have reached tens to hundreds of millions of dollars. A few days ago, Anthropic announced a gigawatt-scale Google Cloud partnership[1]. Earlier in October, OpenAI announced that ChatGPT had reached 800M weekly active users. In the past 30 days, OpenAI's Sora 2 and Google's Veo 3.1 have enshrined audio and video as key modalities of the post-truth world that we live in[2]. Before ChatGPT's initial release in late November 2022, most of us weren't thinking about AI very much. Less than three years later, I would go so far as to call AI a pillar of the current Zeitgeist.
To the average person, "AI" generally refers to LLM-based technologies like ChatGPT, Claude, Gemini, Copilot, Perplexity, and their myriad competitors and derivatives. The term is used much less often to refer to task-specific AI systems meant to be wielded by experts, like the Nobel-winning AlphaFold in structural biology, or non-LLM foundation models[3] in various domains, even though such systems are anything but obsolete, continue to deliver breakthroughs, and can be much more useful and/or reliable than LLMs in their respective domains. Eclipse their mute counterparts as they might, LLMs are really just the tip of the iceberg.
Yet when it comes to writing, LLMs might very well be the iceberg, specifically of the kind that sank the Titanic: an immovable mass of impending doom, on a collision course with helpless humans. After all, LLMs are Large Language Models, trained on an increasingly large portion of humanity's written output. They have read it all, and they will keep reading.
Why am I even going through the trouble of writing this post with my own hands, when I could fire up a new Claude tab, messily dictate an outline in about thirty seconds using speech recognition, iterate on the resulting draft a few times, and get the whole thing done in a fraction of the time? Is the very act of writing futile, if AI can do it better and might well end up being one's biggest audience, should prospective readers ask their favorite chatbot to summarize the piece into a few easily digestible bullet points?
In a sense, I am thankful that the rise of LLMs brought about these questions. Written text, as a product, has little to no inherent value. The Internet was already inundated with low-quality SEO spam before AI; now we have more of it, and it is harder to distinguish from human writing. Is any of that valuable? Does this increasing difficulty in distinguishing human writing from AI-generated text mean that both are of roughly equal value?
The question might be ill-posed: value to whom?
Setting aside the question of whether LLMs are or will eventually be better than humans at writing[4], writing is thinking. Structuring your thoughts, examining them from a variety of angles, confronting your own biases and pre-existing beliefs with reality through systematic verification, or even just letting words flow for a while and contemplating the result: this all forms a valuable activity for you, the writer. Even if you are without an audience, even if you destroy all of your drafts or lock them away forever. Spending time writing is no more wasteful than expending energy lifting weights and putting them back down; the latter is exercise for the body, and the former is exercise for the mind. Both can be deeply satisfying, and your mind deserves exercise as much as your body does.
But let's say you do want to write something for others to read, contemplate, and benefit from in some way. In that case, the satisfaction or growth that you personally derive from writing is not enough; you should actually care about your piece's contribution and value. At a base level, your piece will be valuable simply because you wrote it: regardless of its topic, it will say something about you and your lived experience, and exist as a record of intent. The piece signals that at a given point in time, you decided that you had something to share with the world and bothered to write it down, however imperfectly. In the process, maybe you made someone feel less alone, through your words and the thought that they came from a living, breathing person.
Still, you might want your writing to have value in itself, regardless of its provenance. In that case, I argue that LLMs offer a great litmus test. If you are unsure what to write, pick what is least obvious to LLMs, what they find hardest to articulate in the manner that you desire, what you think they get wrong. If you find that a lazy summary of your argument is easily understood by AI and readily expanded into a piece that you're fully satisfied with, the subject has likely been discussed extensively in the training corpus, and that lazy summary might stand on its own; there might be little value in lengthening it. Conversely, you might find that the only way to truly do justice to what you envisioned is to actually write it, because an LLM can't reproduce it from a short description without butchering ideas, making factual mistakes, or deviating from the tone that you wish for. In that case, you are approaching the minimum description length of your ideas, as encoded through the language and register of your choice.
In short, the set of pieces that are hard for LLMs to produce could be dubbed "ungeneratable writing". This, of course, comes with a giant caveat: given a sufficiently sophisticated prompt, any desired output should be attainable. Yet a similar argument applies to finding inconceivable wisdom in Borges' Library of Babel, and to retrieving your personal data from the digits of π. Technically possible, but practically useless.
So, when in doubt, go for the ungeneratable: what AI won't write until you do.
Addendum
I asked Claude 4.5 Sonnet to produce a single-paragraph summary of this post, then to try reconstructing the original post from that summary.
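If you'd like to reproduce this experiment on your own writing, a minimal sketch using Anthropic's official Python SDK could look like the following. The model ID, file path, and prompts below are illustrative placeholders rather than the exact ones I used:

```python
# Sketch of the addendum's experiment: summarize a post, then try to
# reconstruct it from the summary alone. Assumes the official `anthropic`
# SDK (pip install anthropic) and an ANTHROPIC_API_KEY in the environment.
from anthropic import Anthropic

client = Anthropic()
MODEL = "claude-sonnet-4-5"  # placeholder ID for Claude 4.5 Sonnet


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the reply text."""
    response = client.messages.create(
        model=MODEL,
        max_tokens=4096,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text


original = open("post.md").read()  # hypothetical path to your post

# Step 1: compress the post into a single paragraph.
summary = ask(
    "Summarize the following blog post in a single paragraph:\n\n" + original
)

# Step 2: regenerate a full post from the summary alone.
reconstruction = ask(
    "Write a full blog post based only on this one-paragraph summary:\n\n"
    + summary
)

# The litmus test is the gap between `reconstruction` and `original`:
# the wider it is, the more "ungeneratable" the piece.
print(reconstruction)
```

Comparing the reconstruction against the original by hand is the interesting part; automatic similarity metrics would miss exactly the qualities, tone and judgment among them, that make a piece ungeneratable.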
Footnotes

[1] To put things into perspective, 1 gigawatt amounts to just about the entire output of the average US nuclear reactor.

[2] To reject the evidence of your eyes and ears once belonged in Orwell's 1984, as the Party's final command; it may now be a necessary facet of common sense, as the concept of "video evidence" gradually loses meaning and AI-generated clips replace out-of-context clips as a propaganda tool of choice, supplementing text-based sentiment manipulation.

[3] To name a few, this includes vision foundation models (e.g. Meta's Segment Anything 2 and DINOv3), physical/world foundation models (e.g. NVIDIA's Cosmos), genomic foundation models (e.g. Arc's Evo 2), and geospatial foundation models (e.g. Google DeepMind's AlphaEarth Foundations).

[4] I find today's top LLMs (GPT-5, Gemini 2.5 Pro, Claude 4.1 Opus / 4.5 Sonnet) very useful for brainstorming, proofreading, and reformulating specific passages, but simply incomparable to skilled human writers when it comes to actually writing.