I write documentation for a living. Although my output is writing, my job is observing, listening and understanding. I can only write well because I have an intimate understanding of my readers' problems, anxieties and confusion. This decides what I write about and how I write about it. This sort of curation can only come from a thinking, feeling human being.
I revise my local public transit guide every time I experience a foreign public transit system. I improve my writing by walking in my readers' shoes and experiencing their confusion. Empathy is the engine that powers my work.
Most of my information is carefully collected from a network of people I have a good relationship with, and from a large and trusting audience. It took me years to build the infrastructure to surface useful information. AI can only report what someone could be bothered to write down, but I actually go out into the real world and ask questions.
I have built tools to collect people's experience at the immigration office. I have had many conversations with lawyers and other experts. I have interviewed hundreds of my readers. I have put a lot of information on the internet for the first time. AI writing is only as good as the data it feeds on. I hunt for my own data.
People who think that AI can do this, and the other things I do, have an almost insulting understanding of the jobs they are trying to replace.
The problem is that so many things have been monopolized or oligopolized by equally mediocre actors that quality ultimately no longer matters, because it's not like people have any options.
You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.
Apply the same for software (have you seen how bad tech is lately?) or basically any kind of vertical with a nontrivial barrier to entry where someone can't just say "this sucks and I'm gonna build a better one in a weekend".
You are right. We are seeing a transition from the user as a customer to the user as a resource. It's almost like a cartel of shitty treatment.
I don't work for the public transit company; I introduce immigrants to Berlin's public transit. To answer the broader question, good documentation is one of the many little things that affect how you feel about a company. The BVG clearly cares about that, because their marketing department is famously competent. Good documentation also means that fewer people will queue at their service centre and waste an employee's time. Documentation is the cheaper form of customer service.
Besides, how people feel about the public transit company does matter, because their funding is partly a political question. No one will come to defend a much-hated, customer-hostile service.
Counterpoint - I think it’s going to become much easier for hobbyists and motivated small companies to make bigger projects. I expect to see more OSS, more competition, and eventually better quality-per-price (probably even better absolute quality at the “$0 / sell your data” tier).
Sure, the megacorps may start rotting from the inside out, but we already see a retrenchment to smaller private communities, and if more of the benefits of the big platforms trickle down, why wouldn’t that continue?
Nicbou, do you see AI as increasing your personal output? If it lets enthusiastic individuals get more leverage on good causes then I still have hope.
When it became cheaper to publish text did the quality go up?
When it became cheaper to make games did the quality go up?
When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?
It's a world that is made of an abundance of trash. The volume of low-quality production saturates the market and drowns out whatever high-quality things still remain. In such a world you're just better off reallocating your resources from production quality towards the shouting match of marketing, and trying to win by finding ways to be more visible than the others. (SEO hacking and other such shenanigans.)
When you drive down the cost of doing something to zero, you also effectively destroy the economy based around that thing. Like online print: basically nobody can make a living by focusing on publishing news or articles; alternative revenue streams (ads) are needed. Same for games too.
> When it became cheaper to … did the quality go up?
No, but the availability (more people can afford it) and diversity (different needs are met) increased. I would say that's a positive. Some of the expensive "legacy" things still exist and people pay for it (e.g. newspapers / professional journalism).
Of course low quality stuff increased by a lot and you're right, that leads to problems.
Well yeah more people can afford shitty things that end up in the landfill two weeks later. To me this is the essence of "consumerism".
Rather than think in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who could afford better than the cheapest of cheapest crap.
But in the context of software, the landfill argument doesn't fit quite as well (well, sure, someone can argue that storage on, say, GitHub might take more drives, but the scale would be much cheaper than a landfill filled with physical things).
> Rather than think in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who could afford better than the cheapest of cheapest crap.
This problem actually runs deep and is systemic. I am genuinely not sure how one can do it, because what exactly would that wealth derive from? The growth of stock markets, which people call bubbles, or the US debt, which has been ballooning in recent years basically to fuel the consumerism spree itself? I am not sure.
If you were to make people wealthy, they might still buy the cheapest of cheapest crap, just at 10x the magnitude in many cases (or at least that's what I've observed the US do, with how many people buy and sell usually very simple SaaS tools at times).
Re software and landfill: true to some extent, but there are still ramifications, as you pointed out, in electricity demand and the hardware infrastructure to support it. Also, in the 80s when the computer games market crashed, they literally dumped game cartridges in a hole in the desert!
Maybe my opinion is just biased and I'm in a comfortable position to pass judgment, but I'd like to believe that more people would be more ethical and conscious about their materialistic needs if things had more value and were better quality, and if, instead of focusing on "price" as the primary value proposition, people were actually able to afford something other than the cheapest of things.
Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
> Re software and landfill: true to some extent, but there are still ramifications, as you pointed out, in electricity demand and the hardware infrastructure to support it. Also, in the 80s when the computer games market crashed, they literally dumped game cartridges in a hole in the desert!
I hear ya, but I wonder how that reflects on open source software, which was what the GP was talking about: created by an LLM, let's say. Yes, I know it can have bugs, but it's free of cost, you can own it and modify it because the source code is available, and you can run it on your own hardware.
There really isn't much of a difference in terms of hardware/electricity just because of these open source projects.
But there probably is some for LLMs, so it's a little tricky. Still, I feel like open source projects, and running far with ideas, get incentivized.
At least I feel like it's one of the more acceptable uses of LLMs so far. It's better because you are open sourcing it for others to run. If someone doesn't want to use it, that's their freedom, but you built it for yourself, or to run with an idea that couldn't have existed if you didn't know the implementation details, or that would have taken months or years for zero gain, when now you can do it in less time.
It makes it significantly easier to see which ideas would be beneficial or not, and I feel like if AI is so worrying, then once an idea is good and has been tested, it can always be rewritten or documented heavily by a human. In fact, there are even job posts for "slop janitor" on LinkedIn, lol.
> Wouldn't the economy also be in much better shape if more people could buy things such as handmade shoes or suits?
Yes, but it's also far from happening; it would require a real shake-up of all things, and it's just a dream right now. I agree with ya, but it's not gonna happen, or at least it's not something one person can change. Trust me, I tried.
This requires system-wide change that one person is very unlikely to bring about, but I wish you the best in your endeavour.
But what I can do, on a more individual level, is create open source projects via LLMs when there's a concept I don't know of, and then open source them for the general public. If even one or two people find them useful, it's all good, and I am always experimenting.
> Rather than think in terms of making things cheaper for people to afford, we should think about how to produce wealthier people who could afford better than the cheapest of cheapest crap.
I'm not trying to be snarky, but, if the principle is broadly applied, then what is the difference between these two? (I agree that, if it can only be applied to a limited population, making a few poor people wealthier might be better than making a few products cheaper.)
I think you found, but possibly didn't recognize, the problem. When availability goes up, but the quality of that which is widely available goes down, you get class stratification where the haves get quality, reliable journalism / software / games / etc. while the have-nots get slop. This becomes generational when education becomes part of this scenario.
One of the qualia of a product is cost. Another is contemporaneity.
If we put these together, we see a wide array of products which, rather than just being trash, hit a sweet spot for "up-to-date yet didn't break the wallet" and you end up with https://shein.com/
These are not thought of as the same people that subscribe to the Buy It For Life subreddit, but some may use Shein for a club shirt and BIFL for an espresso machine. They make a choice.
What's more, a “Technivorm Moccamaster” costs 10x a “Mr. Coffee” because of the build and repairability, not because of the coffee. (Amazon Basics cost ½ that again.)
Maybe Fashion was the original SEO hack. Whoever came up with the phrase "gone out of style" wrought much of this.
When it became cheaper to publish text, for example with the invention of the printing press, the quality of what the average person had in his possession went up: you went from very few having hand-copied texts to Erasmus describing himself running into some border guard reading one of his books (in Latin). The absolute quality of texts published might have decreased a bit, but the quality per capita of what individuals owned went up.
When it became cheaper to mass produce sneakers, tshirts, and anything, the quality of the individual product probably did go down, but more people around the world were able to afford the product, which raised the standard of living for people in the aggregate. Now, if these products were absolute trash, life wouldn't make much sense, but there's a friction point in there between high quality and trash, where things are acceptable and affordable to the many. Making things cheaper isn't a net negative for human progress: hitting that friction point of acceptable affordability helps spread progress more democratically and raise the standard of living.
The question at hand is whether AI can more affordably produce acceptable technical writing, or if it's trash. My own experiences with AI make me think that it won't produce acceptable results, because you never know when AI is lying: catching those errors requires someone who might as well just write the documentation. But, if it could produce truthful technical writing affordably, that would not be a bad thing for humanity.
> When it became cheaper to publish text, for example with the invention of the printing press, the quality of what the average person had in his possession went up: you went from very few having hand-copied texts to Erasmus describing himself running into some border guard reading one of his books (in Latin). The absolute quality of texts published might have decreased a bit, but the quality per capita of what individuals owned went up.
Today the situation is very different, and I'm not quite sure why you compare a time in history when the average person was illiterate and (printed) books were limited to a very small audience who could afford them, with the current era, where everybody is exposed to the written word all the time and is even dependent on it, in many cases even dependent on its accuracy (think public services). The quality of AI writing in some cases is so subpar that it resembles word salad. Example from Goodreads: the blurb of this book https://www.goodreads.com/book/show/237615295-of-venom-and-v... was so surreal I wrote to the author to correct it (see the comments on the author's own review). It's better now, but it still has mistakes. This is in no way comparable with the past's "quality goes down a bit"; this is destroying trust even more than everything else, because if this gets to be the norm for official documents, people are going to be hurt.
>When it became cheaper to x did the quality go up?
...yes?
It introduces a lower barrier to entry, so more low-quality things are also created, but it also increases the quality of the higher-tier as well. It's important to note that in FOSS, we (or at least... I) don't generally care who wrote the code, as long as it compiles and isn't malicious. This overlays with the original discussion... If I were paying to read your posts, I'd expect them to be hand-written. If I'm paying for software, it had better not be AI slop. If you're offering me something for free, I'm not really in a position to complain about the quality.
It's undeniable that, especially in software, lower costs and a lower barrier to getting started will bring more great FOSS software. This is like one of the pillars of FOSS, right? That's how we got LetsEncrypt, OpenDNS, etc. It will also 100% bring more slop. Both can be true at the same time.
I'd say that those high-quality things that still exist do so despite the higher volume of junk, and they mostly exist because of other reasons/unique circumstances. (Individual pride, craftsmanship, people doing things as a hobby/without financial constraints etc)
In a landscape where the market is mostly filled with junk, any commercial product that spends anything on "quality" is essentially losing money.
>people doing things as a hobby/without financial constraints
Isn't this the exact point I was making...? I get that you're arguing it's only a single factor, but I feel like the point still stands. More hobbyists, fewer financial constraints.
The problem is that with the amount of low-quality stuff we're seeing, and with the expansion of the low-quality frenzy into the realm of information dissemination, it can become prohibitively difficult to distinguish the high-quality stuff. What matters is not the "total quality" but sort of like the expected value of the quality you can access in practice, and I feel like in at least some areas that has gone down.
> but it also increases the quality of the higher-tier
I truly don't see this happening anymore. Maybe it did before?
If there's real competition, maybe this does happen. We don't have it and it'll never last in capitalism since one or a few companies will always win at some point.
If you're a higher tier X, cheaper processes means you'll just enjoy bigger profit margins and eventually decide to start the enshittification phase since you're a monopoly/oligopoly, so why not?
As for FOSS, well, we'll have more crappy AI generated apps that are full of vulnerabilities and will become unmaintainable. We already have hordes of garbage "contributions" to FOSS generated by these AI systems worsening the lives of maintainers.
Is that really higher quality? I reckon it's only higher quantity, with more potential to lower the quality of even higher-tier software.
> When it became cheaper to publish text did the quality go up?
Obviously, yes? Maybe not the median or even mean, but peak quality for sure. If you know where to look there are more high-quality takes available now than ever before. (And perhaps more meaningfully, peak quality within your niche subgenre is better than ever).
> When it became cheaper to make games did the quality go up?
Yes? The quality and variety of indie games is amazing these days.
> When it became cheaper to mass produce X (sneakers, tshirts, anything really) did the quality go up?
This is the case where I don’t see a win, and I think it bears further thought; I don’t have a clear explanation. But I note this is the one case where production is not actually democratized. So it kinda doesn’t fit with the digital goods we are discussing.
> basically nobody can make a living by focusing on publishing news or articles
Is this actually true? Substack enables more independent career bloggers than ever before. I would love to see the numbers on professional indie devs. I agree these are very competitive fields, and an individual’s chances of winning are slim, but I suspect there are more professional indie creators than ever before.
I think for 'technical' writing, there is going to be some end-state crash.
What happens when all the engineers who are left can't figure something out, and they start opening up manuals, and those are also all wrong and trash? The whole world grinds to a halt because nobody knows anything.
When was the last time that speed of development was the limiting factor? 15-20 years ago?
Nowadays the problem is that both technical and legal means are used to prevent adversarial interoperability. It doesn't matter if you (or AI) can write software faster if said software is unable to interface with the thing everyone else uses.
> Documentation is the cheaper form of customer service.
Thank you so much for saying this. Trying to convince anyone of the importance of documentation feels like an uphill battle. Glad to see that I'm not completely crazy.
> We are seeing a transition from the user as a customer to the user as a resource.
I'd argue that this started 30 years ago when automated phone trees started replacing the first line of workers and making users figure out how to navigate where they needed to in order to get the service they needed.
I can't remember if chat bots or "knowledge bases" came first, but that was the next step in the "figure it out yourself" attitude corporations adopted (under the guise of empowering users to "self help").
Then we started letting corporations use the "we're just too big to actually have humans deal with things" excuse (eg online moderation, or paid services with basically no support).
And all these companies look at each other to see who can lower the bar next and jump on the bandwagon.
It's one of my "favorite" rants, I guess.
The way I see this next era going is that it's basically going to become exclusively the users' responsibility to figure out how to talk to the bots to solve any issue they have.
> You mention you've done work for public transit - well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it. Firing the technical writer, however, has an immediate and quantifiable effect on the budget.
Exactly. If the AI-made documentation is only 50% of the quality but can be produced for 10% of the price, well, we all know what the "smart" business move is.
AI can often synthesize information out of, for example, code and screenshots, and it can navigate a website. It could effectively document the current state of a given web application, for example, whereas most companies have zero documentation whatsoever.
I'd take AI generated slop reviewed by the person who created the system over tech writer babble any day of the week.
I'm sure I'm not the only one who was reading about some interesting but flawed system only to discover later that they were talking about MY OWN SOFTWARE!? (only half-joking here)
Also consider that while the OP looks like a skilled, experienced individual, all too often documentation is not being written by someone with that context, but rather by someone unskilled and without real empathy. Quality is quite often very poor, to the point where, as shitty as genAI can be, it is still an improvement. Bad UX and writing outnumber the good. The successes of big companies and the most well-known government services are the exception.
"well, if public transit documentation suddenly starts being terrible, will it lead to an immediate, noticeable drop in revenue? Doubt it."
First, I understand what you're saying and generally agree with it, in the sense that that is how the organization will "experience" it.
However, the answer to "will it lead to a noticeable drop in revenue" is actually yes. The problem is that it won't lead to a traceable drop in revenue. You may see the numbers go down. But the numbers don't come with labels why. You may go out and ask users why they are using your service less, but people are generally very terrible at explaining why they do anything, and few of them will be able to tell you "your documentation is just terrible and everything confuses me". They'll tell you a variety of cognitively available stories, like the place is dirty or crowded or loud or the vending machines are always broken, but they're terrible at identifying the real root causes.
This sort of thing is why not only is everything enshittifying, but even as the entire world enshittifies, everybody's metrics are going up up up. It takes leadership willing to go against the numbers a bit to say, yes, we will be better off in the long term if we provide quality documentation, yes, we will be better off in the long term if we use screws that don't rust after six months, yes, we will be better off in the long term if we don't take the cheapest bidder every single time for every single thing in our product but put a bit of extra money in the right place. Otherwise you just get enshittification-by-numbers until you eventually go under and get outcompeted and can't figure out why because all your numbers just kept going up.
Just restating: Traceable errors get corrected, untraceable errors don't, and so over time the errors affecting you are inevitably composed almost entirely of accumulated untraceable issues.
It means you need judgement-based management to be able to override metric-based decisions, at times.
That's one way to frame it. Another one is: sometimes people are stuck in a situation where all the options that come to their mind have repulsive consequences.
As always, some consequences are deemed more immediate, and others will seem more remote. And often the incentives can be quite at odds between short-term and long-term expectations.
>this sucks and I'm gonna build a better one in a weekend
Hey, this is me looking at the world this morning. Bear with me, the bright new harmonious world should be there on Monday. ;)
Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.
Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.
I like to think of coding as gathering knowledge about some problem domain. All that a team learns about the problem becomes encoded in the changes to the program source. The program is only a manifestation of human minds. Now, if programmers are largely replaced with LLMs, the team is no longer gathering that knowledge; there is no intelligent entity whose understanding of the problem increases with time, who can help drive future changes and make good business decisions.
Well said. I try to capture and express this same sentiment to others through the following expression:
“Technology needs soul”
I suppose this can be generalized to “__ needs soul”. Eg. Technical writing needs soul, User interfaces need soul, etc. We are seriously discounting the value we receive from embedding a level of humanity into the things we choose (or are forced) to experience.
Your ability to articulate yourself cleanly comes across in this post in a way that I feel AI is always trying to reach and never quite does.
I completely agree that the ambitions of AI proponents to replace workers are insulting. You hit the nail on the head by pointing out that we simply don't write everything down. And the more common sense / well known something is, the less likely it is to be written down, yet the more likely it might be needed by an AI to align itself properly.
I like the cut o' your jib. The local public transit guide you write, is that for work or for your own knowledge base? I'm curious how you're organizing this while keeping the human touch.
I'm exploring ways to organize my Obsidian vault such that it can be shared with friends, but not the whole Internet (and its bots). I'm extracting value out of the curation I've done, but I'd like to share it with others.
Why shouldn't AI be able to sufficiently model all of this in the not-too-distant future? Why shouldn't it have sufficient access to new data and sensors to be able to collect information on its own, or at least the system that feeds it?
Not from a moral perspective, of course, but as a technical possibility. And the Overton window has already shifted so far that the moral aspect might align soon, too.
IMO there is an entirely different problem, one that's just about never going to go away but could be solved easily right now. And whichever AI company does so first instantly wipes out all competition:
Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
> Accept full responsibility and liability for any damages caused by their model making wrong decisions and either not meeting a minimum quality standard or the agreed upon quality.
That's not sufficient, at least from the likes of OpenAI, because, realistically, that's a liability that would go away in bankruptcy. Companies aren't going to want to depend on it. People _might_ take, say, _Microsoft_ up on that, but Microsoft wouldn't offer it.
> Why shouldn't AI be able to sufficiently model all of this
I call it the banana bread problem.
To curate a list of the best cafés in your city, someone must eventually go out and try a few of them. A human being with taste honed by years of sensory experiences will have to order a coffee, sit down, appreciate the vibe, and taste the banana bread.
At some point, you need someone to go out in the world and feel things. A machine that cannot feel will never be a good curator of human experiences.
I hear you, but counterpoint: if you had an AI that monitored social media for mentions, used vision and audio capture in cafes to see what people ordered and how they reacted to it, had access to customer purchase data to see if people kept coming back to particular cafes and what they ordered over and over again...
Granted, there's lots that's dystopian about that picture, I'm not advocating for it, but it does start to feel like the main value of the "curator" is actually just data capture. Then they put their own subjective take on that data, but I'm not totally convinced that's better than something that could tell me a data-driven story of: "Here are the top three banana breads in the city that customers keep coming back to have a taste orgasm for".
I don't know though, it's a brave new world and I'm skeptical of anyone who thinks they know how all this will play out.
See also: librarians, archivists, historians, film critics, doctors, lawyers, docents. The déformation professionnelle of our industry is to see the world in terms of information storage, processing, and retrieval. For these fields and many others, this is like confusing a nailgun for a roofer. It misses the essence of the work.
The hard part is the slow, human work of noticing confusion, earning trust, asking the right follow-up questions, and realizing that what users say they need and what they actually struggle with are often different things
As a counterpoint, the very worst "documentation" (scare quotes intended) I've ever seen was when I worked at IBM. We were all required to participate in a corporate training about IBM's Watson coding assistant. (We weren't allowed to use external AIs in our work.)
As an exercise, one of my colleagues asked the coding assistant to write documentation for a Python source file I'd written for the QA team. This code implemented a concept of a "test suite", which was a CSV file listing a collection of "test sets". Each test set was a CSV file listing any number of individual tests.
The code was straightforward, easy to read and well-commented. There was an outer loop to read each line of the test suite and get the filename of a test set, and an inner loop to read each line of the test set and run the test.
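For concreteness, the overall shape of that code was roughly as follows. This is a minimal sketch, not the actual IBM code; the file layout, column meanings, and function names (read_rows, run_test, run_test_suite) are hypothetical stand-ins.

    import csv
    from pathlib import Path

    def read_rows(path: Path) -> list[list[str]]:
        """Return the non-empty rows of a CSV file as lists of string fields."""
        with path.open(newline="") as f:
            return [row for row in csv.reader(f) if row]

    def run_test(name: str, args: list[str]) -> bool:
        """Run a single test case; a stand-in for the real test runner."""
        print(f"running {name} with args {args}")
        return True

    def run_test_suite(suite_file: Path) -> None:
        # Outer loop: each row of the test suite CSV names a test-set CSV file.
        for suite_row in read_rows(suite_file):
            test_set_file = suite_file.parent / suite_row[0]
            # Inner loop: each row of the test set describes one test to run.
            for test_row in read_rows(test_set_file):
                run_test(test_row[0], test_row[1:])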
The coding assistant hallucinated away the nested loop and just described the outer loop as going through a test suite and running each test.
There were a number of small helper functions with docstrings and comments and type hints. (We type hinted everything and used mypy and other tools to enforce this.)
The assistant wrote its own "documentation" for each of these functions in this form:
"The 'foo' function takes a 'bar' parameter as input and returns a 'baz'"
Dude, anyone reading the code could have told you that!
All of this "documentation" was lumped together in a massive wall of text at the top of the source file. So:
When you're reading the docs, you're not reading the code.
When you're reading the code, you're not reading the docs.
Even worse, whenever someone updates the actual code and its internal documentation, they are unlikely to update the generated "documentation". So it started out bad and would get worse over time.
Note that this Python source file didn't implement an API where an external user might want a concise summary of each API function. It was an internal module where anyone working on it would go to the actual code to understand it.
The map is not the territory! Documentation is a helpful, curated simplification of the real thing. What to include and what to leave out depends on the audience.
But if you treat "write documentation" as a box-ticking exercise, a line that needs to turn green on your compliance report, then it can just be whatever.
In every single discussion AI-sceptics claim "but AI cannot make a Michelin-star five-course gourmet culinary experience" while completely ignoring the fact that most people are perfectly happy with McDonald's, as evidenced by its tremendous economic and cultural success, and the loudest complaint with the latter is the price, not the quality.
I think you fundamentally misunderstand how the technology can be used well.
If you are in charge of a herd of bots that are following a prompt scaffolding in order to automate a work product that meets 90% of the quality of the pure human output you produce, that gives you a starting point with only 10% of the work to be done. I'd hazard a guess that if you spent 6 months crafting a prompt scaffold you could reach 99% of your own quality, with the odd outliers here and there.
The first person or company to do that well then has an automation framework, and they can suddenly achieve 10x or 100x the output at a nominal cost of operating the AI. They can ensure that each and every work product is lovingly finished and artisanally handcrafted, go the extra mile, and maybe reach 8x to 80x output with some QA loss.
In order to do 8-80x one expert's output, you might need to hire a bunch of people to do segmented tasks - some to do interviews, build relationships, and handle the other things that require in-person socialization. Or maybe AI can identify commonalities and do a good enough job of predicting a plausible model that anyone paying for what you do will be satisfied with the 90%-as-good AI product without that personal touch; and as soon as an AI-centric firm decides to eat your lunch, your human-oriented edge is gone. If it comes down to bean counting, AI is going to win.
I don't think there's anything that doesn't require physically interacting with the world that isn't susceptible to significant disruption, from augmentation to outright replacement, depending on the cost of tailoring a model to the tasks.
For valuable enough work, companies will pay the millions to fine-tune frontier models, either through OpenAI or open source options like Kimi or DeepSeek, and those models will give those companies an edge over the competition.
I love human customer service, especially when it's someone who's competent, enjoys what they do, and actually gives a shit. Those people are awesome - but they're not necessary, and the cost of not having them is less than the cost of maintaining a big team of customer service agents. If a vendor tells a big company that it can replace 40k service agents being paid ~$3.2 billion a year with a few datacenters, custom AI models, AI IT and support staff, and a totally automated customer service system for $100 million a year, that might well be worth the reputation hit, given the savings. None of the AI will be able to match the top 20% of human service agents in the edge cases, and there will be a new set of problems that come from customer and AI conflict, etc.
Even so. If your job depends on processing information - even information in a deeply human, emotional, psychologically nuanced and complex context - it's susceptible to automation, because the ones with the money are happy with "good enough." AI just has to be good enough to make more money than the human work it supplants, and frontier models are far past that threshold.
Spot on! I think LLMs can help greatly in quickly putting that knowledge in writing, including using them to review written materials for hidden prerequisite assumptions that readers might not be aware of. They can also help newer hires learn how to write more clearly. LLMs are clearly useful in increasing productivity, but management who think they are even close to ready to replace large sections of practically any workforce are delusional.
I don't write for a living, but I do consider communication / communicating a hobby of sorts. My observations - that perhaps you can confirm or refute - are:
- Most people don't communicate as thoroughly and completely - written and verbal - as they think they do. Very often there is what I call "assumptive communication". That is, the sender's ambiguity is resolved by the receiver making assumptions about what was REALLY meant. Often, filling in the blanks is easy to do - it's done all the time - but not always. And the resolution doesn't change the fact that there was ambiguity at the root.
Next time you're communicating, listen carefully. Make note of how often the other person sends something that could be interpreted differently, how often you assume by using the default of "what they likely meant was..."
- That said, AI might not replace people like you. Or me? But it's an improvement for the majority of people. AI isn't perfect, hardly. But most people don't have the skills and/or willingness to communicate at a level AI can simulate. Improved communication is not easy. People generally want ease and comfort. AI is their answer. They believe you are replaceable because it replaces them, and they assume they're good communicators. Classic Dunning-Kruger.
p.s. One of my fave comms' heuristics is from Frank Luntz*:
"It's not what you say, it's what they hear." (<< edit was changing to "say" from "said".)
One of the keys to improved comms is to embrace that clarity and completeness are the sole responsibility of the sender, not the receiver. Some people don't want to hear that, and be accountable, especially when assumptive communication is a viable shortcut.
* Note: I'm not a fan of his politics, and perhaps he's not The Source of this heuristic, but read it first in his "Words That Work". The first chapter of "WTW" is evergreen comms gold.
LLMs are good at writing long pages of meaningless words. If you have a number of pages to turn in with your writing assignment and you've only written 3 sentences they will help you produce a low quality result that will pass the requirements.
Low-quality is relative. LLMs' low-quality is most people's above-average. The fact the copy - either way - is likely to go through some sort of copy-by-committee process makes the case for LLMs even stronger (i.e., why waste your time). Not always, but quite often.
As a writer, you know this makes it seem emotional rather than factual?
Anyway, I agree with what you are saying. I run a scientific blog that gets 250k-1M users per year, and AI has been terrible for article writing. I use AI for brainstorming and for title ideas (which end up being inspiration rather than copy-paste).
Funnily, in your whole comment, the only word I objected to was the one right before "insulting": "almost". Thinking that LLMs can replace humans outright expresses hubris and disdain in a way that I find particularly aggravating.
…says every charlatan who wanted to keep their position. I’m not saying you’re a charlatan but you are likely overestimating your own contributions at work. Your comment about feeding on data - AI can read faster than you can by orders of magnitude. You cannot compete.
"you are likely overestimating your own contributions at work"
Based on what? Your own zero-evidence speculation? How is this anything other than arrogant punting? For sure we know that the point was something other than how fast the author reads compared to an AI, so what are we left with here?
>you are likely overestimating your own contributions at work
That's the logical fallacy anyone is going to be pushed toward as soon as their individual worth in an intrinsically collective endeavor gets judged.
People on the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra rich will be considered parasites by less wealthy people.
> People on the lowest incomes, who would not be able to integrate into society without direct social funds, will be seen as parasites by some who are wealthier, just as the ultra rich will be considered parasites by less wealthy people.
Your use of the word parasite, especially in the context of TFA, reminds me of the article James Michener wrote for Reader’s Digest in 1972 recounting President Nixon’s trip to China that year. In an anecdote from the end of the trip, Michener explained that Chinese officials gave parting gifts to the American journalists and their coordinating staffs covering the presidential trip. In the case of the radio/TV journalists, those staffs included various audio and video technicians.
As Michener told it, the officials’ gifts to the technicians were unexpectedly valuable and carefully chosen; but, when the newspaper and magazine writers in the group got their official gifts, they turned out to be relatively cheap trinkets. When one writer was bold enough to complain about this apparent disparity, a translator replied that the Chinese highly valued those who held technical skills (especially in view of the radical changes then going on in China’s attempt to rebuild itself).
“So what do you think about writers?” the complainer responded.
To that, the translator said darkly, “We consider writers to be parasites.”
That's a trope easy to fall into for any human, probably.
All the more so because part of the underlying representation actually starts from a structuralist analysis. We try to clarify the situation through classes of issues. But then, midway, we see what looks like an easy shortcut, where mapping an ontological assessment onto social forces in interaction is always one step off to the side. Scapegoating is nothing new.
So we quickly jump from "what social structures/forces lead to these awful results?" to "who can be blamed?", while we continue to let the underlying anthropological issue rule everyone.
The kind of documentation no one reads, that is just there to please some manager, or meet some compliance requirement. These are, unfortunately, the most common kind I see, by volume. Usually, they are named something like QQF-FFT-44388-IssueD.doc and they are completely outdated with regard to the thing they document despite having seen several revisions, as evidenced by the inconsistent style.
Common features are:
- A glossary that describes terms that don't need describing, such as CPU or RAM, but not the ambiguous and domain-specific terms, of which there are many
- References to documents you don't have access to
- UML diagrams, not matching the code of course
- Signatures by people who left the project long ago and are nowhere to be seen
- A bunch of screenshots, all with different UIs taken at different stages of development, which would be of great value to archeologists
- Wildly inconsistent formatting: some people realize that Word has styles and can generate a table of contents, others don't, and few care
Of course, no one reads them, besides maybe a depressive QA manager.
Not sure why the /s here, it feels like documentation being read by LLMs is an important part of AI assisted dev, and it's entirely valid for that documentation to be in part generated by the LLM too.
The best tech writers I have worked with don’t merely document the product. They act as stand-ins for actual users and will flag all sorts of usability problems. They are invaluable. The best also know how to start with almost no engineering docs and to extract what they need from 1-1 sit down interviews with engineering SMEs. I don’t see AI doing either of those things well.
AI may never be able to replace the best tech writers, or even pretty good tech writers.
But today's AI might do better than the average tech writer. AI might be able to generate reasonably usable, if mediocre, technical documentation based on a halfheartedly updated wiki and the README files and comments scattered in the developers' code base. A lot of projects don't just have poor technical documentation, they have no technical documentation.
Exactly. My team's technical documentation is written (in English) by people who don't speak English natively, and it's awful, barely comprehensible many times because these people don't understand articles ("the" and "a") very well and constantly omit them or use the wrong ones. And aside from the poor English, the documentation itself is just bad.
AI would do a great job of fixing their writing, but they don't want to use it, because it's not an official part of "the process".
>and comments scattered in the developers' code base
I'm not so sure about this one. Most devs I've worked with don't use comments.
> They act as stand-ins for actual users and will flag all sorts of usability problems.
I think everyone on the team should get involved in this kind of feedback because raw first impressions on new content (which you can only experience once, and will be somewhat similar to impatient new users) is super valuable.
I remember as a dev flagging some tech marketing copy aimed at non-devs as confusing and being told by a manager not to give any more feedback like that because I wasn't in marketing... If your own team that's familiar with your product is a little confused, you can probably x10 that confusion for outside users, and multiply that again if a dev is confused by tech content aimed at non-devs.
I find it really common as well that you get non-tech people writing about tech topics for marketing and landing pages, and because they only have a surface-level understanding of the tech, the text becomes really vague with little meaning.
And you'll get lots of devs and other people on the team agreeing in secret that, e.g., the product homepage content isn't great, but they're scared to say anything because they feel they have to stay inside their bubble and there isn't a culture of sharing feedback like that.
In my experience, great tech writers quietly function as a kind of usability radar. They're often the first people to notice that a workflow is confusing
Realistically, PMs' incentives are often aligned elsewhere.
But even if a PM cares about UX, they are often not in a good position to spot problems with designs and flows they are closely involved in and intimately familiar with.
Having someone else with a special perspective can be very useful, even if their job provides other beneficial functions, too. Using this "resource" is the job of the PM.
How can a PM do their job if they don't *care* about UX?
I mean... I know exactly what happens because I've seen it more than once: the product slowly goes to shit. You get a bunch of PMs at various levels of seniority all pursuing separate goals, not collaborating, not actually working together to compose a coherent product; their production teams are actively encouraged to be siloed; features collide and overlap, or worse, conflict; every component redefines what a button looks like; bundles bloat; you have three different rendering tools (ok, I've not seen that in practice but it seems to be encouraged by many "best practices"), etc. etc.
Oh, I agree completely with you, sorry if that wasn't clear. The PM should, must, care about UX. Still, they don't always do, or at least end up not caring eventually, for various reasons.
I'm just responding to this:
> what were your Product Managers doing in the first place if tech writer is finding out about usability problems
They might very well be doing their job of caring about UX, by using the available expertise to find problems.
It's a bit like saying (forgive the imperfect analogy): what are the developers doing talking about corner cases in the business logic, isn't the PM doing their job?
Yes, they are. They are using the combined expertise in the team.
Let's allow the PMs to rely on the knowledge and insights of other people, shall we? Their job already isn't easy, even (or especially) if they care.
Yes, product managers and product owners should also be looking for usability problems. That said, the docs people are often going through procedures step by step, double-checking things, and they will often hit something that the others missed.
I take your point, but a good PM will have been inside the decision-making process and carry embedded assumptions about how things should work, so they'll miss things. An outside eye - whether it's QA, user-testing, (as here) the technical writer, or even asking someone from a different team to take an informal look - is an essential part of designing anything to be used by humans.
> I don’t see AI doing either of those things well.
I think I agree, at least in the current state of AI, but can't quite put my finger on what exactly it's missing. I did have some limited success with getting Claude Code to go through tutorials (actually implementing each step as they go), and then having it iterate on the tutorial, but it's definitely not at the level of a human tech writer.
Would you be willing to take a stab at the competencies that a future AI agent would require to be excellent at this (or possibly never achieve)? I mean, TFA talks about "empathy" and emotions and feeling the pain, but I can't help feel that this wording is a bit too magical to be useful.
I don’t know that it can be well-defined. It might be asking something akin to “What makes something human?” For usability, one needs a sense of what defines “user pain” and what defines “reasonableness.” No product is perfect. They all have usability problems at some level. The best usability experts, and tech writers who do this well, have an intuition for user priorities and an ability to identify and differentiate large usability problems from small ones.
Thinking about this some more now, I can imagine a future in which we'll see more and more software for which AI agents are the main users.
For tech documentation, I suppose that AI agents would mainly benefit from Skills files managed as part of the tool's repo, and I absolutely do imagine future AI agents being set up (e.g. as part of their AGENTS.md) to propose PRs to these Skills as they use the tools. And I'm wondering whether AI agents might end up with different usability concerns and pain-points from those that we have.
A good tech writer knows why something matters in context: who is using this under time pressure, what they're afraid of breaking, what happens if they get it wrong
Current AI writing is slightly incoherent. It's subtle, but the high level flow/direction of the writing meanders so things will sometimes seem a bit non-sequitur or contradictory.
It has no sense of truth or value. You need to check what it wrote and you need to tell it what’s important to a human. It’ll give you the average, but misses the insight.
> but can't quite put my finger on what exactly it's missing.
We have to ask AI questions for it to do things. We have to probe it. A human knows things and will probe others, unprompted. It's why we are actually intelligent and the LLM is a word guesser.
Also true that most tech writers are bad. And companies aren't going to spend >$200k/year on a tech writer until they hit tens of millions in revenue. So AI fills the gap.
As a horror story, our docs team didn't understand that having correct installation links should be one of their top priorities. Obviously, if a potential customer can't install the product, they'll assume it's bs and try to find an alternative. That's so much more important than, e.g., grammar in the middle of some guide.
Yeah. AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement. The companies with the best docs will absolutely still have tech writers, just with some AI assistance.
Tech writing seems especially vulnerable to people not really understanding the job (and then devaluing it, because "everybody can write" - which, no, if you'll excuse the slight self-promotion but it saves me repeating myself https://deborahwrites.com/blog/nobody-can-write/)
In my experience, tech writers often contribute to UX and testing (they're often the first user, and thus bug reporter). They're the ones who are going to notice when your API naming conventions are out of whack. They're also the ones writing the quickstart with sales & marketing impact. And then, yes, they're the ones bringing a deep understanding of structure and clarity.
I've tried AI for writing docs. It can be helpful at points, but my goodness I would not want to let anything an AI wrote out the door without heavy editing.
>AI might replace tech writers (just like it might replace anyone), but it won't be a GOOD replacement.
That's fine, though: as long as the AI's output is better than "completely and utterly useless", or even "nonexistent", it'll be an improvement in many places.
The best tech writers I've known have been more like anthropologists, bridging communication between product management, engineers, and users. With this perspective they often give feedback that makes the product better.
AI can help with synthesis once those insights exist, but it doesn't naturally occupy that liminal space between groups, or sense the cultural and organizational gaps
Based on the trajectory of LLMs I bet a good tech writer will soon be a more valuable engineer than a "leetcode-hard" engineer for most teams.
Obviously we still need people to oil the machine, but... a person who deeply understands the product, can communicate shortcomings in process or user flows, can quickly and effectively organize their thoughts and communicate them, can navigate up and down abstraction levels and dive into details when necessary - these are the skills LLMs require.
And here I am, 2026, and one of my purposes for this year is to learn to write better, communicate more fluently, and convey my ideas in a more attractive way.
I do not think that these skills are so easily replaced; certainly the machine can do a lot, but if you acquire those skills yourself you shape your brain in a way that is definitely useful to you in many other aspects of life.
In my humble opinion we will be losing that from people: the upscaling of skills will be lost for sure, but the human upscaling is the real loss.
It is such a challenge! As English is not my first language, I have to do some mental gymnastics to really convey my thoughts. 'On Writing Well' is on my list to read; it is supposed to help.
I thought it was saying "a letter to those who fired tech writers because they were caught using AI," not "a letter to those who fired tech writers to replace them with AI."
The whole article felt imprecise with language. To be honest, it made me feel LESS confident in human writers, not more.
I was having flashbacks to all of the confusing docs I've encountered over the years, tightly controlled by teams of bad writers promoted from random positions within the company, or coming from outside but having a poor understanding of our tech or how to write well.
I'm writing this as someone who majored in English Lit and CS, taught writing to PhD candidates for several years, and maintains most of my own company's documentation.
Given the steady parade of headlines on HN about workers supposedly being replaced by AI, it seems fairly self-evident that the first interpretation is the less likely of the two.
Is it expected that LLMs will continue to improve over time? All the recent articles like this one just seem to describe this technology's faults as fixed and permanent. Basically saying "turn around and go no further". Honestly asking because their arguments seem to be dependent on improvement never happening and never overcoming any faults. It feels shortsighted.
On one hand, recent models seem to be less useful than the previous generation of them, the scale needed for training improved networks seems to be following the expected quadratic curve, and we don't have more data to train larger models.
On the other hand, many people claim that tooling integration is the bottleneck, and that the next generation of LLMs is much better than anything we have seen up to now.
If the business can no longer justify 5 engineers, then they might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers but there will be more companies.
IE:
2022: 100 companies employ 10,000 engineers
2026: 1000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes it that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
The person you're replying to is obviously and explicitly aware that that is another scenario, and the whole point of their comment was to argue against it and explain why they think something else is more likely. Merely restating the thing they were already arguing against adds nothing to the discussion.
Not really a contradiction, since the entire point of jobs and the economy at all is to serve the specific needs of humanity and not to maximize paper clip production. If we should be learning anything from the modern era it's something that should have always been obvious: the Luddites were not the bad guys. The truth is you've fallen for centuries old propaganda. Hopefully someday you'll evolve into someone who doesn't carry water for paperclip maximizers.
Zero labor cost should see the number of engineers trend towards infinity. The earlier comment suggested the opposite — that it would fall to just 1000 engineers. That would indicate that the cost of labor has skyrocketed.
What difference does that make? If the cost of an engineer is zero, they can work on all kinds of nonsensical things that will never be used/consumed. It doesn't really matter as it doesn't cost anything.
> That's just not how people or organizations run by people operate.
Au contraire. It's not very often that the cost of labor actually drops to anywhere close to zero, but we have some examples. The elevator operator is a prime example. When it was costly to hire an operator we could only hire a few of them. Nowadays anyone who is willing to operate an elevator just has to show up and they automatically get the job.
If 1,000 engineers are worth having around, why not an infinite number of them, just like those working as elevator operators? Again, there is no cost in this hypothetical scenario.
> Cost is not the only driver to demand.
Technically true, but we're not talking about garbage here. Humans are always valuable to some degree, just not necessarily valuable enough when there is a cost to balance. But, again, we're talking about zero cost. I expect you are getting caught up in thinking about scenarios where labor still has a cost, perhaps confusing zero cost with zero payroll?
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure there are no rules and it's the wild west. One dev can run the world.
This was true even before LLMs. Development has always scaled very poorly with team size. A team of 20 heads is like at most twice as productive as a team of 5, and a team of 5 is marginally more productive than a team of 3.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10-person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% the personnel count, visibly flounders. Imagine doing that in hardware, like if you could just grab some buddies and start a business successfully competing with TSMC out of your garage. That's clearly not possible. But in software, it actually is.
The tech writer backlog is probably worse, because writing good documentation requires extensive experience with the software you're writing documentation about, and there are four types of documentation you need to produce (commonly framed as tutorials, how-to guides, reference, and explanation).
Yes. I have been building software and acting as tech lead for close to 30 years.
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know how to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
"And it's just plain better at writing code than 60% of my graduating class was back in the day".
Only because it has access to vast amounts of sample code to draw on and re-combine. Did you ever consider emerging technologies, like new languages or frameworks that may be much better suited for your area, but which are new, so there is no codebase for the LLM to draw from?
I'm starting to think about a risk of technological stagnation in many areas.
> Did you ever consider emerging technologies, like new languages or frameworks that may be much better suited for your area, but which are new, so there is no codebase for the LLM to draw from?
Try it. The pattern matching these things do is unlike anything seen before.
I'm writing a compiler for a language I designed, and LLMs have no trouble writing examples and tests. This is a language with syntax and semantics that does not exist in any training set because I made it up. And here it is, a machine is reading and writing code in this language with little difficulty.
Caveat emptor: it is far from perfect. But so are humans, which is where the training set originated.
> I'm starting to think about a risk of technological stagnation in many areas.
That just does not follow for me. We're in an era where advancements in technology continue to be roughly quadratic [1]. The implication you're giving is that the advancements are a step function that will soon (or has already) hit its final step.
This suggests that you are unfamiliar or unappreciative of how anything progresses, in any domain. Creativity is a function of taking what existed before and making it your own. "Standing on the shoulders of giants", "pulling oneself up by the bootstraps", and all that. None of that is changing just because some parts of it can now be automated.
Stagnation is the very last thing I would bet on. In part because it means a "full reset" and loss of everything, like most apocalyptic story lines. And in part because I choose to remain cautiously optimistic.
I suspect a lot of folks are asking ChatGPT to summarize it…
I can’t imagine just letting an LLM write an app, server, or documentation package, wholesale and unsupervised, but have found them to be extremely helpful in editing and writing portions of a whole.
The one thing that could be a light in the darkness is that publishers have already fired all their editors (nothing to do with AI), and the writing out there shows it. This means there's the possibility that AI could bring back editing.
as a writer, i have found AI editing tools to be woefully unhelpful. they tend to focus on specific usage guidelines (think Strunk & White) and have little to offer for other, far more important aspects of writing.
i wrote a 5 page essay in November. the AI editor had sixty-something recommendations, and i accepted exactly one of them. it was a suggestion to hyphenate the adjectival phrase "25-year-old". i doubt that it had any measurable impact on the effectiveness of the essay.
thing is, i know all the elements of style. i know proper grammar and accepted orthographic conventions. i have read and followed many different style guides. i could best any English teacher at that game. when i violate the principles (and i do it often), i do so deliberately and intentionally. i spent a lot of time going through suggestions that would only genericize my writing. it was a huge waste of my time.
i asked a friend to read it and got some very excellent suggestions: remove a digressive paragraph, rephrase a few things for persuasive effect, and clarify a sentence. i took all of these suggestions, and the essay was markedly improved. i'm skeptical that an LLM will ever have such a grasp of the emotional and persuasive strength of a text to make recommendations like that.
That makes a lot of sense, but right now, the editing seems to be completely absent, and, I suspect, most writers aren’t at your level (I am sure that I’m not).
The failure mode isn't just hallucinations; it's the absence of judgment: what not to document, what to warn about, what's still unstable, what users will actually misunderstand.
First, we've fallen into a nomenclature trap, as so-called "AI" has nothing to do with "intelligence." Even its creators admit this, hence the name "AGI," since the appropriate acronym has already been used.
But when we use the "AI" acronym, our brains still register the "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.
Current models are like trained parrots that can draw colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.
This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."
Now we're so impressed by the speed of pattern matching that we call them "artificial intelligence," and here we are again.
Two years ago, I asked chatgpt to rewrite my resume. It looked fantastic at first sight; then, one week later, I re-read it and felt ashamed to have sent it to some prospective employers. It was full of cringe-inducing babble.
You see, for an LLM there are no hierarchies other than what it observed in its training, and even then, applying them in a different context may be tricky. It can describe hierarchies and relationships by mimicry, but it doesn't actually have a model of them.
Just an example: it may be able to generate text that recognizes that a PhD title is a step above a Master's degree, but sometimes it won't be able to translate this fact (as opposed to the description of this fact) into the subtle differences in attention and emphasis we use in our written text to reflect those real-world hierarchies of value. It can repeat the fact to you, and can even kind of generalize it, but it won't make a decision based on it.
It can, even more so now, produce a very close simulation of this, because the relative importance of things has been semantically captured, and it is very good at capturing those subtle semantic relationships. But in linguistic terms, it absolutely sucks at pragmatics.
An example: let's say that in one of your roles you improved a model that detected malignancy in a certain kind of tumor image, bringing its false negative rate down to something like 0.001%, and then in the same role you casually mention that you once tied the CEO's toddler's tennis shoes. Given your prompt to write a resume according to the usual resume-enhancement formulas, there's a big chance it will emphasize the irrelevant lace-tying activity in a ridiculously pompous manner, making it hierarchically equivalent to your model kung-fu accomplishments.
So in the end, you end up with some bizarre stuff that looks like:
"Tied our CEO's toddler tennis shoes, enabling her to raise 20M with minimal equity dilution in our Series B round"
To get through the hiring process nowadays you actually need an AI-written CV, because no one is reading it except the AI-powered ATS used by the HR department.
I’m already seeing colleagues at work using AI to generate documentation and then call it a day. It’s like they are oblivious to how _ugly_ and _ineffective_ the AI-generated slop is:
- too many emojis
- too much verbose text
- they lack the context of what’s important
- critical business and historical context are lost
- etc..
They used AI to satisfy the short-term gain: “we have documentation”, without fully realising the long-term consequences of low quality. As a result, imo we’ll see the downward-spiral effects of bugs, low adoption, and unhappy users.
>I’m already seeing colleagues at work using AI to generate documentation and then call it a day. It’s like they are oblivious to how _ugly_ and _ineffective_ the AI-generated slop is:
I'm sure their slop looks FAR better than the garbage my coworkers write. I really wish my coworkers would use AI to edit their writing, because then it might actually be comprehensible.
I have not fired a technical writer, but writing documentation that understands and maintains users' focus is hard, even with an LLM. I am trying to write documentation for my startup, and it is harder than I expected, even with an LLM.
Kudos to all the technical writers who made my job as a software engineer easier.
If that was more technical tho, like something more similar to technical writing... I would have had Copilot summarise it for me.
You are correct, the future is collaborative with AI, but not everything will still need to be collaborative...
Technical writing, like manuals and whatnot, is akin to a math problem that, post-calculator, has always been done by calculators - even by people who didn't need them.
It will not be better, there is absolutely a loss, but it will still happen.
Nice read after the earlier post saying fire all your tech writers. Good post.
One thing to add is that the LLM doesn't know what it can't see. It just amplifies what is there. Assumed knowledge is quite common with developers and their own code. Or the more common "it works on my machine" because something is set outside of the code environment.
Sadly other fields are experiencing the same issue of someone outside their field saying AI can straight up replace them.
I will share my experience; hopefully it answers some questions for tech writers.
I was a terrible writer, but we had to write good docs and make it easy for our customers to integrate with our products. So I prepared the context for our tech writers, and they created nice documentation pages.
The cycle was (reasonably, one week, depending on the tech writers' workload):
1. prepare context
2. create a ticket for the tech writers, wait until they respond
3. discuss messaging over a call
4. a couple of days later I get the first draft
5. iterate on the draft, then finally publish it
Today it's different:
1. I prepare all the context and the style guide, then feed them into the LLM.
1.1. context is extracted directly from the code by coding agents
2. I proofread it and in 97% of cases accept it, because it follows the style guide and mostly transforms my context correctly into customer-consumable content
3. Done in less than 20 minutes.
Tech writers were doing an amazing job, of course, but I can get 90-95% of the quality in 1% of the time spent on that work.
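To make it concrete, the "prepare context, feed the LLM, proofread" loop can be as small as a script that stitches the style guide and the relevant source files into one prompt. A minimal TypeScript sketch, with hypothetical file paths, and with the actual LLM call left out since that part depends on your provider and agent tooling:

    // build-doc-prompt.ts - assemble a documentation prompt from a style guide
    // and the source files a feature touches. Paths are hypothetical examples.
    import { mkdirSync, readFileSync, writeFileSync } from "node:fs";

    const styleGuide = readFileSync("docs/style-guide.md", "utf8");
    const sourceFiles = ["src/api/payments.ts", "src/api/errors.ts"];

    // Inline each file under a header so the model can point at where a claim comes from.
    const sources = sourceFiles
      .map((path) => `--- ${path} ---\n${readFileSync(path, "utf8")}`)
      .join("\n\n");

    const prompt = [
      "You are drafting customer-facing documentation.",
      "Follow this style guide strictly:",
      styleGuide,
      "Here is the relevant source code; treat it as the ground truth:",
      sources,
      "Describe only behaviour visible in the code, and flag anything uncertain instead of guessing.",
    ].join("\n\n");

    // Hand the prompt to whatever LLM or coding agent you use. Writing it to disk
    // keeps the step auditable and makes the human proofreading pass easier.
    mkdirSync("out", { recursive: true });
    writeFileSync("out/doc-prompt.md", prompt);

The proofreading pass stays human; the script only removes the copy-and-paste drudgery.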
Your docs are probably read many more times than they are written. It might be cheaper and quicker to produce them at 90% quality, but surely the important metric is how much time it saves or costs your readers?
None of the ten or so staff tech writers I have worked closely with over the years have honestly been great. This has been disappointing.
Always had to contract external people to get stuff done really well. One was a bored CS university professor, another was a CTO in a struggling tiny startup who needed cash.
It’s not so much that AI is replacing “tech writers”; with all due respect to the individuals in those roles, it was never a good title to identify as.
Technical writing is part of the job of software engineering. Just like “tester” or “DBA”, it was always going to go the way of the dodo.
If you’re a technical writer, now’s the time to reinvent yourself.
The specialisations will always exist. A good software engineer can't replace a good tester, DBA, or writer. There are specific extra skills necessary for those roles. We may not need those full skills in every environment (most companies will be just fine without a DBA), but they sure are not going away globally.
You're going to get some text out of a typical engineer, but the writing quality, flow, and fit for the given purpose is not going to come close to someone who does it every day.
> Technical writing is part of the job of software engineering.
Where I work we have professional technical writers and the quality vs your typical SW engineer is night and day. Maybe you got lucky with the rare SW engineer that can technical write.
While I agree with the article, the reduction in the number of technical writers, due to the belief that their absence can be compensated for by AI, is just the most recent step in a continuous process of degradation of technical documentation that has characterized the last 3 decades.
During the nineties of the last century I was still naive enough to believe that the great improvements in technology, i.e. the widespread availability of powerful word processors and of the Internet for extremely cheap distribution, would lead to an improvement in the quality of technical documentation and to easy access to it for everybody.
The reverse has happened, the quality of the technical documentation has become worse and worse, with very rare exceptions, and the access to much of what has remained has become very restricted, either by requiring NDAs or by requiring very high prices (e.g. big annual fees for membership to some industry standards organization).
A likely explanation for the worse and worse technical documentation is a reduction in the number of professional technical writers.
It is very obvious that the current management of most big companies does not understand at all the value of competent technical writers and of good product documentation; not only for their customers and potential customers, but also for their internal R&D teams or customer support teams.
I have worked for several decades at many companies, very big and very small, on several continents, but unfortunately only at one of them was the importance of technical documentation well understood by management, so the hardware and software developers had an adequate amount of time for writing documentation planned into their product development schedules. Despite the fact that the project schedules at that company appeared to allocate much more time for "non-productive tasks" like documentation than in other places, in reality it was there that R&D projects were completed the fastest and with the least delay over the initially estimated completion time, one important factor being that every developer understood very well what had to be done in the future, what had already been done, and why.
There are better tools for software developers now than in e.g. 1996, so the pace of writing software has indeed increased, but certainly there has not been any 100x speed up.
At best there may have been a doubling of the speed, though something like +50% is much more likely.
Between e.g. 1980 and 1995 the speed of writing documentation has increased much faster than the speed of writing programs has ever increased, due to the generalization of the use of word processors on personal computers, instead of using typewriting machines.
Many software projects might be completed today much faster than in the past only when they do not start from zero, but they are able to reuse various libraries or program components from past projects, so the part that is actually written now is very small. Using an AI coding assistant does exactly the same thing, except that it automates the search through past programs and it also circumvents the copyright barriers that would prevent the reuse of programs in many cases.
I'm talking about the features/hr. It's trivial now to spin up a website with login, search, commenting, notifications, etc. These used to be multi week projects.
This is not writing something new from scratch, but just using an already existing framework, with minor customization for the new project.
Writing an essentially new program, which does something never accomplished before, proceeds barely faster today than what could be done in 1990, with a programming environment like those of Microsoft or Borland C/C++.
The answer could be: (1) users flagged it; (2) mods downweighted it; and/or (3) it set off the flamewar detector, a.k.a. the overheated discussion detector.
In this case it was #3.
That's one of the ways the system autocorrects. A sensational/indignant post attracts upvotes because that's how upvotes work (this is a weakness of the upvoting system), and this triggers an overheated discussion, which trips the flamewar detector which penalizes the post. It's about as simple a feedback mechanism as a thermostat.
That's why it's not uncommon for something to be at #1 and have tons of upvotes and comments, and then suddenly plummet. We do review all the threads that get that particular penalty but sometimes it takes a while.
Edit: ok, I've reviewed it. In this case, the thread is actually pretty good. I'm not sold on the article*, but a good thread is enough to turn off the flamewar penalty in this case, and I've done so.
(* not a judgment about article quality in general, only about how good a fit it is or isn't for HN)
"Productivity gains are real when you understand that augmentation is better than replacing humans..." Isn't this where the job losses happen? For example, previously you needed 5 tech writers but now you only need 4 to do the same work. Hopefully it just means that the 5th person finds more work to do, but it isn't clear to me that Jevons paradox kicks in for all cases.
I’m on the engineering side. We are in the same boat.
Writers become more productive = fewer writers needed. Not zero, but fewer.
That’s the current step. Now, if Cursor's promise of fully automating multi-week system work comes true, all the internal docs become AI-driven.
So the only exception is external docs. But… if all software is written by machines, there are no readers.
This is obviously a vector, not the current state :( Very dark and gloomy.
However, the writing is on the wall: AI will completely replace technical writers.
The technology is improving rapidly, and even now, with proper context, AI can write technical documentation extremely well. It can include clear examples (and only a very small number of technical writers know how to do that properly), and it can also anticipate and explain potential errors.
I agree with the core concern, but I think the right model is smaller, not zero. One or two strong technical writers using AI as a leverage tool can easily outperform a large writing team or pure AI output. The value is still in judgment, context, and asking the right questions. AI just accelerates the mechanics.
I remember the days when every large concern employed technical writers and didn't expect us programmers and engineers to write for the end users. But that stopped decades ago in most places at least as far as in house applications are concerned, long before AI could be used as an excuse for firing technical writers.
A lot of this applies to programming as well. And pretty much everything people are using GenAI for.
If you want to see how well you understand your program or system, try to write about it and teach someone how it works. Nature will show you how sloppy your thinking is.
I think using AI for tech documentation is great for people who don't really give a shit about their tech documentation. If you were going to half-ass it anyway, you can save a lot of money half-assing it with AI.
Someone has to turn off their brain completely and just follow the instructions as-is. Then log the locations where the documentation wasn't clear enough or assumed some knowledge that wasn't given in the docs.
What’s the point of confirming? AI can lie and so can humans as well.
I believe you but that’s just a gut feeling. I guess the best way to put this is anyone can write what you wrote with AI and claim it wasn’t written by AI.
The decision to stop hiring technical writers usually feels reasonable at the moment it’s made. It does not feel reckless. It feels modern. Words have become cheap, and documentation looks like words. Faced with new tools that can produce fluent text on demand, it is easy to conclude that documentation is finally solved, or at least solved well enough.
That conclusion rests on a misunderstanding so basic it’s hard to see once you’ve stepped over it.
Documentation is not writing. Writing is what remains after something more difficult has already happened. Documentation is the act of deciding what a system actually is, where it breaks, and what a user is allowed to rely on. It is not about describing software at its best, but about constraining the damage it can do at its worst.
This is why generated documentation feels impressive and unsatisfying at the same time. It speaks with confidence, but never with caution. It fills gaps that should remain visible. It smooths over uncertainty instead of marking it. The result reads well and fails quietly.
Technical writers exist to make that failure loud early rather than silent later. Their job is not to explain what engineers already know, but to notice what engineers have stopped seeing. They sit at the fault line between intention and behavior, between what the system was designed to do and what it actually does once released into the world. They ask the kinds of questions that slow teams down and prevent larger failures later.
When that role disappears, nothing dramatic happens. The documentation still exists. In fact, it often looks better than before. But it slowly detaches from reality. Examples become promises. Workarounds become features. Caveats evaporate. Not because anyone chose to remove them, but because no one was responsible for keeping them.
What replaces responsibility is process. Prompts are refined. Review checklists are added. Output is skimmed rather than owned. And because the text sounds finished, it stops being interrogated. Fluency becomes a substitute for truth.
Over time, this produces something more dangerous than bad documentation: believable documentation. The kind that invites trust without earning it. The kind that teaches users how the system ought to work, not how it actually does. By the time the mismatch surfaces, it no longer looks like a documentation problem. It looks like a user problem. Or a support problem. Or a legal problem.
There is a deeper irony here. The organizations that rely most heavily on AI are also the ones that depend most on high-quality documentation. Retrieval pipelines, curated knowledge bases, semantic structure, instruction hierarchies: these systems do not replace technical writing. They consume it. When writers are removed, the context degrades, and the AI built on top of it begins to hallucinate with confidence. This failure is often blamed on the model, but it is really a failure of stewardship.
Responsibility, meanwhile, does not dissolve. When documentation causes harm, the model will not answer for it. The process will not stand trial. Someone will be asked why no one caught it. At that point, “the AI wrote it” will sound less like innovation and more like abdication.
Documentation has always been where software becomes accountable. Interfaces can imply. Marketing can persuade. Documentation must commit. It must say what happens when things go wrong, not just when they go right. That commitment requires judgment, and judgment requires the ability to care about consequences.
This is why the future that works is not one where technical writers are replaced, but one where they are amplified. AI removes the mechanical cost of drafting. It does not remove the need for someone to decide what should be said, what must be warned, and what should remain uncertain. When writers are given tools instead of ultimatums, they move faster not because they write more, but because they spend their time where it matters: deciding what users are allowed to trust.
Technical writers are not a luxury. They are the last line of defense between a system and the stories it tells about itself. Without them, products do not fall silent. They speak freely, confidently, and incorrectly.
Language is now abundant.
Truth is not.
That difference still matters.
Let me explain what happened here, because this is very human and very stupid, and therefore completely understandable.
We looked at documentation and thought, Ah yes. Words.
And then we looked at AI and thought, Oh wow. It makes words.
And then we did what humans always do when two things look vaguely similar: we declared victory and went to lunch.
That’s it. That’s the whole mistake.
Documentation looks like writing the same way a police report looks like justice. The writing is the part you can see. The job is everything that happens before someone dares to put a sentence down and say, “Yes. That. That’s what this thing really does.”
AI can write sentences all day. That’s not the problem. The problem is that documentation is where software stops flirting and starts making promises. And promises are where the lawsuits live.
Here’s the thing nobody wants to admit: technical writers are not paid to write. They are paid to be annoying in very specific, very expensive ways. They ask questions nobody likes. They slow things down. They keep pointing at edge cases like a toddler pointing at a dead bug going, “This too? This too?”
Yes. Especially this too.
When you replaced them with AI, nothing broke. Which is why you think this worked. The docs still shipped. They even looked better. Cleaner. Confident. Calm. That soothing corporate voice that says, “Everything is fine. You are holding it wrong.”
And that’s when the rot set in.
Because AI does not experience dread. It does not wake up at 3 a.m. thinking, “If this sentence is wrong, someone is going to lose a week of their life.” It does not feel that tightening in the chest that tells a human writer, This paragraph is lying by omission.
So it smooths. It resolves. It fills in gaps that should stay jagged. It confidently explains things no one actually understands yet. It does what bad managers do: it mistakes silence for agreement.
Over time, your documentation stops describing reality and starts describing a slightly nicer alternate universe where the product behaves itself and nobody does anything weird.
This is how you get users “misusing” your product in ways your own docs taught them.
Then comes my favorite part.
You notice the AI is hallucinating. So you add tooling. Retrieval. Semantic layers. Prompt rules. Context hygiene. You hire someone with “AI” in their title to fix the hallucinations.
What you are rebuilding, piece by piece, is technical writing. Only now it’s worse, because it’s invisible, fragmented, and no one knows who’s responsible for it.
Context curation is documentation.
Instruction hierarchies are documentation.
If your AI is dumb, it’s because you fired the people who knew what the truth was supposed to look like.
And don’t worry, accountability did not get automated away while you weren’t looking. When the docs cause real damage, the model will not be present. You cannot subpoena a neural net. You cannot fire a prompt. You will be standing there explaining that “the system generated it,” and everyone will hear exactly what that means.
It means nobody was in charge.
Documentation is where software admits the truth. Not the aspirational truth. The annoying truth. The truth about what breaks, what’s undefined, what’s still half-baked and kind of scary. Marketing can lie. Interfaces can hint. Documentation has to commit.
Commitment requires judgment.
Judgment requires caring.
Caring is still not in beta.
This is not an anti-AI argument. AI is great. It writes faster than any human alive. It just doesn’t know when to hesitate, when to warn, or when to say, “We don’t actually know yet.” Those are the moments that keep users from getting hurt.
The future that works is painfully obvious. Writers with AI are dangerous in the good way. AI without writers is dangerous in the other way. One produces clarity. The other produces confidence without consent.
Technical writers are not a luxury. They are the people who stop your product from gaslighting its users.
AI can generate language forever.
Truth still needs a human with a little fear in their heart and a pen they’re willing to hesitate with.
With every job replaced by AI the best people will be doing a better job than the AI and it'll be very frustrating to be replaced by people that can't tell the difference.
Meh. A bit too touchy feely for my taste, and not much in ways of good arguments. Some of the things touched on in the article are either extreme romanticisations of the craft or rather naive takes (docs are product truth? Really?!?! That hasn't been the case in ages, with docs for multi-billion dollar solutions, written by highly paid grass fed you won't believe they're not humans!)...
The parts about hallucinations and processes are also a bit dated. We're either at, or very close to the point where "agentic" stuff works in a "GAN" kind of way to "produce docs" -> read docs and try to reproduce -> resolve conflicts -> loop back, that will "solve" both hallucinations and processes, at least at the quality of human-written docs. My bet is actually better in some places. Bitter lesson and all that. (at least for 80% of projects, where current human written docs are horrendous. ymmv. artisan projects not included)
What I do agree with is that you'll still want someone to hold accountable. But that's just normal business. This has been the case for integrators / 3rd party providers since forever. Every project requiring 3rd party people still had internal folks that were held accountable when things didn't work out. But, you probably won't need 10 people writing docs. You can hold accountable the few that remain.
I love AI and use it daily, but I still run into hallucinations, even in CoT/thinking models. I don't think hallucinations are as bad as people make them out to be. But I've been using AI since GPT-3, so I'm hyper-aware.
Yea. I think people underestimate this. Yesterday I was writing an Obsidian plugin using the latest and most powerful Gemini model, and I wanted it to make use of the new keychain in Obsidian to retrieve values for my plugin. Despite reading the docs first at my request, it still used a non-existent method (retrieveSecret) to get the individual secret value. When it ran into an error, instead of checking its assumptions it assumed that the method wasn't defined in the interface, so it wrote an obsidian.shim.ts file that defined a retrieveSecret interface. The plugin compiled but obviously failed, because no implementation of that method exists. When it understood it was supposed to use getSecret instead, it ended up updating the shim instead of getting rid of it entirely. Add that up over 1000s of sessions/changes (like the one Cursor has shared on letting the agent run until it generated 3M LOC for a browser) and it's likely that code bases will be polluted with tiny papercuts stemming from LLM hallucinations.
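To make that failure mode concrete, here is a tiny TypeScript sketch of the anti-pattern: declaring a shim so the compiler stops complaining about a method that was never implemented. The retrieveSecret/getSecret names follow the account above and are not verified against the real Obsidian API:

    // What the agent hallucinated: declare the missing method so the build passes,
    // even though nothing ever implements it.
    interface SecretStoreShim {
      retrieveSecret(key: string): Promise<string>; // does not exist at runtime
    }

    // What actually exists at runtime (per the account above: getSecret, not retrieveSecret).
    const realStore = {
      async getSecret(key: string): Promise<string> {
        return `secret-for-${key}`;
      },
    };

    async function main() {
      // The cast satisfies the compiler, so the plugin "builds"...
      const store = realStore as unknown as SecretStoreShim;
      try {
        // ...but the call fails at runtime, because retrieveSecret was never implemented.
        await store.retrieveSecret("api-token");
      } catch (err) {
        console.error("Runtime failure the type shim papered over:", err);
      }
    }

    main();

The type system is satisfied, so everything compiles, but the call still blows up at runtime, and the shim hides where the actual mistake was made.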
Just like when Google started - get ahead, and stay ahead.
Google returns the best result based on both its own calculations and the click history of which results were most successful for a search.
LLMs don't really have that same feedback loop, partly because their strength is writing one sentence many different ways. Many different ways to write a sentence doesn't mean any of them is the best way. Whether they can write deep sentences, keeping a coherent, connected arc through sentences and stories, is another question.
LLMs also generally return the "best" answer as the most "common" one, without as easily weighting the outliers that might be the most true, or the best.
The definition of what is "good" and "correct" can also vary quite a bit, especially with writing.
AI can be configured to look for patterns humans might not see, but we also know humans can see things and scenarios that LLMs aren't trained on and can fail to reach.
As we can tell with AI copy, it all starts to sound the same even if it's new. Real writing ages differently. It can be much more of a fingerprint. This is an area I'm hoping to learn more about from the talented writers in my life - it seems the better the writer, the more they can see the holes in LLM output, and the better they can be as power users of LLMs, thanks to their superior ability to use words, whether they realize it or not.
Why should I hire a dedicated writer if I have people with a better understanding of the system? Also worth noting that, as in any profession, most writers are... mediocre. Especially when you hire someone on contract. I have had mostly bad experiences with them in the past. They happily charge $1000 for a few pages of garbage that is not even LLM-quality. No creativity, just pumping out words.
I can chip in like $20 to pay some "good writer" that "observes, listens and understands" to write documentation on something, and compare it with an LLM-made one.
"Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
> Why should I hire a dedicated writer if I have people with better understanding of the system?
Many engineers are terrible at documentation, not just because they find it boring or cannot put it into words (that's the part an LLM could actually help with) but because they cannot tell what to document, what is unneeded detail, how best to address the target audience (or what is the profile of the target audience to begin with; something you can tell an LLM but which it cannot find on its own), etc, etc. The Fine Article goes into these nuances; it's the whole point of it.
> "Write a manual for air travel for someone who never flew. Cover topics like buying a ticket, preparing for travel, getting to airport, doing things in the airport, etc"
Air travel is a well-known thing, surely different from your bespoke product.
are you talking about the hashes (##, ###) etc in the subheadings? I think that's an intentional design thing, a bit of a nod to the back row, if you will.
There's another HN thread specifically asking people for links to their personal websites. I suspect an accidental typing-in-the-wrong-reply-box issue.
I don't think I've ever seen documentation from tech writers that was worth reading: if a tech writer can read code and understand it, why are they making half or less of what they would as an engineer? The post complains about AI making things up in subtle ways, but I've seen exactly the same thing happen with tech writers hired to document code: they documented what they thought should happen instead of what actually happened.
There are plenty of people who can read code who don't work as devs. You could ask the same about testers, ops, sysadmins, technical support, some of the more technical product managers etc. These roles all have value, and there are people who enjoy them.
Worth noting that the blog post isn't just about documenting code. There's a LOT more to tech writing than just that niche. I still remember the guy whose job was writing user manuals for large ship controls, as a particularly interesting example of where the profession can take you.
> they documented what they thought should happen instead of what actually happened.
The other way around. For example the Python C documentation is full of errors and omissions where engineers described what they thought should happen. There is a documentation project that describes what actually happens (look in the index for "Documentation Lacunae"): https://pythonextensionpatterns.readthedocs.io/en/latest/ind...
Yeah, but almost everyone wants money. You can see this by looking at what projects have the best documentation: they're all things like the man-pages project where the contributors aren't doing it as a job when they could be working a more profitable profession instead.
While I do appreciate man pages, I don't think they are something I would consider to be "the best documentation". Many of the authors of them are engineers, by the way.
A tech writer isn't a class of person. "Tech writer" is a role or assignment. You can be an engineer working as a tech writer.
Also, the primary task of a tech writer isn't to document code. They're supposed to write tutorials, user guides, how to guides, explanations, manuals, books, etc.
I'm currently in the middle of restructuring our website. 95% of the work is being done by codex. That includes content writing, design work, implementation work, etc. But it's a lot of work for me because I am critical about things like wording/phrasing and not hallucinating things we don't actually do. That's actually a lot of work. But it's editorial work and not writing work or programming work. But it's doing a pretty great job. Having a static website with a site generator means I can do lots of changes quickly via agentic coding.
My advice to tech writers would be to get really good at directing and orchestrating AI tools to do the heavy lifting of producing documentation. If you are stuck using content management systems or word processors, consider adopting a more code-centric workflow. The AI tools can work with those a lot better. And you can't afford to be doing things manually that an AI does faster and better. Your value is in making sure the right documentation gets written and produced correctly, and in correcting things that need correcting/perfecting. It's not in doing everything manually; you need to cherry-pick where your skills still add value.
Another bit of insight is that a lot of technical documentation now has AIs as the main consumer. A friend of mine who runs a small SAAS service has been complaining that nobody actually reads his documentation (which is pretty decent) and instead relies on LLMs to do that for them. The more documentation you have, the less people will read all of it. Or any of it.
But you still need documentation. It's easier than ever to produce it. The quality standards for that documentation are high and increasing. There are very few excuses for not having great documentation.