Until recently, if a draft was sent to me by a subject matter expert (SME), it might need significant edits, but I could generally assume the technical content was sound. At Chrome, my team of experienced writers has a good pipeline for taking those drafts, shaping them into readable docs, blog posts, and articles, and publishing them. Even when I was in the business of commissioning articles from external sources, there were pretty obvious signs that content was plagiarised, or that the SME wasn’t quite the expert they thought they were. You get a good nose for this over time. Over the last year, everything has changed.
There used to be an implicit contract between SME and editor. We receive technically accurate content, and we use our skills in developer communication to ensure the information lands well. In general, the questions writers ask are clarifying ones; we’re essentially customer zero for the content, working through the tutorial and ensuring each step is as clear as can be. Beyond flagging obvious typos, however, we could assume the SME knew what they were talking about.
Generative AI has broken that contract. Increasingly, writers receive content that looks polished yet contains inaccuracies. Sometimes the SME, while polishing their content with AI tools, has missed the fact that the tool also modified some code or changed the meaning of the text. Sometimes the drive for productivity with these tools means people are being asked to cover broader subject areas, so they rely on AI tools for research rather than their own knowledge. AI can be very confidently wrong, and if the text seems clear, it’s possible to miss that it’s clearly nonsense.
This places a greater burden on the team editing and producing the content. Even with content handed to us from a known SME, we now need to review things with the assumption that they may be wrong. Does that interface really have those methods? Is that diagram inventing a brand new language? Can those quotes be attributed to those people? This relies on having a writing team who also have a level of expertise that allows them to catch these things. It also relies on having enough people in that writing team to deal with the increased workload.
I mention the GitHub flow example, not to take a dig at a fellow writing team, but as something we all need to learn from. I’m thankful that we’ve not had a similar thing happen so far at Chrome; credit is due to my excellent team and the care with which the broader developer relations team is adopting AI. But things are moving fast, and writers in giant companies are having to work out how to deal with it as much as anyone else. Separately, the back story of that Ars Technica article is wild.
The problem becomes bigger if you are relying on vendors and external contributors. You can put as many requirements into your contracts as you like, and reject obvious slop, but the level at which you have to treat what comes in as suspect is like nothing we’ve seen before.
If you are doing content operations at scale, it’s your job to put in place processes to deal with this new reality. People will be putting AI-generated content through your pipeline. Even if it’s not completely generated, they may be unaware of how much AI polishing has changed their original words. How are you verifying things? The assumptions that were generally true two years ago don’t hold now. Even in smaller operations, you can’t just rely on an experienced editor spotting issues; AI has broken much of the internal knowledge I’ve been able to rely on for years.
I’m not anti-AI. I’m increasingly using AI in my content operations pipeline, and will share some of that on this blog in future. However, as with any new technology, there’s the potential for both positive and negative impacts. In this case, a seemingly positive thing for the SMEs—help in drafting their content—is resulting in additional work for another team. But that’s how change happens: it doesn’t happen all at once. You have to work down the chain of problems and understand where old patterns are no longer serving you. I imagine that we’ll see more unfortunate things shipped by content teams as we work through this. I’m dreading the point at which it’s my turn to be the person who LGTM’d the slop! We’re in a transitional time, though, and I’m encouraged by the amount of discussion I’m seeing from other writers as we work to redefine how we do content operations in a world of generative AI.








