Editorial assurance in the age of AI

As the use of generative AI accelerates, trust in information is becoming increasingly scarce. We believe this offers leading companies an opportunity to build trust by publishing verified content.

Companies are increasingly using generative AI to create marketing and thought leadership content. But while the business benefits of using Claude or ChatGPT to research and write a LinkedIn post, marketing blog or white paper are clear, the risks may be less obvious. They range from reliance on generic arguments and poorly sourced facts to telltale AI writing patterns and divergence from company messaging.

Building trust in content

HOLLIS & BEAN supports you with editorial assurance to safeguard against the reputational risks associated with using AI for content generation. We draw on our knowledge of our clients’ businesses and industries to challenge arguments, verify facts and edit AI-generated writing for style and coherence with company messaging.

Addressing the risks of AI-generated content

Risk 1: Generic arguments…

AI chatbots such as ChatGPT and Claude are predictive models. When developing an argument, they draw on patterns internalized from a vast mix of sources (consulting decks, blog posts, reports, marketing copy) and produce text that is likely to sound plausible and widely acceptable: safe, in other words. The issue is that “safe” in this case is a synonym for “same.”

HOLLIS & BEAN applies a litmus test: could your content have been written by your competitor, or by any other large company? We draw on knowledge of your company, your industry and the macroeconomic context to identify and challenge generic arguments.

Risk 2: …that are still wrong

Hallucination is a well-known risk of AI use. Today’s generative AI models are less likely to get names and dates wrong, but argumentative hallucinations remain extremely common. Typical errors include inventing causal logic, overstating conclusions and fabricating coherence where none exists.

HOLLIS & BEAN stress-tests your argument as part of a thorough content accuracy review. We carry out hallucination checking, flagging logical errors and exaggerations wherever they appear.

Risk 3: Poorly sourced facts

Corporate thought leadership and marketing materials often back up their arguments with external proof points. AI chatbots can surface facts and figures rapidly, but users should treat them with caution. Common issues identified by H&B include facts that are:

  • Out of date: in a fast-moving industry, a source article from 2024 citing a fact from 2022 is obsolete in 2026
  • Unacceptably biased: facts published by an NGO, or by a think tank affiliated with a political party, may carry reputational risk for a company citing them
  • Limited in scope: country-specific statistics need to be assessed for global relevance
  • Competitor data: whether published directly by a competitor or by an industry body drawing on competitors’ research
  • Wrong: while factual hallucinations may be less common in 2026, they still exist.

HOLLIS & BEAN tracks every fact back to its source, then verifies it for relevance and usability. Our AI content fact-checking service enables clients to have full confidence in the texts they publish.

Risk 4: AI writing patterns

Adjectives in sets of three. Contrast statements. Long lists of bullet points. These features, among others, have traditionally been considered hallmarks of good writing, yet today their excessive use signals AI-generated text. AI writing also tends to combine an authoritative tone with a repetitive rhythm that leaves you wondering: what did I just read?

This creates a trust and credibility problem: when content feels machine-produced, its authority is diminished, regardless of the quality of the ideas. You don’t want your investors, customers and partners to think that your thought leadership or new offer was devised by a robot.

HOLLIS & BEAN reviews AI-generated text to edit out common hallmarks and improve rhythm and fluidity.

Risk 5: Divergence from company messaging

AIs can be trained to write using company messaging – to an extent. Common issues include overuse, use in contexts where it doesn’t fit, and substitution of other phrasing where company messaging would be more appropriate. And because they draw on popular web sources, AIs may even pick up competitor messaging when writing about your business.

HOLLIS & BEAN reviews AI-generated text against your company messaging and editorial style guide, proposing edits to bring your text in line with your house style.

Looking to build trust in your AI-generated content?

HOLLIS & BEAN provides a comprehensive editorial assurance service covering AI content fact-checking, style editing and AI copy validation. Reach out to our team for a review of your AI-generated content – whether a full audit or a simple style polish. Contact: hello@hollisbean.com 

FAQ

1. Is it possible to completely remove AI hallmarks from a piece of writing and make it sound human?

AI writing is recognizable both by the arguments it contains and by the style it is written in. That’s why it’s important to review and challenge both. At H&B we strongly believe that the only way to make a text “sound” human is to rewrite it. However, a comprehensive edit can go a long way toward improving the readability of an AI-generated text.

2. Can’t an AI model be trained to avoid these types of errors?

AI models are improving all the time, but hallucination errors remain common. It’s also possible to train your own GPT, instructing it to use specific terminology and style rules, for example. However, this will not result in 100% accurate, well-written text. Working with an experienced human editor is the most reliable way to ensure that an argument is coherent and distinctive, that facts are correct and that your messaging is used well.

3. What is HOLLIS & BEAN’s experience of using AI?

HOLLIS & BEAN integrates generative AI into our work insofar as it supports our ambition to deliver excellent service and content backed by sharp insights. In practice, this means we don’t use it for writing or visual production, as current AI models are unable to produce the quality our clients expect. However, we do use it as an accelerator: to build knowledge of our clients and their industries, for research purposes and to boost productivity. All employees are required to sign our AI Charter.