"We use AI as a helper, not a stand-in. Anything it generates--notes, early drafts, visuals--gets looked over and shaped by someone who actually works in the field." I've slipped that line into a few releases, including one for a sustainability client earlier this year, and it consistently puts reporters at ease. It makes the hierarchy clear: AI can speed things up, but it doesn't set the narrative or make the assertions.
While transparency is not a marketing tactic for GPTZero, it is a core component of what we offer. Journalists often want to know: How does AI affect our work? What job functions will AI fulfill? Will humans retain control over their work? In our experience, vague answers to these questions only deepen skepticism. To remove some of that uncertainty, we created a simple, non-technical disclosure explaining what the AI is and how it works, addressing these questions before a journalist has to ask them. Here is an example of such a disclosure: "GPTZero employs machine learning to identify patterns in written text. Its conclusions are probabilistic rather than deterministic, and its output is intended to support, not replace, human evaluation."

This language serves journalists because it is free of technical jargon and inflated marketing claims about GPTZero. It sets realistic expectations, acknowledges the uncertainty inherent in AI technology, and reiterates that human evaluation remains necessary. Our experience drafting these disclosures has taught us that a consistent tone is critical: a transparency statement should state factually what the AI does and does not do, keep its scope narrow, and confine itself to the AI's actual capabilities. By accurately defining what your AI can and cannot do, you establish a credible foundation.
We've avoided defensive or technical language in favor of statements that make our roles and responsibilities clear--and that build reader trust by demonstrating a repeatable, human-led governance process rather than simply disclosing use of a tool. Journalists want to know that a human is accountable for the final output. We say, "This report was developed with the assistance of AI tools for data analysis and initial drafting. Our team of subject matter experts directed, reviewed, and verified everything for accuracy and adherence to editorial standards." This works because it moves the conversation from "Did a machine write this?" to "What is your verification process?" It presents AI as a productivity tool that enables a faster first draft, like a calculator or spell-checker, while clearly signaling that our human experts remain the ultimate arbiters of accuracy and context. This transparency around the human-in-the-loop is what we believe matters most for credibility.