Though not normally known for waxing philosophical, America’s Federal Trade Commission (FTC) this week published a post about the use of AI, musing on humanity’s fascination with tales of things being brought to life – and the possibility of being taken in by such tales.
“For generations we’ve told ourselves stories, using themes of magic and science, about inanimate things that we bring to life or imbue with power beyond human capacity,” writes Michael Atleson, an attorney with the FTC Division of Advertising Practices.
“Is it any wonder that we can be primed to accept what marketers say about new tools and devices that supposedly reflect the abilities and benefits of artificial intelligence (AI)?”
Along with noting the ambiguity and “many possible definitions” of the term “AI,” he issues a pointed warning about “hot marketing terms”:
“At the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.”
With its warning to marketers to “keep your AI claims in check” and to avoid unsupported claims about “new tools and devices that supposedly reflect the abilities and benefits of AI,” the FTC has added to its previously issued AI guidance, which focused on fairness and equity along with the warning “not to overpromise what your algorithm or AI-based tool can deliver.”
The agency offered four questions for marketers to consider when talking about AI in their advertising:
Are you exaggerating what your AI product can do? Or even claiming it can do something beyond the current capability of any AI or automated technology? For example, we’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior. Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.
Are you promising that your AI product does something better than a non-AI product? It’s not uncommon for advertisers to say that some new-fangled technology makes their product better – perhaps to justify a higher price or influence labor decisions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.
Are you aware of the risks? You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.
Does the product actually use AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.
Atleson wraps up the post with a bit of humor, making the point that if you do exaggerate your claims, the FTC will take action.
“Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
He adds that marketers should be aware that “false or unsubstantiated claims about a product’s efficacy are our bread and butter.”
Featured image: Tara Winstead