The Federal Trade Commission (“FTC”) issued a warning on Monday about artificial intelligence (“AI”), telling businesses to “keep [their] AI claims in check” and reminding all industries that the Commission will be monitoring baseless or exaggerated marketing around AI. In the post published on the FTC’s website, an attorney for the FTC Division of Advertising Practices reminded businesses that “false or unsubstantiated claims about a product’s efficacy” are the FTC’s “bread and butter.” The FTC’s AI warning comes on the heels of the announcement of its new division, the Office of Technology, created to combat “snake oil” in the tech industry.
The FTC’s post offers a roadmap for how regulators may scrutinize the increasing use of AI across products, and signals that deceptive claims will be a top enforcement priority. The Commission listed a series of questions that businesses should consider before marketing an AI product in order to ensure their claims are not deceptive.
- Are you exaggerating what your AI product can do, or claiming it can do something beyond the current capability of such technology?
- Are you promising that your AI product does something better than a non-AI product?
- Are you aware of the risks if it fails or yields biased results?
- Does the product actually use AI at all?
This warning is not the first time the FTC has specifically addressed AI technology. The Commission has previously alerted businesses that it is on the lookout for discriminatory uses of AI, including whether “algorithms developed for benign purposes like healthcare resource allocation and advertising” can inadvertently lead to racial bias. The warning also comes in the wake of increased EEOC scrutiny of potential bias in AI systems.
With the newly established Office of Technology, the FTC has signaled that it plans to take a hard look at AI technologies and may be eyeing stricter enforcement. Businesses should evaluate whether their marketing of AI overpromises or misleads. And for purposes of FTC and EEOC compliance, companies should institute effective AI oversight and governance, including technical audits of AI systems, training data, and outputs to detect unintended but unlawful bias. Baker McKenzie’s AI practice can assist with global AI compliance and governance issues in a rapidly evolving technological and regulatory landscape.