On August 7, the Federal Communications Commission (FCC) issued a Notice of Proposed Rulemaking (NPRM) in which it proposed new regulations governing the use of generative artificial intelligence (AI) to place robocalls and text messages.
As detailed in prior articles, the FCC is seeking to enact regulations that can help consumers avoid the potential harms posed by the use of AI in outbound calls, namely facilitating fraud, while at the same time protecting benign or helpful applications of the technology, such as assisting people with disabilities.
Proposed Definition
Included in the NPRM is a proposed definition of an “AI-generated call,” which the FCC would define as “a call that uses any technology or tool to generate an artificial or prerecorded voice or a text using computational technology or other machine learning, including predictive algorithms, and large language models, to process natural language and produce voice or text content to communicate with a called party over an outbound telephone call.”
Comments Requested: The FCC seeks comments from the general public and industry stakeholders on the proposed definition; specifically, whether it captures the potentially harmful uses of AI that consumers would want an opportunity to avoid while excluding the positive uses of AI that the FCC does not want to deter.
Consent Disclosures for AI-Generated Robocalls
The NPRM also calls for those seeking consent to contact a consumer using an artificial or prerecorded voice to include a clear and conspicuous disclosure that the consent being sought extends to AI-generated calls. The FCC also proposed a similar mandatory disclosure for those seeking consent to place automated text messages, namely that such consent extends to receiving AI-generated content.
Comments Requested: The FCC also seeks comments on this aspect of the NPRM. Namely, for calls that already require prior express consent, would it benefit consumers to require them to provide separate consent to receive AI-generated calls? And should the proposed changes apply only prospectively, or will companies that have already secured consent to contact consumers using a prerecorded or artificial voice need to seek additional consent to use AI?
AI Disclosure Statement at Outset of Call
Acting on the belief that consumers have a right to know they will be interacting with AI and to decide whether to continue with such interaction, the NPRM also proposes requiring callers using an AI-generated voice to clearly disclose that fact to the called party at the outset of a call.
Comments Requested: The FCC is seeking comments on the potential benefits and drawbacks of requiring such a disclosure at the outset of a call. Would consumers benefit from new disclosures that apply to “AI-generated calls,” but not to “artificial or prerecorded voice” calls? Are there different approaches that might make consumers aware of an AI-generated call while minimizing the burden of disclosure? In addition, the Commission requested comments on whether specific types of AI-generated calls should be excluded from the pre-call consent or on-call disclosure requirements, such as those placed by persons with disabilities.
Comments were also invited on the following matters:
· Whether the disclosure at the outset of an AI-generated voice call should include a special tone and/or display (such as an icon or badge), and if so, what would be the most effective and cost-efficient method to make consumers aware of the nature of the call?
· Should callers be required to provide consumers with the option to opt out of future AI-generated voice calls? If so, how should that option be implemented to minimize the risk of abuse, such as requiring consumers to make multiple opt-out requests to stop unwanted calls?
The NPRM also addressed ways to safeguard uses of AI that help people with disabilities. The FCC observed that sweeping regulations might put those with speech or hearing disabilities at risk of losing a technology that facilitates communication and requested comments on these matters as well.
As expressed by FCC Commissioner Brendan Carr, there is serious concern within the Commission about over-regulating AI: “I don't think we should be regulating AI based purely on speculative harms that aren’t showing up in the real world… I think we need to be careful that we don't adopt AI-specific regulations when the concern isn't limited to things that appear in the AI space alone… we need to make sure we continue to support US innovation and leadership.”
Regulations Are as Inevitable as AI Itself
This latest NPRM represents an ongoing effort on the part of the FCC to keep pace with the rapidly evolving technologies underlying generative artificial intelligence as they are deployed in the marketplace. It comes on the heels of the FCC’s recent Notice of Proposed Rulemaking on the use of AI in political advertisements, which would mandate disclosure if any aspect of a radio or TV political ad was made using AI. Earlier this year, the FCC issued a Declaratory Ruling confirming that the TCPA’s restrictions on the use of “artificial or prerecorded voice” encompass AI-generated voices.
Finally, in the wake of an elaborate voter suppression scheme in New Hampshire, the Commission issued hefty fines against the parties that used AI-generated voice cloning technology to impersonate President Biden and spread electoral misinformation prior to that state’s primary election.
The final form the regulations will take after comments have been received and reviewed remains to be seen, but one can be certain they will be adopted in one form or another. Companies should be prepared to implement disclosures in any of their content that uses AI and should keep an eye out for further announcements from the FCC, which we will be covering here.