FTC calls on U.S. government to step in and regulate surge in fraud driven by artificial intelligence

The United States Federal Trade Commission (FTC) recently stated that, in response to the growing prevalence of scams built on artificial intelligence technologies such as ChatGPT, it will pursue those who exploit these tools for fraudulent activity.

Compared with earlier applications of artificial intelligence, today's generative AI can produce realistic images, more fluent prose, and even convincingly deceptive videos, websites, and programs. As a result, unscrupulous actors have begun using these tools to run scams.

ChatGPT Plus (Image credit: Future)

During a congressional hearing, FTC Chair Lina Khan, joined by Commissioners Rebecca Slaughter and Alvaro Bedoya, explained that AI technologies could be put to fraudulent use, leading to a sharp rise in consumer victimization. They warned that AI could exacerbate fraudulent behavior and called on the US government to regulate such technologies before they cause even greater harm.

The FTC maintains that technology companies cannot evade responsibility merely by citing the black-box nature of their algorithms, and it emphasizes that they must also be held accountable for their innovations.

The FTC had previously issued broad public guidance urging technology companies to develop AI responsibly. Yet even though most companies claim to take a responsible and transparent approach to AI research and development, many scenarios clearly remain inadequately addressed.