
Federal Agencies Assert Oversight of Advanced Technology


Four federal agencies issued a joint statement Tuesday committing to enforce laws and regulations against actions taken using advanced technology, including artificial intelligence (AI).

The Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the Civil Rights Division of the Justice Department and the U.S. Equal Employment Opportunity Commission joined in the statement’s release.

“We already see how AI tools can turbocharge fraud and automate discrimination, and we will not hesitate to use the full scope of our legal authorities to protect Americans from these threats,” FTC Chair Lina M. Khan said. “Technological advances can deliver critical innovation—but claims of innovation must not be cover for lawbreaking. There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.”

All four agencies previously expressed concerns about potentially harmful uses of automated systems.

“Technology marketed as AI has spread to every corner of the economy, and regulators need to stay ahead of its growth to prevent discriminatory outcomes that threaten families’ financial stability,” CFPB Director Rohit Chopra said. “Today’s joint statement makes it clear that the CFPB will work with its partner enforcement agencies to root out discrimination caused by any tool or system that enables unlawful decision making.”

In the statement, the agencies said many automated systems use vast amounts of data to find patterns and correlations, then apply those patterns to new data to form recommendations and predictions.

“While these tools can be useful,” the agencies said, “they also have the potential to produce outcomes that result in unlawful discrimination.”

Potential discrimination in automated systems may come from different sources, including problems with:

  • Data and Datasets: Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors. Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.
  • Model Opacity and Access: Many automated systems are “black boxes” whose internal workings are not clear to most people and, in some cases, even the tool’s developer. This lack of transparency often makes it more difficult to know whether an automated system is fair.
  • Design and Use: Developers do not always understand or account for the contexts in which private or public entities will use their automated systems.

“Today, our agencies reiterate our resolve to monitor the development and use of automated systems and promote responsible innovation,” the agencies said. “We also pledge to vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies.”
