FTC chair Lina Khan says she’s on alert for abusive A.I. use


FTC Chairwoman Lina Khan testifies during the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce hearing on the “FY2024 Federal Trade Commission Budget,” in Rayburn Building on Tuesday, April 18, 2023.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

The Federal Trade Commission is on alert for the ways that rapidly advancing artificial intelligence could be used to violate the antitrust and consumer protection laws it’s charged with enforcing, Chair Lina Khan wrote in a New York Times op-ed on Wednesday.

“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Khan wrote, echoing a theme the agency shared in a joint statement with three other enforcers last week.

In the op-ed, Khan detailed several ways AI might be used to harm consumers or the market that she believes federal enforcers should be watching for. She also compared the current inflection point around AI to the mid-2000s era in tech, when companies like Facebook and Google came to forever change communications, with substantial implications for data privacy that weren’t fully realized until years later.

“What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security,” Khan wrote.

But, she said, “The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.”

One possible effect enforcers should look out for, according to Khan, is the impact of only a few firms controlling the raw materials needed to deploy AI tools. Such control could enable dominant companies to leverage their power to exclude rivals, “picking winners and losers in ways that further entrench their dominance.”

Khan also warned that AI tools used to set prices “can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination.”

“The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition,” she wrote.

Khan also warned that generative AI “risks turbocharging fraud” by creating authentic-sounding messages. When it comes to scams and deceptive business practices, Khan said the FTC would look not only at “fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.”

Finally, Khan said that existing laws about improper collection or use of personal data will apply to the massive datasets on which AI tools are trained, and laws prohibiting discrimination will also apply in cases where AI was used to make decisions.

