May 17, 2021

AI Under Regulatory Scrutiny: FTC Reminds Companies There Is a Cop on the Beat

Advisory

The European Commission describes its recently proposed legislation as “the first-ever legal framework on AI.”1 Implicit in this description is that other jurisdictions, the United States included, lack legal frameworks governing artificial intelligence. It would be a mistake, however, to conclude that AI is unregulated in this country.

The Federal Trade Commission (FTC), in particular, has been signaling that it will use its longstanding consumer-protection authority to rein in abusive AI practices. Twice in the last month, that agency has reminded businesses that develop, sell or use AI that this is not the Wild West, and that there is a sheriff in town.

Two days before the European Commission released its legislative proposal, an attorney in the FTC’s Bureau of Consumer Protection authored a blog post warning companies not to run afoul of Section 5 of the Federal Trade Commission Act,2 the Fair Credit Reporting Act (FCRA)3 or the Equal Credit Opportunity Act (ECOA)4 when using or marketing AI.5 While focused primarily on preventing discrimination against protected classes, the post offers six broadly applicable tips for ensuring safe and compliant AI.

  • “Start with the right foundation.” An AI system is only as good as the data on which it is trained and tested. “From the start, think about ways to improve your data set, design your model to account for data gaps, and—in light of any shortcomings—limit where or how you use the model.”
  • “Watch out for discriminatory outcomes.” Test for disparate impacts before deploying an AI system and continue to do so throughout its lifespan.
  • “Embrace transparency and independence.” Consider how your company can let third parties help you identify problems with your AI system, so you can contain any harms (and your liability).
  • “Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results.”
  • “Tell the truth about how you use data.”
  • “Do more good than harm. To put it in the simplest terms, under the FTC Act, a practice is unfair if it causes more harm than good.”

The seventh tip was the sheriff’s warning to those inclined not to heed the first six: “Hold yourself accountable—or be ready for the FTC to do it for you.”

Reinforcing this point, the FTC announced on May 7 that it had finalized a settlement with Everalbum, Inc. over alleged misrepresentations by the photo-storage service, which trained its facial-recognition system with users’ photos.6 Preliminarily agreed to in January, the settlement was finalized after a period of public comment and a vote by the four current commissioners. The settlement resolved the FTC’s complaint accusing Everalbum of enabling facial recognition by default for a majority of users and giving them no way to disable the feature. The FTC did not object to the settings themselves, just their inconsistency with Everalbum’s claims that facial recognition would be inactive unless affirmatively enabled.7 The complaint also alleged that Everalbum was storing users’ photos and videos indefinitely, notwithstanding representations that Everalbum would permanently delete users’ files upon deactivation of their accounts.8

Among other provisions, the FTC’s settlement with Everalbum prohibits the company from making a variety of misrepresentations about its handling of consumers’ personal information.9 In addition, the settlement requires Everalbum, before using consumers’ biometric information, to disclose all purposes for which it will use the information and to obtain affirmative consent before employing the data to train or develop facial-recognition algorithms.10

However, the settlement does not merely correct Everalbum’s conduct going forward. It also requires Everalbum to undo the technological gains it derived from the misconduct. Specifically, Everalbum must delete the facial-recognition models and algorithms it developed with biometric information from consumers who had not affirmatively consented to the practice. Everalbum also must discard photos and videos in deactivated accounts, as well as facial feature details derived from those images.11

Taken together, the blog post and settlement illustrate the FTC’s alertness to the harms AI can cause consumers when businesses aren’t careful. Even without comprehensive AI regulation in the United States, the FTC has powerful tools for policing what it perceives as bad actors.

The FTC is not the only sheriff in town, however. State consumer protection statutes arm attorneys general and plaintiffs’ lawyers as well, and the Consumer Financial Protection Bureau and other agencies share enforcement of the FCRA and ECOA with the FTC. State privacy laws like those in California and Virginia are beginning to regulate AI and other automated data processing,12 while a range of federal and state laws govern particular AI applications.13

Europe has drawn recent attention with its new legislative proposal—what two commentators somewhat hyperbolically proclaimed to be “the heavy hand of the regulatory state . . . undoubtedly arriv[ing] in AI.”14 But there are already plenty of avenues for regulatory enforcement against AI on this side of the pond. With agencies like the FTC increasingly taking notice, US businesses need to review, and possibly bolster, their compliance programs lest they attract unwanted scrutiny.

© Arnold & Porter Kaye Scholer LLP 2021 All Rights Reserved. This Advisory is intended to be a general summary of the law and does not constitute legal advice. You should consult with counsel to determine applicable legal requirements in a specific fact situation.

  1. EC Press Release, European Commission, Europe Fit for the Digital Age: Commission Proposes New Rules and Actions for Excellence and Trust in Artificial Intelligence (Apr. 21, 2021).

  2. 15 USC § 45.

  3. Id. § 1681b.

  4. Id. §§ 1691–1691f.

  5. Elisa Jillson, Bureau of Consumer Prot., FTC, Aiming for Truth, Fairness, and Equity in Your Company’s Use of AI, Business Blog (Apr. 19, 2021, 9:43 AM).

  6. FTC Press Release, FTC Finalizes Settlement with Photo App Developer Related to Misuse of Facial Recognition Technology (May 7, 2021) (Everalbum Settlement Press Release).

  7. Everalbum, Inc., File No. 1923172, Complaint ¶¶ 9, 23–24 (FTC May 6, 2021).

  8. Id. ¶¶ 20–22, 25–26.

  9. Id., Decision and Order at 3–4 (FTC May 6, 2021).

  10. Id. at 4.

  11. Id. at 4–5.

  12. See Cal. Civil Code §§ 1798.100(c), 1798.105, 1798.106, 1798.130, 1798.140(y), 1798.185(a)(16); 2021 Va. Acts ch. 36 (adding Va. Code Ann. §§ 59.1-571, 59.1-573(A)(2)–(3), (5), 59.1-574(A), (C), 59.1-576(A)(3)).

  13. See, e.g., 42 USC § 2000e-2(k)(1)(A)(i) (employment discrimination); Cal. Bus. & Prof. Code §§ 17940–17943 (deceitful use of chatbots in certain commercial transactions or to influence voting); 740 Ill. Comp. Stat. 14/1 (Illinois Biometric Information Privacy Act); 820 Ill. Comp. Stat. 42/5 (Illinois Artificial Intelligence Video Interview Act); Div. of Inv. Mgmt., SEC, Robo-Advisers, IM Guidance Update No. 2017-02 (applicability of the Investment Advisers Act of 1940, 15 USC §§ 80b-1–80b-18c, and Investment Company Act of 1940, 15 USC §§ 80a-1–80a-64, to “robo-advisers”).

  14. Thomas Burri & Fredrik von Bothmer, The New EU Legislation on Artificial Intelligence: A Primer 6 (Apr. 21, 2021).