What We Can Learn from the FTC’s OpenAI Probe

The recent investigation launched by the U.S. Federal Trade Commission (FTC) into OpenAI signals growing regulatory scrutiny of AI technology and the risks it poses. As we build AI models and applications, we should proactively consider the questions in the FTC's letter, which address the risks of AI and outline steps that can be taken to mitigate them. The letter highlights the importance of transparency and accountability in AI development, and it encourages us to think deeply about our data sources, our training processes, and the potential risks and ethical implications of our work. By voluntarily gathering and reviewing this information, we can ensure that our models are not only technically sound but also ethically responsible and compliant with privacy standards. This proactive approach can help us build trust with our users and stakeholders and ensure the responsible use of AI technology.

In this rapidly evolving environment, both AI technology and its regulatory landscape are in constant flux, especially in the realm of Generative AI. I consulted with my friends at Luminos.Law to assemble a list of resources that offer teams helpful guidance on understanding AI-related risks and regulations.

Existing Regulations.

The EU AI Act carries significant implications for teams crafting AI systems and applications. It mandates transparency, requiring that users be made aware when they're interacting with AI, including content-manipulating systems. It also tackles the use of copyrighted data in training and mandates a risk-based regulatory approach, with more stringent rules for high-risk systems. Notably, providers can bring systems to market based on their own assessment of safety, but may face sanctions if authorities later disagree. The Act also bans biometric categorization, predictive policing, and software that scrapes facial images from the internet to build databases.

The Department of Consumer and Worker Protection (DCWP) in New York City has implemented a final rule concerning Automated Employment Decision Tools (AEDTs) usage by employers and employment agencies. Crucially for AI teams, the rule mandates annual bias audits for AEDTs, the results of which must be publicly disclosed. The rule specifies the requirements for bias audits, including calculating and comparing selection rates for all EEOC-reported race/ethnicity and sex categories. It stipulates the types of data permissible for audits, allows multiple employers to share a bias audit if they provide historical data, and prohibits AEDT use if its last audit is over a year old. The rule also outlines details to be included in audit summaries and provides examples when historical or test data can be used. AI teams must ensure compliance with these rules to promote fair, transparent AEDT use and avoid legal complications.
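To make the audit arithmetic concrete, here is a minimal Python sketch of the selection-rate and impact-ratio calculations that this style of bias audit rests on: the selection rate for a category is the fraction of applicants in that category who were selected, and the impact ratio compares each category's rate to the highest rate. The category names and counts below are invented for illustration, not taken from any real audit.

```python
# Hypothetical bias-audit arithmetic: selection rates and impact ratios.
# Category labels and counts are illustrative placeholders.

def selection_rates(applicants, selected):
    """Selection rate per category: number selected / number of applicants."""
    return {cat: selected[cat] / applicants[cat] for cat in applicants}

def impact_ratios(rates):
    """Impact ratio per category: its rate divided by the highest rate."""
    top = max(rates.values())
    return {cat: rate / top for cat, rate in rates.items()}

applicants = {"Group A": 200, "Group B": 150, "Group C": 120}
selected   = {"Group A": 50,  "Group B": 30,  "Group C": 18}

rates = selection_rates(applicants, selected)
ratios = impact_ratios(rates)
for cat in applicants:
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratios[cat]:.2f}")
```

An impact ratio well below 1.0 for a category (Group C's 0.60 here) is the kind of disparity an audit summary would surface for review; a real audit must follow the rule's exact category definitions and data requirements.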

FTC Blog Posts.

The FTC has published a series of blog posts that I recommend to all businesses leveraging AI and algorithms (see [1], [2], [3]). These posts delve into essential guidelines that companies must adhere to, underlining four critical elements: transparency, accuracy, fairness, and accountability. They emphasize that deception about AI use, furnishing inaccurate data, discriminatory practices, and lapses in compliance and ethics are not only unethical but can lead to enforcement actions and significant fines. Designed to shed light on complex issues, these guidelines encourage responsible use of artificial intelligence.

NIST AI Risk Management Framework.

The U.S. National Institute of Standards and Technology (NIST) has published a comprehensive report on their AI Risk Management Framework (AI RMF). Key for AI teams, this guide offers in-depth information on understanding and tackling AI risks, such as risk measurement, tolerance, prioritization, and organization-wide risk management. The framework underscores the role of trustworthiness in AI systems, exploring elements like validity, safety, security, resilience, accountability, transparency, explainability, privacy, and fairness. The report details how to establish context, categorize AI systems, and document system requirements, organizational risk tolerances, and business value. This guide is an invaluable resource for AI teams to effectively understand and manage the complexities and risks involved in AI development and deployment.

FTC Letter to OpenAI.

Given the increasing interest in LLMs, the FTC’s questions directed at OpenAI offer key insights for teams engaged in AI modeling and applications. It’s not just about acknowledging these questions; AI teams should actively prepare responses. Here are some specific elements highlighted by the FTC:

  • The FTC spotlights unique challenges, like model-induced “hallucinations” or the risk of inadvertently revealing personal information, which underline the necessity for strong safeguards. In terms of proactive measures, the FTC goes beyond general principles and advises on specifics, such as robust data refining and strict control over model responses, to ensure ethical AI usage.
  • The letter also emphasizes the importance of clear delineation of team roles and responsibilities, along with a meticulous understanding of user interactions. Addressing these specifics in the training and retraining process of LLMs can forestall the need for reactive measures down the line. By comprehending and addressing these questions, we can uphold responsible personal information handling and improve the overall reliability and safety of our AI systems.

A mind map of the questions posed by the FTC to OpenAI.

The risks associated with Generative AI models are real and serious, and they will only grow. Teams building these models need to understand those risks and start taking concrete steps today to mitigate them.

Unfortunately, not enough AI teams are systematically documenting the steps they are taking to manage risks associated with Generative AI models. The NIST AI Risk Management Framework is a key resource, providing a structured approach for understanding and mitigating AI risks, a perfect starting point for teams exploring AI safety and regulation.

Additionally, Luminos.Law, with their extensive experience in AI audits, can assist teams in handling sensitive issues like bias, transparency, and privacy in their AI systems. Their audits are comprehensive, efficient, and cover a wide range of AI models.

Join us at the AI Conference in San Francisco (Sep 26-27) to network with the pioneers of AI and LLM applications and learn about the latest advances in Generative AI and ML.

If you enjoyed this post please support our work by encouraging your friends and colleagues to subscribe to our newsletter:
