Why Self-Regulation Is Best for Artificial Intelligence

As the Biden Administration seeks to get its arms around the global phenomenon that is artificial intelligence, it should recognize a few realities.  

First, artificial intelligence (AI) is more than an idea whose time has come – it is indelibly written into the fabric of our society. AI has grown from a theoretical, academic concept to an indispensable tool in just about every sector imaginable. It has become ubiquitous and universal, transforming commerce, culture, industry, and individual lives the world over, fostering a new era of innovation. 

Alan Turing, often regarded as the father of AI, posed the question in 1950: “Can machines think?” and his Turing Test became a foundational measure for machine intelligence. In 1956, John McCarthy coined the term “artificial intelligence” at the Dartmouth Conference, marking the beginning of AI as a formal academic discipline. From there, artificial intelligence saw sporadic progress, with periods of intense excitement and support followed by periods of skepticism and reduced funding. 

In the early 2000s, AI took big leaps with the advent of innovations like deep learning and big data. Those developments led to AI breakthroughs, allowing machines to recognize speech and images, diagnose diseases, generate language, and perform a wide range of other tasks. 

Today, the myriad benefits of AI are becoming well-known to us all. Capable of analyzing vast amounts of data, AI helps with critical decisions in fields like finance, business, law, medicine, and even governance. AI algorithms can analyze complex biomedical data faster than humans, enabling early and accurate disease diagnosis, robotic surgeries, and enhanced healthcare services.

AI-powered robots can perform repetitive tasks more efficiently, leading to increased production rates and reduced human errors. Voice assistants like Siri, Alexa, and Google Assistant have made technology more accessible to the elderly and disabled, breaking down barriers of isolation. AI drives progress in many other areas, including climate modeling, drug discovery, and even art. 

For all of its benefits, AI is not without controversy, raising privacy and ethical concerns. With AI’s ability to analyze vast datasets, there is a risk of personal data misuse, leading to potential breaches of privacy. There are also valid concerns that AI could render certain jobs obsolete even while it creates new categories of work.

Decisions made by AI, especially in critical areas like healthcare and criminal justice, can pose ethical challenges, especially if they lack transparency. A heavy dependence on AI systems can lead to loss of basic human skills and a potential vulnerability if these systems fail. 

As the president and policymakers consider rules to govern AI, they face intractable challenges. Trying to regulate every expression of AI will be an elusive exercise; the technology is far too ingrained already in our lives to even think it possible. Equally important, government regulation, however well-intentioned, will always be several steps behind the technology, even in its best iteration. 

All things considered, self-regulation is the best policy course for AI regulation. Here is why: 

Industry Expertise

AI is vast, complex and rife with nuances. Technology experts immersed in its intricacies are best suited to draft and enforce guidelines. Even the most expert federal regulatory agencies, while motivated, are sure to lack the depth of understanding necessary to effectively oversee AI innovations. Those within the AI industry possess the hands-on experience and technical familiarity to establish guidelines that are both practical and effective. 

Agility and Responsiveness

The pace of AI advancement is dynamic and staggering. By the time traditional regulatory frameworks catch up with the latest development, several new ones will have emerged. Self-regulation offers a flexibility that can adapt swiftly to the ever-changing AI landscape. This agility ensures that guidelines remain relevant and fosters responsible innovation without stifling it.  


Cost-Effectiveness

The bureaucratic nature of government-led regulations often entails prolonged processes and significant financial expenditures. In contrast, self-regulation, led by industry stakeholders, can be more streamlined and cost-effective, reducing the financial burden on both the industry and taxpayers.  

Preempting Stricter Oversight

In the absence of self-regulation, government might impose even stricter, potentially stifling regulations. By adopting a proactive stance, the AI industry can set standards that are both high and realistic, preempting the need for burdensome regimes. 

Building Trust Through Transparency

One of the critiques of AI is the perceived “black box” nature of algorithms, which can lead to mistrust among regulators and the public. By self-imposing transparency and ethical standards, the AI industry can build credibility and trust. 

To be fair, there are downsides to self-regulation that argue for government regulation. 

Proponents of federal regulation argue that the Federal Trade Commission (FTC) and Department of Justice would be more inclined than industry to protect individual rights, monitor and prevent abuses, and strictly enforce the rules. With concerns over AI’s influence on privacy, federal regulations can provide a standardized framework to ensure that personal data is not misused. In areas like autonomous vehicles and healthcare, AI regulations can ensure the development and deployment of safe, reliable AI technologies. 

Without proper oversight, AI systems can inadvertently perpetuate or even amplify biases present in the data they are trained on. Regulation can ensure systems are developed and deployed in a manner that mitigates these biases. Regulations can require companies to make their AI systems more understandable and transparent, which can help in building public trust. By setting rigorous standards, the United States might position itself as a producer of high-quality, reliable, and trustworthy AI solutions, which could become a selling point on the international stage. 

Just as the FDA ensures the safety of drugs and the FAA regulates aviation safety, AI regulation can be a way to protect consumers from potentially harmful or misleading AI-driven products and services. Regulations can also drive companies to adopt ethical AI practices, ensuring technology benefits humanity as a whole. 

But enforcing federal AI regulations can be challenging, given the complexity of AI itself. 

The risk of governmental overreach, and with it the infringement of individual freedom, is real. And because AI companies often operate globally, navigating a patchwork of different regulatory regimes across countries can be complex and costly. 

We must also be concerned about competition; if the United States over-regulates, it may fall behind in the global AI race. In addition, the economic impact of over-regulation might drive AI companies to relocate to more lenient jurisdictions, potentially leading to job losses and a downturn in economic contributions from a rapidly growing sector. 

Finally, ill-advised federal regulation has direct economic implications for small- and medium-sized businesses. Complying with regulations often comes with costs. While large corporations might easily absorb these, small businesses and startups could struggle, potentially hampering innovation at grassroots levels. 

To preserve and advance the many benefits of artificial intelligence, U.S. policymakers should encourage the industry to develop a robust and prudent self-regulatory regime that fosters accountability, ethical responsibility, innovation, and security.  

As AI continues to permeate our everyday lives, there is no doubt that regulation is necessary. But its direction should come from industry, not government, and be guided by a set of industry-derived principles and best practices that are fair, transparent, and enforceable. 

Adonis Hoffman is CEO of The Advisory Counsel LLC and a former adjunct professor at Georgetown University.  He served as chief of staff and senior legal advisor at the FCC and previously in senior roles in the U.S. House of Representatives.  He is a member of The Media Institute’s Board of Trustees and First Amendment Advisory Council.  This article appeared in The Hill.