Regulation of AI in the United States

  1. Introduction
  2. Efforts to regulate AI
  3. DeepFakes
  4. Concluding Remarks


  1. Introduction

The world of politics is complicated, to say the least. The United States faces plenty of huge issues, and there are no easy solutions to most of them. Yet perhaps one of the biggest is artificial intelligence, a problem that still receives too little attention from regulators, many of whom would rather turn a blind eye to it.

  2. Efforts to regulate AI

The United States is getting on board with AI regulation. The American Artificial Intelligence Initiative was established by executive order in February 2019. The initiative provides guidelines for advancing the technology as well as for training citizens to adjust to AI's entrance into the workforce. It also includes provisions describing how the privacy of American citizens will be protected so that public trust in AI is maximized. The initiative makes a great point about public trust: it is a battle that must be won through examples of AI consistently benefiting the country. Another provision calls for international cooperation on AI regulation (another HUGE thing the initiative got right, in my opinion).

On March 19, 2019, the website AI.gov was launched to track and document AI-related initiatives and progress. Along with this, the Department of Defense recently released its AI strategy. In the summary, it proposes a rough outline of the principles it wants to ensure are covered as AI continues to develop. In addition, the US has established the Joint Artificial Intelligence Center, which will work to safely develop artificial intelligence for defense purposes. The center now has over 60 government employees, though its funding is limited and lawmakers in Washington keep restricting how much money it can receive. Guidance from the federal government on how companies should develop certain types of artificial intelligence is expected soon.

  • Internal Defense and International Talks

The United States currently supports the G20 AI principles that were developed after a meeting in Japan. These international principles serve as guidance for how countries should adapt to AI moving forward. One example of that guidance is that artificial intelligence should be developed by sharing good practices and experiences.

The United States also recognizes the implications of keeping up with other countries from a military standpoint. The federal government is tapping NIST to begin creating standards for AI: in February 2019, President Trump directed NIST to create a plan for developing technical standards for reliable and trustworthy AI systems.

However, there is some opposition to the development of military AI. A large number of well-known scientists, entrepreneurs, and researchers signed an open letter urging the United Nations to limit the development of autonomous machines designed to kill.

  • Regulation against Artificial General Intelligence

One of the proposals being discussed right now is an AI kill switch: a mechanism that would let overseers essentially pull the plug on an AI that got out of control. The artificial general intelligence scenario is discussed constantly, but a kill switch is really the only plausible mechanism we know of right now that could foreseeably stop the spread of a super-intelligent AI.
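At its core, the kill-switch idea is just an interruption flag that an overseer can set and that the system must check before every action. The toy sketch below illustrates that idea only; the class and function names are my own invention, not from any actual proposal (and a real super-intelligent system might, of course, not cooperate so politely).

```python
import threading

class KillSwitch:
    """Hypothetical interruption mechanism: a flag an overseer can set
    to halt an autonomous process. Purely illustrative names."""
    def __init__(self):
        self._stop = threading.Event()

    def trigger(self):
        """Overseer flips the switch."""
        self._stop.set()

    def triggered(self):
        return self._stop.is_set()

def run_agent(switch, max_steps=1000):
    """Agent loop that checks the switch before every action."""
    steps = 0
    for _ in range(max_steps):
        if switch.triggered():   # honor the kill switch immediately
            break
        steps += 1               # stand-in for one unit of agent work
    return steps

switch = KillSwitch()
switch.trigger()                 # overseer halts the agent up front
print(run_agent(switch))         # prints 0: no work done after the switch fires
```

The hard part, which this sketch glosses over, is guaranteeing that the check cannot be bypassed or removed by the system itself.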

  • Ban on facial recognition in police body cams

Facial recognition technology can rapidly identify people, given the chance. California has become one of the first states to explicitly ban the technology in police body cameras. The ban will remain in effect for three years and will not apply to stationary cameras, just body cameras. Oregon and New Hampshire already have similar bans on facial recognition in body cams.

  3. DeepFakes

DeepFakes hit the internet most notably in November 2017. At first the reaction was amusement: people were dumbfounded at how others could be made to appear to say seemingly crazy things.

In an attempt to curb the effect of DeepFakes, Google released a huge training set of DeepFakes intended to give developers the tools they need to spot them. Hilariously enough, this backfired in a big way.

China is also trying to curb the use of DeepFakes and has rolled out new rules for distributing faked online content. The production and distribution of fake content will be a criminal offense starting January 1, 2020. The Cyberspace Administration of China states that the continued use of DeepFakes could endanger national security.

Many fear that the 2020 United States presidential election will be influenced by DeepFakes. A DeepFake could damage a candidate's political standing by making it appear that something was said that simply was not.

Internet companies as a whole are preparing to fight the DeepFake battle. Companies such as Facebook are intentionally making DeepFakes in order to build a training set that will let AI determine whether an image or video is a DeepFake. Facebook will release the DeepFake training set at the end of 2019 at one of the prominent artificial intelligence conferences. The concept is a scary one, since it essentially pits two AIs against each other and then sees which will win. What if the generator is better than the AI that is supposed to be detecting?
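The generator-versus-detector dynamic described above is the same arms race that drives adversarial training: each side adapts to the other. Here is a deliberately toy numeric sketch of that loop, with all names, scores, and step sizes invented for illustration; real DeepFake generators and detectors are deep neural networks, not threshold rules.

```python
import random

random.seed(0)  # reproducible toy run

def detector(score, threshold):
    """Flags a sample as fake when its realism score falls below the bar."""
    return score < threshold

def train_round(gen_quality, detect_threshold):
    """One toy adversarial round: the generator produces a fake,
    the detector judges it, then both sides adapt. Illustrative only."""
    sample = min(1.0, gen_quality + random.uniform(0.0, 0.1))
    caught = detector(sample, detect_threshold)
    gen_quality = min(1.0, gen_quality + 0.05)       # generator improves
    if not caught:                                   # detector was fooled,
        detect_threshold = min(1.0, detect_threshold + 0.05)  # so it tightens
    return gen_quality, detect_threshold, caught

gen, thr = 0.2, 0.5
for step in range(10):
    gen, thr, caught = train_round(gen, thr)
print(round(gen, 2), round(thr, 2))
```

The worry in the article maps directly onto this loop: if the generator's quality climbs faster than the detector's bar, fakes slip through, and nothing guarantees the detector wins.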

  4. Concluding Remarks

Those in charge of regulating AI certainly have some work ahead of them. There is no clear right answer at this time, as many experts say we cannot conceive of how highly advanced AI would behave. Many different kinds of attacks can be orchestrated with AI, from DeepFakes to autonomous weapons.

There have even been attempts to educate Congress about the dangers of AI through specialized training programs. Some say much more is needed, and that we should take more drastic measures, such as pooling our collective data with China in order to develop a universal training set. While artificial intelligence is an exciting field, there are ongoing problems that will need continual attention.


Nick Allyn

Hello, my name is Nick Allyn. I am extremely passionate about the field of artificial intelligence. I believe that artificial intelligence will save millions of lives in the coming years through higher cancer survival rates, cleaner air, and autonomous cars.
