White House Unveils Initiatives to Address Risks of Artificial Intelligence Amid Calls for Regulation

The White House on Thursday announced its first set of initiatives to address the risks of artificial intelligence (A.I.), after a surge in A.I.-powered chatbots prompted growing calls for regulation.
According to White House officials, the National Science Foundation has allocated $140 million to establish new A.I. research centers. The administration has also committed to releasing draft guidelines for government agencies to ensure that their use of A.I. protects “the American people’s rights and safety.” Officials also said that several A.I. companies had agreed to make their products available for scrutiny at a cybersecurity conference in August.
The announcements came shortly before a scheduled meeting between Vice President Kamala Harris, other administration officials, and the chief executives of major tech companies including Google, Microsoft, OpenAI (the creator of the popular ChatGPT chatbot), and Anthropic (an A.I. start-up) to discuss the future of A.I. A senior administration official said on Wednesday that the White House intends to emphasize to these companies their responsibility to address the risks posed by new A.I. developments.
The White House has faced mounting pressure to regulate A.I. systems capable of generating sophisticated prose and lifelike images. The surge of interest in the technology began last year with OpenAI’s release of ChatGPT, which people quickly put to use for finding information, doing schoolwork, and assisting with their jobs. Major tech companies then rushed to incorporate chatbots into their products and accelerated their A.I. research, while venture capitalists invested heavily in A.I. start-ups.
However, the A.I. boom has also raised significant questions about how the technology could reshape economies, geopolitics, and criminal activity. Critics worry that many A.I. systems are powerful yet opaque, and that they could make discriminatory decisions, replace human workers, spread disinformation, and even break laws on their own. President Biden recently said it remains uncertain whether A.I. poses dangers, and some of his top appointees have pledged to intervene if the technology is used in harmful ways.
Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting, while a spokesman for Anthropic confirmed the company’s attendance. A spokeswoman for OpenAI did not respond to a request for comment.
These recent announcements build upon previous efforts by the administration to establish guidelines for A.I. Last year, the White House released a “Blueprint for an A.I. Bill of Rights,” emphasizing the need for automated systems to protect user data privacy, prevent discriminatory outcomes, and offer transparency in decision-making. In January, the Commerce Department unveiled a framework for mitigating risks in A.I. development, which had been under development for several years.
The introduction of chatbots like ChatGPT and Google’s Bard has exerted immense pressure on governments to take action. The European Union, which was already engaged in negotiations regarding A.I. regulations, now faces new demands to regulate a wider range of A.I. systems beyond those perceived as inherently high-risk. In the United States, members of Congress, including Senator Chuck Schumer, the majority leader, have begun drafting or proposing legislation to regulate A.I. However, concrete measures to control A.I. technology within the country may initially arise from law enforcement agencies in Washington.
In April, a coalition of government agencies pledged to “monitor the development and use of automated systems and promote responsible innovation,” while also enforcing laws to deter misuse of the technology.
In a guest essay published in The New York Times on Wednesday, Lina Khan, chair of the Federal Trade Commission, characterized the nation as being at a “key decision point” regarding A.I. She likened recent advancements in A.I. technology to the emergence of tech giants like Google and Facebook.