Gary Gensler, SEC Chairman, expresses concerns that AI could be the source of a market crisis
President Joe Biden signs an executive order on AI regulation
G7 set to adopt a code of conduct on AI regulation
There is a growing global call to regulate artificial intelligence (AI). Following President Joe Biden’s executive order centered around AI regulation, the head of the U.S. securities regulator has also expressed concerns about the risks of AI. The Group of Seven (G7) is expected to adopt a code of conduct for AI regulation.
According to MarketWatch on October 31 (local time), Gary Gensler, Chairman of the U.S. Securities and Exchange Commission (SEC), said in an interview that AI could be the root of future market crises. Even amid concerns of a global economic slowdown this year, optimism about AI has propped up the stock market, but he warned that it could later pose a risk to the market.
He said that among the many challenges in maintaining order and fairness in the roughly $100 trillion U.S. capital market, the spread of AI is by far the most problematic.
Notably, as generative AI technologies like ChatGPT revolutionize investment methods, investors are using large data sets to predict things they couldn’t even imagine a decade ago, which carries a significant risk, he pointed out.
Chairman Gensler said, “The growing issue is that (AI) can lead to risk in the entire financial system,” adding that “herd behavior can occur as many financial participants depend on one, two, or three models.” If there is a flaw in the models that investors use, the impact could spread throughout the financial market, leading to rapid and unpredictable price movements.
He expressed concern about a situation similar to Black Monday in 1987, when program trading triggered an uncontrollable plunge and the Dow Jones index fell more than 20% in a single day.
Chairman Gensler pointed to the cloud computing and search engine markets, where one or two large companies dominate, and expressed concern that a similar concentration could occur in the AI technology market. The problem is especially difficult to address in the United States, where the regulatory system is decentralized: the securities market is supervised by the SEC, while the futures market is overseen by the Commodity Futures Trading Commission (CFTC).
Nevertheless, Chairman Gensler said the SEC is discussing new legislation for AI regulation, while noting that existing rules are sufficient to enforce against AI. Responding to industry concerns that additional regulation would keep customers from using the latest technology, he said, “It’s fine if they’re talking about a movie on a streaming app. But if it’s about financial support… we need to address this issue.”
Winds of AI Regulation Blow Worldwide
Since the release of the AI chatbot ChatGPT at the end of last year, AI technology has developed rapidly, and this year there has been a global AI boom. Despite the global economic downturn, optimism about AI has kept U.S. tech giants on an upward trend, lifting the overall stock market. But as AI technology develops faster, calls to address its risks are growing louder as well.
Yesterday, President Biden signed an executive order focused on reducing the risks associated with AI. It primarily requires developers of AI systems that pose risks to U.S. national security, the economy, public health, or safety to share safety test results with the U.S. government before releasing their systems to the public, and to develop safe and reliable AI systems.
The executive order invokes the Defense Production Act, which allows the government to require U.S. companies to prioritize production of materials needed for national security.
President Biden said, “To realize the potential of AI and avoid risks, we need to manage this technology,” adding, “If AI falls into the wrong hands, it will be easier for hackers to exploit the vulnerabilities of the software that runs our society.”
As a follow-up to the executive order, the U.S. Department of Commerce plans to establish guidelines for authenticating and watermarking AI-generated content so that it can be clearly identified. In addition, Senate Majority Leader Chuck Schumer, a Democrat, said AI regulation legislation will be drawn up within a few months.
Furthermore, participants at the AI Safety Summit, held in the UK from November 1 to 2, are expected to agree on establishing a code of conduct related to AI regulation, according to documents obtained by Reuters. The code of conduct is said to include measures for identifying, measuring, and responding to risks associated with AI.
The G7 countries have been working on the so-called ‘Hiroshima AI Process’ since May to establish an AI agreement. If the related code of conduct is adopted this time, it is expected to be a new milestone in the global AI regulation trend.
As the pace of AI technology development increases, global AI regulation is also expected to accelerate.
Max Tegmark, president of the Future of Life Institute, a technology policy think tank, emphasized, “In fact, the U.S. is already behind Europe (in AI regulation),” adding that “policymakers, including Congress, need to protect citizens by addressing threats and implementing laws to protect the process.”
By Jang Seong Won