The Biden Administration in the United States has instructed federal agencies to establish specific safety measures related to artificial intelligence (AI) by December.
According to CNN, the White House Office of Management and Budget (OMB) announced new policy rules on the 28th (local time) containing detailed requirements to prevent citizens from suffering unnecessary harm from the use of AI technology. Vice President Kamala Harris described it as the “first federal policy to reduce the risks of AI technology while utilizing its benefits,” stating, “Government agencies will guarantee that AI tools do not jeopardize the rights and safety of citizens.” The regulation is set to take effect on December 1st.
As an example of the AI safety guidelines, the White House cited the right of travelers at U.S. airports to refuse facial recognition screening by the Transportation Security Administration (TSA), which oversees security checks. Since last year, the TSA has been piloting AI facial recognition technology at some airports. In addition, when AI is used to provide medical services such as diagnosing diseases or prescribing medication, the results must be reviewed by a separate supervisor to confirm they do not discriminate on the basis of race or income.
Under the guidelines, all federal agencies must also post online a complete list of the AI systems they use, the results of risk assessments for those systems, a list of possible side effects, and the reasons for their use. Each agency must appoint a Chief AI Officer to supervise how AI technology is used. CNN reported, “The U.S. government can indirectly regulate AI with these powerful measures.” This policy is a follow-up action to the Executive Order on Safe, Secure, and Trustworthy AI signed by President Joe Biden last October.