Silicon Valley is following Europe's lead in regulating AI

The state of California, home to Silicon Valley and many of the world's largest technology companies, is preparing to legislate on artificial intelligence.


According to Agence France-Presse, the US state of California, home to Silicon Valley and many technology companies, is moving to impose curbs on artificial intelligence in the wake of new European rules.
In mid-March, the European Parliament passed legislation governing artificial intelligence models, imposing requirements on transparency, intellectual property rights, and privacy protection.

 

"We're trying to learn from the Europeans and work with them to understand how to set rules for artificial intelligence," says David Harris, a consultant with the California Initiative for Technology and Democracy.
The organization works to protect elections from the misuse of emerging technologies.


According to Harris, more than 30 bills have been submitted to the California Legislature, and American and European officials have consulted him on the matter.

The bills before the California Legislature address a range of issues raised by artificial intelligence.
One would require technology companies to disclose the data used to build artificial intelligence models.

 

Another proposal would ban election campaign advertisements that use generative artificial intelligence, the technology that produces content (text, images, audio) from a simple natural-language prompt.

Several lawmakers want social media platforms to flag any content, whether image, video, or audio clip, made with generative artificial intelligence.
A poll of California voters conducted in October by the University of California, Berkeley found that 73% of them support regulations to combat disinformation and deepfakes and to limit the use of artificial intelligence during election campaigns.


This is one of the few issues on which Republicans and Democrats agree.



Concerns about "deepfakes" and fake texts generated by artificial intelligence are among the most pressing challenges, according to David Harris.



Gayle Pellerin, a Democratic lawmaker whose district includes part of Silicon Valley, has proposed a bill that would prohibit "deepfakes" in political matters during the three months preceding an election.


"Bad-faith actors are using this technology to try to sow chaos in elections," she explains.


NetChoice, a trade association representing digital businesses, warns against importing EU regulations into California.


 

"They are adopting the European approach to artificial intelligence, which seeks to ban the technology," says Karl Szabo, legal director of the organization, which advocates for laws with limited penalties.

 

"Banning artificial intelligence will not stop anything," the lawyer argues. "It is a bad idea, because bad-faith actors do not respect the law."

 


Dana Rao, chief legal officer at software publisher Adobe, appears more moderate. He praises the European Union's distinction between low-impact artificial intelligence, which covers "deepfakes" and fake texts, and "high-risk" artificial intelligence, used mainly in critical infrastructure or law enforcement.


"The final version of the text suits us," says Rao.

Adobe says it has already begun conducting research to assess the risks associated with new artificial intelligence-based products.


"Attention should be paid to nuclear safety, cybersecurity, and all cases where AI makes decisions with important human-rights implications," Rao adds.
Adobe has developed a metadata standard in partnership with the Coalition for Content Provenance and Authenticity, which includes Microsoft and Google.



California lawmakers, like the state's companies developing artificial intelligence, want to be at the forefront of regulatory efforts.
"People are watching what's happening in California," says Gayle Pellerin.

She adds: "It is a movement that affects all of us. We need to be one step ahead of those who wish to foment turmoil during elections."
