LONDON, June 14 (AP) European lawmakers signed off on the world’s first comprehensive set of artificial intelligence rules on Wednesday, clearing a key hurdle in the race by global authorities to rein in the fast-developing technology.
The vote in the European Parliament is one of the final steps before the rules become law, and it could serve as a model for other jurisdictions weighing similar regulations.
Brussels’ years-long effort to draw up guardrails for AI has taken on added urgency as the rapid development of chatbots such as ChatGPT shows the benefits the emerging technology can bring, as well as the new dangers it poses.
Take a look at the EU’s AI bill:
How do the rules work?
The measure, first proposed in 2021, will apply to any product or service that uses an AI system. The bill would classify AI systems based on four levels of risk, from minimal to unacceptable.
Higher-risk applications, such as recruitment or technology targeting children, will face stricter requirements, including greater transparency and the use of accurate data.
Enforcing the rules will be up to the EU’s 27 member states. Regulators could force companies to withdraw their apps from the market.
In extreme cases, violations could draw fines of up to 30 million euros ($33 million) or 6 percent of a company’s annual global revenue, which for tech companies like Google and Microsoft could run into the billions of dollars.
What are the risks?
One of the EU’s main objectives is to guard against any threat to health and safety from AI, and to protect fundamental rights and values.
This means some uses of AI are absolutely off-limits, such as “social scoring” systems that judge people based on their behavior.
Also prohibited are AIs that exploit vulnerable groups, including children, or use subliminal manipulation that could lead to harm, such as interactive talking toys that encourage dangerous behavior.
Also off-limits are predictive policing tools, which crunch data to forecast who will commit crimes.
Lawmakers strengthened an original proposal by the European Commission, the EU’s executive branch, by expanding a ban on public real-time remote facial recognition and biometric identification. The technology scans passers-by and uses artificial intelligence to match their facial or other physical characteristics to a database.
A controversial amendment that would have allowed exceptions for law enforcement purposes, such as finding missing children or preventing terrorist threats, did not pass.
AI systems used in categories such as employment and education, which affect the course of a person’s life, face strict requirements such as being transparent to users and taking steps to assess and reduce the risk of algorithmic bias.
Most AI systems, such as video games or spam filters, fall into the low-risk or no-risk category, the European Commission says.
What about ChatGPT?
The initial measure made little mention of chatbots, mostly requiring them to be labeled so users know they are interacting with a machine. Negotiators later added provisions to cover general-purpose AI like ChatGPT, subjecting the technology to some of the same requirements as high-risk systems.
A key addition is the requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video and music similar to human works.
This will let content creators know if their blog posts, digital books, scientific articles or songs have been used to train the algorithms that power systems like ChatGPT. They can then decide whether their work has been copied and seek redress.
Why are EU rules so important?
The EU is not a significant player in cutting-edge AI development; that role belongs to the United States and China. But Brussels has often set the tone, with regulations that tend to become de facto global standards, and it has been a pioneer in efforts to rein in the power of big tech companies.
Experts say the sheer size of the EU’s single market, with 450 million consumers, makes it easier for companies to comply, rather than develop different products for different regions.
But it’s not just about cracking down. By laying down common rules for AI, Brussels is also trying to grow the market by instilling confidence among users.
“This is enforceable regulation, and the fact that companies will be held accountable is significant” because other places such as the US, Singapore and the UK have offered only “guidance and advice,” said Kris Shrishak of the Irish Council for Civil Liberties.
“Other countries may want to adapt and copy” EU rules, he said.
Business and industry groups have warned that Europe needs to strike the right balance.
“The EU will be a leader in regulating AI, but it remains to be seen whether it will lead in AI innovation,” said Boniface de Champris, policy manager at the Computer and Communications Industry Association, a tech industry lobby group.
“Europe’s new AI rules need to effectively address well-defined risks while providing developers with enough flexibility to deliver useful AI applications for the benefit of all Europeans,” he said.
Sam Altman, CEO of ChatGPT maker OpenAI, has voiced support for some guardrails on artificial intelligence and joined other tech executives in warning about the risks it poses to humanity. But he has also said it would be a mistake to impose heavy regulation on the field right now.
Others are playing catch-up on AI rules. Britain, which left the European Union in 2020, is vying for a leadership role on AI. Prime Minister Rishi Sunak plans to host a world summit on AI safety this autumn.
“I want the UK to be not just the intellectual home, but the geographic home of global AI safety regulation,” Sunak told a tech conference this week.
What’s next?
It could take years for the rules to fully take effect. The next step is three-way negotiations involving member states, the Parliament and the European Commission, during which the bill could face further changes as they haggle over the wording.
Final approval is expected by the end of the year, followed by a grace period for companies and organizations to adapt, usually around two years.
Brando Benifei, an Italian member of the European Parliament who co-led work on the AI bill, said lawmakers will push for faster adoption of the rules for rapidly evolving technologies such as generative AI.
To fill the gaps before the legislation takes effect, Europe and the US are drafting a voluntary code of conduct that officials promised in late May to draw up within weeks and potentially expand to other “like-minded countries”. (Associated Press)