TORONTO, May 15 (The Conversation) For the most part, modern emergency management has focused on natural, technological and man-made hazards such as floods, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber-attacks.
However, as the availability and capabilities of AI grow, these technologies may soon create new public safety concerns that we will need to mitigate and prepare for.
Over the past 20 years, my colleagues and I, along with many other researchers, have been using AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making.
We are now at an inflection point where AI itself is emerging as a potential source of hazards at a scale that should be incorporated into the established phases of risk and emergency management: mitigation or prevention, preparedness, response and recovery.
Artificial Intelligence and Hazard Classification
AI hazards can be divided into two categories: intentional and unintentional. Unintentional hazards are those caused by human error or technical failures.
As the use of AI increases, human errors in AI models or technical glitches in AI-based technologies will lead to more adverse events. These events can occur across a variety of industries, including transportation (such as drones, trains, or self-driving cars), electricity, oil and gas, finance and banking, agriculture, health, and mining.
Intentional AI hazards are potential threats posed by the deliberate use of AI to harm people and property. AI can also be used to gain illicit benefits by compromising safety and security systems.
In my opinion, this simple classification into intentional and unintentional may not be sufficient in the case of AI. Here we need to add a third, emerging class of threats: the possibility of AI overtaking human control and decision-making. This may be triggered intentionally or unintentionally.
Many AI experts have warned about such potential threats. A recent open letter from researchers, scientists and others involved in the development of AI has called for a moratorium on its further development.
Public Safety Risk
Public safety and emergency management experts use risk matrices to assess and compare risks. Using this approach, hazards can be assessed qualitatively or quantitatively based on their frequency and consequences, and their impact classified as low, medium, or high.
Hazards with low frequency and low consequence or impact are considered low risk and require no additional action to manage them. Hazards with medium consequences and medium frequency are considered medium risk. These risks need to be closely monitored.
Hazards with high frequency, high consequence or both are classified as high risk. These risks need to be reduced through additional risk reduction and mitigation measures. If proper measures are not taken immediately, severe human and property losses could result.
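To make the matrix concrete, here is a minimal sketch of how such a qualitative risk classification might be encoded in Python. The three-level scales, the classify_risk function and the rule for mixed low/medium cells are illustrative assumptions for this sketch, not part of any specific emergency-management standard.

```python
# Minimal sketch of a qualitative risk matrix. The three-level scales and
# the treatment of mixed low/medium cells are illustrative assumptions.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def classify_risk(frequency: str, consequence: str) -> str:
    """Map a hazard's frequency and consequence ratings to a risk level."""
    if frequency not in LEVELS or consequence not in LEVELS:
        raise ValueError("ratings must be 'low', 'medium' or 'high'")
    # Take the worse of the two ratings: high frequency or high consequence
    # (or both) yields high risk, and medium/medium yields medium risk,
    # matching the rules described in the text. Treating mixed low/medium
    # cells as medium risk is an added assumption.
    worst = max(LEVELS[frequency], LEVELS[consequence])
    return ("low", "medium", "high")[worst]

# Example: a hazard that is rare but severe still rates as high risk.
print(classify_risk("low", "high"))       # -> high
print(classify_risk("medium", "medium"))  # -> medium
```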
So far, AI hazards and risks have rarely been added to these risk assessment matrices beyond the organizational use of AI applications. The time has come to quickly start integrating potential AI risks into local, national and global risk and emergency management.
AI Risk Assessment
As AI technologies are adopted more widely by institutions, organizations and companies in different sectors, hazards associated with AI are beginning to emerge.
In 2018, KPMG developed an “AI risk and controls matrix”. It highlights the risks of using AI and urges businesses to recognize these emerging risks. The report warns that AI technologies are developing very rapidly and that risk-control measures must be put in place before the risks overwhelm the systems.
Governments are also starting to develop some risk assessment guidelines for the use of AI-based technologies and solutions. However, these guidelines are limited to risks such as algorithmic bias and violations of individual rights.
At the government level, Canada has issued the Directive on Automated Decision-Making to ensure that federal institutions minimize the risks associated with AI systems and establish appropriate governance mechanisms.
The primary goal of the directive is to ensure that when AI systems are deployed, risks to clients, federal institutions and Canadian society are reduced. According to the directive, each institution must conduct a risk assessment to ensure that appropriate safeguards are in place in accordance with the government's security policies.
In 2021, the US Congress tasked the National Institute of Standards and Technology with developing an AI risk management framework for the Department of Defense. The proposed voluntary framework recommends banning the use of AI systems that pose unacceptable risks.
Threats and Competition

Many national-level AI policies focus on national security and global competition: the national security and economic risks of falling behind in AI development.
The US National Security Commission on Artificial Intelligence has highlighted the national security risks associated with AI. These were framed not as public threats from the technology itself, but as the consequences of losing the global race for AI development to other countries, including China.
In its 2017 Global Risks Report, the World Economic Forum highlighted that artificial intelligence is just one of the emerging technologies that could exacerbate global risks. In assessing the risks posed by AI, the report concluded that superintelligent AI systems remained a theoretical threat at the time.
However, the latest Global Risks Report 2023 does not even mention AI or AI-related risks, suggesting that the leaders of the global companies who contribute to the report do not see AI as an immediate risk.
Faster Than Policy
AI is advancing much faster than the government and corporate policies meant to understand, anticipate and manage its risks. The current global situation, combined with market competition for AI technologies, makes it hard to see an opportunity for governments to pause and develop risk governance mechanisms.
While we should collectively and aggressively experiment with such governance mechanisms, we all need to be prepared for significant and catastrophic impacts of AI on our systems and societies. (The Conversation)