What are the emerging risks of AI?

AI systems can deliver tremendous benefits for society, from significant productivity gains to dramatic cost savings and increased profitability. However, their use brings inherent risks. An AI system that fails to perform as expected can lead to claims for property damage, personal injury, professional liability, reputational harm, medical malpractice and cyber losses.

Over the last five years, the value of Fortune 500 companies has shifted dramatically from physical assets to intangible assets such as AI. A large part of the risk exposure for most companies is now in the form of these intangible assets.

This poses a major challenge for risk managers: many AI risks are intangible and hard, if not impossible, to fully understand.

AI malfunction

The simplest AI risk to quantify is that the system simply fails to do the job for which it was deployed. For example, if a predictive maintenance algorithm gets it wrong and the machinery breaks down, there is the potential for severe property damage as well as the associated business interruption costs of downtime.

Accountancy AI fails to deliver

Accounting is one area where many companies have deployed AI. According to Sage, 58% of accountants believe AI will help their business. It is helping with data-heavy core tasks such as expenses processing, payroll and forecasting future cash flow. However, in some cases the AI has malfunctioned and ended up costing the companies using it more time and money to fix the problems it created.

One AI bookkeeper was marketed as being capable of replacing a whole accounting team. When deployed, it turned out to require humans to complete month-end accruals or deferrals, administer payroll, pay employees and bills, project cash flow and complete bank reconciliations. Some other accountancy AI systems have been found to have made double payments or to have missed payments completely.

 

An AI malfunction is typically a failure to adequately align the machine's objectives with those of its human creators. Should AI fail to perform as expected, there is broad product and professional liability exposure.

The automotive sector is particularly exposed here. As more automated vehicles hit the road, it is likely that at some point something will fail to work as expected on a large scale. The liability in the event of an accident has traditionally sat with the driver. With automated vehicles, however, liability shifts to the vehicle and its maker, exposing manufacturers to product liability suits.

Self-driving car runs red lights

In 2016 a ride-hailing company was testing self-driving cars in California when one of the vehicles allegedly failed to recognise six red lights and ran through at least one of them at an intersection where pedestrians were present. The company publicly blamed this on human error, but according to internal documents seen by The Times, the vehicle's mapping program had failed to recognise the traffic lights.

 

Human harm caused by machinery as a result of malfunctioning AI is also a considerable risk in the manufacturing sector.

AI itself is not sentient and therefore cannot be 'evil', despite Hollywood's best attempts to make it appear so. However, AI has directly led to the death and injury of manufacturing employees.

AI makes catastrophic mistake

An employee of an automotive and defence manufacturer was crushed to death by a robot. The robot's AI algorithm erroneously identified him as a system blockage that was preventing the machine from completing its objective. It calculated that the most efficient way to deal with a blockage was to push it out of the way. However, in this case, the machine pushed the employee into a nearby grinding machine and then continued with its duties.

 

In the event of an AI malfunction leading to injury, or even death, the company deploying the AI would likely be found liable.

The devil is in the data

With all AI, the technology is only as good as the data the machine has been given. Where there are problems with this data, problems with the performance of the AI inevitably follow. AI is a fast-growing field, and companies across every industry are competing for the data science skillset. Amazon, Google and other technology giants already employ many data scientists, and demand is quickly outstripping the supply of talent.

To get the best results the data scientists also need knowledge of the industry they are applying the AI to, which further compounds the skills shortage.

Medical AI puts patients at risk

The recent past has shown how AI can go catastrophically wrong due to poor training data. One oncology AI product was hailed by its engineers as a potential cure for cancer, with the aim of uncovering insights from the cancer centre's patient and research databases.

However, the reality turned out to be somewhat different. Five years from the start of development, the AI was giving erroneous, sometimes downright dangerous, cancer treatment advice. Medical specialists identified examples of unsafe and incorrect treatment recommendations, including one case where the AI suggested doctors give a cancer patient with severe bleeding a drug that could worsen the bleeding.

Internal documents placed the blame on the software engineers, who had trained the system on a small number of hypothetical cancer patients rather than real patient data.

 

Issues with data can have huge ramifications for the healthcare industry, as they can lead to AI systems drawing inaccurate conclusions. This could not only have a profound personal effect on patients but could also leave the healthcare and technology providers subject to medical malpractice claims.

Even where the data and programming appear perfect to the human eye, the AI can malfunction because it picks up patterns in the data that were unforeseen during the training process.

Healthcare AI interprets the opposite of reality

One healthcare AI system was trained to learn which patients with pneumonia had a higher risk of death, so that high-risk patients could be admitted to hospital. The system misclassified patients with asthma as lower risk, because patients with pneumonia and a history of asthma had historically been sent straight to intensive care, which significantly reduced their risk of dying. The machine learning model interpreted this as asthma plus pneumonia equalling a lower risk of death, the opposite of reality.
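The mechanism is easier to see in a toy example. The sketch below uses entirely hypothetical data (it is not the actual hospital system) to show how a model trained only on recorded outcomes, with no feature for the intensive-care intervention, ends up treating asthma as protective:

```python
# A minimal sketch with synthetic data: asthma truly raises pneumonia mortality,
# but asthma patients were historically routed to intensive care, which lowered
# their *observed* mortality. A model trained on the recorded outcomes (with no
# feature for the ICU intervention) learns the opposite of reality.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
asthma = rng.random(n) < 0.2                      # 20% of pneumonia patients have asthma

base_death_risk = np.where(asthma, 0.30, 0.15)    # true risk: asthma is MORE dangerous
icu_care = asthma                                 # historical practice: asthma -> ICU
observed_risk = np.where(icu_care, base_death_risk * 0.3, base_death_risk)
died = rng.random(n) < observed_risk

model = LogisticRegression().fit(asthma.reshape(-1, 1).astype(float), died)
print(f"asthma coefficient: {model.coef_[0][0]:+.2f}")  # negative => looks 'protective'
```

Nothing in the training data is wrong; the model simply has no way of knowing that the better outcomes were caused by the treatment, not by the asthma itself.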

 

AI picks up on patterns in the data that may not have been apparent to the humans programming it. The data itself can also leave companies open to professional liability lawsuits due to unforeseen inherent bias.

For example, if a loan application is found to have been turned down, or higher rates imposed, because of bias, the bank is liable, as it is responsible for ensuring its decisions are unbiased towards customers.

There is a societal assumption that data is unbiased, but in reality data reflects the biases in society, and it lags behind any changes in those biases. As data is fundamentally biased, the more AI models we create, the greater the risk of perpetuating that bias.

As AI becomes more powerful, collecting data from multiple sources and adding non-interpretable models with little to no human interaction, it picks up on micro-signals humans cannot detect, further exaggerating any existing biases.

Beware the bias

One retail, technology and distribution giant decided to test resume-reading AI when recruiting data engineers. The company eventually ditched the project completely when its engineers realised they had taught their own AI that male candidates were better: the technology was dismissing women at a higher rate because most of the engineers the company had employed in the past were male, and the AI had learned from that training data that male engineers were best for the job. An implicit bias existed in the data simply because white men were the most likely to have applied for, and been appointed to, software engineering roles in the past.
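A stripped-down illustration of how this happens is sketched below, using synthetic data rather than the retailer's actual system. Note that simply deleting the gender field does not help: a correlated proxy feature, here a hypothetical flag for having attended an all-women's college, inherits the bias from the historical hiring decisions.

```python
# A minimal sketch with synthetic data: historical hiring favoured men, so a
# screening model penalises a feature that correlates with being female, even
# though gender itself is excluded from the inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
male = rng.random(n) < 0.8                         # past applicant pool was mostly male
skill = rng.normal(0.0, 1.0, n)                    # genuine ability, independent of gender
womens_college = (~male) & (rng.random(n) < 0.4)   # proxy correlated with gender only

# Historical hiring decisions rewarded being male regardless of skill.
hired = (skill + np.where(male, 0.8, 0.0) + rng.normal(0.0, 0.5, n)) > 1.0

X = np.column_stack([skill, womens_college]).astype(float)  # gender field excluded
model = LogisticRegression().fit(X, hired)
print(f"proxy-feature coefficient: {model.coef_[0][1]:+.2f}")  # negative => penalised
```

The model faithfully reproduces the pattern in its training data; the problem is that the pattern itself encodes past discrimination.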

When AI fails, or does not perform as expected, there are also potential financial and reputational ramifications for the business operating the technology.

Where AI deployed by a business is found to be at fault for an error causing damage to a third party, the company deploying it will be liable. This is especially challenging because the decisions made by AI algorithms are often opaque. It is hard to quantify the potential effects, especially as regulation in this area continues to evolve. We have not even touched on the possible socio-economic implications of AI replacing human jobs, a topic widely discussed by economists and business leaders.

AI-enabled bias in decision making will likely emerge as one of the biggest risks of this century. As we have seen, data is inherently biased, and AI has the effect of amplifying and making these biases systemic.

Risks posed by the malicious use of AI by third parties

AI touches on almost everything we do, even when we are not the ones applying it. Deep fakes, where a person's image or video is seamlessly edited onto another individual, are a direct result of AI. As they become increasingly difficult to tell apart from genuine content, they have the potential to cause huge financial and reputational damage, or even enable stock price manipulation.

The less scrupulous members of society are already taking advantage of this kind of AI technology. Videos frequently surface on social media that appear to show celebrities and trusted industry commentators endorsing risky, unregulated financial investments and scams. AI-driven malware and bots also pose a significant risk to businesses, and those deploying AI themselves are especially at risk.

The more AI is integrated into key aspects of a company, the higher its exposure to malicious cyber-attack.

We have already seen plenty of examples of destructive malware and botnets. An AI-driven botnet written by one or two hackers can take over hundreds of thousands of computers and turn them into a "cyber cannon" for launching Distributed Denial of Service (DDoS) attacks capable of taking entire portions of the Internet offline.

The Mirai botnet takes down the internet for the east coast US

On October 21, 2016, the Mirai botnet launched the largest DDoS attack ever recorded, targeting the Domain Name System (DNS) provider Dyn. As a DNS provider, Dyn gave end users the ability to map an Internet domain name to its IP address; it is the equivalent of the Internet's telephone book. Without this mapping capability, computers have a difficult, if not impossible, time connecting to many websites.

At its peak, the attackers had compromised 2.5 million devices, mostly vulnerable Internet of Things (IoT) devices such as DVRs and CCTV cameras, turning them into "zombies". They then used this zombie army to simultaneously attempt to connect to Dyn's DNS servers. Overwhelmed by 2.5 million simultaneous connection requests, the servers could not respond to legitimate requests, leaving much of the Internet inaccessible on the east coast of the United States.
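The lookup role Dyn played is easy to demonstrate. The snippet below, a minimal sketch using Python's standard library, performs the same kind of name-to-address resolution that was disrupted during the attack:

```python
# Resolve a human-readable domain name to the IP address computers actually
# connect to; when a DNS provider cannot answer such lookups, connections fail.
import socket

domain = "example.com"  # any public domain will do
print(f"{domain} resolves to {socket.gethostbyname(domain)}")
```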

 

Automated cars are essentially computers on wheels, powered by insecure data systems communicating via hackable transmission protocols. It is entirely possible for a hacker to seize control of a car and operate it remotely.

Hackers are increasingly leveraging AI to create and amplify these kinds of malware attacks, and they are increasingly focused on destruction, not just data theft.

At Swiss Re we believe that managing AI will require a rethink of all elements of an organization. Broadly, this falls into three categories: people, processes and technologies.

We have developed our NEAT© framework to help risk managers in their approach to AI. We named it NEAT because the basis of the framework is that algorithms should be neutral, explainable, auditable and trustable.
