Insurance premiums may soon be decided by artificial intelligence and personal profiles built from social media data — if they aren’t already — according to researchers concerned that the industry could misuse the new technologies and is not being transparent about them.
These technologies, they say, could see people disadvantaged for reasons that have nothing to do with their riskiness — such as the type of phone they use — or indirectly discriminated against on the basis of protected characteristics such as religion.
Insurance companies overseas are already using data from sources including social media to decide how much to charge customers and whether to underwrite policies. Whether Australian companies have adopted similar methods is not known.
That’s a problem, according to academics Dr Zofia Bednarz and Dr Kayleen Manwaring, who have been researching the impact of the insurance industry collecting data from non-traditional sources, often without Australians being aware they’re sharing it.
Dr Bednarz told Crikey it’s safe to assume that Australian insurance companies are watching their international counterparts and investigating or even already using new sources of data and analysis tools.
“The insurance industry has always been heavily interested in data. But now we’re living through a time in which there’s an increase in the sheer amount of data and the means to analyse it,” she said. The Insurance Council of Australia did not immediately respond to a request for comment.
Dr Bednarz pointed to customer loyalty programs as an example of how insurance companies seek new forms of data about customers. Loyalty schemes like those run by Coles and Qantas collect information such as social media accounts, locations, purchases, flight details, use of inflight entertainment systems and browsing history, which can be used to build profiles of an individual’s health, personality and behaviour.
If that alarms you, consider how advances in technology might provide similar insights without an individual ever consenting to join a rewards program. Artificial intelligence and other tools that hoover up huge amounts of data, such as social media posts, are the next big thing for insurance companies, says Dr Bednarz.
She says the issue with these new technologies is that they’re opaque, sometimes even to the people running them and the companies using them, and that they may disadvantage people for unfair reasons. An individual might be charged more, for instance, simply because they don’t have a large digital footprint for companies to base decisions on.
Artificial intelligence can be trained on data sets to recognise patterns that humans might not pick up. In insurance, a company might feed a model its customers’ claims histories to predict who is likely to make more claims in the future. The trouble with this technique is that an AI can base its decisions on spurious or even erroneous connections, and it is very difficult to unpick how those decisions were made, even for people within the company — let alone for a customer.
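To make the opacity problem concrete, here is a minimal sketch, in Python with entirely invented data, of the kind of claims-prediction model described above. The model, features and figures are hypothetical, not any insurer’s actual system; the point is that the classifier hands back a risk score with no explanation, even when one of its inputs (phone brand) has no causal link to claims at all.

```python
# A minimal, hypothetical sketch of a black-box claims-prediction model.
# All data is simulated; no real insurer's system or data set is implied.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

# Invented customer attributes: age, prior claims, and phone brand.
age = rng.integers(18, 80, n)
prior_claims = rng.poisson(0.3, n)
phone_brand = rng.integers(0, 2, n)  # 0 = brand A, 1 = brand B

# The simulated outcome depends only on age and prior claims --
# phone brand is pure noise.
p = 1 / (1 + np.exp(-(-3 + 0.02 * (60 - age) + 0.8 * prior_claims)))
made_claim = rng.random(n) < p

X = np.column_stack([age, prior_claims, phone_brand])
model = GradientBoostingClassifier().fit(X, made_claim)

# The model emits a risk score but nothing that explains *why* this
# customer scored high or low -- and in a finite sample it may well have
# latched onto the irrelevant phone-brand column along the way.
print(model.predict_proba([[30, 1, 1]])[0, 1])
```

A customer quoted a premium off the back of such a score has no practical way to check which inputs moved the number, which is exactly the difficulty Dr Bednarz describes.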
Plus, there’s a possibility that a connection between two factors acts as a proxy for a protected characteristic. Take an AI that determines people living in a certain area are more likely to have car accidents. While area and accidents might be correlated, the underlying link could be the residents’ religion — a characteristic protected by anti-discrimination law — because a religious community happens to live in that area. In pricing on that basis, an insurance company could be discriminating illegally without being aware of it.
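How a proxy can smuggle in a protected characteristic is easy to demonstrate. In the hypothetical sketch below, the model is never told anyone’s group membership, only their postcode; but because the two are correlated, its risk scores fall along group lines anyway. Again, all figures are invented for illustration.

```python
# Hypothetical illustration of proxy discrimination: the protected
# attribute is never given to the model, but a correlated feature
# (postcode) stands in for it. All data is simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

# Members of an (invented) protected group mostly live in postcode 1.
group = rng.random(n) < 0.3
postcode = np.where(group, rng.random(n) < 0.9, rng.random(n) < 0.1).astype(int)

# Suppose accident rates happen to differ between the groups.
accident = rng.random(n) < np.where(group, 0.25, 0.10)

# The insurer "fairly" withholds group membership and trains on postcode only.
model = LogisticRegression().fit(postcode.reshape(-1, 1), accident)

# Predicted risk still differs sharply by postcode -- and therefore, in
# effect, by group: indirect discrimination without any intent.
print(model.predict_proba([[0], [1]])[:, 1])
```

Nothing in this pipeline mentions the protected characteristic, yet the pricing outcome tracks it, which is why a company can discriminate without realising.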
“Those models are extremely complicated. It’s impossible for people who design the algorithm to know what’s going into the decisions. In some cases, insurers could be using third-party models, so they don’t even know what’s happening,” Dr Bednarz said.
According to Dr Bednarz, the solutions to these potential issues involve restricting insurance companies from using external data and mandating transparency around the use of machine learning. While people and companies often use information illegally (facial-recognition company Clearview AI, for example, illegally scraped people’s faces off social media), ensuring that companies have to show their working for decisions would make a difference.
Regulating these companies might even be welcomed by the industry. Insurers have called for standards in the past because, Dr Bednarz says, investing in these technologies is costly and companies risk having them banned unless there are clear guidelines. The federal government’s Consumer Data Right program is one example.
“It’s a good idea, but you can’t feed more data into these companies in an insurance context without any kinds of restrictions,” she said.
This alleged use of AI is an appalling outrage on so many levels. Firstly, it would constitute a clear breach of privacy laws. A company collecting personal information about you must get your consent to do so (consent that is often obtained far too easily), but crucially it must also be able to disclose that information to you so that you can correct errors. If masses of data that cannot all be clearly identified are fed into an AI machine, the collector cannot make the necessary disclosure, and because of the nature of AI it would be impossible for the subject to identify errors.
Secondly, insurance decisions and premium setting based on AI would very clearly involve countless moral choices, not an empirical, commercial determination of insurance risk.
Thirdly, insurance risk determination would necessarily be based on much more information than the applicant gives on the proposal form. And if the insurer holds a mass of other information dredged up and pulped by AI, insurers will have many more opportunities to deny indemnity on the basis of some allegation that the insured has breached his, her or its obligation of utmost good faith under the Insurance Contracts Act.
In short, this dystopian experiment must be stopped by legislation immediately. It is the stuff of nightmares and clear evidence that AI will be used more for bad than good.
We have, over centuries, relied on a veil of ignorance to deliver a degree of fairness via cross-subsidies (or “risk pool averaging”). Now that some people have the opportunity to access discounts by sharing more data about themselves, this approach is starting to fail.
If we are serious about the social policy objective of fairness, we can’t rely on ignorance – we need to be explicit about the way we apportion risk in the community. Restricting data sharing and the use of machine learning is impractical.
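A toy calculation, with figures invented purely for illustration, makes the cross-subsidy point concrete: under a veil of ignorance everyone pays the pool’s average expected loss, and the pool unravels once low-risk customers can prove their risk by sharing data.

```python
# Toy "risk pool averaging" arithmetic with invented figures.
expected_loss = {"low": 200.0, "high": 1000.0}  # hypothetical annual losses ($)
pool = ["low"] * 80 + ["high"] * 20             # 100 customers, mostly low-risk

# Veil of ignorance: everyone pays the pool average.
pooled_premium = sum(expected_loss[c] for c in pool) / len(pool)
print(pooled_premium)  # 360.0 -- low-risk customers cross-subsidise high-risk ones

# If data sharing lets low-risk customers prove their risk and pay their
# true cost (200.0), only high-risk customers remain, and the pool premium
# jumps to their full expected loss.
remaining = [c for c in pool if c == "high"]
print(sum(expected_loss[c] for c in remaining) / len(remaining))  # 1000.0
```

That unravelling is what makes relying on ignorance an increasingly fragile way to deliver fairness.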