Every five minutes, a new instance of insurance fraud is uncovered, and each day the amount of fraud detected totals £3.3 million. But the reality is that much fraud flies under the radar: the volume of fraudulent transactions that go undetected is likely to be double the amount we know of. With fraud clearly being big business, insurers need to build solid defence frameworks to stop fraud in its tracks. Much of this defence could be strengthened, and more fraud detected, if insurers committed to sharing their data with their peers, enabling a comprehensive, overarching view of fraudulent transactions.
Insurers have historically been unwilling to share data with peers due to privacy and regulatory concerns. However, new innovations now provide privacy assurance over the data sharing that insurance firms need to commit to in order to build a comprehensive fraud detection framework and help stop fraud before it occurs.
A survey conducted by The Harris Poll on behalf of IBM showed that only 20% of US consumers “completely trust” the organisations they interact with to maintain the privacy of their data. Although data sharing across firms and departments would lead to many benefits, including improved customer service, reduced frictional costs and fraud identification, concerns surrounding data privacy remain front of mind.
For insurers unable to securely share data on fraudulent claims, this lack of collaboration is especially problematic when culprits report multiple claims for the same event at multiple insurers – a type of fraud known as “double-dipping”. Around 7.5% of insurance claims are estimated to be fraudulent, of which 5-10% are instances of double-dipping fraud.
Of the estimated $50 billion per year that insurance fraud costs the industry, double-dipping can be estimated to cost insurers $3 billion-$5 billion each year, resulting in increased premiums for customers. Double-dipping exists across multiple industries: for example, car accident claims at multiple insurers; a single medical procedure claimed at two insurers; and double mobile phone pay-outs.
Much of the challenge in identifying double dipping and preventing it from occurring in the first place stems from this reticence around sharing data between insurance firms.
Many insurance companies struggle to provide technical assurances around how individuals’ data is processed, which leaves them with two choices: either they trust the other party’s reputation and hand over their data; or they do not share confidential data with anyone.
Until now, there has been no industry-wide platform that can solve this challenge. No solution existed that enabled secure data sharing between insurers on a need-to-know basis whilst simultaneously guaranteeing business privacy.
This is where confidential computing comes in. Confidential computing is a breakthrough technology that protects data while it is in use. It is enabling a new era of privacy in data sharing, one that makes it possible for different organisations to combine data sets for analysis without accessing each other’s data.
With this assurance, insurance firms can “confidentially pool” their data, where only the results of agreed analysis are shared with necessary participants. The underlying, raw data is not visible to any counterparty. Put into the context of insurance fraud detection, this allows suspiciously similar claims to be compared, identified, and investigated to address double-dip cases.
Confidential computing and network enclaves
The three pillars of data security are protecting data at rest, in transit, and in use. The first two are well established; confidential computing completes the triad by keeping data cryptographically protected even while it is being processed.
This new way of securing data makes it possible for different organisations to pool data sets for analysis without accessing each other’s raw data. This is a simple yet revolutionary promise in transforming the way businesses unlock their own data and share it with each other.
This promise also ensures that the owner of the data is prevented from influencing the “enclave” to do anything other than what it was written to do, or from seeing the data it is operating on. (A network enclave is a section of an internal network that is subdivided from the rest of the network.)
With confidential computing, only agreed analysis is executed against the data, ensuring data is only used in the prescribed way, all while ensuring no one can see the raw data itself. As a result, firms are able to extract value from otherwise trapped data, by securely pooling it with peers for joint analysis.
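The pooling pattern described above can be sketched in plain Python. This is an illustrative sketch only: in a real deployment the function below would run inside a hardware-backed enclave (such as Intel SGX) with remote attestation, so that neither party could alter the agreed analysis or inspect the other's records. All names and data here are hypothetical.

```python
def pooled_overlap_count(insurer_a_claims: set, insurer_b_claims: set) -> int:
    """The single agreed analysis: return ONLY the number of claim
    references that appear in both insurers' data sets.

    In an enclave deployment, each insurer's raw claim references would
    be provisioned into the enclave over an attested channel; only this
    integer result would ever leave it.
    """
    return len(insurer_a_claims & insurer_b_claims)


# Hypothetical claim references held by two insurers.
insurer_a = {"NI-2041", "NI-5530", "NI-9912"}
insurer_b = {"NI-5530", "NI-7788"}

# Each party learns that exactly one claim overlaps -- and nothing else.
print(pooled_overlap_count(insurer_a, insurer_b))  # → 1
```

The key design point is that the function's return value is the entire output surface: counterparties receive the agreed result (here, an overlap count) without ever seeing each other's raw claim data.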
What does this mean for double-dipping?
Confidential computing enables fraud detection software firms to access private data sets, providing assurances to customers that their data will not be viewed by the provider or the provider’s other customers, just processed. From here, insurers and software firms can build new solutions that aggregate data from multiple firms in a trustworthy manner, while reducing false positives and detecting new fraud.
For example, imagine that the same car accident claim is submitted at two different insurers with the intent to make a fraudulent profit. These two claims are processed using existing insurance systems. Before pay-out, the two datasets are compared on a ledger and given a similarity score. Metrics can include date and hour, location, and car model.
If the similarity score is significant, the claims are marked as suspicious and can then be formally investigated before pay-out. These claims are then stored on the ledger to prevent additional fraudulent claims, further optimising the models.
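The scoring step above can be sketched as a weighted comparison across the named metrics. The field names, weights, and threshold below are assumptions for illustration; a production system would tune them against historical fraud data, and the comparison itself would run inside the confidential environment.

```python
from datetime import datetime

# Hypothetical weights per metric and a hypothetical flagging threshold.
WEIGHTS = {"time": 0.4, "location": 0.35, "car_model": 0.25}
THRESHOLD = 0.8

def similarity(claim_a: dict, claim_b: dict) -> float:
    """Score two claims between 0.0 and 1.0 on date/hour, location, and car model."""
    score = 0.0
    a_when, b_when = claim_a["when"], claim_b["when"]
    # Same calendar day and within one hour counts as a time match.
    if a_when.date() == b_when.date() and abs(a_when.hour - b_when.hour) <= 1:
        score += WEIGHTS["time"]
    if claim_a["location"] == claim_b["location"]:
        score += WEIGHTS["location"]
    if claim_a["car_model"] == claim_b["car_model"]:
        score += WEIGHTS["car_model"]
    return score

def is_suspicious(claim_a: dict, claim_b: dict) -> bool:
    """Flag a claim pair for formal investigation before pay-out."""
    return similarity(claim_a, claim_b) >= THRESHOLD

# Two claims for what looks like the same accident, filed at different insurers.
claim_1 = {"when": datetime(2021, 3, 4, 17), "location": "M25 J10", "car_model": "Ford Focus"}
claim_2 = {"when": datetime(2021, 3, 4, 18), "location": "M25 J10", "car_model": "Ford Focus"}

print(is_suspicious(claim_1, claim_2))  # matches on all three metrics → flagged
```

In practice the score would feed the ledger described above: flagged pairs are held back from pay-out, and confirmed cases are recorded to sharpen future comparisons.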
Confidential computing may be just what the insurance industry needs to answer concerns on data privacy while simultaneously building a strong defence against fraud. The promise of confidential computing does not stop at insurance, either. Any situation where an individual must give up valuable data in order to receive some broader valuable insight back in return is an example of where confidential computing can transform a critical industry challenge.
One example would be duplicate loans or financing taken out with different banks against the same asset. 2021 will be the year confidential computing enters mainstream enterprise IT, enabling businesses in all industries to start securing data even when it is in use.