The Double-Edged Sword of AI in Fighting Fraud

As artificial intelligence (AI) continues to reshape industries, its impact on financial crime is particularly noteworthy. While AI offers powerful tools to combat fraud, it’s also being weaponised by criminals, creating a complex and evolving landscape.

Speaking at the recent 17th Annual ACFE Africa Conference and Exhibition, Stephanie Ora, Global Lead for Financial Crimes Analytics at SAS Institute, discussed how companies can use AI to counter the fraudsters who have adopted the technology as well.

Fighting Financial Crime with AI

“AI has revolutionised the fight against financial crime,” says Ora. “But as we enhance our capabilities, so do fraudsters. They are leveraging AI to execute sophisticated schemes that are often difficult to detect using traditional methods. This double-edged nature of AI means that while it can help us identify and prevent fraud more effectively, it also presents new challenges that we must overcome.”

The fight against fraud has evolved beyond rules-based systems centred on manual checks and pattern recognition. Today’s AI-driven methods include automated anomaly detection, predictive analytics and real-time behavioural monitoring. However, the use of generative AI (GenAI) in fraud schemes poses a particular challenge: the task is no longer just detecting fraud, but understanding how legitimate data and behaviour can be convincingly replicated.
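To make the shift from manual checks to AI-driven detection concrete, the sketch below shows automated anomaly detection on transaction data using scikit-learn’s IsolationForest. It is a minimal illustration, not a SAS implementation; the feature names, sample values and contamination rate are assumptions for the example only.

```python
# Minimal sketch of automated anomaly detection on transactions (illustrative only).
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical transaction features: amount, hour of day, and deviation
# from the account's usual spending pattern.
transactions = pd.DataFrame({
    "amount": [25.0, 40.0, 32.5, 9800.0, 28.0],
    "hour_of_day": [13, 9, 18, 3, 12],
    "deviation_from_avg": [0.1, 0.4, 0.2, 45.0, 0.15],
})

# Unsupervised anomaly detector; contamination is the assumed share of
# suspicious transactions (set high here only because the sample is tiny).
model = IsolationForest(contamination=0.2, random_state=42)
model.fit(transactions)

# Score each transaction: -1 flags an anomaly, 1 flags normal behaviour.
transactions["anomaly_flag"] = model.predict(transactions)
print(transactions[transactions["anomaly_flag"] == -1])
```

In production this kind of model would be retrained and scored continuously against streaming transactions, which is what makes the real-time behavioural monitoring described above possible.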

Evolving Fraud Schemes Using Manufactured Synthetic Identities

Fraudsters can use GenAI, such as deepfakes, to evolve their schemes from manipulated identities to completely manufactured synthetic identities. They have also expanded their scam channels from emails and text messages to audio and video calls. This has increased the number of authorised push payment scams and account takeovers facilitated by the account owners themselves, who do not realise they are being manipulated by GenAI-enabled fraud.

“Even though AI’s ability to detect anomalies and patterns in real time is a game-changer, AI technologies and fraud schemes evolve at an accelerated rate,” says Ora. “It is critical for financial institutions to have scalable AI solutions that allow for agility and adaptability to emerging fraud threats. This should be complemented by enterprise-wide fraud awareness programmes and collaboration with other financial institutions, regulatory bodies and law enforcement to build robust fraud defences and AI governance frameworks.”

“To fight AI-enabled fraud, a hybrid approach combining rules-based and AI-based methods is key to achieving a balance between effectiveness and explainability. Third-party data, such as device details, IP addresses, behavioural biometrics and watchlists, also plays an important role in making data-driven predictions early and potentially identifying networks and hidden relationships,” says Ora.
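One way to picture such a hybrid decision is sketched below: explainable, hand-written rules drawing on third-party data are combined with a score from a machine-learning model. The thresholds, watchlist entry and field names are illustrative assumptions, not part of any specific vendor’s system.

```python
# Minimal sketch of a hybrid rules-plus-model fraud decision (illustrative only).
from dataclasses import dataclass

WATCHLISTED_IPS = {"203.0.113.7"}  # hypothetical watchlist entry

@dataclass
class Transaction:
    amount: float
    ip_address: str
    new_device: bool
    model_score: float  # assumed probability of fraud from an ML model, 0..1

def rule_flags(txn: Transaction) -> list[str]:
    """Explainable, rules-based checks using transaction and third-party data."""
    flags = []
    if txn.amount > 10_000:
        flags.append("high_amount")
    if txn.ip_address in WATCHLISTED_IPS:
        flags.append("watchlisted_ip")
    if txn.new_device:
        flags.append("unrecognised_device")
    return flags

def decide(txn: Transaction) -> str:
    """Combine the explainable rules with the model score into one decision."""
    flags = rule_flags(txn)
    if flags and txn.model_score > 0.5:
        return f"block (rules: {', '.join(flags)}; score {txn.model_score:.2f})"
    if flags or txn.model_score > 0.8:
        return "review"
    return "approve"

print(decide(Transaction(12_500.0, "203.0.113.7", True, 0.91)))
```

The rules give investigators a human-readable reason for each decision, while the model score captures patterns the rules cannot, which is the balance between effectiveness and explainability Ora describes.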

Ethical Implications of Using AI to Fight Financial Crime

There are also ethical implications to consider when using AI in financial crime prevention.

“As we integrate AI into our operations, we must ensure that it does not compromise the privacy, transparency and fairness of financial systems. Establishing ethical guidelines, clear accountability and oversight is crucial to ensure smooth execution whilst not causing unintended harm,” says Ora.

The most significant issue is that AI will not fix itself.

“Despite the risks and ethical considerations of AI, it is even more important to start using AI now, since AI is continuously learning from those who use it. In the fight against financial crime, it is crucial to reinforce human good in the AI loop, since fraudsters have started to abuse it,” concludes Ora.

While AI presents an incredible opportunity to enhance operational efficiency, improve customer experiences and mitigate risk, it also challenges organisations to rethink their approach to fraud prevention and adapt to a new landscape where the lines between man and machine are increasingly blurred.
