You have all seen the news – many events over the last few weeks make it imperative to issue a special Fraud of The Day Vulnerability Alert explaining in more detail where the threat lies. Lately, fraudsters have been turning to artificial intelligence (AI) as their go-to tool for swindling millions of dollars from unwitting victims and government programs. Add to that the recent Cl0p MOVEit hack, which exposed millions of Americans’ identity information, and we are facing a very real problem. Thanks to readily available AI tools, it is now easier than ever to bypass legacy document authentication and “liveness checks” to fraudulently claim benefits.
There is no government program that is safe anymore.
Put simply, artificial intelligence (AI) is the remarkable ability of machines, particularly computer systems, to replicate human intelligence processes. It encompasses a vast range of tasks that typically require a human, such as composing poetry, driving vehicles, or even performing complex surgeries. In these areas, AI has the potential not just to match human performance but to surpass it. However, like any powerful tool, AI can be exploited for illicit purposes, despite its potential to help well-intentioned individuals achieve legitimate aims.
In the past, fraud primarily involved stolen or forged physical documents like wallets, birth certificates, or personally identifiable information (PII). However, the advent of AI has given rise to a new form of deception called “deepfakes.” Essentially, deepfakes are digital manipulations of photos, audio, or videos that convincingly replace one person’s likeness with another. Fraudsters with minimal technical expertise can now use AI tools to download someone’s pictures from various online sources and generate realistic images, imitate their voice, or even create videos of them engaging in events that never actually occurred. That is precisely what makes deepfakes dangerous: AI-generated images create a world in which humans often cannot tell which images are fake.
These threats pose a significant danger because they blur the line between real and fabricated images, voices, and even interactions, making it increasingly difficult for humans to judge the authenticity of visual and audio content. Naturally, fraudsters exploit this technological advancement, producing scams that jeopardize both the individuals being impersonated and essentially every taxpayer in the United States. AI also makes it harder for existing verification systems to catch stolen identities, because the typical anomalies and red flags are absent; the fabricated identities appear to be real. CEO Haywood Talcove, an industry expert on the front lines against government fraud, addressed this in an outstanding article that articulates the power of AI and the strength it is giving fraudsters.
In addition, AI and deepfakes can bypass the IAL2 (Identity Assurance Level 2) facial recognition and liveness checks employed by many agencies with astonishing ease. These technologies can seamlessly deceive systems designed to authenticate individuals based on their facial features and real-time interactions. Deepfakes, which use AI algorithms to manipulate visual and audio content, can generate highly realistic images and videos that imitate someone else’s appearance, voice, and behavior. This enables fraudsters to trick facial recognition systems and liveness checks, as the sophisticated nature of deepfakes often goes undetected by these flawed security measures.
So, why do government agencies that oversee billions of dollars in critical benefits continue to use systems that are so easily fooled? While they are well-meaning, many have been sold a bill of goods that has been rendered obsolete by these fast-emerging technologies. One promising countermeasure is to turn AI on itself, using it to keep AI-enabled fraud in check by detecting patterns and anomalies that humans often overlook.
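To make the idea of machine-driven anomaly detection concrete, here is a deliberately minimal sketch in Python. It flags benefit claims whose amounts deviate sharply from historical norms using a simple z-score; the claim IDs, amounts, and threshold are all hypothetical, and real fraud-detection systems model many behavioral features with far more sophisticated statistics than this toy.

```python
import statistics

def flag_anomalies(history, new_claims, z_threshold=3.0):
    """Flag claims whose amount deviates sharply from historical norms.

    A toy stand-in for the pattern-and-anomaly detection described above:
    production systems score many signals, not just a single amount.
    """
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    flagged = []
    for claim_id, amount in new_claims:
        # z-score: how many standard deviations this amount sits from the mean
        z = abs(amount - mean) / stdev if stdev else 0.0
        if z > z_threshold:
            flagged.append(claim_id)
    return flagged

# Hypothetical weekly benefit amounts and two incoming claims
history = [480, 510, 495, 520, 505, 490, 515, 500]
incoming = [("C-101", 505), ("C-102", 4800)]
print(flag_anomalies(history, incoming))  # prints ['C-102']
```

The point of the sketch is the shape of the approach, not the math: a machine can scan every transaction against a learned baseline, tirelessly, at a scale no human review team can match.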
While there is potential for AI itself to be used to prevent AI-enabled fraud in digital mano-a-mano combat, the greatest hope for keeping this new scourge at bay lies in time-tested solutions coupled with the latest in behavioral biometrics. Identity verification proven in the financial industry, built on referential data combined with non-public, dynamic contributory data sets and cutting-edge behavioral biometrics, holds the key.
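Behavioral biometrics works because humans type, swipe, and move in characteristically irregular ways that bots and scripted sessions do not. The sketch below, with entirely made-up timing numbers and an assumed threshold, illustrates the core idea of comparing a session's inter-keystroke intervals against an enrolled profile; real systems use far richer features and statistical models.

```python
def typing_distance(enrolled, observed):
    """Mean absolute difference between two inter-keystroke interval
    profiles (milliseconds). A toy illustration of behavioral biometrics."""
    n = min(len(enrolled), len(observed))
    return sum(abs(a - b) for a, b in zip(enrolled, observed)) / n

# Hypothetical timings: the enrolled user vs. a bot pasting credentials
enrolled_profile = [120, 95, 140, 110, 130]   # human: varied intervals
human_session    = [118, 100, 135, 115, 125]  # close to the enrolled rhythm
bot_session      = [5, 5, 5, 5, 5]            # machine: uniform and fast

THRESHOLD = 30.0  # ms; an assumed cutoff for this sketch
print(typing_distance(enrolled_profile, human_session) < THRESHOLD)  # True
print(typing_distance(enrolled_profile, bot_session) < THRESHOLD)    # False
```

A deepfake can forge a face or a voice, but it is much harder to forge the accumulated rhythm of how a specific person interacts with a device, which is why behavioral signals complement document and facial checks rather than replace them.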
If we don’t heed Talcove’s warnings, the U.S. taxpayer will once again fall victim to billions of dollars in fraud – but this time not because of a loosely managed pandemic economic relief package. It will be because government agencies are neither fully aware of, nor prepared to fight, fraud fueled by AI. And the government isn’t alone in its vulnerability. Families are directly impacted and should take steps to protect themselves, as Talcove pointed out in another deepfake article.
However, no individual, corporation, agency, or even government is truly ready for this threat. At the pace the world is adapting to AI-initiated fraud, perhaps the best we can hope for is that AI itself goes rogue and decides to fix the fraud problem for us.