Passwords may seem like a relatively recent phenomenon, peculiar to the Internet age, but the first digital password dates back to 1961. Other major events that year: Soviet cosmonaut Yuri Gagarin became the first person to orbit the Earth, construction began on the Berlin Wall in East Germany, and the Beatles played their first concert at Liverpool’s Cavern Club. The world has come a long way since 1961, and yet, after more than half a century of technological and social progress, the humble password remains our first line of defense against cybercriminals.
Passwords have never offered much protection from nosy family members or colleagues, and still less from ambitious scammers. But the emergence of easily accessible, easy-to-use artificial intelligence (AI) tools has rendered the digital password as we know it all but obsolete. Though built to accelerate creativity and innovation, generative AI also lets malicious actors bypass password-based security, socially engineering their way into our bank accounts via deepfake videos, voice clones, and highly personalized scams.
A new survey of 600 fraud management, anti-money laundering, risk management, and compliance officials around the world found that nearly 70% of respondents believe criminals are more adept at using artificial intelligence to commit financial crimes than banks are at using technology to stop them.
To combat this threat, banks and other financial institutions must innovate.
The state of fraud and financial crime in 2024
The UK government estimates the cost of cybercrime at £27 billion a year. A new report from BioCatch, meanwhile, found that more than half (58%) of the businesses surveyed spent between $5 million and $25 million combating AI-driven threats in 2023. And 56% of the finance and security professionals surveyed saw an increase in financial crime activity last year. Worse still, almost half expect financial crime to rise in 2024 and anticipate that total fraud losses will grow with it.
With the cyber threat landscape evolving by the day, it’s no surprise that fraud-fighting professionals expect tougher challenges on the horizon. Cybercriminals are already launching sophisticated attacks against businesses: convincing phishing emails, deepfake videos for social engineering, and fraudulent documents. They impersonate officials and our loved ones with chatbots and voice clones, and they create fake content to manipulate public opinion.
Artificial intelligence has rendered obsolete the senses we have relied on for thousands of years to distinguish the legitimate from the fraudulent. Financial institutions must develop new approaches to keep up and fight back.
Focusing on zero trust
Last year, over 70% of financial services and banking companies identified the use of fake identities when onboarding new customers. Meanwhile, 91% are already reconsidering voice verification, given the risks of AI voice cloning. In this new era, even when something looks and sounds right, we can no longer guarantee that it is.
The first step to verification in the AI era is greater internal cooperation. More than 40% of professionals say their company deals with fraud and financial crime in separate departments that do not collaborate. Nearly 90% also say financial institutions and government authorities need to share more information to combat fraud and financial crime. But simply sharing information is unlikely to be enough. This new era of AI-driven cybercrime requires protective measures capable of distinguishing between the human and the technological, the legitimate and the fraudulent.
Enter behavioral biometric intelligence.
The difference is human
Behavioral biometric intelligence uses machine learning and artificial intelligence to analyze both physical behavior patterns (mouse movements and typing speed, for example) and cognitive signals (hesitation, segmented typing, etc.) for anomalies. A deviation in user behavior, especially one that matches known patterns of criminal activity, is often a very good indication that the online session is fraudulent. Once detected, these solutions can block the transaction and alert the appropriate banking officials in real time.
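To make that concrete, here is a minimal, illustrative sketch of session-level anomaly scoring in Python. It is not BioCatch’s actual method: the three features, the sample values, and the use of scikit-learn’s IsolationForest are assumptions chosen for brevity, and production systems draw on far richer behavioral signals.

```python
# Minimal sketch: scoring one banking session against a user's behavioral baseline.
# The feature set is hypothetical; real deployments track far more signals.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one past session for this user:
# [mean keystroke interval (ms), mean mouse speed (px/s), hesitation pauses per minute]
history = np.array([
    [142.0, 310.0, 1.2],
    [138.0, 295.0, 0.9],
    [150.0, 320.0, 1.5],
    [145.0, 305.0, 1.1],
    [139.0, 312.0, 1.0],
])

# Train an unsupervised anomaly detector on this user's own history, so
# "anomalous" means "unlike this person," not "unlike people in general."
model = IsolationForest(contamination=0.1, random_state=42).fit(history)

# A new session with machine-fast typing and zero hesitation, a pattern
# consistent with scripted or remotely controlled activity.
new_session = np.array([[35.0, 900.0, 0.0]])

if model.predict(new_session)[0] == -1:  # -1 marks an outlier
    # In production, this is where the transaction would be held
    # and a fraud analyst alerted in real time.
    print("Session flagged as anomalous; hold transaction for review.")
else:
    print("Session consistent with the user's behavioral baseline.")
```

The key design idea survives the simplification: the model is trained per user, so detection does not depend on knowing what attackers look like, only on knowing what the genuine customer looks like.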
Behavioral biometric intelligence can also identify money mule accounts used in money laundering by monitoring behavioral anomalies and changes in activity trends. Research shows a 78% increase in money mule activity among people under 21, while a third of financial institutions cite a lack of resources to monitor mule activity.
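Trend monitoring can be sketched even more simply. The toy function below flags an account whose incoming transfers suddenly depart from its own baseline, the signature of a dormant account recruited as a mule. The z-score rule and the threshold of 3 are illustrative assumptions, not an industry standard.

```python
# Minimal sketch: detecting a mule-like shift in an account's activity trend.
import numpy as np

def trend_shift_score(daily_inflows: np.ndarray, recent_days: int = 7) -> float:
    """Z-score of the recent mean daily inflow against the account's own baseline."""
    baseline = daily_inflows[:-recent_days]
    recent = daily_inflows[-recent_days:]
    sigma = baseline.std(ddof=1)
    return (recent.mean() - baseline.mean()) / (sigma if sigma > 0 else 1.0)

# A near-dormant account abruptly starts receiving many transfers:
daily_inflows = np.array([0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0,  # quiet baseline
                          6, 8, 7, 9, 6, 8, 7])                      # last 7 days
if trend_shift_score(daily_inflows) > 3.0:  # assumed review threshold
    print("Activity trend shift detected; escalate account for mule review.")
```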
Best of all, behavioral biometric intelligence is a non-intrusive, continuous method of risk assessment. It doesn’t slow down or interrupt the user experience. It simply enhances security by reviewing the different ways people perform everyday actions. Traditional controls will still be necessary to combat fraud and financial crime, but incorporating behavioral biometric intelligence can help banks achieve their fraud prevention and digital business goals more effectively.
We are unlikely to abandon our trusted passwords entirely, but they are already dusty relics of the past. It is imperative that we add new solutions to our online banking security stack to protect our personal information and digital interactions. Behavioral biometric intelligence must be one of those solutions, helping us stay safe in this unpredictable new era.